425a396
Allow call events to be logged without a room id (#2937)
Prior to this PR, we assumed that all call events needed a room_id, but
we now have call-based actions that don't need one - for instance, you
can right-click a channel and view its notes while not in a call. In
that case, there is no room_id. We want to be able to track these
events, which requires removing the restriction that every call event
have a room_id.
Release Notes:
- N/A
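For illustration, a minimal sketch of what logging a call event with an optional room_id could look like; the struct, fields, and `report_call_event` helper here are hypothetical, not Zed's actual telemetry code:

```rust
// Illustrative only: names and fields are assumptions, not Zed's actual
// telemetry API.
#[derive(Debug)]
struct CallEvent {
    operation: &'static str,
    // Previously required; now optional so actions taken outside of a call
    // (e.g. opening channel notes from a right-click) can still be logged.
    room_id: Option<u64>,
    channel_id: Option<u64>,
}

fn report_call_event(operation: &'static str, room_id: Option<u64>, channel_id: Option<u64>) {
    let event = CallEvent { operation, room_id, channel_id };
    // Stand-in for handing the event off to the telemetry queue.
    println!("telemetry event: {:?}", event);
}

fn main() {
    // In a call: a room_id is available.
    report_call_event("share project", Some(42), Some(7));
    // Not in a call: previously impossible to log, now room_id is simply None.
    report_call_event("open channel notes", None, Some(7));
}
```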
Joseph T. Lyons
created
39e13b6
Allow call events to be logged without a room id
Joseph T. Lyons
created
a8d5d93
Max out corner radii to half the smaller dimension of the parent box
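The clamping this commit describes amounts to limiting each radius to half of the parent's smaller side. A minimal sketch, with hypothetical names and plain floats standing in for the real layout types:

```rust
// Illustrative sketch: clamp a requested corner radius so it never exceeds
// half of the smaller dimension of the parent box.
fn clamp_corner_radius(requested: f32, width: f32, height: f32) -> f32 {
    requested.min(width.min(height) / 2.0)
}

fn main() {
    // A 100 x 20 box can support at most a 10-unit radius.
    assert_eq!(clamp_corner_radius(16.0, 100.0, 20.0), 10.0);
    assert_eq!(clamp_corner_radius(6.0, 100.0, 20.0), 6.0);
}
```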
d03a89c
Rejoin channel notes after brief connection loss (#2930)
* [x] Re-send operations that weren't sent while disconnected
* [x] Apply other clients' operations that were missed while
disconnected
* [x] Update collaborators that joined / left while disconnected
* [x] Inform current collaborators that your peer id has changed
* [x] Refresh channel buffer collaborators on server restart
* [x] Randomized test (see the sketch below)
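A toy sketch of the re-send/reapply idea from the checklist above, assuming a hypothetical client that tracks unacknowledged operations and the last server version it applied; none of these types are the actual channel buffer implementation:

```rust
// Illustrative only: a toy model of re-sending channel buffer operations
// after a brief disconnect. Real channel buffers exchange CRDT operations,
// not plain strings.
#[derive(Clone, Debug)]
struct Operation {
    local_id: u64,
    text: String,
}

struct ChannelBufferClient {
    // Local operations the server has not acknowledged yet.
    unacknowledged: Vec<Operation>,
    // Highest server version applied; used to request ops missed offline.
    last_server_version: u64,
}

impl ChannelBufferClient {
    fn apply_local(&mut self, op: Operation) {
        // Keep every local op until the server confirms it, so a dropped
        // connection never loses edits.
        self.unacknowledged.push(op);
    }

    fn acknowledge(&mut self, local_id: u64) {
        self.unacknowledged.retain(|op| op.local_id != local_id);
    }

    // On reconnect: re-send unacknowledged ops and ask the server for
    // everything newer than last_server_version.
    fn rejoin(&self) -> (Vec<Operation>, u64) {
        (self.unacknowledged.clone(), self.last_server_version)
    }
}

fn main() {
    let mut client = ChannelBufferClient {
        unacknowledged: Vec::new(),
        last_server_version: 10,
    };
    client.apply_local(Operation { local_id: 1, text: "hello".into() });
    // The connection drops before an ack arrives...
    let (resend, since) = client.rejoin();
    println!("re-sending {:?}, requesting ops since version {}", resend, since);
    client.acknowledge(1);
}
```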
Max Brunsfeld
created
58f58a6
Tolerate channel buffer operations being re-sent
Max Brunsfeld
created
ed2aed4
Update test name in randomized-test-minimize script
Max Brunsfeld
created
b75e69d
Check that channel notes text converges in randomized test
a2e91e4
Use preview server when not on stable (#2909)
This PR updates our client code to connect to preview whenever we're not
on stable. This will make it more likely that we'll be able to
collaborate on a dev build, but obviously won't work if there's a
protocol change on main that hasn't made its way to preview yet.
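A minimal sketch of the channel-to-server mapping described above, assuming a simplified `ReleaseChannel` enum and placeholder URLs rather than the client's real values:

```rust
// Illustrative sketch: choose the collab server from the release channel.
// Variants and URLs are placeholders, not the client's exact values.
enum ReleaseChannel {
    Dev,
    Preview,
    Stable,
}

fn collab_server_url(channel: ReleaseChannel) -> &'static str {
    match channel {
        // Only stable builds talk to the stable server...
        ReleaseChannel::Stable => "https://collab.example.com",
        // ...while dev and preview builds both connect to preview, so a dev
        // build can collaborate with preview users when protocols match.
        ReleaseChannel::Dev | ReleaseChannel::Preview => "https://collab-preview.example.com",
    }
}

fn main() {
    assert_eq!(collab_server_url(ReleaseChannel::Dev), "https://collab-preview.example.com");
    assert_eq!(collab_server_url(ReleaseChannel::Preview), "https://collab-preview.example.com");
    assert_eq!(collab_server_url(ReleaseChannel::Stable), "https://collab.example.com");
}
```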
This PR ships a series of optimizations for the semantic search engine,
mostly focused on removing invalid states, optimizing requests to
OpenAI, and reducing token usage.
Release Notes (Preview-Only):
- Added eager incremental indexing in the background on a debounce.
- Added a local embeddings cache for reducing redundant calls to OpenAI.
- Moved to an Embeddings Queue model, which ensures optimal batch sizes
at the token level and atomic file & document writes (sketched below).
- Adjusted OpenAI Embedding API requests to use the provided backoff
delays during rate limiting.
- Removed flush races between the file-parsing step and the embedding
queue steps.
- Moved truncation to the parsing step, reducing the probability that
OpenAI encounters bad data.
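A rough sketch of the token-level batching behind the Embeddings Queue item, assuming a hypothetical queue type and an arbitrary token limit; this is not the actual implementation:

```rust
// Illustrative only: flush a batch of files to the embeddings API once the
// combined token count would exceed a limit. Names and the limit are
// assumptions, not the real embedding queue.
struct Document {
    path: String,
    token_count: usize,
}

struct EmbeddingQueue {
    pending: Vec<Document>,
    pending_tokens: usize,
    max_batch_tokens: usize,
}

impl EmbeddingQueue {
    fn push(&mut self, doc: Document) {
        // Flush first if adding this document would overflow the batch.
        if self.pending_tokens + doc.token_count > self.max_batch_tokens {
            self.flush();
        }
        self.pending_tokens += doc.token_count;
        self.pending.push(doc);
    }

    fn flush(&mut self) {
        if self.pending.is_empty() {
            return;
        }
        // Stand-in for one batched request to the embeddings endpoint.
        let paths: Vec<&str> = self.pending.iter().map(|d| d.path.as_str()).collect();
        println!("embedding {:?} ({} tokens)", paths, self.pending_tokens);
        self.pending.clear();
        self.pending_tokens = 0;
    }
}

fn main() {
    let mut queue = EmbeddingQueue {
        pending: Vec::new(),
        pending_tokens: 0,
        max_batch_tokens: 8000,
    };
    for (path, tokens) in [("a.rs", 3000), ("b.rs", 4000), ("c.rs", 2500)] {
        queue.push(Document { path: path.to_string(), token_count: tokens });
    }
    // Flush whatever is left at the end.
    queue.flush();
}
```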
Kyle Caverly
created
0307cb8
Start sketching a collab panel in storybook
This should have no user-visible impact.
For vim `.` to repeat, it's important that actions are replayable.
Currently, editor::MoveDown *sometimes* moves the cursor down and
*sometimes* selects the next completion.
For replay, we need to be able to separate the two.
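One way to picture the fix: resolve the keystroke into one of two distinct actions before dispatch, so replay records exactly the action that ran. A minimal sketch with hypothetical names, not the actual editor actions:

```rust
// Illustrative only: record the concrete action that ran so a later replay
// (vim `.`) repeats exactly that action instead of re-evaluating context.
#[derive(Clone, Debug)]
enum EditorAction {
    MoveDown,
    SelectNextCompletion,
}

struct Editor {
    completions_open: bool,
    last_action: Option<EditorAction>,
}

impl Editor {
    // The "down" keystroke resolves to one of two separate actions up front...
    fn handle_down_key(&mut self) {
        let action = if self.completions_open {
            EditorAction::SelectNextCompletion
        } else {
            EditorAction::MoveDown
        };
        self.dispatch(action);
    }

    fn dispatch(&mut self, action: EditorAction) {
        // ...and only the chosen action is recorded for replay.
        self.last_action = Some(action.clone());
        println!("performed {:?}", action);
    }

    fn replay_last(&mut self) {
        if let Some(action) = self.last_action.clone() {
            self.dispatch(action);
        }
    }
}

fn main() {
    let mut editor = Editor { completions_open: false, last_action: None };
    editor.handle_down_key(); // performs MoveDown
    editor.completions_open = true;
    editor.replay_last(); // replays MoveDown, not completion selection
}
```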
Conrad Irwin
created
95b72a7
Re-index project when a worktree is registered