I've spent a lot of time studying CRDTs & order theory in the last year & will publish an article too. Local-first apps are not easy to build; complex data structures (say, calendar repetitions with exceptions) become harder to model. Everything must converge automatically while clients can branch off.
In general, you don't really get to compact tombstones meaningfully without consensus, so you really are pushing at least remnants of the entire log around to each client indefinitely. You also can't easily upgrade your db: you're stuck looking after legacy data structures indefinitely, or you have to impose arbitrary cut-off points.
List CRDTs - which text CRDTs are built from - are probably unavoidable except for really, really simple applications. Over the last 15 years they have evolved, roughly: WOOT (paper) -> RGA (paper) -> YATA (paper) / Yjs (js + later rust port) -> Peritext (paper) / Automerge (rust/js/swift) -> Loro (rust/js). Cola (rust) is another recent one. The big libs (Yjs, Automerge, Loro) offer full doc models.
Mostly the later ones improve on space, time & intent capture (i.e. not interleaving concurrent inserts).
The same few guys (Martin Kleppmann, Kevin Jahns, Joseph Gentle, probably others) pop up all over the more recent optimisations.
> In general, you don't really get to compact tombstones meaningfully without consensus so you really are pushing at least remnants of the entire log around to each client indefinitely.
This is often much less of a problem in practice than people think. Git repositories also grow without bound, but nobody seems to really notice or care. For Diamond Types (my library), I lz4-compress the text content itself within the .dt files. The metadata is so small that in many cases the space savings from lz4 compression make the resulting .dt file - including full change history - smaller than the document on disk.
If anyone really cares about this problem, there are a couple other approaches:
1. In our eg-walker algorithm, we don't use stable GUIDs to identify insert positions. That's where other algorithms go wrong, because they need to store those GUIDs indefinitely in case someone references them. For eg-walker, we just store relative positions. This makes the algorithm work more like git, where you can do shallow clones. And everything works, until you want to merge a branch which was split off earlier than your clone point. Then you should be able to download the earlier edits back to the common branch point and merge. To support merging, you only need to store the metadata of earlier edits; you don't need the inserted content. The metadata is usually tiny (~1-4 bits per keystroke for text is common with compression).
2. Mike Toomim's Antimatter algorithm is a distributed consensus protocol which can detect when it's safe to purge the metadata for old changes. It works even in the presence of network partitions. (It's conservative by default - if a partition happens, the network keeps metadata around until that peer comes back, or until some timeout.)
> The same few guys (Martin Kleppman, Kevin Jahns, Joseph Gentle, probably others) pop up all over the more recent optimisations.
Alex Good is another. He's behind a lot of the huge improvements to automerge over the last few years. He's wicked smart.
The man himself. Yeah, agreed, you guys have solved it. It is more a misfired CRUD dev instinct for me that sees it as wasteful. Just a different paradigm, and not big in practice.
I've got eg-walker & Diamond Types in my reading/youtube backlog. Diamond Types went further down the backlog because of "wip" on the repo! I will look into Antimatter too.
> It is more a misfired CRUD dev instinct for me that sees it as wasteful.
You're not alone. Lots of people bring this up. And it can be a real problem for some applications, if they store ephemeral data (like mouse cursor positions) within the CRDT itself.
re: git repositories
That's partly because repositories rarely need to be cloned in their entirety. As such, even when you need to do it and it's a couple hundred MB taking a few minutes, it's tolerated.
In situations where a document needs to be cold loaded often, the size of the document is felt more acutely. Figma has a notion of GC-ing tombstones. But the tombstones in question aren't even something that gets created in regular editing; it happens in a much narrower case having to do with local copies of shared components. Even that caused problems for a small portion of files -- but if a file got that large, it was also likely to be important.
> even when you need to do it and it's a couple hundred MB taking a few minutes, it's tolerated.
Well written CRDTs should grow slower than git repositories.
> In situations where a document needs to be cold loaded often, the size of the document is felt more acutely.
With eg-walker (and similar algorithms) you can usually just load and store the native document snapshot at the current point-in-time, and work with that. And by that I mean, just the actual json or text or whatever that the application is interacting with.
Many current apps will first download an entire automerge document (with full history), then use the automerge API to fetch the data. Instead, using eg-walker and similar approaches, you can download just 2 things:
- Current state (eg a single string for a text file, or raw JSON for other kinds of data) alongside the current version
- History. This can be as detailed & go as far back as you want. If you download the full history (with data), you can reconstruct the document at any point in time. If you only download the metadata, you can't go back in time. But you can merge changes. You need history to merge changes. But you only need access to history metadata, and only as far back as the fork point with what you're merging.
If you're working online (eg figma), you can just download history lazily.
For client/server editing, in most cases you don't need any history at all. You can just fetch the current document snapshot (eg a text or json object). And users can start editing immediately. It only gets fancy when you try to merge concurrent changes. But that's quite rare in practice.
> If you're working online (eg figma), you can just download history lazily.
You can download the history lazily; that's a special case of incrementally loading the document.
If the history is only history, then sure. But I was understanding that we were talking about something that may need to be referenced but can eventually be GCed (e.g. tombstones-style). Then lazy loading makes user operations that need to reference that data go from synchronous operations to asynchronous operations. This is a massive jump in complexity. Incrementally loading data is not the hard part. The hard part is how product features need to be built on the assumption that the data is incrementally loaded.
Something that not all collaborative apps will care about, but Figma certainly did, is the concept that the client may go offline with unsynced changes for an indefinite amount of time. So the tombstones may need to be referenced by new-looking old changes, which increases the likelihood of hitting "almost-never" edge cases by quite a bit.
> But I was understanding that we were talking about something that may need to be referenced but can eventually be GCed (e.g. tombstones-style)
Yeah my point is that this is only a limitation of certain types of CRDTs. Automerge and Yjs both suffer from this.
However, it’s not a requirement for anything built on egwalker. Or generally any CRDTs in the pure operation log style. Egwalker (and by extension, loro and diamond types) can generate and broadcast edits synchronously. Even without any history loaded.
Thank you for your work. Diamond Types is probably my favorite piece of programming. Transliterating the algo for line edits, for a situation where I needed a different algo for the line management itself, is probably the most rewarding deep dive on an algorithm I've ever done.
I’m glad you enjoyed it! What did you build?
The tombstone problem is exactly why I built DocNode. You're right that you can't compact them without consensus, so DocNode just doesn't create them. It assumes a server decides the order, which is already the case in 99% of collaborative apps. The result: a doc stays the same size whether it was edited once or a thousand times. Yjs for the same doc grows every time someone types, pastes, or undoes. Check it out: https://docukit.dev
Regarding the growing log in CRDTs, it doesn't have to be that way. Most of these popular CRDT libs use Merkle DAG only to build up the log of the changes. But there is a better way, you can combine a Merkle DAG for ordering + prolly trees for storing the actual state of the data. That gives you total ordering, an easy way to prune old data when you choose to, and an easy way to sync. Look into fireproof for how this is combined.
Regarding distributed schemas, I agree, there's a lot of complexity there but it's worth looking into projects like https://www.inkandswitch.com/cambria/ and https://github.com/grafana/thema, which try to solve the problem. It's a young space though. If anyone else knows about similar projects or efforts, please let me know. Very interested in the space.
Interesting! Do you mind explaining the idea in more detail?
As far as I understand, libraries like Automerge use the Merkle DAG to encode a document as an immutable bundle of state changes aka operation log + the causal ordering which enables conflict free merging between multiple peers. The final document is reconstructed by combining the state transitions. So the Merkle DAG is both the state and the causal relationship between mutations which allows the merge "magic".
Prolly trees allow you to store history-independent data which is trivial to sync, diff and merge, regardless of insert order, merge order or originating peer. A Merkle DAG layered on top of prolly trees (events reference prolly tree roots) gives you causality so that peers can agree on a common view of history. So it's very useful because you can check integrity and travel in time, but you can keep as much of it as you want, because it's not necessary for constructing the current state. Prolly trees give you the current state and the easy syncing, diff, merge. So you can truncate the history as needed for your use case.
For a production ready implementation of prolly trees you can check Dolt. For a combination of Merkle DAG (https://github.com/storacha/pail) and prolly trees you can check https://github.com/fireproof-storage/fireproof
I love Diamond Types personally, which I think Loro uses in certain cases. I find that not only is DT fast under normal usage and allows for easy archiving and retrieval of past data, but it also quickly outputs a per-user causality rendering that LLMs can pretty easily rectify in most cases. Compaction and archival to flat files work very well.
Last-Write-Wins CRDTs are nice, but I wish the article talked about where CRDTs really shine, which is when the state truly converges in a non-destructive way, for example:
1) Counters
While not really useful, they demonstrate this well:
- mutations are +n and -n
- their order does not matter
- converging the state is a matter of applying the operations of remote peers locally
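The counter above can be sketched in a few lines of Python (a minimal toy, not any particular library's API - the class and method names are mine):

```python
# Sketch of an operation-based counter CRDT. Each replica applies
# local increments/decrements immediately and forwards the same
# operations to peers. Because addition is commutative, replicas
# converge regardless of delivery order (assuming each operation
# is delivered exactly once).

class OpCounter:
    def __init__(self):
        self.value = 0

    def apply(self, n):
        """Apply a +n or -n operation, local or remote."""
        self.value += n

# Two replicas receive the same operations in different orders:
a, b = OpCounter(), OpCounter()
ops = [+3, -1, +2]
for op in ops:
    a.apply(op)
for op in reversed(ops):
    b.apply(op)
assert a.value == b.value == 4   # converged
```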
2) Append-only data structures
Useful for accounting, or replication of time-series/logs with no master/slave relationship between nodes (where writes would be accepted only on a "master" node).
- the only mutation is "append"
- converging the state is applying the peers' operations then sorting by timestamp
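A sketch of that in Python, under the assumption that each entry carries a (timestamp, replica id) pair so ties sort deterministically (again a toy of mine, not a library API):

```python
# Sketch of an append-only log CRDT: the only mutation is append,
# and replicas converge by taking the union of entries and sorting
# by (timestamp, replica_id) - the replica id breaks timestamp ties.

class AppendLog:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.entries = set()        # (timestamp, replica_id, payload)

    def append(self, timestamp, payload):
        op = (timestamp, self.replica_id, payload)
        self.entries.add(op)
        return op                   # forward this op to peers

    def apply_remote(self, op):
        self.entries.add(op)        # set union makes this idempotent

    def read(self):
        return [payload for _, _, payload in sorted(self.entries)]

a, b = AppendLog("a"), AppendLog("b")
op1 = a.append(1, "debit 10")
op2 = b.append(2, "credit 5")
b.apply_remote(op1)
a.apply_remote(op2)
assert a.read() == b.read() == ["debit 10", "credit 5"]
```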
EDIT: added more

3) Multi-value registers (and maps)
Similar to Last-Write-Win registers (and maps), but all writes are kept, the value becomes a set of concurrent values.
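One common way to build this is with version vectors: a write overwrites only the values it causally observed, so concurrent writes survive side by side. A rough Python sketch (my own toy, not any library's API):

```python
# Sketch of a multi-value register. Each write carries a version
# vector. On merge, a value is dropped only if another value's
# version vector dominates it - concurrent writes are both kept.

def dominates(vv_a, vv_b):
    """True if version vector vv_a causally follows (or equals) vv_b."""
    keys = set(vv_a) | set(vv_b)
    return all(vv_a.get(k, 0) >= vv_b.get(k, 0) for k in keys)

class MVRegister:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.versions = []          # list of (version_vector, value)

    def write(self, value):
        # Merge all observed vectors, then bump our own counter.
        vv = {}
        for seen, _ in self.versions:
            for k, v in seen.items():
                vv[k] = max(vv.get(k, 0), v)
        vv[self.replica_id] = vv.get(self.replica_id, 0) + 1
        self.versions = [(vv, value)]
        return (vv, value)          # forward to peers

    def merge(self, remote):
        rvv, _ = remote
        # Keep local versions the remote write doesn't dominate...
        kept = [v for v in self.versions if not dominates(rvv, v[0])]
        # ...and keep the remote write unless something local dominates it.
        if not any(dominates(vv, rvv) for vv, _ in self.versions):
            kept.append(remote)
        self.versions = kept

    def read(self):
        return sorted(v for _, v in self.versions)

a, b = MVRegister("a"), MVRegister("b")
wa = a.write("x")                   # concurrent with...
wb = b.write("y")                   # ...this write
a.merge(wb)
b.merge(wa)
assert a.read() == b.read() == ["x", "y"]   # both values kept
```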
4) Many more...
Each is useful for specific use cases. And since not everybody is making collaborative tools, but many are working on distributed systems, I think it's worth it to mention this.
On another note, the article talks about state based CRDTs, where you need to share the whole state. In the examples I gave above, they are operation based CRDTs, where you need to share the operations done on the state and recompute it when needed.
For example, in the Elixir ecosystem, we have Horde ( https://hexdocs.pm/horde/readme.html ) which allows distributing a worker pool over multiple nodes, it's backed by DeltaCrdt ( https://hexdocs.pm/delta_crdt/DeltaCrdt.html ).
Delta-CRDTs are an optimization over state based CRDTs where you share state diffs instead of the whole state (described in this paper: https://arxiv.org/pdf/1603.01529 ).
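The delta idea is easiest to see on the simplest state-based CRDT, a grow-only set. A toy Python sketch of the concept (not DeltaCrdt's actual API; names are mine):

```python
# Sketch of the delta-state idea: instead of shipping the full
# state on every sync, a replica ships only the delta accumulated
# since the last ship. Merging a delta uses the same join (set
# union here) as merging a full state, so convergence is unchanged.

class DeltaGSet:
    def __init__(self):
        self.state = set()
        self.delta = set()          # elements added since last ship

    def add(self, x):
        if x not in self.state:
            self.state.add(x)
            self.delta.add(x)

    def ship_delta(self):
        d, self.delta = self.delta, set()
        return d

    def merge(self, other):
        # Join works for deltas and full states alike.
        self.state |= other

a, b = DeltaGSet(), DeltaGSet()
a.add(1); a.add(2)
b.add(3)
b.merge(a.ship_delta())             # ships only {1, 2}, not full state
a.merge(b.ship_delta())
assert a.state == b.state == {1, 2, 3}
```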
Enjoyed the article! As someone who's worked on bits of collaborative software (and is currently helping build one at work), I'd caution people from using CRDTs in general. A central server is the right tool for the job in most cases; there are certain things that are difficult to handle with CRDTs, like permissions and acquiring locks on objects.
Edit: I had an excerpt here which I completely misread. Sorry.
Permissions can be handled with capability systems. Keyhive [0] is the furthest along on this. I've also made my own prototype [1] showing how certificate capabilities can be composed with CRDTs.
[0] https://www.inkandswitch.com/keyhive/notebook/
[1] https://spritely.institute/news/composing-capability-securit...
Heh. Try to grant permissions or acquire a lock in git. You cannot. That is fundamental to distributed systems.
Downside: harder to think about it all.
Upside: a rocket may hit the datacenter and the system keeps working.
From what I remember about Figma, it can at best be called CRDT-inspired. Google Docs got its sync algorithm before CRDTs were even well known (yep, I remember those days!).
Early versions of google docs didn't even implement OT correctly. There were various cases where if two people bolded and unbolded text in a certain order, while connecting and disconnecting their wifi, the documents would go out of sync. You'd end up looking at different documents, even after everything reconnected.
I (obviously) care a lot about fixing these sorts of bugs. But it's important to remember that in many applications, it matters a lot less than people think.
Of course. But typical, live collaborative software doesn't need to be (and shouldn't be) decentralized. In such software it's annoying to not be able to (even speculatively) acquire unique access to an object. I'd be very surprised if Google Docs used CRDTs now.
Part-time CRDT researcher here. I think CRDTs would work great for Google Docs. Google Docs has a problem where if too many people open the document at the same time, it needs to lock the document. I assume this is because the document is "owned" by a single computer. Or they use DB-transactional ordering on edits.
If I implemented google docs today, I'd use a CRDT between all the servers. This would let you have multi-master server scaling, and everything on the server side would be lock-free.
Between the server and the client, CRDTs would be fine. With eg-walker, you can just send the current document state to the user with no history (until it's actually needed). In eg-walker, you can merge any remote change so long as you have history going back to a common fork point. If merges happen and the client is missing history, just have it fetch the historical data from the server.
Alternately, you can bridge from a CRDT to OT for the client/server comms. The client side code is a bit simpler that way, but clients may need server affinity - which would complicate your server setup.
Here's a 2019 Figma article on this; not sure if it's representative of their current system:
> OTs power most collaborative text-based apps such as Google Docs. They’re the most well-known technique but in researching them, we quickly realized they were overkill for what we wanted to achieve ...
> Figma's tech is instead inspired by something called CRDTs, which stands for conflict-free replicated data types. ... Figma isn't using true CRDTs though. CRDTs are designed for decentralized systems where there is no single central authority to decide what the final state should be
https://www.figma.com/blog/how-figmas-multiplayer-technology...
My absolute favorite kind of blog post and the same structure/style I use.
Also a really well written piece.
I haven't dug into this deeply, but to me CRDTs look like a P2P data structure abstracted to the programming language variable level. The article says they shine when you don't want a central server. But most communication libraries already handle authentication and multiple peers well — and if you designate one peer as canonical (via leader election), conflict resolution is solved. I'm curious what use cases make avoiding a central server worth the paradigm shift. That said, it's a choice of approach — some may prefer the CRDT paradigm.
CRDTs let you avoid leader election / strict consensus. The canonical example is a google doc with multiple editors.
Previously...
An interactive intro to CRDTs - https://news.ycombinator.com/item?id=37764581 - Oct 2023 (130 comments)
I tried vibe-coding a CRDT app last year in the pre-Sonnet 4.5 era (based partly on this write-up), and it really struggled with timing issues.
It would be interesting to try again now.
It appears to have gotten better! Sharing a project soon.