
History servers #120

Open
progval opened this issue Apr 20, 2024 · 2 comments

Comments

progval (Collaborator) commented Apr 20, 2024

Currently, all servers serve all the history. This allows sable_ircd to directly answer IRCv3 ChatHistory requests from clients.
However, this means memory size puts a hard limit on the retention period. Libera.Chat gets about 1 million messages a day; assuming a 1-day retention period and 1-10 kB per message, that is already 1-10 GB of memory use.

The idea (from @spb) to fix this is to add dedicated servers for long-term storage. They would use an off-the-shelf database, like PostgreSQL, to store history beyond what fits in sable_ircd servers' memory, and sable_ircd would query them over RPC when needed (e.g. ChatHistory requests for old messages, IRCv3 Search, ...).
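To make the split concrete, here is a minimal sketch (all names invented for illustration, not sable's actual API) of how an ircd could partition a ChatHistory time range into the part it can answer from its in-memory buffer and the part it must forward to a history server over RPC:

```rust
// Hypothetical sketch: split a CHATHISTORY request into a remote portion
// (older than what the ircd keeps in memory) and a local portion.
// `memory_cutoff` is the oldest timestamp still held in memory.
fn split_request(
    from_ts: u64,
    to_ts: u64,
    memory_cutoff: u64,
) -> (Option<(u64, u64)>, Option<(u64, u64)>) {
    // Portion that must be fetched from a history server over RPC.
    let remote = if from_ts < memory_cutoff {
        Some((from_ts, to_ts.min(memory_cutoff)))
    } else {
        None
    };
    // Portion the ircd can answer from its own in-memory buffer.
    let local = if to_ts > memory_cutoff {
        Some((from_ts.max(memory_cutoff), to_ts))
    } else {
        None
    };
    (remote, local)
}

fn main() {
    // Request spanning the cutoff: one remote part, one local part.
    assert_eq!(split_request(100, 300, 200), (Some((100, 200)), Some((200, 300))));
    // Entirely recent request: served from memory only.
    assert_eq!(split_request(250, 300, 200), (None, Some((250, 300))));
    // Entirely old request: forwarded to the history server.
    assert_eq!(split_request(50, 150, 200), (Some((50, 150)), None));
    println!("ok");
}
```

The same split would apply to IRCv3 Search: recent results come from memory, older ones from the database-backed node.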

Additionally, we want this history database to be replicated, for durability. As a bonus, we may be able to get high availability and even load balancing. There are two ways to design this:

  1. a leader "history server" writes to a PostgreSQL database. This database is streamed to replicas' PostgreSQL database. Each replica "history server" may read from its own database.
  2. all "history servers" are equal: they all write to their own PostgreSQL/SQLite/rocksdb/... database. The databases are fully independent from each other.

Option 2 is (IMO) way simpler because we don't need any coordination at all. Option 1 has the advantage of possibly being able to share sable_network's design, if that's what we settle on in #119.

spb (Collaborator) commented Apr 20, 2024

Option 2 was my plan here; the network sync layer already provides an eventually consistent view of what needs to be stored, so streaming that independently into long-term storage shouldn't pose any issues. Queries can then be directed to any online history-capable node.

The only time this doesn't work is when adding a new history node, or when one has been offline longer than the lifetime of messages in the synchronised state. Either case will require a backfill stream from an existing node to reach a point from which the global state can resume, but I think doing this as an occasional batch process will be easier than trying to do it live.
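The batch process described above could look something like the following sketch (invented names, not sable code): the joining node reports a high-water mark (the newest message id it already holds), and an existing node streams everything after that in batches until the newcomer catches up to the synchronised state.

```rust
// Hypothetical backfill sketch: an existing history node serves messages
// newer than the joining node's high-water mark, in bounded batches.
// `store` is assumed to be sorted by message id.
fn backfill_batch(
    store: &[(u64, &str)],
    high_water_mark: u64,
    batch_size: usize,
) -> Vec<(u64, String)> {
    store
        .iter()
        .filter(|(id, _)| *id > high_water_mark)
        .take(batch_size)
        .map(|(id, text)| (*id, text.to_string()))
        .collect()
}

fn main() {
    let store = [(1, "a"), (2, "b"), (3, "c"), (4, "d")];
    // A brand-new node has nothing (high-water mark 0) and pulls from the start.
    assert_eq!(
        backfill_batch(&store, 0, 2),
        vec![(1, "a".to_string()), (2, "b".to_string())]
    );
    // A node that was offline resumes from where it stopped.
    assert_eq!(
        backfill_batch(&store, 2, 10),
        vec![(3, "c".to_string()), (4, "d".to_string())]
    );
    println!("ok");
}
```

Repeating this until the returned batch is empty brings the new node up to the point where the live sync layer can take over.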

CounterPillow commented Dec 28, 2024

I have a somewhat terrible scope-creep proposal: instead of using a replicating SQL server, or an SQL server per instance backed by disk, take a page out of object storage infrastructure's book: treat the total available memory across all servers as the capacity and distribute n (e.g. 3) copies of each message across different nodes. Then, when a ChatHistory request arrives, fetch the messages not stored locally from the other servers in the network, by figuring out which ones have them and asking those servers to provide them.

The benefit here is that redundancy is built into the ircd itself and scales with it: if a node falls over, there's no difference between it being unable to serve IRC clients and being unable to serve chat history. Additionally, there's no single point of failure, since the nodes aren't copies of a single master DB. The downside is that this would be really complex to implement, might open the ircd up to amplification attacks against other nodes, and no node would have a complete view of all chat history.
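The "figuring out who has them" step could be done without any coordination using rendezvous (highest-random-weight) hashing, so every server deterministically computes the same n holders for a given message. A minimal sketch, with invented names and placeholder node identifiers:

```rust
// Hypothetical sketch: pick the n nodes responsible for storing (and later
// serving) a given message via rendezvous hashing. Any server can recompute
// the holders locally, with no central directory.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn replicas_for(message_id: u64, nodes: &[&str], n: usize) -> Vec<String> {
    // Score each node by hashing (message_id, node); highest scores win.
    let mut scored: Vec<(u64, &str)> = nodes
        .iter()
        .map(|node| {
            let mut h = DefaultHasher::new();
            (message_id, *node).hash(&mut h);
            (h.finish(), *node)
        })
        .collect();
    scored.sort_by(|a, b| b.0.cmp(&a.0));
    scored
        .into_iter()
        .take(n)
        .map(|(_, node)| node.to_string())
        .collect()
}

fn main() {
    let nodes = ["node-a", "node-b", "node-c"];
    let holders = replicas_for(42, &nodes, 2);
    // Exactly n holders are chosen.
    assert_eq!(holders.len(), 2);
    // The mapping is deterministic: every server computes the same holders.
    assert_eq!(holders, replicas_for(42, &nodes, 2));
    println!("ok");
}
```

A nice property of rendezvous hashing here is that when a node joins or leaves, only the messages scored to that node move, rather than the whole keyspace reshuffling.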

EDIT: Apparently valkey in cluster mode could be sorta made to do that? https://valkey.io/topics/cluster-tutorial/

EDIT2: Maybe SQL servers are sharded when replicating as well (side note: is sharded + replicated = shartet?) and I didn't realise this, and I'm also misreading the opening post's mention of memory consumption as too hard of a requirement for the data to be in memory, in which case just pretend I never made this post.
