Draft: Implement simple write-ahead-log between hived & postgres for livesync

Dan Notestein requested to merge write-ahead-log into develop

This commit only affects the behavior of sql_serializer in "live sync" mode.

The HAF write-ahead-log allows hived to proceed without waiting for sql_serializer to commit its changes to the HAF database, enabling something like pipelined parallel processing between hived and sql_serializer+postgres. In other words, hived and HAF run asynchronously (although hived will block temporarily if it gets too far ahead of HAF; the sketch below illustrates this backpressure).
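
The blocking behavior amounts to backpressure on the gap between hived's head block and the last block HAF has applied. A minimal sketch of that idea, assuming illustrative names (this is not the plugin's actual code):

```cpp
// Hypothetical sketch: hived keeps producing, but blocks once it is more than
// `max_lag` blocks ahead of what the HAF writer has applied.
#include <condition_variable>
#include <cstdint>
#include <mutex>

class lag_limiter
{
public:
  explicit lag_limiter(uint64_t max_lag) : _max_lag(max_lag) {}

  // Called by the producer (hived) after block `block_num`; blocks if too far ahead.
  void on_block_produced(uint64_t block_num)
  {
    std::unique_lock<std::mutex> lock(_mutex);
    _head_block = block_num;
    _cv.wait(lock, [&] { return _head_block - _applied_block <= _max_lag; });
  }

  // Called by the HAF writer thread after committing block `block_num`.
  void on_block_applied(uint64_t block_num)
  {
    {
      std::lock_guard<std::mutex> lock(_mutex);
      _applied_block = block_num;
    }
    _cv.notify_all();
  }

private:
  const uint64_t _max_lag;
  uint64_t _head_block = 0;
  uint64_t _applied_block = 0;
  std::mutex _mutex;
  std::condition_variable _cv;
};
```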

This is the sequence of events:

1. Hived notifies sql_serializer about new blocks. sql_serializer copies the data into a cache, spawns threads to process the cached data, then waits for those threads to finish generating SQL statements (up to this point, this is how live sync has always worked).
2. Previously, the write_queue thread would then execute the SQL calls and wait for them to complete. Now these SQL calls are instead appended to the HAF write-ahead-log (which can be viewed as something like a queue), and the write_queue thread proceeds directly back to normal write_queue processing.
3. A separate thread reads entries from the write-ahead-log and executes the SQL commands (a sketch of this handoff follows the list).
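
The handoff between the write_queue thread and the log-draining thread is essentially a producer/consumer queue. The real write-ahead-log is also persisted so it survives a crash; the sketch below shows only the in-memory handoff, with illustrative names rather than the plugin's actual API:

```cpp
// Hypothetical sketch: the write_queue thread appends generated SQL instead of
// executing it, and a dedicated thread drains the log and runs each statement.
#include <condition_variable>
#include <deque>
#include <mutex>
#include <string>

class wal_queue
{
public:
  // Producer side: called once the worker threads have finished generating SQL.
  void append(std::string sql)
  {
    {
      std::lock_guard<std::mutex> lock(_mutex);
      _entries.push_back(std::move(sql));
    }
    _cv.notify_one();
  }

  // Consumer side: the separate thread pops entries and executes them.
  std::string pop()
  {
    std::unique_lock<std::mutex> lock(_mutex);
    _cv.wait(lock, [&] { return !_entries.empty(); });
    std::string sql = std::move(_entries.front());
    _entries.pop_front();
    return sql;
  }

private:
  std::deque<std::string> _entries;
  std::mutex _mutex;
  std::condition_variable _cv;
};

// Consumer thread body; execute_sql() stands in for the real postgres call.
void writer_loop(wal_queue& log, void (*execute_sql)(const std::string&))
{
  for(;;)
    execute_sql(log.pop());
}
```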

In the event of a crash or hardware failure, the HAF write-ahead-log allows the HAF database to "catch up" to the current location of hived's statefile (assuming it actually fell behind). For details on the behavior of the write-ahead-log itself, see: https://gitlab.syncad.com/hive/haf/-/blob/80d1fc3e9c8db509b0c20a40e35ad49319c48a91/src/sql_serializer/include/hive/plugins/sql_serializer/write_ahead_log.hpp
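
Recovery then amounts to replaying any logged statements the database has not yet committed. A rough sketch, assuming each entry carries a monotonic sequence number plus its SQL text; the record format and helper names here are placeholders, not the actual layout defined in write_ahead_log.hpp linked above:

```cpp
// Hypothetical recovery sketch: on restart, re-execute any logged statements
// newer than the database's last committed sequence number.
#include <cstdint>
#include <fstream>
#include <string>

// Assumed entry shape: a monotonic sequence number plus the SQL generated for it.
struct wal_entry
{
  uint64_t sequence;
  std::string sql;
};

// Placeholder line-oriented format: "<sequence> <sql...>".
bool read_next_entry(std::ifstream& log, wal_entry& out)
{
  if(!(log >> out.sequence))
    return false;
  std::getline(log, out.sql);
  return true;
}

// Replay everything the database hasn't seen yet.
void catch_up(const std::string& wal_path, uint64_t last_committed_sequence,
              void (*execute_sql)(const std::string&))
{
  std::ifstream log(wal_path);
  wal_entry entry;
  while(read_next_entry(log, entry))
    if(entry.sequence > last_committed_sequence)
      execute_sql(entry.sql);
}
```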
