The whole database_api plugin, together with its dependencies, is linked into the haf shared library (.so).
The database_api plugin pulls in the chain database and serves as the base for querying the state.
A sharedmemory.bin file is created for every haf application context in the user-given PG_DATA directory.
Any haf application using the consensus state provider can replay it forward by any number of blocks.
Each block is read either from the haf block API as JSON or from the haf database with pqxx library queries, and then consumed by the hive evaluator's apply_block method. There is also a non-transactional variant that discards transaction data in favor of just blocks + operations.
Replay of the first 5M blocks takes:
- JSON version: ~3 hours
- pqxx version, 1-block chunks, reestablishing the connection every chunk: estimated 526 min (1M) x 5 = 2630 min = ~43 h (~0.032 sec per block)
- pqxx version, 1-block chunks, reusing the connection: 81 minutes (~0.00097 sec per block)