speedsup p2p sync
- Bartek Wrona authored
dataflow.txt
- pre_apply_operation_handler/post_apply_operation_handler - the main data consumers, implementing the `on_pre_apply_operation`/`on_post_apply_operation` methods. Both are required to preserve backward-compatible operation ordering (matching the account-history (AH) implementation). The PRE handler processes all operations except the hardfork virtual operation, which is handled by the post_apply_operation_handler (and vice versa).
The main purpose of these methods is to gather data specific to the processed operation (the operation body itself as well as the list of accounts it impacts) and store it in the cache layer. IMPORTANT NOTE: some virtual operations must be supplemented with extra data (intentionally skipped during regular state evaluation to limit overhead) so that third-party services such as Hivemind receive complete data. The same step was performed in the old AH-RocksDB plugin. See the `hive::util::supplement_operation` call.
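The PRE/POST split described above can be sketched as follows. This is a minimal illustration only: the types `operation` and `cache_layer` and the flag `hardfork_virtual` are assumptions for the sketch, not the plugin's actual API.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical operation record; in the real plugin this would be the
// chain's operation variant plus its impacted-account list.
struct operation {
  std::string name;
  bool hardfork_virtual;
};

// Stand-in for the plugin's cache layer that accumulates per-block data.
struct cache_layer {
  std::vector<std::string> collected;
  void store(const operation& op) { collected.push_back(op.name); }
};

// PRE handler: every operation except hardfork virtual ones.
void on_pre_apply_operation(cache_layer& cache, const operation& op) {
  if (op.hardfork_virtual) return;  // deferred to the post handler
  cache.store(op);
}

// POST handler: only hardfork virtual operations, preserving the
// ordering expected by the legacy account-history implementation.
void on_post_apply_operation(cache_layer& cache, const operation& op) {
  if (!op.hardfork_virtual) return;  // already handled in pre
  cache.store(op);
}
```

Running both handlers for every operation stores each one exactly once, with hardfork virtual operations ordered after the regular ones of the same event.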
- pre_reindex - method `on_pre_reindex`, used (similarly to `on_pre_apply_block`) to establish the initial database setup and to load already processed partial data when a replay is resumed. It also switches the SQL serializer into MASSIVE processing mode, in which all incoming data is implicitly treated as irreversible.
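The mode switch can be pictured as a small state machine. The enum and function names below are assumptions chosen for the sketch, not the plugin's real identifiers.

```cpp
#include <cassert>

// Hypothetical processing modes: LIVE sync tracks reversible blocks,
// MASSIVE (reindex/replay) treats everything as irreversible.
enum class sync_mode { LIVE, MASSIVE };

struct serializer_state {
  sync_mode mode = sync_mode::LIVE;
  // In MASSIVE mode there is no fork handling, so all data is
  // implicitly irreversible.
  bool data_implicitly_irreversible() const { return mode == sync_mode::MASSIVE; }
};

// Called when a reindex/replay begins.
void on_pre_reindex(serializer_state& s) { s.mode = sync_mode::MASSIVE; }
```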
- `livesync_data_dumper` - converts cached data into an SQL representation compatible with the `hive.push_block` procedure. All data is written to the database in a single transaction. Importantly, multiple threads can be used while converting the cached data into strings, which are then concatenated into the final query.
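The multithreaded string-building step can be sketched as below: each worker converts a slice of the cached rows into SQL value tuples, and the partial strings are concatenated into one statement that a single transaction would execute. The function name, the table name, and the row representation are all illustrative assumptions.

```cpp
#include <algorithm>
#include <cassert>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

// Convert cached block numbers into one INSERT statement, splitting the
// string-conversion work across `thread_count` threads.
std::string build_values_query(const std::vector<int>& block_nums,
                               unsigned thread_count) {
  std::vector<std::string> parts(thread_count);
  std::vector<std::thread> workers;
  const size_t chunk = (block_nums.size() + thread_count - 1) / thread_count;
  for (unsigned t = 0; t < thread_count; ++t) {
    workers.emplace_back([&, t] {
      std::ostringstream os;
      const size_t begin = t * chunk;
      const size_t end = std::min(block_nums.size(), begin + chunk);
      for (size_t i = begin; i < end; ++i)
        os << (i ? "," : "") << '(' << block_nums[i] << ')';
      parts[t] = os.str();  // each thread writes its own slot: no race
    });
  }
  for (auto& w : workers) w.join();
  // Concatenate the per-thread fragments into the final query text.
  std::string query = "INSERT INTO hive.blocks(num) VALUES ";
  for (const auto& p : parts) query += p;
  return query;
}
```

The per-thread fragments are deterministic regardless of thread scheduling, because each thread owns a fixed slice and the concatenation order is fixed.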
- `reindex_data_dumper` - responsible for converting cached data into a format compatible with direct table INSERTs. It also allows multithreaded conversion (each kind of data is processed concurrently), and some of the cached data (transactions, operations, and account-operations) can additionally be dumped by multiple threads (the thread count is specified in the config file).
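The concurrency layout described above, where the heavier kinds fan out over a configurable number of threads, can be sketched like this. The `dump_stats`/`dump_kind` names and the strided partitioning are assumptions for the sketch; the counter increment stands in for a direct table INSERT.

```cpp
#include <atomic>
#include <cassert>
#include <string>
#include <thread>
#include <vector>

// Stand-in for per-kind dump bookkeeping.
struct dump_stats {
  std::atomic<int> rows_dumped{0};
};

// Dump one kind of cached data (e.g. operations) using a configurable
// number of threads; each worker takes a strided slice of the rows.
void dump_kind(dump_stats& stats, const std::vector<std::string>& rows,
               unsigned threads_for_kind) {
  std::vector<std::thread> workers;
  for (unsigned t = 0; t < threads_for_kind; ++t)
    workers.emplace_back([&, t] {
      for (size_t i = t; i < rows.size(); i += threads_for_kind)
        stats.rows_dumped.fetch_add(1);  // stand-in for a direct INSERT
    });
  for (auto& w : workers) w.join();
}
```

In the layout described above, each kind (blocks, transactions, operations, account-operations) would get its own `dump_kind`-style worker running concurrently with the others, with `threads_for_kind` > 1 only for the kinds the config marks as multithreaded.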