Change the way mock blocks are generated during testing
Currently, there are two ways to include mock data for testing in the hivemind code:
- When the block number containing the mock data is lower than the HEAD block number in the database => the mock operations are simply merged into the already existing block
(ex. mock data in block 4_000_000 while 5_000_000 blocks have been replayed)
- When additional mock blocks have to be created after all blocks currently stored in the database => the mock block must also be created and appended to the database
(ex. mock data in block 5_000_001 while only 5_000_000 blocks have been replayed)

Both code paths are sketched below.
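A minimal, self-contained sketch of these two code paths, using an in-memory dict of blocks in place of the real database; all names and data shapes here are illustrative, not hivemind's actual structures:

```python
def apply_mock_data(blocks, mock_block_num, mock_ops):
    """Inject mock operations, following the two scenarios above."""
    head = max(blocks)  # highest block number already replayed
    if mock_block_num <= head:
        # Scenario 1: the block already exists (e.g. mock data for block
        # 4_000_000 with 5_000_000 blocks replayed) -- merge the mock
        # operations into the existing block.
        blocks[mock_block_num]["operations"].extend(mock_ops)
    else:
        # Scenario 2: the block lies past HEAD (e.g. 5_000_001 with only
        # 5_000_000 replayed) -- a new mock block has to be created and
        # appended to the store as well.
        blocks[mock_block_num] = {"operations": list(mock_ops)}

# Example: two replayed blocks, then mock data below and above HEAD.
blocks = {1: {"operations": []}, 2: {"operations": []}}
apply_mock_data(blocks, 1, [{"type": "custom_json_operation"}])  # scenario 1
apply_mock_data(blocks, 3, [{"type": "custom_json_operation"}])  # scenario 2
```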
Scenario no. 2 has some problems:
- While converting hivemind to a HAF-based app, in the new synchronization mode (reading from the HAF database) we cannot simply append additional blocks to the `blocks` table located in the `hive` schema. This means we cannot get rid of the `hive_blocks` table in the `hivemind_app` schema, and we would keep a lot of redundant data increasing the total size of the HAF database.
- We need additional code to determine that we have already processed all blocks from the HAF database and should now create the mock blocks, which could lead to bugs in the end-user production code (the code located in `massive_blocks_data_provider.py`, lines 227 and 293; currently this is done by querying `hive.blocks` and asking for the last block). A hypothetical sketch of this pattern follows the list.
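The sketch below illustrates the kind of branching described in the second point; it is a hypothetical reconstruction, not the actual code from `massive_blocks_data_provider.py`, and every name in it (`DbStub`, `fetch_block`, `make_mock_block`, `MOCK_BLOCKS`) is made up for illustration:

```python
# Mock data that the test-only path fabricates blocks from (illustrative).
MOCK_BLOCKS = {6: {"num": 6, "operations": [{"type": "custom_json_operation"}]}}

class DbStub:
    """Stands in for the HAF connection; holds blocks keyed by number."""
    def __init__(self, blocks):
        self._blocks = blocks

    def last_block_num(self):
        # In hivemind this is the "query hive.blocks and ask for the last
        # block" step mentioned above.
        return max(self._blocks)

    def get_block(self, num):
        return self._blocks[num]

def make_mock_block(num):
    return MOCK_BLOCKS[num]

def fetch_block(db, num):
    # Production code must check where the HAF data ends just to decide
    # whether the requested block is real or has to be fabricated from mock
    # data -- a test-only branch reachable from the production path.
    if num <= db.last_block_num():
        return db.get_block(num)
    return make_mock_block(num)

db = DbStub({n: {"num": n, "operations": []} for n in range(1, 6)})  # blocks 1..5
print(fetch_block(db, 3))  # real block read from the database
print(fetch_block(db, 6))  # mock block fabricated by the test-only branch
```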
If we could prepare an extended version of the HAF database, containing the 5_000_000 blocks we already have plus the additional blocks needed, we could get rid of the problems mentioned above.
These extra blocks could be left blank, with mock data injected into them exactly as in scenario no. 1. Alternatively, we could get rid of mocking operations on hivemind's side entirely (an additional benefit) and prepare mock blocks that already contain the required operations, storing them in the HAF database up front. This solution would completely eliminate any influence of the test-related code on the production code.
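A sketch of the proposed approach: extend the test database with the needed blank blocks before hivemind runs, so production code only ever reads one blocks table. The schema is deliberately reduced to `(num, hash, created_at)` and backed by SQLite here; the real `hive.blocks` table has more columns, and the hashes below are deterministic dummies:

```python
import hashlib
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blocks (num INTEGER PRIMARY KEY, hash BLOB, created_at TEXT)")

def append_blank_blocks(conn, first_num, count, start_time):
    """Insert `count` empty blocks starting at `first_num`, 3 seconds apart."""
    rows = []
    for i in range(count):
        num = first_num + i
        fake_hash = hashlib.sha256(str(num).encode()).digest()  # dummy block id
        rows.append((num, fake_hash, (start_time + timedelta(seconds=3 * i)).isoformat()))
    conn.executemany("INSERT INTO blocks VALUES (?, ?, ?)", rows)

# e.g. extend a 5_000_000-block replay with 10 blank blocks to hold mock data
append_blank_blocks(conn, 5_000_001, 10, datetime(2020, 1, 1))
print(conn.execute("SELECT count(*), max(num) FROM blocks").fetchone())  # (10, 5000010)
```

Mock operations could then either be injected into these blank blocks at test time (scenario no. 1 semantics), or written into them directly so that hivemind itself never needs any mocking code path.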