close the node when HAF sync fails

Marcin requested to merge mi_massive_sync_trigger into develop

There is no need to dump all cached blocks (those not yet dumped to the database) when the plugin is closing. Because HAF never has more blocks dumped than the node has processed, we don't have to flush all cached blocks when the node is shutting down; they will be dumped during replay after the node restarts.

In case of a reindex, some irreversible blocks may be left inconsistent (they are not claimed by the rendezvous trigger): they will not be visible to the HAF applications, and after the node restarts they will be removed, while the rest of the blocks from the node state will be dumped as the reindex continues. A similar situation occurs with P2P sync. In case of LIVE sync the situation is much simpler: each block must be flushed to stay in the node state, so if a block is still in the cache it means it has not yet been confirmed in the state.
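The invariant this relies on (the database never gets ahead of the node state, so a discarded cache is always recoverable from replay) can be sketched with a small model. This is purely illustrative; `NodeSim` and its methods are hypothetical names, not HAF's actual API:

```python
class NodeSim:
    """Toy model of the node / HAF-database relationship described above."""

    def __init__(self):
        self.node_state = []   # blocks the node itself has processed
        self.db = []           # blocks already dumped to the HAF database
        self.cache = []        # blocks processed but not yet dumped

    def process_block(self, block):
        self.node_state.append(block)
        self.cache.append(block)

    def flush_cache(self):
        # Dump cached blocks to the database (done periodically in massive sync,
        # per block in live sync).
        self.db.extend(self.cache)
        self.cache.clear()

    def shutdown(self):
        # Massive sync shutdown: discard the cache instead of flushing it.
        # Nothing is lost, because every cached block is still in node_state.
        self.cache.clear()

    def replay(self):
        # On restart, replay re-dumps every node-state block missing from the db,
        # restoring the invariant db ⊆ node_state without a final flush.
        dumped = set(self.db)
        for block in self.node_state:
            if block not in dumped:
                self.db.append(block)


node = NodeSim()
for b in range(1, 6):
    node.process_block(b)
node.flush_cache()            # blocks 1-5 reach the database
for b in range(6, 9):
    node.process_block(b)     # blocks 6-8 are only cached
node.shutdown()               # cache dropped, no flush on close
assert node.db == [1, 2, 3, 4, 5]
node.replay()                 # replay re-dumps 6-8 from node state
assert node.db == [1, 2, 3, 4, 5, 6, 7, 8]
```

In the LIVE sync case the model degenerates: `flush_cache` runs after every block, so a non-empty cache always marks a block not yet confirmed in the state.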
