Support for pruned HAF instance
To limit the resources needed to run a functional HAF instance, we could add a pruning mode (similar to the one already available for the hived block log) in which only the data for the last N blocks is held in the database. Applications that do not need access to the whole historical blockchain dataset could then use deployments with pruning enabled.
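The core idea can be sketched as a rolling window over block data. This is only an illustration of the retention policy, not HAF's actual storage layer; the class and method names below are invented for the example:

```python
from collections import OrderedDict

class PrunedBlockStore:
    """Illustrative sketch: keep per-block data only for the most
    recent `max_blocks` blocks, dropping the oldest on insert."""

    def __init__(self, max_blocks: int):
        self.max_blocks = max_blocks
        self.blocks = OrderedDict()  # block_num -> block data

    def insert(self, block_num: int, data: dict) -> None:
        self.blocks[block_num] = data
        # Enforce the window: discard blocks older than the last N.
        while len(self.blocks) > self.max_blocks:
            self.blocks.popitem(last=False)

store = PrunedBlockStore(max_blocks=3)
for n in range(1, 6):
    store.insert(n, {"num": n})
print(sorted(store.blocks))  # only the last 3 blocks remain
```

In a real deployment the same effect would be achieved inside PostgreSQL (e.g. by deleting or detaching old rows), but the retention rule is the same: data older than the last N blocks is not kept.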
Assumptions:
- There is no need for any change in application code, since applications receive data through the hive.app_next_iteration call.
- Reproducibility: starting HAF sync/replay from scratch in this mode should again iterate over all blockchain data (thus providing the whole dataset to the deployed applications), even though only a subset of it (the data for the last N blocks) will be stored persistently.
- Data pruning happens only when all deployed applications have crossed a given block (so the pruning condition is not decided by block range alone). This is needed because each application processes its data asynchronously.
- It is the application's own decision whether to prune its evaluated state data. For example, hivemind could keep all of its data (as it does currently) even when the HAF instance is pruned, or implement its own pruning to keep only data from the last week.
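The pruning condition described above (prune only what every application has already processed, and only outside the retention window) can be sketched as follows. The function name, parameters, and app names are illustrative, not part of any real HAF API:

```python
def prune_upper_bound(app_positions: dict, head_block: int, window: int) -> int:
    """Return the highest block number that may be pruned.

    A block is prunable only if:
      - every deployed application has already processed it
        (it is at or below the slowest app's position), and
      - it falls outside the last-`window` retention range.
    """
    slowest = min(app_positions.values())
    return min(slowest, head_block - window)

# Two apps progressing asynchronously; the slower one holds back pruning.
apps = {"hivemind": 1_000_000, "balance_tracker": 999_500}
bound = prune_upper_bound(apps, head_block=1_000_100, window=300)
print(bound)  # blocks up to and including this number may be pruned
```

Here the retention window alone would allow pruning up to block 999_800, but the slower application pins the bound at 999_500, illustrating why block range alone cannot decide the pruning condition.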