Improve support for hived node replay when HAF database is already filled
As a basic sanity check, we should store in the HAF database the version of the hived binary that created the data. Then, during a subsequent hived replay, if the stored version matches, we can assume that the data already collected for the processed blocks will match what the new replay would produce, so those blocks can be skipped.
Right now this works only for reversible blocks; a subsequent replay is rejected when the database already contains data for the irreversible part.
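The intended sanity check could be sketched roughly as follows. This is a minimal illustration of the decision logic only; the function name, parameters, and version strings are hypothetical assumptions, not HAF's actual schema or API:

```python
from typing import Optional

# Hypothetical sketch of the proposed replay sanity check; names and
# version values are illustrative, not part of HAF's real interface.
def can_resume_replay(stored_version: Optional[str],
                      current_version: str,
                      last_stored_block: int) -> bool:
    """Decide whether already-collected HAF data can be reused for replay.

    stored_version    -- hived binary version recorded when the data was
                         written (None means no version was recorded)
    current_version   -- version of the hived binary performing the replay
    last_stored_block -- highest block number already present in the database
    """
    if last_stored_block == 0:
        # Empty database: nothing to verify, replay starts from scratch.
        return True
    if stored_version is None:
        # No recorded version: we cannot guarantee the old data matches.
        return False
    # Same binary version implies the already-collected data matches what
    # this replay would regenerate, so existing blocks can be skipped.
    return stored_version == current_version


# Example: resuming is allowed only when the recorded version matches.
print(can_resume_replay("1.27.5", "1.27.5", 5_000_000))  # True
print(can_resume_replay("1.27.4", "1.27.5", 5_000_000))  # False
```

Under this scheme, a version mismatch (or a missing recorded version) would still reject the replay, covering both the reversible and the irreversible part uniformly.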