Blockchain conversion procedure should allow preparing an "iceberg tip" data set
Regression testing of new features (usually in CI), especially those activated at new hardforks, requires data specific to the latest blocks of the mainnet blockchain. A full mirrornet instance could be used for this, but that scenario consumes a lot of time and resources, since the hived node has to be replayed up to the head block every time.
This could instead be achieved by creating a new micro-blockchain containing the latest N mainnet blocks at the moment of conversion.
Such block processing is already done by the blockchain converter tool, but it can fail, because it can only apply operations that reference accounts/posts etc. created long before the selected set of operations (e.g. accounts active in the 68-69M block range may have been created in the very first blocks).
To support this, before pushing such operations to the target chain, we should process them and collect the set of accounts and posts (probably further data sets will also be needed) that must be created before the actual set of transactions is executed.
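A minimal sketch of such a dependency-collecting pre-pass, assuming simplified, hypothetical operation records (the real converter works on hived's C++ operation types, and the field names below are illustrative only):

```python
from collections import namedtuple

# Hypothetical, simplified operation shapes; real data comes from hived blocks.
TransferOp = namedtuple("TransferOp", ["from_account", "to_account"])
CommentOp = namedtuple(
    "CommentOp", ["author", "permlink", "parent_author", "parent_permlink"]
)

def collect_dependencies(operations):
    """Scan operations and collect the accounts and posts that must exist
    on the target chain before these transactions can be replayed."""
    accounts, posts = set(), set()
    for op in operations:
        if isinstance(op, TransferOp):
            accounts.update((op.from_account, op.to_account))
        elif isinstance(op, CommentOp):
            accounts.add(op.author)
            posts.add((op.author, op.permlink))
            if op.parent_author:  # replies also reference the parent post
                accounts.add(op.parent_author)
                posts.add((op.parent_author, op.parent_permlink))
    return accounts, posts
```

The collected sets would then drive the creation of stub accounts/posts in the first blocks of the micro-blockchain, before the converted transactions are pushed.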
Of course we can accept some fraction of rejected transactions (for the above reasons), but the tool/procedure must track their number and fail if the expected limit is exceeded.
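The acceptance check could be as simple as the following sketch (the ratio-based threshold and names are assumptions, not the tool's actual interface):

```python
class RejectedTxLimitExceeded(RuntimeError):
    """Raised when too many converted transactions were rejected."""

def check_rejection_limit(rejected, total, max_rejected_ratio=0.01):
    """Fail the conversion if the share of rejected transactions exceeds
    the configured limit; returns the observed ratio otherwise.
    (max_rejected_ratio is a hypothetical tool parameter.)"""
    if total == 0:
        return 0.0
    ratio = rejected / total
    if ratio > max_rejected_ratio:
        raise RejectedTxLimitExceeded(
            f"{rejected}/{total} transactions rejected ({ratio:.2%}), "
            f"exceeding the limit of {max_rejected_ratio:.2%}"
        )
    return ratio
```

Making the limit explicit keeps CI runs deterministic: a conversion that silently drops more transactions than expected fails loudly instead of producing a misleading micro-blockchain.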