hbt5 hivemind/hafbe replay while simultaneously serving traffic: 2024-03-27T22:18:43.894814687Z INFO - hive.indexer.sync:187 - [SINGLE] Switched to single block processing mode after: 03d 06h 59m 19s
If we have a 1.27.4 blocklog locally, we can try to reproduce this problem. Otherwise we should probably just close this issue.
To be clear, without OBI, I probably wouldn't consider this a feasible solution.
You've misunderstood the purpose of the SAVEPOINTs associated with this idea. It is not to commit irreversible blocks; it is only to provide various rollback points for the app's processing in the "reversible" transaction thread. As far as I can tell at the moment, SAVEPOINTs should correctly serve this purpose.
Probably the confusion lies in the expectation that this model always allows non-irreversible apps to process irreversible blocks as soon as hived knows they are irreversible, but this is not the case. That is definitely a theoretical disadvantage of this method compared to the existing HAF model, which essentially does exactly that. But if the scheme is feasible, I think this disadvantage should rarely manifest in practice because of OBI, which should usually make the head block irreversible almost immediately. The potential performance improvement for "normal" operation (fast confirms occurring on most blocks) and the software-maintenance benefits would likely outweigh the disadvantage.
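To make the savepoint idea concrete, here is a minimal sketch of using SQL SAVEPOINTs as intermediate rollback points inside a single long-lived transaction. This is only an illustration of the concept, not HAF code; it uses Python's sqlite3 as a stand-in engine (HAF would use PostgreSQL, where SAVEPOINT semantics are the same), and the table/savepoint names are invented.

```python
import sqlite3

# One long-lived transaction holding both "settled" and "reversible" work.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage the transaction manually
cur = conn.cursor()
cur.execute("CREATE TABLE app_state (block_num INTEGER)")

cur.execute("BEGIN")
cur.execute("INSERT INTO app_state VALUES (100)")  # work we want to keep
cur.execute("SAVEPOINT after_block_100")           # rollback point
cur.execute("INSERT INTO app_state VALUES (101)")  # reversible work
# A fork arrives: undo everything after the savepoint, keep the rest,
# without aborting the enclosing transaction.
cur.execute("ROLLBACK TO SAVEPOINT after_block_100")
cur.execute("COMMIT")

print([r[0] for r in cur.execute("SELECT block_num FROM app_state")])
# -> [100]
```

The key property for this scheme is that `ROLLBACK TO SAVEPOINT` discards only the work done after the savepoint while the outer transaction stays open, so the app can keep stacking rollback points as reversible blocks arrive.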
A preliminary experiment for this issue was done here: !372 and likely related problem discussed here: hivemind#222
This is related to haf#49
Closing as not worth the time.
Those two "scam" proposals were created when proposal creation was free of charge, each asking for a daily payment of 999,999,999,999,999 HBD for the next 16 years.
Although there is little chance they will be approved, from a practical point of view it is still possible, with the immediate effect of draining the DHF's budget every hour, either burning it or transferring it to an account whose controller we do not know.
Although they can be hidden by front-ends, this solution has the following drawbacks:
I propose to hardcode their removal in the next hardfork and submit it to the witnesses and the community for approval.
Partially implemented: the head block is logged, and the hash can now vary across block logs, since block compression can easily vary.
Work is being done in !1224
hived (running with haf) was sent a SIGINT and attempted to shut down gracefully. It began the shutdown procedure, but during p2p plugin shutdown it failed to process all blocks in the queue in the 30 seconds allotted, and then aborted.
The relevant part of the log:
2023-09-13T20:33:44.026609 chain_plugin.cpp:1155 connection_count_cha ] peer_count changed: 19
2023-09-13T20:36:17.446593 chain_plugin.cpp:1155 connection_count_cha ] peer_count changed: 20
2023-09-13T20:36:18.043819 chain_plugin.cpp:1182 accept_block ] Syncing Blockchain --- Got block: #78380000 time: 2023-09-13T20:34:12 producer: deathwing
Performing cleanup....
Hived pid: 156
[1]+ 152 Running { sudo --user=hived -En /bin/bash <<EOF
echo "Attempting to execute hived using additional command line arguments:" "${HIVED_ARGS[@]}"
/home/hived/bin/hived --webserver-ws-endpoint=0.0.0.0:${WS_PORT} --webserver-http-endpoint=0.0.0.0:${HTTP_PORT} --p2p-endpoint=0.0.0.0:${P2P_PORT} --data-dir="$DATADIR" --shared-file-dir="$SHM_DIR" --plugin=sql_serializer --psql-url="dbname=haf_block_log host=/var/run/postgresql port=5432" ${HIVED_ARGS[@]} 2>&1 | tee -i hived.log
echo "$? Hived process finished execution."
EOF
stop_postresql; } &
2023-09-13T20:37:54.662816 application.cpp:99 handle_signal ] _last_signal_code: 2
2023-09-13T20:37:54.663209 application.cpp:90 generate_interrupt_r ] interrupt requested!
2023-09-13T20:37:54.663698 webserver_plugin.cpp:651 plugin_pre_shutdown ] Shutting down webserver_plugin...
Waiting for hived finish...
2023-09-13T20:37:54.664728 webserver_plugin.cpp:277 operator() ] ws io service exit
2023-09-13T20:37:54.665498 webserver_plugin.cpp:310 operator() ] http io service exit
2023-09-13T20:37:54.665800 p2p_plugin.cpp:545 plugin_pre_shutdown ] Shutting down P2P Plugin...
2023-09-13T20:37:54.666103 shutdown_mgr.hpp:96 wait ] Processing of 'P2P_BLOCK' in progress...
2023-09-13T20:37:54.766185 shutdown_mgr.hpp:105 wait ] attempt: 1/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:37:54.766229 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
2023-09-13T20:37:54.866312 shutdown_mgr.hpp:105 wait ] attempt: 2/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:37:54.866360 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
2023-09-13T20:37:54.966441 shutdown_mgr.hpp:105 wait ] attempt: 3/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:37:54.966488 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
2023-09-13T20:37:55.066568 shutdown_mgr.hpp:105 wait ] attempt: 4/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:37:55.066608 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
2023-09-13T20:37:55.166692 shutdown_mgr.hpp:105 wait ] attempt: 5/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:37:55.166744 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
2023-09-13T20:37:55.266844 shutdown_mgr.hpp:105 wait ] attempt: 6/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:37:55.266912 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
2023-09-13T20:37:55.367007 shutdown_mgr.hpp:105 wait ] attempt: 7/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:37:55.367057 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
2023-09-13T20:37:55.467148 shutdown_mgr.hpp:105 wait ] attempt: 8/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:37:55.467191 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
2023-09-13T20:37:55.567275 shutdown_mgr.hpp:105 wait ] attempt: 9/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:37:55.567322 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
.... quite a few omitted ....
2023-09-13T20:38:24.506561 shutdown_mgr.hpp:105 wait ] attempt: 298/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:38:24.506592 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
2023-09-13T20:38:24.606686 shutdown_mgr.hpp:105 wait ] attempt: 299/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:38:24.606717 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
2023-09-13T20:38:24.706812 shutdown_mgr.hpp:105 wait ] attempt: 300/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:38:24.706843 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
2023-09-13T20:38:24.806921 shutdown_mgr.hpp:105 wait ] attempt: 301/300, reason: timeout, future status(internal): 1 ...
2023-09-13T20:38:24.806952 shutdown_mgr.hpp:106 wait ] Details: Wait for: P2P_BLOCK. Currently 73 of 'P2P_BLOCK' items are processed...
Executing `pre shutdown` for all plugins...
Before shutting down...
Plugin: p2p raised an exception...
10 assert_exception: Assert Exception
++cnt <= time_maximum
Closing the P2P plugin is terminated
{"name":"P2P plugin"}
shutdown_mgr.hpp:108 wait
terminate called without an active exception
1 Hived process finished execution.
Attempting to stop Postgresql...
* Stopping PostgreSQL 14 database server
Hived finish done.
Attempting to stop Postgresql...
* Stopping PostgreSQL 14 database server
...done.
Waiting for postgres process: 69 finish...
Postgres process: 69 finished.
...done.
Waiting for postgres process: 69 finish...
Postgres process: 69 finished.
Cleanup actions done.
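The repeated `attempt: N/300` lines come from a bounded polling wait: the shutdown manager checks the queue every 100 ms and gives up after 300 attempts (30 seconds), which matches the assert `++cnt <= time_maximum` in the exception above. A minimal sketch of that pattern, with invented names (`wait_for_drain`, `time_maximum`) that are illustrative rather than hived's actual API:

```python
import time

def wait_for_drain(has_pending_items, interval=0.1, time_maximum=300):
    """Poll until the queue drains; raise if it takes too many attempts."""
    cnt = 0
    while has_pending_items():
        cnt += 1
        if cnt > time_maximum:  # mirrors the assert `++cnt <= time_maximum`
            raise AssertionError("queue failed to drain before timeout")
        time.sleep(interval)
    return True

# Example: a queue stuck at 73 items never drains and trips the assertion,
# just as in the log above (interval=0 only to keep the demo fast).
try:
    wait_for_drain(lambda: 73, interval=0, time_maximum=5)
except AssertionError as e:
    print("aborted:", e)
```

The failure mode in the log is exactly this: the P2P block queue stopped making progress (stuck at 73 items), so the bounded wait expired and the plugin aborted instead of hanging forever.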
@Lucius Can you edit the description of this MR to describe conceptually exactly what was done here?
This was probably fixed, so closing.
How to repeat?
CREATE EXTENSION hive_fork_manager CASCADE;
SELECT hive.app_create_context( 'any_context' );
SELECT hive.app_context_detach( 'any_context' );
SELECT hive.app_context_attach( 'any_context', 1550 );
Now the context any_context has current_block_num=1550 and irreversible_block=0.
Description
For SELECT * FROM hive.irreversible_data we have consistent_block=null, but in hive.app_context_attach we have a check:
IF _last_synced_block > __head_of_irreversible_block THEN
    RAISE EXCEPTION 'Cannot attach context % because the block num % is grater than top of irreversible block %', _context, _last_synced_block, __head_of_irreversible_block;
END IF;
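This explains why the attach at block 1550 succeeded even though irreversible_block is 0: when consistent_block is NULL, __head_of_irreversible_block is presumably NULL too, and under SQL's three-valued logic `_last_synced_block > NULL` evaluates to NULL rather than TRUE, so the IF branch never fires and the exception is silently skipped. A quick demonstration of the pitfall (sqlite3 standing in for PostgreSQL, which behaves the same way here; the COALESCE fallback is shown only as an illustration, not as the proposed fix):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# A comparison with NULL yields NULL, not FALSE -- so a guard like
# `IF _last_synced_block > __head_of_irreversible_block THEN RAISE ...`
# never triggers when the right-hand side is NULL.
print(cur.execute("SELECT 1550 > NULL").fetchone()[0])               # None
# Coalescing the comparison (or the column) makes the guard effective:
print(cur.execute("SELECT COALESCE(1550 > NULL, 0)").fetchone()[0])  # 0
```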
How to solve?
We have 2 options:
hive.irreversible_data should have consistent_block=0
@bwrona please assign someone to this issue.
Blocked waiting for massive sync changes.
This API will be served by a PostgREST server. Note that these calls may differ in parameters and responses compared to the existing JSON API.