hive issues
https://gitlab.syncad.com/groups/hive/-/issues
2023-10-17T06:28:07Z
https://gitlab.syncad.com/hive/hive/-/issues/572
beekeeper | SIGINT randomly closes with `-2` return code and does not remove the `beekeeper.pid` file
2023-10-17T06:28:07Z
Mateusz Żebrak
FYI: @Trela
This behavior can be observed in randomly failing clive tests.
#### In the logs of the failing test, we can observe:
```plaintext
2023-08-31 14:07:33.741 | ℹ️ INFO | clive.__private.core.beekeeper.handle:__start:175 - Starting Beekeeper...
2023-08-31 14:07:33.742 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:listen:36 - Notifications server is listening on 39353...
2023-08-31 14:07:33.764 | ℹ️ INFO | clive.__private.core.beekeeper.notifications:notify:40 - Got notification: {'value': {'current_status': 'starting without a session'}, 'time': '2023-08-31T14:07:33', 'name': 'hived_status'}
2023-08-31 14:07:33.766 | ℹ️ INFO | aiohttp.web_log:log:206 - 127.0.0.1 [31/Aug/2023:14:07:33 +0000] "PUT / HTTP/1.1" 204 0 "-" "-"
2023-08-31 14:07:33.767 | ℹ️ INFO | clive.__private.core.beekeeper.notifications:notify:40 - Got notification: {'value': {'current_status': 'signals attached'}, 'time': '2023-08-31T14:07:33', 'name': 'hived_status'}
2023-08-31 14:07:33.767 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:notify:50 - Beekeeper reports to be ready
2023-08-31 14:07:33.768 | ℹ️ INFO | aiohttp.web_log:log:206 - 127.0.0.1 [31/Aug/2023:14:07:33 +0000] "PUT / HTTP/1.1" 204 0 "-" "-"
2023-08-31 14:07:33.768 | ℹ️ INFO | clive.__private.core.beekeeper.notifications:notify:40 - Got notification: {'value': {'address': '0.0.0.0', 'port': 39881, 'type': 'HTTP'}, 'time': '2023-08-31T14:07:33', 'name': 'webserver listening'}
2023-08-31 14:07:33.769 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:notify:47 - Got notification with http address on: http://127.0.0.1:39881
2023-08-31 14:07:33.769 | ℹ️ INFO | aiohttp.web_log:log:206 - 127.0.0.1 [31/Aug/2023:14:07:33 +0000] "PUT / HTTP/1.1" 204 0 "-" "-"
2023-08-31 14:07:38.756 | 🐞 DEBUG | clive.__private.core.beekeeper.handle:__run_beekeeper:207 - Got webserver http endpoint: `http://127.0.0.1:39881`
2023-08-31 14:07:38.757 | ℹ️ INFO | clive.__private.core.beekeeper.handle:__start:186 - Beekeeper started on http://127.0.0.1:39881.
2023-08-31 14:07:38.763 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=CreateSession(token='d540aaa8ee678152ad698338c37b111ab5f033387ea7c40b4c67e117207b386a')
2023-08-31 14:07:38.765 | ℹ️ INFO | clive.__private.core.commands.abc.command:_log_execution_info:40 - Executing command: CreateWallet
2023-08-31 14:07:38.769 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=Create(password='password')
2023-08-31 14:07:38.769 | ℹ️ INFO | clive.__private.core.app_state:activate:33 - Mode switched to ACTIVE.
2023-08-31 14:07:38.771 | ℹ️ INFO | clive.__private.core.beekeeper.handle:close:160 - Closing Beekeeper...
2023-08-31 14:07:38.772 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=EmptyResponse()
2023-08-31 14:07:38.774 | 🐞 DEBUG | clive.__private.core.beekeeper.executable:close:105 - Beekeeper closed with return code of `-2`.
2023-08-31 14:07:38.775 | 🐞 DEBUG | clive.__private.core.beekeeper.executable:__wait_for_pid_file_to_be_deleted:140 - Beekeeper PID file was deleted in 0.00 seconds.
2023-08-31 14:07:38.775 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:close:76 - Notifications server closed
2023-08-31 14:07:38.775 | ℹ️ INFO | clive.__private.core.beekeeper.handle:close:166 - Beekeeper closed.
2023-08-31 14:07:38.776 | ℹ️ INFO | clive.__private.core.beekeeper.handle:__start:175 - Starting Beekeeper...
2023-08-31 14:07:38.776 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:listen:36 - Notifications server is listening on 34353...
2023-08-31 14:07:38.793 | ℹ️ INFO | clive.__private.core.beekeeper.notifications:notify:40 - Got notification: {'value': {'current_status': 'starting without a session'}, 'time': '2023-08-31T14:07:38', 'name': 'hived_status'}
2023-08-31 14:07:38.794 | ℹ️ INFO | aiohttp.web_log:log:206 - 127.0.0.1 [31/Aug/2023:14:07:38 +0000] "PUT / HTTP/1.1" 204 0 "-" "-"
2023-08-31 14:07:38.795 | ℹ️ INFO | clive.__private.core.beekeeper.notifications:notify:40 - Got notification: {'value': {'current_status': 'signals attached'}, 'time': '2023-08-31T14:07:38', 'name': 'hived_status'}
2023-08-31 14:07:38.796 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:notify:50 - Beekeeper reports to be ready
2023-08-31 14:07:38.796 | ℹ️ INFO | aiohttp.web_log:log:206 - 127.0.0.1 [31/Aug/2023:14:07:38 +0000] "PUT / HTTP/1.1" 204 0 "-" "-"
2023-08-31 14:07:38.797 | ℹ️ INFO | clive.__private.core.beekeeper.notifications:notify:40 - Got notification: {'value': {'address': '0.0.0.0', 'port': 44521, 'type': 'HTTP'}, 'time': '2023-08-31T14:07:38', 'name': 'webserver listening'}
2023-08-31 14:07:38.798 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:notify:47 - Got notification with http address on: http://127.0.0.1:44521
2023-08-31 14:07:38.798 | ℹ️ INFO | aiohttp.web_log:log:206 - 127.0.0.1 [31/Aug/2023:14:07:38 +0000] "PUT / HTTP/1.1" 204 0 "-" "-"
2023-08-31 14:07:43.790 | 🐞 DEBUG | clive.__private.core.beekeeper.handle:__run_beekeeper:207 - Got webserver http endpoint: `http://127.0.0.1:44521`
2023-08-31 14:07:43.791 | ℹ️ INFO | clive.__private.core.beekeeper.handle:__start:186 - Beekeeper started on http://127.0.0.1:44521.
2023-08-31 14:07:43.797 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=CreateSession(token='4af7c8b37f24b1c99ba962773f5ba9e8843cfa7cba4ba2c896942c0528a4a4b3')
2023-08-31 14:07:43.801 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=ListWallets(wallets=[])
2023-08-31 14:07:43.803 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=EmptyResponse()
2023-08-31 14:07:43.806 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=ListWallets(wallets=[WalletDetails(name='wallet', unlocked=False)])
2023-08-31 14:07:43.809 | ℹ️ INFO | clive.__private.core.beekeeper.handle:close:160 - Closing Beekeeper...
2023-08-31 14:07:43.811 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=EmptyResponse()
2023-08-31 14:07:43.813 | 🐞 DEBUG | clive.__private.core.beekeeper.executable:close:105 - Beekeeper closed with return code of `-2`.
2023-08-31 14:07:53.833 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:close:76 - Notifications server closed
2023-08-31 14:07:53.862 | 🐞 DEBUG | asyncio.selector_events:__init__:54 - Using selector: EpollSelector
2023-08-31 14:07:53.913 | 🐞 DEBUG | asyncio.selector_events:__init__:54 - Using selector: EpollSelector
```
and in the beekeeper logs:
```plaintext
458785ms json_rpc_plugin.cpp:222 initialize ] initializing JSON RPC plugin
458786ms webserver_plugin.cpp:584 plugin_initialize ] initializing webserver plugin
458786ms webserver_plugin.cpp:587 plugin_initialize ] configured with 1 thread pool size
458786ms webserver_plugin.cpp:590 plugin_initialize ] Compression in webserver is disabled
458786ms webserver_plugin.cpp:602 plugin_initialize ] configured http to listen on 0.0.0.0:0
458786ms beekeeper_app_init.cpp:123 initialize_program_o ] initializing options
458787ms notifications.cpp:64 setup ] setting up notification handler for 1 address
458791ms beekeeper_app_init.cpp:166 initialize_program_o ] Backtrace on segfault is enabled.
458792ms webserver_plugin.cpp:290 operator() ] start processing http thread
458792ms webserver_plugin.cpp:305 operator() ] start listening for http requests on 0.0.0.0:44521
463795ms json_rpc_plugin.cpp:439 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.create_session","params":{"notifications_endpoint":"127.0.0.1:34353","salt":"139898871443712"}}
463800ms json_rpc_plugin.cpp:439 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.list_wallets","params":{"token":"4af7c8b37f24b1c99ba962773f5ba9e8843cfa7cba4ba2c896942c0528a4a4b3"}}
463803ms json_rpc_plugin.cpp:439 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.open","params":{"wallet_name":"wallet","token":"4af7c8b37f24b1c99ba962773f5ba9e8843cfa7cba4ba2c896942c0528a4a4b3"}}
463805ms json_rpc_plugin.cpp:439 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.list_wallets","params":{"token":"4af7c8b37f24b1c99ba962773f5ba9e8843cfa7cba4ba2c896942c0528a4a4b3"}}
463810ms json_rpc_plugin.cpp:439 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.close_session","params":{"token":"4af7c8b37f24b1c99ba962773f5ba9e8843cfa7cba4ba2c896942c0528a4a4b3"}}
463811ms application.cpp:99 handle_signal ] _last_signal_code: 2
463811ms application.cpp:90 generate_interrupt_r ] interrupt requested!
463811ms webserver_plugin.cpp:651 plugin_pre_shutdown ] Shutting down webserver_plugin...
463811ms webserver_plugin.cpp:310 operator() ] http io service exit
463812ms application.cpp:475 finish ] Waiting for logging_thread quit
463812ms application.cpp:477 finish ] logging_thread quit done
```
Here are artifacts from this failing test (job 670337): [failing.tar.gz](/uploads/acd6e05b32d4d2f3263afde254881a24/failing.tar.gz)
#### And while it's green:
```plaintext
2023-08-31 14:05:30.712 | ❌ ERROR | asyncio.base_events:default_exception_handler:1744 - Task was destroyed but it is pending!
task: <Task pending name='Task-84' coro=<RequestHandler.start() done, defined at /builds/hive/clive/venv/lib/python3.10/site-packages/aiohttp/web_protocol.py:462> wait_for=<Future finished result=None>>
2023-08-31 14:05:30.862 | ℹ️ INFO | clive.__private.core.beekeeper.handle:__start:175 - Starting Beekeeper...
2023-08-31 14:05:30.864 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:listen:36 - Notifications server is listening on 41905...
2023-08-31 14:05:30.884 | ℹ️ INFO | clive.__private.core.beekeeper.notifications:notify:40 - Got notification: {'value': {'current_status': 'starting without a session'}, 'time': '2023-08-31T14:05:30', 'name': 'hived_status'}
2023-08-31 14:05:30.886 | ℹ️ INFO | aiohttp.web_log:log:206 - 127.0.0.1 [31/Aug/2023:14:05:30 +0000] "PUT / HTTP/1.1" 204 0 "-" "-"
2023-08-31 14:05:30.887 | ℹ️ INFO | clive.__private.core.beekeeper.notifications:notify:40 - Got notification: {'value': {'current_status': 'signals attached'}, 'time': '2023-08-31T14:05:30', 'name': 'hived_status'}
2023-08-31 14:05:30.887 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:notify:50 - Beekeeper reports to be ready
2023-08-31 14:05:30.888 | ℹ️ INFO | aiohttp.web_log:log:206 - 127.0.0.1 [31/Aug/2023:14:05:30 +0000] "PUT / HTTP/1.1" 204 0 "-" "-"
2023-08-31 14:05:30.888 | ℹ️ INFO | clive.__private.core.beekeeper.notifications:notify:40 - Got notification: {'value': {'address': '0.0.0.0', 'port': 34139, 'type': 'HTTP'}, 'time': '2023-08-31T14:05:30', 'name': 'webserver listening'}
2023-08-31 14:05:30.889 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:notify:47 - Got notification with http address on: http://127.0.0.1:34139
2023-08-31 14:05:30.889 | ℹ️ INFO | aiohttp.web_log:log:206 - 127.0.0.1 [31/Aug/2023:14:05:30 +0000] "PUT / HTTP/1.1" 204 0 "-" "-"
2023-08-31 14:05:35.878 | 🐞 DEBUG | clive.__private.core.beekeeper.handle:__run_beekeeper:207 - Got webserver http endpoint: `http://127.0.0.1:34139`
2023-08-31 14:05:35.879 | ℹ️ INFO | clive.__private.core.beekeeper.handle:__start:186 - Beekeeper started on http://127.0.0.1:34139.
2023-08-31 14:05:35.885 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=CreateSession(token='b90094a2d0aa2621f1bbe68535a0c7388b62d163641a6e6a3920b06cfe9bc372')
2023-08-31 14:05:35.888 | ℹ️ INFO | clive.__private.core.commands.abc.command:_log_execution_info:40 - Executing command: CreateWallet
2023-08-31 14:05:35.890 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=Create(password='password')
2023-08-31 14:05:35.891 | ℹ️ INFO | clive.__private.core.app_state:activate:33 - Mode switched to ACTIVE.
2023-08-31 14:05:35.892 | ℹ️ INFO | clive.__private.core.beekeeper.handle:close:160 - Closing Beekeeper...
2023-08-31 14:05:35.894 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=EmptyResponse()
2023-08-31 14:05:36.108 | 🐞 DEBUG | clive.__private.core.beekeeper.executable:close:105 - Beekeeper closed with return code of `0`.
2023-08-31 14:05:36.110 | 🐞 DEBUG | clive.__private.core.beekeeper.executable:__wait_for_pid_file_to_be_deleted:140 - Beekeeper PID file was deleted in 0.00 seconds.
2023-08-31 14:05:36.111 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:close:76 - Notifications server closed
2023-08-31 14:05:36.112 | ℹ️ INFO | clive.__private.core.beekeeper.handle:close:166 - Beekeeper closed.
2023-08-31 14:05:36.112 | ℹ️ INFO | clive.__private.core.beekeeper.handle:__start:175 - Starting Beekeeper...
2023-08-31 14:05:36.114 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:listen:36 - Notifications server is listening on 42591...
2023-08-31 14:05:36.137 | ℹ️ INFO | clive.__private.core.beekeeper.notifications:notify:40 - Got notification: {'value': {'current_status': 'starting without a session'}, 'time': '2023-08-31T14:05:36', 'name': 'hived_status'}
2023-08-31 14:05:36.138 | ℹ️ INFO | aiohttp.web_log:log:206 - 127.0.0.1 [31/Aug/2023:14:05:36 +0000] "PUT / HTTP/1.1" 204 0 "-" "-"
2023-08-31 14:05:36.140 | ℹ️ INFO | clive.__private.core.beekeeper.notifications:notify:40 - Got notification: {'value': {'current_status': 'signals attached'}, 'time': '2023-08-31T14:05:36', 'name': 'hived_status'}
2023-08-31 14:05:36.140 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:notify:50 - Beekeeper reports to be ready
2023-08-31 14:05:36.141 | ℹ️ INFO | aiohttp.web_log:log:206 - 127.0.0.1 [31/Aug/2023:14:05:36 +0000] "PUT / HTTP/1.1" 204 0 "-" "-"
2023-08-31 14:05:36.142 | ℹ️ INFO | clive.__private.core.beekeeper.notifications:notify:40 - Got notification: {'value': {'address': '0.0.0.0', 'port': 33197, 'type': 'HTTP'}, 'time': '2023-08-31T14:05:36', 'name': 'webserver listening'}
2023-08-31 14:05:36.143 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:notify:47 - Got notification with http address on: http://127.0.0.1:33197
2023-08-31 14:05:36.143 | ℹ️ INFO | aiohttp.web_log:log:206 - 127.0.0.1 [31/Aug/2023:14:05:36 +0000] "PUT / HTTP/1.1" 204 0 "-" "-"
2023-08-31 14:05:41.129 | 🐞 DEBUG | clive.__private.core.beekeeper.handle:__run_beekeeper:207 - Got webserver http endpoint: `http://127.0.0.1:33197`
2023-08-31 14:05:41.130 | ℹ️ INFO | clive.__private.core.beekeeper.handle:__start:186 - Beekeeper started on http://127.0.0.1:33197.
2023-08-31 14:05:41.136 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=CreateSession(token='712241ececec2f17484a73ab7d03c4e1eaad25f4242e1c9f8b5ef40ac1d33e4c')
2023-08-31 14:05:41.140 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=ListWallets(wallets=[])
2023-08-31 14:05:41.143 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=EmptyResponse()
2023-08-31 14:05:41.145 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=ListWallets(wallets=[WalletDetails(name='wallet', unlocked=False)])
2023-08-31 14:05:41.148 | ℹ️ INFO | clive.__private.core.beekeeper.handle:close:160 - Closing Beekeeper...
2023-08-31 14:05:41.150 | ℹ️ INFO | clive.__private.core.beekeeper.handle:_send:147 - Returning model: id_=0 jsonrpc='2.0' result=EmptyResponse()
2023-08-31 14:05:41.152 | 🐞 DEBUG | clive.__private.core.beekeeper.executable:close:105 - Beekeeper closed with return code of `-2`.
2023-08-31 14:05:41.153 | 🐞 DEBUG | clive.__private.core.beekeeper.executable:__wait_for_pid_file_to_be_deleted:140 - Beekeeper PID file was deleted in 0.00 seconds.
2023-08-31 14:05:41.153 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:close:76 - Notifications server closed
2023-08-31 14:05:41.153 | ℹ️ INFO | clive.__private.core.beekeeper.handle:close:166 - Beekeeper closed.
2023-08-31 14:05:41.175 | 🐞 DEBUG | asyncio.selector_events:__init__:54 - Using selector: EpollSelector
2023-08-31 14:05:41.177 | 🐞 DEBUG | asyncio.selector_events:__init__:54 - Using selector: EpollSelector
```
and the beekeeper logs:
```plaintext
336125ms json_rpc_plugin.cpp:222 initialize ] initializing JSON RPC plugin
336125ms webserver_plugin.cpp:584 plugin_initialize ] initializing webserver plugin
336125ms webserver_plugin.cpp:587 plugin_initialize ] configured with 1 thread pool size
336125ms webserver_plugin.cpp:590 plugin_initialize ] Compression in webserver is disabled
336126ms webserver_plugin.cpp:602 plugin_initialize ] configured http to listen on 0.0.0.0:0
336126ms beekeeper_app_init.cpp:123 initialize_program_o ] initializing options
336127ms notifications.cpp:64 setup ] setting up notification handler for 1 address
336134ms beekeeper_app_init.cpp:166 initialize_program_o ] Backtrace on segfault is enabled.
336135ms webserver_plugin.cpp:290 operator() ] start processing http thread
336135ms webserver_plugin.cpp:305 operator() ] start listening for http requests on 0.0.0.0:33197
341134ms json_rpc_plugin.cpp:439 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.create_session","params":{"notifications_endpoint":"127.0.0.1:42591","salt":"140566037592720"}}
341139ms json_rpc_plugin.cpp:439 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.list_wallets","params":{"token":"712241ececec2f17484a73ab7d03c4e1eaad25f4242e1c9f8b5ef40ac1d33e4c"}}
341142ms json_rpc_plugin.cpp:439 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.open","params":{"wallet_name":"wallet","token":"712241ececec2f17484a73ab7d03c4e1eaad25f4242e1c9f8b5ef40ac1d33e4c"}}
341144ms json_rpc_plugin.cpp:439 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.list_wallets","params":{"token":"712241ececec2f17484a73ab7d03c4e1eaad25f4242e1c9f8b5ef40ac1d33e4c"}}
341150ms json_rpc_plugin.cpp:439 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.close_session","params":{"token":"712241ececec2f17484a73ab7d03c4e1eaad25f4242e1c9f8b5ef40ac1d33e4c"}}
341150ms application.cpp:99 handle_signal ] _last_signal_code: 2
341150ms application.cpp:90 generate_interrupt_r ] interrupt requested!
341150ms webserver_plugin.cpp:651 plugin_pre_shutdown ] Shutting down webserver_plugin...
341150ms webserver_plugin.cpp:310 operator() ] http io service exit
341151ms application.cpp:475 finish ] Waiting for logging_thread quit
341151ms application.cpp:477 finish ] logging_thread quit done
```
Here are artifacts from this test (job 670336): [success.tar.gz](/uploads/2e673695eaddb7ed408afecb6dc47011/success.tar.gz)
Here is a job when such a situation occurs: https://gitlab.syncad.com/hive/clive/-/jobs/670337
In the CI logs, we can observe that a test failed because of \
`Beekeeper PID file /builds/hive/clive/tests/unit/beekeeper/generated_during_test_wallet/test_wallet_open/beekeeper/beekeeper.pid was not deleted in 10.0 seconds.`
This happens when beekeeper closes with `-2`, but not always; sometimes the file is deleted anyway:
```plaintext
2023-08-31 14:05:36.108 | 🐞 DEBUG | clive.__private.core.beekeeper.executable:close:105 - Beekeeper closed with return code of `0`.
2023-08-31 14:05:36.110 | 🐞 DEBUG | clive.__private.core.beekeeper.executable:__wait_for_pid_file_to_be_deleted:140 - Beekeeper PID file was deleted in 0.00 seconds.
2023-08-31 14:05:36.111 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:close:76 - Notifications server closed
```
vs
```plaintext
2023-08-31 14:05:41.152 | 🐞 DEBUG | clive.__private.core.beekeeper.executable:close:105 - Beekeeper closed with return code of `-2`.
2023-08-31 14:05:41.153 | 🐞 DEBUG | clive.__private.core.beekeeper.executable:__wait_for_pid_file_to_be_deleted:140 - Beekeeper PID file was deleted in 0.00 seconds.
2023-08-31 14:05:41.153 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:close:76 - Notifications server closed
```
vs
```plaintext
2023-08-31 14:07:43.813 | 🐞 DEBUG | clive.__private.core.beekeeper.executable:close:105 - Beekeeper closed with return code of `-2`.
2023-08-31 14:07:53.833 | 🐞 DEBUG | clive.__private.core.beekeeper.notifications:close:76 - Notifications server closed
```
and some successful jobs from the same pipeline: https://gitlab.syncad.com/hive/clive/-/jobs/670336 https://gitlab.syncad.com/hive/clive/-/jobs/670335 https://gitlab.syncad.com/hive/clive/-/jobs/670330
This is happening randomly in other tests too, not only the one mentioned above: https://gitlab.syncad.com/hive/clive/-/jobs/670306
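A side note that may help triage: on POSIX, Python's `subprocess.Popen` reports a negative return code when the child was killed by a signal, so `-2` means beekeeper was terminated by SIGINT rather than exiting cleanly with `0`. Below is a minimal sketch of the close-and-wait pattern visible in the logs above, with a hypothetical `close_and_wait` helper (not clive's actual implementation):

```python
import signal
import subprocess
import time
from pathlib import Path


def close_and_wait(process: subprocess.Popen, pid_file: Path, timeout: float = 10.0) -> None:
    """Request a graceful shutdown, then wait for the PID file to disappear."""
    process.send_signal(signal.SIGINT)
    return_code = process.wait()
    # On POSIX, Popen.wait() returns the negated signal number when the child
    # was killed by a signal, so termination via SIGINT (signal 2) shows up as -2.
    print(f"Beekeeper closed with return code of `{return_code}`.")

    deadline = time.monotonic() + timeout
    while pid_file.exists():
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Beekeeper PID file {pid_file} was not deleted in {timeout} seconds.")
        time.sleep(0.1)
```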
https://gitlab.syncad.com/hive/clive/-/issues/47
Private key in memo
2023-11-22T13:03:06Z
Aleksandra Grabowska
The validation should not allow sending a transfer with a private key in the memo.
Planned features and fixes
https://gitlab.syncad.com/hive/clive/-/issues/46
Some nodes respond with `Unable to send request to endpoint`
2023-11-22T13:08:45Z
Mateusz Żebrak
Observed on `techcoderx.com` and `rpc.ausbit.dev`
```plaintext
1 2023-08-31 12:52:26.451 | ❌ ERROR | clive.exceptions:__init__:27 - Problem occurred during communication with: url=https://rpc.ausbit.dev, request=[{"id": 0, "jsonrpc": "2.0", "method": "database_api.find_accounts", "params": {"accounts": ["alice", "bob", "timmy", "john"]}},{"id": 1, "jsonrpc": "2.0", "method": "rc_api.find_rc_accounts", "params": {"accounts": ["alice", "bob", "timmy", "john"]}},{"id": 2, "jsonrpc": "2.0", "method": "reputation_api.get_account_reputations", "params": {"account_lower_bound": "alice", "limit": 1}},{"id": 3, "jsonrpc": "2.0", "method": "account_history_api.get_account_history", "params": {"account": "alice", "limit": 1, "operation_filter_low": 1125899906842623, "include_reversible": true}},{"id": 4, "jsonrpc": "2.0", "method": "database_api.list_decline_voting_rights_requests", "params": {"start": "alice", "limit": 1, "order": "by_account"}},{"id": 5, "jsonrpc": "2.0", "method": "database_api.list_change_recovery_account_requests", "params": {"start": "alice", "limit": 1, "order": "by_account"}},{"id": 6, "jsonrpc": "2.0", "method": "database_api.list_owner_histories", "params": {"start": ["alice", "1970-01-01T01:00:00.000000"], "limit": 1}},{"id": 7, "jsonrpc": "2.0", "method": "reputation_api.get_account_reputations", "params": {"account_lower_bound": "bob", "limit": 1}},{"id": 8, "jsonrpc": "2.0", "method": "account_history_api.get_account_history", "params": {"account": "bob", "limit": 1, "operation_filter_low": 1125899906842623, "include_reversible": true}},{"id": 9, "jsonrpc": "2.0", "method": "database_api.list_decline_voting_rights_requests", "params": {"start": "bob", "limit": 1, "order": "by_account"}},{"id": 10, "jsonrpc": "2.0", "method": "database_api.list_change_recovery_account_requests", "params": {"start": "bob", "limit": 1, "order": "by_account"}},{"id": 11, "jsonrpc": "2.0", "method": "database_api.list_owner_histories", "params": {"start": ["bob", "1970-01-01T01:00:00.000000"], "limit": 1}},{"id": 12, "jsonrpc": "2.0", "method": "reputation_api.get_account_reputations", "params": {"account_lower_bound": "timmy", "limit": 1}},{"id": 13, "jsonrpc": "2.0", "method": "account_history_api.get_account_history", "params": {"account": "timmy", "limit": 1, "operation_filter_low": 1125899906842623, "include_reversible": true}},{"id": 14, "jsonrpc": "2.0", "method": "database_api.list_decline_voting_rights_requests", "params": {"start": "timmy", "limit": 1, "order": "by_account"}},{"id": 15, "jsonrpc": "2.0", "method": "database_api.list_change_recovery_account_requests", "params": {"start": "timmy", "limit": 1, "order": "by_account"}},{"id": 16, "jsonrpc": "2.0", "method": "database_api.list_owner_histories", "params": {"start": ["timmy", "1970-01-01T01:00:00.000000"], "limit": 1}},{"id": 17, "jsonrpc": "2.0", "method": "reputation_api.get_account_reputations", "params": {"account_lower_bound": "john", "limit": 1}},{"id": 18, "jsonrpc": "2.0", "method": "account_history_api.get_account_history", "params": {"account": "john", "limit": 1, "operation_filter_low": 1125899906842623, "include_reversible": true}},{"id": 19, "jsonrpc": "2.0", "method": "database_api.list_decline_voting_rights_requests", "params": {"start": "john", "limit": 1, "order": "by_account"}},{"id": 20, "jsonrpc": "2.0", "method": "database_api.list_change_recovery_account_requests", "params": {"start": "john", "limit": 1, "order": "by_account"}},{"id": 21, "jsonrpc": "2.0", "method": "database_api.list_owner_histories", "params": {"start": ["john", "1970-01-01T01:00:00.000000"], "limit": 1}}], response={"jsonrpc":"2.0","id":2,"code":-32700,"message":"Unable to send request to endpoint.","error":"error sending request for url (http://136.243.7.21:3169/): connection closed before message completed"}
```
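For reference, the failing request is a JSON-RPC 2.0 batch: one HTTP POST whose body is a JSON array of request objects, answered by an array of responses. A minimal sketch of such a batch call with aiohttp (illustrative only; the URL and methods are copied from the log above):

```python
import asyncio

import aiohttp


async def send_batch(url: str, batch: list[dict]) -> list[dict]:
    """POST a JSON-RPC 2.0 batch (an array of request objects) to a node."""
    async with aiohttp.ClientSession() as session:
        async with session.post(url, json=batch) as response:
            response.raise_for_status()
            return await response.json()


async def main() -> None:
    batch = [
        {"id": 0, "jsonrpc": "2.0", "method": "database_api.find_accounts",
         "params": {"accounts": ["alice", "bob", "timmy", "john"]}},
        {"id": 1, "jsonrpc": "2.0", "method": "rc_api.find_rc_accounts",
         "params": {"accounts": ["alice", "bob", "timmy", "john"]}},
    ]
    print(await send_batch("https://rpc.ausbit.dev", batch))


asyncio.run(main())
```

Note the `response=` part of the log: the node itself returned a JSON-RPC error object reporting that its upstream at http://136.243.7.21:3169/ closed the connection, so the whole batch failed at once.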
Planned features and fixes
https://gitlab.syncad.com/hive/clive/-/issues/45
Some nodes respond with `Request entity too large error`
2023-11-22T13:08:50Z
Mateusz Żebrak
Some nodes (observed on https://anyx.io) respond with a "Request entity too large" error when a too-large batch request is sent:
```plaintext
47 2023-08-31 12:23:28.817 | ❌ ERROR | clive.exceptions:__init__:25 - Problem occurred during communication with: url=https://anyx.io, request=[{"id": 0, "jsonrpc": "2.0", "method": "database_api.find_accounts", "params": {"accounts": ["alice", "john", "bob", "timmy"]}},{"id": 1, "jsonrpc": "2.0", "method": "rc_api.find_rc_accounts", "params": {"accounts": ["alice", "john", "bob", "timmy"]}},{"id": 2, "jsonrpc": "2.0", "method": "reputation_api.get_account_reputations", "params": {"account_lower_bound": "alice", "limit": 1}},{"id": 3, "jsonrpc": "2.0", "method": "account_history_api.get_account_history", "params": {"account": "alice", "limit": 1, "operation_filter_low": 1125899906842623, "include_reversible": true}},{"id": 4, "jsonrpc": "2.0", "method": "database_api.list_decline_voting_rights_requests", "params": {"start": "alice", "limit": 1, "order": "by_account"}},{"id": 5, "jsonrpc": "2.0", "method": "database_api.list_change_recovery_account_requests", "params": {"start": "alice", "limit": 1, "order": "by_account"}},{"id": 6, "jsonrpc": "2.0", "method": "database_api.list_owner_histories", "params": {"start": ["alice", "1970-01-01T01:00:00.000000"], "limit": 1}},{"id": 7, "jsonrpc": "2.0", "method": "reputation_api.get_account_reputations", "params": {"account_lower_bound": "john", "limit": 1}},{"id": 8, "jsonrpc": "2.0", "method": "account_history_api.get_account_history", "params": {"account": "john", "limit": 1, "operation_filter_low": 1125899906842623, "include_reversible": true}},{"id": 9, "jsonrpc": "2.0", "method": "database_api.list_decline_voting_rights_requests", "params": {"start": "john", "limit": 1, "order": "by_account"}},{"id": 10, "jsonrpc": "2.0", "method": "database_api.list_change_recovery_account_requests", "params": {"start": "john", "limit": 1, "order": "by_account"}},{"id": 11, "jsonrpc": "2.0", "method": "database_api.list_owner_histories", "params": {"start": ["john", "1970-01-01T01:00:00.000000"], "limit": 1}},{"id": 12, "jsonrpc": "2.0", "method": "reputation_api.get_account_reputations", "params": {"account_lower_bound": "bob", "limit": 1}},{"id": 13, "jsonrpc": "2.0", "method": "account_history_api.get_account_history", "params": {"account": "bob", "limit": 1, "operation_filter_low": 1125899906842623, "include_reversible": true}},{"id": 14, "jsonrpc": "2.0", "method": "database_api.list_decline_voting_rights_requests", "params": {"start": "bob", "limit": 1, "order": "by_account"}},{"id": 15, "jsonrpc": "2.0", "method": "database_api.list_change_recovery_account_requests", "params": {"start": "bob", "limit": 1, "order": "by_account"}},{"id": 16, "jsonrpc": "2.0", "method": "database_api.list_owner_histories", "params": {"start": ["bob", "1970-01-01T01:00:00.000000"], "limit": 1}},{"id": 17, "jsonrpc": "2.0", "method": "reputation_api.get_account_reputations", "params": {"account_lower_bound": "timmy", "limit": 1}},{"id": 18, "jsonrpc": "2.0", "method": "account_history_api.get_account_history", "params": {"account": "timmy", "limit": 1, "operation_filter_low": 1125899906842623, "include_reversible": true}},{"id": 19, "jsonrpc": "2.0", "method": "database_api.list_decline_voting_rights_requests", "params": {"start": "timmy", "limit": 1, "order": "by_account"}},{"id": 20, "jsonrpc": "2.0", "method": "database_api.list_change_recovery_account_requests", "params": {"start": "timmy", "limit": 1, "order": "by_account"}},{"id": 21, "jsonrpc": "2.0", "method": "database_api.list_owner_histories", "params": {"start": ["timmy", "1970-01-01T01:00:00.000000"], "limit": 1}}], response=Request Entity Too Large
```
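One possible mitigation, purely a suggestion on my part rather than anything the node documents, is to cap the batch size client-side and merge the per-chunk responses. A sketch (the `max_size` of 5 is an arbitrary placeholder):

```python
import aiohttp


async def send_batch(url: str, batch: list[dict]) -> list[dict]:
    """POST a JSON-RPC 2.0 batch (an array of request objects) to a node."""
    async with aiohttp.ClientSession() as session:
        async with session.post(url, json=batch) as response:
            response.raise_for_status()
            return await response.json()


async def send_in_chunks(url: str, batch: list[dict], max_size: int = 5) -> list[dict]:
    """Split an oversized batch into chunks the node will accept and merge the answers."""
    responses: list[dict] = []
    for start in range(0, len(batch), max_size):
        responses.extend(await send_batch(url, batch[start:start + max_size]))
    return responses
```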
Planned features and fixes
https://gitlab.syncad.com/hive/clive/-/issues/43
Notifications - create a place to store their history
2023-08-31T06:41:41Z
Mateusz Żebrak
Planned features and fixes
https://gitlab.syncad.com/hive/clive/-/issues/42
Use single session of aiohttp instead of creating it for every request
2023-09-06T06:48:46Z
Mateusz Żebrak
See: https://gitlab.syncad.com/hive/clive/-/merge_requests/162 and https://gitlab.syncad.com/hive/clive/-/merge_requests/162/diffs?commit_id=13576d83e64eb83c962d860c75bd451b7fe07c38
Might be related: https://github.com/aio-libs/aiohttp/issues/6138
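A minimal sketch of the proposed change, assuming the current code opens a new `aiohttp.ClientSession` per request (the `Communication` class name below is hypothetical, not clive's actual API): keep one session alive for the application's lifetime so TCP connections are pooled and reused.

```python
import aiohttp


class Communication:
    """Owns a single shared aiohttp session instead of creating one per request."""

    def __init__(self) -> None:
        self._session: aiohttp.ClientSession | None = None

    async def __aenter__(self) -> "Communication":
        self._session = aiohttp.ClientSession()
        return self

    async def __aexit__(self, *exc_info: object) -> None:
        assert self._session is not None
        await self._session.close()

    async def request(self, url: str, payload: dict) -> dict:
        assert self._session is not None, "use only within 'async with'"
        async with self._session.post(url, json=payload) as response:
            return await response.json()
```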
Planned features and fixes
https://gitlab.syncad.com/hive/haf/-/issues/155
HAF instance communication stalled at some point
2023-08-29T18:07:14Z
Bartek Wrona
After starting an already filled HAF instance that had been down for some time (it had to catch up ca. 200k blocks), it dropped indexes, performed massive sync, entered live sync, and started building up indexes again. This process again blocked processing for a significant time (ca. 2 hrs).
After that, the node started to process blocks, but it seems it only consumed the part already filled into the write-queue; even though it still had connected P2P peers, it didn't receive any block after that.
Maybe this problem is rather related to the P2P communication layer.
Attached hived output.
[hived.log.hung](/uploads/86e5ad8e23343b66406aff4d40d7d554/hived.log.hung)
https://gitlab.syncad.com/hive/clive/-/issues/41
Check if we can get rid of extra screen-stack managing methods
2023-11-15T10:21:46Z
Mateusz Żebrak
It might be possible to replace such methods with the new Textual modes functionality.
Check: https://github.com/Textualize/textual/issues/3127
Planned features and fixes
https://gitlab.syncad.com/hive/hive/-/issues/568
beekeeper | --webserver-thread-pool-size problem with signed value
2023-10-12T09:29:52Z
Wieslaw Kedzierski
The problem occurs when we try to assign a signed value to this flag.
Internally, `webserver-thread-pool-size` maps to `thread_pool_size_t`, which is a `uint32_t`. A negative value cannot be stored in it, so `-1` wraps around to `4294967295`.
Example:
```
./beekeeper --notifications-endpoint 127.0.0.1:8000 --webserver-http-endpoint 127.0.0.1:6666 --salt "avocado" --webserver-thread-pool-size -1
1709560ms json_rpc_plugin.cpp:222 initialize ] initializing JSON RPC plugin
1709560ms webserver_plugin.cpp:584 plugin_initialize ] initializing webserver plugin
1709560ms webserver_plugin.cpp:587 plugin_initialize ] configured with 4294967295 thread pool size
1709560ms webserver_plugin.cpp:590 plugin_initialize ] Compression in webserver is disabled
1709560ms webserver_plugin.cpp:602 plugin_initialize ] configured http to listen on 127.0.0.1:6666
1709560ms beekeeper_app_init.cpp:120 initialize_program_o ] initializing options
1709561ms notifications.cpp:64 setup ] setting up notification handler for 1 address
1709563ms beekeeper_app_init.cpp:157 initialize_program_o ] Backtrace on segfault is enabled.
Setting up a startup_io_handler...
1709563ms webserver_plugin.cpp:290 operator() ] start processing http thread
1709563ms webserver_plugin.cpp:305 operator() ] start listening for http requests on 127.0.0.1:6666
Throw location unknown (consider using BOOST_THROW_EXCEPTION)
Dynamic exception type: boost::wrapexcept<boost::thread_resource_error>
std::exception::what: boost::thread_resource_error: Resource temporarily unavailable
```
`1709560ms webserver_plugin.cpp:587 plugin_initialize ] configured with 4294967295 thread pool size`
It is a problem with `webserver_plugin`, but the beekeeper inherited it.
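The wrap-around itself is ordinary unsigned-integer conversion: `-1` stored in a `uint32_t` becomes `2**32 - 1`. A quick illustration in Python (beekeeper itself is C++; this only demonstrates the arithmetic):

```python
import ctypes

# -1 reinterpreted as an unsigned 32-bit integer wraps to 2**32 - 1.
assert ctypes.c_uint32(-1).value == 4294967295
# The same value, computed by masking to 32 bits:
assert -1 & 0xFFFFFFFF == 4294967295
```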
https://gitlab.syncad.com/hive/hivemind/-/issues/207
Sync crashes after finishing massive sync. Some connections not closed.
2024-02-27T22:45:09Z
Ben Swinburn
shmoogleosukami@gmail.com
I've been having this issue for a while now where my hivemind sync seemingly disappears after running for a while. Having learnt that it is auto-removed on close made it difficult to figure out why.
I've finally had it crash/close while watching the logs, and now I know why it's not working.
Not sure what to do about it though. I have rarely seen my node enter single-block mode, either because it's too slow to keep up (usually due to reputation calculations taking too long) or because it just closes, likely with the below error.
Update:
Using HAF 1.27.4 and HAF Hivemind 1.27.4
I've attached my config.ini as requested by blocktrades (sensitive data omitted, of course).
Update 2:
VM:
- running 8 vcores and 128GB ram
- Storage type, Software Raid0 (host) Enterprise SSDs x2, 4TB each.
- These drives only serve the HIVE VM.
- Haf, Hafah and Haf Hivemind all run on the same VM. All are run as docker containers.
Host machine:
- Proxmox base OS
- CPU: Xeon E5-2690 v2 x2 (2 Sockets)
- Ram: 256GB ECC DDR3
- MOBO: ASUS Z9PE-D8 WS
Other VMs running:
- small reverse proxy feeding data to hive and various other endpoints
- A family media server
- A game server for satisfactory
- and a small VM for a friend.
All of these are rather lightweight and barely dent the server resources. None of these VMs are on the same drives as HIVE.
Update 3:
Some additional and perhaps relevant info: on my server, in its current state of not going into single-block mode, massive sync usually processes around 140 blocks with each 'pass', so to say.
Each batch of blocks it works on takes long enough that there are 140-odd blocks to process again, and this loops a number of times until it inevitably throws the error below.
It can run this way anywhere from 15 mins to several hours; recently it's mostly been lasting 30 mins to an hour before dying. (I've been sat with a console open watching the logs go by as I do other stuff.)
<pre>
INFO - hive.db.db_state:520 - [MASSIVE] After massive sync actions done in 4456.2228s
INFO - hive.db.db_state:76 - [MASSIVE] Massive sync complete!
INFO - hive.indexer.hive_db.haf_functions:37 - Trying to attach app context with block number: 77797737
INFO - hive.indexer.hive_db.haf_functions:39 - App context attaching done.
INFO - hive.db.adapter:90 - Closing database connection: 'PostDataCache'
INFO - hive.db.adapter:90 - Closing database connection: 'Reputations'
INFO - hive.db.adapter:90 - Closing database connection: 'Votes'
INFO - hive.db.adapter:90 - Closing database connection: 'Follow'
INFO - hive.db.adapter:90 - Closing database connection: 'Posts'
INFO - hive.db.adapter:90 - Closing database connection: 'Reblog'
INFO - hive.db.adapter:90 - Closing database connection: 'Notify'
INFO - hive.db.adapter:90 - Closing database connection: 'Accounts'
INFO - hive.db.adapter:90 - Closing database connection: 'PayoutStats'
INFO - hive.db.adapter:90 - Closing database connection: 'Mentions'
INFO - hive.db.adapter:90 - Closing database connection: 'MassiveBlocksProvider_OperationsData'
INFO - hive.db.adapter:90 - Closing database connection: 'MassiveBlocksProvider_BlocksData'
INFO - hive.indexer.sync:71 - Exiting HAF mode synchronization
WARNING - hive.server.common.payout_stats:16 - Rebuilding payout_stats_view in separate transaction
INFO - hive.indexer.sync:76 - LAST IMPORTED BLOCK IS: 77797737
INFO - hive.indexer.sync:77 - LAST COMPLETED BLOCK IS: 77797737
INFO - hive.indexer.hive_db.haf_functions:34 - Context already attached - attaching skipped.
INFO - hive.db.adapter:90 - Closing database connection: 'PostDataCache'
INFO - hive.db.adapter:90 - Closing database connection: 'Reputations'
INFO - hive.db.adapter:90 - Closing database connection: 'Votes'
INFO - hive.db.adapter:90 - Closing database connection: 'Follow'
INFO - hive.db.adapter:90 - Closing database connection: 'Posts'
INFO - hive.db.adapter:90 - Closing database connection: 'Reblog'
INFO - hive.db.adapter:90 - Closing database connection: 'Notify'
INFO - hive.db.adapter:90 - Closing database connection: 'Accounts'
INFO - hive.db.adapter:90 - Closing database connection: 'PayoutStats'
INFO - hive.db.adapter:90 - Closing database connection: 'Mentions'
INFO - hive.db.adapter:90 - Closing database connection: 'root'
INFO - hive.db.adapter:103 - Disposing SQL engine
INFO - hive.conf:254 - The database is disconnected...
Traceback (most recent call last):
File "/home/hivemind/.hivemind-venv/bin/hive", line 8, in <module>
sys.exit(run())
File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/hive/cli.py", line 73, in run
launch_mode(mode, conf)
File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/hive/cli.py", line 87, in launch_mode
sync.run()
File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/hive/indexer/sync.py", line 151, in run
self._assert_connections_closed(active_connections_before, active_connections_after_massive)
File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/hive/indexer/sync.py", line 366, in _assert_connections_closed
assert set(connections_before) == set(connections_after), assert_message
AssertionError: Some db connections used in MASSIVE sync were not closed!
before: [('hivemind_root',), ('hivemind_root',)]
after: [('hivemind_root',), ('hivemind_root',), ('hivemind_PayoutStats',)]
Exiting docker entrypoint...
</pre>
[config.ini](/uploads/a581401642b1ee9bf1e68a91d1e68c06/config.ini)
Haf Log:[haflog.zip](/uploads/ce1b432882c20a26eff95ec6295b6403/haflog.zip)
The hivemind sync failed sometime between 2AM 1/09 and 2:45AM
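The failing check can be reproduced from the values in the traceback; duplicates collapse when the lists are turned into sets, so it is the single extra `hivemind_PayoutStats` connection that trips the assertion. The `WARNING ... Rebuilding payout_stats_view in separate transaction` line just before shutdown suggests, though this is only my reading of the log, that the background rebuild is what leaves that connection open:

```python
# Values copied from the traceback above.
before = [('hivemind_root',), ('hivemind_root',)]
after = [('hivemind_root',), ('hivemind_root',), ('hivemind_PayoutStats',)]

# set(before) == {('hivemind_root',)}
# set(after)  == {('hivemind_root',), ('hivemind_PayoutStats',)}
assert set(before) == set(after)  # raises AssertionError
```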
https://gitlab.syncad.com/hive/hive/-/issues/567
beekeeper | remove unnecessary command line flags
2023-08-25T07:37:16Z
Wieslaw Kedzierski
Beekeeper has a set of flags that are not important to it; they are just inherited from `application:app`.
We should print/handle only those valid for beekeeper.
List of valid flags:
Inherited from:
`webserver`:
* `--webserver-http-endpoint arg`,
* `--webserver-thread-pool-size arg (=32)`.
`json_rpc`:
* `--log-json-rpc arg`
Our own:
* `--notifications-endpoint`,
* `--wallet-dir`,
* `--unlock-timeout`,
* `--backtrace`,
* `--salt`,
* `--export-keys-wallet-name`,
* `--export-keys-wallet-password`,
List of flags that should be removed:
`application:app`:
* `--plugin plugin-name`,
* `--list-plugins`,
* `--generate-completions`,
* `--data-dir, -d`,
* `--config, -c`,
* `--dump-config`.
`webserver`:
* `--webserver-unix-endpoint arg`,
* `--webserver-ws-endpoint arg`,
* `--webserver-ws-deflate arg (=0)`,
* `--rpc-endpoint arg`.
Mariusz Trela
https://gitlab.syncad.com/hive/hive/-/issues/565
exception in plugin (get_comment) which ... shouldn't be there?
2024-02-07T21:20:50Z
Gandalf
`Caught exception in plugin: 13 N5boost10wrapexceptISt12out_of_rangeEE: unknown key`
example:
```
2023-08-10T12:06:56.697 database.cpp:1405 notify_pre_apply_ope ] Caught exception in plugin: 13 N5boost10wrapexceptISt12out_of_rangeEE: unknown key
unknown key:
{"author":2426139,"permlink":"splinterboost-daily-report-8-10-2023","what":"unknown key"}
database.cpp:735 get_comment
{"author":"splinterboost","permlink":"splinterboost-daily-report-8-10-2023"}
database.cpp:745 get_comment
```
Is moving it into consensus our way to go?
FYI @ABW @bwrona
HF-28
https://gitlab.syncad.com/hive/hive/-/issues/563
Tests - beekeeper - performance
2024-01-19T09:15:03Z
Wieslaw Kedzierski
Test beekeeper's functionality under heavy load.
Under heavy load, we should check:
- [ ] validate read/write operations,
- [ ] validate timeout influence,
- [ ] try to simulate a crash.
Wieslaw Kedzierski
https://gitlab.syncad.com/hive/hive/-/issues/562
Tests - beekeeper - use cases
2024-02-14T09:16:51Z
Wieslaw Kedzierski
Test the beekeeper workflow, including:
- One session of beekeeper:
- New wallet
- [x] Create a session with one wallet and one key,
- [x] Create a session with many wallets with one key,
- [x] Create a session with one wallet and many keys,
- [x] Create a session with many wallets and many keys.
- Existing wallets
- [ ] Create a session with one wallet and one key,
- [ ] Create a session with many wallets with one key,
- [ ] Create a session with one wallet and many keys,
- [ ] Create a session with many wallets and many keys.
- Many sessions of beekeeper:
- New wallet
- [ ] Create a session with one wallet and one key,
- [ ] Create a session with many wallets with one key,
- [ ] Create a session with one wallet and many keys,
- [ ] Create a session with many wallets and many keys.
- Existing wallets
- [ ] Create a session with one wallet and one key,
- [ ] Create a session with many wallets with one key,
- [ ] Create a session with one wallet and many keys,
- [ ] Create a session with many wallets and many keys.
Wieslaw Kedzierski
https://gitlab.syncad.com/hive/hive/-/issues/559
Tests - beekeeper - commandline
2023-10-16T08:07:29Z
Wieslaw Kedzierski
Based on the issue https://gitlab.syncad.com/hive/hive/-/issues/567 command line flags that will be covered are:
Application Command Line Options:
- [x] help,
- [x] version.
Application Options:
- [x] backtrace,
- [x] export_keys_wallet_name,
- [x] export_keys_wallet_password,
- [x] log_json_rpc,
- [x] notifications_endpoint,
- [x] unlock_timeout,
- [x] wallet_dir,
- [x] webserver_http_endpoint,
- [x] webserver_thread_pool_size.
Wieslaw Kedzierski
https://gitlab.syncad.com/hive/clive/-/issues/37
Refactor `table` screens
2024-02-02T14:23:40Z
Mateusz Żebrak
Views showing a table like `ManageAuthorities`, `OperationsCart` should be refactored, and a base for such a table screen should be created.
MVP - Minimum Viable Product
https://gitlab.syncad.com/hive/clive/-/issues/36
Add screen for managing watched accounts
2024-03-26T09:00:29Z
Mateusz Żebrak
Possibly related: #37
This screen should probably look similar to `ManageAuthorities` and `OperationsCart`, so users can remove an account from the watched list or add a new one.
10th release
Jakub Ziebinski
https://gitlab.syncad.com/hive/test-tools/-/issues/38
Problem with uncaught exception from node
2023-08-21T10:17:39Z
Radosław Masłowski
I noticed a problem with an uncaught exception from a node, to which pytest reacts with "no tests ran".
I get this problem in HAF mirrornet tests run locally (commit https://gitlab.syncad.com/hive/haf/-/commit/58757585aa33da216908c67b922b297b4ba59491) in the test: `test_replay_in_mirrornet.py`
<p>
<details>
<summary>Logs from pytest:</summary>
<pre><code>
(venv) haf_admin@95f259f2d99d:/workspace/haf$ cd /workspace/haf ; /usr/bin/env /workspace/haf/venv/bin/python /workspace/.vscode-server/extensions/ms-python.python-2023.15.12301911/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher 46001 -- -m pytest --numprocesses=auto /workspace/haf/tests/integration/system/haf/mirrornet_tests/test_replay_in_mirrornet.py --timeout=360000 --block-log-path=/workspace/block_logs/new/block_log --snapshot-path=/workspace/manual_runs/witness_node_for_snapshot/snapshot -m\ mirrornet -s --full-trace
================================================================================================================== test session starts ===================================================================================================================
platform linux -- Python 3.10.12, pytest-7.2.2, pluggy-1.2.0
rootdir: /workspace/haf/tests/integration, configfile: pytest.ini
plugins: rerunfailures-10.2, tavern-2.2.0, repeat-0.9.1, xdist-3.2.0, timeout-2.1.0
timeout: 360000.0s
timeout method: signal
timeout func_only: False
[gw0] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw1] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw2] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw4] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw3] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw5] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw6] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw7] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw9] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw8] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw11] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw10] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw12] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw15] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw13] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
[gw14] Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
gw0 [2] / gw1 [2] / gw2 [2] / gw3 [2] / gw4 [2] / gw5 [2] / gw6 [2] / gw7 [2] / gw8 [2] / gw9 [2] / gw10 [2] / gw11 [2] / gw12 [2] / gw13 [2] / gw14 [2] / gw15 [2]
scheduling tests via LoadScheduling
tests/integration/system/haf/mirrornet_tests/test_replay_in_mirrornet.py::test_replay[disabled_indexes_in_replay]
tests/integration/system/haf/mirrornet_tests/test_replay_in_mirrornet.py::test_replay[enabled_indexes]
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! KeyboardInterrupt !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
config = <_pytest.config.Config object at 0x7fb0c24216f0>, doit = <function _main at 0x7fb0c269beb0>
def wrap_session(
config: Config, doit: Callable[[Config, "Session"], Optional[Union[int, ExitCode]]]
) -> Union[int, ExitCode]:
"""Skeleton command line program."""
session = Session.from_config(config)
session.exitstatus = ExitCode.OK
initstate = 0
try:
try:
config._do_configure()
initstate = 1
config.hook.pytest_sessionstart(session=session)
initstate = 2
> session.exitstatus = doit(config, session) or 0
venv/lib/python3.10/site-packages/_pytest/main.py:270:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
config = <_pytest.config.Config object at 0x7fb0c24216f0>, session = <Session integration exitstatus=<ExitCode.OK: 0> testsfailed=0 testscollected=2>
def _main(config: Config, session: "Session") -> Optional[Union[int, ExitCode]]:
"""Default command line protocol for initialization, session,
running tests and reporting."""
config.hook.pytest_collection(session=session)
> config.hook.pytest_runtestloop(session=session)
venv/lib/python3.10/site-packages/_pytest/main.py:324:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_HookCaller 'pytest_runtestloop'>, kwargs = {'session': <Session integration exitstatus=<ExitCode.OK: 0> testsfailed=0 testscollected=2>}, firstresult = True
def __call__(self, **kwargs: object) -> Any:
assert (
not self.is_historic()
), "Cannot directly call a historic hook - use call_historic instead."
self._verify_all_args_are_provided(kwargs)
firstresult = self.spec.opts.get("firstresult", False) if self.spec else False
> return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
venv/lib/python3.10/site-packages/pluggy/_hooks.py:433:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_pytest.config.PytestPluginManager object at 0x7fb0c2efb550>, hook_name = 'pytest_runtestloop'
methods = [<HookImpl plugin_name='main', plugin=<module '_pytest.main' from '/workspace/haf/venv/lib/python3.10/site-packages/_p...b0bd9ab610>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x7fb0bd9aa080>>]
kwargs = {'session': <Session integration exitstatus=<ExitCode.OK: 0> testsfailed=0 testscollected=2>}, firstresult = True
def _hookexec(
self,
hook_name: str,
methods: Sequence[HookImpl],
kwargs: Mapping[str, object],
firstresult: bool,
) -> object | list[object]:
# called from all hookcaller instances.
# enable_tracing will set its own wrapping function at self._inner_hookexec
> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
venv/lib/python3.10/site-packages/pluggy/_manager.py:112:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <xdist.dsession.DSession object at 0x7fb0bd9ab610>
@pytest.hookimpl
def pytest_runtestloop(self):
self.sched = self.config.hook.pytest_xdist_make_scheduler(
config=self.config, log=self.log
)
assert self.sched is not None
self.shouldstop = False
while not self.session_finished:
> self.loop_once()
venv/lib/python3.10/site-packages/xdist/dsession.py:117:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <xdist.dsession.DSession object at 0x7fb0bd9ab610>
def loop_once(self):
"""Process one callback from one of the workers."""
while 1:
if not self._active_nodes:
# If everything has died stop looping
self.triggershutdown()
raise RuntimeError("Unexpectedly no active workers available")
try:
> eventcall = self.queue.get(timeout=2.0)
venv/lib/python3.10/site-packages/xdist/dsession.py:131:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <queue.Queue object at 0x7fb0bd9ab640>, block = True, timeout = 2.0
def get(self, block=True, timeout=None):
'''Remove and return an item from the queue.
If optional args 'block' is true and 'timeout' is None (the default),
block if necessary until an item is available. If 'timeout' is
a non-negative number, it blocks at most 'timeout' seconds and raises
the Empty exception if no item was available within that time.
Otherwise ('block' is false), return an item if one is immediately
available, else raise the Empty exception ('timeout' is ignored
in that case).
'''
with self.not_empty:
if not block:
if not self._qsize():
raise Empty
elif timeout is None:
while not self._qsize():
self.not_empty.wait()
elif timeout < 0:
raise ValueError("'timeout' must be a non-negative number")
else:
endtime = time() + timeout
while not self._qsize():
remaining = endtime - time()
if remaining <= 0.0:
raise Empty
> self.not_empty.wait(remaining)
/usr/lib/python3.10/queue.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Condition(<unlocked _thread.lock object at 0x7fb0bdfda440>, 0)>, timeout = 1.9999994300305843
def wait(self, timeout=None):
"""Wait until notified or until a timeout occurs.
If the calling thread has not acquired the lock when this method is
called, a RuntimeError is raised.
This method releases the underlying lock, and then blocks until it is
awakened by a notify() or notify_all() call for the same condition
variable in another thread, or until the optional timeout occurs. Once
awakened or timed out, it re-acquires the lock and returns.
When the timeout argument is present and not None, it should be a
floating point number specifying a timeout for the operation in seconds
(or fractions thereof).
When the underlying lock is an RLock, it is not released using its
release() method, since this may not actually unlock the lock when it
was acquired multiple times recursively. Instead, an internal interface
of the RLock class is used, which really unlocks it even when it has
been recursively acquired several times. Another internal interface is
then used to restore the recursion level when the lock is reacquired.
"""
if not self._is_owned():
raise RuntimeError("cannot wait on un-acquired lock")
waiter = _allocate_lock()
waiter.acquire()
self._waiters.append(waiter)
saved_state = self._release_save()
gotit = False
try: # restore state no matter what (e.g., KeyboardInterrupt)
if timeout is None:
waiter.acquire()
gotit = True
else:
if timeout > 0:
> gotit = waiter.acquire(True, timeout)
/usr/lib/python3.10/threading.py:324:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'test_tools.__private.raise_exception_helper.RaiseExceptionHelper'>, signal_number = 2, current_stack_frame = <frame at 0x561d42d3d240, file '/usr/lib/python3.10/threading.py', line 332, code wait>
@classmethod
def __external_error_handler(cls, signal_number, current_stack_frame) -> None:
with cls.__lock:
if cls.__last_exception is None:
# Default SIGINT handler raises KeyboardInterrupt, so below code is not executed
> signal.default_int_handler(signal_number, current_stack_frame)
E KeyboardInterrupt
hive/tests/hive-local-tools/test-tools/package/test_tools/__private/raise_exception_helper.py:17: KeyboardInterrupt
================================================================================================================= no tests ran in 4.16s ==================================================================================================================
</code></pre>
</details>
</p>
In `last_run`, I found that the node closed with code `-2`:
<p>
<details>
<summary>last_run</summary>
<pre><code>
2023-08-21 09:22:09,344 [INFO] HafNode0: Running HafNode0, replaying and waiting for live... (node.py:475)
2023-08-21 09:22:09,344 [INFO] HafNode0: Preparing database postgresql:///haf_block_log_d0540a77cf48416eba1c4c4de04e3294 (_haf_node.py:70)
2023-08-21 09:22:09,677 [DEBUG] HafNode0: Notifications server is listening on 127.0.0.1:37503... (node.py:191)
2023-08-21 09:22:09,677 [DEBUG] HafNode0: /workspace/haf/build/hive/programs/hived/hived -d . --chain-id 42 --skeleton-key 5JNHfZYKGaomSFvd4NUdQ9qMcEAC43kujbfjueTHpVapX1Kzq2n --force-replay (node.py:90)
2023-08-21 09:22:09,677 [INFO] HafNode0: Using time_offset @2016-09-15 19:47:24 (fake_time.py:12)
2023-08-21 09:22:10,080 [DEBUG] HafNode0: Closed with -2 return code (node.py:138)
2023-08-21 09:22:10,177 [DEBUG] HafNode0: Notifications server closed (node.py:242)
2023-08-21 09:22:10,182 [DEBUG] HafNode0: Notifications server closed (node.py:242)
</code></pre>
</details>
</p>
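For context (this is standard `subprocess` behavior, nothing HAF-specific): on POSIX, a negative `Popen.returncode` means the child was terminated by that signal number, so `-2` means SIGINT. A minimal sketch:

```python
import signal
import subprocess

# On POSIX, Popen.returncode is the negated signal number when the
# child was killed by a signal, so SIGINT (signal 2) shows up as -2.
process = subprocess.Popen(["sleep", "60"])
process.send_signal(signal.SIGINT)
process.wait()
assert process.returncode == -signal.SIGINT  # i.e. -2
```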
In stderr I found the reason why this exception was thrown:
<pre><code>2844008ms data_processor.cpp:212 handle_exception ] Data processor Check consistency of irreversible data detected SQL statement execution failure. Failing statement: `SELECT hive.initialize_extension_data();'.
</code></pre>
<p>
<details>
<summary>Full stderr.</summary>
<pre><code>
2844003ms json_rpc_plugin.cpp:222 initialize ] initializing JSON RPC plugin
2844003ms webserver_plugin.cpp:584 plugin_initialize ] initializing webserver plugin
2844003ms webserver_plugin.cpp:587 plugin_initialize ] configured with 32 thread pool size
2844003ms webserver_plugin.cpp:590 plugin_initialize ] Compression in webserver is disabled
2844003ms webserver_plugin.cpp:602 plugin_initialize ] configured http to listen on 0.0.0.0:0
2844003ms webserver_plugin.cpp:619 plugin_initialize ] configured ws to listen on 0.0.0.0:0
2844003ms notifications.cpp:64 setup ] setting up notification handler for 1 address
2844003ms database.cpp:673 set_chain_id ] hive_chain_id: 4200000000000000000000000000000000000000000000000000000000000000
2844003ms chain_plugin.cpp:961 plugin_initialize ] Setting custom skeleton key: 5JNHfZYKGaomSFvd4NUdQ9qMcEAC43kujbfjueTHpVapX1Kzq2n
2844003ms witness_plugin.cpp:593 plugin_initialize ] Initializing witness plugin
2844004ms witness_plugin.cpp:613 plugin_initialize ] warning: stale production is enabled, make sure you know what you are doing.
2844004ms witness_plugin.cpp:620 plugin_initialize ] warning: required witness participation=0, normally this should be set to 33
2844004ms account_by_key_plugin.cpp:289 plugin_initialize ] Initializing account_by_key plugin
2844004ms market_history_plugin.cpp:175 plugin_initialize ] market_history: plugin_initialize() begin
2844004ms market_history_plugin.cpp:199 plugin_initialize ] market_history: plugin_initialize() end
2844004ms sql_serializer.cpp:730 plugin_initialize ] Initializing sql serializer plugin
2844004ms sql_serializer.cpp:52 is_database_correct ] Checking correctness of database...
2844004ms data_processor.cpp:132 trigger ] Trying to trigger data processor: Check correctness...
2844004ms data_processor.cpp:136 trigger ] Data processor: Check correctness triggerred...
2844004ms data_processor.cpp:142 trigger ] Waiting until data_processor Check correctness will consume a data...
2844004ms data_processor.cpp:56 operator() ] Entering data processor thread: Check correctness
2844004ms data_processor.cpp:63 operator() ] Check correctness data processor is connecting ...
2844004ms data_processor.cpp:65 operator() ] Check correctness data processor connected successfully ...
2844004ms data_processor.cpp:71 operator() ] Check correctness data processor is waiting for DATA-READY signal...
2844004ms data_processor.cpp:75 operator() ] Check correctness data processor resumed by DATA-READY signal...
2844004ms data_processor.cpp:86 operator() ] Check correctness data processor consumed data - notifying trigger process...
2844004ms data_processor.cpp:92 operator() ] Check correctness data processor starts a data processing...
2844004ms data_processor.cpp:147 trigger ] Leaving trigger of data data processor: Check correctness...
2844004ms transaction_controllers.cpp:124 do_reconnect ] Trying to connect to database: `postgresql:///haf_block_log_d0540a77cf48416eba1c4c4de04e3294'...
2844004ms data_processor.cpp:185 join ] Trying to resume data processor: Check correctness...
2844004ms data_processor.cpp:187 join ] Data processor: Check correctness resumed...
2844006ms transaction_controllers.cpp:126 do_reconnect ] Connected to database: `postgresql:///haf_block_log_d0540a77cf48416eba1c4c4de04e3294'.
2844007ms data_processor.cpp:103 operator() ] Check correctness data processor finished processing a data chunk...
2844007ms data_processor.cpp:111 operator() ] Leaving data processor thread: Check correctness
2844007ms data_processor.cpp:198 join ] Data processor: Check correctness finished execution...
2844007ms data_processor.cpp:132 trigger ] Trying to trigger data processor: Check consistency of irreversible data...
2844007ms data_processor.cpp:136 trigger ] Data processor: Check consistency of irreversible data triggerred...
2844007ms data_processor.cpp:56 operator() ] Entering data processor thread: Check consistency of irreversible data
2844007ms data_processor.cpp:142 trigger ] Waiting until data_processor Check consistency of irreversible data will consume a data...
2844007ms data_processor.cpp:63 operator() ] Check consistency of irreversible data data processor is connecting ...
2844007ms data_processor.cpp:65 operator() ] Check consistency of irreversible data data processor connected successfully ...
2844007ms data_processor.cpp:71 operator() ] Check consistency of irreversible data data processor is waiting for DATA-READY signal...
2844007ms data_processor.cpp:75 operator() ] Check consistency of irreversible data data processor resumed by DATA-READY signal...
2844007ms data_processor.cpp:86 operator() ] Check consistency of irreversible data data processor consumed data - notifying trigger process...
2844007ms data_processor.cpp:92 operator() ] Check consistency of irreversible data data processor starts a data processing...
2844007ms data_processor.cpp:147 trigger ] Leaving trigger of data data processor: Check consistency of irreversible data...
2844007ms transaction_controllers.cpp:124 do_reconnect ] Trying to connect to database: `postgresql:///haf_block_log_d0540a77cf48416eba1c4c4de04e3294'...
2844007ms data_processor.cpp:185 join ] Trying to resume data processor: Check consistency of irreversible data...
2844007ms data_processor.cpp:187 join ] Data processor: Check consistency of irreversible data resumed...
2844008ms transaction_controllers.cpp:126 do_reconnect ] Connected to database: `postgresql:///haf_block_log_d0540a77cf48416eba1c4c4de04e3294'.
2844008ms data_processor.cpp:212 handle_exception ] Data processor Check consistency of irreversible data detected SQL statement execution failure. Failing statement: `SELECT hive.initialize_extension_data();'.
2844008ms data_processor.cpp:16 kill_node ] An error occured and HAF is stopping synchronization...
</code></pre>
</details>
</p>
I tried to find the reason why the exception was not handled, and I found that the `KeyboardInterrupt` is thrown directly after the `hived` subprocess `Popen` call: https://gitlab.syncad.com/hive/test-tools/-/blob/master/package/test_tools/__private/node.py#L108.
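One possible way to close this window (just a sketch, not the actual test-tools code; the `hived` invocation and the pid-file handling below are placeholders) would be to block SIGINT around the `Popen` call and the pid-file write, so the default handler cannot fire in between:

```python
import signal
import subprocess
from contextlib import contextmanager

@contextmanager
def sigint_deferred():
    """Block SIGINT while the critical section runs (POSIX only).

    A SIGINT that arrives while blocked is not lost -- it is delivered
    as soon as the original mask is restored, so the KeyboardInterrupt
    is merely postponed until after the pid file exists.
    """
    old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGINT})
    try:
        yield
    finally:
        signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)

# Hypothetical usage around the spot linked above (names are illustrative):
with sigint_deferred():
    process = subprocess.Popen(["hived", "-d", "."])
    with open("hived.pid", "w") as f:  # placeholder pid-file path
        f.write(str(process.pid))
```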
https://gitlab.syncad.com/hive/hive/-/issues/556
v1.27.5 RC review (includes performance testing, etc)
2024-03-28T03:00:58Z
Gandalf
Notes:
- [x] `plugin = rc` in `config.ini` (which may still be explicitly specified there, carried over from previous versions) will no longer work with v1.27.5-rc0 (see the snippet below)
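For reference, the entry in question is just this line in `config.ini` (a sketch of a pre-1.27.5 config; surrounding entries omitted), and it has to be removed before upgrading to v1.27.5-rc0:

```ini
# carried over from older node configs; no longer accepted by v1.27.5-rc0
plugin = rc
```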
Gandalf
https://gitlab.syncad.com/hive/denser/-/issues/151
A user suggested adding ability to embed posts in other web sites
2023-08-17T19:35:29Z
Dan Notestein
No idea how much utility this would have or how much effort would be involved; just passing along the idea: https://hive.blog/hive-102930/@vikisecrets/hive-feature-request-make-it-possible-to-embed-hive-posts-on-other-websites