Hivemind dies if HAF is restarted
Attempting to start Hivemind process...
INFO - hive.conf:205 - The database instance is created...
INFO - hive.db.adapter:48 - A database offers maximum connections: 100. Required 15 connections.
INFO - hive.db.adapter:90 - Closing database connection: 'root'
INFO - hive.db.adapter:103 - Disposing SQL engine
INFO - hive.conf:254 - The database is disconnected...
INFO - hive.conf:205 - The database instance is created...
INFO - hive.indexer.sync:52 - Entering HAF mode synchronization
INFO - hive.db.db_state:44 - [MASSIVE] Welcome to hive!
INFO - hive.db.db_state:55 - [MASSIVE] Continue with massive sync...
INFO - hive.indexer.sync:67 - hivemind_version : 1.27.3.0.0+g3bfe4820-dirty.20221220T062630
INFO - hive.indexer.sync:68 - hivemind_git_rev : 3bfe4820fbc726c589d314eca700147f834cc63f
INFO - hive.indexer.sync:69 - hivemind_git_date : 2022-12-20 06:26:30
INFO - hive.indexer.sync:71 - database_schema_version : 34
INFO - hive.indexer.sync:72 - database_patch_date : 2023-01-22 17:13:48.695594
INFO - hive.indexer.sync:73 - database_patched_to_revision : 9d2cc15bea71a39139abdf49569e0eac6dd0b970
INFO - hive.indexer.sync:75 - last_block_from_view : 77274301
INFO - hive.indexer.sync:76 - last_imported_block : 77274301
INFO - hive.indexer.sync:77 - last_completed_block : 77274301
INFO - hive.indexer.hive_db.haf_functions:34 - Context already attached - attaching skipped.
INFO - hive.indexer.sync:89 - Using HAF database as block data provider, pointed by url: 'postgresql://haf_app_admin@HAF:5432/haf_block_log'
INFO - hive.indexer.sync:99 - Last imported block is: 77274301
INFO - hive.indexer.sync:186 - Querying for next block for app context...
WARNING - hive.db.adapter:270 - [SQL-ERR] IntegrityError in query SELECT * FROM hive.app_next_block('hivemind_app') ({})
INFO - hive.indexer.sync:71 - Exiting HAF mode synchronization
WARNING - hive.server.common.payout_stats:16 - Rebuilding payout_stats_view in separate transaction
INFO - hive.indexer.sync:76 - LAST IMPORTED BLOCK IS: 77274301
INFO - hive.indexer.sync:77 - LAST COMPLETED BLOCK IS: 77274301
INFO - hive.indexer.hive_db.haf_functions:34 - Context already attached - attaching skipped.
INFO - hive.db.adapter:90 - Closing database connection: 'PostDataCache'
INFO - hive.db.adapter:90 - Closing database connection: 'Reputations'
INFO - hive.db.adapter:90 - Closing database connection: 'Votes'
INFO - hive.db.adapter:90 - Closing database connection: 'Follow'
INFO - hive.db.adapter:90 - Closing database connection: 'Posts'
INFO - hive.db.adapter:90 - Closing database connection: 'Reblog'
INFO - hive.db.adapter:90 - Closing database connection: 'Notify'
INFO - hive.db.adapter:90 - Closing database connection: 'Accounts'
INFO - hive.db.adapter:90 - Closing database connection: 'PayoutStats'
INFO - hive.db.adapter:90 - Closing database connection: 'Mentions'
INFO - hive.db.adapter:90 - Closing database connection: 'root'
INFO - hive.db.adapter:103 - Disposing SQL engine
INFO - hive.conf:254 - The database is disconnected...
Traceback (most recent call last):
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
    self.dialect.do_execute(
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
    cursor.execute(statement, parameters)
psycopg2.errors.ForeignKeyViolation: insert or update on table "contexts" violates foreign key constraint "fk_hive_app_context"
DETAIL: Key is not present in table "events_queue".
CONTEXT: SQL statement "UPDATE hive.contexts
SET events_id = __next_fork_event_id - 1 -- -1 because we pretend that we stay just before the next fork
WHERE id = __context_id"
PL/pgSQL function hive.squash_fork_events(text) line 36 at SQL statement
SQL statement "SELECT hive.squash_fork_events( _context )"
PL/pgSQL function hive.squash_events(text) line 13 at PERFORM
SQL statement "SELECT hive.squash_events( _context_name )"
PL/pgSQL function hive.app_next_block_forking_app(text) line 17 at PERFORM
PL/pgSQL function hive.app_next_block(text) line 8 at RETURN
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/hivemind/.hivemind-venv/bin/hive", line 8, in <module>
    sys.exit(run())
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/hive/cli.py", line 73, in run
    launch_mode(mode, conf)
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/hive/cli.py", line 87, in launch_mode
    sync.run()
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/hive/indexer/sync.py", line 102, in run
    self._lbound, self._ubound = self._query_for_app_next_block()
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/hive/indexer/sync.py", line 187, in _query_for_app_next_block
    lbound, ubound = self._db.query_row(f"SELECT * FROM hive.app_next_block('{SCHEMA_NAME}')")
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/hive/db/adapter.py", line 170, in query_row
    res = self._query(sql, **kwargs)
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/hive/db/adapter.py", line 271, in _query
    raise e
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/hive/db/adapter.py", line 264, in _query
    result = self._basic_connection.execution_options(autocommit=False).execute(query, **kwargs)
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1306, in execute
    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 332, in _execute_on_connection
    return connection._execute_clauseelement(
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1498, in _execute_clauseelement
    ret = self._execute_context(
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1862, in _execute_context
    self._handle_dbapi_exception(
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2043, in _handle_dbapi_exception
    util.raise_(
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
    raise exception
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
    self.dialect.do_execute(
  File "/home/hivemind/.hivemind-venv/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.ForeignKeyViolation) insert or update on table "contexts" violates foreign key constraint "fk_hive_app_context"
DETAIL: Key is not present in table "events_queue".
CONTEXT: SQL statement "UPDATE hive.contexts
SET events_id = __next_fork_event_id - 1 -- -1 because we pretend that we stay just before the next fork
WHERE id = __context_id"
PL/pgSQL function hive.squash_fork_events(text) line 36 at SQL statement
SQL statement "SELECT hive.squash_fork_events( _context )"
PL/pgSQL function hive.squash_events(text) line 13 at PERFORM
SQL statement "SELECT hive.squash_events( _context_name )"
PL/pgSQL function hive.app_next_block_forking_app(text) line 17 at PERFORM
PL/pgSQL function hive.app_next_block(text) line 8 at RETURN
[SQL: SELECT * FROM hive.app_next_block('hivemind_app')]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
Exiting docker entrypoint...
As mentioned in the title, if HAF is restarted while Hivemind sync is running, Hivemind appears to be bricked altogether: even though HAF comes back up without any issues, Hivemind sync no longer starts, no matter how many times it is retried.
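For anyone trying to diagnose this: the ForeignKeyViolation above suggests that after the HAF restart, the hivemind_app row in hive.contexts still points at an events_queue entry that no longer exists, so hive.squash_fork_events() fails every time it tries to update events_id. A diagnostic sketch to confirm a dangling reference (this assumes, based only on the constraint name fk_hive_app_context and the DETAIL message in the traceback, that hive.contexts.events_id references hive.events_queue.id — I have not verified this against the HAF schema):

```sql
-- List any contexts whose events_id no longer matches a row in events_queue.
-- Column/table relationship is an assumption inferred from the error output above.
SELECT c.*
FROM hive.contexts AS c
LEFT JOIN hive.events_queue AS eq ON eq.id = c.events_id
WHERE c.events_id IS NOT NULL
  AND eq.id IS NULL;
```

If this returns the hivemind_app context, it would confirm that the restart left the context pointing at a pruned/missing event rather than Hivemind itself being at fault.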