# hive issues
https://gitlab.syncad.com/groups/hive/-/issues (updated 2020-05-09T23:11:50Z)

## Implement replacements for any data served up by condenser_api that is not available except via condenser_api (assuming an app needs it)
https://gitlab.syncad.com/hive/hive-js/-/issues/2 · Dan Notestein · 2020-05-09T23:11:50Z

List in this issue any data that your app currently gets from a condenser_api call that you can't figure out how to get without that call. List the condenser_api call (and the relevant data field that you can't get, if it's not obvious):
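To illustrate the kind of call pair this issue is collecting, here is a minimal sketch of a legacy condenser_api request next to its closest non-condenser replacement. The method names (`condenser_api.get_accounts`, `database_api.find_accounts`) come from the public Hive API definitions; the endpoint URL and the account name are placeholder assumptions for illustration only.

```python
import json

# Any public hived JSON-RPC endpoint would do; this URL is only an example.
HIVE_API = "https://api.hive.blog"

def jsonrpc(method, params, request_id=1):
    """Build a JSON-RPC 2.0 request body for a hived API node."""
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": request_id}

# Legacy call an app may still depend on (condenser_api uses positional params):
legacy = jsonrpc("condenser_api.get_accounts", [["hiveio"]])

# Closest replacement from database_api (uses named params):
replacement = jsonrpc("database_api.find_accounts", {"accounts": ["hiveio"]})

print(json.dumps(legacy))
print(json.dumps(replacement))
```

An app would POST each body to an API node and compare the returned account objects field by field; any field present only in the condenser_api response is a candidate for this issue's list.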
## Implement the RC methods
https://gitlab.syncad.com/hive/hive-js/-/issues/3 · ederaleng · 2020-05-05T10:45:48Z · assignee: Emmanuel King Turner

Add rc_api requests to the methods file according to the documentation: https://developers.hive.io/apidefinitions/#rc_api.find_rc_accounts

## --force-replay should signal plugins to clean up as appropriate
https://gitlab.syncad.com/hive/hive/-/issues/300 · Dan Notestein · 2022-07-12T09:22:51Z · assignee: Mariusz Trela

For example, a force-replay on an upgraded hived node can crash if an account_history RocksDB database is present. To work around this, the user currently must delete the account_history RocksDB database manually.

## hive.app_context_detach('the_app') does not work when a broken HAF app has died
https://gitlab.syncad.com/hive/haf/-/issues/59 · Bartek Wrona · 2022-07-14T13:00:52Z · assignee: Mariusz Trela

The cause is non-empty shadow tables. Although the app_context_detach implementation is supposed to squash events and discard such reversible data, it does not work, and the following exception is raised:

```
ERROR: Cannot detach a table hivemind_app.hive_accounts. Shadow table hive.shadow_hivemind_app_hive_accounts is not empty
CONTEXT: PL/pgSQL function hive.detach_table(text,text) line 19 at RAISE
SQL statement "SELECT hive.detach_table( hrt.origin_table_schema, hrt.origin_table_name )
FROM hive.registered_tables hrt
WHERE hrt.context_id = __context_id"
PL/pgSQL function hive.context_detach(text) line 11 at PERFORM
SQL statement "SELECT hive.context_detach( _context )"
PL/pgSQL function hive.app_context_detach(text) line 3 at PERFORM
SQL state: P0001
```

## hived stalled instead of performing exit-before-sync
https://gitlab.syncad.com/hive/hive/-/issues/331 · Gandalf · 2023-01-19T10:56:27Z · assignee: Mariusz Trela

Replay of a recent `develop` build for the purpose of BDE/RC analysis (reference data) stalled at the end.
Running with: `--exit-before-sync --advanced-benchmark --dump-memory-details --set-benchmark-interval 28800`
```
58964 1873360ms database.cpp:500 close ] Database is closed
58964 1873360ms chain_plugin.cpp:960 plugin_shutdown ] database closed successfully
58964 1873371ms p2p_plugin.cpp:88 ~p2p_plugin_impl ] P2P plugin is closing...
58964 1873371ms shutdown_mgr.hpp:56 wait ] Processing of 'shutdown-state type: HIVE_P2P_BLOCK_HANDLER' in progress...
58964 1873371ms shutdown_mgr.hpp:72 wait ] A value from a different thread is read...
58964 1873371ms shutdown_mgr.hpp:77 wait ] Processing of 'shutdown-state type: HIVE_P2P_BLOCK_HANDLER' was finished...
58964 1873371ms shutdown_mgr.hpp:56 wait ] Processing of 'shutdown-state type: HIVE_P2P_TRANSACTION_HANDLER' in progress...
58964 1873371ms shutdown_mgr.hpp:72 wait ] A value from a different thread is read...
58964 1873371ms shutdown_mgr.hpp:77 wait ] Processing of 'shutdown-state type: HIVE_P2P_TRANSACTION_HANDLER' was finished...
58964 1873371ms p2p_plugin.cpp:90 ~p2p_plugin_impl ] P2P plugin was closed...
```
No significant CPU/storage activity for a long period of time; eventually resorted to using ^C.
I'm going to check whether it's repeatable (currently running with the new RC cost branch).

## Replaying in 2 steps using a `force-replay` switch loses data from `account_operations` table
https://gitlab.syncad.com/hive/haf/-/issues/99 · Mariusz Trela · 2023-02-13T10:21:12Z · assignee: Mariusz Trela

In both cases, replaying up to 5 million blocks is done in 2 steps. In the second case, the number of records in the `account_operations` table is smaller.
```
Case 1:
./hived -d DIR --replay-blockchain --stop-replay-at-block 4100000 --exit-before-sync
./hived -d DIR --replay-blockchain --stop-replay-at-block 5000000 --exit-before-sync
```
results (type of data, number of records):
- blocks: `5000000`
- accounts: `92462`
- transactions: `6961192`
- transactions_multisig: `450`
- operations: `19792321`
- account_operations: `29489856`
```
Case 2:
./hived -d DIR --force-replay --stop-replay-at-block 4100000 --exit-before-sync
./hived -d DIR --force-replay --stop-replay-at-block 5000000 --exit-before-sync
```
results (type of data, number of records):
- blocks: `5000000`
- accounts: `92462`
- transactions: `6961192`
- transactions_multisig: `450`
- operations: `19792321`
- account_operations: `20208620`

## Indicate and handle external links
https://gitlab.syncad.com/hive/denser/-/issues/83 · Gandalf · 2023-07-25T15:11:01Z · assignee: Damian Janus

Make sure that external links are indicated properly, the same way as in condenser.
Reference: https://hive.blog/hivefest/@hivefest/save-the-date-hivefest-2023-22-26-september-rosarito-mexico-ola-surfista (see the `hivefe.st` links)
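A closing note on the haf#99 numbers above: the two replay strategies disagree on exactly one table. A small sketch that diffs the two result sets (counts copied verbatim from the issue; the dict keys mirror the reported table names):

```python
# Row counts reported in haf#99 after a two-step replay to block 5,000,000.
case1 = {"blocks": 5_000_000, "accounts": 92_462, "transactions": 6_961_192,
         "transactions_multisig": 450, "operations": 19_792_321,
         "account_operations": 29_489_856}  # --replay-blockchain
case2 = dict(case1, account_operations=20_208_620)  # --force-replay

# Tables whose counts differ between the two strategies:
diff = {table: (case1[table], case2[table])
        for table in case1 if case1[table] != case2[table]}
print(diff)  # only account_operations differs: 9,281,236 rows fewer
```

Everything except `account_operations` matches, which points at the force-replay path dropping (or failing to re-create) rows in that one table rather than at a general replay divergence.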