hive issues
https://gitlab.syncad.com/groups/hive/-/issues

Request to add upvote and downvote to Hive DHF proposals
https://gitlab.syncad.com/hive/hive/-/issues/552 (azircon, 2023-09-06)

As a Hive stakeholder, I am requesting a downvote feature for Hive DHF proposals. Currently a stakeholder can only upvote a DHF proposal. There is an alternative, the return proposal, but this is not very convenient for community action. Since all Hive posts have upvote and downvote to express community wisdom and consensus, DHF proposals should have the same feature.
Regards,
azircon

Remove active_votes from post data across bridge and condenser_api
https://gitlab.syncad.com/hive/hivemind/-/issues/133 (Andrzej Lisak, 2021-05-17)

Almost every call in `bridge` and `condenser_api` that returns posts includes `active_votes`. For calls related to popular posts, especially `hot` and `trending`, vote data can make up as much as 90% of the whole result (and the same percentage of response time). That data is mostly unused, needlessly eating bandwidth and server CPU. Even `hiveblocks.com`, which shows all the vote details, does so only after the user clicks a link - the moment when the frontend should ask for votes in a separate call. As far as I can see, the only vote-related information displayed directly with posts is the vote count. That information can easily be provided instead of full vote details.
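The separate-call pattern described above could be sketched as building a `condenser_api.get_active_votes` request only when the user actually opens the vote details. A minimal sketch, constructing the JSON-RPC payload only (nothing is sent):

```python
import json

def active_votes_request(author: str, permlink: str, request_id: int = 1) -> str:
    """Build the JSON-RPC payload a frontend would send lazily,
    only when the user opens the vote-details popup."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "condenser_api.get_active_votes",
        "params": [author, permlink],
        "id": request_id,
    })

payload = active_votes_request("alice", "my-post")
```

With posts themselves carrying only a vote count, this call would be issued on demand instead of inflating every `hot`/`trending` response.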
Note that for hivemind prior to HF24 it made some sense to include vote details in posts. Votes were stored as part of the post in a large string that had to be parsed to compute data such as the number of votes, number of upvotes, cumulative rshares, etc. Since hivemind had to decompose the vote string to supplement the post data anyway, it was natural to include vote details in the same result. This is no longer true for current hivemind. Now everything related to pulling and processing `active_votes` is waste.

Add IPv6 Support For P2P
https://gitlab.syncad.com/hive/hive/-/issues/381 (Rishi Panthee, 2024-02-05)

Currently hived doesn't support IPv6 P2P nodes. This prevents hived from running in a purely IPv6 environment.
Adding a v6-only host in the config leads to an unknown host exception:
```
The host name can not be resolved: seed6.rishipanthee.com
{"hostname":"seed6.rishipanthee.com"}
p2p_plugin.cpp:66 resolve_string_to_ip_endpoints
{"endpoint_string":"seed6.rishipanthee.com:2001"}
p2p_plugin.cpp:75 resolve_string_to_ip_endpoints while adding seed node seed6.rishipanthee.com:2001
1617975ms p2p_plugin.cpp:434 plugin_initialize ] caught exception 0 exception: unspecified
process exited with: Host not found (authoritative)
{"message":"Host not found (authoritative)"}
asio.cpp:88 resolve_handler
```
And using a v6 address + port in config leads to a similar issue:
```
{"endpoint_string":"2001:db8::1:2001"}
p2p_plugin.cpp:75 resolve_string_to_ip_endpoints while adding seed node 2001:db8::1:2001
2016021ms p2p_plugin.cpp:434 plugin_initialize ] caught exception 0 exception: unspecified
process exited with: Host not found (authoritative)
{"message":"Host not found (authoritative)"}
asio.cpp:88 resolve_handler
```

Implement the other community types included in the original design doc
https://gitlab.syncad.com/hive/hivemind/-/issues/125 (Sergio, 2021-03-13)

According to [this doc](https://gitlab.syncad.com/hive/hivemind/-/blob/master/docs/communities.md) there should have been 3 types of communities.
We have the first one, but the second and third options are still missing:
1. ~~**Topic**: anyone can post or comment~~
2. **Journal**: guests can comment but not post. only members can post.
3. **Council**: only members can post or comment
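The posting and commenting rules of the three types could be expressed as a simple permission check. A sketch with hypothetical helper names; the rules themselves come from the list above:

```python
def can_post(community_type: str, is_member: bool) -> bool:
    """Posting rules for the three community types."""
    if community_type == "topic":
        return True               # anyone can post
    if community_type in ("journal", "council"):
        return is_member          # only members can post
    raise ValueError(f"unknown community type: {community_type}")

def can_comment(community_type: str, is_member: bool) -> bool:
    """Commenting rules: only 'council' restricts comments to members."""
    if community_type in ("topic", "journal"):
        return True               # guests can comment
    if community_type == "council":
        return is_member
    raise ValueError(f"unknown community type: {community_type}")
```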
This has been discussed in this [Hive Dev meeting](https://gitlab.syncad.com/hive/tasks_without_projects_yet/-/issues/29) and I'm opening the ticket to start a discussion.

Assignee: Howo

Feature: Add app/meta tag to operations
https://gitlab.syncad.com/hive/hive/-/issues/13 (therealwolf, 2020-06-27)

We've already got the meta tag for posts & comments, which is used to determine which dapp broadcast them. That data is used to display statistics at hivedapps.com & stateofthedapps.com, as well as to estimate how many on-chain users Hive has. This is obviously quite important for determining how active the chain is.
It would be good if we had the same tag for other operations. For example: vote, transfer, transfer_to_vesting, etc.
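For illustration, a hypothetical tagged vote operation might look like the following. This is a sketch of the proposed idea, not an existing chain feature; the `app` field mirrors the json_metadata convention already used for posts and comments:

```python
import json

def tagged_vote(voter: str, author: str, permlink: str,
                weight: int, app: str) -> str:
    """Serialize a vote operation carrying a hypothetical 'app' tag,
    analogous to the metadata already attached to posts & comments."""
    return json.dumps([
        "vote",
        {
            "voter": voter,
            "author": author,
            "permlink": permlink,
            "weight": weight,
            "app": app,  # e.g. "hiveblog/0.1" -- the proposed new field
        },
    ])
```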
The tag could either be `meta` (JSON as string) or simply `app` (e.g. `hiveblog/0.1`).

proposition for new HAF application main loop
https://gitlab.syncad.com/hive/haf/-/issues/218 (Marcin, 2024-03-27)

Now we have a few real HAF applications (Hivemind, HAF Block Explorer, Balance Tracker), and we've received feedback from them regarding the complexity of the HAF apps API and the application's main loop. Here's my personal, subjective list of issues:
1. Hardly anyone reads the HAF documentation, and even when they do, they quickly forget about crucial details (such as the fact that the first block in the returned block range is the current block).
2. The attach/detach context mechanism is complicated: when to do it, how to start/restart the application, how to save data while the application is detached, auto-detach, and what the current block is when a context is detached.
3. Finding the right place in the app's code for the main loop to issue a commit is challenging.
4. The function app_next_block alters the internal state of contexts, and applications often overlook this fact in their loops, leading to incorrect synchronization.
5. Breaking synchronization of an app is challenging.
Due to the issues above, the main loops of applications incorporate custom, complex logic, resulting in overly lengthy code.
The application's main loop must be straightforward and must not require developers to understand HAF internals deeply.
My postulates are:
1. hide attach/detach context from app developers
2. change hive.app_next_block into a procedure so it can issue a COMMIT
3. app_next_block becomes the one and only method that delivers a range of blocks to the application; no more separate iteration over blocks by detached apps
4. only app_next_block manipulates the current block of contexts
Based on experience with the already implemented applications, it looks like each application divides the synchronization process into stages (e.g. hivemind processes blocks massively with disabled indexes, then massively with enabled indexes, and then in live mode, where reversible blocks are processed block by block). IMO all stateful applications (those that have registered tables) use stages, so I suggest that applications express these stages explicitly and associate them with contexts. Each application will deliver a description of its stages; it may be an ARRAY of stages, where a stage contains a name, the minimum distance to the head block at which the stage is enabled, and the maximum number of blocks that can be processed in one turn. Here is a pseudocode example:
[ (NO_INDEXES, 1000000, 1000), (WITH_INDEXES, 100, 1000), (LIVE, 1, 1 ) ]
hive.app_next_block will deliver the stage name and the range of blocks to process in that stage. Detaching and attaching the context will be executed by hive.app_next_block, the same as the current-block modification. The next call to hive.app_next_block will issue a COMMIT to save the previous iteration to the database. If an app wants to break the loop, it must exit immediately after calling hive.app_next_block.
Here is an example of an application with a main loop:
```
create_context( 'hivemind', ARRAY[ ('NO_INDEXES', 1000000, 1000), ('WITH_INDEXES', 100, 1000), ('LIVE', 1, 1) ] );
while True:
    range = hive.app_next_block( 'hivemind' )
    -- if sync must end then simply break the loop after hive.app_next_block, do not commit
    if break_request:
        break;
    if range IS NULL:
        continue;
    switch (range.stage)
    case 'NO_INDEXES':
        disable_indexes();
        process_blocks_massively( range.first, range.last )
    case 'WITH_INDEXES':
        enable_indexes();
        process_blocks_massively( range.first, range.last )
    case 'LIVE':
        enable_indexes();
        process_one_block( range.first )
    default:
        ASSERT( FALSE, 'Unknown stage' )
```
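The stage-selection rule implied by the stage tuples (pick the first stage whose minimum distance to the head block is satisfied) could be sketched like this. A hypothetical helper, not HAF code; the numbers are taken from the pseudocode example above:

```python
from typing import NamedTuple, Optional

class Stage(NamedTuple):
    name: str
    min_distance: int   # stage is enabled when head_block - current_block >= min_distance
    max_blocks: int     # cap on blocks delivered per turn

def pick_stage(stages: list, current_block: int, head_block: int) -> Optional[Stage]:
    """Pick the first declared stage whose minimum head-distance is satisfied."""
    distance = head_block - current_block
    for stage in stages:
        if distance >= stage.min_distance:
            return stage
    return None  # fully caught up; nothing to deliver this turn

stages = [Stage("NO_INDEXES", 1_000_000, 1_000),
          Stage("WITH_INDEXES", 100, 1_000),
          Stage("LIVE", 1, 1)]
```

Because the stages are ordered from the most massive to live mode, an app far behind the head lands in `NO_INDEXES` and naturally falls through to `LIVE` as it catches up.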
Here is a draft of a new hive.app_next_block algorithm:
![diagrams2](/uploads/087391e2a73c8832590e0626d020c04f/diagrams2.png)
Regarding group contexts, I think the lead context's stages should be used.

Assignee: Marcin

Historical HP equivalent of the VESTS
https://gitlab.syncad.com/hive/HAfAH/-/issues/49 (Mahdi Yari, 2024-01-22)

In certain operations like `withdraw_vesting` we have VESTS amounts whose HP equivalent we could include in the returned AH call. Front-ends use the current VESTS/HP ratio, which results in a wrong HP value being displayed. We have the data in HAF to fix this problem.

Modify block_logs in python tests, to contain preparations only for one type of tests
https://gitlab.syncad.com/hive/hive/-/issues/398 (Radosław Masłowski, 2022-10-19)

Now we have a situation where the block log is shared between multiple tests. @pbatko suggests the better approach would be to create separate, as small and simple as possible, block logs for each group of tests with similar needs.
I think this is a good idea. The only drawback of this solution, in my opinion, is the growth of the file system for tests that check many different functionalities.
What do you think about it? @kmochocki @mzebrak @kudmich @mkudela

Assignee: Bartek Wrona

Standardization of block retrieval
https://gitlab.syncad.com/hive/hive/-/issues/151 (arcange, 2022-10-20)

Currently we have 6 different functions to retrieve information from blocks:
1. block_api.get_block
2. block_api.get_block_range
3. condenser_api.get_block
4. condenser_api.get_ops_in_block
5. account_history_api.get_ops_in_block
6. account_history_api.enum_virtual_ops
With few exceptions, each has a different parameter format and provides different results (structure, format, ...).
[1,2,3] do not return vops
[4,5] return ops and/or vops but not block data
[6] returns vops but not ops nor block data
[1,2] return tx ids as a global array
[3] returns tx ids as a global array and inside transaction objects (the latter is easier to manage)
[6] returns tx ids in each operation object
[4] returns result as an array
[5] returns its result as an object containing an array
[6] is the only one to effectively return `operation_id`
Properties like block_num, timestamp are sometimes duplicated in each transaction/operation which increases the volume of data returned and transferred.
Some APIs return operation type with the `_operation` suffix, some do not.
... and so on
Couldn't we optimize all this by providing one API call and above all standardizing how data are returned?
`get_block_range` looks like the favorite candidate to me.
It is quite new, less used up to now, and therefore less prone to break things if we change it.
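For reference, `block_api.get_block_range` takes a starting block number and a count. A sketch of the payload as it stands today (construction only; nothing is sent):

```python
import json

def block_range_request(start: int, count: int, request_id: int = 1) -> str:
    """Build a JSON-RPC payload for block_api.get_block_range."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "block_api.get_block_range",
        "params": {"starting_block_num": start, "count": count},
        "id": request_id,
    })
```

Standardizing on one such entry point would make the parameter format above the single shape clients need to learn.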
My wish is that it could return vops and have `transactions_ids` inside each transaction object instead of grouped in an array at the end.

Incoming Delegations API
https://gitlab.syncad.com/hive/hivemind/-/issues/160 (Rishi Panthee, 2021-12-21)

In https://gitlab.syncad.com/hive/hive/-/issues/26 it was decided to move this to Hivemind. Currently the only way to get incoming delegations is to piece everything together from the chain (which can take a lot of time on larger accounts) or to use third-party software like HiveSQL.

Allow a different format for votes in 'bridge' API response
https://gitlab.syncad.com/hive/hivemind/-/issues/152 (Sergio, 2021-05-17)

When there's a really small vote (0 rshares) it's not possible for frontends to know if it's a very small vote or a removed vote. There are also a few other problems that can probably be solved by changing the API response format a little bit.
My personal preference on this would be to allow a different format in the API response using an additional parameter.
Current format:
```
active_votes: [{rshares: 0, voter: "meowcurator"}]
```
New possible format:
```
active_votes: [{perc: 100, voter: "meowcurator"}]
```
I know the rshares value is really important to know how much a specific vote contributed to the post payout, but I think it's not commonly required and can be retrieved with the `get_active_votes` call whenever a popup is opened or it's otherwise needed. When you open a post, or right in the feeds, the important things to know are:
- Who voted on the post
- If it is an upvote or a downvote
- Also the network payload will be smaller (and this is the original reason to remove the perc from the response)
- And it will solve the above issue reported by @rishi556

Extract 'image' and 'users' metadata from the post/comment content and include them in the API response
https://gitlab.syncad.com/hive/hivemind/-/issues/137 (Sergio, 2021-05-17)

Most frontends right now include some additional metadata for posts and comments. Usually at least `image` and `users` are provided.
I think it would be cleaner to drop this metadata when publishing a post/comment and compute both fields (arrays) in Hivemind. The fields would always be provided in the API response, as they are right now.
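Server-side extraction could be sketched roughly as follows. This is a hypothetical illustration assuming markdown-style image links and `@`-mentions; real Hivemind would need a proper body parser:

```python
import re

# Assumed patterns: markdown image links and @-mentions with
# Hive-style account names (lowercase, 3-16 chars).
IMAGE_RE = re.compile(r'!\[[^\]]*\]\((https?://[^\s)]+)\)')
MENTION_RE = re.compile(r'@([a-z][a-z0-9.-]{2,15})')

def extract_metadata(body: str) -> dict:
    """Compute 'image' and 'users' from the post body itself,
    instead of trusting client-provided json_metadata."""
    return {
        "image": IMAGE_RE.findall(body),
        "users": sorted(set(MENTION_RE.findall(body))),
    }
```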
This way Hivemind would not rely on data provided in the operation (which is not guaranteed to be up to date with the post content) and could also provide those fields for posts published without those values (improving consistency).

Allow 'reblog' to a specific community
https://gitlab.syncad.com/hive/hivemind/-/issues/108 (Sergio, 2021-05-17)

Currently users can 'reblog' a post originally published in a community to their own blog. It would be nice to be able to 'reblog' into other communities too.
And while sharing, it should be possible to provide a little context from the user doing the reblog: basically a short text explaining why the reblog was done and why it is relevant for the specific community.

Auto Witness Disabling
https://gitlab.syncad.com/hive/hive/-/issues/50 (Rishi Panthee, 2021-12-06)

Lots of witnesses are still enabled even though they've shut down. Auto-disabling them after 30 days of not producing a block, by setting their signing key to the null key, would prevent them from missing more blocks. They can easily re-enable once they are back, to get on track again. Along with this, there's also the thought of disallowing votes for disabled witnesses.

when testing cli add wrapper for invoke method of CliRunner
https://gitlab.syncad.com/hive/clive/-/issues/156 (Marcin Sobczyk, 2024-02-28)

We use CliRunner from the typer library in a way similar to https://typer.tiangolo.com/tutorial/testing/
So, for example, in the test for savings we invoke:
```python
result = runner.invoke(
cli,
[
"process",
"savings",
"deposit",
f"--amount={amount_to_deposit.as_legacy()}",
f"--password={WORKING_ACCOUNT.name}",
f"--sign={WORKING_ACCOUNT_KEY_ALIAS}",
],
)
```
We could have a wrapper for this method, since we use it very often in multiple scenarios; for example, something like
`process_savings_deposit(cli_with_runner, amount_to_deposit, WORKING_ACCOUNT.name, WORKING_ACCOUNT_KEY_ALIAS)`
or
`process_savings_deposit(cli_with_runner, amount=amount_to_deposit, password=WORKING_ACCOUNT.name, sign=WORKING_ACCOUNT_KEY_ALIAS)`
This will apply not only to savings but to (almost) all CLI tests.
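Such a wrapper could look like this. A sketch only; the helper name and signature follow the example above and are not existing clive code:

```python
def process_savings_deposit(runner, cli, amount, password, sign):
    """Wrap runner.invoke() for the 'process savings deposit' command,
    so tests don't repeat the argument-list boilerplate."""
    return runner.invoke(
        cli,
        [
            "process",
            "savings",
            "deposit",
            f"--amount={amount}",
            f"--password={password}",
            f"--sign={sign}",
        ],
    )
```

Keeping the command path and flag names in one place means a CLI rename touches the wrapper instead of every test.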
For the record, here is the discussion about this topic in the tests for savings: https://gitlab.syncad.com/hive/clive/-/merge_requests/280#note_152961

Assignee: Marcin Sobczyk

Extend HiveChain to contain configurable API Node as options
https://gitlab.syncad.com/hive/wax/-/issues/15 (Efe, 2024-02-29)

For decentralising Hive-based applications, the right place would be to extend the HiveChain interface in Wax to contain configurable API node addresses. These should be saved to local storage for persistence.

It should take care of:
It should take care of:
1. Keeping a list of node addresses (with the possibility of initializing with default addresses).
2. A function to retrieve all existing addresses.
3. A function to add a new address.
4. A function to remove an address from the list by id/name/address.
5. Each address should have an alias/tag or multiple aliases/tags.
6. A function to set which addresses are selected for use.

Rewrite "find_proposal" and "find_witness" to use CommandDataRetrieval and return Proposal/Witness models from clive
https://gitlab.syncad.com/hive/clive/-/issues/145 (Marcin Sobczyk, 2024-01-29)

Currently we use the Proposal/Witness model from schemas. Using a model from clive would allow us to do data formatting in one place; we already use clive models in "retrieve_proposals_data" and "retrieve_witnesses_data", so all those commands could be unified and return the same model. Currently there is ![Clipboard_-_January_29__2024_2_09_PM](/uploads/3ec8ab9eb70346281e01fcb0369039f2/Clipboard_-_January_29__2024_2_09_PM.png) but this could be written similarly to class WitnessesDataRetrieval(CommandDataRetrieval[HarvestedDataRaw, SanitizedData, WitnessesData])

beekeeper | Improve test_default_values test
https://gitlab.syncad.com/hive/hive/-/issues/636 (Wieslaw Kedzierski, 2024-01-08)

We need to improve the test_default_values.py test so that it checks whether new CLI values were covered.

Assignee: Wieslaw Kedzierski

beekeeper | Problem (?) with locking wallet
https://gitlab.syncad.com/hive/hive/-/issues/614 (Wieslaw Kedzierski, 2023-11-20)

Based on https://gitlab.syncad.com/hive/clive/-/jobs/794458
We have a test that checks whether the wallet is locked after some period of time, by passing the `--unlock-timeout` flag.
Here is the test
```
async def check_wallet_lock(beekeeper: Beekeeper, required_status: bool) -> None:
    """Check if wallets have the required unlock status."""
    response_list_wallets = await beekeeper.api.list_wallets()
    for wallet in response_list_wallets.wallets:
        assert wallet.unlocked == required_status

@pytest.mark.parametrize("unlock_timeout", [2, 3, 4])
async def test_unlock_time(unlock_timeout: int) -> None:
    """Test will check the command line flag --unlock-timeout."""
    beekeeper = await Beekeeper().launch(unlock_timeout=unlock_timeout)
    await beekeeper.api.create(wallet_name="wallet_name")
    await check_wallet_lock(beekeeper, True)   # here the wallet SHOULD be unlocked
    await asyncio.sleep(int(unlock_timeout))
    await check_wallet_lock(beekeeper, False)  # here the wallet SHOULD be locked
```
Here we launch beekeeper with the --unlock-timeout flag, wait that long, and check whether the wallet was locked. Yet lately we have encountered an issue during CI showing that the wallet was still unlocked after this period of time.
Here is a log (where unlock-timeout = 2[s]):
```
2029744ms json_rpc_plugin.cpp:225 initialize ] initializing JSON RPC plugin
2029744ms webserver_plugin.cpp:587 plugin_initialize ] initializing webserver plugin
2029744ms webserver_plugin.cpp:590 plugin_initialize ] configured with 1 thread pool size
2029744ms webserver_plugin.cpp:593 plugin_initialize ] Compression in webserver is disabled
2029745ms webserver_plugin.cpp:605 plugin_initialize ] configured http to listen on 0.0.0.0:0
2029745ms beekeeper_app_init.cpp:163 initialize_program_o ] initializing options
2029745ms notifications.cpp:64 setup ] setting up notification handler for 1 address
2029747ms beekeeper_app_init.cpp:188 initialize_program_o ] Backtrace on segfault is enabled.
2029748ms application.cpp:193 startup ] Setting up a startup_io_handler...
2029748ms webserver_plugin.cpp:293 operator() ] start processing http thread
2029748ms application.cpp:505 exec ] Entering application main loop...
2029748ms webserver_plugin.cpp:308 operator() ] start listening for http requests on 0.0.0.0:42429
2029754ms json_rpc_plugin.cpp:443 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.create_session","params":{"notifications_endpoint":"127.0.0.1:37403","salt":"140318016414640"}}
2029764ms json_rpc_plugin.cpp:443 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.create","params":{"wallet_name":"wallet_name","token":"f97302aa47cf5b0ff625e09ea4a8339c29b3b05c55b843e12b712b93bb64b2fe"}}
2029765ms beekeeper_wallet.cpp:189 save_wallet_file ] saving wallet to file /builds/hive/clive/tests/functional/beekeeper/commandline/application_options/generated_during_test_unlock_timeout/test_unlock_time_with_parameters_2/beekeeper/wallet_name.wallet
2029770ms json_rpc_plugin.cpp:443 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.list_wallets","params":{"token":"f97302aa47cf5b0ff625e09ea4a8339c29b3b05c55b843e12b712b93bb64b2fe"}}
2031776ms json_rpc_plugin.cpp:443 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.list_wallets","params":{"token":"f97302aa47cf5b0ff625e09ea4a8339c29b3b05c55b843e12b712b93bb64b2fe"}}
```
The last two lines are crucial:
```
2029770ms json_rpc_plugin.cpp:443 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.list_wallets","params":{"token":""}}
2031776ms json_rpc_plugin.cpp:443 rpc ] message: {"id":0,"jsonrpc":"2.0","method":"beekeeper_api.list_wallets","params":{"token":""}}
```
If we subtract: 2031776ms - 2029770ms = 2006ms.
Taking the 2s unlock-timeout into account, that leaves only 6ms for the lock to happen...
`YES, I KNOW THAT IS A VERY SMALL VALUE`, and an end-user would not notice it, but the user can be a bot as well, and then it makes a difference (as we can see in that test).
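One mitigation on the test side (a sketch only, not a decision for this issue; the margin value is an assumption) is to sleep slightly past the timeout rather than exactly at it, so the check does not race the locking logic by a few milliseconds:

```python
import asyncio

LOCK_CHECK_MARGIN = 0.5  # seconds; assumed slack for beekeeper's lock sweep

async def wait_past_timeout(unlock_timeout: int) -> float:
    """Sleep slightly longer than the configured unlock timeout before
    asserting the wallet is locked, to avoid the ~6ms race seen above."""
    delay = unlock_timeout + LOCK_CHECK_MARGIN
    await asyncio.sleep(delay)
    return delay
```

Whether beekeeper itself should guarantee a hard deadline is the open question for this discussion.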
We discussed it with @mzebrak and decided it would be nice to make an issue so that we can discuss whether we should be concerned or not.
That's why this issue has a `discussion` label.
@Trela @bwrona

Assignee: Mariusz Trela

Prepare debugging tool to allow direct shared memory file analysis
https://gitlab.syncad.com/hive/hive/-/issues/598 (Bartek Wrona, 2023-12-19)

This tool would be useful for internal debugging of the hived problems we periodically have.

The planned design is to have options to:
- generate a (regular) snapshot from a given shared memory file. Initially we can dump the whole snapshot and in later steps try to implement some filtering, though maybe this is not worthwhile at this stage given the good performance of the snapshot dumper.
- in a next step (via a different tool option), dump a specified multiindex to JSON output. It could be very useful (although I am not sure how complex it would be) to dump all index contents in the default "by_id" order and additionally dump the separate associations defined by specific indexes, mapped to the previously generated object IDs.
It would also be very good to have a way to process/dump the undo state structures saved in the shared memory file, which are specific to a given index.

Assignee: Andrzej Such