# hive merge requests
https://gitlab.syncad.com/hive/hive/-/merge_requests · 2024-03-29T13:18:41Z

## Draft: Adjustable Block Log Splitting and Pruning
https://gitlab.syncad.com/hive/hive/-/merge_requests/1226 · 2024-03-29T13:18:41Z · Łukasz Bujak

## Bump test-tools, wax, helpy
https://gitlab.syncad.com/hive/hive/-/merge_requests/1262 · 2024-03-29T11:46:59Z · Wieslaw Kedzierski

## Draft: Password in beekeeper `remove_key` endpoint is not needed anymore
https://gitlab.syncad.com/hive/hive/-/merge_requests/1261 · 2024-03-29T10:47:07Z · Mariusz Trela

## Draft: `metadata_collector` requires information about HF21
https://gitlab.syncad.com/hive/hive/-/merge_requests/1254 · 2024-03-29T08:05:21Z · Mariusz Trela

## Draft: Beekeepy
https://gitlab.syncad.com/hive/hive/-/merge_requests/1243 · 2024-03-28T04:12:49Z · Krzysztof Mochocki (kmochocki@syncad.com)

## Draft: Implementation of `colony` plugin for producing large amounts of transactions
https://gitlab.syncad.com/hive/hive/-/merge_requests/1224 · 2024-03-27T18:35:13Z · Andrzej Lisak

The goal of the plugin is to generate random transactions (from among 5 types) for every block according to the given settings. It should work faster than an external script, since it is not limited by API and network throughput. It is also capable of configuring itself on top of the initial state and then dynamically adjusting the production rate in case it generates too many transactions for the set block size.
The following options configure the plugin:
- `colony-sign-with` - a set of WIF private keys used to sign every produced transaction. The key set must match the active authority of at least one account present in the initial state. At startup the plugin selects suitable accounts to be used as colony workers.
- `colony-threads` - number of threads used during production. The default is `COLONY_DEFAULT_THREADS` (4). Each thread has its own exclusive set of worker accounts as well as its own comment buffer (this allows the threads to work with minimal locking).
- `colony-transactions-per-block` - target number of transactions produced per block. When not set, the plugin uses the sum of weights from the individual type-of-transaction settings. If those are also omitted, the default is 1000. As mentioned above, if the rate exceeds the node's ability to absorb transactions into blocks, it is dynamically limited. A value of `0` disables rate limiting; use it only with a queen-mode witness or your node will be overrun by pending transactions.
- `colony-start-at-block` - shifts the start of work to a specific block. Work starts when the block with the given number becomes the head block (or at plugin startup in case that block has already passed), which means the first transactions will target the next block. The option is most useful when testing with multiple nodes: pushing the start of work away from node startup lets the nodes establish p2p communication without extra transaction traffic. Also, if you are using multiple nodes with `colony` enabled, it is recommended to start work at different times on different nodes, to lessen the chance that they randomly create duplicate or otherwise conflicting transactions.
- `colony-no-broadcast` - disables broadcasting of produced transactions. It is only suitable when you have an active witness on the same node, primarily for queen mode or unit tests.
- `colony-article` - JSON with parameters for article generation (a root comment with random beneficiaries); for details see below
- `colony-reply` - for replies (comments on articles or other replies)
- `colony-vote` - for votes on comments
- `colony-transfer` - for minimal HBD transfers between accounts, with a memo
- `colony-custom` - for custom_json operations with a random id and one string value

If no type-of-transaction setting is used, minimal custom JSONs will be produced. A sample `config.ini` sketch follows.
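For orientation, a sketch of how these options might sit in `config.ini`, assuming the plugin is enabled like other hived plugins; all values here are placeholders, not taken from this MR (the key shown is the well-known testnet `initminer` key):

```
plugin = colony
colony-sign-with = 5JNHfZYKGaomSFvd4NUdQ9qMcEAC43kujbfjueTHpVapX1Kzq2n
colony-threads = 4
colony-transactions-per-block = 1000
colony-start-at-block = 100
```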
Each type-of-transaction setting is a JSON object with the following fields:
- `min` - minimal extra size added to the operation (e.g. text in the body of a comment); cannot be negative
- `max` - maximal extra size added to the operation; has to be at least as much as `min`; each type of operation has a different maximum (60000 for articles, 20000 for replies, 0 for votes, 2047 for transfers and 8184 for custom_jsons)
- `weight` - relative share of that type of operation in the transaction mix; cannot be negative
- `exponent` - parameter affecting randomness; cannot be negative. Effective extra sizes are calculated as (max - min) * rand(0.0..1.0)<sup>exponent</sup> + min, so exponents above 1.0 favor small effective extra sizes, while values below 1.0 tend to produce bigger ones (see the sketch below).
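For intuition, a minimal Python sketch of that size formula (an illustration only; the plugin itself is C++ and the function name here is hypothetical):

```python
import random

def effective_extra_size(min_size: int, max_size: int, exponent: float) -> int:
    """Extra size in [min_size, max_size]: (max - min) * rand(0..1)^exponent + min."""
    r = random.random() ** exponent
    return int((max_size - min_size) * r) + min_size

# With the article parameters below (min=100, max=5000, exponent=4), the mean
# of rand(0..1)^4 is 1/5, so sizes average around 1080 - consistent with the
# avg.extra of 1109 reported in the production stats further down.
sizes = [effective_extra_size(100, 5000, 4.0) for _ in range(100_000)]
print(sum(sizes) / len(sizes))  # ~1080
```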
Example parameters:
```
colony-article = {"min":100,"max":5000,"weight":16,"exponent":4}
colony-reply = {"min":30,"max":1000,"weight":110,"exponent":5}
colony-vote = {"weight":2070}
colony-transfer = {"min":0,"max":350,"weight":87,"exponent":4}
colony-custom = {"min":20,"max":400,"weight":6006,"exponent":1}
```
These weights are frequency values taken from mainnet RC stats: daily operation counts multiplied by 220 and divided by 28800 (one day's worth of blocks); the quick check after the listing reproduces them:
```
comments: "count": 16537 (1.52% in relation to other selected operations), "history_bytes": 881
votes: "count": 271050 (24.98%), "history_bytes": 142
transfers: "count": 11364 (1.05%), "history_bytes": 210
custom_jsons: "count": 786241 (72.45%), "history_bytes": 327
```
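As a quick arithmetic check (not part of the MR), rescaling those daily counts reproduces the example weights above; note that the comment weight splits into articles (16) plus replies (110):

```python
# Daily mainnet operation counts from RC stats, rescaled to per-block weights
daily_counts = {"comments": 16537, "votes": 271050,
                "transfers": 11364, "custom_jsons": 786241}

for op, count in daily_counts.items():
    weight = count * 220 / 28800  # 28800 = one day's worth of blocks
    print(f"{op}: {weight:.1f}")

# comments: 126.3 (split as 16 articles + 110 replies), votes: 2070.5,
# transfers: 86.8, custom_jsons: 6006.0
```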
Run for a short while on top of state from a block_log with 100000 accounts and 2MB blocks, colony produced the following results:
```
Production stats for thread colony_worker_0
Number of transactions: 169922
Articles: 4664 (%2.74 - 0.2% with substitutions taken into account), avg.extra: 1109, avg.size: 1351
Replies: 2015 (%1.18 - 1.30% with substitutions), avg.extra: 197, avg.size: 360
Votes: 38363 (%22.57 - 25.01% with substitutions), avg.extra: 0, avg.size: 131
Transfers: 1781 (%1.05), avg.extra: 67, avg.size: 190
Custom jsons: 123099 (%72.44), avg.extra: 209, avg.size: 325
196 replies and 4135 votes substituted with articles due to lack of proper target comment
3 transactions failed with exception (including 0 due to lack of RC)
...(continued with 3 other threads)
```
The resulting values are very close to those from mainnet.

## Draft: Witness related operation tests
https://gitlab.syncad.com/hive/hive/-/merge_requests/1260 · 2024-03-27T12:50:39Z · Mateusz Kudela

Requires:
- https://gitlab.syncad.com/hive/schemas/-/merge_requests/87
- https://gitlab.syncad.com/hive/helpy/-/merge_requests/40
- https://gitlab.syncad.com/hive/test-tools/-/merge_requests/203
Resolves issues:
- https://gitlab.syncad.com/hive/hive/-/issues/633
- https://gitlab.syncad.com/hive/hive/-/issues/634
- https://gitlab.syncad.com/hive/hive/-/issues/635
- https://gitlab.syncad.com/hive/hive/-/issues/645

## CI rewrite for parallel replay
https://gitlab.syncad.com/hive/hive/-/merge_requests/1191 · 2024-03-26T10:56:36Z · Konrad Botor
## Merge request prerequisites
Requires common-ci-configuration!37 to be merged first.
## Runner tag prerequisites
- every runner capable of replay must be tagged with the tag defined by `DATA_REPLAY_TAG` (currently *data-cache-storage*)
- every runner capable of replay and running on the same server must be tagged with a tag unique to that server (e.g. *hive-builder-5* for the current runners)
- every runner capable of replay must have a maximum of 10 tags (an arbitrary limit explained later on)
This way every cache pool is represented by a unique combination of tags.
**Note**: Currently the two runners capable of replay do not have the server-specific tag, but since they run on the same server the solution works anyway. The tag needs to be added, however, before configuring any other runners on other servers to run replay jobs.
## How it works
1. Job *determine-runner-tag*, tagged with `$DATA_REPLAY_TAG`, starts on one of the replay-capable runners. The specific runner is determined by GitLab's algorithm.
2. Job *determine-runner-tag* reads all the tags of the runner it's running on from the `$CI_RUNNER_TAGS` variable and saves those tags in a dotenv file as separate variables prefixed with `RUNNER_TAG_` (sketched after this list).
3. Trigger job *main-pipeline-trigger* reads the dotenv file and passes the first 10 `RUNNER_TAG_` variables to the new pipeline it triggers, after replacing the old prefix with `DYNAMIC_RUNNER_TAG_`.
4. All the jobs in the main pipeline tagged with tags from `$DYNAMIC_RUNNER_TAG_0` to `$DYNAMIC_RUNNER_TAG_9` pick up the variables passed by the trigger job and run on a runner with those tags. Since the tags uniquely identify a specific server/cache pool, all the jobs are guaranteed to run on the same server and thus have access to the same cache.
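For illustration, a minimal sketch of step 2 (the real job presumably does this in shell; `CI_RUNNER_TAGS` is GitLab's predefined comma-separated list of the current runner's tags, and the dotenv file name is an assumption):

```python
import os

# CI_RUNNER_TAGS holds a comma-separated list of the current runner's tags
tags = [t.strip() for t in os.environ.get("CI_RUNNER_TAGS", "").split(",") if t.strip()]

# Save at most 10 tags as RUNNER_TAG_<n> variables in a dotenv artifact,
# which the trigger job later forwards as DYNAMIC_RUNNER_TAG_<n>
with open("runner_tags.env", "w") as dotenv:
    for i, tag in enumerate(tags[:10]):
        dotenv.write(f"RUNNER_TAG_{i}={tag}\n")
```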
**Note**: Unfortunately, there seems to be no way of passing an arbitrary number of variables from the dotenv file to the child pipeline. As such I decided to pass a maximum of 10 tags. This can be easily changed, but I do not foresee a need to have more than 10 tags per replay-capable runner any time soon.
**Note 2**: The test results from the child pipeline are imported into the parent pipeline in a way loosely based on this MR: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/97588. Apparently, even GitLab developers themselves need this feature, and yet it doesn't exist. Any new JUnit-report-generating jobs need to be added to the list of jobs to import test results from, defined in the *dynamic-pipeline-test-results-collector* job.
**Warning**: All the *CI rewrite for parallel replay* merge requests in all the projects need to be merged before changing runner tags or adding new runners to avoid issues - the original configuration allows the replay jobs to run on any runner tagged with *data-cache-storage*.

## Updated publishing job
https://gitlab.syncad.com/hive/hive/-/merge_requests/1258 · 2024-03-26T08:55:11Z · Konrad Botor

## Issue #665 - Class `fc::ofstream` should use only one type of a file mode
https://gitlab.syncad.com/hive/hive/-/merge_requests/1251 · 2024-03-25T07:56:48Z · Mariusz Trela

Class `fc::ofstream` should use only one type of file mode; currently two types are used: `std::ios_base::openmode` and `fc::ofstream::mode`.

## Draft: add queen-mode to witness plugin
https://gitlab.syncad.com/hive/hive/-/merge_requests/1200 · 2024-03-14T13:25:51Z · Andrzej Lisak

The witness plugin in queen mode is a tool for preparing block logs with blocks (particularly the bigger ones) filled as much as possible (or up to a desired size). While it is possible to prepare such block logs on a sufficiently powerful server with a regular script that uses APIs, the idea behind queen mode is to allow doing that on a regular machine. Moreover, if the machine has sufficient power, it is possible to produce blocks significantly faster than it would normally take. This enables preparation of large block logs to be used for further testing with replay or live mode, through use of `pacemaker` (or mock-peer in the future).
Initially it was supposed to be a separate plugin; however, it turned out that almost everything is the same as in the witness plugin, so it was made a (testnet-only) option of the existing plugin.
While it might work in different configurations, its intended use is the following:
- use one testnet node
- configure the API and/or plugin that will be used to receive/generate transactions
- add `plugin = witness` to `config.ini`
- add all witnesses you plan to use in your block log, as well as their signing keys, with the `witness` and `private-key` settings respectively
- add `queen-mode = 0` for max-size blocks (as selected through witness properties) or `queen-mode = <size>` for a desired block size (blocks will still be smaller in case witnesses voted for blocks smaller than the given `<size>`); see the consolidated sketch after this list
- use configured state (use one of the prebuilt block logs for replay, or a snapshot that already has witnesses voted in, the desired block size set, a stable vest/RC price, and accounts created); remember to use an `alternate-chain-spec` compatible with your chosen block log / snapshot
- start the node - the witness will wait for incoming transactions until they fill up the block; the first transaction that would fill up the block or go over the limit triggers block production
- use any means of producing transactions and passing them to the node; with big blocks, a slow computer, or simply a slow rate of incoming transactions, it is more likely that the node will have to wait for incoming transactions - this is normal; if you are using the `colony` plugin to generate transactions, it is best to use an unrestricted rate, otherwise it might produce too few transactions to fill the block and the process will stall (`queen` will wait for more transactions to produce a block, while `colony` will wait for a block to reset its transaction counters)
- you can force block production with `debug_node_api.debug_generate_blocks` (but don't use `edit_if_needed` - it might prevent the resulting block log from being viable for replay due to the required use of skip flags)
- when finishing production of the block log, either force production of the last block (preferred) or add one block's worth of filler transactions - otherwise some transactions you'd want in the block log will remain pending in the node and never reach a block
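Putting the list above together, a minimal `config.ini` sketch (the witness name and key are placeholders; the key is the well-known testnet `initminer` key):

```
plugin = witness
witness = "initminer"
private-key = 5JNHfZYKGaomSFvd4NUdQ9qMcEAC43kujbfjueTHpVapX1Kzq2n
# 0 = fill blocks up to the maximum size selected through witness properties
queen-mode = 0
```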
The "use configured state" step above can be replaced by sending transactions normally and forcing block production (similarly to what happens in `witness_tests/queen_mode_test` added in this MR), but it is just much more convenient to use a block log or snapshot prepared in advance.

## chain spec verification
https://gitlab.syncad.com/hive/hive/-/merge_requests/1204 · 2024-03-12T20:47:34Z · Andrzej Such · Post 1.27.5

## Draft: Attempt to disable most of p2p logging in tests to see if load lowers during CI
https://gitlab.syncad.com/hive/hive/-/merge_requests/1239 · 2024-03-07T19:03:12Z · Dan Notestein

## Draft: Lightweight block reader implementation
https://gitlab.syncad.com/hive/hive/-/merge_requests/1129 · 2024-02-29T21:49:13Z · Łukasz Bujak · Post 1.27.5

## Draft: Upgrade docker images to ubuntu 23.10
https://gitlab.syncad.com/hive/hive/-/merge_requests/1131 · 2024-02-29T21:48:58Z · Eric Frias · Post 1.27.5

Requires https://gitlab.syncad.com/hive/wax/-/merge_requests/45

## supplement market history API calls get_recent_trades/get_trade_history with maker/taker names
https://gitlab.syncad.com/hive/hive/-/merge_requests/1177 · 2024-02-29T21:48:32Z · Andrzej Lisak · Post 1.27.5

The data was already present in MH indexes, just not passed to the API call output.
Needs test-tools!185, helpy!21 and schemas!79
## Optimize collecting dynamic memory usage stats of indices
https://gitlab.syncad.com/hive/hive/-/merge_requests/1192 · 2024-02-29T21:48:20Z · Krzysztof Leśniak · Post 1.27.5

- At each modification of an item through an index, measure and cache its dynamic size, so that a walk over the structures at each benchmark interval is not necessary (sketched after this list)
- Removed call to `dump` from `measure` function. This was causing a...- At each modification of item though index, measure and cache it's dynamic size, so that it's not necessary to do a walk over the structures at each benchmark interval
## draft: Shared memory file util tool
https://gitlab.syncad.com/hive/hive/-/merge_requests/1164 · 2024-02-21T09:38:24Z · Andrzej Such

Closes #598
It is a draft because this MR should be updated after https://gitlab.syncad.com/hive/hive/-/merge_requests/1204.
## Draft: docker file correctly reports hived status now
https://gitlab.syncad.com/hive/hive/-/merge_requests/1215 · 2024-02-20T22:26:54Z · Łukasz Bujak

## Draft: Add option to disable logging of communication with node
https://gitlab.syncad.com/hive/hive/-/merge_requests/1189 · 2024-02-14T07:34:12Z · Krzysztof Mochocki (kmochocki@syncad.com)

~Add a `HELPY_DISABLE_REQUEST_LOGGING` environment variable which disables logging of requests~
Requires:
- test-tools: https://gitlab.syncad.com/hive/test-tools/-/merge_requests/194
- helpy: https://gitlab.syncad.com/hive/helpy/-/merge_requests/31

Bartek Wrona