# hive issues
https://gitlab.syncad.com/groups/hive/-/issues

## beekeeper | Improve test_default_values test
https://gitlab.syncad.com/hive/hive/-/issues/636 · 2024-01-08 · Wieslaw Kedzierski

We need to improve the test_default_values.py test so that it checks whether any new CLI values appeared that should be checked.

## Test - TUI Plan test
https://gitlab.syncad.com/hive/clive/-/issues/132 · 2024-01-26 · Aleksandra Grabowska

The test plan for TUI clive.
### Basic positive paths
I. Operations
1. Transfer https://gitlab.syncad.com/hive/clive/-/issues/103
2. Savings https://gitlab.syncad.com/hive/clive/-/issues/110
3. Governance
a. voting for a witness https://gitlab.syncad.com/hive/clive/-/issues/127
b. voting for a proposal
c. set a proxy
II. Configuration
1. Select a node - https://gitlab.syncad.com/hive/clive/-/issues/136
2. Manage key aliases - https://gitlab.syncad.com/hive/clive/-/issues/137
3. Onboarding - https://gitlab.syncad.com/hive/clive/-/issues/138
III. General
1. Saving a transaction to a file - https://gitlab.syncad.com/hive/clive/-/issues/139
2. Loading a transaction from a file - https://gitlab.syncad.com/hive/clive/-/issues/140
To do:
Cart
Cart and loading from file (nothing should change)
Activate/deactivate (clive asks about activation if an operation is created in inactive mode)

## Tests - operation in Hive - witness set properties
https://gitlab.syncad.com/hive/hive/-/issues/635 · 2024-03-27 · Aleksandra Grabowska

### Operation: witness_set_properties_operation, // 42
See: https://gitlab.syncad.com/hive/hive/-/blob/master/doc/witness_parameters.md
### 1. Test case - A witness wants to update properties.
- [ ] 1.1. A witness creates an operation witness_set_properties_operation and changes `account_creation_fee`.
- [ ] 1.2. A witness creates an operation witness_set_properties_operation and changes `account_subsidy_budget`.
- [ ] 1.3. A witness creates an operation witness_set_properties_operation and changes `account_subsidy_decay`.
- [ ] 1.4. A witness creates an operation witness_set_properties_operation and changes `maximum_block_size`.
- [ ] 1.5. A witness creates an operation witness_set_properties_operation and changes `hbd_interest_rate`.
- [ ] 1.6. A witness creates an operation witness_set_properties_operation and changes `hbd_exchange_rate`.
- [ ] 1.7. A witness creates an operation witness_set_properties_operation and changes `url`.
- [ ] 1.8. A witness creates an operation witness_set_properties_operation and changes `new_signing_key`.
- [ ] 1.9. A witness creates an operation witness_set_properties_operation and changes everything.
##### Expected results:
1. The property (or properties) is updated.
2. The RC is paid.

## Tests - operation in Hive - witness update operation
https://gitlab.syncad.com/hive/hive/-/issues/634 · 2024-03-27 · Aleksandra Grabowska

### Operation: witness_update_operation, // 11
### 1. Test case - A user wants to become a witness.
- [ ] 1.1 A user creates an operation witness_update_operation and fills in all required fields.
##### Expected results:
1. The witness is created.
2. The RC is paid.
3. The fee is paid.
### 2. Test case - A user doesn't want to be a witness.
##### Preconditions:
1. A user is a witness.
- [ ] 2.1 A user creates an operation witness_update_operation and leaves the field `block_signing_key` empty.
##### Expected results:
1. The user is no longer a witness.
2. The RC is paid.
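Test case 2.1 above can be sketched as an operation payload. On Hive, "leaving the signing key empty" conventionally means setting it to the null public key; the account name and props values below are hypothetical:

```python
# Sketch of a witness_update_operation that resigns a witness (test 2.1).
# The "empty" block_signing_key is the well-known null public key.
# Account name, URL and props values are hypothetical.
NULL_SIGNING_KEY = "STM1111111111111111111111111111111114T1Anm"

witness_update = [
    "witness_update",
    {
        "owner": "some-witness",                # the resigning witness
        "url": "https://example.com/witness",
        "block_signing_key": NULL_SIGNING_KEY,  # cleared key -> no longer a witness
        "props": {
            "account_creation_fee": "3.000 HIVE",
            "maximum_block_size": 65536,
            "hbd_interest_rate": 0,
        },
        "fee": "0.000 HIVE",
    },
]
```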
### 3. Test case - A user wants to update properties.
##### Preconditions:
1. A user is a witness.
- [ ] 3.1 A user creates an operation witness_update_operation and changes `maximum_block_size`.
- [ ] 3.2 A user creates an operation witness_update_operation and changes `hbd_interest_rate`.
- [ ] 3.3 A user creates an operation witness_update_operation and changes `block_signing_key`.
- [ ] 3.4 A user creates an operation witness_update_operation and changes `url`.
- [ ] 3.5 A user creates an operation witness_update_operation and changes everything.
##### Expected results:
1. The property (or properties) is updated.
2. The RC is paid.

## Tests - operation in Hive - feed price
https://gitlab.syncad.com/hive/hive/-/issues/633 · 2024-03-27 · Aleksandra Grabowska

### Operation: feed_publish_operation, // 7
### Test case (positive): A witness creates a feed price operation.
##### Preconditions:
1. There is a witness.
##### Test cases
- [ ] 1.1 Witness creates a feed price operation.
##### Expected results:
1. The operation is added to the blockchain.
2. The RC cost is paid.
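For reference, the operation under test has a small payload; a sketch (the publisher account and the price values are hypothetical):

```python
# Sketch of a feed_publish_operation body (protocol field names;
# publisher account and price values are hypothetical).
feed_publish = [
    "feed_publish",
    {
        "publisher": "some-witness",
        "exchange_rate": {
            "base": "0.512 HBD",   # price of 1 HIVE expressed in HBD
            "quote": "1.000 HIVE",
        },
    },
]

# The implied HIVE/HBD price is base divided by quote.
base_amount = float(feed_publish[1]["exchange_rate"]["base"].split()[0])
quote_amount = float(feed_publish[1]["exchange_rate"]["quote"].split()[0])
price = base_amount / quote_amount  # 0.512 HBD per HIVE
```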
### Test case (negative): User (who is not a witness) creates a feed price operation.
##### Preconditions:
1. There is a user A.
##### Test cases
- [ ] 2.1 User A (who is not a witness) tries to create a feed price operation.
##### Expected results:
1. An error occurs - the operation is not added to the blockchain.
2. The RC cost is not paid.

## Missing import in wax stub .pyi file, Deprecated functions placed there (?), A tool to verify the stub files during CI
https://gitlab.syncad.com/hive/wax/-/issues/11 · 2024-01-10 · Mateusz Żebrak

Related: https://gitlab.syncad.com/hive/clive/-/commit/4d9450b50fba371b00656a334ccf47d475e2c304
* [ ] Missing import in wax stub .pyi file
* [ ] Deprecated functions placed in the .pyi file
* [ ] A tool to verify the stub files during CI
### Missing import in wax stub .pyi file
---
While bumping the wax version from `0.0.3a2.dev16+af71c58` to `0.0.3a2.dev39+44db07f` in the `clive` repo, I encountered the following error message from one of our static code analysis tools (`mypy`):
```bash
clive/__private/core/iwax.py:146: error: Returning Any from function declared to return "python_ref_block_data" [no-any-return]
```
It was caused by these lines (this is `clive` code adding a thin layer around the raw wax interface, so we don't have to encode every time):
```python
def get_tapos_data(block_id: str) -> wax.python_ref_block_data:
return wax.get_tapos_data(block_id.encode()) # <---- this line generated an error
```
That's because `wax.get_tapos_data` has a return type hint of `Any`: ![image](/uploads/a5f1f01b23077d6a492b7b0d5da6db48/image.png)
Meanwhile, the wax stub file (wax.pyi) already contains: ![image](/uploads/e11cd19e5fcff4b74f7a2c2d9ed4fa23/image.png)
This type hint does not work because the import of `python_ref_block_data` is missing at the top: ![image](/uploads/fdb029377de6b064b5d71ad741d24414/image.png)
... So changing it to: ![image](/uploads/e8146384a14c6ef6d4dd6b6dfc48a7c1/image.png)
solves the issue. That would probably be a simple fix.
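The mechanics can be illustrated in plain Python: an annotation that names a type which was never imported cannot be resolved, which is why tools fall back to `Any`. A minimal sketch, using a string annotation as a stand-in for the stub's unresolved name:

```python
import typing

# A function whose return annotation names a type that is not present in
# its globals -- analogous to wax.pyi before the missing import was added.
src = "def get_tapos_data(block_id: bytes) -> 'python_ref_block_data': ..."
ns: dict = {}
exec(src, ns)

try:
    typing.get_type_hints(ns["get_tapos_data"])
    resolved = True
except NameError:  # the annotation cannot be evaluated without the import
    resolved = False

# After making the name available (the stand-in for adding the import),
# the annotation resolves to a real type.
class python_ref_block_data: ...
ns["python_ref_block_data"] = python_ref_block_data
hints = typing.get_type_hints(ns["get_tapos_data"])
```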
### Deprecated functions placed there (?)
---
While looking at wax.pyi I noticed a strange thing - some of the stubs defined there have the "stub for item found" icon, like: ![image](/uploads/61a62760dd040300809aa8b95bd71545/image.png)
but some do not: ![image](/uploads/e1d1c8815721412a7a7f9fd9e51a85cd/image.png)
Those that don't have it are:
- `calculate_legacy_transaction_id`
- `calculate_proto_legacy_transaction_id`
- `get_tapos_data`
and as far as I checked, it seems that:
- `calculate_proto_legacy_transaction_id`

is no longer available in wax, so it should be removed. But I don't know why:
- `calculate_legacy_transaction_id`
- `get_tapos_data`

also lack this icon (maybe something is wrong with them as well, and that's why the IDE notifies me about it?).
### A tool to verify the stub file during CI
---
As you can see, it's very easy to end up with a mismatch between the stub `.pyi` and the actual code. In other repositories we have lint tools running on CI, like `pre-commit` or `mypy`. I think it would be worth spending a moment to investigate what we could do to run a CI check that verifies the `.pyi` is consistent with the actual code. (Note: as wax.py is generated code, this may not be so obvious; you may need to run the check only after installation, I'm not sure.)
These may be helpful:
- https://mypy.readthedocs.io/en/stable/stubgen.html
- https://github.com/MarcoGorelli/cython-lint

## Tests - operation in Hive - custom_json_operation
https://gitlab.syncad.com/hive/hive/-/issues/632 · 2024-01-05 · Aleksandra Grabowska

### Operation: custom_json_operation, // 18
### Test cases (positive): User creates a correct custom json operation.
##### Preconditions:
1. There are users A, B and C.
2. Users have enough RC.
##### Test cases
- [ ] 1.1 User A creates a custom json operation with a required posting authority of user A and user A signs it with the posting authority.
- [ ] 1.2 User A creates a custom json operation with a required posting authority of user B and user B signs it with the posting authority.
- [ ] 1.3 User A creates a custom json operation with required posting authorities of users B and C, and users B and C sign it with their posting authorities.
- [ ] 1.4 User A creates a custom json operation with a required active authority of user A and user A signs it with the active authority.
- [ ] 1.5 User A creates a custom json operation with a required active authority of user B and user B signs it with the active authority.
- [ ] 1.6 User A creates a custom json operation with required active authorities of users B and C, and users B and C sign it with their active authorities.
- [ ] 1.7 User A creates a custom json operation with a required active authority of user A and a posting authority of user A, and user A signs it with the required authorities.
- [ ] 1.8 User A creates a custom json operation with a required active authority of user B and a posting authority of user C, and users B and C sign it with the required authorities.
##### Expected results:
1. The operation is added to the blockchain.
2. The RC cost is paid by the first account specified on the operation.
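The shape of the operation exercised above, as a sketch (account names and the `id` value are hypothetical; the field names follow the protocol):

```python
import json

# Sketch of a custom_json_operation body. Account names and the app "id"
# are hypothetical; field names follow the Hive protocol.
def make_custom_json(required_auths, required_posting_auths, app_id, payload):
    return [
        "custom_json",
        {
            "required_auths": required_auths,                  # active authorities
            "required_posting_auths": required_posting_auths,  # posting authorities
            "id": app_id,                                      # short app identifier
            "json": json.dumps(payload),                       # payload serialized as a string
        },
    ]

# Like test 1.2: user A builds the op requiring user B's posting authority;
# B must then sign the transaction with their posting key.
op = make_custom_json([], ["user-b"], "example-app", {"action": "ping"})
```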
### Test cases (negative): User creates an incorrect custom json operation.
##### Preconditions:
1. There are users A and B.
##### Test cases
- [ ] 2.1 User A creates an incorrect custom json operation with a required posting authority of user A and user A signs it with the posting authority.
- [ ] 2.2 User A creates an incorrect custom json operation with a required active authority of user A and user A signs it with the active authority.
##### Expected results:
1. An error occurs - the operation is not added to the blockchain.
2. The RC cost is not paid.

## Speed up hivemind replay
https://gitlab.syncad.com/hive/hivemind/-/issues/226 · 2024-01-03 · Dan Notestein
Preliminary performance analysis using htop suggests that hivemind replay is likely CPU-bound in the python code nowadays, so the python code needs to be profiled. Early guesses for culprits would be routines doing string processing.
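For the profiling step, the stdlib profiler is probably the quickest start; a sketch (the workload below is just a stand-in for hivemind's per-block string processing, not actual hivemind code):

```python
import cProfile
import io
import pstats

def process_blocks(blocks):
    # Stand-in workload: naive string processing, similar in spirit to
    # op-body handling (not actual hivemind code).
    return sum(len(b.replace('"', "").split(",")) for b in blocks)

profiler = cProfile.Profile()
profiler.enable()
process_blocks(['{"a":1,"b":2}'] * 10_000)
profiler.disable()

# Print the 5 most expensive call sites by cumulative time.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
```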
In the meantime, despite the python code being the likely bottleneck, I'm still attaching pghero data for the postgres times of an 81M block replay so we have an easily available record of it (data is from s16, our fastest system):
```
272 min 7% 7 ms 2,350,502hivemind
SELECT * FROM hivemind_app.enum_operations4hivemind($1, $2)
234 min 6% 6 ms 2,350,502hivemind
SELECT ho.id, ho.block_num, ho.op_type_id, ho.op_type_id >= $3 AS is_virtual, ho.body::VARCHAR
FROM hive.hivemind_app_operations_view ho
WHERE ho.block_num BETWEEN _first_block AND _last_block
AND (ho.op_type_id < $4
OR ho.op_type_id in ($5, $6, $7, $8, $9)
)
ORDER BY ho.block_num, ho.id
119 min 3% 0 ms 64,535,752hivemind
SELECT is_new_post, id, author_id, permlink_id, post_category, parent_id, community_id, is_valid, is_muted, depth
FROM hivemind_app.process_hive_post_operation(($1)::varchar, ($2)::varchar, ($3)::varchar, ($4)::varchar, ($5)::timestamp, ($6)::integer, ($7)::integer, (ARRAY[$8])::VARCHAR[])
94 min 3% 0 ms 97,530,592hivemind
INSERT INTO hivemind_app.hive_posts as hp
(parent_id, depth, community_id, category_id,
root_id, is_muted, is_valid,
author_id, permlink_id, created_at, updated_at, sc_hot, sc_trend, active, payout_at, cashout_time, counter_deleted, block_num, block_num_created)
SELECT
s.parent_id,
s.depth,
(s.composite).community_id,
s.category_id,
s.root_id,
(s.composite).is_muted,
s.is_valid,
s.author_id,
s.permlink_id,
s.created_at,
s.updated_at,
s.sc_hot,
s.sc_trend,
s.active,
s.payout_at,
s.cashout_time,
s.counter_deleted,
s.block_num,
s.block_num_created
FROM (
SELECT
hivemind_app.process_community_post(_block_num, _community_support_start_block, _parent_permlink, ha.id, $8, php.is_muted, php.community_id) as composite,
php.id AS parent_id, php.depth + $9 AS depth,
COALESCE(php.category_id, (select hcg.id from hivemind_app.hive_category_data hcg where hcg.category = _parent_permlink)) AS category_id,
(CASE(php.root_id)
WHEN $10 THEN php.id
ELSE php.root_id
END) AS root_id,
php.is_valid AS is_valid,
ha.id AS author_id, hpd.id AS permlink_id, _date AS created_at,
_date AS updated_at,
hivemind_app.calculate_time_part_of_hot(_date) AS sc_hot,
hivemind_app.calculate_time_part_of_trending(_date) AS sc_trend,
_date AS active, (_date + INTERVAL $11) AS payout_at, (_date + INTERVAL $12) AS cashout_time,
$13 AS counter_deleted,
_block_num as block_num, _block_num as block_num_created
FROM hivemind_app.hive_accounts ha,
hivemind_app.hive_permlink_data hpd,
hivemind_app.hive_posts php
INNER JOIN hivemind_app.hive_accounts pha ON pha.id = php.author_id
INNER JOIN hivemind_app.hive_permlink_data phpd ON phpd.id = php.permlink_id
WHERE pha.name = _parent_author AND phpd.permlink = _parent_permlink AND
ha.name = _author AND hpd.permlink = _permlink AND php.counter_deleted = $14
) s
ON CONFLICT ON CONSTRAINT hive_posts_ux1 DO UPDATE SET
--- During post update it is disallowed to change: parent-post, category, community-id
--- then also depth, is_valid and is_muted is impossible to change
--- post edit part
updated_at = _date,
active = _date,
block_num = _block_num
RETURNING (xmax = $15) as is_new_post, hp.id, hp.author_id, hp.permlink_id, (SELECT hcd.category FROM hivemind_app.hive_category_data hcd WHERE hcd.id = hp.category_id) as post_category, hp.parent_id, hp.community_id, hp.is_valid, hp.is_muted, hp.depth
93 min 3% 0 ms 130,197,762hivemind
INSERT INTO hivemind_app.hive_permlink_data
(permlink)
values
(
_permlink
)
ON CONFLICT DO NOTHING
89 min 2% 69 ms 77,414hived_group
SELECT hive.set_irreversible($1)
78 min 2% 0 ms 32,667,170hivemind
INSERT INTO hivemind_app.hive_posts as hp
(parent_id, depth, community_id, category_id,
root_id, is_muted, is_valid,
author_id, permlink_id, created_at, updated_at, sc_hot, sc_trend,
active, payout_at, cashout_time, counter_deleted, block_num, block_num_created,
tags_ids)
SELECT
s.parent_id,
s.depth,
(s.composite).community_id,
s.category_id,
s.root_id,
(s.composite).is_muted,
s.is_valid,
s.author_id,
s.permlink_id,
s.created_at,
s.updated_at,
s.sc_hot,
s.sc_trend,
s.active,
s.payout_at,
s.cashout_time,
s.counter_deleted,
s.block_num,
s.block_num_created,
s.tags_ids
FROM (
SELECT
hivemind_app.process_community_post(_block_num, _community_support_start_block, _parent_permlink, ha.id, $9,$10, $11) as composite,
$12 AS parent_id, $13 AS depth,
(SELECT hcg.id FROM hivemind_app.hive_category_data hcg WHERE hcg.category = _parent_permlink) AS category_id,
$14 as root_id, -- will use id as root one if no parent
$15 AS is_valid,
ha.id AS author_id, hpd.id AS permlink_id, _date AS created_at,
_date AS updated_at,
hivemind_app.calculate_time_part_of_hot(_date) AS sc_hot,
hivemind_app.calculate_time_part_of_trending(_date) AS sc_trend,
_date AS active, (_date + INTERVAL $16) AS payout_at, (_date + INTERVAL $17) AS cashout_time,
$18 AS counter_deleted,
_block_num as block_num, _block_num as block_num_created,
(
SELECT ARRAY_AGG( prepare_tags )
FROM hivemind_app.prepare_tags( ARRAY_APPEND(_metadata_tags, _parent_permlink ) )
) as tags_ids
FROM
hivemind_app.hive_accounts ha,
hivemind_app.hive_permlink_data hpd
WHERE ha.name = _author and hpd.permlink = _permlink
) s
ON CONFLICT ON CONSTRAINT hive_posts_ux1 DO UPDATE SET
--- During post update it is disallowed to change: parent-post, category, community-id
--- then also depth, is_valid and is_muted is impossible to change
--- post edit part
updated_at = _date,
active = _date,
block_num = _block_num,
tags_ids = EXCLUDED.tags_ids
RETURNING (xmax = $19) as is_new_post, hp.id, hp.author_id, hp.permlink_id, _parent_permlink as post_category, hp.parent_id, hp.community_id, hp.is_valid, hp.is_muted, hp.depth
73 min 2% 0 ms 28,166,528hivemind
SELECT is_new_post, id, author_id, permlink_id, post_category, parent_id, community_id, is_valid, is_muted, depth
FROM hivemind_app.process_hive_post_operation(($1)::varchar, ($2)::varchar, ($3)::varchar, ($4)::varchar, ($5)::timestamp, ($6)::integer, ($7)::integer, ($8)::VARCHAR[])
62 min 2% 0 ms 16,763,302hivemind
SELECT is_new_post, id, author_id, permlink_id, post_category, parent_id, community_id, is_valid, is_muted, depth
FROM hivemind_app.process_hive_post_operation(($1)::varchar, ($2)::varchar, ($3)::varchar, ($4)::varchar, ($5)::timestamp, ($6)::integer, ($7)::integer, (ARRAY[$8,$9,$10,$11,$12])::VARCHAR[])
60 min 2% 183 ms 19,625hivemind
SELECT hivemind_app.update_hive_posts_root_id(1, 81522953)
60 min 2% 183 ms 19,625hivemind
UPDATE hivemind_app.hive_posts uhp
SET root_id = id
WHERE uhp.root_id = 0 AND (_first_block_num IS NULL OR (uhp.block_num >= _first_block_num AND uhp.block_num <= _last_block_num))
49 min 1% 0 ms 32,667,170hivemind
INSERT INTO
hivemind_app.hive_tag_data AS htd(tag)
SELECT UNNEST( __tags )
ON CONFLICT("tag") DO UPDATE SET tag=EXCLUDED.tag --trick to always return id
RETURNING htd.id
44 min 1% 34 ms 77,414hived_group
SELECT hive.copy_transactions_to_irreversible( __irreversible_head_block, _block_num )
44 min 1% 34 ms 77,414hived_group
INSERT INTO hive.transactions
SELECT
htr.block_num
, htr.trx_in_block
, htr.trx_hash
, htr.ref_block_num
, htr.ref_block_prefix
, htr.expiration
, htr.signature
FROM
hive.transactions_reversible htr
JOIN ( SELECT
DISTINCT ON ( hbr.num ) hbr.num
, hbr.fork_id
FROM hive.blocks_reversible hbr
WHERE
hbr.num <= _new_irreversible_block
AND hbr.num > _head_block_of_irreversible_blocks
ORDER BY hbr.num ASC, hbr.fork_id DESC
) as num_and_forks ON htr.block_num = num_and_forks.num AND htr.fork_id = num_and_forks.fork_id
30 min 0.8% 91 ms 19,625hivemind
SELECT hivemind_app.update_notification_cache($1, $2, $3)
28 min 0.8% 84 ms 19,625hivemind
INSERT INTO hivemind_app.hive_notification_cache
(block_num, type_id, created_at, src, dst, dst_post_id, post_id, score, payload, community, community_title)
SELECT nv.block_num, nv.type_id, nv.created_at, nv.src, nv.dst, nv.dst_post_id, nv.post_id, nv.score, nv.payload, nv.community, nv.community_title
FROM hivemind_app.hive_raw_notifications_view nv
WHERE nv.block_num > __limit_block AND (_first_block_num IS NULL OR nv.block_num BETWEEN _first_block_num AND _last_block_num)
ORDER BY nv.block_num, nv.type_id, nv.created_at, nv.src, nv.dst, nv.dst_post_id, nv.post_id
23 min 0.6% 69 ms 19,625hivemind
SELECT hivemind_app.update_hive_posts_api_helper(1, 81522953)
23 min 0.6% 69 ms 19,625hivemind
INSERT INTO hivemind_app.hive_posts_api_helper (id, author_s_permlink)
SELECT hp.id, ha.name || '/' || hpd_p.permlink
FROM hivemind_app.live_posts_comments_view hp
JOIN hivemind_app.hive_accounts ha ON (ha.id = hp.author_id)
JOIN hivemind_app.hive_permlink_data hpd_p ON (hpd_p.id = hp.permlink_id)
WHERE hp.block_num BETWEEN _first_block_num AND _last_block_num
ON CONFLICT (id) DO NOTHING
22 min 0.6% 5,241 ms 256hivemind
INSERT INTO hivemind_app.__post_children
(id, child_count)
SELECT
h1.parent_id AS queried_parent,
SUM(COALESCE((SELECT pc.child_count FROM hivemind_app.__post_children pc WHERE pc.id = h1.id),
$3
) + $4
) AS count
FROM hivemind_app.hive_posts h1
WHERE (h1.parent_id != $5 OR __depth = $6) AND h1.counter_deleted = $7 AND h1.id != $8 AND h1.depth = __depth
GROUP BY h1.parent_id
ON CONFLICT ON CONSTRAINT __post_children_pkey DO UPDATE
SET child_count = hivemind_app.__post_children.child_count + excluded.child_count
22 min 0.6% 0 ms 11,118,867hived_group
SELECT $3 FROM ONLY "hive"."operations_reversible" x WHERE "id" OPERATOR(pg_catalog.=) $1 AND "fork_id" OPERATOR(pg_catalog.=) $2 FOR KEY SHARE OF x
21 min 0.6% 17 ms 77,414hived_group
SELECT hive.remove_obsolete_reversible_data( _block_num )
21 min 0.6% 16 ms 77,414hived_group
SELECT hive.copy_account_operations_to_irreversible( __irreversible_head_block, _block_num )
21 min 0.6% 16 ms 77,414hived_group
INSERT INTO hive.account_operations
SELECT
haor.block_num
, haor.account_id
, haor.account_op_seq_no
, haor.operation_id
, haor.op_type_id
FROM
hive.account_operations_reversible haor
JOIN (
SELECT
DISTINCT ON ( hbr.num ) hbr.num
, hbr.fork_id
FROM hive.blocks_reversible hbr
WHERE
hbr.num <= _new_irreversible_block
AND hbr.num > _head_block_of_irreversible_blocks
ORDER BY hbr.num ASC, hbr.fork_id DESC
) as num_and_forks ON haor.fork_id = num_and_forks.fork_id AND haor.block_num = num_and_forks.num
15 min 0.4% 0 ms 4,338,941hivemind
SELECT is_new_post, id, author_id, permlink_id, post_category, parent_id, community_id, is_valid, is_muted, depth
FROM hivemind_app.process_hive_post_operation(($1)::varchar, ($2)::varchar, ($3)::varchar, ($4)::varchar, ($5)::timestamp, ($6)::integer, ($7)::integer, (ARRAY[$8,$9,$10,$11])::VARCHAR[])
14 min 0.4% 11 ms 77,414hived_group
DELETE FROM hive.operations_reversible hor
WHERE hor.block_num <= _new_irreversible_block
13 min 0.4% 0 ms 3,007,711hivemind
SELECT is_new_post, id, author_id, permlink_id, post_category, parent_id, community_id, is_valid, is_muted, depth
FROM hivemind_app.process_hive_post_operation(($1)::varchar, ($2)::varchar, ($3)::varchar, ($4)::varchar, ($5)::timestamp, ($6)::integer, ($7)::integer, (ARRAY[$8,$9,$10,$11,$12,$13,$14,$15,$16,$17])::VARCHAR[])
13 min 0.4% 0 ms 4,598,467hivemind
SELECT is_new_post, id, author_id, permlink_id, post_category, parent_id, community_id, is_valid, is_muted, depth
FROM hivemind_app.process_hive_post_operation(($1)::varchar, ($2)::varchar, ($3)::varchar, ($4)::varchar, ($5)::timestamp, ($6)::integer, ($7)::integer, (ARRAY[$8,$9])::VARCHAR[])
13 min 0.3% 10 ms 77,394hived_group
INSERT INTO hive.account_operations_reversible VALUES( ( unnest( _account_operations ) ).*, __fork_id )
12 min 0.3% 0 ms 21,751,239hivemind
UPDATE
hivemind_app.hive_posts hp
SET
max_accepted_payout = $1,
percent_hbd = $2,
allow_votes = $3,
allow_curation_rewards = $4,
beneficiaries = $5
WHERE
hp.author_id = (SELECT id FROM hivemind_app.hive_accounts WHERE name = $6) AND
hp.permlink_id = (SELECT id FROM hivemind_app.hive_permlink_data WHERE permlink = $7)
10 min 0.3% 30 ms 19,624hivemind
SELECT hivemind_app.update_posts_rshares(81522953, 81580434)
9 min 0.2% 0 ms 2,808,779hivemind
SELECT is_new_post, id, author_id, permlink_id, post_category, parent_id, community_id, is_valid, is_muted, depth
FROM hivemind_app.process_hive_post_operation(($1)::varchar, ($2)::varchar, ($3)::varchar, ($4)::varchar, ($5)::timestamp, ($6)::integer, ($7)::integer, (ARRAY[$8,$9,$10])::VARCHAR[])
7 min 0.2% 0 ms 1,447,096hivemind
SELECT is_new_post, id, author_id, permlink_id, post_category, parent_id, community_id, is_valid, is_muted, depth
FROM hivemind_app.process_hive_post_operation(($1)::varchar, ($2)::varchar, ($3)::varchar, ($4)::varchar, ($5)::timestamp, ($6)::integer, ($7)::integer, (ARRAY[$8,$9,$10,$11,$12,$13,$14])::VARCHAR[])
6 min 0.2% 0 ms 1,278,484hivemind
SELECT is_new_post, id, author_id, permlink_id, post_category, parent_id, community_id, is_valid, is_muted, depth
FROM hivemind_app.process_hive_post_operation(($1)::varchar, ($2)::varchar, ($3)::varchar, ($4)::varchar, ($5)::timestamp, ($6)::integer, ($7)::integer, (ARRAY[$8,$9,$10,$11,$12,$13,$14,$15])::VARCHAR[])
6 min 0.2% 0 ms 1,146,373hivemind
SELECT is_new_post, id, author_id, permlink_id, post_category, parent_id, community_id, is_valid, is_muted, depth
FROM hivemind_app.process_hive_post_operation(($1)::varchar, ($2)::varchar, ($3)::varchar, ($4)::varchar, ($5)::timestamp, ($6)::integer, ($7)::integer, (ARRAY[$8,$9,$10,$11,$12,$13])::VARCHAR[])
5 min 0.1% 0 ms 1,095,534hivemind
SELECT is_new_post, id, author_id, permlink_id, post_category, parent_id, community_id, is_valid, is_muted, depth
FROM hivemind_app.process_hive_post_operation(($1)::varchar, ($2)::varchar, ($3)::varchar, ($4)::varchar, ($5)::timestamp, ($6)::integer, ($7)::integer, (ARRAY[$8,$9,$10,$11,$12,$13,$14,$15,$16])::VARCHAR[])
5 min 0.1% 0 ms 6,408,849hived_group
SELECT $2 FROM ONLY "hive"."accounts" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x
4 min 0.1% 0 ms 12,758,752hivemind
UPDATE hivemind_app.hive_accounts SET lastread_at = $1 WHERE name = $2
4 min 0.1% 0 ms 2,350,502hivemind
SELECT * FROM hivemind_app.enum_blocks4hivemind($1, $2)
4 min < 0.1% 0 ms 4,710,018hived_group
SELECT $3 FROM ONLY "hive"."account_operations_reversible" x WHERE $1 OPERATOR(pg_catalog.=) "operation_id" AND $2 OPERATOR(pg_catalog.=) "fork_id" FOR KEY SHARE OF x
4 min < 0.1% 3 ms 77,414hived_group
DELETE FROM hive.transactions_reversible htr
WHERE htr.block_num <= _new_irreversible_block
4 min < 0.1% 3 ms 77,414hived_group
DELETE FROM hive.account_operations_reversible har
USING hive.operations_reversible hor
WHERE
har.operation_id = hor.id
AND har.fork_id = hor.fork_id
AND hor.block_num <= _new_irreversible_block
3 min < 0.1% 0 ms 32,667,170hivemind
INSERT INTO hivemind_app.hive_category_data
(category)
VALUES (_parent_permlink)
ON CONFLICT (category) DO NOTHING
3 min < 0.1% 10 ms 19,625hivemind
SELECT hivemind_app.update_follow_count($1, $2)
```

## Create process for periodically re-clustering of tables
https://gitlab.syncad.com/hive/haf/-/issues/201 · 2024-02-29 · Dan Notestein

Probably the simplest solution for now is to create a script that redirects traffic to other nodes, re-clusters the table(s), then restores traffic. It might make sense to run the haf docker with hived disabled (i.e. --skip-hived) as well during the time of reclustering; not sure without testing.

Post-1.27.5

## Setup images are the production-deployable images, CI uses confusingly named "instance" image
https://gitlab.syncad.com/hive/HAfAH/-/issues/47 · 2024-01-04 · Dan Notestein

There is no real "instance" image needed for hafah; we just use the "setup" image. The image called "instance" is only used by CI (at least, I assume CI uses it). Maybe we can rename it to something that more clearly ties it to CI or development, so that no one will try to use it in a production environment.

## HAF recovery after loss of ram-based statefile
https://gitlab.syncad.com/hive/haf/-/issues/196 · 2024-01-12 · Dan Notestein

A haf server (shed14) with a fully synced hivemind and a ramdisk-based state file lost power (bad UPS). Docker images: shed14: 12/21 replayed thread-names with older hive ON hmind_howo 631b0543
`HAF_IMAGE=registry.gitlab.syncad.com/hive/haf/instance:6671c63`
Since the database was ok, I just did a `replay-blockchain` of hived to fix the statefile. After replay, hived died while trying to restore indexes:
```
2023-12-28T07:46:10.914081 indexes_controler.cpp:117 operator() ] Attempting to execute query: `SELECT hive.restore_indexes( 'hive.applied_hardforks' );`...
2023-12-28T07:46:10.914158 indexes_controler.cpp:120 operator() ] Query processor: `SELECT hive.restore_indexes( 'hive.irreversible_data' );' Creating of enable indexes done in 1.52499999999999991 ms
2023-12-28T07:46:10.914177 indexes_controler.cpp:124 operator() ] The enable indexes have been created...
2023-12-28T07:46:10.942131 data_processor.cpp:214 handle_exception ] Data processor Query processor: `SELECT hive.restore_indexes( 'hive.applied_hardforks' );' detected SQL statement execution failure. Failing statement: `SELECT hive.restore_indexes( 'hive.applied_hardforks' );'.
2023-12-28T07:46:10.942180 data_processor.cpp:16 kill_node ] An error occured and HAF is stopping synchronization...
```
Maybe some ordering problem related to how constraints are restored?
```
haf_block_log=# \dS+ hive.blocks
Table "hive.blocks"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
-------------------------+-----------------------------+-----------+----------+---------+----------+-------------+--------------+-------------
num | integer | | not null | | plain | | |
hash | bytea | | not null | | extended | | |
prev | bytea | | not null | | extended | | |
created_at | timestamp without time zone | | not null | | plain | | |
producer_account_id | integer | | not null | | plain | | |
transaction_merkle_root | bytea | | not null | | extended | | |
extensions | jsonb | | | | extended | | |
witness_signature | bytea | | not null | | extended | | |
signing_key | text | | not null | | extended | | |
hbd_interest_rate | hive.interest_rate | | | | plain | | |
total_vesting_fund_hive | hive.hive_amount | | | | main | | |
total_vesting_shares | hive.vest_amount | | | | main | | |
total_reward_fund_hive | hive.hive_amount | | | | main | | |
virtual_supply | hive.hive_amount | | | | main | | |
current_supply | hive.hive_amount | | | | main | | |
current_hbd_supply | hive.hbd_amount | | | | main | | |
dhf_interval_ledger | hive.hbd_amount | | | | main | | |
Access method: heap
haf_block_log=# \dS+ hive.irreversible_data
Table "hive.irreversible_data"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
------------------+---------+-----------+----------+---------+---------+-------------+--------------+-------------
id | integer | | not null | | plain | | |
consistent_block | integer | | | | plain | | |
is_dirty | boolean | | not null | | plain | | |
Indexes:
"pk_irreversible_data" PRIMARY KEY, btree (id)
Access method: heap
haf_block_log=# ALTER TABLE hive.irreversible_data ADD CONSTRAINT fk_1_hive_irreversible_data FOREIGN KEY (consistent_block) REFERENCES hive.blocks(num) NOT VALID;
ERROR: there is no unique constraint matching given keys for referenced table "blocks"
```
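The error above can be reproduced in isolation: PostgreSQL refuses to create a foreign key, even a `NOT VALID` one, while the referenced column lacks a primary key or unique constraint, so a restore routine must recreate PK/UNIQUE constraints before any FKs that reference them. A minimal sketch (hypothetical table names, not the actual HAF restore code):

```sql
-- Hypothetical minimal reproduction; 'blocks' / 'irreversible_data' here
-- are simplified stand-ins, not the real hive.* tables.
CREATE TABLE blocks (num integer NOT NULL);
CREATE TABLE irreversible_data (consistent_block integer);

-- This fails: blocks.num has no PK/UNIQUE constraint yet.
-- NOT VALID only skips validating existing rows; the referenced
-- unique index is still required at creation time.
-- ALTER TABLE irreversible_data
--   ADD CONSTRAINT fk_consistent_block
--   FOREIGN KEY (consistent_block) REFERENCES blocks (num) NOT VALID;
-- ERROR:  there is no unique constraint matching given keys for referenced table "blocks"

-- It works once the primary key is restored first:
ALTER TABLE blocks ADD CONSTRAINT pk_blocks PRIMARY KEY (num);
ALTER TABLE irreversible_data
  ADD CONSTRAINT fk_consistent_block
  FOREIGN KEY (consistent_block) REFERENCES blocks (num) NOT VALID;
```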
```
table_name | index_constraint_name | command | is_constraint | is_index | is_foreign_key
----------------------------+----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------+----------+----------------
hive.irreversible_data | fk_1_hive_irreversible_data | ALTER TABLE hive.irreversible_data ADD CONSTRAINT fk_1_hive_irreversible_data FOREIGN KEY (consistent_block) REFERENCES hive.blocks(num) NOT VALID | f | f | t
hive.blocks | fk_1_hive_blocks | ALTER TABLE hive.blocks ADD CONSTRAINT fk_1_hive_blocks FOREIGN KEY (producer_account_id) REFERENCES hive.accounts(id) DEFERRABLE INITIALLY DEFERRED NOT VALID | f | f | t
hive.transactions | fk_1_hive_transactions | ALTER TABLE hive.transactions ADD CONSTRAINT fk_1_hive_transactions FOREIGN KEY (block_num) REFERENCES hive.blocks(num) NOT VALID | f | f | t
hive.transactions_multisig | fk_1_hive_transactions_multisig | ALTER TABLE hive.transactions_multisig ADD CONSTRAINT fk_1_hive_transactions_multisig FOREIGN KEY (trx_hash) REFERENCES hive.transactions(trx_hash) NOT VALID | f | f | t
hive.operations | fk_1_hive_operations | ALTER TABLE hive.operations ADD CONSTRAINT fk_1_hive_operations FOREIGN KEY (block_num) REFERENCES hive.blocks(num) NOT VALID | f | f | t
hive.operations | fk_2_hive_operations | ALTER TABLE hive.operations ADD CONSTRAINT fk_2_hive_operations FOREIGN KEY (op_type_id) REFERENCES hive.operation_types(id) NOT VALID | f | f | t
hive.applied_hardforks | fk_1_hive_applied_hardforks | ALTER TABLE hive.applied_hardforks ADD CONSTRAINT fk_1_hive_applied_hardforks FOREIGN KEY (hardfork_vop_id) REFERENCES hive.operations(id) NOT VALID | f | f | t
hive.applied_hardforks | fk_2_hive_applied_hardforks | ALTER TABLE hive.applied_hardforks ADD CONSTRAINT fk_2_hive_applied_hardforks FOREIGN KEY (block_num) REFERENCES hive.blocks(num) NOT VALID | f | f | t
hive.accounts | fk_1_hive_accounts | ALTER TABLE hive.accounts ADD CONSTRAINT fk_1_hive_accounts FOREIGN KEY (block_num) REFERENCES hive.blocks(num) MATCH FULL NOT VALID | f | f | t
hive.account_operations | hive_account_operations_fk_1 | ALTER TABLE hive.account_operations ADD CONSTRAINT hive_account_operations_fk_1 FOREIGN KEY (account_id) REFERENCES hive.accounts(id) NOT VALID | f | f | t
hive.account_operations | hive_account_operations_fk_2 | ALTER TABLE hive.account_operations ADD CONSTRAINT hive_account_operations_fk_2 FOREIGN KEY (operation_id) REFERENCES hive.operations(id) NOT VALID | f | f | t
hive.account_operations | hive_account_operations_fk_3 | ALTER TABLE hive.account_operations ADD CONSTRAINT hive_account_operations_fk_3 FOREIGN KEY (op_type_id) REFERENCES hive.operation_types(id) NOT VALID | f | f | t
hive.blocks | pk_hive_blocks | ALTER TABLE hive.blocks ADD CONSTRAINT pk_hive_blocks PRIMARY KEY (num) | t | f | f
hive.blocks | hive_blocks_created_at_idx | CREATE INDEX hive_blocks_created_at_idx ON hive.blocks USING btree (created_at) | f | t | f
hive.blocks | hive_blocks_producer_account_id_idx | CREATE INDEX hive_blocks_producer_account_id_idx ON hive.blocks USING btree (producer_account_id) | f | t | f
hive.transactions | pk_hive_transactions | ALTER TABLE hive.transactions ADD CONSTRAINT pk_hive_transactions PRIMARY KEY (trx_hash) | t | f | f
hive.transactions | hive_transactions_block_num_trx_in_block_idx | CREATE INDEX hive_transactions_block_num_trx_in_block_idx ON hive.transactions USING btree (block_num, trx_in_block) | f | t | f
hive.transactions_multisig | pk_hive_transactions_multisig | ALTER TABLE hive.transactions_multisig ADD CONSTRAINT pk_hive_transactions_multisig PRIMARY KEY (trx_hash, signature) | t | f | f
hive.applied_hardforks | pk_hive_applied_hardforks | ALTER TABLE hive.applied_hardforks ADD CONSTRAINT pk_hive_applied_hardforks PRIMARY KEY (hardfork_num) | t | f | f
hive.applied_hardforks | hive_applied_hardforks_block_num_idx | CREATE INDEX hive_applied_hardforks_block_num_idx ON hive.applied_hardforks USING btree (block_num) | f | t | f
hive.accounts | pk_hive_accounts_id | ALTER TABLE hive.accounts ADD CONSTRAINT pk_hive_accounts_id PRIMARY KEY (id) | t | f | f
hive.accounts | uq_hive_accounst_name | ALTER TABLE hive.accounts ADD CONSTRAINT uq_hive_accounst_name UNIQUE (name) | t | f | f
hive.accounts | hive_accounts_block_num_idx | CREATE INDEX hive_accounts_block_num_idx ON hive.accounts USING btree (block_num) | f | t | f
hive.account_operations | hive_account_operations_uq2 | ALTER TABLE hive.account_operations ADD CONSTRAINT hive_account_operations_uq2 UNIQUE (account_id, operation_id) | t | f | f
hive.account_operations | hive_account_operations_uq_1 | ALTER TABLE hive.account_operations ADD CONSTRAINT hive_account_operations_uq_1 UNIQUE (account_id, account_op_seq_no) | t | f | f
hive.account_operations | hive_account_operations_type_account_id_op_seq_idx | CREATE UNIQUE INDEX hive_account_operations_type_account_id_op_seq_idx ON hive.account_operations USING btree (op_type_id, account_id, account_op_seq_no DESC) INCLUDE (operation_id, block_num) | f | t | f
(26 rows)
```

https://gitlab.syncad.com/hive/HAfAH/-/issues/46
Some get_account_history calls still slow (2024-03-01, Dan Notestein)
See this MR for details: https://gitlab.syncad.com/hive/HAfAH/-/merge_requests/83

https://gitlab.syncad.com/hive/haf_api_node/-/issues/4
api node fails while loading data (2024-01-03, mcfarhat)
This is probably the third time the node has failed while loading data; this time I've got the log. I used block_log to load the blockchain data with the replay option as an argument. It was going well until, at around block 63M, the connection to the db server got lost.
Attached is the log file and the env file.
[.env](/uploads/9c40bbb5405da8ae3e6506e2c3a323ca/.env)
[acti.zip](/uploads/e5b73558fe4377104e70f1169f855b7e/acti.zip)
I tried bringing down docker compose, which it did, but starting over was still a no-go. I got the "inconsistent data" error message again.

https://gitlab.syncad.com/hive/hivemind/-/issues/224
can't stop hivemind docker? (2023-12-22, Mahdi Yari)
hivemind is done syncing and is live on the head block. I'm watching the logs and the `docker stop` command doesn't interrupt it.

https://gitlab.syncad.com/hive/denser/-/issues/202
Code inconsistency among the apps? (2023-12-22, Gandalf)
So we have three "apps" here: `blog`, `wallet` and `auth`. How is it that each of them has a very different login screen? That is (or would be) a maintenance nightmare if we go on this way. See how #199, #200 and #201 behave differently on every site.

https://gitlab.syncad.com/hive/denser/-/issues/199
Login form: Input elements should have autocomplete attributes (2024-01-11, Gandalf; assignee: Damian Janus)
More info: https://goo.gl/9p2vKq
suggested: "current-password"
etc.

https://gitlab.syncad.com/hive/haf/-/issues/194
Create docker-internal HAF-maintenance service (2024-02-29, Bartek Wrona)
The following discussion from !392 should be addressed:
- [ ] @bwrona started a [discussion](https://gitlab.syncad.com/hive/haf/-/merge_requests/392#note_147498):
> SUPERUSER and `haf_administrators_group` membership should be avoided here when the external haf-maintenance process is incorporated.
Also this can be solved then: https://gitlab.syncad.com/hive/haf/-/merge_requests/392#note_147499
To create such a service we should:
- create a dedicated haf-maintainer role with the proper permissions
- create some psql-based scripts (usually just calling SQL procedures) using the dedicated database role; these procedures should have their permissions set so that only the haf-maintainer role can execute them
- discover some task-scheduler solution available for docker deployment. We can try pg_cron or regular crond. (Milestone: Post-1.27.5)

https://gitlab.syncad.com/hive/haf/-/issues/192
investigate how to utilize postgres SECURITY INVOKER and SECURITY DEFINER (2024-01-27, Marcin)
It may be useful for HAF to use the SECURITY INVOKER and SECURITY DEFINER function options to better and more simply protect internal data (like shadow tables, etc.):
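As a sketch of how these options could protect internal data (all object names hypothetical, not HAF's actual schema): a SECURITY DEFINER function runs with its owner's privileges, so callers can be given a narrow entry point to a table they cannot read directly, while SECURITY INVOKER (the default) keeps a function running with the caller's own rights.

```sql
-- Hypothetical example; the 'internal' schema and table are illustrative.
CREATE SCHEMA internal;
CREATE TABLE internal.shadow_rows (id integer, payload text);
REVOKE ALL ON internal.shadow_rows FROM PUBLIC;

-- Runs with the function owner's privileges, so callers can use it
-- even though they cannot touch internal.shadow_rows directly.
CREATE FUNCTION count_shadow_rows()
RETURNS bigint
LANGUAGE sql
SECURITY DEFINER
-- Pinning search_path is standard hardening for SECURITY DEFINER functions.
SET search_path = internal, pg_temp
AS $$ SELECT count(*) FROM internal.shadow_rows $$;

GRANT EXECUTE ON FUNCTION count_shadow_rows() TO PUBLIC;
```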
https://www.postgresql.org/docs/14/sql-createfunction.html

https://gitlab.syncad.com/hive/jussi/-/issues/5
better cache for blocks (2023-12-19, Gandalf)
```
{"e":"TypeError(\"'>' not supported between instances of 'NoneType' and 'int'\",)","lirb":81172157,"request_string":"{\"id\":1,\"jsonrpc\":\"2.0\",\"method\":\"block_api.get_block\",\"params\":{\"block_num\":81172159}}","jsonrpc_response":{"jsonrpc":"2.0","result":{},"id":1},"event":"Unable to cache using last irreversible block","logger":"jussi.cache.utils","level":"warning"}
```
Jussi might not know what the actual irreversible block is.
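The TypeError in the log is the usual Python 3 None-comparison failure: if the last irreversible block is unknown, comparing it against an `int` raises. A hedged sketch of a guard (function and parameter names are hypothetical, not jussi's actual API):

```python
def cacheable_until(block_num, last_irreversible):
    """Decide whether a block response may be cached as irreversible.

    In Python 3, comparing None with an int raises
    "TypeError: '>' not supported between instances of 'NoneType' and 'int'",
    which matches the warning above. Checking for None first avoids that.
    """
    if last_irreversible is None:
        # Jussi does not know the irreversible block yet; skip caching.
        return False
    return block_num <= last_irreversible

# The failing request from the log: block 81172159 vs lirb 81172157.
print(cacheable_until(81172159, 81172157))  # False: not yet irreversible
print(cacheable_until(81172159, None))      # False: unknown lirb, no TypeError
```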
FYI @bwrona @dan

https://gitlab.syncad.com/hive/haf/-/issues/191
Applications examples from src/hive_fork_manager/doc needs to be updated and then tested on CI (2023-12-19, Marcin)
Currently, the applications from https://gitlab.syncad.com/hive/haf/-/tree/develop/src/hive_fork_manager/doc/examples?ref_type=heads are not running on CI, and in consequence, some of them are no longer valid and do not work. The applications need to be updated to the current shape of the HAF interfaces and then tested on CI.
@Trela