Delete comment and RTL issues.
Gandalf (01b5185e) at 11 Feb 11:34
Move Comment and Vote tool classes to a lower level in local_tools
Gandalf (bbec698d) at 11 Feb 11:32
I've already discussed the general idea for that; TL;DR:
A Denser instance that can be configured to handle one specified community, let it be HiveDevs for example: https://hive.blog/trending/hive-139531
For the MVP we want to focus on that single community, with no content from outside the community.
The MVP can keep the Denser look, but it should be easily configurable with "themes" / "skins".
For the MVP, "easily configurable" means easily by developers, who can prepare a few such themes for the few most important clients.
Such an instance could then be deployed to https://hivedevs.openhive.network, for example.
See https://www.splintertalk.io/ for an example.
A community-focused Denser instance is currently considered the most wanted (ASAP grade) feature ;-)
While looking into #655 and examining update_median_witness_props, it was revealed that some of the witness properties are activated too soon (with the next witness schedule yet to come).
This is a hardfork change.
I viewed that as a normal state of things, and for me it was a non-issue, or at least not an issue big enough to consider a change.
More on that was discussed on OpenHive.Chat's #witness channel; I will provide more info later on. For now this is just a placeholder so we don't forget about the issue.
See libraries/chain/witness_schedule.cpp; there's an update_median_witness_props function there.
It handles:
account_creation_fee
maximum_block_size
hbd_interest_rate
account_subsidy_budget
account_subsidy_decay
taking into account the values that the witnesses scheduled for production have signaled as their properties.
FYI: we are implicitly assuming an odd number of elements, because with an even number it gets odd (i.e. so that our active.size()/2 works, giving us an "almost median" value, which we agreed is good enough for this purpose and avoids averaging the two middle elements).
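A minimal sketch of that "almost median" selection, assuming the scheduled witnesses' signaled properties are available in a vector; the struct and function names here are illustrative, not the actual hived code:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative stand-in for a scheduled witness's signaled properties;
// the real chain object carries more fields.
struct witness_props
{
  int64_t  account_creation_fee = 0;
  uint32_t maximum_block_size   = 0;
  uint16_t hbd_interest_rate    = 0;
};

// "Almost median": order the scheduled witnesses by one property and take
// the element at active.size()/2. With the usual odd count (21) this is
// the true median; with an even count it is the upper of the two middle
// elements, which was agreed to be good enough and avoids averaging.
// Each of the listed properties would be handled the same way.
uint16_t median_hbd_interest_rate( std::vector< witness_props > active )
{
  std::nth_element( active.begin(),
                    active.begin() + active.size() / 2,
                    active.end(),
                    []( const witness_props& a, const witness_props& b )
                    { return a.hbd_interest_rate < b.hbd_interest_rate; } );
  return active[ active.size() / 2 ].hbd_interest_rate;
}
```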
Please note that while such fluctuations are normal and expected when voting activity is high (stakeholders disagreeing on what APR should be effective and voting some witnesses in or out), the current issue can happen without any stakeholder action, just by the nature of the changing witness schedule, which happens roughly every minute.
The challenge here is that it operates on a witness schedule where all the witnesses are equal, including "the 21st".
This is a hardfork change.
For example, such URLs:
are all interchangeable, and by changing the frontend (whether it's a blogging platform such as hive.blog or ecency.com, or a block explorer like hiveblocks.com) they should direct to the expected content.
So any link you see on hive.blog should be easily viewable in an explorer.
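A toy illustration of why this can work, assuming the frontends share the canonical /@author/permlink path form the issue asks for; the helper and sample permlink are hypothetical:

```cpp
#include <iostream>
#include <string>

// Hypothetical helper: if frontends agree on the /@author/permlink path
// scheme, retargeting a link is just a host swap.
std::string with_frontend( const std::string& path, const std::string& host )
{
  return "https://" + host + path;
}

int main()
{
  const std::string path = "/@author/some-permlink"; // illustrative, not a real post
  std::cout << with_frontend( path, "hive.blog" )      << '\n'
            << with_frontend( path, "ecency.com" )     << '\n'
            << with_frontend( path, "hiveblocks.com" ) << '\n';
}
```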
While testing !1204 I got a false positive:
3228982ms application.cpp:110 startup ] Startup...
3228983ms chain_plugin.cpp:1596 plugin_startup ] Chain plugin initialization...
3228983ms chain_plugin.cpp:683 initial_settings ] Starting chain with shared_file_size: 25769803776 bytes
3228983ms chain_plugin.cpp:1600 plugin_startup ] Database opening...
3228983ms chain_plugin.cpp:767 open ] Opening shared memory from /haf-pool/old-style-backup-production/exchange-test/blockchain
3228983ms block_log.cpp:168 open ] Opening blocklog /haf-pool/old-style-backup-production/exchange-test/blockchain/block_log
3228983ms block_log.cpp:195 open ] my->block_log_size: 495694027305
3228993ms block_log_artifacts.cpp:330 open ] Opening artifacts file /haf-pool/old-style-backup-production/exchange-test/blockchain/block_log.artifacts in read & write mode ...
3228993ms block_log_artifacts.cpp:437 load_header ] Loaded header containing: git rev: 0acce16829c0702ac09d9e51beb57b9fac157c22, format version: 1.1, head_block_num: 81844489, tail_block_num: 1, generating_interrupted_at_block: 0, dirty_closed: 0
3228993ms block_log_artifacts.cpp:600 verify_if_blocks_fro ] Starting deep verification of already collected artifacts for the block range: 81844488 : 81844478. Any error during this process means that the artifacts don't match the block_log.
3228995ms block_log_artifacts.cpp:652 verify_if_blocks_fro ] Artifacts file matches block_log file.
3228996ms chainbase.cpp:247 open ] Compiler and build environment read from persistent storage: `{"compiler":"11.4.0", "debug":0, "apple":0, "windows":0, {"version":{"blockchain_version":"1.27.5","hive_revision":"9c678c23cc1858f3a64151e0bb51567231dbcb77","fc_revision":"9c678c23cc1858f3a64151e0bb51567231dbcb77","node_type":"mainnet"}}, "plugins" : ["account_by_key", "account_by_key_api", "account_history_api", "account_history_rocksdb", "block_api", "chain", "condenser_api", "database_api", "json_rpc", "network_broadcast_api", "p2p", "rc_api", "state_snapshot", "transaction_status", "transaction_status_api", "wallet_bridge_api", "webserver", "witness"]}'
3229068ms database.cpp:179 open ] 4130200 blockchain_config_mismatch_exception: Blockchain config from shared memory file mismatch current version of app.
Mismatch between blockchain configuration loaded from shared memory file and the current one
Full data about blockchain configuration are in files: current_blockchain_config.log, loaded_blockchain_config.log
{"current_config_filename":"current_blockchain_config.log","loaded_config_filename":"loaded_blockchain_config.log"}
database.cpp:3479 verify_match_of_blockchain_configuration
3229068ms database.cpp:179 open ] args.data_dir: /haf-pool/old-style-backup-production/exchange-test/blockchain args.shared_mem_dir: /haf-pool/old-style-backup-production/exchange-test/blockchain args.shared_file_size: 25769803776
3229071ms blockchain_worker_thread_pool.cpp:511 shutdown ] shutting down worker threads
3229071ms blockchain_worker_thread_pool.cpp:516 shutdown ] worker threads successfully shut down
3229071ms chain_plugin.cpp:787 open ] Error opening database. If the binary or configuration has changed, replay the blockchain explicitly using `--force-replay`.
3229071ms chain_plugin.cpp:788 open ] Error: {"code":4130200,"name":"blockchain_config_mismatch_exception","message":"Blockchain config from shared memory file mismatch current version of app.","stack":[{"context":{"level":"error","file":"database.cpp","line":3479,"method":"verify_match_of_blockchain_configuration","hostname":"","thread_name":"th_a","timestamp":"2024-02-07T14:53:49"},"format":"Mismatch between blockchain configuration loaded from shared memory file and the current one\nFull data about blockchain configuration are in files: ${current_config_filename}, ${loaded_config_filename}","data":{"current_config_filename":"current_blockchain_config.log","loaded_config_filename":"loaded_blockchain_config.log"}},{"context":{"level":"warn","file":"database.cpp","line":179,"method":"open","hostname":"","thread_name":"th_a","timestamp":"2024-02-07T14:53:49"},"format":"rethrow","data":{"args.data_dir":"/haf-pool/old-style-backup-production/exchange-test/blockchain","args.shared_mem_dir":"/haf-pool/old-style-backup-production/exchange-test/blockchain","args.shared_file_size":25769803776}}],"extension":{}}
I was on the as-chain-spec-verification branch:
{"version":{"blockchain_version":"1.27.5","hive_revision":"9c678c23cc1858f3a64151e0bb51567231dbcb77","fc_revision":"9c678c23cc1858f3a64151e0bb51567231dbcb77","node_type":"mainnet"}}
Using the existing block_log and block_log.artifacts, I did a forced replay (i.e. from scratch) with --exit-before-sync.
Once it was replayed, I just started it again to go on with sync and got the above error.
Currently hived doesn't check, and doesn't mind, whether its database state is from the future (i.e. head block time > now) when it starts (after an optional replay or snapshot loading).
This usually happens due to the system time being set incorrectly. When the node is a block producer, it may cause blocks to be produced too early and force a situation in which the designated producer is deemed to have missed its block production slot (a real-life situation already encountered).
This merge request helps avoid that situation by shutting down the node when head block time > now on node start.
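A hedged sketch of what such a startup guard could look like, using std::chrono as a stand-in for fc's time types; the names and message are illustrative, not the actual MR code:

```cpp
#include <chrono>
#include <stdexcept>

// Sketch of the startup guard this MR adds: if the state loaded from
// disk claims a head block from the future (typically a symptom of a
// wrongly set system clock), refuse to start instead of letting the
// producer generate blocks too early.
void check_head_block_not_in_future( std::chrono::sys_seconds head_block_time )
{
  using namespace std::chrono;
  const auto now = time_point_cast< seconds >( system_clock::now() );
  if( head_block_time > now )
    throw std::runtime_error(
      "Head block time is in the future - check the system clock before starting the node" );
}
```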
FYI: IMHO this issue is minor and shouldn't block the upcoming release.
(As a side note, it would be nice to have details on the actual versions / values that don't match.)
While testing !1204 (but apparently not limited to that branch) I found that, although RocksDB reported successfully opening the storage at location
1250711ms account_history_rocksdb_plugin.cpp:556 openDb ] RocksDB opened successfully storage at location: `/haf-pool/old-style-backup-production/exchange-test/blockchain/account-history-rocksdb-storage'.
it failed because of a store minor version mismatch:
1250721ms main.cpp:176 main ] 10 assert_exception: Assert Exception
minor == STORE_MINOR_VERSION
Store minor version mismatch
{}
account_history_rocksdb_plugin.cpp:756 verifyStoreVersion
So the question is whether it isn't too strict (i.e. for versions that don't require a replay, this check would effectively enforce one).
The problem is also annoying because --force-replay can't actually force a replay.
Of course, the simple workaround of deleting account-history-rocksdb-storage helps.
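For illustration, a hedged sketch of the more lenient policy the question implies: hard-fail only on a major store version mismatch, and merely warn on a minor one. The constants and function are illustrative, not the plugin's actual code:

```cpp
#include <cstdint>
#include <iostream>
#include <stdexcept>

// Illustrative constants; the real values live in the
// account_history_rocksdb plugin sources.
constexpr uint32_t STORE_MAJOR_VERSION = 1;
constexpr uint32_t STORE_MINOR_VERSION = 1;

// A more lenient policy than the current assert: a major mismatch means
// an incompatible on-disk layout (replay required), while a minor
// mismatch could arguably just emit a warning, so versions that don't
// require a replay aren't forced into one.
void verify_store_version( uint32_t major, uint32_t minor )
{
  if( major != STORE_MAJOR_VERSION )
    throw std::runtime_error( "Store major version mismatch - replay required" );
  if( minor != STORE_MINOR_VERSION )
    std::cerr << "Warning: store minor version mismatch (" << minor
              << " vs " << STORE_MINOR_VERSION << ")\n";
}
```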
Also, it needs to warn about the chain head block being old.
Use case: both hived and HAF are disconnected from p2p and not synced with the chain, even though hived and HAF are on the same block.
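A companion sketch to the guard above for this "old head block" case; the one-hour threshold is an arbitrary illustrative choice, not a value from the MR:

```cpp
#include <chrono>
#include <iostream>

// When the head block is far behind `now` (e.g. both hived and HAF sit
// on the same block but neither is synced with the chain), only warn -
// being behind is a normal state during sync, so aborting would be wrong.
void warn_if_head_block_old( std::chrono::sys_seconds head_block_time )
{
  using namespace std::chrono;
  const auto now = time_point_cast< seconds >( system_clock::now() );
  if( now - head_block_time > hours( 1 ) )
    std::cerr << "Warning: head block is over an hour old; "
                 "the node may be disconnected from p2p.\n";
}
```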
Badges (a.k.a. Truly Decentralized Badges) were mentioned in a Denser feature request: denser#181
The idea of badges was presented by PeakD: https://hive.blog/hive-198327/@peak.answers/how-to-create-a-badge-on-peakd-com
What we want is a simple decentralized algorithm that makes use of follow and mute (see the sketch after the list):
- user is followed by an account matching the name badge-??????, where ? are digits (I believe it's currently exactly six digits)
- user doesn't have such a badge account muted
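A minimal sketch of the badge check under those two rules, assuming the follower and mute lists have already been fetched (e.g. from the follow API); all names are illustrative:

```cpp
#include <algorithm>
#include <regex>
#include <string>
#include <vector>

// "badge-" followed by digits (believed to be exactly six currently).
bool matches_badge_name( const std::string& account )
{
  static const std::regex badge_re( R"(badge-\d{6})" );
  return std::regex_match( account, badge_re );
}

// A user holds a badge when the badge-account follows them and the user
// has not muted that badge-account.
bool has_badge( const std::string& badge_account,
                const std::vector< std::string >& followers_of_user,
                const std::vector< std::string >& muted_by_user )
{
  if( !matches_badge_name( badge_account ) )
    return false;
  const bool followed = std::find( followers_of_user.begin(), followers_of_user.end(),
                                   badge_account ) != followers_of_user.end();
  const bool muted = std::find( muted_by_user.begin(), muted_by_user.end(),
                                badge_account ) != muted_by_user.end();
  return followed && !muted;
}
```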
There are a few old configuration options and related plugins that we should get rid of to keep things simple and clean. For instance:
--follow-max-feed-size arg (=500)
Set the maximum size of cached feed for an account
--follow-start-feeds arg (=0)
Block time (in epoch seconds) when to start calculating feeds
--tags-start-promoted arg (=0)
Block time (in epoch seconds) when to start calculating promoted content. Should be 1 week prior to current time.
--tags-skip-startup-update
Skip updating tags on startup. Can safely be skipped when starting a previously running node. Should not be skipped when reindexing.
and related tags and follow plugins.
Also, some more cleanup is needed for other deprecated options:
--stop-replay-at-block arg
[ DEPRECATED ] Stop replay after reaching given block number
--exit-after-replay
[ DEPRECATED ] Exit after reaching given block number
--force-validate
Force validation of all transactions. Deprecated in favor of p2p-force-validate
Anything more?
gandalf@grey:~$ docker run hiveio/hive:v-develop --help
setting user hived uid to value 1000
+ '[' -n '' ']'
Shared memory file directory (SHM_DIR) /home/hived/datadir/blockchain does not exist. Exiting.
It fails when SHM_DIR is not overridden, because the default one points to a directory that doesn't exist yet.
Gandalf (bbec698d) at 25 Jan 20:58
Added missing labels to the Dockerfile. Removed references to CI-sp...