Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (109)
Showing with 567 additions and 166 deletions
@@ -15,8 +15,8 @@ testnet_node_build:
   stage: build
   image: "$CI_REGISTRY_IMAGE/builder$BUILDER_IMAGE_TAG"
   script:
-    # LOW_MEMORY=OFF CLEAR_VOTES=OFF TESTNET=ON ENABLE_MIRA=OFF
+    # LOW_MEMORY=OFF CLEAR_VOTES=OFF TESTNET=ON ENABLE_MIRA=OFF HIVE_LINT=ON
-    - ./ciscripts/build.sh OFF OFF ON OFF
+    - ./ciscripts/build.sh OFF OFF ON OFF ON
     - mkdir -p "$CI_JOB_NAME"/tests/unit
     - mv build/install-root "$CI_JOB_NAME"
     - mv contrib/hived.run "$CI_JOB_NAME"
@@ -35,8 +35,8 @@ consensus_build:
   stage: build
   image: "$CI_REGISTRY_IMAGE/builder$BUILDER_IMAGE_TAG"
   script:
-    # LOW_MEMORY=ON CLEAR_VOTES=ON TESTNET=OFF ENABLE_MIRA=OFF
+    # LOW_MEMORY=ON CLEAR_VOTES=ON TESTNET=OFF ENABLE_MIRA=OFF HIVE_LINT=ON
-    - ./ciscripts/build.sh ON ON OFF OFF
+    - ./ciscripts/build.sh ON ON OFF OFF ON
     - mkdir "$CI_JOB_NAME"
     - mv build/install-root "$CI_JOB_NAME"
     - mv contrib/hived.run "$CI_JOB_NAME"
@@ -86,6 +86,13 @@ plugin_test:
   tags:
     - public-runner-docker
 
+.beem_setup : &beem_setup |
+  git clone --depth=1 --single-branch --branch dk-hybrid-operations https://gitlab.syncad.com/hive/beem.git
+  cd beem
+  python3 setup.py install
+  cd ..
+  mkdir -p build/tests/hive-node-data
+
 beem_tests:
   stage: test
   needs:
@@ -95,13 +102,7 @@ beem_tests:
   variables:
     PYTHONPATH: $CI_PROJECT_DIR/tests/functional
   script:
-    # boilerplate for installing latested beem
-    - git clone --depth=1 --single-branch --branch dk-hybrid-operations https://gitlab.syncad.com/hive/beem.git
-    - cd beem
-    - python3 setup.py install
-    - cd ..
-    # stuff specific to this test
-    - mkdir -p build/tests/hive-node-data
+    - *beem_setup
     - cd tests/functional/python_tests/dhf_tests
     - "python3 run_proposal_tests.py initminer hive.fund 5JNHfZYKGaomSFvd4NUdQ9qMcEAC43kujbfjueTHpVapX1Kzq2n --run-hived $CI_PROJECT_DIR/testnet_node_build/install-root/bin/hived --working-dir=$CI_PROJECT_DIR/build/tests/hive-node-data"
     - rm -rf $CI_PROJECT_DIR/build/tests/hive-node-data
@@ -130,13 +131,7 @@ list_proposals_tests:
   variables:
     PYTHONPATH: $CI_PROJECT_DIR/tests/functional
   script:
-    # boilerplate for installing latested beem
-    - git clone --depth=1 --single-branch --branch dk-hybrid-operations https://gitlab.syncad.com/hive/beem.git
-    - cd beem
-    - python3 setup.py install
-    - cd ..
-    # stuff specific to this test
-    - mkdir -p build/tests/hive-node-data
+    - *beem_setup
     - cd tests/functional/python_tests/dhf_tests
     - "python3 list_proposals_tests.py initminer initminer 5JNHfZYKGaomSFvd4NUdQ9qMcEAC43kujbfjueTHpVapX1Kzq2n --run-hived $CI_PROJECT_DIR/testnet_node_build/install-root/bin/hived --working-dir=$CI_PROJECT_DIR/build/tests/hive-node-data --junit-output=list_proposals_tests.xml"
   artifacts:
@@ -149,6 +144,28 @@ list_proposals_tests:
   tags:
     - public-runner-docker
 
+cli_wallet_tests:
+  stage: test
+  needs:
+    - job: testnet_node_build
+      artifacts: true
+  image: "$CI_REGISTRY_IMAGE/test$TEST_IMAGE_TAG"
+  variables:
+    PYTHONPATH: $CI_PROJECT_DIR/tests/functional
+  script:
+    - *beem_setup
+    - cd tests/functional/python_tests/cli_wallet
+    - "python3 run.py --hive-path $CI_PROJECT_DIR/testnet_node_build/install-root/bin/hived --hive-working-dir=$CI_PROJECT_DIR/build/tests/hive-node-data --path-to-cli $CI_PROJECT_DIR/testnet_node_build/install-root/bin --creator initminer --wif 5JNHfZYKGaomSFvd4NUdQ9qMcEAC43kujbfjueTHpVapX1Kzq2n --junit-output=cli_wallet_tests.xml"
+  artifacts:
+    paths:
+      - tests/functional/python_tests/cli_wallet/tests/logs/cli_wallet.log
+    reports:
+      junit: tests/functional/python_tests/cli_wallet/cli_wallet_tests.xml
+    when: always
+    expire_in: 6 months
+  tags:
+    - public-runner-docker
+
 hived_options_tests:
   stage: test
   needs:
@@ -159,10 +176,45 @@ hived_options_tests:
     PYTHONPATH: $CI_PROJECT_DIR/tests/functional
   script:
     - cd tests/functional/python_tests/hived
+    - apt-get update -y && apt-get install -y python3 python3-pip python3-dev
+    - pip3 install -U psutil
     - "python3 hived_options_tests.py --run-hived $CI_PROJECT_DIR/testnet_node_build/install-root/bin/hived"
   tags:
     - public-runner-docker
 
+hived_replay_tests:
+  stage: test
+  needs:
+    - job: consensus_build
+      artifacts: true
+  image: "$CI_REGISTRY_IMAGE/builder$BUILDER_IMAGE_TAG"
+  variables:
+    PYTHONPATH: $CI_PROJECT_DIR/tests/functional
+  script:
+    - export ROOT_DIRECTORY=$PWD
+    - mkdir $ROOT_DIRECTORY/replay_logs
+    - cd tests/functional/python_tests/hived
+    - apt-get update -y && apt-get install -y python3 python3-pip python3-dev
+    - pip3 install -U wget psutil junit_xml gcovr secp256k1prp requests
+    - $CI_PROJECT_DIR/consensus_build/install-root/bin/truncate_block_log /blockchain/block_log /tmp/block_log 3000000
+    # quick replays for 10k blocks, with node restarts
+    - "python3 snapshot_1.py --run-hived $CI_PROJECT_DIR/consensus_build/install-root/bin/hived --block-log /tmp/block_log --blocks 10000 --artifact-directory $ROOT_DIRECTORY/replay_logs"
+    - "python3 snapshot_2.py --run-hived $CI_PROJECT_DIR/consensus_build/install-root/bin/hived --block-log /tmp/block_log --blocks 10000 --artifact-directory $ROOT_DIRECTORY/replay_logs"
+    # group of tests, that uses one node with 5 milion blocks replayed
+    - "python3 start_replay_tests.py --run-hived $CI_PROJECT_DIR/consensus_build/install-root/bin/hived --blocks 3000000 --block-log /tmp/block_log --test-directory $PWD/replay_based_tests --artifact-directory $ROOT_DIRECTORY/replay_logs"
+  artifacts:
+    paths:
+      - replay_logs
+    when: always
+    expire_in: 6 months
+  tags:
+    - public-runner-docker
+    - hived-for-tests
+
 package_consensus_node:
   stage: package
   needs:
@@ -182,3 +234,4 @@ package_consensus_node:
     - "echo ===> the consensus node image for this build is: $CI_REGISTRY_IMAGE/consensus_node:$CI_COMMIT_SHORT_SHA"
   tags:
     - public-runner-docker
@@ -81,6 +81,9 @@ if( ENABLE_MIRA )
 endif()
 
 OPTION( LOW_MEMORY_NODE "Build source for low memory node (ON OR OFF)" OFF )
+include( CMakeDependentOption )
+CMAKE_DEPENDENT_OPTION( STORE_ACCOUNT_METADATA "Keep the json_metadata for accounts, normally discarded on low memory nodes" OFF "LOW_MEMORY_NODE" ON )
 MESSAGE( STATUS "LOW_MEMORY_NODE: ${LOW_MEMORY_NODE}" )
 if( LOW_MEMORY_NODE )
   MESSAGE( STATUS " " )
@@ -88,16 +91,16 @@ if( LOW_MEMORY_NODE )
   MESSAGE( STATUS " " )
   SET( CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DIS_LOW_MEM" )
   SET( CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DIS_LOW_MEM" )
-endif()
-
-OPTION (COLLECT_ACCOUNT_METADATA "Allows to enable/disable storing account metadata" ON)
-MESSAGE( STATUS "COLLECT_ACCOUNT_METADATA: ${COLLECT_ACCOUNT_METADATA}" )
-if( COLLECT_ACCOUNT_METADATA )
-  MESSAGE( STATUS " " )
-  MESSAGE( STATUS " CONFIGURING FOR ACCOUNT METADATA SUPPORT " )
-  MESSAGE( STATUS " " )
-  SET( CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DCOLLECT_ACCOUNT_METADATA" )
-  SET( CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DCOLLECT_ACCOUNT_METADATA" )
+
+  MESSAGE( STATUS "STORE_ACCOUNT_METADATA: ${STORE_ACCOUNT_METADATA}" )
+  if( STORE_ACCOUNT_METADATA )
+    MESSAGE( STATUS " " )
+    MESSAGE( STATUS " BUT STILL INDEXING ACCOUNT METADATA " )
+    MESSAGE( STATUS " " )
+    SET( CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DCOLLECT_ACCOUNT_METADATA" )
+    SET( CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DCOLLECT_ACCOUNT_METADATA" )
+  endif()
 endif()
 
 OPTION( SUPPORT_COMMENT_CONTENT "Build source with enabled comment content support (ON OR OFF)" OFF )
@@ -144,31 +147,50 @@ if( HIVE_STATIC_BUILD AND ( ( MSVC AND NOT MINGW ) OR APPLE ) )
 endif()
 MESSAGE( STATUS "HIVE_STATIC_BUILD: ${HIVE_STATIC_BUILD}" )
 
-SET( HIVE_LINT_LEVEL "OFF" CACHE STRING "Lint level during Hive build (FULL, HIGH, LOW, OFF)" )
+SET( HIVE_LINT "OFF" CACHE STRING "Enable linting with clang-tidy during compilation" )
 find_program(
   CLANG_TIDY_EXE
   NAMES "clang-tidy"
   DOC "Path to clain-tidy executable"
 )
+
+SET( CLANG_TIDY_IGNORED
+  "-fuchsia-default-arguments\
+  ,-hicpp-*\
+  ,-cert-err60-cpp\
+  ,-llvm-namespace-comment\
+  ,-cert-err09-cpp\
+  ,-cert-err61-cpp\
+  ,-fuchsia-overloaded-operator\
+  ,-misc-throw-by-value-catch-by-reference\
+  ,-misc-unused-parameters\
+  ,-clang-analyzer-core.uninitialized.Assign\
+  ,-llvm-include-order\
+  ,-clang-diagnostic-unused-lambda-capture\
+  ,-misc-macro-parentheses\
+  ,-boost-use-to-string\
+  ,-misc-lambda-function-name\
+  ,-cert-err58-cpp\
+  ,-cert-err34-c\
+  ,-cppcoreguidelines-*\
+  ,-modernize-*\
+  ,-clang-diagnostic-#pragma-messages\
+  ,-google-*\
+  ,-readability-*"
+)
+
 if( NOT CLANG_TIDY_EXE )
   message( STATUS "clang-tidy not found" )
 elseif( VERSION LESS 3.6 )
   messgae( STATUS "clang-tidy found but only supported with CMake version >= 3.6" )
 else()
   message( STATUS "clany-tidy found: ${CLANG_TIDY_EXE}" )
-  if( "${HIVE_LINT_LEVEL}" STREQUAL "FULL" )
-    message( STATUS "Linting level set to: FULL" )
-    set( DO_CLANG_TIDY "${CLANG_TIDY_EXE}" "-checks='*'" )
-  elseif( "${HIVE_LINT_LEVEL}" STREQUAL "HIGH" )
-    message( STATUS "Linting level set to: HIGH" )
-    set( DO_CLANG_TIDY "${CLANG_TIDY_EXE}" "-checks='boost-use-to-string,clang-analyzer-*,cppcoreguidelines-*,llvm-*,misc-*,performance-*,readability-*'" )
-  elseif( "${HIVE_LINT_LEVEL}" STREQUAL "LOW" )
-    message( STATUS "Linting level set to: LOW" )
-    set( DO_CLANG_TIDY "${CLANG_TIDY_EXE}" "-checks='clang-analyzer-*'" )
+  if( HIVE_LINT )
+    message( STATUS "Linting enabled" )
+    set( DO_CLANG_TIDY ${CLANG_TIDY_EXE};-checks=*,${CLANG_TIDY_IGNORED};--warnings-as-errors=* )
   else()
     unset( CLANG_TIDY_EXE )
-    message( STATUS "Linting level set to: OFF" )
+    message( STATUS "Linting disabled" )
   endif()
 endif( NOT CLANG_TIDY_EXE )
@@ -286,7 +308,6 @@ endif()
 # fc/src/compress/miniz.c breaks strict aliasing. The Linux kernel builds with no strict aliasing
 SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-strict-aliasing -Werror -DBOOST_THREAD_DONT_PROVIDE_PROMISE_LAZY" )
 SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fno-strict-aliasing -DBOOST_THREAD_DONT_PROVIDE_PROMISE_LAZY" )
-# -Werror
 
 # external_plugins needs to be compiled first because libraries/app depends on HIVE_EXTERNAL_PLUGINS being fully populated
 add_subdirectory( external_plugins )
@@ -309,7 +330,7 @@ set(CMAKE_INSTALL_PREFIX ${CMAKE_BINARY_DIR}/install)
 SET(CPACK_PACKAGE_DIRECTORY "${CMAKE_INSTALL_PREFIX}")
 set(CPACK_PACKAGE_NAME "hive")
-set(CPACK_PACKAGE_VENDOR "Steemit, Inc.")
+set(CPACK_PACKAGE_VENDOR "Hive Community")
 set(CPACK_PACKAGE_VERSION_MAJOR "${VERSION_MAJOR}")
 set(CPACK_PACKAGE_VERSION_MINOR "${VERSION_MINOR}")
 set(CPACK_PACKAGE_VERSION_PATCH "${VERSION_PATCH}")
@@ -364,6 +385,9 @@ endif()
 if( LOW_MEMORY_NODE )
   MESSAGE( STATUS "\n\n CONFIGURED FOR LOW MEMORY NODE \n\n" )
+  if( STORE_ACCOUNT_METADATA )
+    MESSAGE( STATUS "\n\n BUT STILL STORING ACCOUNT METADATA \n\n" )
+  endif()
 else()
   MESSAGE( STATUS "\n\n CONFIGURED FOR FULL NODE \n\n" )
 endif()
...
@@ -4,6 +4,7 @@ ARG LOW_MEMORY_NODE=ON
 ARG CLEAR_VOTES=ON
 ARG BUILD_HIVE_TESTNET=OFF
 ARG ENABLE_MIRA=OFF
+ARG HIVE_LINT=OFF
 
 FROM registry.gitlab.syncad.com/hive/hive/hive-baseenv:latest AS builder
 ENV src_dir="/usr/local/src/hive"
@@ -21,7 +22,7 @@ FROM builder AS consensus_node_builder
 RUN \
   cd ${src_dir} && \
-  ${src_dir}/ciscripts/build.sh "ON" "ON" "OFF" "OFF"
+  ${src_dir}/ciscripts/build.sh "ON" "ON" "OFF" "OFF" "ON"
 
 ###################################################################################################
 ## CONSENSUS NODE CONFIGURATION ##
@@ -93,15 +94,17 @@ ARG LOW_MEMORY_NODE
 ARG CLEAR_VOTES
 ARG BUILD_HIVE_TESTNET
 ARG ENABLE_MIRA
+ARG HIVE_LINT
 
 ENV LOW_MEMORY_NODE=${LOW_MEMORY_NODE}
 ENV CLEAR_VOTES=${CLEAR_VOTES}
 ENV BUILD_HIVE_TESTNET=${BUILD_HIVE_TESTNET}
 ENV ENABLE_MIRA=${ENABLE_MIRA}
+ENV HIVE_LINT=${HIVE_LINT}
 
 RUN \
   cd ${src_dir} && \
-  ${src_dir}/ciscripts/build.sh ${LOW_MEMORY_NODE} ${CLEAR_VOTES} ${BUILD_HIVE_TESTNET} ${ENABLE_MIRA}
+  ${src_dir}/ciscripts/build.sh ${LOW_MEMORY_NODE} ${CLEAR_VOTES} ${BUILD_HIVE_TESTNET} ${ENABLE_MIRA} ${HIVE_LINT}
 
 ###################################################################################################
 ## GENERAL NODE CONFIGURATION ##
@@ -137,16 +140,20 @@ FROM builder AS testnet_node_builder
 ARG LOW_MEMORY_NODE=OFF
 ARG CLEAR_VOTES=OFF
 ARG ENABLE_MIRA=OFF
+ARG HIVE_LINT=ON
 
 ENV LOW_MEMORY_NODE=${LOW_MEMORY_NODE}
 ENV CLEAR_VOTES=${CLEAR_VOTES}
 ENV BUILD_HIVE_TESTNET="ON"
 ENV ENABLE_MIRA=${ENABLE_MIRA}
+ENV HIVE_LINT=${HIVE_LINT}
 
 RUN \
   cd ${src_dir} && \
-  ${src_dir}/ciscripts/build.sh ${LOW_MEMORY_NODE} ${CLEAR_VOTES} ${BUILD_HIVE_TESTNET} ${ENABLE_MIRA} && \
   apt-get update && \
+  apt-get install -y clang && \
+  apt-get install -y clang-tidy && \
+  ${src_dir}/ciscripts/build.sh ${LOW_MEMORY_NODE} ${CLEAR_VOTES} ${BUILD_HIVE_TESTNET} ${ENABLE_MIRA} ${HIVE_LINT} && \
   apt-get install -y screen && \
   pip3 install -U secp256k1prp && \
   git clone https://gitlab.syncad.com/hive/beem.git && \
...
@@ -39,6 +39,8 @@ RUN \
   libbz2-dev \
   liblz4-dev \
   libzstd-dev \
+  clang \
+  clang-tidy \
   && \
   apt-get clean && \
   rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && \
...
@@ -5,12 +5,14 @@ LOW_MEMORY_NODE=$1
 CLEAR_VOTES=$2
 BUILD_HIVE_TESTNET=$3
 ENABLE_MIRA=$4
+HIVE_LINT=${5:OFF}
 
 echo "PWD=${PWD}"
 echo "LOW_MEMORY_NODE=${LOW_MEMORY_NODE}"
 echo "CLEAR_VOTES=${CLEAR_VOTES}"
 echo "BUILD_HIVE_TESTNET=${BUILD_HIVE_TESTNET}"
 echo "ENABLE_MIRA=${ENABLE_MIRA}"
+echo "HIVE_LINT=${HIVE_LINT}"
 
 BUILD_DIR="${PWD}/build"
 CMAKE_BUILD_TYPE=Release
@@ -28,6 +30,7 @@ cmake \
   -DBUILD_HIVE_TESTNET=${BUILD_HIVE_TESTNET} \
   -DENABLE_MIRA=${ENABLE_MIRA} \
   -DHIVE_STATIC_BUILD=ON \
+  -DHIVE_LINT=${HIVE_LINT} \
   ..
 make -j$(nproc)
 make install
...
#!/bin/bash
curl --silent -XPOST -H "Authorization: token $GITHUB_SECRET" https://api.github.com/repos/steemit/hive/statuses/$(git rev-parse HEAD) -d "{
\"state\": \"failure\",
\"target_url\": \"${BUILD_URL}\",
\"description\": \"JenkinsCI reports the build has failed!\",
\"context\": \"jenkins-ci-steemit\"
}"
rm -rf $WORKSPACE/*
# make docker cleanup after itself and delete all exited containers
sudo docker rm -v $(docker ps -a -q -f status=exited) || true
#!/bin/bash
curl --silent -XPOST -H "Authorization: token $GITHUB_SECRET" https://api.github.com/repos/steemit/hive/statuses/$(git rev-parse HEAD) -d "{
\"state\": \"pending\",
\"target_url\": \"${BUILD_URL}\",
\"description\": \"The build is now pending in jenkinsci!\",
\"context\": \"jenkins-ci-steemit\"
}"
#/bin/bash
curl --silent -XPOST -H "Authorization: token $GITHUB_SECRET" https://api.github.com/repos/steemit/hive/statuses/$(git rev-parse HEAD) -d "{
\"state\": \"success\",
\"target_url\": \"${BUILD_URL}\",
\"description\": \"Jenkins-CI reports build succeeded!!\",
\"context\": \"jenkins-ci-steemit\"
}"
rm -rf $WORKSPACE/*
# make docker cleanup after itself and delete all exited containers
sudo docker rm -v $(docker ps -a -q -f status=exited) || true
#!/bin/bash
set -e
sudo docker build --build-arg CI_BUILD=1 --build-arg BUILD_STEP=1 -t=steemit/hive:tests .
sudo docker run -v $WORKSPACE:/var/jenkins steemit/hive:tests cp -r /var/cobertura /var/jenkins
# make docker cleanup after itself and delete all exited containers
sudo docker rm -v $(docker ps -a -q -f status=exited) || true
\ No newline at end of file
@@ -16,6 +16,17 @@ execute_unittest_group()
   fi
 }
 
+# $1 ctest test name
+execute_exactly_one_test()
+{
+  local ctest_test_name=$1
+  echo "Start ctest test '${ctest_test_name}'"
+  if ! ctest -R ^${ctest_test_name}$ --output-on-failure -vv
+  then
+    exit 1
+  fi
+}
+
 execute_hive_functional()
 {
   echo "Start hive functional tests"
@@ -51,8 +62,8 @@ echo "
-execute_unittest_group plugin_test
-execute_unittest_group chain_test
+execute_exactly_one_test all_plugin_tests
+execute_exactly_one_test all_chain_tests
 
 execute_hive_functional
...
#!/bin/bash
set -e
/bin/bash $WORKSPACE/ciscripts/buildpending.sh
if /bin/bash $WORKSPACE/ciscripts/buildscript.sh; then
echo BUILD SUCCESS
else
echo BUILD FAILURE
exit 1
fi
#!/bin/bash
set -e
if /bin/bash $WORKSPACE/ciscripts/buildtests.sh; then
echo BUILD SUCCESS
else
echo BUILD FAILURE
exit 1
fi
\ No newline at end of file
#!/bin/bash
echo hived-testnet: getting deployment scripts from external source
wget -qO- $SCRIPTURL/master/$LAUNCHENV/$APP/testnetinit.sh > /usr/local/bin/testnetinit.sh
wget -qO- $SCRIPTURL/master/$LAUNCHENV/$APP/testnet.config.ini > /etc/hived/testnet.config.ini
wget -qO- $SCRIPTURL/master/$LAUNCHENV/$APP/fastgen.config.ini > /etc/hived/fastgen.config.ini
chmod +x /usr/local/bin/testnetinit.sh
echo hived-testnet: launching testnetinit script
/usr/local/bin/testnetinit.sh
@@ -119,7 +119,14 @@ class application_impl {
 };
 
 application::application()
-:my(new application_impl()), main_io_handler( true/*allow_close_when_signal_is_received*/, [ this ](){ shutdown(); } )
+: pre_shutdown_plugins(
+    []( abstract_plugin* a, abstract_plugin* b )
+    {
+      assert( a && b );
+      return a->get_pre_shutdown_order() > b->get_pre_shutdown_order();
+    }
+  ),
+  my(new application_impl()), main_io_handler( true/*allow_close_when_signal_is_received*/, [ this ](){ finish(); } )
 {
 }
@@ -322,6 +329,18 @@ bool application::initialize_impl(int argc, char** argv, vector<abstract_plugin*
   }
 }
 
+void application::pre_shutdown() {
+  std::cout << "Before shutting down...\n";
+
+  for( auto& plugin : pre_shutdown_plugins )
+  {
+    plugin->pre_shutdown();
+  }
+
+  pre_shutdown_plugins.clear();
+}
+
 void application::shutdown() {
   std::cout << "Shutting down...\n";
@@ -339,6 +358,12 @@ void application::shutdown() {
   plugins.clear();
 }
 
+void application::finish()
+{
+  pre_shutdown();
+  shutdown();
+}
+
 void application::exec() {
   if( !is_interrupt_request() )
...
@@ -76,8 +76,12 @@ namespace appbase {
       }
 
       void startup();
+      void pre_shutdown();
       void shutdown();
 
+      void finish();
+
       /**
        * Wait until quit(), SIGINT or SIGTERM and then shutdown
        */
@@ -162,13 +166,21 @@
       */
       ///@{
       void plugin_initialized( abstract_plugin& plug ) { initialized_plugins.push_back( &plug ); }
-      void plugin_started( abstract_plugin& plug ) { running_plugins.push_back( &plug ); }
+      void plugin_started( abstract_plugin& plug )
+      {
+        running_plugins.push_back( &plug );
+        pre_shutdown_plugins.insert( &plug );
+      }
       ///@}
 
     private:
       application(); ///< private because application is a singlton that should be accessed via instance()
 
       map< string, std::shared_ptr< abstract_plugin > > plugins; ///< all registered plugins
       vector< abstract_plugin* > initialized_plugins; ///< stored in the order they were started running
+
+      using pre_shutdown_cmp = std::function< bool ( abstract_plugin*, abstract_plugin* ) >;
+      using pre_shutdown_multiset = std::multiset< abstract_plugin*, pre_shutdown_cmp >;
+      pre_shutdown_multiset pre_shutdown_plugins; ///< stored in the order what is necessary in order to close every plugin in safe way
+
       vector< abstract_plugin* > running_plugins; ///< stored in the order they were started running
 
       std::string version_info;
       std::string app_name = "appbase";
@@ -195,6 +207,7 @@ public:
       virtual ~plugin() {}
 
+      virtual pre_shutdown_order get_pre_shutdown_order() const override { return _pre_shutdown_order; }
       virtual state get_state() const override { return _state; }
       virtual const std::string& get_name()const override final { return Impl::name(); }
@@ -230,6 +243,23 @@ BOOST_THROW_EXCEPTION( std::runtime_error("Initial state was not initialized, so final state cannot be started.") );
       }
 
+      virtual void plugin_pre_shutdown() override
+      {
+        /*
+          By default most plugins don't need any pre-actions during shutdown.
+          A problem appears when P2P plugin receives and sends data into dependent plugins.
+          In this case is necessary to close P2P plugin as soon as possible.
+        */
+      }
+
+      virtual void pre_shutdown() override final
+      {
+        if( _state == started )
+        {
+          this->plugin_pre_shutdown();
+        }
+      }
+
       virtual void shutdown() override final
       {
         if( _state == started )
@@ -243,7 +273,10 @@ protected:
       plugin() = default;
 
+      virtual void set_pre_shutdown_order( pre_shutdown_order val ) { _pre_shutdown_order = val; }
+
     private:
+      pre_shutdown_order _pre_shutdown_order = abstract_plugin::basic_order;
       state _state = abstract_plugin::registered;
   };
 }
@@ -35,13 +35,20 @@ namespace appbase {
         stopped ///< the plugin is no longer running
       };
 
+      enum pre_shutdown_order {
+        basic_order = 0, ///most plugins don't need to be prepared before another plugins, therefore it doesn't matter when they will be closed
+        p2p_order = 1 ///p2p plugin has to reject/break all connections at the start
+      };
+
       virtual ~abstract_plugin(){}
 
+      virtual pre_shutdown_order get_pre_shutdown_order()const = 0;
       virtual state get_state()const = 0;
       virtual const std::string& get_name()const = 0;
       virtual void set_program_options( options_description& cli, options_description& cfg ) = 0;
       virtual void initialize(const variables_map& options) = 0;
       virtual void startup() = 0;
+      virtual void pre_shutdown() = 0;
       virtual void shutdown() = 0;
 
     protected:
@@ -64,8 +71,13 @@ namespace appbase {
       /** Abstract method to be reimplemented in final plugin implementation.
        It is a part of shutdown process triggerred by main application.
       */
+      virtual void plugin_pre_shutdown() = 0;
+      /** Abstract method to be reimplemented in final plugin implementation.
+       It is a part of shutdown process triggerred by main application.
+      */
       virtual void plugin_shutdown() = 0;
+      virtual void set_pre_shutdown_order( pre_shutdown_order val ) = 0;
   };
 
   template<typename Impl>
...
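The appbase changes above split shutdown into two phases: application::finish() first calls pre_shutdown(), which walks pre_shutdown_plugins in descending pre_shutdown_order (so p2p_order runs before basic_order), and only then performs the regular shutdown(). The sketch below illustrates how a plugin built on this API might opt into the early phase. Only set_pre_shutdown_order, plugin_pre_shutdown and pre_shutdown_order::p2p_order come from the diff; the example class, its hooks and the include path are assumptions, and the usual hive plugin boilerplate (dependency macros, registration) is omitted.

// Illustrative sketch only (not part of the changeset): a plugin whose pre-shutdown
// step must run before ordinary plugins, mirroring the P2P case described in the
// plugin_pre_shutdown() comment above.
#include <appbase/application.hpp>

class example_net_plugin : public appbase::plugin< example_net_plugin >
{
  public:
    // appbase resolves the plugin name through Impl::name() (see get_name() above).
    static const std::string& name() { static std::string n = "example_net_plugin"; return n; }

    virtual void set_program_options( appbase::options_description&, appbase::options_description& ) override {}

  protected:
    void plugin_initialize( const appbase::variables_map& )   // assumed appbase hook
    {
      // Ask the application to run this plugin's pre_shutdown() ahead of
      // plugins left at abstract_plugin::basic_order.
      set_pre_shutdown_order( appbase::abstract_plugin::p2p_order );
    }

    void plugin_startup() {}                                   // assumed appbase hook

    // Called from application::pre_shutdown() (via finish()) before any
    // plugin_shutdown() runs: stop accepting connections and stop feeding
    // data into dependent plugins here.
    virtual void plugin_pre_shutdown() override {}

    void plugin_shutdown() {}                                  // assumed appbase hook
};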
#pragma once
#include <fc/log/logger.hpp>
#include <vector>
#include <atomic>
#include <future>
namespace hive {
struct shutdown_state
{
using ptr_shutdown_state = std::shared_ptr< shutdown_state >;
std::promise<void> promise;
std::shared_future<void> future;
std::atomic_uint activity;
};
class shutdown_mgr
{
private:
std::string name;
std::atomic_bool running;
std::vector< shutdown_state::ptr_shutdown_state > states;
const char* fStatus(std::future_status s)
{
switch(s)
{
case std::future_status::ready:
return "ready";
case std::future_status::deferred:
return "deferred";
case std::future_status::timeout:
return "timeout";
default:
return "unknown";
}
}
void wait( const shutdown_state& state )
{
FC_ASSERT( !get_running().load(), "Lack of shutdown" );
std::future_status res;
uint32_t cnt = 0;
uint32_t time_maximum = 300;//30 seconds
do
{
if( state.activity.load() != 0 )
{
res = state.future.wait_for( std::chrono::milliseconds(100) );
if( res != std::future_status::ready )
{
ilog("finishing: ${s}, future status: ${fs}", ("s", fStatus( res ) )("fs", std::to_string( state.future.valid() ) ) );
}
FC_ASSERT( ++cnt <= time_maximum, "Closing the ${name} is terminated", ( "name", name ) );
}
else
{
res = std::future_status::ready;
}
}
while( res != std::future_status::ready );
}
public:
shutdown_mgr( std::string _name, size_t _nr_actions )
: name( _name ), running( true )
{
for( size_t i = 0; i < _nr_actions; ++i )
{
shutdown_state::ptr_shutdown_state _state( new shutdown_state() );
_state->future = std::shared_future<void>( _state->promise.get_future() );
_state->activity.store( 0 );
states.emplace_back( _state );
}
}
void prepare_shutdown()
{
running.store( false );
}
const std::atomic_bool& get_running() const
{
return running;
}
shutdown_state& get_state( size_t idx )
{
FC_ASSERT( idx < states.size(), "Incorrect index - lack of correct state" );
shutdown_state* _state = states[idx].get();
FC_ASSERT( _state, "State has NULL value" );
return *_state;
}
void wait()
{
if( get_running().load() )
return;
for( auto& state : states )
{
shutdown_state* _state = state.get();
FC_ASSERT( _state, "State has NULL value" );
wait( *_state );
}
}
};
class action_catcher
{
private:
const std::atomic_bool& running;
shutdown_state& state;
public:
action_catcher( const std::atomic_bool& _running, shutdown_state& _state ):
running( _running ), state( _state )
{
state.activity.store( state.activity.load() + 1 );
}
~action_catcher()
{
state.activity.store( state.activity.load() - 1 );
if( running.load() == false && state.future.valid() == false )
{
ilog("Sending notification to shutdown barrier.");
try
{
state.promise.set_value();
}
catch( const std::future_error& e )
{
ilog("action_catcher: future error exception. ( Code: ${c} )( Message: ${m} )", ( "c", e.code().value() )( "m", e.what() ) );
}
catch(...)
{
ilog("action_catcher: unknown error exception." );
}
}
}
};
}
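The shutdown_mgr / action_catcher pair above gives long-running workers a way to cooperate with this pre-shutdown flow: each in-flight action is wrapped in an action_catcher, and the closing side calls prepare_shutdown() followed by wait(), which blocks until the activity counters drain (subject to the 30-second cap enforced inside wait()). A minimal usage sketch follows; the surrounding function, the single-slot layout and the threading comments are assumptions, only the shutdown_mgr/action_catcher API is taken from the header above.

// Sketch only: wiring a worker to shutdown_mgr. An #include of the header above is
// assumed (its installed path is not shown in this diff). In real code the worker
// and the shutdown side would normally run on different threads.
void example_worker_and_shutdown( hive::shutdown_mgr& mgr )
{
  // Worker side: every pass registers itself in slot 0, so a pending shutdown
  // can wait for it to finish before tearing the plugin down.
  if( mgr.get_running().load() )
  {
    hive::action_catcher guard( mgr.get_running(), mgr.get_state( 0 ) );
    // ... perform the actual work here ...
  } // ~action_catcher fulfils the promise if a shutdown was requested meanwhile

  // Shutdown side (e.g. from a plugin_pre_shutdown() implementation):
  // stop accepting new work, then block until registered actions have drained.
  mgr.prepare_shutdown();
  mgr.wait();
}

// Construction, e.g. as a plugin member: one named manager guarding one action.
// hive::shutdown_mgr mgr( "p2p", 1 );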
@@ -152,10 +152,10 @@ namespace hive { namespace chain {
     my->block_file = file;
     my->index_file = fc::path( file.generic_string() + ".index" );
 
-    my->block_log_fd = ::open(my->block_file.generic_string().c_str(), O_RDWR | O_APPEND | O_CREAT, 0644);
+    my->block_log_fd = ::open(my->block_file.generic_string().c_str(), O_RDWR | O_APPEND | O_CREAT | O_CLOEXEC, 0644);
     if (my->block_log_fd == -1)
       FC_THROW("Error opening block log file ${filename}: ${error}", ("filename", my->block_file)("error", strerror(errno)));
-    my->block_index_fd = ::open(my->index_file.generic_string().c_str(), O_RDWR | O_APPEND | O_CREAT, 0644);
+    my->block_index_fd = ::open(my->index_file.generic_string().c_str(), O_RDWR | O_APPEND | O_CREAT | O_CLOEXEC, 0644);
     if (my->block_index_fd == -1)
       FC_THROW("Error opening block index file ${filename}: ${error}", ("filename", my->index_file)("error", strerror(errno)));
     my->block_log_size = get_file_size(my->block_log_fd);
@@ -462,7 +462,7 @@ namespace hive { namespace chain {
     //create and size the new temporary index file (block_log.index.new)
     fc::path new_index_file(my->index_file.generic_string() + ".new");
     const size_t block_index_size = block_num * sizeof(uint64_t);
-    int new_index_fd = ::open(new_index_file.generic_string().c_str(), O_RDWR | O_CREAT | O_TRUNC, 0644);
+    int new_index_fd = ::open(new_index_file.generic_string().c_str(), O_RDWR | O_CREAT | O_TRUNC | O_CLOEXEC, 0644);
     if (new_index_fd == -1)
       FC_THROW("Error opening temporary new index file ${filename}: ${error}", ("filename", new_index_file.generic_string())("error", strerror(errno)));
     if (ftruncate(new_index_fd, block_index_size) == -1)
@@ -572,7 +572,7 @@ namespace hive { namespace chain {
 #endif //NOT USE_BACKWARD_INDEX
 
     ilog("opening new block index");
-    my->block_index_fd = ::open(my->index_file.generic_string().c_str(), O_RDWR | O_APPEND | O_CREAT, 0644);
+    my->block_index_fd = ::open(my->index_file.generic_string().c_str(), O_RDWR | O_APPEND | O_CREAT | O_CLOEXEC, 0644);
     if (my->block_index_fd == -1)
       FC_THROW("Error opening block index file ${filename}: ${error}", ("filename", my->index_file)("error", strerror(errno)));
     //report size of new index file and verify it is the right size for the blocks in block log
...
...@@ -55,36 +55,24 @@ ...@@ -55,36 +55,24 @@
#include <stdlib.h> #include <stdlib.h>
long hf24_time() long next_hf_time()
{ {
long hf24Time = // current "next hardfork" is HF25
long hfTime =
#ifdef IS_TEST_NET #ifdef IS_TEST_NET
1588334400; // Friday, 1 May 2020 12:00:00 GMT 1588334400; // Friday, 1 May 2020 12:00:00 GMT
#else #else
1601992800; // Tuesday, 06-Oct-2020 14:00:00 UTC 1640952000; // Thursday, 31 December 2021 12:00:00 GMT
#endif /// IS_TEST_NET #endif /// IS_TEST_NET
const char* value = getenv("HIVE_HF24_TIME");
if(value != nullptr)
{
hf24Time = atol(value);
ilog("HIVE_HF24_TIME has been specified through environment variable as ${v}, long value: ${l}", ("v", value)("l", hf24Time));
}
return hf24Time;
}
long hf23_time() const char* value = getenv("HIVE_HF25_TIME");
{
long hf23Time = 1584712800; // Friday, 20 March 2020 14:00:00 GMT
const char* value = getenv("HIVE_HF23_TIME");
if(value != nullptr) if(value != nullptr)
{ {
hf23Time = atol(value); hfTime = atol(value);
ilog("HIVE_HF23_TIME has been specified through environment variable as ${v}, long value: ${l}", ("v", value)("l", hf23Time)); ilog("HIVE_HF25_TIME has been specified through environment variable as ${v}, long value: ${l}", ("v", value)("l", hfTime));
} }
return hf23Time; return hfTime;
} }
namespace hive { namespace chain { namespace hive { namespace chain {
...@@ -117,8 +105,6 @@ FC_REFLECT( hive::chain::db_schema, (types)(object_types)(operation_type)(custom ...@@ -117,8 +105,6 @@ FC_REFLECT( hive::chain::db_schema, (types)(object_types)(operation_type)(custom
namespace hive { namespace chain { namespace hive { namespace chain {
using boost::container::flat_set;
struct reward_fund_context struct reward_fund_context
{ {
uint128_t recent_claims = 0; uint128_t recent_claims = 0;
...@@ -156,7 +142,7 @@ void database::open( const open_args& args ) ...@@ -156,7 +142,7 @@ void database::open( const open_args& args )
helpers::environment_extension_resources environment_extension( helpers::environment_extension_resources environment_extension(
appbase::app().get_version_string(), appbase::app().get_version_string(),
std::move( appbase::app().get_plugins_names() ), appbase::app().get_plugins_names(),
[]( const std::string& message ){ wlog( message.c_str() ); } []( const std::string& message ){ wlog( message.c_str() ); }
); );
chainbase::database::open( args.shared_mem_dir, args.chainbase_flags, args.shared_file_size, args.database_cfg, &environment_extension ); chainbase::database::open( args.shared_mem_dir, args.chainbase_flags, args.shared_file_size, args.database_cfg, &environment_extension );
...@@ -211,6 +197,9 @@ void database::open( const open_args& args ) ...@@ -211,6 +197,9 @@ void database::open( const open_args& args )
init_hardforks(); // Writes to local state, but reads from db init_hardforks(); // Writes to local state, but reads from db
}); });
#ifdef IS_TEST_NET
/// Leave the chain-id passed to cmdline option.
#else
with_read_lock( [&]() with_read_lock( [&]()
{ {
const auto& hardforks = get_hardfork_property_object(); const auto& hardforks = get_hardfork_property_object();
...@@ -220,7 +209,8 @@ void database::open( const open_args& args ) ...@@ -220,7 +209,8 @@ void database::open( const open_args& args )
set_chain_id(HIVE_CHAIN_ID); set_chain_id(HIVE_CHAIN_ID);
} }
}); });
#endif /// IS_TEST_NET
if (args.benchmark.first) if (args.benchmark.first)
{ {
args.benchmark.second(0, get_abstract_index_cntr()); args.benchmark.second(0, get_abstract_index_cntr());
...@@ -230,7 +220,6 @@ void database::open( const open_args& args ) ...@@ -230,7 +220,6 @@ void database::open( const open_args& args )
_shared_file_full_threshold = args.shared_file_full_threshold; _shared_file_full_threshold = args.shared_file_full_threshold;
_shared_file_scale_rate = args.shared_file_scale_rate; _shared_file_scale_rate = args.shared_file_scale_rate;
_sps_remove_threshold = args.sps_remove_threshold;
auto account = find< account_object, by_name >( "nijeah" ); auto account = find< account_object, by_name >( "nijeah" );
if( account != nullptr && account->to_withdraw < 0 ) if( account != nullptr && account->to_withdraw < 0 )
...@@ -647,8 +636,8 @@ std::vector<signed_block> database::fetch_block_range_unlocked( const uint32_t s ...@@ -647,8 +636,8 @@ std::vector<signed_block> database::fetch_block_range_unlocked( const uint32_t s
if (!result.empty()) if (!result.empty())
idump((result.front().block_num())(result.back().block_num())); idump((result.front().block_num())(result.back().block_num()));
result.reserve(result.size() + fork_items.size()); result.reserve(result.size() + fork_items.size());
for (const fork_item& item : fork_items) for (fork_item& item : fork_items)
result.push_back(std::move(item.data)); result.emplace_back(std::move(item.data));
return result; return result;
} FC_LOG_AND_RETHROW() } } FC_LOG_AND_RETHROW() }
...@@ -686,6 +675,24 @@ chain_id_type database::get_chain_id() const ...@@ -686,6 +675,24 @@ chain_id_type database::get_chain_id() const
return hive_chain_id; return hive_chain_id;
} }
chain_id_type database::get_old_chain_id() const
{
#ifdef IS_TEST_NET
return hive_chain_id; /// In testnet always use the chain-id passed as hived option
#else
return STEEM_CHAIN_ID;
#endif /// IS_TEST_NET
}
chain_id_type database::get_new_chain_id() const
{
#ifdef IS_TEST_NET
return hive_chain_id; /// In testnet always use the chain-id passed as hived option
#else
return HIVE_CHAIN_ID;
#endif /// IS_TEST_NET
}
void database::set_chain_id( const chain_id_type& chain_id ) void database::set_chain_id( const chain_id_type& chain_id )
{ {
hive_chain_id = chain_id; hive_chain_id = chain_id;
...@@ -693,7 +700,7 @@ void database::set_chain_id( const chain_id_type& chain_id ) ...@@ -693,7 +700,7 @@ void database::set_chain_id( const chain_id_type& chain_id )
idump( (hive_chain_id) ); idump( (hive_chain_id) );
} }
void database::foreach_block(std::function<bool(const signed_block_header&, const signed_block&)> processor) const void database::foreach_block(const std::function<bool(const signed_block_header&, const signed_block&)>& processor) const
{ {
if(!_block_log.head()) if(!_block_log.head())
return; return;
...@@ -1089,6 +1096,7 @@ void database::_maybe_warn_multiple_production( uint32_t height )const ...@@ -1089,6 +1096,7 @@ void database::_maybe_warn_multiple_production( uint32_t height )const
if( blocks.size() > 1 ) if( blocks.size() > 1 )
{ {
vector< std::pair< account_name_type, fc::time_point_sec > > witness_time_pairs; vector< std::pair< account_name_type, fc::time_point_sec > > witness_time_pairs;
witness_time_pairs.reserve( blocks.size() );
for( const auto& b : blocks ) for( const auto& b : blocks )
{ {
witness_time_pairs.push_back( std::make_pair( b->data.witness, b->data.timestamp ) ); witness_time_pairs.push_back( std::make_pair( b->data.witness, b->data.timestamp ) );
...@@ -1577,6 +1585,7 @@ asset database::adjust_account_vesting_balance(const account_object& to_account, ...@@ -1577,6 +1585,7 @@ asset database::adjust_account_vesting_balance(const account_object& to_account,
else else
adjust_balance( to_account, new_vesting ); adjust_balance( to_account, new_vesting );
// Update global vesting pool numbers. // Update global vesting pool numbers.
const auto& smt = get< smt_token_object, by_symbol >( liquid.symbol );
modify( smt, [&]( smt_token_object& smt_object ) modify( smt, [&]( smt_token_object& smt_object )
{ {
if( to_reward_balance ) if( to_reward_balance )
...@@ -1647,7 +1656,7 @@ asset database::adjust_account_vesting_balance(const account_object& to_account, ...@@ -1647,7 +1656,7 @@ asset database::adjust_account_vesting_balance(const account_object& to_account,
// we modify the database. // we modify the database.
// This allows us to implement virtual op pre-notifications in the Before function. // This allows us to implement virtual op pre-notifications in the Before function.
template< typename Before > template< typename Before >
asset create_vesting2( database& db, const account_object& to_account, asset liquid, bool to_reward_balance, Before&& before_vesting_callback ) asset create_vesting2( database& db, const account_object& to_account, const asset& liquid, bool to_reward_balance, Before&& before_vesting_callback )
{ {
try try
{ {
...@@ -1666,7 +1675,7 @@ asset create_vesting2( database& db, const account_object& to_account, asset liq ...@@ -1666,7 +1675,7 @@ asset create_vesting2( database& db, const account_object& to_account, asset liq
* @param to_account - the account to receive the new vesting shares * @param to_account - the account to receive the new vesting shares
* @param liquid - HIVE or liquid SMT to be converted to vesting shares * @param liquid - HIVE or liquid SMT to be converted to vesting shares
*/ */
asset database::create_vesting( const account_object& to_account, asset liquid, bool to_reward_balance ) asset database::create_vesting( const account_object& to_account, const asset& liquid, bool to_reward_balance )
{ {
return create_vesting2( *this, to_account, liquid, to_reward_balance, []( asset vests_created ) {} ); return create_vesting2( *this, to_account, liquid, to_reward_balance, []( asset vests_created ) {} );
} }
@@ -1699,13 +1708,13 @@ void database::adjust_proxied_witness_votes( const account_object& a,
const std::array< share_type, HIVE_MAX_PROXY_RECURSION_DEPTH+1 >& delta,
int depth )
{
-if( a.proxy != HIVE_PROXY_TO_SELF_ACCOUNT )
+if( a.has_proxy() )
{
/// nested proxies are not supported, vote will not propagate
if( depth >= HIVE_MAX_PROXY_RECURSION_DEPTH )
return;
-const auto& proxy = get_account( a.proxy );
+const auto& proxy = get_account( a.get_proxy() );
modify( proxy, [&]( account_object& a )
{
@@ -1728,13 +1737,13 @@ void database::adjust_proxied_witness_votes( const account_object& a,
void database::adjust_proxied_witness_votes( const account_object& a, share_type delta, int depth )
{
-if( a.proxy != HIVE_PROXY_TO_SELF_ACCOUNT )
+if( a.has_proxy() )
{
/// nested proxies are not supported, vote will not propagate
if( depth >= HIVE_MAX_PROXY_RECURSION_DEPTH )
return;
-const auto& proxy = get_account( a.proxy );
+const auto& proxy = get_account( a.get_proxy() );
modify( proxy, [&]( account_object& a )
{
@@ -1749,7 +1758,7 @@ void database::adjust_proxied_witness_votes( const account_object& a, share_type
}
}
-void database::adjust_witness_votes( const account_object& a, share_type delta )
+void database::adjust_witness_votes( const account_object& a, const share_type& delta )
{
const auto& vidx = get_index< witness_vote_index >().indices().get< by_account_witness >();
auto itr = vidx.lower_bound( boost::make_tuple( a.name, account_name_type() ) );
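These hunks replace direct comparisons against the HIVE_PROXY_TO_SELF_ACCOUNT sentinel with accessor calls (has_proxy, get_proxy; clear_proxy appears in later hunks). A rough sketch of that encapsulation, using a simplified stand-in class rather than the real account_object, with an empty string playing the role of the sentinel:

  #include <cassert>
  #include <string>

  class proxy_account
  {
    std::string proxy_;                       // empty string stands in for "proxy to self"
  public:
    bool has_proxy() const { return !proxy_.empty(); }
    const std::string& get_proxy() const { return proxy_; }
    void set_proxy( const std::string& p ) { proxy_ = p; }
    void clear_proxy() { proxy_.clear(); }    // replaces assigning the sentinel directly
  };

  int main()
  {
    proxy_account a;
    assert( !a.has_proxy() );                 // instead of a.proxy != SENTINEL
    a.set_proxy( "alice" );
    assert( a.has_proxy() && a.get_proxy() == "alice" );
    a.clear_proxy();
    assert( !a.has_proxy() );
  }

Keeping the sentinel behind accessors means its representation can change without touching every call site.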
@@ -2124,7 +2133,7 @@ void database::gather_balance( const std::string& name, const asset& balance, co
void database::clear_accounts( const std::set< std::string >& cleared_accounts )
{
auto treasury_name = get_treasury_name();
-for( auto account_name : cleared_accounts )
+for( const auto& account_name : cleared_accounts )
{
const auto* account_ptr = find_account( account_name );
if( account_ptr == nullptr )
@@ -2646,7 +2655,7 @@ share_type database::pay_curators( const comment_object& comment, const comment_
{
unclaimed_rewards -= claim;
const auto& voter = get( item->voter );
-operation vop = curation_reward_operation( voter.name, asset(0, VESTS_SYMBOL), comment_author_name, to_string( comment_cashout.permlink ) );
+operation vop = curation_reward_operation( voter.name, asset(0, VESTS_SYMBOL), comment_author_name, to_string( comment_cashout.permlink ), has_hardfork( HIVE_HARDFORK_0_17__659 ) );
create_vesting2( *this, voter, asset( claim, HIVE_SYMBOL ), has_hardfork( HIVE_HARDFORK_0_17__659 ),
[&]( const asset& reward )
{
@@ -2759,7 +2768,7 @@ share_type database::cashout_comment_helper( util::comment_reward_context& ctx,
auto curators_vesting_payout = calculate_vesting( *this, asset( curation_tokens, HIVE_SYMBOL ), has_hardfork( HIVE_HARDFORK_0_17__659 ) );
operation vop = author_reward_operation( comment_author, to_string( comment_cashout.permlink ), hbd_payout.first, hbd_payout.second, asset( 0, VESTS_SYMBOL ),
-curators_vesting_payout );
+curators_vesting_payout, has_hardfork( HIVE_HARDFORK_0_17__659 ) );
create_vesting2( *this, author, asset( vesting_hive, HIVE_SYMBOL ), has_hardfork( HIVE_HARDFORK_0_17__659 ),
[&]( const asset& vesting_payout )
@@ -3142,7 +3151,7 @@ asset database::get_liquidity_reward()const
return asset( 0, HIVE_SYMBOL );
const auto& props = get_dynamic_global_properties();
-static_assert( HIVE_LIQUIDITY_REWARD_PERIOD_SEC == 60*60, "this code assumes a 1 hour time interval" );
+static_assert( HIVE_LIQUIDITY_REWARD_PERIOD_SEC == 60*60, "this code assumes a 1 hour time interval" ); // NOLINT(misc-redundant-expression)
asset percent( protocol::calc_percent_reward_per_hour< HIVE_LIQUIDITY_APR_PERCENT >( props.virtual_supply.amount ), HIVE_SYMBOL );
return std::max( percent, HIVE_MIN_LIQUIDITY_REWARD );
}
@@ -3190,6 +3199,7 @@ asset database::get_producer_reward()
{
a.balance += pay;
} );
+push_virtual_operation( producer_reward_operation( witness_account.name, pay ) );
}
return pay;
@@ -3255,7 +3265,7 @@ uint16_t database::get_curation_rewards_percent() const
return HIVE_1_PERCENT * 50;
}
-share_type database::pay_reward_funds( share_type reward )
+share_type database::pay_reward_funds( const share_type& reward )
{
const auto& reward_idx = get_index< reward_fund_index, by_id >();
share_type used_rewards = 0;
@@ -3414,7 +3424,7 @@ void database::process_decline_voting_rights()
modify( account, [&]( account_object& a )
{
a.can_vote = false;
-a.proxy = HIVE_PROXY_TO_SELF_ACCOUNT;
+a.clear_proxy();
});
remove( *itr );
@@ -4011,6 +4021,8 @@ void database::_apply_block( const signed_block& next_block )
_current_op_in_trx = 0;
_current_virtual_op = 0;
+remove_expired_governance_votes();
update_global_dynamic_data(next_block);
update_signing_witness(signing_witness, next_block);
@@ -4021,11 +4033,6 @@ void database::_apply_block( const signed_block& next_block )
clear_expired_orders();
clear_expired_delegations();
-if( next_block.block_num() % 100000 == 0 )
-{
-}
update_witness_schedule(*this);
update_median_feed();
@@ -5186,7 +5193,7 @@ void database::adjust_smt_balance( const account_object& owner, const asset& del
bo = &new_balance_object;
}
-modify( *bo, std::move( modifier ) );
+modify( *bo, std::forward<modifier_type>( modifier ) );
if( bo->is_empty() )
{
// Zero balance is the same as non object balance at all.
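The adjust_smt_balance hunk replaces std::move with std::forward<modifier_type>, which preserves the value category of the passed-in callable instead of unconditionally turning it into an rvalue. A small illustrative example of the difference, using generic names rather than the chain's types:

  #include <iostream>
  #include <utility>

  struct functor
  {
    void operator()() &  { std::cout << "called on lvalue\n"; }
    void operator()() && { std::cout << "called on rvalue\n"; }
  };

  template< typename Modifier >
  void apply_modifier( Modifier&& modifier )      // Modifier&& deduces as a forwarding reference
  {
    std::forward< Modifier >( modifier )();       // keeps the caller's value category
    // std::move( modifier )() would force the rvalue overload even for lvalue arguments
  }

  int main()
  {
    functor f;
    apply_modifier( f );          // prints "called on lvalue"
    apply_modifier( functor{} );  // prints "called on rvalue"
  }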
@@ -5296,7 +5303,7 @@ void database::modify_reward_balance( const account_object& a, const asset& valu
void database::set_index_delegate( const std::string& n, index_delegate&& d )
{
-_index_delegate_map[ n ] = std::move( d );
+_index_delegate_map[ n ] = d;
}
const index_delegate& database::get_index_delegate( const std::string& n )
@@ -5625,10 +5632,13 @@ void database::init_hardforks()
FC_ASSERT( HIVE_HARDFORK_1_24 == 24, "Invalid hardfork configuration" );
_hardfork_versions.times[ HIVE_HARDFORK_1_24 ] = fc::time_point_sec( HIVE_HARDFORK_1_24_TIME );
_hardfork_versions.versions[ HIVE_HARDFORK_1_24 ] = HIVE_HARDFORK_1_24_VERSION;
-#ifdef IS_TEST_NET
FC_ASSERT( HIVE_HARDFORK_1_25 == 25, "Invalid hardfork configuration" );
_hardfork_versions.times[ HIVE_HARDFORK_1_25 ] = fc::time_point_sec( HIVE_HARDFORK_1_25_TIME );
_hardfork_versions.versions[ HIVE_HARDFORK_1_25 ] = HIVE_HARDFORK_1_25_VERSION;
+#ifdef IS_TEST_NET
+FC_ASSERT( HIVE_HARDFORK_1_26 == 26, "Invalid hardfork configuration" );
+_hardfork_versions.times[ HIVE_HARDFORK_1_26 ] = fc::time_point_sec( HIVE_HARDFORK_1_26_TIME );
+_hardfork_versions.versions[ HIVE_HARDFORK_1_26 ] = HIVE_HARDFORK_1_26_VERSION;
#endif
const auto& hardforks = get_hardfork_property_object();
@@ -6019,7 +6029,11 @@ void database::apply_hardfork( uint32_t hardfork )
case HIVE_HARDFORK_1_24:
{
restore_accounts( hardforkprotect::get_restored_accounts() );
+#ifdef IS_TEST_NET
+/// Don't change chain_id in testnet build.
+#else
set_chain_id(HIVE_CHAIN_ID);
+#endif /// IS_TEST_NET
break;
}
case HIVE_SMT_HARDFORK:
@@ -6114,7 +6128,7 @@ void database::validate_invariants()const
total_vesting += itr->get_vesting();
total_vesting += itr->get_vest_rewards();
pending_vesting_hive += itr->get_vest_rewards_as_hive();
-total_vsf_votes += ( itr->proxy == HIVE_PROXY_TO_SELF_ACCOUNT ?
+total_vsf_votes += ( !itr->has_proxy() ?
itr->witness_vote_weight() :
( HIVE_MAX_PROXY_RECURSION_DEPTH > 0 ?
itr->proxied_vsf_votes[HIVE_MAX_PROXY_RECURSION_DEPTH - 1] :
@@ -6363,9 +6377,12 @@ void database::perform_vesting_share_split( uint32_t magnitude )
// Need to update all VESTS in accounts and the total VESTS in the dgpo
for( const auto& account : get_index< account_index, by_id >() )
{
+asset old_vesting_shares = account.vesting_shares;
+asset new_vesting_shares = account.vesting_shares;
modify( account, [&]( account_object& a )
{
a.vesting_shares.amount *= magnitude;
+new_vesting_shares = a.vesting_shares;
a.withdrawn *= magnitude;
a.to_withdraw *= magnitude;
a.vesting_withdraw_rate = asset( a.to_withdraw / HIVE_VESTING_WITHDRAW_INTERVALS_PRE_HF_16, VESTS_SYMBOL );
@@ -6375,6 +6392,8 @@ void database::perform_vesting_share_split( uint32_t magnitude )
for( uint32_t i = 0; i < HIVE_MAX_PROXY_RECURSION_DEPTH; ++i )
a.proxied_vsf_votes[i] *= magnitude;
} );
+if (old_vesting_shares != new_vesting_shares)
+push_virtual_operation( vesting_shares_split_operation(account.name, old_vesting_shares, new_vesting_shares) );
}
const auto& comments = get_index< comment_cashout_index >().indices();
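perform_vesting_share_split now snapshots the balance before and after the modify call and pushes a vesting_shares_split_operation only when the value actually changed. A generic sketch of that snapshot-and-notify-on-change pattern; the account struct and notify helper below are made up for illustration:

  #include <cstdint>
  #include <iostream>

  struct account { int64_t vesting_shares = 0; };

  void notify( const char* what, int64_t old_value, int64_t new_value )
  {
    std::cout << what << ": " << old_value << " -> " << new_value << "\n";
  }

  template< typename Modifier >
  void modify( account& a, Modifier&& m ) { m( a ); }

  void split_shares( account& a, uint32_t magnitude )
  {
    const int64_t old_shares = a.vesting_shares;    // snapshot before the mutation
    modify( a, [&]( account& acc ) { acc.vesting_shares *= magnitude; } );
    const int64_t new_shares = a.vesting_shares;    // read back after the mutation
    if( old_shares != new_shares )                  // skip the no-op case (e.g. a zero balance)
      notify( "vesting_shares_split", old_shares, new_shares );
  }

  int main()
  {
    account a{ 100 };
    split_shares( a, 1000000 );   // value changes, so a notification is emitted
    account b{ 0 };
    split_shares( b, 1000000 );   // 0 * magnitude == 0, so no notification
  }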
@@ -6469,7 +6488,7 @@ void database::retally_witness_votes()
// Apply all existing votes by account
for( auto itr = account_idx.begin(); itr != account_idx.end(); ++itr )
{
-if( itr->proxy != HIVE_PROXY_TO_SELF_ACCOUNT ) continue;
+if( itr->has_proxy() ) continue;
const auto& a = *itr;
@@ -6492,7 +6511,7 @@ void database::retally_witness_vote_counts( bool force )
{
const auto& a = *itr;
uint16_t witnesses_voted_for = 0;
-if( force || (a.proxy != HIVE_PROXY_TO_SELF_ACCOUNT ) )
+if( force || a.has_proxy() )
{
const auto& vidx = get_index< witness_vote_index >().indices().get< by_account_witness >();
auto wit_itr = vidx.lower_bound( boost::make_tuple( a.name, account_name_type() ) );
@@ -6517,5 +6536,117 @@ optional< chainbase::database::session >& database::pending_transaction_session(
return _pending_tx_session;
}
void database::remove_expired_governance_votes()
{
if (!has_hardfork(HIVE_HARDFORK_1_25))
return;
const auto& accounts = get_index<account_index, by_governance_vote_expiration_ts>();
auto acc_it = accounts.begin();
time_point_sec block_timestamp = head_block_time();
if (acc_it->get_governance_vote_expiration_ts() >= block_timestamp)
return;
const auto& witness_votes = get_index<witness_vote_index, by_account_witness>();
const auto& proposal_votes = get_index<proposal_vote_index, by_voter_proposal>();
//stats
uint64_t processed_accounts = 0;
uint64_t processed_accounts_with_votes = 0;
uint64_t removed_witness_votes = 0;
uint64_t removed_proposal_votes = 0;
const time_point deleting_start_time = time_point::now();
uint16_t deleted_votes = 0;
constexpr uint16_t TIME_CHECK_INTERVAL = 50; //check current time every X deleted votes in order to not cross MAX_EXECUTION_TIME.
auto stop_loop = [](uint16_t& deleted_votes, const time_point& deleting_start_time) -> bool
{
if (deleted_votes >= TIME_CHECK_INTERVAL)
{
const fc::microseconds MAX_EXECUTION_TIME =
#ifdef IS_TEST_NET
fc::milliseconds(3);
#else
fc::milliseconds(500);
#endif
if (time_point::now() - deleting_start_time >= MAX_EXECUTION_TIME)
return true;
deleted_votes = 0;
}
else
++deleted_votes;
return false;
};
bool max_execution_time_reached = false;
while (!max_execution_time_reached && acc_it != accounts.end() && acc_it->get_governance_vote_expiration_ts() < block_timestamp)
{
++processed_accounts;
auto wvote = witness_votes.lower_bound(acc_it->name);
auto pvote = proposal_votes.lower_bound(acc_it->name);
const account_object& acc = *acc_it;
++acc_it;
if ((wvote == witness_votes.end() || wvote->account != acc.name) &&
(pvote == proposal_votes.end() || pvote->voter != acc.name) &&
!acc.has_proxy())
{
modify(acc, [&](account_object& acc) { acc.set_governance_vote_expired(); });
max_execution_time_reached = stop_loop(deleted_votes, deleting_start_time);
continue;
}
++processed_accounts_with_votes;
if (acc.has_proxy())
{
adjust_proxied_witness_votes( acc, -acc.vesting_shares.amount );
modify(acc, [&](account_object& acc) { acc.clear_proxy(); });
}
while (wvote != witness_votes.end() && wvote->account == acc.name)
{
const witness_vote_object& current = *wvote;
++wvote;
remove(current);
++removed_witness_votes;
modify(acc, [&](account_object& acc) { acc.witnesses_voted_for = 0; });
}
max_execution_time_reached = stop_loop(deleted_votes, deleting_start_time);
while (!max_execution_time_reached && pvote != proposal_votes.end() && pvote->voter == acc.name)
{
const proposal_vote_object& current = *pvote;
++pvote;
remove(current);
++removed_proposal_votes;
max_execution_time_reached = stop_loop(deleted_votes, deleting_start_time);
if (max_execution_time_reached)
break;
}
if (!acc.notified_expired_account())
{
push_virtual_operation( expired_account_notification_operation( acc.name ) );
modify(acc, [&](account_object& acc) { acc.notification_of_expiring_account_sent(); });
}
if (!max_execution_time_reached)
modify(acc, [&](account_object& acc) { acc.set_governance_vote_expired(); });
}
ilog("Removing: ${removed_pvotes} proposal votes, ${removed_wvotes} witness votes. Processed accounts: ${processed}, accounts with votes: ${with_votes}, exec_time: ${exec_time} us, max execution time reached: ${execution_time_limit_reached}",
("removed_pvotes", removed_proposal_votes) ("removed_wvotes", removed_witness_votes) ("processed", processed_accounts) ("with_votes", processed_accounts_with_votes) ("exec_time", (time_point::now() - deleting_start_time).count() ) ("execution_time_limit_reached", max_execution_time_reached));
}
} } //hive::chain
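The new remove_expired_governance_votes sweep above bounds its work per block: it checks the wall clock only every TIME_CHECK_INTERVAL deletions, so the check itself stays cheap, and stops once MAX_EXECUTION_TIME is exceeded, leaving the remainder for a later block. A standalone sketch of that time-budgeted loop, with illustrative constants rather than the chain's actual tuning:

  #include <chrono>
  #include <cstddef>
  #include <iostream>
  #include <vector>

  int main()
  {
    using clock = std::chrono::steady_clock;
    constexpr std::size_t TIME_CHECK_INTERVAL = 50;                     // items between clock reads
    constexpr auto MAX_EXECUTION_TIME = std::chrono::milliseconds( 500 ); // per-batch budget

    std::vector< int > work( 1000000, 1 );   // stand-in for the expired-vote queue
    const auto start = clock::now();
    std::size_t since_last_check = 0;
    std::size_t processed = 0;
    bool budget_exhausted = false;

    for( int item : work )
    {
      (void) item;                            // ... do the real per-item work here ...
      ++processed;
      if( ++since_last_check >= TIME_CHECK_INTERVAL )
      {
        since_last_check = 0;
        if( clock::now() - start >= MAX_EXECUTION_TIME )
        {
          budget_exhausted = true;            // leave the rest for the next pass
          break;
        }
      }
    }

    std::cout << "processed " << processed
              << ", budget exhausted: " << std::boolalpha << budget_exhausted << "\n";
  }

Because the expiration timestamp index lets the sweep resume from where it stopped, a batch cut short by the budget simply continues on a subsequent block.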
@@ -271,7 +271,7 @@ vector<fork_item> fork_database::fetch_block_range_on_main_branch_by_number( con
void fork_database::set_head(shared_ptr<fork_item> h)
{
-_head = h;
+_head = std::move( h );
}
void fork_database::remove(block_id_type id)
...