Compare revisions: 21 commits on source, showing 145 additions and 39 deletions.
......@@ -52,3 +52,4 @@ cmake-build-*/
*.log
venv
.venv
......@@ -331,6 +331,16 @@ replay_with_haf:
  variables:
    PATTERNS_PATH: "$CI_PROJECT_DIR/tests/integration/replay/patterns/no_filter"

replay_with_haf_from_4_9m:
  extends: .replay_step
  variables:
    PATTERNS_PATH: "$CI_PROJECT_DIR/tests/integration/replay/patterns_4_9m/no_filter"

replay_accounts_filtered_with_haf_from_4_9m:
  extends: .replay_step
  variables:
    PATTERNS_PATH: "$CI_PROJECT_DIR/tests/integration/replay/patterns_4_9m/accounts_filtered"

replay_accounts_filtered_with_haf:
  extends: .replay_step
  variables:
......
......@@ -121,7 +121,7 @@ EXPOSE ${HTTP_PORT}
ENTRYPOINT [ "/home/haf_admin/docker_entrypoint.sh" ]
FROM ${CI_REGISTRY_IMAGE}base_instance:base_instance-${BUILD_IMAGE_TAG} as instance
FROM ${CI_REGISTRY_IMAGE}base_instance:${BUILD_IMAGE_TAG} as instance
# Embedded postgres service
EXPOSE 5432
......
......@@ -39,7 +39,7 @@ Now you can either sync your hived node from scratch via the Hive P2P network, o
To start your HAF server, type:
```
../haf/scripts/run_hived_img.sh registry.gitlab.syncad.com/hive/haf/instance:instance-local-develop --name=haf-instance --webserver-http-endpoint=8091 --webserver-ws-endpoint=8090 --data-dir=$(pwd)/haf-datadir --replay
../haf/scripts/run_hived_img.sh registry.gitlab.syncad.com/hive/haf/instance:local-develop --name=haf-instance --webserver-http-endpoint=8091 --webserver-ws-endpoint=8090 --data-dir=$(pwd)/haf-datadir --replay
```
If you don't have a local block_log file, just remove the `--replay` option from the command line above to get the blockchain blocks using the P2P network via the normal sync procedure.
......
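A quick way to confirm the container came up is to follow its logs (a minimal check, using the container name `haf-instance` from the command above):
```
docker logs -f haf-instance
```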
......@@ -30,7 +30,7 @@ where:
`./haf` points to the source directory from which to build the docker image
`registry.gitlab.syncad.com/hive/haf/` specifies a docker registry where the built image can potentially be pushed (actually pushing the image to the registry requires additional steps).
The above command will result in the creation of a local docker image: `registry.gitlab.syncad.com/hive/haf/instance:instance-local`
The above command will result in the creation of a local docker image: `registry.gitlab.syncad.com/hive/haf/instance:local`
### Building a HAF docker image from a specific git commit hash
......@@ -40,7 +40,7 @@ A HAF instance image can also be built from a specific git commit hash in the HA
build_instance4commit.sh fdebe397498f814920e959d5d11863d8fe51be22 registry.gitlab.syncad.com/hive/haf/
```
This will create an image called: `registry.gitlab.syncad.com/hive/haf/instance:instance-fdebe397498f814920e959d5d11863d8fe51be22`
This will create an image called: `registry.gitlab.syncad.com/hive/haf/instance:fdebe39`
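To double-check that the image was built and tagged as expected, you can list the matching local images (plain docker CLI, nothing HAF-specific):
```
docker images registry.gitlab.syncad.com/hive/haf/instance
```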
The examples below assume the following directory structure:
......@@ -62,11 +62,11 @@ Next create a `blockchain` subdirectory inside the datadir with a valid block_lo
With these preliminaries out of the way, you can start your instance container using a command like the one below:
```
./haf/scripts/run_hived_img.sh registry.gitlab.syncad.com/hive/haf/instance:instance-local --data-dir=/storage1/mainnet-5M-haf --name=haf-instance-5M --replay --stop-replay-at-block=5000000
./haf/scripts/run_hived_img.sh registry.gitlab.syncad.com/hive/haf/instance:local --data-dir=/storage1/mainnet-5M-haf --name=haf-instance-5M --replay --stop-replay-at-block=5000000
```
This example works as follows:
- `registry.gitlab.syncad.com/hive/haf/instance:instance-local` points to the HAF image you built in previous steps.
- `registry.gitlab.syncad.com/hive/haf/instance:local` points to the HAF image you built in previous steps.
- `--data-dir=/storage1/mainnet-5M-haf` enforces proper volume mapping from your host machine to the docker container you are starting and points to the data directory where the hived node will store its data.
- `--name=haf-instance-5M` names your docker container for docker commands.
- the remaining options, `--replay --stop-replay-at-block=5000000`, are passed directly to the hived command line (a post-replay sanity check is sketched below).
......
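After the replay stops, you can check how far the HAF database got by querying it from inside the container. A sketch, assuming the embedded PostgreSQL service is up and the database is named `haf_block_log` (the database name is an assumption and may differ in your setup):
```
# MAX(num) should be close to the --stop-replay-at-block value (5000000 here);
# the haf_block_log database name is an assumption
docker exec -it haf-instance-5M psql -d haf_block_log -c 'SELECT MAX(num) FROM hive.blocks;'
```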
......@@ -262,6 +262,8 @@ HIVED_ARGS=()
echo "Processing passed arguments...: $*"
SKIP_HIVED=0
while [ $# -gt 0 ]; do
  case "$1" in
    --execute-maintenance-script*)
......@@ -278,6 +280,12 @@ while [ $# -gt 0 ]; do
      BACKUP_SOURCE_DIR_NAME="${1#*=}"
      PERFORM_LOAD=1
      ;;
    --skip-hived)
      # allow launching the container with only the database running, but not hived. This is useful when you want to
      # examine the database, but there's some problem that causes hived to exit at startup, since hived exiting will
      # then shut down the container, taking the database with it.
      SKIP_HIVED=1
      ;;
    *)
      echo "Attempting to collect unknown (hived) option: ${1}"
      HIVED_ARGS+=("$1")
......@@ -306,6 +314,18 @@ elif [ ${PERFORM_LOAD} -eq 1 ];
then
  echo "Attempting to perform instance snapshot load"
  perform_instance_load "${BACKUP_SOURCE_DIR_NAME}"
elif [ ${SKIP_HIVED} -eq 1 ];
then
  echo "Not launching hived due to --skip-hived command-line option"
  echo "You can now connect to the database. This container will continue to run until you shut it down"
  # launch a webserver on port 8091 so the docker healthcheck will pass. We want
  # the healthcheck to pass so docker-compose will continue to launch dependent containers
  # like pgadmin.
  # The webserver running in the foreground also keeps this container running
  # until the container is stopped.
  mkdir -p /tmp/dummy-webserver
  cd /tmp/dummy-webserver || exit 1
  /home/haf_admin/.local/share/pypoetry/venv/bin/python -m http.server 8091
else
  run_instance
  status=$?
......
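The new flag can be exercised like any other entrypoint option; a minimal sketch, assuming the locally built `instance:local` image from the build docs and a database named `haf_block_log` (the database name is an assumption):
```
# Start only the embedded database; hived is skipped by docker_entrypoint.sh
docker run -d --name haf-db-only registry.gitlab.syncad.com/hive/haf/instance:local --skip-hived
# The dummy webserver on 8091 keeps the healthcheck green; the database can
# now be inspected, e.g. (database name is an assumption):
docker exec -it haf-db-only psql -d haf_block_log -c '\dt hive.*'
```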
Subproject commit 763764c338f41ed11131638e1041c3c0af3c805a
Subproject commit 785aa74b7338c845294699d8457706909f28159e
#! /bin/bash
SCRIPTPATH="$( cd -- "$(dirname "$0")" >/dev/null 2>&1 ; pwd -P )"
SCRIPTPATH="$( cd -- "$(dirname "$0")" >/dev/null 2>&1 || exit 1; pwd -P )"
SCRIPTSDIR="$SCRIPTPATH/.."
LOG_FILE=build_instance4commit.log
export LOG_FILE=build_instance4commit.log
# shellcheck source=../common.sh
source "$SCRIPTSDIR/common.sh"
COMMIT=""
......@@ -14,16 +15,19 @@ BRANCH="master"
NETWORK_TYPE_ARG=""
EXPORT_BINARIES_ARG=""
BUILD_IMAGE_TAG=""
print_help () {
    echo "Usage: $0 <commit> <registry_url> [OPTION[=VALUE]]..."
    echo
    echo "Builds a docker image containing HAF installation built from pointed COMMIT."
    echo "OPTIONS:"
    echo "  --network-type=TYPE      Specify type of blockchain network supported by built hived. Allowed values: mainnet, testnet, mirrornet."
    echo "  --export-binaries=PATH   Specify a path where binaries shall be exported from the built image."
    echo "  --help                   Display this help screen and exit."
    echo
    cat <<-EOF
	Usage: $0 <commit> <registry_url> [OPTION[=VALUE]]...
	Builds a Docker image containing a HAF installation built from the specified COMMIT.
	OPTIONS:
	  --network-type=TYPE      Type of blockchain network supported by the built hived binary. Allowed values: mainnet, testnet, mirrornet.
	  --export-binaries=PATH   Path where binaries shall be exported from the built image.
	  --image-tag=TAG          Image tag. Defaults to the short commit hash.
	  --help,-h,-?             Displays this help screen and exits.
	EOF
}
while [ $# -gt 0 ]; do
......@@ -36,6 +40,13 @@ while [ $# -gt 0 ]; do
      export_path="${1#*=}"
      EXPORT_BINARIES_ARG="--export-binaries=${export_path}"
      ;;
    --image-tag=*)
      BUILD_IMAGE_TAG="${1#*=}"
      ;;
    --help|-h|-?)
      print_help
      exit 0
      ;;
    -*)
      echo "ERROR: '$1' is not a valid option."
      exit 1
......@@ -58,13 +69,17 @@ while [ $# -gt 0 ]; do
  shift
done
TST_COMMIT=${COMMIT:?"Missing arg #1 to specify a COMMIT."}
TST_REGISTRY=${REGISTRY:?"Missing arg #2 to specify target container registry."}
BUILD_IMAGE_TAG=$COMMIT
_TST_COMMIT=${COMMIT:?"Missing arg #1 to specify a COMMIT."}
_TST_REGISTRY=${REGISTRY:?"Missing arg #2 to specify target container registry."}
do_clone "$BRANCH" "./haf-${COMMIT}" https://gitlab.syncad.com/hive/haf.git "$COMMIT"
"$SCRIPTSDIR/ci-helpers/build_instance.sh" "${BUILD_IMAGE_TAG}" "./haf-${COMMIT}" "${REGISTRY}" ${NETWORK_TYPE_ARG} ${EXPORT_BINARIES_ARG}
if [[ -z "$BUILD_IMAGE_TAG" ]]; then
  pushd "./haf-${COMMIT}" || exit 1
  BUILD_IMAGE_TAG=$(git rev-parse --short "$COMMIT")
  popd || exit 1
fi
"$SCRIPTSDIR/ci-helpers/build_instance.sh" "${BUILD_IMAGE_TAG}" "./haf-${COMMIT}" "${REGISTRY}" "${NETWORK_TYPE_ARG}" "${EXPORT_BINARIES_ARG}"
#! /bin/bash
P="${1}"
pushd "$P" >/dev/null 2>&1
pushd "$P" >/dev/null 2>&1 || exit 1
# this list is used to detect changes affecting hived binaries; the list might change in the future
COMMIT=$(git log --pretty=format:"%H" -- hive/ src/ cmake/ scripts/ docker/ tests/unit tests/integration/functional Dockerfile CMakeLists.txt | head -1)
popd >/dev/null 2>&1
popd >/dev/null 2>&1 || exit 1
echo "$COMMIT"
......@@ -45,9 +45,9 @@ do_clone() {
  local commit="$4"
  if [[ "$commit" != "" ]]; then
    do_clone_commit $commit "$src_dir" $repo_url
    do_clone_commit "$commit" "$src_dir" "$repo_url"
  else
    do_clone_branch "$branch" "$src_dir" $repo_url
    do_clone_branch "$branch" "$src_dir" "$repo_url"
  fi
}
......@@ -5,8 +5,8 @@ docker run <image name> --execute-maintenance-script=<script name> [ arguments ]
For example:
docker run -ePYTEST_NUMBER_OF_PROCESSES="0" -ePG_ACCESS="host all all 127.0.0.1/32 trust" registry.gitlab.syncad.com/hive/haf/testnet-base_instance:testnet-base_instance-4a2d57c020d8f04602de36f82f31b9eea14acfea --execute-maintenance-script=/home/haf_admin/haf/scripts/maintenance-scripts/run_haf_system_tests.sh test_operations_after_switching_fork.py
docker run -ePYTEST_NUMBER_OF_PROCESSES="0" -ePG_ACCESS="host all all 127.0.0.1/32 trust" registry.gitlab.syncad.com/hive/haf/testnet-base_instance:4a2d57c --execute-maintenance-script=/home/haf_admin/haf/scripts/maintenance-scripts/run_haf_system_tests.sh test_operations_after_switching_fork.py
docker run -ePG_ACCESS="host all all 127.0.0.1/32 trust" registry.gitlab.syncad.com/hive/haf/base_instance:base_instance-4a2d57c020d8f04602de36f82f31b9eea14acfea --execute-maintenance-script=/home/haf_admin/haf/scripts/maintenance-scripts/run_hfm_functional_tests.sh
docker run -ePG_ACCESS="host all all 127.0.0.1/32 trust" registry.gitlab.syncad.com/hive/haf/base_instance:4a2d57c --execute-maintenance-script=/home/haf_admin/haf/scripts/maintenance-scripts/run_hfm_functional_tests.sh
PG_ACCESS is an environment variable required by the functional and system tests; the arguments are optional and currently work only in system tests.
......@@ -49,7 +49,7 @@ void filter_processor::find_ops( const std::string& file_name )
return;
}
_json = fc::json::from_string( _content ).as< fc::variant >();
_json = fc::json::from_string( _content, fc::json::format_validation_mode::relaxed ).as< fc::variant >();
}
FC_CAPTURE_LOG_AND_RETHROW(("open file"))
......
......@@ -172,6 +172,8 @@ GRANT EXECUTE ON FUNCTION
, hive.unreachable_event_id()
, hive.initialize_extension_data()
, hive.ignore_registered_table_edition( pg_ddl_command )
, hive.account_sink_id()
, hive.block_sink_num()
TO hived_group;
REVOKE EXECUTE ON FUNCTION
......
......@@ -40,7 +40,7 @@ FROM
ha.block_num,
ha.id,
ha.name
FROM hive.accounts ha
FROM hive.accounts ha WHERE ha.id > hive.account_sink_id()
UNION ALL
SELECT
reversible.block_num,
......@@ -100,7 +100,7 @@ FROM (
hb.current_supply,
hb.current_hbd_supply,
hb.dhf_interval_ledger
FROM hive.blocks hb
FROM hive.blocks hb WHERE hb.num > hive.block_sink_num()
UNION ALL
SELECT hbr.num,
hbr.hash,
......@@ -283,8 +283,8 @@ JOIN hive.applied_hardforks_reversible hjr ON forks.max_fork_id = hjr.fork_id AN
-- only irreversible data
CREATE OR REPLACE VIEW hive.irreversible_account_operations_view AS SELECT * FROM hive.account_operations;
CREATE OR REPLACE VIEW hive.irreversible_accounts_view AS SELECT * FROM hive.accounts;
CREATE OR REPLACE VIEW hive.irreversible_blocks_view AS SELECT * FROM hive.blocks;
CREATE OR REPLACE VIEW hive.irreversible_accounts_view AS SELECT * FROM hive.accounts WHERE id > hive.account_sink_id();
CREATE OR REPLACE VIEW hive.irreversible_blocks_view AS SELECT * FROM hive.blocks WHERE num > hive.block_sink_num();
CREATE OR REPLACE VIEW hive.irreversible_transactions_view AS SELECT * FROM hive.transactions;
CREATE OR REPLACE VIEW hive.irreversible_operations_view AS
......
......@@ -76,7 +76,7 @@ BEGIN
hb.witness_signature,
hb.signing_key
FROM hive.blocks hb
WHERE hb.num <= c.min_block
WHERE hb.num > hive.block_sink_num() AND hb.num <= c.min_block
UNION ALL
SELECT hbr.num,
hbr.hash,
......@@ -122,7 +122,7 @@ EXECUTE format(
hb.extensions,
hb.witness_signature,
hb.signing_key
FROM hive.blocks hb
FROM hive.blocks hb WHERE hb.num > hive.block_sink_num()
;', _context_name
);
END;
......@@ -435,7 +435,7 @@ EXECUTE format(
ha.id,
ha.name
FROM hive.accounts ha
WHERE ha.block_num <= c.min_block
WHERE ha.id > hive.account_sink_id() AND ha.block_num <= c.min_block
UNION ALL
SELECT
reversible.block_num,
......@@ -475,7 +475,7 @@ EXECUTE format(
ha.block_num,
ha.id,
ha.name
FROM hive.accounts ha
FROM hive.accounts ha WHERE ha.id > hive.account_sink_id()
;', _context_name
);
END;
......
......@@ -18,6 +18,28 @@ BEGIN
END
$$;
CREATE OR REPLACE FUNCTION hive.block_sink_num()
RETURNS INT
LANGUAGE plpgsql
IMMUTABLE
AS
$$
BEGIN
RETURN 0;
END
$$;
CREATE OR REPLACE FUNCTION hive.account_sink_id()
RETURNS INT
LANGUAGE plpgsql
IMMUTABLE
AS
$$
BEGIN
RETURN -1;
END
$$;
CREATE TABLE IF NOT EXISTS hive.contexts(
id SERIAL NOT NULL,
name hive.context_name NOT NULL,
......
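The effect of the sink markers defined above can be observed directly: the sink rows exist in the base tables but are filtered out of the redefined `irreversible_*` views. A sketch, assuming a reachable HAF database named `haf_block_log` (the name is an assumption):
```
# The helpers return the marker ids: 0 for blocks, -1 for accounts
psql -d haf_block_log -c 'SELECT hive.block_sink_num(), hive.account_sink_id();'
# The sink block row is present in the base table (count = 1)...
psql -d haf_block_log -c 'SELECT COUNT(*) FROM hive.blocks WHERE num = hive.block_sink_num();'
# ...but hidden by the irreversible view (count = 0)
psql -d haf_block_log -c 'SELECT COUNT(*) FROM hive.irreversible_blocks_view WHERE num = hive.block_sink_num();'
```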
......@@ -63,6 +63,7 @@ DECLARE
__irreversible_head_block hive.blocks.num%TYPE;
BEGIN
SELECT COALESCE( MAX( num ), 0 ) INTO __irreversible_head_block FROM hive.blocks;
IF ( _block_num < __irreversible_head_block ) THEN
RETURN;
END IF;
......@@ -365,6 +366,16 @@ $BODY$
DECLARE
__events_id BIGINT := 0;
BEGIN
    IF EXISTS ( SELECT 1 FROM hive.blocks WHERE num = hive.block_sink_num() LIMIT 1 ) THEN
        SELECT MAX(eq.id) + 1 FROM hive.events_queue eq WHERE eq.id != hive.unreachable_event_id() INTO __events_id;
        PERFORM SETVAL( 'hive.events_queue_id_seq', __events_id, false );
        PERFORM hive.create_database_hash('hive');
        RETURN;
    END IF;
-- We need to check constraints at the moment when event_sink and block_sink are both added
-- to hive.events and hive.blocks tables
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO hive.irreversible_data VALUES(1,NULL, FALSE) ON CONFLICT DO NOTHING;
INSERT INTO hive.events_queue VALUES( 0, 'NEW_IRREVERSIBLE', 0 ) ON CONFLICT DO NOTHING;
INSERT INTO hive.events_queue VALUES( hive.unreachable_event_id(), 'NEW_BLOCK', 2147483647 ) ON CONFLICT DO NOTHING;
......@@ -372,6 +383,30 @@ BEGIN
PERFORM SETVAL( 'hive.events_queue_id_seq', __events_id, false );
INSERT INTO hive.fork(block_num, time_of_fork) VALUES( 1, '2016-03-24 16:05:00'::timestamp ) ON CONFLICT DO NOTHING;
INSERT INTO hive.blocks VALUES(
hive.block_sink_num() --num
, 'x00'::bytea --hash bytea NOT NULL
, 'x00'::bytea --prev bytea NOT NULL
, '0001-01-01 00:00:00-07'::timestamp -- created_at timestamp without time zone NOT NULL
, hive.account_sink_id() -- producer_account_id integer NOT NULL,
, 'x00'::bytea -- transaction_merkle_root bytea NOT NULL,
, '[]'::jsonb -- extensions jsonb,
, 'x00'::bytea -- witness_signature bytea NOT NULL,
, ''::TEXT -- signing_key text COLLATE pg_catalog."default" NOT NULL
, 0::hive.interest_rate -- hbd_interest_rate
, 0::hive.hive_amount -- total_vesting_fund_hive
, 0::hive.vest_amount -- total_vesting_shares
, 0::hive.hive_amount -- total_reward_fund_hive
, 0::hive.hive_amount -- virtual_supply
, 0::hive.hive_amount -- current_supply
, 0::hive.hbd_amount -- current_hbd_supply
, 0::hive.hbd_amount -- dhf_interval_ledger
)
ON CONFLICT DO NOTHING;
INSERT INTO hive.accounts VALUES(hive.account_sink_id(),'', 0) ON CONFLICT DO NOTHING;
PERFORM hive.create_database_hash('hive');
END;
$BODY$
......
......@@ -49,6 +49,6 @@ void extract_set_witness_properties_from_flat_map(extract_set_witness_properties
void extract_set_witness_properties_from_string(extract_set_witness_properties_result_t& output, const fc::string& _input)
{
witness_set_properties_props_t input_properties{};
fc::from_variant(fc::json::from_string(_input), input_properties);
fc::from_variant(fc::json::from_string(_input, fc::json::format_validation_mode::relaxed), input_properties);
extract_set_witness_properties_from_flat_map(output, input_properties);
}
......@@ -29,7 +29,7 @@ std::vector< char > json_to_op( const char* json )
return {};
auto bufstream = fc::buffered_istream( fc::make_svstream( json ) );
fc::variant v = fc::json::from_stream( bufstream );
fc::variant v = fc::json::from_stream( bufstream, fc::json::format_validation_mode::relaxed );
hive::protocol::operation op;
fc::from_variant( v, op );
......
......@@ -26,6 +26,7 @@ ADD_LIBRARY(
indexes_controler.cpp
blockchain_data_filter.cpp
filter_collector.cpp
all_accounts_dumper.cpp
${HEADERS}
)
......