
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.


Commits on Source (10)
Showing with 1460 additions and 1113 deletions
......@@ -9,13 +9,18 @@ stages:
variables:
PYTEST_NUMBER_OF_PROCESSES: 8
CTEST_NUMBER_OF_JOBS: 4
GIT_STRATEGY: clone
GIT_DEPTH: 1
GIT_SUBMODULE_DEPTH: 1
GIT_SUBMODULE_STRATEGY: recursive
GIT_SUBMODULE_UPDATE_FLAGS: --jobs 4
FF_ENABLE_JOB_CLEANUP: 1
GIT_STRATEGY: clone
# uses registry.gitlab.syncad.com/hive/haf/ci-base-image:ubuntu22.04-17
BUILDER_IMAGE_TAG: "@sha256:234d3592e53d4cd7cc6df8e61366e8cbe69ac439355475c34fb2b0daf40e7a26"
FF_NETWORK_PER_BUILD: 1
# uses registry.gitlab.syncad.com/hive/haf/ci-base-image:ubuntu24.04-1
BUILDER_IMAGE_TAG: "@sha256:fc149082a4ee91ed622a14d283ae7fe44d13b123f2927d2e71a2167bbe63fab0"
CI_DEBUG_SERVICES: "true"
SETUP_SCRIPTS_PATH: "$CI_PROJECT_DIR/scripts"
TEST_TOOLS_NODE_DEFAULT_WAIT_FOR_LIVE_TIMEOUT: 60
......@@ -28,12 +33,7 @@ variables:
include:
- template: Workflows/Branch-Pipelines.gitlab-ci.yml
- local: '/scripts/ci-helpers/prepare_data_image_job.yml'
- project: 'hive/common-ci-configuration'
ref: e74d7109838ff05fdc239bced6a726aa7ad46a9b
file:
- '/templates/python_projects.gitlab-ci.yml'
- '/templates/cache_cleanup.gitlab-ci.yml'
- '/templates/docker_image_jobs.gitlab-ci.yml'
# Do not include common-ci-configuration here, it is already referenced by scripts/ci-helpers/prepare_data_image_job.yml included from Hive
verify_poetry_lock_sanity:
extends: .verify_poetry_lock_sanity_template
......@@ -527,24 +527,53 @@ update_with_wrong_table_schema:
- public-runner-docker
- hived-for-tests
# job responsible for replaying data using preconfigured filtering options specified in given config.ini file
replay_filtered_haf_data_accounts_body_operations:
extends: .prepare_haf_data_5m
needs:
- job: haf_image_build
artifacts: true
stage: build_and_test_phase_1
variables:
HIVE_NETWORK_TYPE: mainnet
BLOCK_LOG_SOURCE_DIR: "$BLOCK_LOG_SOURCE_DIR_5M"
CONFIG_INI_SOURCE: "$CI_PROJECT_DIR/tests/integration/replay/patterns/accounts_body_operations_filtered/config.ini"
DATA_CACHE_DIR: "${PIPELINE_DATA_CACHE_HAF_DIRECTORY}_replay_accounts_body_operations_filtered"
tags:
- data-cache-storage
block_api_tests:
extends: .replay_step
image: $CI_REGISTRY_IMAGE/ci-base-image:ubuntu22.04-8-jmeter
extends: .jmeter_benchmark_job
stage: build_and_test_phase_2
needs:
- job: replay_filtered_haf_data_accounts_body_operations
artifacts: true
- job: haf_image_build
artifacts: true
variables:
FF_NETWORK_PER_BUILD: 1
PATTERNS_PATH: "$CI_PROJECT_DIR/tests/integration/replay/patterns/accounts_body_operations_filtered"
BENCHMARK_DIR: "$CI_PROJECT_DIR/hive/tests/python/hive-local-tools/tests_api/benchmarks"
script:
# setup
- |
echo -e "\e[0Ksection_start:$(date +%s):blocks_api_test_setup[collapsed=true]\r\e[0KSetting up blocks api tests..."
psql $DB_URL -c "CREATE ROLE bench LOGIN PASSWORD 'mark' INHERIT IN ROLE hived_group;"
export BENCHMARK_DB_URL="postgresql://bench:mark@hfm-only-instance:5432/$DB_NAME"
echo -e "\e[0Ksection_end:$(date +%s):blocks_api_test_setup\r\e[0K"
# Allow access from any network to eliminate CI IP addressing problems
HAF_DB_ACCESS: |
"host all haf_admin 0.0.0.0/0 trust"
"host all hived 0.0.0.0/0 trust"
"host all hafah_user 0.0.0.0/0 trust"
"host all all 0.0.0.0/0 scram-sha-256"
BENCHMARK_DB_URL: "postgresql://hived@haf-instance:5432/haf_block_log"
HIVED_UID: $HIVED_UID
services:
- name: ${HAF_IMAGE_NAME}
alias: haf-instance
variables:
PG_ACCESS: "${HAF_DB_ACCESS}"
DATA_SOURCE: "${PIPELINE_DATA_CACHE_HAF_DIRECTORY}_replay_accounts_body_operations_filtered"
LOG_FILE: $CI_JOB_NAME.log
command: ["--replay-blockchain", "--stop-at-block=5000000"]
script:
# run pattern tests
- |
echo -e "\e[0Ksection_start:$(date +%s):blocks_api_test[collapsed=true]\r\e[0KRunning blocks api tests..."
......@@ -567,8 +596,7 @@ block_api_tests:
when: always
expire_in: 1 week
tags:
- public-runner-docker
- hived-for-tests
- data-cache-storage
prepare_haf_data:
extends: .prepare_haf_data_5m
......
......@@ -2,12 +2,12 @@
# docker buildx build --progress=plain --target=ci-base-image --tag registry.gitlab.syncad.com/hive/haf/ci-base-image$CI_IMAGE_TAG --file Dockerfile .
# To be started from cloned haf source directory.
ARG CI_REGISTRY_IMAGE=registry.gitlab.syncad.com/hive/haf/
ARG CI_IMAGE_TAG=ubuntu22.04-17
ARG CI_IMAGE_TAG=ubuntu24.04-1
ARG BUILD_IMAGE_TAG
ARG IMAGE_TAG_PREFIX
FROM registry.gitlab.syncad.com/hive/hive/minimal-runtime:ubuntu22.04-13 AS minimal-runtime
FROM registry.gitlab.syncad.com/hive/hive/minimal-runtime:ubuntu24.04-1 AS minimal-runtime
ENV PATH="/home/haf_admin/.local/bin:$PATH"
......@@ -28,10 +28,10 @@ RUN bash -x ./scripts/setup_ubuntu.sh --haf-admin-account="haf_admin" --hived-ac
# everyone to upgrade their haf_api_node in sync with this commit. We should switch haf_api_node's healthcheck to
# use wget once images based on this Dockerfile are made official, and we can drop curl soon thereafter
RUN apt-get update && \
DEBIAN_FRONTEND=noniteractive apt-get install --no-install-recommends -y postgresql-common gnupg && \
DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y postgresql-common gnupg && \
/usr/share/postgresql-common/pgdg/apt.postgresql.org.sh -y && \
apt-get update && \
DEBIAN_FRONTEND=noniteractive apt-get install --no-install-recommends -y curl postgresql-17 postgresql-17-cron libpq5 libboost-chrono1.74.0 libboost-context1.74.0 libboost-filesystem1.74.0 libboost-thread1.74.0 busybox netcat-openbsd && \
DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y curl postgresql-17 postgresql-17-cron libpq5 libboost-chrono1.83.0 libboost-context1.83.0 libboost-filesystem1.83.0 libboost-thread1.83.0 busybox netcat-openbsd && \
apt-get remove -y gnupg && \
apt-get autoremove -y && \
busybox --install -s
......@@ -44,7 +44,7 @@ RUN useradd -r -s /usr/sbin/nologin -b /nonexistent -c "HAF maintenance service
USER haf_admin
WORKDIR /home/haf_admin
FROM registry.gitlab.syncad.com/hive/hive/ci-base-image:ubuntu22.04-13 AS ci-base-image
FROM registry.gitlab.syncad.com/hive/hive/ci-base-image:ubuntu24.04-1 AS ci-base-image
ENV PATH="/home/haf_admin/.local/bin:$PATH"
......@@ -107,7 +107,7 @@ RUN \
# Here we could use a smaller image without packages specific to build requirements
FROM ${CI_REGISTRY_IMAGE}ci-base-image:$CI_IMAGE_TAG AS base_instance
ENV BUILD_IMAGE_TAG=${BUILD_IMAGE_TAG:-:ubuntu22.04-8}
ENV BUILD_IMAGE_TAG=${BUILD_IMAGE_TAG:-:ubuntu24.04-1}
ARG P2P_PORT=2001
ENV P2P_PORT=${P2P_PORT}
......@@ -208,9 +208,9 @@ EXPOSE ${WS_PORT}
# JSON rpc service
EXPOSE ${HTTP_PORT}
FROM registry.gitlab.syncad.com/hive/haf/minimal-runtime:ubuntu22.04-16 AS minimal-instance
FROM registry.gitlab.syncad.com/hive/haf/minimal-runtime:ubuntu24.04-1 AS minimal-instance
ENV BUILD_IMAGE_TAG=${BUILD_IMAGE_TAG:-:ubuntu22.04-8}
ENV BUILD_IMAGE_TAG=${BUILD_IMAGE_TAG:-:ubuntu24.04-1}
ARG P2P_PORT=2001
ENV P2P_PORT=${P2P_PORT}
......
# syntax=docker/dockerfile:1.4
# docker buildx build --tag registry.gitlab.syncad.com/hive/haf/ci-base-image:$CI_IMAGE_TAG-jmeter --progress=plain --file Dockerfile.jmeter .
ARG CI_IMAGE_TAG=ubuntu22.04-8
FROM phusion/baseimage:jammy-1.0.1 AS build
COPY <<-EOF /opt/patch.sed
s/jtl2junit/m2u/g
s/results file/results file (required)/g
23 i final Options helpOpt = new Options();
23 i helpOpt.addOption("?", "help", false, "");
23 i helpOpt.addOption(new Option("i", CMD_OPTION_INPUT, true, ""));
23 i helpOpt.addOption(new Option("o", CMD_OPTION_OUTPUT, true, ""));
23 i helpOpt.addOption(new Option("t", CMD_OPTION_TESTSUITE_NAME, true, ""));
23 i helpOpt.addOption(new Option("f", M2UConstants.JUNIT_FILTER_SWITCH_NAME, true, ""));
23 i final CommandLine helpCmd = parser.parse( helpOpt, argv );
23 i if (helpCmd.hasOption("help")) {
23 i new HelpFormatter().printHelp( APPLICATION_NAME, options );
23 i System.exit(0);
23 i }
72 i options.addOption("?", "help", false, "Show these usage instructions");
EOF
RUN <<EOF
set -e
# Install system dependencies
apt-get update
apt-get install -y git unzip wget ca-certificates maven openjdk-8-jdk
apt-get clean
rm -rf /var/lib/apt/lists/*
# Prepare tools directory
mkdir -p /opt/tools
cd /opt/tools
# Install Apache JMeter
wget --quiet https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.4.3.zip -O jmeter.zip
unzip -qq jmeter.zip
rm jmeter.zip
mv apache-jmeter-5.4.3 jmeter
wget --quiet https://jdbc.postgresql.org/download/postgresql-42.3.1.jar -O /opt/tools/jmeter/lib/postgresql-42.3.1.jar
# Build m2u from source
mkdir -p m2u
git clone --single-branch --branch master https://github.com/tguzik/m2u.git m2u-source
cd m2u-source
find -name CommandLineParser.java -exec sed -i -f /opt/patch.sed {} \;
mvn
# Install m2u
mv target/m2u.jar ../m2u/m2u.jar
cd ../m2u
rm -R ../m2u-source
echo 'java -jar /opt/tools/m2u/m2u.jar $@' > m2u
chmod +x m2u
EOF
FROM registry.gitlab.syncad.com/hive/haf/ci-base-image:$CI_IMAGE_TAG
COPY --from=build /opt/tools /opt/tools
USER root
RUN <<EOF
set -e
# Install system dependencies
apt-get update
apt-get install -y openjdk-8-jre
apt-get clean
rm -rf /var/lib/apt/lists/*
# Creater symlinks in bin directory
ln -s /opt/tools/jmeter/bin/jmeter /usr/bin/jmeter
ln -s /opt/tools/m2u/m2u /usr/bin/m2u
EOF
USER haf_admin
RUN <<EOF
set -e
# Install user dependencies
pip3 install prettytable
EOF
\ No newline at end of file
*
\ No newline at end of file
Subproject commit 5129c6fa3704730f4e46fef950ab10f486f6561f
Subproject commit 1a8bfcdf46a4a6430b8ea4b788fc1cbe71aecb99
#! /bin/bash
REGISTRY=${1:-registry.gitlab.syncad.com/hive/haf/}
CI_IMAGE_TAG=ubuntu22.04-17
CI_IMAGE_TAG=ubuntu24.04-1
# exit when any command fails
set -e
......
include:
- project: 'hive/hive'
ref: 1c2fe378cbb7c61147881dce247a6d9c28188f9e #develop
ref: 1a8bfcdf46a4a6430b8ea4b788fc1cbe71aecb99 #develop
file: '/scripts/ci-helpers/prepare_data_image_job.yml'
.prepare_haf_image:
......@@ -36,17 +36,18 @@ include:
BLOCK_LOG_SOURCE_DIR: ""
CONFIG_INI_SOURCE: ""
HIVE_NETWORK_TYPE: mainnet
DATA_CACHE_DIR: "${DATA_CACHE_HAF_PREFIX}_${HAF_COMMIT}"
script:
- mkdir "${DATA_CACHE_HAF_PREFIX}_${HAF_COMMIT}/datadir" -pv
- cd "${DATA_CACHE_HAF_PREFIX}_${HAF_COMMIT}/datadir"
- flock "${DATA_CACHE_HAF_PREFIX}_${HAF_COMMIT}/datadir" $SCRIPTS_PATH/ci-helpers/build_data.sh $HAF_IMAGE_NAME
--data-cache="${DATA_CACHE_HAF_PREFIX}_${HAF_COMMIT}" --block-log-source-dir="$BLOCK_LOG_SOURCE_DIR" --config-ini-source="$CONFIG_INI_SOURCE"
- mkdir "${DATA_CACHE_DIR}/datadir" -pv
- cd "${DATA_CACHE_DIR}/datadir"
- flock "${DATA_CACHE_DIR}/datadir" $SCRIPTS_PATH/ci-helpers/build_data.sh $HAF_IMAGE_NAME
--data-cache="${DATA_CACHE_DIR}" --block-log-source-dir="$BLOCK_LOG_SOURCE_DIR" --config-ini-source="$CONFIG_INI_SOURCE"
- cd "$CI_PROJECT_DIR"
- cp "${DATA_CACHE_HAF_PREFIX}_${HAF_COMMIT}/datadir/hived_uid.env" "$CI_PROJECT_DIR/hived_uid.env"
- cp "${DATA_CACHE_HAF_PREFIX}_${HAF_COMMIT}/datadir/docker_entrypoint.log" "${CI_PROJECT_DIR}/docker_entrypoint.log"
- ls -la "${DATA_CACHE_HAF_PREFIX}_${HAF_COMMIT}/datadir/"
- cp "${DATA_CACHE_DIR}/datadir/hived_uid.env" "$CI_PROJECT_DIR/hived_uid.env"
- cp "${DATA_CACHE_DIR}/datadir/docker_entrypoint.log" "${CI_PROJECT_DIR}/docker_entrypoint.log"
- ls -la "${DATA_CACHE_DIR}/datadir/"
after_script:
- rm "${DATA_CACHE_HAF_PREFIX}_${HAF_COMMIT}/replay_running" -f
- rm "${DATA_CACHE_DIR}/replay_running" -f
artifacts:
reports:
......
......@@ -39,7 +39,7 @@ install_all_dev_packages() {
"$SRC_DIR/hive/scripts/setup_ubuntu.sh" --runtime --dev
apt-get update
DEBIAN_FRONTEND=noniteractive apt-get install -y \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
systemd \
libpq-dev \
tox \
......@@ -47,7 +47,7 @@ install_all_dev_packages() {
postgresql-common
/usr/share/postgresql-common/pgdg/apt.postgresql.org.sh -y
DEBIAN_FRONTEND=noniteractive apt-get install -y postgresql-17 postgresql-server-dev-17 postgresql-17-cron \
DEBIAN_FRONTEND=noninteractive apt-get install -y postgresql-17 postgresql-server-dev-17 postgresql-17-cron \
netcat-openbsd # needed to correctly handle --skip-hived option
apt-get clean
......
#include <boost/algorithm/string.hpp>
#include "configuration.hpp"
#include "psql_utils/logger.hpp"
#include <boost/algorithm/string.hpp>
#include <cassert>
namespace PsqlTools::QuerySupervisor {
......
......@@ -5,6 +5,7 @@
#include <optional>
#include <deque>
#include <atomic>
#include <fstream>
namespace hive::plugins::sql_serializer {
namespace bfs = boost::filesystem;
......
......@@ -17,6 +17,7 @@ if TYPE_CHECKING:
from sqlalchemy.engine.row import Row
from sqlalchemy.orm.session import Session
from sqlalchemy.sql import text
BLOCKS_IN_FORK = 5
BLOCKS_AFTER_FORK = 5
......@@ -174,10 +175,10 @@ SQL_CREATE_UPDATE_HISTOGRAM_FUNCTION = """
def create_app(session, application_context):
session.execute( "CREATE SCHEMA IF NOT EXISTS {}".format( application_context ) )
session.execute( "SELECT hive.app_create_context( '{0}', '{0}' )".format( application_context ) )
session.execute( SQL_CREATE_AND_REGISTER_HISTOGRAM_TABLE.format( application_context ) )
session.execute( SQL_CREATE_UPDATE_HISTOGRAM_FUNCTION.format( application_context ) )
session.execute( text("CREATE SCHEMA IF NOT EXISTS {}".format( application_context )) )
session.execute( text("SELECT hive.app_create_context( '{0}', '{0}' )".format( application_context )) )
session.execute( text(SQL_CREATE_AND_REGISTER_HISTOGRAM_TABLE.format( application_context )) )
session.execute( text(SQL_CREATE_UPDATE_HISTOGRAM_FUNCTION.format( application_context )) )
session.commit()
def wait_until_irreversible_without_new_block(session, irreversible_block, limit, interval):
......@@ -222,12 +223,12 @@ def wait_until_irreversible(node_under_test, session):
def query_col(session: Session, sql: str, **kwargs) -> list[Any]:
"""Perform a `SELECT n*1`"""
return [row[0] for row in session.execute(sql, params=kwargs).fetchall()]
return [row[0] for row in session.execute(text(sql), params=kwargs).fetchall()]
def query_all(session: Session, sql: str, **kwargs) -> list[Row]:
"""Perform a `SELECT n*m`"""
return session.execute(sql, params=kwargs).fetchall()
return session.execute(text(sql), params=kwargs).fetchall()
def wait_for_irreversible_in_database(
......
......@@ -2,6 +2,8 @@ from __future__ import annotations
from typing import Any, TYPE_CHECKING, TypeAlias, Union
from sqlalchemy.sql import text
if TYPE_CHECKING:
from sqlalchemy.engine.row import Row
from sqlalchemy.orm.session import Session
......@@ -14,24 +16,24 @@ class DbAdapter:
@staticmethod
def query_all(session: Session, sql: str, **kwargs) -> list[Row]:
"""Perform a `SELECT n*m`"""
return session.execute(sql, params=kwargs).all()
return session.execute(text(sql), params=kwargs).all()
@staticmethod
def query_col(session: Session, sql: str, **kwargs) -> ColumnType:
"""Perform a `SELECT n*1`"""
return [row[0] for row in session.execute(sql, params=kwargs).all()]
return [row[0] for row in session.execute(text(sql), params=kwargs).all()]
@staticmethod
def query_no_return(session: Session, sql: str, **kwargs) -> None:
"""Perform a query with no return"""
session.execute(sql, params=kwargs).close()
session.execute(text(sql), params=kwargs).close()
@staticmethod
def query_row(session: Session, sql: str, **kwargs) -> Row:
"""Perform a `SELECT 1*m`"""
return session.execute(sql, params=kwargs).first()
return session.execute(text(sql), params=kwargs).first()
@staticmethod
def query_one(session: Session, sql: str, **kwargs) -> ScalarType:
"""Perform a `SELECT 1*1`"""
return session.execute(sql, params=kwargs).scalar()
return session.execute(text(sql), params=kwargs).scalar()
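
These query-helper changes follow SQLAlchemy 2.x, which no longer accepts plain SQL strings in Session.execute(): textual SQL must be wrapped in text() and parameters passed as named bind values. A minimal, self-contained sketch of the pattern (the connection URL below is hypothetical; hafd.contexts is the table queried elsewhere in these tests):

from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

# Hypothetical connection URL for illustration only.
engine = create_engine("postgresql+psycopg2://hived@localhost:5432/haf_block_log")

with Session(engine) as session:
    # Raw SQL strings raise in SQLAlchemy 2.x; wrap them in text() and bind :ctx explicitly.
    row = session.execute(
        text("SELECT current_block_num, irreversible_block FROM hafd.contexts WHERE name = :ctx"),
        {"ctx": "application"},
    ).fetchone()
    print(row)
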
from sqlalchemy.orm import Session
from sqlalchemy.sql import text
from typing import TYPE_CHECKING, Union
import test_tools as tt
......@@ -36,7 +38,7 @@ def assert_are_indexes_restored(haf_node: HafNode):
def does_index_exist(session, namespace, table, indexname):
return session.execute("""
return session.execute(text("""
SELECT 1
FROM pg_index i
JOIN pg_class idx ON i.indexrelid = idx.oid
......@@ -45,7 +47,7 @@ def does_index_exist(session, namespace, table, indexname):
WHERE n.nspname = :ns
AND tbl.relname = :table
AND idx.relname = :index
""", {'ns':namespace, 'table': table, 'index': indexname}).fetchone()
"""), {'ns':namespace, 'table': table, 'index': indexname}).fetchone()
def assert_index_exists(session, namespace, table, indexname):
......@@ -58,7 +60,7 @@ def assert_index_does_not_exist(session, namespace, table, indexname):
def wait_till_registered_indexes_created(haf_node, context):
while True:
result = haf_node.session.execute("SELECT hive.check_if_registered_indexes_created(:ctx)", {'ctx': context}).scalar()
result = haf_node.session.execute(text("SELECT hive.check_if_registered_indexes_created(:ctx)"), {'ctx': context}).scalar()
if result:
break
tt.logger.info("Indexes not yet created. Sleeping for 10 seconds...")
......@@ -67,7 +69,7 @@ def wait_till_registered_indexes_created(haf_node, context):
def register_index_dependency(haf_node, context, create_index_command):
haf_node.session.execute(
"SELECT hive.register_index_dependency(:ctx, :cmd)", {'ctx': context, 'cmd': create_index_command})
text("SELECT hive.register_index_dependency(:ctx, :cmd)"), {'ctx': context, 'cmd': create_index_command})
def assert_is_transaction_in_database(haf_node: HafNode, transaction: Union[Transaction, TransactionId]):
......
Source diff could not be displayed: it is too large.
......@@ -20,9 +20,9 @@ source = [
[tool.poetry.dependencies]
python = "^3.10"
pandas = "1.4.0"
psycopg2-binary = "2.9.1"
sqlalchemy = "1.4.52"
python = "^3.12"
pandas = "^2.2.3"
psycopg2-binary = "2.9.10"
sqlalchemy = "^2.0.39"
sqlalchemy-utils = "0.41.2"
hive_local_tools = { path = "../../../hive/tests/python/hive-local-tools", develop = true }
......@@ -32,19 +32,19 @@ psql -w -d $DB_NAME -v ON_ERROR_STOP=on -U $DB_ADMIN -c "SELECT hive.app_create_
psql -w -d $DB_NAME -v ON_ERROR_STOP=on -U $DB_ADMIN -c "SELECT hive.app_state_provider_import('${TYPE}', '${NAME}_live');"
echo "Replay of ${NAME}..."
psql -w -d $DB_NAME -v ON_ERROR_STOP=on -U $DB_ADMIN -c "CALL ${NAME}_live.main('${NAME}_live', 0, 5000000, 500000);"
psql -w -d $DB_NAME -v ON_ERROR_STOP=on -U $DB_ADMIN -c "CALL ${NAME}_live.main('${NAME}_live', 0, 5000000, 500000);"
echo "Clearing tables..."
psql -w -d $DB_NAME -v ON_ERROR_STOP=on -U $DB_ADMIN -c "TRUNCATE ${NAME}_live.${TABLE_NAME};"
psql -w -d $DB_NAME -v ON_ERROR_STOP=on -U $DB_ADMIN -c "TRUNCATE ${NAME}_live.differing_accounts;"
echo "Installing dependencies..."
pip install psycopg2-binary
pip install --break-system-packages psycopg2-binary
rm -f "${CURRENT_PROJECT_DIR}/account_data/accounts_dump.json"
# The line below is somewhat problematic. Gunzip by default deletes gz file after decompression,
# but the '-k' parameter, which prevents that from happening is not supported on some of its versions.
#
#
# Thus, depending on the OS, the line below may need to be replaced with one of the following:
# gunzip -c "${SCRIPTDIR}/accounts_dump.json.gz" > "${SCRIPTDIR}/accounts_dump.json"
# gzcat "${SCRIPTDIR}/accounts_dump.json.gz" > "${SCRIPTDIR}/accounts_dump.json"
......
from sqlalchemy.orm.session import sessionmaker
from sqlalchemy.sql import text
import test_tools as tt
......@@ -17,15 +18,15 @@ APPLICATION_CONTEXT = "application"
def update_app_continuously(session, application_context, cycles):
for i in range(cycles):
blocks_range = session.execute( "SELECT * FROM hive.app_next_block( '{}' )".format( application_context ) ).fetchone()
blocks_range = session.execute( text("SELECT * FROM hive.app_next_block( '{}' )".format( application_context )) ).fetchone()
(first_block, last_block) = blocks_range
if last_block is None:
tt.logger.info( "next blocks_range was NULL\n" )
continue
tt.logger.info( "next blocks_range: {}\n".format( blocks_range ) )
session.execute( "SELECT public.update_histogram( {}, {} )".format( first_block, last_block ) )
session.execute( text("SELECT public.update_histogram( {}, {} )".format( first_block, last_block )) )
session.commit()
ctx_stats = session.execute( "SELECT current_block_num, irreversible_block FROM hafd.contexts WHERE NAME = '{}'".format( application_context ) ).fetchone()
ctx_stats = session.execute( text("SELECT current_block_num, irreversible_block FROM hafd.contexts WHERE NAME = '{}'".format( application_context )) ).fetchone()
tt.logger.info(f'ctx_stats-update-app: cbn {ctx_stats[0]} irr {ctx_stats[1]}')
......@@ -59,25 +60,25 @@ def test_application_broken(prepared_networks_and_database_12_8_without_block_lo
# system under test
create_app(second_session, APPLICATION_CONTEXT)
blocks_range = session.execute( "SELECT * FROM hive.app_next_block( '{}' )".format( APPLICATION_CONTEXT ) ).fetchone()
blocks_range = session.execute( text("SELECT * FROM hive.app_next_block( '{}' )".format( APPLICATION_CONTEXT )) ).fetchone()
(first_block, last_block) = blocks_range
# Last event in `events_queue` == `NEW_IRREVERSIBLE` (before it was `NEW_BLOCK`) therefore first call `hive.app_next_block` returns {None, None}
if first_block is None:
blocks_range = session.execute( "SELECT * FROM hive.app_next_block( '{}' )".format( APPLICATION_CONTEXT ) ).fetchone()
blocks_range = session.execute( text("SELECT * FROM hive.app_next_block( '{}' )".format( APPLICATION_CONTEXT )) ).fetchone()
(first_block, last_block) = blocks_range
tt.logger.info(f'first_block: {first_block}, last_block: {last_block}')
ctx_stats = session.execute( "SELECT current_block_num, irreversible_block FROM hafd.contexts WHERE NAME = '{}'".format( APPLICATION_CONTEXT ) ).fetchone()
ctx_stats = session.execute( text("SELECT current_block_num, irreversible_block FROM hafd.contexts WHERE NAME = '{}'".format( APPLICATION_CONTEXT )) ).fetchone()
tt.logger.info(f'ctx_stats-before-detach: cbn {ctx_stats[0]} irr {ctx_stats[1]}')
session.execute( "SELECT hive.app_context_detach( '{}' )".format( APPLICATION_CONTEXT ) )
session.execute( "SELECT public.update_histogram( {}, {} )".format( first_block, CONTEXT_ATTACH_BLOCK ) )
session.execute( "SELECT hive.app_set_current_block_num( '{}', {} )".format( APPLICATION_CONTEXT, CONTEXT_ATTACH_BLOCK ) )
session.execute( "SELECT hive.app_context_attach( '{}' )".format( APPLICATION_CONTEXT ) )
session.execute( text("SELECT hive.app_context_detach( '{}' )".format( APPLICATION_CONTEXT )) )
session.execute( text("SELECT public.update_histogram( {}, {} )".format( first_block, CONTEXT_ATTACH_BLOCK )) )
session.execute( text("SELECT hive.app_set_current_block_num( '{}', {} )".format( APPLICATION_CONTEXT, CONTEXT_ATTACH_BLOCK )) )
session.execute( text("SELECT hive.app_context_attach( '{}' )".format( APPLICATION_CONTEXT )) )
session.commit()
ctx_stats = session.execute( "SELECT current_block_num, irreversible_block FROM hafd.contexts WHERE NAME = '{}'".format( APPLICATION_CONTEXT ) ).fetchone()
ctx_stats = session.execute( text("SELECT current_block_num, irreversible_block FROM hafd.contexts WHERE NAME = '{}'".format( APPLICATION_CONTEXT )) ).fetchone()
tt.logger.info(f'ctx_stats-after-attach: cbn {ctx_stats[0]} irr {ctx_stats[1]}')
# THEN
......@@ -85,7 +86,7 @@ def test_application_broken(prepared_networks_and_database_12_8_without_block_lo
update_app_continuously(second_session, APPLICATION_CONTEXT, nr_cycles)
wait_for_irreversible_progress(node_under_test, START_TEST_BLOCK)
ctx_stats = session.execute( "SELECT current_block_num, irreversible_block FROM hafd.contexts WHERE NAME = '{}'".format( APPLICATION_CONTEXT ) ).fetchone()
ctx_stats = session.execute( text("SELECT current_block_num, irreversible_block FROM hafd.contexts WHERE NAME = '{}'".format( APPLICATION_CONTEXT )) ).fetchone()
tt.logger.info(f'ctx_stats-after-waiting: cbn {ctx_stats[0]} irr {ctx_stats[1]}')
wait_for_irreversible_in_database(session, START_TEST_BLOCK+3)
......@@ -107,13 +108,13 @@ def test_application_broken(prepared_networks_and_database_12_8_without_block_lo
nr_cycles = 1
update_app_continuously(second_session, APPLICATION_CONTEXT, nr_cycles)
ctx_stats = session.execute( "SELECT current_block_num, irreversible_block FROM hafd.contexts WHERE NAME = '{}'".format( APPLICATION_CONTEXT ) ).fetchone()
ctx_stats = session.execute( text("SELECT current_block_num, irreversible_block FROM hafd.contexts WHERE NAME = '{}'".format( APPLICATION_CONTEXT )) ).fetchone()
tt.logger.info(f'ctx_stats-after-waiting-2: cbn {ctx_stats[0]} irr {ctx_stats[1]}')
haf_irreversible = session.query(IrreversibleData).one()
tt.logger.info(f'consistent_block {haf_irreversible.consistent_block}')
context_irreversible_block = session.execute( "SELECT irreversible_block FROM hafd.contexts WHERE NAME = '{}'".format( APPLICATION_CONTEXT ) ).fetchone()[0]
context_irreversible_block = session.execute( text("SELECT irreversible_block FROM hafd.contexts WHERE NAME = '{}'".format( APPLICATION_CONTEXT )) ).fetchone()[0]
tt.logger.info(f'context_irreversible_block {context_irreversible_block}')
assert irreversible_block == haf_irreversible.consistent_block
......
......@@ -5,6 +5,8 @@ from haf_local_tools.haf_node.monolithic_workaround import apply_block_log_type_
from haf_local_tools.system.haf import (connect_nodes, assert_index_exists, register_index_dependency)
import time
from sqlalchemy.sql import text
def test_application_index_many(haf_node):
tt.logger.info(f'Start test_application_index_many')
......@@ -21,7 +23,7 @@ def test_application_index_many(haf_node):
session = haf_node.session
create_app(session, "application")
session.execute("CREATE EXTENSION IF NOT EXISTS btree_gin")
session.execute(text("CREATE EXTENSION IF NOT EXISTS btree_gin"))
register_index_dependency(haf_node, 'application',
r"CREATE INDEX IF NOT EXISTS hive_operations_vote_author_permlink_1 ON hafd.operations USING gin"
......@@ -55,7 +57,7 @@ def test_application_index_many(haf_node):
# THEN
while True:
result = session.execute("SELECT hive.check_if_registered_indexes_created('application')").scalar()
result = session.execute(text("SELECT hive.check_if_registered_indexes_created('application')")).scalar()
if result:
break
tt.logger.info("Indexes not yet created. Sleeping for 10 seconds...")
......
......@@ -5,6 +5,8 @@ from haf_local_tools.haf_node.monolithic_workaround import apply_block_log_type_
from haf_local_tools.system.haf import (connect_nodes, assert_index_exists, register_index_dependency)
import time
from sqlalchemy.sql import text
def test_application_index_one(haf_node):
tt.logger.info(f'Start test_application_index_one')
......@@ -21,7 +23,7 @@ def test_application_index_one(haf_node):
session = haf_node.session
create_app(session, "application")
session.execute("CREATE EXTENSION IF NOT EXISTS btree_gin")
session.execute(text("CREATE EXTENSION IF NOT EXISTS btree_gin"))
register_index_dependency(haf_node, 'application',
r"CREATE INDEX IF NOT EXISTS hive_operations_vote_author_permlink ON hafd.operations USING gin"
......@@ -34,7 +36,7 @@ def test_application_index_one(haf_node):
# THEN
while True:
result = session.execute("SELECT hive.check_if_registered_indexes_created('application')").scalar()
result = session.execute(text("SELECT hive.check_if_registered_indexes_created('application')")).scalar()
if result:
break
tt.logger.info("Indexes not yet created. Sleeping for 10 seconds...")
......
......@@ -7,6 +7,7 @@ from haf_local_tools.system.haf import (connect_nodes, assert_index_does_not_exi
import time
from sqlalchemy.sql import text
def test_application_index_replay(haf_node):
tt.logger.info(f'Start test_application_index_replay')
......@@ -27,7 +28,7 @@ def test_application_index_replay(haf_node):
session = haf_node.session
create_app(session, "application")
session.execute("CREATE EXTENSION IF NOT EXISTS btree_gin")
session.execute(text("CREATE EXTENSION IF NOT EXISTS btree_gin"))
register_index_dependency(haf_node, 'application',
r"CREATE INDEX IF NOT EXISTS hive_operations_vote_author_permlink ON hafd.operations USING gin"
......