hive / haf_api_node

Commit 65c0a15f, authored 1 week ago by Marcin
remove .env
parent ec5a8395
Pipeline #120233 failed 1 week ago (stages: build, replay, test, cleanup)
Showing 1 changed file: .env deleted (100644 → 0), 0 additions, 263 deletions
# The name of the ZFS storage pool for HAF to use
ZPOOL="haf-pool-2"
# The name of the dataset on $ZPOOL where HAF will store its data
# HAF won't read/write anything outside of $ZPOOL/$TOP_LEVEL_DATASET,
# so you can have, e.g., multiple HAF installations on the same
# pool by changing TOP_LEVEL_DATASET
TOP_LEVEL_DATASET="haf-datadir"
# these defaults usually don't need changing
ZPOOL_MOUNT_POINT="/${ZPOOL}"
TOP_LEVEL_DATASET_MOUNTPOINT="${ZPOOL_MOUNT_POINT}/${TOP_LEVEL_DATASET}"
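# A minimal sketch of creating the pool and dataset named above (assumes a spare
# disk at /dev/sdb and ZFS tools installed; the device name is only an example and
# this is not the project's official setup procedure):
#   zpool create haf-pool-2 /dev/sdb
#   zfs create haf-pool-2/haf-datadir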
# COMPOSE_PROFILES is the list of HAF services you want to control when
# you run `docker compose up` etc. It's a comma-separated list of profiles
# taken from:
# - core: the minimal HAF system of a database and hived
# - admin: useful tools for administrating HAF: pgadmin, pghero
# - apps: core HAF apps: hivemind, hafah, hafbe (balance-tracker is a subapp)
# - servers: services for routing/caching API calls: haproxy, jussi (JSON caching), varnish (REST caching)
# - monitoring: services for Prometheus, Grafana, Loki, cAdvisor, Node Exporter, Promtail, Postgres Exporter, Blackbox Exporter, etc.
# COMPOSE_PROFILES="core,admin,hafah,hivemind,servers"
# COMPOSE_PROFILES="core,admin,hafah,hafbe,hivemind,servers,monitoring"
COMPOSE_PROFILES="core,admin,servers,apps,monitoring,hivesense"
# The registry where Hive docker images are pulled from. Normally, you
# should set this to the default, `registry.hive.blog`, or to Docker Hub,
# where stable images are published. If you want to run pre-release
# images, change this to `registry.gitlab.syncad.com/hive`, where CI
# builds are automatically pushed.
# HIVE_API_NODE_REGISTRY=registry.hive.blog
# HIVE_API_NODE_REGISTRY=hiveio
HIVE_API_NODE_REGISTRY=registry.gitlab.syncad.com/hive
# To use the same tagged version of all the Hive API node images,
# set it here. You can override the tags for individual images
# below
HIVE_API_NODE_VERSION=1.27.9
# Global settings
# override the HAF core image's version and registry image here:
# HAF_IMAGE=${HIVE_API_NODE_REGISTRY}/haf
# HAF_VERSION=${HIVE_API_NODE_VERSION}
HAF_IMAGE=registry.gitlab.syncad.com/hive/haf/instance
HAF_VERSION=0f4ab48d
# HAF_DATA_DIRECTORY="${TOP_LEVEL_DATASET_MOUNTPOINT}"
# HAF_LOG_DIRECTORY="${TOP_LEVEL_DATASET_MOUNTPOINT}/logs"
# HAF_WAL_DIRECTORY="${TOP_LEVEL_DATASET_MOUNTPOINT}/shared_memory/haf_wal"
# If you need to perform a massive sync of HAF (i.e. you are not restoring from a ZFS snapshot),
# you can sync faster by temporarily using an in-memory shared_memory.bin.
# To do this, comment out the first line below, uncomment the second, and
# mount an appropriate tmpfs filesystem there (see the sketch further below).
# After the sync has finished, run `docker compose down`, move the shared_memory.bin
# file to the shared_memory directory, edit this file to restore the original values, and
# run `docker compose up -d` to restart HAF.
HAF_SHM_DIRECTORY="${TOP_LEVEL_DATASET_MOUNTPOINT}/shared_memory"
# HAF_SHM_DIRECTORY="/mnt/haf_shared_mem"
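# A rough command sketch of the tmpfs procedure described above (the mount point
# and the 32G size are assumptions; adjust to your hardware):
#   mkdir -p /mnt/haf_shared_mem
#   mount -t tmpfs -o size=32G tmpfs /mnt/haf_shared_mem
#   # ...switch HAF_SHM_DIRECTORY above to /mnt/haf_shared_mem and run the sync...
#   docker compose down
#   mv /mnt/haf_shared_mem/shared_memory.bin "${TOP_LEVEL_DATASET_MOUNTPOINT}/shared_memory/"
#   # ...restore HAF_SHM_DIRECTORY to its original value, then:
#   docker compose up -d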
# The docker compose project name, gets prefixed onto each container name
PROJECT_NAME=haf-world-test
# The docker network name. If you run two HAF instances on the same server,
# give them different network names to keep them isolated; otherwise this
# is unimportant.
NETWORK_NAME=haft
# List of arguments for the HAF service
# ARGUMENTS=""
# ARGUMENTS="--replay-blockchain"
# ARGUMENTS="--dump-snapshot=20230821"
# ARGUMENTS="--skip-hived"
#
# Example of how to use the monitoring services
#
# ARGUMENTS="--replay-blockchain --stop-at-block 5000000 --block-stats-report-output=NOTIFY --block-stats-report-type=FULL --notifications-endpoint=hived-pme:9185"
#
# Mandatory options are:
# --block-stats-report-output=NOTIFY --block-stats-report-type=FULL --notifications-endpoint=hived-pme:9185
# These activate endpoint notifications for the hived-pme service (a log converter from hived to Prometheus metrics)
#
# By default, 5 dashboards are available:
#
# - Blockstats (available after the replay phase; shows the live state of blocks, times, delays, etc.)
# - cAdvisor Docker Container (status of the containers in the stack: CPU, memory, I/O, network, ...)
# - Node Exporter Full (full state of the host on which haf-api-node is running; an overview of all available metrics)
# - Monitor Services (status of the containers included in the monitoring system)
# - PostgreSQL Databases (database stats from the Postgres exporter)
#
# Additional logs are collected from all containers in the stack via Loki and Promtail
# The default Grafana login and password are admin/admin - remember to change them after the first login
# Statistics provided by Grafana are available at the host address on port 3000 (http(s)://hostname:3000)
ARGUMENTS="--replay-blockchain --block-stats-report-output=NOTIFY --block-stats-report-type=FULL --notifications-endpoint=hived-pme:9185"
# The default setup will run the recommended version of HAfAH;
# you can run a custom version by un-commenting and modifying the
# values below
# HAFAH_IMAGE=${HIVE_API_NODE_REGISTRY}/hafah
# HAFAH_VERSION=${HIVE_API_NODE_VERSION}
# The default setup will run the recommended version of Hivemind using the values
# below. You can override them here to run a custom version of Hivemind
# HIVEMIND_IMAGE=${HIVE_API_NODE_REGISTRY}/hivemind
# HIVEMIND_VERSION=${HIVE_API_NODE_VERSION}
# HIVEMIND_REWRITER_IMAGE=${HIVE_API_NODE_REGISTRY}/hivemind/postgrest-rewriter
#HIVEMIND_IMAGE=hivemind
#HIVEMIND_VERSION=local
#HIVEMIND_REWRITER_IMAGE=hivemind_rewriter
HIVEMIND_SYNC_ARGS=
# The default setup will run the recommended version of balance tracker;
# you can run a custom version by un-commenting and modifying the
# values below
# BALANCE_TRACKER_IMAGE=${HIVE_API_NODE_REGISTRY}/balance_tracker
# BALANCE_TRACKER_VERSION=${HIVE_API_NODE_VERSION}
# REPUTATION_TRACKER_ADDON
# REPUTATION_TRACKER_IMAGE=${HIVE_API_NODE_REGISTRY}/reputation_tracker
# REPUTATION_TRACKER_VERSION=${HIVE_API_NODE_VERSION}
# There are two ways of running Balance Tracker: as a standalone app, or
# integrated with HAF Block Explorer. While you can technically run both,
# there's no good reason to do so--you'll just waste disk space and processing
# power maintaining two copies of the data.
# Regardless of which way you decide to run Balance Tracker, you will need
# to run a single API server, and it needs to know which schema the data is
# stored in. It will be in "hafbe_bal" if you're running HAF Block Explorer,
# and "btracker_app" if you're running Balance Tracker standalone.
# The default behavior is to serve data from the HAF Block Explorer, but
# if you're only running the standalone Balance Tracker, uncomment the next
# line:
# BTRACKER_SCHEMA="btracker_app"
# The default setup will run the recommended version of HAF block explorer;
# you can run a custom version by un-commenting and modifying the
# values below
# HAF_BLOCK_EXPLORER_IMAGE=${HIVE_API_NODE_REGISTRY}/haf_block_explorer
# HAF_BLOCK_EXPLORER_VERSION=${HIVE_API_NODE_VERSION}
# The default setup uses "Drone" as the API reverse proxy & cache for the old JSON-RPC-style
# calls. There is an older alternative reverse proxy, "Jussi", that you can choose to use instead.
# For more info about drone/jussi, see:
# https://hive.blog/hive-139531/@deathwing/announcing-drone-or-leveling-up-hive-api-nodes-and-user-experience
# To replace Drone with Jussi, uncomment the next line:
# JSONRPC_API_SERVER_NAME=jussi
# The default setup will run the recommended version of Jussi;
# you can run a custom version by un-commenting and modifying the
# values below
# JUSSI_IMAGE=${HIVE_API_NODE_REGISTRY}/jussi
# JUSSI_VERSION=latest
# JUSSI_REDIS_MAX_MEMORY=8G
# If you have chosen to run Drone instead of Jussi, it will run the
# recommended version by default. You can run a custom version by un-commenting
# and modifying the values below
# DRONE_IMAGE=${HIVE_API_NODE_REGISTRY}/drone
# DRONE_VERSION=latest
# DRONE_LOG_LEVEL=warn,access_log=info
# In the default configuration, synchronous broadcast_transaction calls are not handled by
# your local stack, but instead are sent to a dedicated hived instance on api.hive.blog.
# (asynchronous broadcast_transaction calls and all other hived calls are always handled by
# your local instance of hived).
# Synchronous calls can easily tie up your hived node and cause performance problems for
# all hived API calls. For that reason, synchronous broadcast calls are deprecated. On
# public API servers, we typically run a separate hived instance for synchronous calls,
# so if they cause performance problems, it only impacts other users making synchronous
# calls.
# To avoid forcing every haf_api_node operator to run a second hived server, the default
# config forwards these disruptive calls to a public server dedicated to the purpose.
# If you want to handle these calls using your local hived node, or you want to forward
# these calls to a different server, override these variables:
# the values below will cause synchronous broadcasts to be handled by your own hived
# SYNC_BROADCAST_BACKEND_SERVER=haf
# SYNC_BROADCAST_BACKEND_PORT=8091
# SYNC_BROADCAST_BACKEND_SSL=no-ssl
# For running a full stack:
# if you need to run a custom image (for example, to use ACME DNS challenges), specify it here
# CADDY_IMAGE=${HIVE_API_NODE_REGISTRY}/haf_api_node/caddy
# CADDY_VERSION=latest
# The hostname you'll be running this server on. This should be a single hostname, the public
# hostname your server will be accessible from. This is used by the Swagger-UI REST API
# explorer for generating URLs pointing at your server. If this isn't a public server,
# this can be a local domain name.
PUBLIC_HOSTNAME="hive-3.pl.syncad.com"
BALANCE_TRACKER_SERVER_LOG_LEVEL=error
BLOCK_EXPLORER_SERVER_LOG_LEVEL=error
HAFAH_SERVER_LOG_LEVEL=error
HIVEMIND_SERVER_LOG_LEVEL=error
REPUTATION_TRACKER_SERVER_LOG_LEVEL=error
# There are several ways you can configure serving HTTP/HTTPS. Some examples:
# - to serve API using HTTPS with automatic redirect from HTTP -> HTTPS (the default),
# just give the hostname:
# CADDY_SITES="your.hostname.com"
# In the normal case, where you want to serve HTTP/HTTPS from the hostname you set in
# PUBLIC_HOSTNAME above, you don't need to set this variable; it will automatically take
# the value of PUBLIC_HOSTNAME
# - to serve using only HTTP (if you have nginx or something else handling SSL termination),
# you can use:
# CADDY_SITES="http://your.hostname.com"
# or even:
# CADDY_SITES="http://"
# if you want to respond on any hostname
# - to serve on either HTTP or HTTPS (i.e., respond to HTTP requests in the clear, instead of
# issuing a redirect):
# CADDY_SITES="http://your.hostname.com, https://your.hostname.com"
# - to serve on multiple hostnames, separate them with a comma and space:
# CADDY_SITES="your.hostname.com, your.other-hostname.net"
# CADDY_SITES="your.hostname.com"
# By default, we're configured to use a self-signed SSL certificate (by including the
# file below, which tells Caddy to generate a self-signed certificate). To obtain a real
# certificate from LetsEncrypt or otherwise, you can prevent the self-signed config
# from acting by mounting /dev/null in its place, then adding your own config
# files in the caddy/snippets directory
# WARNING: if you disable the self-signed certificate, Caddy will attempt to get a
# real certificate for PUBLIC_HOSTNAME from LetsEncrypt. If this server is
# behind a firewall or NAT, or PUBLIC_HOSTNAME is misconfigured, it will fail
# to get a certificate, and that will count against LetsEncrypt's rate limits.
#TLS_SELF_SIGNED_SNIPPET=caddy/self-signed.snippet
TLS_SELF_SIGNED_SNIPPET=/dev/null
# By default, we restrict access to the /admin URLs to localhost. You can allow
# outside connections by switching the following variable to /dev/null. First, though,
# you should protect the admin endpoints with a password or restrict them to a
# local network. See caddy/snippets/README.md for how.
LOCAL_ADMIN_ONLY_SNIPPET=caddy/local-admin-only.snippet
# LOCAL_ADMIN_ONLY_SNIPPET=/dev/null
# Caddy will only accept requests on the /admin/ endpoints over https by default.
# This is so that you can password-protect them with HTTP basicauth.
# However, if you've configured your server to only serve http, and something
# upstream is providing SSL, you can change this to allow access to the
# admin endpoints.
# ADMIN_ENDPOINT_PROTOCOL=http
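# A hypothetical password-protection snippet in Caddyfile syntax (the file name, user
# name, and hash are placeholders; see caddy/snippets/README.md for the project's actual
# convention, and generate the hash with `caddy hash-password`):
#   basicauth /admin/* {
#       admin <bcrypt-hash-from-caddy-hash-password>
#   }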
# Monitoring env variables
#
# PROMETHEUS_VERSION=v2.49.1
# NODE_EXPORTER_VERSION=v1.7.0
# CADVISOR_VERSION=v0.47.2
# GRAFANA_VERSION=10.3.3
# LOKI_VERSION=2.9.4
# PROMTAIL_VERSION=2.9.4
# HIVED_PME_VERSION=49a7312d
# BLACKBOX_VERSION=v0.24.0
# DATA_SOURCE="postgresql://postgres@haf:5432/postgres?sslmode=disable"
# HIVESENSE_IMAGE=${HIVE_API_NODE_REGISTRY}/hivesense
# HIVESENSE_VERSION=${HIVE_API_NODE_VERSION}
# HIVESENSE_REWRITER_IMAGE=${HIVE_API_NODE_REGISTRY}/hivesense/postgrest-rewriter
HIVESENSE_IMAGE=registry.gitlab.syncad.com/ickiewicz/hivesens
HIVESENSE_VERSION=8df84672
HIVESENSE_REWRITER_IMAGE=registry.gitlab.syncad.com/ickiewicz/hivesens/rewiter