To build an image holding a Hivemind instance, please use [build_instance.sh](scripts/ci-helpers/build_instance.sh). This script requires several parameters:
- a tag identifier to be set on the built image
- the directory where the Hivemind source code is located
- a docker registry URL, used to produce a fully qualified image name and to correctly resolve the image's dependencies
```bash
# Assuming you are in the workdir directory, perform an out-of-source build:
../hivemind/scripts/ci-helpers/build_instance.sh local ../hivemind registry.gitlab.syncad.com/hive/hivemind
```
#### Running HAF instance container
A Hivemind instance requires a HAF instance to process incoming blockchain data and to store its own data in a fork-resistant manner (this allows hivemind data to be reverted in case of a fork).
The easiest way to set up a HAF instance is to use a dockerized one.
To start a HAF instance, we need to prepare a data directory (for example, a `workplace-haf` directory created at the same level as the `haf` directory) containing:
- a `blockchain` subdirectory (where the `block_log` file used by hived can be placed)
- optionally (but very useful), a copy of the `haf/doc/haf_postgresql_conf.d` directory, which allows simple customization of the Postgres database setup by modifying the `custom_postgres.conf` and `custom_pg_hba.conf` files stored inside
Please take care to set correct file permissions in order to provide write access to the data directory for processes running inside the HAF container.
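For local testing, a deliberately permissive sketch (the proper owner depends on the user the HAF image runs as):

```bash
# Sketch for local testing only: grants read/write access to everyone.
# In real deployments, chown the directory to the uid the HAF container uses.
chmod -R a+rw workplace-haf
```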
For example, for testing purposes, you can spawn a 5M block replay to prepare a HAF database for further quick testing. To avoid syncing from the network, create a `blockchain` subdirectory inside `workplace-haf` and copy a 5M `block_log` into it (either a split or a monolithic block_log). Alternatively, you can skip this and let hived download the first 5M blocks, but then you need to remove the `--replay` option. The resulting layout:
```
└── workplace-haf
├── blockchain
│ └── block_log
```
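With the data directory in place, you can start the replay. A minimal sketch, assuming HAF's `run_hived_img.sh` helper script and a HAF image you have already built or pulled (the script name, image tag, and stop-at-block flag are assumptions that may differ across HAF versions):

```bash
# Sketch only: image tag and helper-script flags are assumptions,
# check the HAF documentation matching your version.
../haf/scripts/run_hived_img.sh registry.gitlab.syncad.com/hive/haf/instance:latest \
  --name=haf-mainnet-instance \
  --data-dir=$(pwd)/workplace-haf \
  --replay --stop-at-block=5000000
```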
By examining the hived.log file or using `docker logs haf-mainnet-instance`, you can check the state of the started instance. Once the replay has finished, you can continue and start the Hivemind sync process.
Example output of the hived process stopped at the 5,000,000th block:
```
2022-12-19T18:28:05.575687 webserver_plugin.cpp:261 operator() ] start listening for http requests on 0.0.0.0:8090
2022-12-19T18:28:05.575716 webserver_plugin.cpp:263 operator() ] start listening for ws requests on 0.0.0.0:8090
2022-12-19T18:28:35.575535 chain_plugin.cpp:380 operator() ] No P2P data (block/transaction) received in last 30 seconds... peer_count=0
```
#### Running Hivemind instance container
The built Hivemind instance requires a preconfigured HAF database to store its data. You can perform this setup with the `install_app` command before starting the sync.
The commands below assume that the running HAF container has the IP address 172.17.0.2.
Before installing the Hivemind app, make sure the HAF replay has finished. It is complete when you see log lines like these:
```
2025-01-15T12:06:28.244946 livesync_data_dumper.cpp:85 livesync_data_dumper ] livesync dumper created
2025-01-15T12:06:28.244960 data_processor.cpp:68 operator() ] Account operations data writer_1 data processor connected successfully ...
```
Next, update the HAF container configuration to allow connections to the Postgres DB. You can use `lazydocker` for this: run lazydocker, choose the proper docker container, and then press shift+e to enter the container.
Add these lines to `/etc/postgresql/17/main/pg_hba.conf` (sudo may be needed):
```
host all all 0.0.0.0/0 trust
local all all peer
```
Then restart postgresql: `sudo /etc/init.d/postgresql restart`. (If for some reason the docker container shuts down, just start it again and repeat this step.)
## Updating from an existing hivemind database
Now the HAF database is ready for the Hivemind part to be applied. You can explore the DB inside the container with: `PGOPTIONS='-c search_path=hafd' psql -U haf_admin -d haf_block_log`
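To apply it, you can use the `install_app` command mentioned earlier via the dockerized Hivemind instance. A minimal sketch, mirroring the `run_instance.sh` invocation shown later; the image tag and haf_admin URL are assumptions for your environment:

```bash
# Sketch: install the Hivemind application schema into the HAF database.
# Image tag and haf_admin URL are assumptions - adjust to your environment.
./scripts/run_instance.sh registry.gitlab.syncad.com/hive/hivemind/instance:local-hivemind-develop \
  install_app --database-admin-url="postgresql://haf_admin@172.17.0.2/haf_block_log"
```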
### Start the hivemind indexer (aka synchronization process)
```bash
hive sync
```
Begin the sync process with the command above (it will take a while).
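If you are using the dockerized instance, the same can be done through `run_instance.sh` (a sketch mirroring the `server` invocation shown later; the image tag and database URL are assumptions for your setup):

```bash
# Sketch: dockerized equivalent of `hive sync`.
# Image tag and database URL are assumptions - adjust to your environment.
./scripts/run_instance.sh registry.gitlab.syncad.com/hive/hivemind/instance:local-hivemind-develop \
  sync --database-url="postgresql://hivemind@172.17.0.2:5432/haf_block_log"
```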
1. Make sure that the current version of `hivemind` is installed.
2. API tests require that `hivemind` is synced to a node replayed up to `5_000_024` blocks (including mocks).\
This means you should have your HAF database replayed up to `5_000_000` mainnet blocks and then run the mocking script with:
```bash
cd hivemind/scripts/ci/
./add-mocks-to-db.sh --postgres-url="postgresql://haf_admin@172.17.0.2/haf_block_log" # haf_admin access URL, assuming HAF is running on 172.17.0.2
```
Note: make sure that the mocks have been added correctly via `SELECT num FROM hafd.blocks ORDER BY num DESC LIMIT 1;`. This query should return `5000024`; if it still returns `5000000`, repeat the previous steps (uninstall the hivemind app or remove the DB and recreate it).
3. Run `hivemind` in `server` mode
4. Set env variables (you may need to uncomment `export PGRST_DB_ROOT_SPEC="home"` in `scripts/start_postgrest.sh`; otherwise empty JSONs could be returned, because PostgREST doesn't support JSON-RPC and there must be a proxy which handles this).
We can launch the PostgREST server in two ways (from the root directory of the `hivemind` repo):
- via docker:
```
./scripts/run_instance.sh registry.gitlab.syncad.com/hive/hivemind/instance:local-hivemind-develop server --database-url="postgresql://hivemind@172.17.0.2:5432/haf_block_log" --http-server-port=8080
```
- by directly launching the script:
`./scripts/start_postgrest.sh --host=172.17.0.2`
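Either way, you can quickly check that the server responds (a sketch; `hive.db_head_state` is assumed here as a lightweight health-check method, and the port matches the docker invocation above):

```bash
# Sketch: JSON-RPC style health check against the running server.
# Method name and port are assumptions - adjust to your setup.
curl -s -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "hive.db_head_state", "params": {}}' \
  http://localhost:8080/
```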
5. Run tests:
While the PostgREST server is running, we can run all test cases from a specific directory (again from the root directory of the `hivemind` repo):
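A hypothetical invocation, assuming the tavern-based API test cases live under `tests/api_tests/hivemind/tavern` (the path and test selection are assumptions, adjust them to your checkout):

```bash
# Hypothetical example: run one directory of tavern API test cases.
# Path and test layout are assumptions - adjust to your checkout.
cd tests/api_tests/hivemind/tavern
pytest -v condenser_api_patterns/get_follow_count/
```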