The build and deployment process of the Hivemind project is split into several stages:
- build of the Python code - producing a distribution package (e.g. an egg) to be installed into the specified Python site
- data supply - the first part of application testing, covering the initial sync phase. Here the initial sync process (`hive sync`) is executed, filling the storage database based on the specified `hived` node URL used as a data source
- synced Hivemind instance deployment - starting a `hive server` process (asynchronously to the CI job) on the specified port to serve the API calls implemented in Hivemind
- running the specified API test suite against the deployed Hivemind instance (the end-to-end test phase).
Right now the basic deployment and testing phase covers the `smoketest` set of e2e tests, with sync limited to 5M blocks (due to the time a full sync requires).
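The stage sequence above could be laid out in GitLab CI roughly as follows. This is an illustrative sketch only: the job names, scripts and `when: manual` placement are assumptions, not the project's actual `.gitlab-ci.yml`.

```yaml
# Illustrative sketch only - job names, scripts and wrapper
# script names are assumptions, not the project's actual config.
stages:
  - build
  - data-supply
  - deployment
  - e2e-test

build:
  stage: build
  script:
    - python setup.py bdist_egg      # produce the distribution package

data-supply:
  stage: data-supply
  script:
    - hive sync                      # fill the storage database from hived

deployment:
  stage: deployment
  script:
    - ./start_hive_server.sh         # hypothetical wrapper starting `hive server`
  when: manual                       # allows re-deploying without re-syncing

e2e-test:
  stage: e2e-test
  script:
    - ./run_smoketest.sh             # hypothetical e2e test runner
```

Splitting data supply into its own stage is what makes the incremental scenarios below possible: a long sync can be done once, while build, deployment and e2e-test can be re-run on top of it.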
The data supply, deployment and e2e testing can be parametrized by specifying the following variables when spawning the pipeline:
- `HIVEMIND_MAX_BLOCK` - the number of blocks the `hive sync` process should be performed for
- `HIVEMIND_SOURCE_HIVED_URL` - the URL of a `hived` node providing blocks and virtual operations during syncing
- `HIVEMIND_HTTP_PORT` - the base port of the `hive server` process (to be started after a successful sync). The CI system can modify this number by adding the value of the `$CI_CONCURRENT_ID` variable when several concurrent jobs are running. The job description prints the actual port the started Hivemind instance is listening on. A dedicated entry in the created environment is also added (including the name of the branch deployed to that instance).
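The port-offset behaviour described above can be sketched as follows. The exact arithmetic used by the CI scripts is an assumption here (a simple addition of `$CI_CONCURRENT_ID` to the base port, as the text suggests):

```python
import os

def effective_port(base_port: int, concurrent_id: int = 0) -> int:
    """Port a started Hivemind instance would listen on, assuming the
    CI simply adds $CI_CONCURRENT_ID to $HIVEMIND_HTTP_PORT."""
    return base_port + concurrent_id

# Read both values from the environment, with illustrative fallbacks.
base = int(os.environ.get("HIVEMIND_HTTP_PORT", "8080"))
offset = int(os.environ.get("CI_CONCURRENT_ID", "0"))
print(effective_port(base, offset))
```

With a base port of 8080 and two concurrent jobs, the instances would listen on 8080 and 8081; checking the job description remains the reliable way to learn the actual port.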
- Regular smoketest - automatically spawned for each created MR; required to pass for MR processing to continue
- Partial (incremental) deployment scenarios - after the first pass of (at least) the data-supply phase, it shall be possible to manually trigger only the build, deployment and e2e-test phases. This way each developer fixing bugs in the API implementations can easily start their own working instance on the CI server, without needing to set up their own environment or wait for another lengthy sync
- Other deployment scenarios shall be possible for a specified number of blocks and `hived` source node URL. A similar scheme could be used for staging and production deployments performed at the GitLab level.