1. General
The build and deployment process of the Hivemind project is split into several stages:
- build of the Python code, producing a distribution package (i.e. an egg) to be installed at a specified Python site
- data supply - the first part of application testing, covering the initial sync phase. Here the initial sync process (`hive sync`) is executed, filling the storage database based on the specified `hived` node URL used as a data source
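The data supply stage above can be sketched as a shell invocation. This is only an illustration: the flag names (`--steemd-url`, `--database-url`, `--test-max-block`) and the connection string are assumptions and may differ between hivemind versions.

```shell
# Hypothetical sketch of the data-supply stage: sync a limited number of
# blocks from a hived node into a local PostgreSQL storage database.
# All URLs, credentials, and flag names below are example assumptions.
hive sync \
    --steemd-url '{"default":"http://hive-4:8091"}' \
    --database-url postgresql://hivemind:password@localhost:5432/hivemind \
    --test-max-block 5000000
```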
Right now the basic deployment and testing phase covers the `smoketest` set of e2e tests and a sync limited to 5M blocks (due to the time required for a full sync).
2. Job parameters
The data supply, deployment, and e2e testing can be parametrized by specifying the following variables when spawning the pipeline:
* `HIVEMIND_MAX_BLOCK` - the number of blocks to perform the `hive sync` process for
* `HIVEMIND_SOURCE_HIVED_URL` - the URL of a `hived` node providing blocks and virtual operations during syncing, e.g.: `{"default":"http://hive-4:8091"}`
* `HIVEMIND_HTTP_PORT` - the base port of the hive-server process (started after a successful sync). The CI system can modify this number by adding the value of the `$CI_CONCURRENT_ID` variable when several concurrent jobs are running. The job description prints the actual port the started hivemind instance is listening on. A dedicated entry in the created environment is also added (including the name of the branch deployed to that instance).
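As a minimal sketch of the port adjustment described above, the actual listening port can be derived by adding `$CI_CONCURRENT_ID` to the base port; the concrete numbers here are made-up example values, not project defaults.

```shell
# Derive the actual hivemind port the way the CI system is described to:
# base port plus the runner-provided $CI_CONCURRENT_ID.
HIVEMIND_HTTP_PORT=8080   # hypothetical base port
CI_CONCURRENT_ID=2        # example value; normally set by the CI runner
ACTUAL_PORT=$((HIVEMIND_HTTP_PORT + CI_CONCURRENT_ID))
echo "hivemind instance listens on port ${ACTUAL_PORT}"
```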