
HAF and HAF-app stack setup process diagram

    Authored by Bartek Wrona

    Shows the idea of how to organize a HAF instance and HAF-app setup, especially when apps need to build indexes directly on HAF tables (indexes which will then be managed by HAF). To achieve this, and to avoid problems caused by concurrent access to the HAF tables while the indexes are being created, a special optional phase has been introduced: for each app requiring such indexes, a container of the HAF instance (about to be built) is started and executes a maintenance script which builds the indexes.
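    The optional phase can be sketched as a small driver that runs one short-lived HAF container per app, strictly one at a time, so index creation never races another writer on the HAF tables. This is a hedged illustration: `run_haf_container`, the `docker run` shown in its comment, and the script paths are assumptions rather than the stack's actual tooling; only the `--maintenance-script` option mirrors the diagram below.

    ```shell
    #!/bin/bash
    # Sketch of the serialized "HAF Managed Index Builder" phase.

    run_haf_container() {
      # A real version would be roughly:
      #   docker run --rm --volumes-from haf-datadir hiveio/haf "$@"
      # (image/container names are assumptions). Stubbed here so the
      # sequencing logic can run, and be tested, without Docker.
      echo "haf $*"
    }

    build_indexes_serially() {
      # One short-lived HAF container per app; each must exit before
      # the next starts, matching the HAF_MIB-1 -> HAF_MIB-N chain.
      for app in "$@"; do
        run_haf_container "--maintenance-script=/haf_app_root/${app}/scripts/setup_haf_indexes.sh"
      done
      # Only after the last builder exits does the real instance start
      # its regular replay (flag name from hived, hedged here).
      run_haf_container "--replay-blockchain"
    }
    ```

    Called as `build_indexes_serially app-1 app-2`, it emits one invocation per app followed by the replay start, mirroring the HAF_MIB-1 through HAF_MIB-N hand-off before the regular HAF instance starts.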

    HAFStack_diagram.md
    sequenceDiagram 
        actor START as Gandalf
    
        box Transparent Optional APP specific custom index creation (managed by HAF)
            participant HAF_MIB1 as HAF_MIB-1
            participant HAF_MIB2 as HAF_MIB-2
            participant HAF_MIBN as HAF_MIB-N
        end
    
    Note over HAF_MIB2: HAF Managed Index Builder.<br/>Regular HAF image spawned with:<br/>--maintenance-script=app/scripts/setup_haf_indexes.sh
    
    participant HAF as HAF instance
    
        note over HAF: Regular HAF image, started to perform a data replay and further live syncing
    
    opt Any custom HAF-managed indexes needed
            START ->> HAF_MIB1: Performing a start with custom indexes
    
    loop For each app, build custom indexes
            HAF_MIB1 ->>+ HAF_MIB2: Building indexes for APP-1
    %%        HAF_MIB2 ->>- HAF_MIB1: APP-1 indexes finished
            destroy HAF_MIB1
            HAF_MIB2-xHAF_MIB1: HAF instance shutdown
    
            HAF_MIB2 -->>+ HAF_MIBN: Building indexes for APP-2 ... N
    %%        HAF_MIBN -->>- HAF_MIB2: APP-2 indexes finished
            destroy HAF_MIB2
            HAF_MIBN -x HAF_MIB2: HAF instance shutdown
        end
    
        destroy HAF_MIBN
        HAF -x HAF_MIBN: HAF instance shutdown<br/>Continue regular replay    
      end        
    
      START ->> HAF: Performing simple start
    
  create participant HAF_LIVE as HAF Livesync Trigger
      HAF ->>+ HAF_LIVE: Wait for entering LIVE sync
  %% HAF_LIVE ->>- APP1_setup_db: Once entered LIVE sync, start setup of all apps
    
  Note over HAF_LIVE: This is a helper utility container (allowing to run a bash script using psql).<br/>It just executes a script that detects the live sync phase on the pointed HAF instance
    
      loop
        HAF ->> HAF: Data replay and further live syncing
      end
    
      loop
        HAF_LIVE ->> HAF_LIVE: Waiting for a live sync...
      end
    
      create participant APP1_setup_db
      HAF_LIVE ->> APP1_setup_db: Entering application setup phase...
    
      destroy HAF_LIVE
      APP1_setup_db -x HAF_LIVE: Destroy Livesync Trigger helper container
    
    
      loop For each app, perform a database setup
        create participant APP2_setup_db
        APP1_setup_db ->>+ APP2_setup_db: Performing App Setup
    %%    APP2_setup_db ->>- APP1_setup_db: APP-1 setup finished
        destroy APP1_setup_db
        APP2_setup_db -x APP1_setup_db: Destroy setup helper container
        
        create participant APPN_setup_db
    APP2_setup_db ->>+ APPN_setup_db: Performing setup for App-2 ... N
    %%   APPN_setup_db ->>- APP2_setup_db: App-2 setup finished
        destroy APP2_setup_db
        APPN_setup_db -x APP2_setup_db: Destroy setup helper container
      end
    
        box Transparent Application database setup phase
            participant APP1_setup_db as APP-1 setup
            participant APP2_setup_db as APP-2 setup
            participant APPN_setup_db as APP-N setup
        end
    
        Note over APP2_setup_db: Each APP-setup is a helper utility container (allowing to run a bash script using psql).<br/>It just has the haf_app_root volume mapped and executes:<br/>/haf_app_root/scripts/setup_db.sh --postgres-url=postgres://haf_admin@haf:5432/haf_block_log
    
        create participant APP1_sync
        APPN_setup_db ->>+ APP1_sync: Entering applications sync phase
        create participant APP2_sync
        APPN_setup_db ->>+ APP2_sync: Entering applications sync phase
        create participant APPN_sync
        APPN_setup_db ->>+ APPN_sync: Entering applications sync phase
    
        box Transparent Application data syncing phase
            participant APP1_sync as APP-1 sync
            participant APP2_sync as APP-2 sync
            participant APPN_sync as APP-N sync
        end
    
        Note over APP2_sync: Each APP-sync is an application-specific image, able to perform data syncing using the pointed HAF instance
    
        par Performing an APP-1 concurrent sync
            loop
            APP1_sync ->> APP1_sync: Data syncing
            end
        and Performing an APP-2 concurrent sync
            loop
            APP2_sync ->> APP2_sync: Data syncing
            end
        and Performing an APP-N concurrent sync
            loop
            APPN_sync ->> APPN_sync: Data syncing
            end
        end
    
        destroy APPN_setup_db
        APP1_sync -x APPN_setup_db: Destroy setup helper container
    
        create participant APP1_live_sync
        APP1_sync ->>+ APP1_live_sync: Starting APP-1 LIVE-sync detector
        create participant APP2_live_sync
        APP2_sync ->>+ APP2_live_sync: Starting APP-2 LIVE-sync detector
        create participant APPN_live_sync
        APPN_sync ->>+ APPN_live_sync: Starting APP-N LIVE-sync detector
    
        box Transparent Application live sync detectors
            participant APP1_live_sync as APP-1 live sync detector
            participant APP2_live_sync as APP-2 live sync detector
            participant APPN_live_sync as APP-N live sync detector
        end
    
        Note over APP2_live_sync: Each APP live sync detector is a helper utility container (allowing to run a bash script using psql).<br/>It just has the haf_app_root volume mapped and executes:<br/>/haf_app_root/scripts/wait_for_app_live_sync.sh --postgres-url=postgres://haf_admin@haf:5432/haf_block_log
    
        par Waiting for an APP-1 live sync
            loop
                APP1_live_sync ->> APP1_live_sync: Waiting for live sync
            end
        and Waiting for an APP-2 live sync
            loop
                APP2_live_sync ->> APP2_live_sync: Waiting for live sync
            end
        and Waiting for an APP-N live sync
            loop
                APPN_live_sync ->> APPN_live_sync: Waiting for live sync
            end
        end
    
        create participant APP1_api
        APP1_live_sync ->>+ APP1_api: Starting APP-1 API server container
        create participant APP2_api
        APP2_live_sync ->>+ APP2_api: Starting APP-2 API server container
        create participant APPN_api
        APPN_live_sync ->>+ APPN_api: Starting APP-N API server container
    
        box Transparent Application API Server containers
            participant APP1_api as APP-1 API server
            participant APP2_api as APP-2 API server
            participant APPN_api as APP-N API server
        end
    
        Note over APP2_api: Each APP API server container uses an app-specific image
    
        destroy APP1_live_sync
        APP1_api -x APP1_live_sync: Destroy live sync detector container
    
        destroy APP2_live_sync
        APP2_api -x APP2_live_sync: Destroy live sync detector container
    
        destroy APPN_live_sync
        APPN_api -x APPN_live_sync: Destroy live sync detector container
    
        par Handling APP-1 API requests
            loop
                APP1_api ->> APP1_api: Processing requests
            end
        and Handling APP-2 API requests
            loop
                APP2_api ->> APP2_api: Processing requests
            end
        and Handling APP-N API requests
            loop
                APPN_api ->> APPN_api: Processing requests
            end
        end
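
    The HAF Livesync Trigger and the per-app live sync detectors above boil down to the same polling pattern: query the database pointed at by `--postgres-url` until it reports live sync, then exit so the next phase can start. A minimal sketch, with assumptions labeled: the real helpers run psql against the URL shown in the diagram's notes, but here `check_sync_state` is a stub (the exact query and state values depend on the HAF schema and are not taken from the source) so the loop is runnable without a database.

    ```shell
    #!/bin/bash
    # Sketch of a live-sync detector's polling loop.

    POSTGRES_URL="postgres://haf_admin@haf:5432/haf_block_log"

    check_sync_state() {
      # The real helper would run psql against "$POSTGRES_URL" and read
      # whatever sync-state indicator HAF exposes (exact query is an
      # assumption, so it is omitted). Stub: report "replay" twice, then
      # "live", simulating a node that catches up on the third poll.
      if [ "$1" -ge 3 ]; then echo "live"; else echo "replay"; fi
    }

    wait_for_live_sync() {
      local polls=0
      while :; do
        polls=$((polls + 1))
        if [ "$(check_sync_state "$polls")" = "live" ]; then
          echo "live sync reached after $polls polls"
          return 0
        fi
        sleep 0   # a real detector would sleep a few seconds between polls
      done
    }
    ```

    The container exits as soon as `wait_for_live_sync` returns; in the diagram that exit is exactly what unblocks the next phase (app database setup after the HAF Livesync Trigger, API server startup after each APP live sync detector).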
    