Local Setup Instructions

  1. Create a new project in geostreams. Instructions are here.
  2. Configure the docker-compose file for the services that need to be added. The docker-compose file for this project, shown below, is based on the default Geostreams file and the default Clowder file and uses traefik v2, so minimal configuration is required on the machine. A quick smoke test of the resulting routes is sketched right after this list.


    1. docker-compose.yml:
      version: "3.3"
      
      services:
      traefik:
      image: traefik:latest
      networks:
      - clowder
      - geostreams
      volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      ports:
      - 80:80
      - 443:443
      labels:
      - traefik.enable=true
      - traefik.http.routers.traefik.rule=Host(`${TRAEFIK_HOST:-localhost}`) && (PathPrefix(`/traefik`) || PathPrefix(`/api`))
      - traefik.http.routers.traefik.service=api@internal
      - traefik.http.routers.traefik.entrypoints=http
      - traefik.http.routers.traefik.middlewares=traefik-strip
      - traefik.http.middlewares.traefik-strip.stripprefix.prefixes=/traefik
      # Redirect all HTTP to HTTPS permanently
      # - traefik.http.routers.http_catchall.rule=HostRegexp(`{any:.+}`)
      # - traefik.http.routers.http_catchall.entrypoints=web
      # - traefik.http.routers.http_catchall.middlewares=https_redirect
      # - traefik.http.middlewares.https_redirect.redirectscheme.scheme=https
      # - traefik.http.middlewares.https_redirect.redirectscheme.permanent=false
      command:
      --log.level=DEBUG
      --entrypoints.http.address=:80
      --entrypoints.https.address=:443
      --api.dashboard
      --providers.docker.exposedbydefault=false
      # "--certificatesResolvers.le.acme.email=<YOUR_EMAIL>"
      # "--certificatesResolvers.le.acme.storage=acme.json"
      # "--certificatesResolvers.le.acme.tlsChallenge=true"
      # "--certificatesResolvers.le.acme.httpChallenge=true"
      # "--certificatesResolvers.le.acme.httpChallenge.entryPoint=web"
      
      # ----------------------------------------------------------------------
      # GEOSTREAMS STACK
      # ----------------------------------------------------------------------
      geodashboard:
      image: hub.ncsa.illinois.edu/geostreams/gd-smartfarm:dev-latest
      networks:
      - geostreams
      labels:
      - traefik.enable=true
      - traefik.http.services.geodashboard.loadbalancer.server.port=80
      - traefik.http.routers.geodashboard.rule=Host(`${TRAEFIK_HOST:-localhost}`) && (PathPrefix(`${GD_PREFIX_PATH:-/}`))
      - traefik.http.routers.geodashboard.entrypoints=http
      restart: unless-stopped
      
      geostreams:
      image: geostreams/geostreams
      env_file:
      - ./geostreams.env
      networks:
      - geostreams
      labels:
      - traefik.enable=true
      - traefik.http.services.geostreams.loadbalancer.server.port=9000
      - traefik.http.routers.geostreams.rule=Host(`${TRAEFIK_HOST:-localhost}`) && (PathPrefix(`${GEOSTREAMS_PREFIX_PATH:-/}`))
      - traefik.http.routers.geostreams.entrypoints=http
      # - traefik.http.routers.whoami.tls=true
      # - traefik.http.routers.whoami.tls.certresolver=le
      volumes:
      - ./application.conf:/home/geostreams/conf/application.conf
      - ./messages.en:/home/geostreams/conf/messages.en
      restart: unless-stopped
      healthcheck:
      test: ["CMD", "curl", "-s", "--fail", "http://localhost:9000/geostreams/api/status"]
      postgres:
      image: mdillon/postgis:9.5
      networks:
      - geostreams
      ports:
      - 5432:5432
      volumes:
      - postgres:/var/lib/postgresql/data
      restart: unless-stopped 
      
      
      # ----------------------------------------------------------------------
      # CLOWDER APPLICATION
      # ----------------------------------------------------------------------
      
      # main clowder application
      clowder:
      image: clowder/clowder:${CLOWDER_VERSION:-latest}
      restart: unless-stopped
      networks:
      - clowder
      depends_on:
      - mongo
      environment:
      - CLOWDER_ADMINS=${CLOWDER_ADMINS:-admin@example.com}
      - CLOWDER_REGISTER=${CLOWDER_REGISTER:-false}
      - CLOWDER_CONTEXT=${CLOWDER_CONTEXT:-/}
      - CLOWDER_SSL=${CLOWDER_SSL:-false}
      - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F}
      - RABBITMQ_EXCHANGE=${RABBITMQ_EXCHANGE:-clowder}
      - RABBITMQ_CLOWDERURL=${RABBITMQ_CLOWDERURL:-http://clowder:9000}
      - SMTP_MOCK=${SMTP_MOCK:-true}
      - SMTP_SERVER=${SMTP_SERVER:-smtp}
      labels:
      - traefik.enable=true
      - traefik.http.services.clowder.loadbalancer.server.port=9000
      - traefik.http.routers.clowder.rule=Host(`${TRAEFIK_HOST:-localhost}`) && (PathPrefix(`${CLOWDER_PREFIX_PATH:-/}`))
      - traefik.http.routers.clowder.entrypoints=http
      volumes:
      - clowder-custom:/home/clowder/custom
      - clowder-data:/home/clowder/data
      healthcheck:
      test: ["CMD", "curl", "-s", "--fail", "http://localhost:9000/clowder/api/status"]
      
      # ----------------------------------------------------------------------
      # CLOWDER DEPENDENCIES
      # ----------------------------------------------------------------------
      
      # database to hold metadata (required)
      mongo:
      image: mongo:3.6
      restart: unless-stopped
      networks:
      - clowder
      volumes:
      - mongo:/data/db
      
      # message broker (optional but needed for extractors)
      rabbitmq:
      image: rabbitmq:management-alpine
      restart: unless-stopped
      networks:
      - clowder
      environment:
      - RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS=-rabbitmq_management path_prefix "/rabbitmq"
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER:-guest}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS:-guest}
      labels:
      - traefik.enable=true
      - traefik.http.services.rabbitmq.loadbalancer.server.port=15672
      - traefik.http.routers.rabbitmq.rule=Host(`${TRAEFIK_HOST:-localhost}`) && (PathPrefix(`/rabbitmq`))
      - traefik.http.routers.rabbitmq.entrypoints=http
      # - "traefik.website.frontend.whiteList.sourceRange=${TRAEFIK_IPFILTER:-172.16.0.0/12}"
      volumes:
      - rabbitmq:/var/lib/rabbitmq
      
      # search index (optional, needed for search and sorting future) 
      elasticsearch:
      image: elasticsearch:2
      command: elasticsearch -Des.cluster.name="clowder"
      networks:
      - clowder
      restart: unless-stopped
      environment:
      - cluster.name=clowder
      volumes:
      - elasticsearch:/usr/share/elasticsearch/data
      
      # monitor clowder extractors
      monitor:
      image: clowder/monitor:${CLOWDER_VERSION:-latest}
      restart: unless-stopped
      networks:
      - clowder
      depends_on:
      - rabbitmq
      environment:
      - RABBITMQ_URI=${RABBITMQ_URI:-amqp://guest:guest@rabbitmq/%2F}
      - RABBITMQ_MGMT_PORT=15672
      - RABBITMQ_MGMT_PATH=/rabbitmq
      labels:
      - traefik.enable=true
      - traefik.http.services.monitor.loadbalancer.server.port=9999
      - traefik.frontend.rule=${TRAEFIK_FRONTEND_RULE:-}PathPrefixStrip:/monitor
      - traefik.http.routers.monitor.rule=Host(`${TRAEFIK_HOST:-localhost}`) && (PathPrefix(`/monitor`))
      - traefik.http.routers.monitor.entrypoints=http
      
      networks:
      clowder:
      geostreams:
      
      volumes:
      traefik:
      clowder-data:
      clowder-custom:
      mongo:
      rabbitmq:
      elasticsearch:
      postgres:
      
      


    2. The following .env file sets up the routes:

      CLOWDER_PREFIX_PATH=/clowder
      CLOWDER_CONTEXT=/clowder/
      RABBITMQ_CLOWDERURL=http://clowder:9000/clowder
      TRAEFIK_HOST=localhost
      GEOSTREAMS_PREFIX_PATH=/geostreams
      GD_PREFIX_PATH=/
      
      


  3. For geostreams, use the example application.conf from https://github.com/geostreams/
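
With the stack above running and the values from the example .env file, a quick way to confirm that traefik is routing each path prefix is to curl the status endpoints. This is a minimal sketch, assuming the defaults shown above (TRAEFIK_HOST=localhost, Clowder under /clowder, Geostreams under /geostreams, the geodashboard at the root) and that all containers have finished starting; adjust the host and prefixes if your .env differs.

    # Smoke test of the traefik path-prefix routes (assumes the example .env above)
    HOST=localhost
    curl -s --fail "http://${HOST}/clowder/api/status"        # Clowder API behind /clowder
    curl -s --fail "http://${HOST}/geostreams/api/status"     # Geostreams API behind /geostreams
    curl -s -o /dev/null -w "%{http_code}\n" "http://${HOST}/"                    # geodashboard at the root prefix
    curl -s -o /dev/null -w "%{http_code}\n" "http://${HOST}/traefik/dashboard/"  # traefik dashboard (the /traefik prefix is stripped)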

...

...


Setting up the machine

  • Create a VM using the makevm script. Use one of the available floating IP addresses on Nebula and provide it to the script.

...

  • Once it's created, install docker and docker-compose on the machine. Additionally, install pass and gnupg2; without these, logging into Docker will give an error (see the command sketch after this list).
    • `sudo apt-get install pass gnupg2`
  • Log in to hub.ncsa.illinois.edu using the robot account credentials.
  • Add the docker-compose file set up previously, along with all the files it depends on, to the machine. Update the host names in the docker-compose file and application.conf.
  • Run `docker-compose up -d`.
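
Taken together, the machine setup above roughly corresponds to the command sketch below. It assumes a Debian/Ubuntu VM and the distribution packages for Docker and docker-compose; the package names, file list, and paths are illustrative and may differ for your image and project.

    # (run on the VM) install Docker, docker-compose, and the credential helpers
    sudo apt-get update
    sudo apt-get install -y docker.io docker-compose   # or install Docker per the official docs
    sudo apt-get install -y pass gnupg2                # needed for docker login to succeed

    # (run on the VM) log in to the NCSA registry with the robot account credentials
    docker login hub.ncsa.illinois.edu

    # (run locally) copy the compose file and everything it references to the VM;
    # the file names come from the docker-compose.yml above, the paths are placeholders
    scp docker-compose.yml .env geostreams.env application.conf messages.en user@<vm-host>:~/project/

    # (run on the VM) after updating host names in docker-compose.yml and application.conf
    cd ~/project && docker-compose up -d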


Configuring Build and Deployment Pipeline for Project

  • Add the following GitHub Actions public key to the machine:

    `cat PATH_TO_KEY | ssh -i PATH_TO_ID_KEY user@hostname "cat >> ~/.ssh/authorized_keys"`

    GitHub_Repo_Public_key:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCL/mqWBK1BTwZxKnZ546F9IbFlYY6qgFzh9xTM6eY+DxagKVV1BAr4kbqamnLYDzrOLC6zY+8k3xGS1HSxp8UAWXLnPzDkb13uXj+neGty7DwMIVWRVSc0JNa0cRaEKI1wC9AK1utKEU7aaGu6fZsmExmXNzIzxLIYVUFdW8G2GVoK9wNSba3OT2rneutgOUrb5PR6ADpfBEO8h48CcP6edw5A2HoJ0ZXySeadvnInOhp3yisO3khaZ7t4ZPRtRVRM+M+V9H+1JpOmsulfAZUyEdntzU1MkduFAz+X5T/h9IhHYlplqJ00GEjc/zIPS9y39Be5XqgvMadapupmeGpZWU6K/xvluATcP1xGqy7ytytAr6ZIbsCyKWGJXeUYsR8K1MdNC8zAVB+2Cnu3Df0TUf7xIV6sSx66MtehAADIGxtik4KGl5DZiWENgf0aCSeJHAjRkxGx4mxb/Cp9iercoKs+6uIAKd7fU/pYK9kw3z8WlkOsTcQcII9hzY6bQs= Smartfarm Key for Github Actions


  • Update the GitHub deployment action and add the details for the new project under strategy->matrix:

    name: <project_name>
    gd-name: <lerna_project_name>
    prod-host: <production_host_name>
    dev-host: <dev_host_name>
    file-path: <docker-compose file path on VM>
    description:


  • For development machines, an image with the develop tag will be created in hub.ncsa.illinois.edu; for production machines, the latest tag will be used. The docker-compose files should be updated accordingly (a refresh sketch follows below).
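
As a rough illustration of the tag convention above, the commands below show how a machine picks up the image the pipeline publishes for it. This is a sketch, not part of the pipeline itself: the gd-smartfarm image name is taken from the compose file above, and the exact tags (develop vs. dev-latest vs. latest) should be verified against what the GitHub action actually pushes.

    # Refreshing a deployment after the pipeline pushes a new image.
    # Dev machines should reference the develop-tagged image in docker-compose.yml,
    # production machines the latest tag, e.g.:
    #   hub.ncsa.illinois.edu/geostreams/gd-smartfarm:develop   (development)
    #   hub.ncsa.illinois.edu/geostreams/gd-smartfarm:latest    (production)
    docker login hub.ncsa.illinois.edu   # robot account credentials
    docker-compose pull                  # fetch the tag referenced in docker-compose.yml
    docker-compose up -d                 # re-create any containers that have a newer image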