Create VMs on Nebula

  1. If this is a new instance for GLTG and you intend to use the current database VMs, you will need to build three new VMs:
    1. Nginx proxy server
    2. Clowder Server
    3. Geodashboard Server
  2. VMs on Nebula are created with a script that uses python-openstackclient (tested with python-openstackclient==3.4.1; some other pip libraries may be required as well).  It is recommended to install these into a virtualenv.
    1. Set up the environment.  In a Linux (or macOS) shell within the virtualenv:
      1. export OS_AUTH_URL=http://nebula.ncsa.illinois.edu:5000/v2.0
        export OS_TENANT_ID=c4121a001a8240d4a8b701d664ef4bf0
        export OS_TENANT_NAME="GLTG"							# This is for GLTG project, change to your project name
        export OS_PROJECT_NAME="GLTG"							# This is for GLTG project, change to your project name
        export OS_USERNAME="username"							# Your Nebula username
        export OS_PASSWORD="password"							# Your Nebula password
        export OS_REGION_NAME="RegionOne"
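      If the virtualenv does not exist yet, a minimal sketch of creating it and installing the client (the environment name 'nebula-env' is just an example):

        virtualenv nebula-env                          # assumes the virtualenv tool is installed
        source nebula-env/bin/activate                 # activate the new environment
        pip install python-openstackclient==3.4.1      # the version the script was tested with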
    2. Run the script 

      1. Get the script https://opensource.ncsa.illinois.edu/bitbucket/snippets/6b41ea2cfea041cb822d66b909a7bf31

      2. Run the script (if it fails to run, make sure you have the correct permissions: chmod 755 makevm.sh):

        ./makevm.sh -n <name of new VM> -k <name of key>

        for example, creating a new vm named "ilnlrs-dev" with nebula key pair "gltg"

        ./makevm.sh -n ilnlrs-dev -k gltg
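
        Once the script finishes, the openstack CLI installed above can confirm the VM exists and show its addresses, for example:

          openstack server list                # lists the project's VMs and their IPs
          openstack server show ilnlrs-dev     # details for the example VM created above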

Setup Nginx Server

  1. Login to the Nginx server.  If you are using the project key pair it will look like this (make sure you have the key in your .ssh folder; get it from the Nebula interface https://nebula.ncsa.illinois.edu/dashboard/project/access_and_security/):

    ssh -i ~/.ssh/<key> ubuntu@<vm floating ip address>
  2. Install Nginx:

    sudo apt-get install nginx
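
    To confirm the install succeeded (optional), check the version and that the service is running:

      nginx -v                      # prints the installed nginx version
      sudo service nginx status     # shows whether nginx is running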
  3. Edit nginx config 

    1. Create and edit the site config:

      sudo vim /etc/nginx/sites-available/gltg	# creates a file named gltg and opens it in vim (change to match your project)
    2. Populate the config (this is a bare minimum without SSL; more documentation is coming):

      server {
        listen 80;
        client_max_body_size 0;
      
        proxy_read_timeout 300;  # answer from server, 5 min
        proxy_send_timeout 300;  # chunks to server, 5 min
      
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        port_in_redirect   off;
      
        root /usr/share/nginx/html;
        index index.html index.htm;
      
        # Deny all attempts to access hidden files
        # such as .htaccess, .htpasswd, .DS_Store (Mac).
        location ~ /\. {
          deny all;
        }
      
        location / {
            try_files $uri $uri/ /index.html;
        }
      
        rewrite ^/geodashboard$ /geodashboard/ permanent;
        location /geodashboard {
          proxy_pass http://<geodashboard floating IP>:9000;				# replace <geodashboard floating IP> with the floating IP of your geodashboard machine
        }
      
        rewrite ^/clowder$ /clowder/ permanent;
        location /clowder/ {
          proxy_pass http://<clowder floating IP>:9000;					# replace <clowder floating IP> with the floating IP of your clowder machine
        }
      }
    3. Delete the soft link to the default config and enable the new config:

      sudo rm /etc/nginx/sites-enabled/default
      sudo ln -s /etc/nginx/sites-available/gltg /etc/nginx/sites-enabled/gltg
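
      As an optional sanity check, verify the config syntax and reload nginx:

        sudo nginx -t                 # test the configuration for syntax errors
        sudo service nginx reload     # reload nginx with the new site enabled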
    4. Add (or edit) the index file in the nginx root

      1. For the tutorial we will only put a redirect to the /geodashboard route.  If you want to put in a static landing page with links to /geodashboard, /clowder, etc, please do - everyone will love you.

        sudo vim /usr/share/nginx/html/index.html			# This is the root path in the config we just created
      2. Add this text to the file:

        <meta http-equiv="refresh" content="0; url=/geodashboard/" />
  4. Install geodashboard-v3 https://opensource.ncsa.illinois.edu/bitbucket/projects/GEOD/repos/geodashboard-v3/browse (as of this documentation, geodashboard-v2 runs on the geodashboard server, but uses the v3 search page which is installed on the nginx server)

    1. In the nginx root directory (where we put the redirect above), create a directory 'gd3' containing the build files bundle.js, config.js, and index.html.  If v3 doesn't work, note there is a known build issue where an extra "}" needs to be added at the end of config.js.

      ls /usr/share/nginx/html/gd3						# if gd3 directory is in correct nginx root path, 'ls' will show you the v3 build files 
      bundle.js  config.js  index.html
  5. Get an SSL certificate.
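
    One common way to obtain a certificate (one option among several; adjust to your domain and Ubuntu release) is Let's Encrypt's certbot with its nginx plugin:

      sudo apt-get install certbot python3-certbot-nginx    # plugin package name varies by Ubuntu release
      sudo certbot --nginx -d <your domain>                 # requests a certificate and updates the nginx config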

Setup Clowder Server 

  1. Set up Puppet (getting started - Marcus will need to do this before these docs are complete)
    1. Login to foreman https://gonzo-foreman.ncsa.illinois.edu/hosts
      1. verify existence of host - the name will be <name of vm>.os.ncsa.edu
      2. puppet env
        1. production
      3. puppet classes
        1. clowder
        2. or install java

          sudo apt-get install default-jre
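
          You can confirm the JRE is available with:

            java -version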
    2. You may need to run 'service puppet restart' on the host machine.
  2. configure clowder
    1. login to clowder machine

    2. Edit /home/clowder/clowder/custom/custom.conf 

      1. Clowder should (as in must) have a security token if it is exposed to the internet. The example below assumes no SSL (securesocial.ssl=false); if you are serving over SSL, set securesocial.ssl=true instead.

        permissions = public
        application.context="/clowder/"
        
        initialAdmins=""   		# add admin emails between quotes
        smtp.host="smtp.ncsa.illinois.edu"
        
        # securesocial customization
        securesocial.onLoginGoTo=/clowder/
        securesocial.onLogoutGoTo=/clowder/login
        securesocial.ssl=false
        securesocial.cookie.idleTimeoutInMinutes=1440
        
        # rabbitmq
        clowder.rabbitmq.uri="amqp://clowder:***********@rabbitmq.ncsa.illinois.edu/clowder" 	# you'll need the security code
        clowder.rabbitmq.exchange="gltg-clowder-dev"
        
        # mongodb      These are the IP addresses for the current mongodb servers
        mongodbURI="mongodb://141.142.209.172:27017,141.142.209.173:27017,141.142.209.174:27017/gltg?replicaSet=GLTG&maxpoolsize=100"
        
        # postgres
        postgres.user="***********"			# you'll need the postgres username
        postgres.password="**************"  # you'll need the postgres user password
        postgres.host="141.142.209.176"		# This is the IP of the current postgres vm
        postgres.db="geostream-dev"         # you can start by using one of the existing databases - this is for gltg-dev
        
        # cache
        geostream.cache=/home/clowder/cache
        
        # security options
        
        application.secret="******************************************************"  	# you'll need to create an application.secret (random)
        commKey=************															# you'll need to create a commKey (random)
        
        
        # storage
        service.byteStorage=services.filesystem.DiskByteStorageService
        clowder.diskStorage.path="/home/clowder/data"
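
        The random values for application.secret and commKey can be generated however you like; one option is openssl:

          openssl rand -base64 48      # a long random string suitable for application.secret
          openssl rand -hex 16         # a shorter random string for commKey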


    3. Edit /home/clowder/clowder/custom/play.plugins

      9992:services.RabbitmqPlugin
      10005:services.PostgresPlugin
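
      After editing custom.conf and play.plugins, restart Clowder so it picks up the changes. Assuming the Puppet class installed Clowder as a system service named 'clowder' (check your setup), that would be:

        sudo service clowder restart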

Setup Geodashboard Server

  1. Set up Puppet (getting started - Marcus will need to do this before these docs are complete)
    1. Login to foreman https://gonzo-foreman.ncsa.illinois.edu/hosts
      1. verify existence of host - the name will be <name of vm>.os.ncsa.edu
      2. puppet env
        1. production
      3. puppet classes
        1. clowder
        2. (TODO: confirm whether Java is needed here as well)
    2. You may need to run 'service puppet restart' on the host machine.

Databases: Dump, Copy, and Restore

  1. Clowder Disk Storage
    1. Go to the clowder custom configuration /home/clowder/clowder/custom/custom.conf on the server you are copying from
      1. Find the clowder file storage path - it will look something like this:

        clowder.diskStorage.path="/home/clowder/data"
      2. Likewise, create or get the clowder file storage path on the server you are copying to (it might be the same path).
    2. Set up authentication between the two servers:
      1. create an ssh key, or
      2. use existing user credentials (password auth)
    3. rsync the data
      1. On the source server:

        rsync -az /home/clowder/data/uploads <target server>:/home/clowder/data
      2. For example, using user auth to copy from the source server to the target server 'kryptonite' as user 'luther':

        rsync -az /home/clowder/data/uploads luther@kryptonite:/home/clowder/data
  2. Postgresdb
    1. On the source machine, in a directory where you have permission to write, dump the postgres database (assumed to be named 'geostream'):

      sudo -u postgres pg_dump geostream > geostream.sql
    2. Scp the dump file to the target server.
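
      For example, reusing the placeholder user/host from the rsync example above:

        scp geostream.sql luther@kryptonite:/tmp/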
    3. On the target server, if you already have a geostream database:

      sudo -u postgres psql geostream < geostream.sql
    4. If you need a new database, or need to replace an existing one, see Recreate a Database.
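
      A minimal sketch of recreating it (the Recreate a Database page is the authoritative reference; the owner role is a placeholder):

        sudo -u postgres dropdb geostream                        # destructive: removes the existing database
        sudo -u postgres createdb -O <postgres user> geostream   # recreate it, owned by the clowder postgres user
        sudo -u postgres psql geostream < geostream.sql          # reload the dump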
  3. Mongodb
    1. In a directory on the source VM

      sudo mongodump
    2. Copy the 'dump' directory to the target VM.
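
      For example, using the same placeholder user/host as above:

        scp -r dump luther@kryptonite:~/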

    3. Restore the database on the target VM

      sudo mongorestore --drop -h mongo-db-1:27017 dump

Add Crashplan


  1. TODO: add documentation on Crashplan and which folders to back up.