
  1. Create a Windows VM in Nebula.
    1. Choose the instance boot source to be "Boot from image (create new volume option)".
    2. Choose an appropriate VM flavor to get sufficient disk size and number of CPUs.
    3. Choose appropriate security groups and networking.
    4. Launch the instance.
    5. Associate a Floating IP address.
    6. Log in to the VM and enable Remote Desktop connections.

Install the Extractor


  1. Install Python 2.7

Make sure "C:\Python27\" is on the PATH for the system user (Start -> right-click "Computer" -> Properties -> Advanced -> Environment Variables); edit PATH if necessary.

For now, use Python 2.7 instead of Python 3.x for compatibility reasons.
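As a quick sanity check before moving on, a short snippet like this sketch (not part of the original instructions; the example PATH value is hypothetical) can confirm that a directory appears in a Windows-style PATH string:

```python
def on_path(directory, path_value, sep=";"):
    """Return True if `directory` appears in a Windows-style PATH string."""
    # Windows separates PATH entries with ';'; trailing backslashes and
    # letter case are ignored so "C:\Python27\" matches "c:\python27".
    entries = [p.strip().rstrip("\\").lower() for p in path_value.split(sep) if p.strip()]
    return directory.rstrip("\\").lower() in entries

# Hypothetical PATH value for illustration; in practice pass os.environ["PATH"]:
example = r"C:\Windows;C:\Python27\;C:\Python27\Scripts"
print(on_path(r"C:\Python27", example))   # True
print(on_path(r"C:\Notthere", example))   # False
```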

2. Install pip

Download the file "" from

Open cmd as admin.

After the install is done, type pip at the cmd prompt. If you get a "not recognized" error, add "C:\Python27\Scripts" to the PATH for the system user (as described in step 1).


3. Install pika, requests

pip install pika

pip install requests
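After both installs finish, a quick check like this sketch (not part of the original steps) confirms the packages import cleanly and shows their versions:

```python
def check_imports(names):
    """Map each module name to its reported version, or None if it will not import."""
    results = {}
    for name in names:
        try:
            module = __import__(name)
            results[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            results[name] = None
    return results

for name, version in sorted(check_imports(["pika", "requests"]).items()):
    print(name, version if version else "NOT INSTALLED")
```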


4. Install pyclowder


  1. ArcGIS
    1. Available on the WebStore for free.
    2. The instructions that come with ArcGIS work for the install.
    3. ArcGIS also includes the Python that it expects to use.
    4. Note the path to ArcGIS, as it will be used in the next step (hereafter referred to as <ArcGIS PATH>).
  2. Add Python, pip, and 7zip to the Path
    1. These steps work for Windows 10; future versions of Windows may change how the path is altered.
    2. In the Control Panel, search for Edit the System Environment Variables, then click the Environment Variables button.
    3. The lower half of the new window has the system variables; choose Path and then the Edit button.
    4. Click the New button to add a new entry to the path.
    5. Add <ArcGIS PATH> to the top of the path list, and <ArcGIS PATH>\Scripts right beneath it (this makes the Python installed with ArcGIS take precedence over anything else).
    6. Add "C:\Program Files\7-Zip\" to the end of the path.
      1. Quote the value due to the space in Program Files
  3. Update pip
    1. `python -m pip install -U pip`
  4. Install pika and requests
    1. The pinned versions are important, as newer versions introduced errors.
    2. `pip install pika==0.11.2`
    3. `pip install requests==2.18.4`
  5. Install Git Bash to run git from the command line
    1. Download and install from
  6. Install pyclowder version 1
    1. The extractors have not been updated to the latest pyclowder yet.
    2. Set up a Repositories directory in your home directory, and change into it.
    3. From Git Bash run `git clone <HOST>/scm/cats/zzpyclowder1.git` (replace <HOST> with the repository server URL).
    4. Change into the zzpyclowder1 directory.
    5. Run `python install`.

If you need to make any changes to pyclowder during development, you will need to rerun the install.

5. Install any additional software/packages required


6. Ensure Clowder and RabbitMQ are running [This is not required if Clowder is not running in the same VM]

In clowder\conf\play.plugins, 9992:services.RabbitmqPlugin should be enabled (line 3 of the file may be commented out by default).
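For reference, the entry in clowder\conf\play.plugins that needs to be enabled is a single line of this form:

```
9992:services.RabbitmqPlugin
```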

RabbitMQ needs a topic exchange called clowder; this can be created via the RabbitMQ management web UI (default is http://localhost:15672/).

The extractor configuration should include the rabbitmqURL (default is "amqp://guest:guest@localhost:5672/%2f").
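If the exchange does not already exist, it can also be declared from Python with the pinned pika version; this is a sketch that assumes a broker reachable at the default URL and that Clowder expects a durable exchange:

```python
def declare_clowder_exchange(url="amqp://guest:guest@localhost:5672/%2f"):
    """Declare the 'clowder' topic exchange (requires pika and a running RabbitMQ)."""
    import pika  # imported here so the sketch can be read without pika installed
    connection = pika.BlockingConnection(pika.URLParameters(url))
    channel = connection.channel()
    # durable=True is an assumption; match whatever the Clowder instance expects
    channel.exchange_declare(exchange="clowder", exchange_type="topic", durable=True)
    connection.close()
```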



  7. Install the extractor
    1. Use the version from Clowder, rather than Browndog.
    2. From Git Bash run `git clone`
    3. This puts the scripts in place to be run.
    4. The configuration for each extractor will need to be updated to point to the proper RabbitMQ URL and Clowder registration endpoint.
      1. For use:
        1. RabbitMQ URL: amqp://<USER>:<PASSWORD>[-dev]
        2. Clowder registration:[-dev]/api/extractors?key=<KEY>
      2. If running everything locally, start RabbitMQ and Clowder locally; the default values will likely work.
  8. Test that the extractor works
    1. Run it from the cmd prompt and watch the output to be sure it is working properly.
    2. It is also advisable to upload some files to Clowder and confirm that the extractor receives, processes, and uploads the results back correctly.
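The per-extractor configuration above boils down to two values. Since the configuration file name did not survive on this page, this sketch only illustrates their shape, using a hypothetical local Clowder instance:

```python
DEFAULT_RABBITMQ_URL = "amqp://guest:guest@localhost:5672/%2f"  # default given above

def registration_endpoint(clowder_base, key):
    """Build the Clowder extractor-registration URL for a given server and API key."""
    return "%s/api/extractors?key=%s" % (clowder_base.rstrip("/"), key)

# Hypothetical local Clowder instance (Play's default port is 9000):
print(registration_endpoint("http://localhost:9000/", "<KEY>"))
# → http://localhost:9000/api/extractors?key=<KEY>
```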

Build Windows Service [Uses NSSM]