Current Version: 0.9.2
See https://github.com/nds-org/ndslabs-specs/tree/master/clowder
NDSLabs Test Cases
Prerequisites
Start Tool Server
- Start up a new toolserver using the following command:
- NOTE: Ensure that the image is tagged as ncsa/clowder-toolserver
- This is a manual step until the image is tagged and pushed to Docker Hub.
- Save the Public IP of the node where this is running.
docker run -d -p 8082:8082 --name toolserver -v /var/run/docker.sock:/var/run/docker.sock ndslabs/toolserver:0.9.2 toolserver
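Before moving on, it can help to confirm the toolserver is actually listening on port 8082. A minimal sketch (the host string is a placeholder for the Public IP you saved above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace PUBLIC_IP with the address you saved above):
# port_open("PUBLIC_IP", 8082)
```

This only verifies that the container is accepting connections; it does not exercise the toolserver API itself.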
Start NDSLabs
Start up NDSLabs as described below:
Run the NDSLabs System Shell:
docker run -it --volume=/:/rootfs:ro --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:rw --volume=/var/lib/kubelet/:/var/lib/kubelet:rw --volume=/var/run:/var/run:rw --net=host --pid=host --privileged=true -v /var/run/docker.sock:/var/run/docker.sock ndslabs/system-shell

Unable to find image 'ndslabs/system-shell:latest' locally
latest: Pulling from ndslabs/system-shell
ff12aecbe22a: Already exists
287750ad6625: Already exists
ca98bdf222fa: Already exists
a3ed95caeb02: Already exists
97ef68d67ea6: Pull complete
8c53c989a967: Pull complete
79d911a06f41: Pull complete
807cecd8f466: Pull complete
7f887f3746f8: Pull complete
0cadab32de06: Pull complete
aff97fd2a6c1: Pull complete
Digest: sha256:4128fff8a0234ee6cc25d077b7f607358e681370e5a483b6c89fe1a3dfc3e77e
Status: Downloaded newer image for ndslabs/system-shell:latest
[root@default NDSLabsSystem ] / #
- From the NDSLabsSystem shell:
- Start Kubernetes by running kube-up.sh
- This will pull the necessary Kubernetes images and start up a single-node (development) Kubernetes cluster.
- Start NDSLabs by running ndslabs-up.sh
- This will start the API Server and GUI in Kubernetes
- NOTE: You may need to wait 30 seconds or more for the GUI server to start while npm and bower download the GUI's dependencies
- Check the API server logs and verify that the specs were loaded correctly:
Cloning into '/specs'...
I0325 01:52:58.670716 15 server.go:127] Starting NDS Labs API server (0.1alpha 2016-03-24 14:03)
I0325 01:52:58.671129 15 server.go:128] etcd PRIVATE_IP:4001
I0325 01:52:58.671145 15 server.go:129] kube-apiserver https://PRIVATE_IP:6443
I0325 01:52:58.671159 15 server.go:130] volume dir /volumes
I0325 01:52:58.671170 15 server.go:131] specs dir /specs
I0325 01:52:58.671181 15 server.go:132] host PUBLIC_IP
I0325 01:52:58.671192 15 server.go:133] port 8083
I0325 01:52:58.671205 15 server.go:134] V1
I0325 01:52:58.671216 15 server.go:135] V2
I0325 01:52:58.671227 15 server.go:136] V3
I0325 01:52:58.671237 15 server.go:137] V4
I0325 01:52:58.671257 15 server.go:675] GetEtcdClient PRIVATE_IP:4001
I0325 01:52:58.672692 15 server.go:165] Using local storage
I0325 01:52:58.672746 15 server.go:175] CORS origin http://PUBLIC_IP:30000
I0325 01:52:58.673061 15 server.go:276] Loading service specs from /specs
I0325 01:52:58.673910 15 server.go:1875] Adding /specs/clowder/clowder.json
I0325 01:52:58.674621 15 server.go:1875] Adding /specs/clowder/elastic.json
I0325 01:52:58.675383 15 server.go:1875] Adding /specs/clowder/extractors/image-preview.json
I0325 01:52:58.675999 15 server.go:1875] Adding /specs/clowder/extractors/plantcv.json
I0325 01:52:58.676513 15 server.go:1875] Adding /specs/clowder/extractors/video-preview.json
I0325 01:52:58.677121 15 server.go:1875] Adding /specs/clowder/mongo.json
I0325 01:52:58.677516 15 server.go:1875] Adding /specs/clowder/rabbitmq.json
I0325 01:52:58.678111 15 server.go:1875] Adding /specs/dataverse/dataverse.json
I0325 01:52:58.679353 15 server.go:1875] Adding /specs/dataverse/postgres.json
I0325 01:52:58.679768 15 server.go:1875] Adding /specs/dataverse/rserve.json
I0325 01:52:58.680253 15 server.go:1875] Adding /specs/dataverse/solr.json
I0325 01:52:58.682642 15 server.go:1875] Adding /specs/dataverse/tworavens.json
I0325 01:52:58.684168 15 server.go:1875] Adding /specs/elk/elastic.json
I0325 01:52:58.684700 15 server.go:1875] Adding /specs/elk/kibana.json
I0325 01:52:58.685192 15 server.go:1875] Adding /specs/elk/logspout.json
I0325 01:52:58.686697 15 server.go:1875] Adding /specs/elk/logstash.json
I0325 01:52:58.689697 15 server.go:1875] Adding /specs/irods/cloudbrowser.json
I0325 01:52:58.691582 15 server.go:1875] Adding /specs/irods/cloudbrowserui.json
I0325 01:52:58.692906 15 server.go:1875] Adding /specs/irods/icat.json
I0325 01:52:58.693939 15 server.go:283] Listening on PUBLIC_IP:8083
- You should now be able to reach the GUI by navigating to http://CLUSTER_IP:30000
- Create an account on the NDSLabs GUI as described below:
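Since the GUI can take 30 seconds or more to come up while its dependencies download, a small polling helper avoids checking too early. A sketch (CLUSTER_IP is a placeholder for your cluster's public IP):

```python
import time
import urllib.error
import urllib.request

def wait_for_http(url: str, timeout: float = 60.0, interval: float = 5.0) -> bool:
    """Poll url until the server answers (any HTTP status) or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval):
                return True
        except urllib.error.HTTPError:
            return True  # the server responded, just with an error status
        except (urllib.error.URLError, OSError):
            time.sleep(interval)
    return False

# Example (replace CLUSTER_IP with your cluster's public IP):
# wait_for_http("http://CLUSTER_IP:30000")
```

Any HTTP response counts as "up" here, since the goal is only to detect that the GUI server has finished starting.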
TERRA Clowder Configuration
- Create a new Clowder stack using the GUI or the CLI, as described below:
- Enable all optional services
- PlantCV Extractor
- Image Preview Extractor
- Video Preview Extractor
- ElasticSearch
- Under "Advanced Configuration," make sure to specify the Public IP of the toolserver started previously
- Create a new volume for MongoDB to use
- mongo-01: 5 GB
- Upon confirmation, you should see the new Clowder stack appear on the right side of the page
- While the stack is still stopped, add the Video Preview Extractor service
- Finally, start the new Clowder stack
- Once Clowder has started (turns green) you should see a link to its web interface appear
Account Registration
- Start Clowder (as described above) and navigate to its endpoint link
- At the top right of the page, click Login and then choose Sign Up on the bottom of the panel.
- Enter your e-mail address in the box and press Submit.
- You should receive an e-mail with a link to confirm your account registration
- Click the link in the e-mail to be brought back to Clowder.
- Enter your First/Last name, enter/confirm your desired password, then click Submit.
- You should now be able to log in with the credentials that you have entered (email / password).
Testing Extractor(s): Upload a File
- Once Clowder starts, register for an account (see above).
- Verify that the extractors are present by navigating to http://YOUR_OPENSTACK_IP:30291/api/status
- You should see rabbitmq: connected listed under the "plugins" section.
- You should see the extractors you specified listed at the bottom
- Create a new dataset by choosing Datasets > Create from the navbar at the top of the page.
- Upload a plantcv test file to this new dataset and watch the extractors work:
- Check the logs of the mongo container and you should see the uploaded files being added to the database
- View http://CLOWDER_IP/admin/extractions in your browser to verify that the extractors are working
- You should also be able to see per-image extraction events listed under each image in a dataset
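The /api/status checks above can be scripted once the response has been fetched and parsed. A sketch of a validator over the decoded JSON (the field names and extractor identifiers below are illustrative; inspect your server's actual response):

```python
def check_status(status: dict, plugins=("rabbitmq",), extractors=()) -> list:
    """Return a list of human-readable problems; an empty list means all checks passed."""
    problems = []
    found = status.get("plugins", {})
    for name in plugins:
        # Some Clowder versions report "connected", others a boolean
        if found.get(name) not in ("connected", True):
            problems.append(f"plugin {name!r} not connected: {found.get(name)!r}")
    present = set(status.get("extractors", []))
    for name in extractors:
        if name not in present:
            problems.append(f"extractor {name!r} not registered")
    return problems

# Hypothetical parsed response for illustration:
sample = {
    "plugins": {"rabbitmq": "connected"},
    "extractors": ["ncsa.image.preview", "terra.plantcv"],
}
print(check_status(sample, extractors=("terra.plantcv",)))  # []
```

Feeding it `json.load(urllib.request.urlopen(...))` from the status URL turns the manual inspection into a pass/fail test.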
PlantCV Extractor
- From the NDSLabs Dashboard, click "View Logs" next to the PlantCV Extractor
- The logs should show the extractor reading the image and attempting to attach metadata:
2016-03-27 00:40:30,299 INFO : pika.adapters.base_connection - Connecting to 10.0.0.56:5672
2016-03-27 00:40:30,303 INFO : pika.adapters.blocking_connection - Created channel=1
2016-03-27 00:40:30,320 INFO : pyclowder.extractors - Waiting for messages. To exit press CTRL+C
2016-03-27 06:03:47,478 DEBUG : pyclowder.extractors - [56f777c3e4b0eb6623c4c197] : Started processing file
2016-03-27 06:03:47,479 INFO : pyclowder.extractors - Starting a New Thread for Process File
2016-03-27 06:03:47,479 DEBUG : pyclowder.extractors - [56f777c3e4b0eb6623c4c197] : Downloading file.
2016-03-27 06:03:47,553 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:03:47,644 INFO : terra.plantcv - PARAMETERS: {'channel': <pika.adapters.blocking_connection.BlockingChannel object at 0x7ffb3416b950>, u'datasetId': u'56f777a3e4b0eb6623c4c192', u'fileSize': u'241928', 'fileid': u'56f777c3e4b0eb6623c4c197', u'filename': u'VIS_SV_0_z500_389257.jpg', u'flags': u'', 'header': <BasicProperties(['content_type=application\\json', 'correlation_id=3f98e98f-19db-46e7-bc57-cd3d02b10e85', 'reply_to=amq.gen-e62nBqNswj-FuX-paer3rg'])>, u'host': u'http://141.142.209.154:32408', u'id': u'56f777c3e4b0eb6623c4c197', 'inputfile': u'/tmp/tmpq5B8w7.jpg', u'intermediateId': u'56f777c3e4b0eb6623c4c197', u'secretKey': u'r1ek3rs'}
2016-03-27 06:03:47,644 INFO : terra.plantcv - inputfile=/tmp/tmpq5B8w7.jpg filename=VIS_SV_0_z500_389257.jpg fileid=56f777c3e4b0eb6623c4c197
2016-03-27 06:03:47,645 INFO : terra.plantcv - EX-CMD: /home/clowder/extractors-plantcv/bin/extract.sh /tmp/tmpq5B8w7.jpg VIS_SV_0_z500_389257.jpg 56f777c3e4b0eb6623c4c197 /home/clowder/plantcv-output
2016-03-27 06:04:02,158 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:04:02,218 DEBUG : pyclowder.extractors - preview id = [56f777d2e4b0eb6623c4c1a8]
2016-03-27 06:04:02,219 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:04:02,234 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:04:02,269 DEBUG : pyclowder.extractors - preview id = [56f777d2e4b0eb6623c4c1ab]
2016-03-27 06:04:02,270 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:04:02,283 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:04:02,334 DEBUG : pyclowder.extractors - preview id = [56f777d2e4b0eb6623c4c1ad]
2016-03-27 06:04:02,335 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:04:02,346 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:04:02,400 DEBUG : pyclowder.extractors - preview id = [56f777d2e4b0eb6623c4c1b1]
2016-03-27 06:04:02,401 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:04:02,424 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:04:02,443 DEBUG : pyclowder.extractors - preview id = [56f777d2e4b0eb6623c4c1b4]
2016-03-27 06:04:02,444 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:04:02,462 DEBUG : pyclowder.extractors - [56f777c3e4b0eb6623c4c197] : Uploading file metadata.
2016-03-27 06:04:02,463 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:04:02,764 DEBUG : pyclowder.extractors - [56f777c3e4b0eb6623c4c197] : Uploading file tags.
2016-03-27 06:04:02,766 INFO : urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:04:02,847 DEBUG : pyclowder.extractors - [56f777c3e4b0eb6623c4c197] : Done
Image Preview Extractor
- From the NDSLabs Dashboard, click "View Logs" next to the Image Preview Extractor
- The logs should show the extractor reading the image and attempting to create a preview thumbnail from it:
2016-03-27 00:40:30,299 INFO : pika.adapters.base_connection - Connecting to 10.0.0.56:5672
2016-03-27 00:40:30,303 INFO : pika.adapters.blocking_connection - Created channel=1
2016-03-27 00:40:30,320 INFO : pyclowder.extractors - Waiting for messages. To exit press CTRL+C
2016-03-27 06:03:47,478 DEBUG : pyclowder.extractors - [56f777c3e4b0eb6623c4c197] : Started processing file
2016-03-27 06:03:47,479 INFO : pyclowder.extractors - Starting a New Thread for Process File
2016-03-27 06:03:47,479 DEBUG : pyclowder.extractors - [56f777c3e4b0eb6623c4c197] : Downloading file.
2016-03-27 06:03:47,543 INFO : requests.packages.urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:03:48,467 INFO : requests.packages.urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:03:48,606 DEBUG : pyclowder.extractors - preview id = [56f777c4e4b0eb6623c4c1a0]
2016-03-27 06:03:48,607 INFO : requests.packages.urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:03:49,144 INFO : requests.packages.urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:03:49,309 DEBUG : pyclowder.extractors - preview id = [56f777c5e4b0eb6623c4c1a3]
2016-03-27 06:03:49,310 INFO : requests.packages.urllib3.connectionpool - Starting new HTTP connection (1): 141.142.209.154
2016-03-27 06:03:49,372 DEBUG : pyclowder.extractors - [56f777c3e4b0eb6623c4c197] : Done
Video Preview Extractor
- From the NDSLabs Dashboard, click "View Logs" next to the Video Preview Extractor
- The logs will show that this extractor has ignored this file upload, since it is not a video file:
2016-03-27 00:40:30,299 INFO : pika.adapters.base_connection - Connecting to 10.0.0.56:5672
2016-03-27 00:40:30,303 INFO : pika.adapters.blocking_connection - Created channel=1
2016-03-27 00:40:30,320 INFO : pyclowder.extractors - Waiting for messages. To exit press CTRL+C
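When eyeballing extractor logs like the ones above, it helps to pull out the per-file lifecycle events that pyclowder emits. A small parsing sketch, matching the `pyclowder.extractors - [fileid] : event` lines shown in these logs:

```python
import re

# Matches pyclowder lines such as:
#   ... pyclowder.extractors - [56f777c3e4b0eb6623c4c197] : Done
EVENT = re.compile(r"pyclowder\.extractors - \[(?P<fileid>[0-9a-f]+)\] : (?P<event>.+)$")

def file_events(log_text: str) -> dict:
    """Map each file id to its list of lifecycle events, in log order."""
    events = {}
    for line in log_text.splitlines():
        m = EVENT.search(line)
        if m:
            events.setdefault(m.group("fileid"), []).append(m.group("event"))
    return events

log = """\
2016-03-27 06:03:47,478 DEBUG : pyclowder.extractors - [56f777c3e4b0eb6623c4c197] : Started processing file
2016-03-27 06:04:02,847 DEBUG : pyclowder.extractors - [56f777c3e4b0eb6623c4c197] : Done
"""
print(file_events(log))
# {'56f777c3e4b0eb6623c4c197': ['Started processing file', 'Done']}
```

A file whose event list ends in "Done" completed processing; a missing "Done" points at the extractor to investigate.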
Testing Text-Based Search (ElasticSearch)
- Verify that elasticsearch is enabled by navigating to Clowder's endpoint
- You should see elasticsearch: connected listed under the "plugins" section of http://YOUR_OPENSTACK_IP:30291/api/status
- You should see a "Search" box at the top-right of the Clowder UI. This indicates that elasticsearch is enabled.
- After uploading a file (as described above), attempt to search for the file extensions, such as "jpg" or "png".
- You should see any matching file(s) that you have uploaded listed under the results of the search.
Testing the Tool Server
- Navigate to the Dataset that you created above.
- On the right side of the page, you should see the Tool Manager section.
- Choose a tool (Jupyter / Rstudio) from the drop-down and press "Launch"
- Once the image downloads and the container starts (this may take several minutes):
- Rstudio:
- Navigate to and log into the Rstudio instance
- username: rstudio
- password: rstudio
- You should see the Dataset that you uploaded listed here
- Jupyter:
- Navigate to the Jupyter instance
- You should see the Dataset that you uploaded listed here
Clowder Docker Images
- Clone the https://github.com/nds-org/ndslabs-clowder repository.
- Change directories to dockerfiles.
- From dockerfiles, run the make command.
- You should see the images start building from the Dockerfiles present.
- Images that will be built include:
- clowder
- toolserver
- extractors:
- image-preview
- plantcv
- video-preview
- Coming Soon: New Extractors!
- audio-preview
- audio-speech2text
- image-metadata
- pdf-preview
WARNING: plantcv may take up to 25 minutes to complete its build. Plan accordingly.
Archived Test Cases
These test cases are kept for historical purposes and can be used to run and test the Clowder stack in raw Kubernetes (without NDSLabs).
- Run . ./start-clowder.sh with no arguments to spin up a vanilla Clowder, with only a MongoDB instance attached.
- Navigate your browser to http://YOUR_OPENSTACK_IP:30291. You should see the Clowder homepage.
- Verify MongoDB attachment by navigating to http://YOUR_OPENSTACK_IP:30291/api/status.
- You should see mongodb: true listed under the "plugins" section.
Account Registration
- Start Clowder (as described above)
- At the top right of the page, click Login and then choose Sign Up on the bottom of the panel.
- Enter your e-mail address in the box and press Submit.
- You should receive an e-mail with a link to confirm your account registration
- Click the link in the e-mail to be brought back to Clowder.
- Enter your First/Last name, enter/confirm your desired password, then click Submit.
- You should now be able to log in with the credentials that you have entered (email / password).
Create a Dataset / Upload a File
- After registering for an account (see above), create a new dataset by choosing Datasets > Create from the navbar at the top of the page.
- Choose a picture file to upload to this dataset. The contents of the picture do not matter.
- After choosing Start Upload, check the logs of the mongo container and you should see the uploaded files being added to the database
Extractor(s)
Now that you've seen the basic setup, let's try something a little more complex:
- Stop any running Clowder / plugin instances: . ./stop-clowder.sh -m
- Restart Clowder with some extractors: . ./start-clowder.sh -w image-preview plantcv video-preview
- The script should automatically start RabbitMQ for you as well, since you specified extractors.
- Wait for everything to finish starting up (this may take up to ~1 minute)
- Once Clowder starts, verify that the extractors are present by navigating to http://YOUR_OPENSTACK_IP:30291/api/status
- You should see rabbitmq: true listed under the "plugins" section.
- You should see the extractors you specified listed at the bottom
- Create a Dataset and upload a file as described above.
- View http://CLOWDER_IP/admin/extractions in your browser to verify that the extractors are working.
- If anything strange appears on the UI, check the log(s) of each extractor and you should see it doing work on the file(s) you chose to upload
Text-Based Search (ElasticSearch)
- Stop any running Clowder / plugin instances: . ./stop-clowder.sh -m
- Restart Clowder with elasticsearch enabled: . ./start-clowder.sh elasticsearch
- Once Clowder starts, verify that elasticsearch is enabled by navigating to http://YOUR_OPENSTACK_IP:30291/api/status
- You should see elasticsearch: true listed under the "plugins" section.
- You should see a "Search" box at the top-right of the Clowder UI. This indicates that elasticsearch is enabled.
- After uploading a file (as described above), attempt to search for the file extensions, such as "jpg" or "png".
- You should see the file that you uploaded listed under the results of the search.