...

  • Participants will use their own laptops for this part
    • We will provide a VM through Nebula with everything pre-installed.
      • TODO: Rob will talk to Doug about whether we can spawn 50 VMs on Nebula for the tutorial session
      • TODO: Order 50+ flash drives as a backup that will contain the VM image
      • TODO: Create a VM with everything installed in it and take a snapshot, which will then be deployed within Nebula. Approx. time required: 2 days
        • Make a list of all software required and the directory structure for the tutorial
      • TODO: Jetstream is not confirmed yet.
    • Provide clear instructions on how to access the VMs in Nebula with the proper credentials.
      • TODO: Clear instructions on how to access the VMs (e.g., through ssh) from different OSes.
    • (Before the tutorial - wiki pages with clear instructions) Install Python/R/MATLAB/cURL to use the BD service, along with the required libraries, in case anyone is interested in using the BD services in the future.
      • TODO: Create wiki pages with clear instructions
  • Demonstration of the use of BD Fiddle
    • Sign up for the Brown Dog Service
    • Obtain a key/token using curl, Postman, or an IPython notebook (see the sketch after this list)
    • Use the token and the BD Fiddle interface to see BD in action.
    • Copy-paste the Python code snippet and use it in the application to be explained next.
    • TODO: Create a document for the demo with step-by-step screenshots
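    A minimal Python sketch of the key/token step mentioned above. The gateway URL, the endpoint paths, and the response field names are assumptions for illustration and should be checked against the BD REST API documentation before the tutorial.

      # Sketch: obtain a BD key and token with the requests library.
      # NOTE: the gateway URL, endpoint paths, and response field names are
      # assumptions; verify them against the current BD REST API docs.
      import requests

      BD_API = "https://bd-api.ncsa.illinois.edu"   # assumed BD-API gateway URL
      USERNAME, PASSWORD = "participant@example.com", "password"   # placeholders

      # 1. Request an API key using basic authentication (assumed endpoint).
      key_resp = requests.post(BD_API + "/keys", auth=(USERNAME, PASSWORD))
      key_resp.raise_for_status()
      api_key = key_resp.json()["api-key"]          # assumed response field

      # 2. Exchange the key for a token (assumed endpoint).
      token_resp = requests.post(BD_API + "/keys/" + api_key + "/tokens",
                                 auth=(USERNAME, PASSWORD))
      token_resp.raise_for_status()
      token = token_resp.json()["token"]            # assumed response field

      print("Token to paste into BD Fiddle:", token)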

  • Create applications using BD services
    Three applications:
    • Problem 1: Given a collection of images with embedded text (or scanned handwritten document images), search the images based on their content.
      • Participants can obtain images from a local directory or use an external web service.
          • TODO: Create an example dataset with images
          • TODO: Provide a code snippet for using an external service to obtain images, e.g., the Flickr API (see the sketch below).
            •  This will only be provided as an example and will not be used for the rest of the code.
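      A possible snippet for the external-service option above, using the public Flickr REST API. A Flickr API key is required; the key variable, the search term, and the output directory are placeholders. As noted, this is only an example and is not used by the rest of the tutorial code.

        # Example only: fetch a handful of images from Flickr to build a test set.
        # FLICKR_KEY, the search term, and the output directory are placeholders.
        import os
        import requests

        FLICKR_KEY = os.environ.get("FLICKR_KEY", "your-flickr-api-key")
        OUT_DIR = "example_images"
        os.makedirs(OUT_DIR, exist_ok=True)

        params = {
            "method": "flickr.photos.search",
            "api_key": FLICKR_KEY,
            "text": "handwritten letter",   # placeholder search term
            "per_page": 10,
            "format": "json",
            "nojsoncallback": 1,
        }
        resp = requests.get("https://api.flickr.com/services/rest/", params=params)
        resp.raise_for_status()

        for p in resp.json()["photos"]["photo"]:
            # Build the static image URL from the photo record and download it.
            url = "https://farm{farm}.staticflickr.com/{server}/{id}_{secret}.jpg".format(**p)
            with open(os.path.join(OUT_DIR, p["id"] + ".jpg"), "wb") as f:
                f.write(requests.get(url).content)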

      • Let the participants use the BD Python library to obtain a key/token and submit requests to the BD-API gateway
        • TODO: Provide the link to the current BD REST API and create a document/wiki page showing step-by-step screenshots of obtaining a key/token using the Python library.
        • TODO: Write a Python script that will serve as a stub for the BD client (see the sketch below)
            • The participants will fill in the BD REST API calls to submit their requests.
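        A possible shape for the stub script. The gateway URL, the extraction endpoints, the Authorization header format, and the response fields are placeholders/assumptions; they must be replaced with the calls documented in the BD REST API, and the marked section is the part the participants fill in.

          # bd_stub.py - skeleton BD client for the tutorial.
          # NOTE: the gateway URL, endpoint paths, Authorization header, and
          # response fields are assumptions; replace them with the values from
          # the BD REST API documentation.
          import os
          import sys
          import time
          import requests

          BD_API = "https://bd-api.ncsa.illinois.edu"   # assumed BD-API gateway URL
          TOKEN = os.environ.get("BD_TOKEN", "")        # token obtained earlier

          def extract_metadata(path):
              """Submit one file for extraction and return its metadata."""
              headers = {"Authorization": TOKEN, "Accept": "application/json"}
              # --- participants fill in the actual BD REST API call below ---
              with open(path, "rb") as f:
                  r = requests.post(BD_API + "/dts/api/files",       # assumed endpoint
                                    headers=headers, files={"File": f})
              r.raise_for_status()
              file_id = r.json()["id"]                               # assumed field
              time.sleep(10)  # crude wait; extractors run asynchronously, so real code should poll
              m = requests.get(BD_API + "/dts/api/files/" + file_id + "/metadata.jsonld",
                               headers=headers)                      # assumed endpoint
              m.raise_for_status()
              return m.json()

          if __name__ == "__main__":
              for path in sys.argv[1:]:
                  print(path, extract_metadata(path))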
      • Make sure the OCR and face extractors are running before starting the demo
      • Make sure Elasticsearch is started before the example files are submitted to the BD service
        • TODO: Provide instructions to start Elasticsearch and a web client for visualization.
          • Make sure the cluster name in the config.yml differs for each participant.
      • Once the technical metadata is obtained from BD, index the tags and technical metadata in a locally running Elasticsearch instance (see the sketch after this list).
        • TODO: Write a Python script that will index the technical metadata in ES
      • Search for images using an ES query
        • TODO: Provide an ES query for the search
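      A rough sketch of the indexing and search steps above, talking to the local Elasticsearch over its REST API with requests. The index name, the document fields, the single "image" mapping type (which assumes a pre-7.x Elasticsearch), and the example values are assumptions for illustration.

        # Sketch: index BD tags/technical metadata into a local Elasticsearch
        # instance and search it. Index name, mapping type, and document
        # fields are assumptions for illustration.
        import json
        import requests

        ES = "http://localhost:9200"
        INDEX = "bd_tutorial"
        HEADERS = {"Content-Type": "application/json"}

        def index_image(doc_id, filename, tags, metadata):
            """Store one image's tags and technical metadata as an ES document."""
            doc = {"filename": filename, "tags": tags, "metadata": metadata}
            r = requests.put("%s/%s/image/%s" % (ES, INDEX, doc_id),
                             headers=HEADERS, data=json.dumps(doc))
            r.raise_for_status()

        def search_images(text):
            """Full-text search over the indexed tags."""
            query = {"query": {"match": {"tags": text}}}
            r = requests.post("%s/%s/_search" % (ES, INDEX),
                              headers=HEADERS, data=json.dumps(query))
            r.raise_for_status()
            return [hit["_source"]["filename"] for hit in r.json()["hits"]["hits"]]

        # Example usage with made-up values:
        # index_image("1", "letter01.jpg", ["handwritten", "invoice"], {"width": 1024})
        # print(search_images("invoice"))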
    • Problem 2: Given a collection of text files from a survey or reviews of a book/movie, use the sentiment analysis extractor to calculate a sentiment value for each file and group files with similar values together.
      • A collection of text files with reviews
        • TODO: Obtain an example dataset from the web.
      • Let the participants use the BD Python library to obtain a key/token and submit requests to the BD-API gateway
        • TODO: Provide the link to the current BD REST API and create a document/wiki page showing step-by-step screenshots of obtaining a key/token using the Python library.
        • TODO: Write a Python script that will serve as a stub for the BD client (same stub as in Problem 1)
            • The participants will fill in the BD REST API calls to submit their requests.
      • Make sure the Sentiment Analysis extractor is running
      • Save the results for all text files in a single file with the corresponding values
        • TODO: Provide code for this in the stub script
      • Create separate folders and move the files based on their sentiment values (see the sketch after this list)
        • TODO: Provide code that will do the above action in the stub
      • (Optional) Index the text files along with their sentiment values and use an ES visualization tool to search for documents with a sentiment value below some threshold.
      • Try the Yelp API or IMDb API to obtain reviews (???), or use the Twitter API (??) to pull some reviews
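      A sketch of the save-and-group steps above. It assumes the sentiment value for each file has already been obtained from BD as a number; the bucket thresholds and folder names are arbitrary choices for illustration.

        # Sketch: write one summary file of sentiment values and move each
        # review into a folder named after its sentiment bucket. Assumes
        # `sentiments` maps a file path to a numeric value already returned
        # by BD; thresholds and folder names are arbitrary.
        import os
        import shutil

        def bucket(value):
            """Map a numeric sentiment value to a folder name (thresholds are arbitrary)."""
            if value < -0.25:
                return "negative"
            if value > 0.25:
                return "positive"
            return "neutral"

        def group_by_sentiment(sentiments, out_dir="grouped_reviews"):
            os.makedirs(out_dir, exist_ok=True)
            # One summary file with the value for every review.
            with open(os.path.join(out_dir, "sentiments.csv"), "w") as summary:
                for path, value in sentiments.items():
                    summary.write("%s,%s\n" % (os.path.basename(path), value))
                    # Move the review into the folder for its bucket.
                    dest = os.path.join(out_dir, bucket(value))
                    os.makedirs(dest, exist_ok=True)
                    shutil.move(path, os.path.join(dest, os.path.basename(path)))

        # Example usage with made-up values:
        # group_by_sentiment({"reviews/r1.txt": 0.8, "reviews/r2.txt": -0.6})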

    • Problem 3: Use BD conversion to convert a collection of images/ps/odp files to png/pdf/ppt. This demonstrates that if you have a directory of files in old formats, you can use BD to get them all converted.
      • TODO: Provide a Python script for this and let the participants use the Python library to call the BD service (see the sketch below)
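      A possible outline for the conversion script. The gateway URL, the conversion endpoint path, the Authorization header, and the assumption that the converted bytes come back in the response body are all placeholders; the real call should be taken from the BD REST API documentation.

        # Sketch: convert every file in a directory to a target format via BD.
        # NOTE: the gateway URL, the conversion endpoint, and the way the
        # converted result is returned are assumptions; check the BD REST API
        # docs for the real conversion call.
        import os
        import sys
        import requests

        BD_API = "https://bd-api.ncsa.illinois.edu"   # assumed BD-API gateway URL
        TOKEN = os.environ.get("BD_TOKEN", "")

        def convert_directory(in_dir, out_dir, fmt="pdf"):
            os.makedirs(out_dir, exist_ok=True)
            headers = {"Authorization": TOKEN}
            for name in os.listdir(in_dir):
                with open(os.path.join(in_dir, name), "rb") as f:
                    # Assumed endpoint: upload the file, receive the converted bytes.
                    r = requests.post("%s/dap/convert/%s/" % (BD_API, fmt),
                                      headers=headers, files={"file": f})
                r.raise_for_status()
                dst = os.path.join(out_dir, os.path.splitext(name)[0] + "." + fmt)
                with open(dst, "wb") as out:
                    out.write(r.content)

        if __name__ == "__main__":
            convert_directory(sys.argv[1], sys.argv[2], fmt="pdf")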
                     
    • CSV files (Ameriflux): use BD for some gap-filling on the files and return the results.

    • Think of an R client for BD (PEcAn??).

    • Think of a MATLAB client 

    • Want to build a web application on top of BD (similar to what Marcus built??)

    • Use BD convert to convert a collection of images in old formats to png or pdf; convert odp/odt to ppt/doc.

...