...

  • Participants will use their own laptops for this part
    • We will provide a VM through Nebula with everything pre-installed.
      •  Rob Kooper will talk to Doug about whether we can spawn 50 VMs on Nebula for the tutorial session. (DONE) We will get 50 VMs on Nebula.
      •  Smruti Padhy: Order 50+ flash drives as backup that will contain the VMs
      •  Create a VM with everything installed, take a snapshot, and deploy it within Nebula. Approx. time required: 2 days
      •  Make a list of all software required and the directory structure for the tutorial
      •  Not sure about Jetstream yet.
    • Provide clear instructions on how to access the VMs in Nebula with proper credentials.
      •  Clear instructions on how to access the VMs (e.g., through ssh) from different OSes.
    • (Before the tutorial, via wiki pages with clear instructions) Install Python/R/MATLAB/cURL along with the required libraries to use the BD service, in case anyone is interested in using the BD services in the future.
      •  Create wiki pages with clear instructions
  • Demonstration of the use of BD Fiddle
    • Sign up for Brown Dog Service
    • Obtain a key/token using curl, Postman, or an IPython notebook (a requests-based sketch follows this list)
    • Use the token and the BD Fiddle interface to see BD in action.
    • Copy-paste the Python code snippet and use it in the application to be explained next.
    •  Create a document for the demo with step-by-step screenshots
    •  Fix the CORS error for the file URL option (I think it is a known issue)
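    A minimal sketch of the key/token step using Python's requests library (an alternative to curl/Postman); the gateway URL, the /keys and /tokens endpoint paths, and the JSON field names are assumptions to be checked against the current BD REST API documentation:

        import requests

        BD_API = "https://bd-api.ncsa.illinois.edu"   # assumed gateway URL
        USER, PASSWORD = "your_bd_username", "your_bd_password"

        # Request an API key with the Brown Dog account credentials (assumed endpoint).
        key_resp = requests.post(BD_API + "/keys", auth=(USER, PASSWORD))
        key_resp.raise_for_status()
        api_key = key_resp.json()["api-key"]

        # Exchange the key for a token (assumed endpoint and field name).
        token_resp = requests.post(BD_API + "/keys/" + api_key + "/tokens")
        token_resp.raise_for_status()
        token = token_resp.json()["token"]

        print("Token for BD Fiddle / API calls:", token)
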
  • Create applications using BD services
    Candidate applications:
    • Problem 1: Given a collection of images with embedded text, search the images based on their content. (Emphasizes extraction from unstructured data, indexing, and content-based retrieval)
      • Participants can use images from a local directory or obtain them from an external web service.
          •  Create an example dataset with images
          •  Provide a code snippet for using an external service to obtain images, e.g., the Flickr API (see the sketch below).
            •  This will only be provided as an example and will not be used for the rest of the code.
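            A possible shape for that snippet, using the Flickr photos.search REST method (requires a Flickr API key; the static photo URL pattern follows Flickr's documented format but should be double-checked):

                import os
                import requests

                FLICKR_API_KEY = "your_flickr_api_key"   # participants would need their own key

                # Search for a handful of photos matching a text query.
                resp = requests.get("https://api.flickr.com/services/rest/", params={
                    "method": "flickr.photos.search",
                    "api_key": FLICKR_API_KEY,
                    "text": "street signs",
                    "per_page": 10,
                    "format": "json",
                    "nojsoncallback": 1,
                })
                resp.raise_for_status()

                os.makedirs("images", exist_ok=True)
                for photo in resp.json()["photos"]["photo"]:
                    # Build the static image URL from the search result fields.
                    url = "https://farm{farm}.staticflickr.com/{server}/{id}_{secret}.jpg".format(**photo)
                    with open("images/{id}.jpg".format(**photo), "wb") as f:
                        f.write(requests.get(url).content)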

      • Let the participants use the BD Python library to obtain a key/token and submit requests to the BD-API gateway
        •  Provide the link to the current BD REST API and create a document/wiki page showing step-by-step screenshots of obtaining a key/token using the Python library.
        •  Write a Python script that will serve as a stub for the BD client
            • The participants will fill in the BD REST API calls to submit their requests (a combined stub sketch follows this problem's list).
      • Make sure the OCR and face extractors are running before starting the demo
      • Make sure Elasticsearch is started before the example files are submitted to the BD service
        •  Provide instructions to start Elasticsearch and a web client for visualization.
          • Make sure the cluster name in the config.yml differs for each participant.
      • Once technical metadata is obtained from BD, index the tags and technical metadata in a locally running Elasticsearch.
        •  Write a Python script that will index the technical metadata in ES
      • Search for images using an ES query
        •  Provide the ES query for the search
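      One possible shape for the Problem 1 stub, combining BD submission, ES indexing, and ES search; the /dts endpoint paths, the response fields, and the index/field names are all assumptions to be replaced with the real BD REST API calls (elasticsearch-py 7.x style shown):

          import json
          import os
          import time
          import requests
          from elasticsearch import Elasticsearch

          BD_API = "https://bd-api.ncsa.illinois.edu"      # assumed gateway URL
          HEADERS = {"Authorization": "paste-token-here"}  # assumed auth header form
          es = Elasticsearch(["http://localhost:9200"])

          def extract_metadata(path):
              """Submit one file to the BD extraction service and return (id, metadata).
              The endpoint paths below are placeholders for the participants to fill in
              from the BD REST API documentation."""
              with open(path, "rb") as f:
                  resp = requests.post(BD_API + "/dts/api/files",             # assumed endpoint
                                       headers=HEADERS, files={"File": f})
              resp.raise_for_status()
              file_id = resp.json()["id"]
              time.sleep(30)  # naive wait for the OCR / face extractors to finish
              meta = requests.get(BD_API + "/dts/api/files/%s/metadata.jsonld" % file_id,
                                  headers=HEADERS)                            # assumed endpoint
              return file_id, meta.json()

          # Index each image's tags / technical metadata into the local Elasticsearch.
          for name in os.listdir("images"):
              file_id, metadata = extract_metadata(os.path.join("images", name))
              es.index(index="bd_tutorial", id=file_id,
                       body={"filename": name,
                             "metadata_text": json.dumps(metadata)})  # flattened for simple full-text search

          # Content-based search: find images whose extracted text mentions "stop".
          hits = es.search(index="bd_tutorial",
                           body={"query": {"match": {"metadata_text": "stop"}}})
          for hit in hits["hits"]["hits"]:
              print(hit["_source"]["filename"], hit["_score"])
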
    • Problem 2: Given a collection of text files from a survey or reviews of a book/movie, use the sentiment analysis extractor to calculate the sentiment value for each file and group similar values together. (Emphasizes extraction from unstructured data and useful analysis)
      • A collection of text files with reviews
        •  Obtain an example dataset from the web.
      • Let the participants use the BD Python library to obtain a key/token and submit requests to the BD-API gateway
        •  Provide the link to the current BD REST API and create a document/wiki page showing step-by-step screenshots of obtaining a key/token using the Python library.
        •  Write a Python script that will serve as a stub for the BD client
            • The participants will fill in the BD REST API calls to submit their requests.
      • Make sure the Sentiment Analysis extractor is running
      • Save the results for each text file in a single file with the corresponding values
        •  Provide code for this in the stub script
      • Create separate folders and move the files based on their sentiment value (see the sketch after this problem's list)
        •  Provide code in the stub that will do the above
      • (Optional) Index the text files along with their sentiment values and use an ES visualization tool to search for documents with a sentiment value less than some threshold.
      • Try the Yelp API or IMDB API to obtain reviews (???), or use the Twitter API (??) to pull some reviews
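      A minimal sketch of the result-handling part of the Problem 2 stub, assuming the sentiment value has already been pulled out of the BD metadata into (filename, score) pairs; the bucket boundaries and the reviews/ directory name are placeholders:

          import os
          import shutil

          # One (filename, score) pair per review, collected from the BD sentiment
          # analysis extractor's metadata in the earlier step (placeholder values).
          sentiments = [("review_001.txt", -0.6), ("review_002.txt", 0.1), ("review_003.txt", 0.8)]

          # 1. Save all results in a single summary file.
          with open("sentiment_results.tsv", "w") as out:
              for name, score in sentiments:
                  out.write("%s\t%.3f\n" % (name, score))

          # 2. Group similar values: move each review into a folder for its bucket.
          def bucket(score):
              if score < -0.25:
                  return "negative"
              if score > 0.25:
                  return "positive"
              return "neutral"

          for name, score in sentiments:
              folder = bucket(score)
              os.makedirs(folder, exist_ok=True)
              shutil.move(os.path.join("reviews", name), os.path.join(folder, name))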

    • Problem 3: Use BD conversion to convert a collection of images/ps/odp files to png/pdf/ppt. This demonstrates that if you have a directory with files in old formats, you can just use BD to get them all converted. (Emphasizes conversion)
      •  Provide a Python script for this and let the participants use the Python library to call the BD service (a sketch follows)
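      A rough sketch of that script: walk a directory, ask the BD conversion service for the target format, and save the result; the /dap/convert endpoint path and its response (assumed here to be a URL to the converted file) must be checked against the BD REST API docs:

          import os
          import time
          import requests

          BD_API = "https://bd-api.ncsa.illinois.edu"      # assumed gateway URL
          HEADERS = {"Authorization": "paste-token-here"}  # assumed auth header form

          # Old format -> desired format, per the problem statement.
          TARGETS = {".ps": "pdf", ".odp": "ppt", ".bmp": "png", ".tif": "png"}

          os.makedirs("converted", exist_ok=True)
          for name in os.listdir("old_files"):
              ext = os.path.splitext(name)[1].lower()
              if ext not in TARGETS:
                  continue
              out_fmt = TARGETS[ext]
              with open(os.path.join("old_files", name), "rb") as f:
                  resp = requests.post("%s/dap/convert/%s/" % (BD_API, out_fmt),  # assumed endpoint
                                       headers=HEADERS, files={"file": f})
              resp.raise_for_status()
              result_url = resp.text.strip()  # assumed: the service returns a URL to the converted file
              time.sleep(30)                  # naive wait for the conversion to finish
              out_name = os.path.splitext(name)[0] + "." + out_fmt
              with open(os.path.join("converted", out_name), "wb") as out:
                  out.write(requests.get(result_url, headers=HEADERS).content)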

    • Problem 4: Given a collection of *.xlsx files, obtain results based on the values of selected columns. (Emphasizes extraction and analysis of scientific data)

...

      • Convert the *.xlsx files to *.csv using the conversion API so that the file contents can be viewed on the VM. We are not installing any office software on the VM.
      • Use the extraction API to extract columns from the files.
      • Perform some analysis and add the results to the technical metadata
      •  Write an extractor/converter for this problem
        • This should be an enticing yet simple problem that can handle many spreadsheets and get a result.
        • Ideas
          1. An Algebra 101 traveling-trains problem. Two trains leave two different stations on tracks heading toward a junction. Given spreadsheets with departure times, distances, velocities, etc., upload all the spreadsheets and determine whether the trains will crash.
            1. This problem is simple and easily understood, and it can clearly be scaled to much more involved traffic problems.
            2. However, it doesn't really present a cool new idea. It may be preferable to think of something using more cutting-edge technology.
          2. A bacterial growth model. Given cultures with varied conditions (e.g., pH) stored in multiple spreadsheets, determine the growth rate. Might be able to base this on http://mathinsight.org/bacteria_growth_initial_model (a sketch of the rate calculation follows this list).
            1. This would require a few minutes of explanation of the model and some learning by the developer of the extractor.
            2. Still maybe not that enticing.
          3. Better Ideas?
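          If we go with the bacterial growth idea, the analysis inside the extractor could be as small as fitting the exponential model P(t) = P0 * exp(k * t) from the mathinsight page; a sketch, assuming each converted CSV has "time" and "population" columns (the column names are placeholders):

              import csv
              import math

              def growth_rate(csv_path):
                  """Estimate the growth rate k in P(t) = P0 * exp(k * t) by a
                  least-squares fit of ln(population) against time."""
                  times, log_pops = [], []
                  with open(csv_path) as f:
                      for row in csv.DictReader(f):
                          times.append(float(row["time"]))
                          log_pops.append(math.log(float(row["population"])))
                  n = len(times)
                  mean_t = sum(times) / n
                  mean_y = sum(log_pops) / n
                  # Slope of the least-squares line of ln(P) vs. t is the growth rate k.
                  num = sum((t - mean_t) * (y - mean_y) for t, y in zip(times, log_pops))
                  den = sum((t - mean_t) ** 2 for t in times)
                  return num / den

              # One value per spreadsheet, added to the technical metadata, e.g.:
              # {"growth_rate_per_hour": growth_rate("culture_ph7.csv")}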
                         
    • CSV files (Ameriflux): use BD for some gap-filling on the files and return the result (a pandas sketch follows)
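      A trivial local stand-in for the gap-filling (the real work would be done by a BD extractor): pandas interpolation over the missing values, assuming gaps are marked with the Ameriflux -9999 code:

          import pandas as pd

          # Ameriflux files mark missing values as -9999.
          df = pd.read_csv("ameriflux_site.csv", na_values=[-9999, "-9999"])

          # Simple linear interpolation across the gaps.
          filled = df.interpolate(method="linear", limit_direction="both")
          filled.to_csv("ameriflux_site_gapfilled.csv", index=False)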

    • Problem 5: Obtain Ameriflux data and convert it into the *.clim format (similar to CSV but tab-separated) for the SIPNET model. Calculate the average air temperature and its standard deviation; a sketch of the statistics step follows. (This will emphasize both conversion and analysis)
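      A sketch of the statistics step, run after the BD conversion has produced the tab-separated file; the header row and the "TA" air-temperature column name are assumptions about the converter's output:

          import csv
          import math

          temps = []
          with open("ameriflux_site.clim") as f:
              # Assumes a header row and a tab-separated air-temperature column named "TA"
              # (adjust to the actual layout produced by the converter).
              for row in csv.DictReader(f, delimiter="\t"):
                  value = float(row["TA"])
                  if value != -9999:        # skip Ameriflux missing-value markers
                      temps.append(value)

          mean = sum(temps) / len(temps)
          std = math.sqrt(sum((t - mean) ** 2 for t in temps) / (len(temps) - 1))
          print("Average air temperature: %.2f, standard deviation: %.2f" % (mean, std))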

...