...
| Current "Standard" Approach | Box Tuned / Potentially Streamlined |
|---|---|
| If using the `/extractions/file` endpoint, the file is transferred three times. If using the `/extractions/url` endpoint, the number of transfers is unknown (and the URL endpoint may not be usable at all, since the Box file won't be publicly accessible and has to be downloaded via the Box API). The file lives in Clowder until the cleanup script is run. | The file is downloaded once from Box to the extractor container. It lives in the extractor and is deleted at the end of `process_message` by PyClowder (is this correct, i.e. by PyClowder?). |
| The Box SDK only lives in the BoxClient service. No changes are required elsewhere; however, this client will need to be maintained (by us or an external party). | The Box SDK has to be introduced into the PyClowder library, and any other services we want to support would also have to be added in the future. The burden of maintenance and of adding newly supported external services is squarely on our end (vs. on the client's end). Question: is PyClowder the right place for this? PyClowder is just a convenient wrapper for creating some extractors that happen to be written/wrapped in Python; this would make it more heavyweight and leave out other languages. |
| An automated translation of Clowder metadata to Box Skills cards would have to be developed, which may or may not be difficult (it sounds like there are only four or five card types). | A custom metadata structure for Box would be implemented in the extractor. What happens when an extractor doesn't support a specific service (e.g. Box, Dataverse)? |
| Potential bottlenecks for massive scaling. Notes: we can't rely on threading in the BoxClient to do the polling, since we would risk running out of threads. We can run a small experiment to get some numbers here (e.g. average time per request, CPU hours, memory utilized, network I/O). | Potential bottlenecks for massive scaling. Notes: everything apart from RabbitMQ is stateless and can be horizontally scaled. We can run a small experiment to get some numbers here (e.g. average time per request, CPU hours, memory utilized, network I/O). |
| We would need to deploy a new service and proxy it behind Apache. The BoxClient would need to be allocated a service account and would have to handle Brown Dog tokens. | We would need to add some endpoints to fence. |
| Errors can be reported in Clowder (not visible to the Box user). The BoxClient could potentially retry. | Limited error logging and reporting; unsure about retrying if an extraction fails. Eventually we could create an app where the user logs in via their Box credentials to see the history of their extractions. |
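The metadata translation mentioned in the table could start as a small per-card-type mapping function. Below is a minimal sketch assuming Clowder metadata arrives as a flat dict and targeting the Box Skills keyword-card shape; the exact card schema should be verified against the Box Skills kit documentation, and `make_keyword_card`, `clowder_md`, and the sample values are hypothetical names used for illustration.

```python
def make_keyword_card(clowder_md, skill_name="browndog-extractor"):
    """Translate flat Clowder metadata into one Box Skills keyword card.

    Box Skills currently defines only a handful of card types
    (keyword, transcript, timeline, status), so a small mapper
    per card type may be all the translation layer needs.
    """
    # Each metadata value becomes one keyword entry on the card.
    entries = [{"type": "text", "text": str(value)}
               for value in clowder_md.values()]
    return {
        "type": "skill_card",
        "skill_card_type": "keyword",
        "skill_card_title": {"code": skill_name,
                             "message": skill_name.replace("-", " ")},
        "entries": entries,
    }

# Hypothetical metadata produced by an extractor:
card = make_keyword_card({"language": "en", "topic": "census"})
```

A real implementation would also have to decide which Clowder metadata fields map to which card type (e.g. OCR output to a transcript card), which is where the "only four or five types" observation helps keep scope small.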
...
Wireframe
The tools catalog will leverage ideas from Binder to facilitate community extractor development and registration. The catalog will use an underlying Git repo for storing extractor_info documents and for keeping track of versions, issues, branches, and pull requests. It will download the extractor_info.json file to populate information on the page. It will additionally furnish Box enterprise admins with URLs that expose a tool as a Skill. Initially they will have to copy and paste the URL; once Box exposes management of Skills through an API, this can be further automated.
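Populating the catalog page from a downloaded extractor_info.json could be as simple as pulling out a few fields. A minimal sketch, assuming the common extractor_info layout (`name`, `version`, `description`); `render_summary` and the sample document are hypothetical, and the exact schema of a given extractor should be checked against its repo.

```python
import json

def render_summary(extractor_info_text):
    """Extract the page-facing fields from a raw extractor_info.json string."""
    info = json.loads(extractor_info_text)
    return {
        "title": info.get("name", "unknown"),
        "version": info.get("version", "unversioned"),
        "blurb": info.get("description", ""),
    }

# Hypothetical document as it might be fetched from the Git repo:
sample = '{"name": "ncsa.langid", "version": "1.0", "description": "Detects language"}'
summary = render_summary(sample)
```

Because the catalog only reads the file, version history, issues, and pull requests stay in the Git repo itself, which is the point of leaning on Binder-style conventions.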
...
- Langid
- DBPedia
- Census From Cell
- Handwritten Decimals
- Killed Photos
- Mean Grey
- Faces
- Eyes
- Profiles
- Closeups
- NLTK Summary
- Stanford CoreNLP
- Tesseract
- Tika
- Versus
- VLFeat
- Generalized exif/image metadata extractor
Scientific Communities to be Seeded in Tools Catalog
- OpenCV - Grad student to curate?
- Critical Zone Observatory (via ESIP?)
- Data Driven Ag
- Bisque - Counting Cells in a microscope image??
- Cosmology