...

  • Custom catalog containing only the tools needed for the training environment
  • User accounts that can be created with or without requiring registration (e.g., batch import) or approval (a batch-import sketch follows this list)
  • Optional TLS and vulnerability management. Not all services will be around long enough to merit it.
  • Wildcard DNS (www.x.ndslabs.org)
  • Short-term scalable resources (e.g., 40 users for 4 hours) as well as longer-term stable resources (e.g., 11 weeks, 24x7, maintenance allowed)
  • Custom data, API keys, etc. accessible by users; shared data available to all users of the system. For example, Phenome 2017 wanted sample data for each user and pre-defined API keys.
  • Ability to deploy a dedicated environment, scale it up, and tear it down. At the end of the workshop/semester, access can be revoked.
  • Ability to back up/download data
  • Ability to deploy the system under a variety of architectures (commercial cloud, local, OpenStack, etc.)
  • Ability to host/manage the system at NDSC/SDSC/TACC
  • Security/TLS/vulnerability management
  • with failover
  • Support and clear SLA
  • Worth considering:
    • Authentication that ties to existing systems (e.g., Shibboleth, OAuth)
    • Custom documentation and branding/skinning
    • Configurable quotas (not one-size-fits-all)
    • Monitoring
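
A minimal sketch of what such a batch import might look like, assuming a hypothetical Workbench REST endpoint; the base URL, the /accounts path, the token header, and the field names below are placeholders, not the actual API:

    import csv
    import requests

    # Hypothetical endpoint and admin token -- placeholders, not the real Workbench API.
    API_BASE = "https://www.workshop1.ndslabs.org/api"
    ADMIN_TOKEN = "REPLACE_ME"

    def batch_import(csv_path):
        """Create an account for each row of a CSV (name,email) without self-registration."""
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                resp = requests.post(
                    f"{API_BASE}/accounts",
                    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
                    json={"name": row["name"], "email": row["email"], "approved": True},
                )
                resp.raise_for_status()
                print("created", row["email"])

    if __name__ == "__main__":
        batch_import("attendees.csv")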

Scalable analysis environment

We can also envision the platform serving as a replacement for the TERRA-REF toolserver or as a DataDNS analysis environment. In this case, the requirements include:

  • Custom catalog of tools supported for the environment, as well as user-defined catalogs
  • User accounts that can be created without requiring registration (via the API). For example, shared authentication with Clowder for TERRA-REF.
  • Authentication that ties to existing systems (e.g., Shibboleth, OAuth)
  • Long-term stable and scalable resources. Ability to add/remove nodes as needed.
  • Ability to terminate long-running containers to reclaim resources
  • Custom documentation and branding, although the UI itself may be optional
  • Ability to mount data stored on remote systems (e.g., ROGER) as read-only and possibly read-write scratch space
  • Ability to add data to a running container, retrieved from a remote system?
  • Clear REST API to:
    • List tools; list environments for a user; launch tools; stop tools (a hypothetical client sketch follows this list)
  • Security/TLS/vulnerability assessment
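
Since the API itself is not yet defined, the following client sketch is purely illustrative; every endpoint path, parameter name, and the bearer-token scheme shown here is an assumption:

    import requests

    API_BASE = "https://workbench.example.org/api"  # placeholder host
    TOKEN = "REPLACE_ME"                            # e.g., a token shared with Clowder auth
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    def list_tools():
        """List the tools in the supported catalog (hypothetical endpoint)."""
        return requests.get(f"{API_BASE}/tools", headers=HEADERS).json()

    def list_environments(user):
        """List the running environments for a given user (hypothetical endpoint)."""
        return requests.get(f"{API_BASE}/users/{user}/environments", headers=HEADERS).json()

    def launch_tool(user, tool_id):
        """Launch a tool for a user; returns the new environment record."""
        resp = requests.post(f"{API_BASE}/users/{user}/environments",
                             headers=HEADERS, json={"tool": tool_id})
        resp.raise_for_status()
        return resp.json()

    def stop_tool(user, env_id):
        """Stop a (possibly long-running) environment so its resources can be reclaimed."""
        requests.delete(f"{API_BASE}/users/{user}/environments/{env_id}",
                        headers=HEADERS).raise_for_status()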

...

Another use case, really a re-purposing of the platform, is to support the development and deployment of research data portals (a.k.a. the Zuhone case). Here we have something like Workbench used to develop and test services, with the ability to "push" or "publish" them, which is still a bit unclear.

Requirements include:

  • Ability to develop data portal using common tools (e.g., development environments or portal toolkits)
  • Ability to deploy (host) data portal services "near" datasets (e.g., ythub).

Additional requirements

  • Security (TLS)
  • Backup
  • Monitoring (Nagios)
  • TLS/DNS
  • Custom DNS entries (gcmc.hub.yt)
  • Optional authentication into portal services (i.e., restricting end-user access to a service); see the sketch below
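
As a sketch of the optional-authentication item, a portal service could gate every request on a shared token; the use of Flask, the token value, and the port below are illustrative assumptions, not a prescribed design:

    from flask import Flask, request, abort

    app = Flask(__name__)

    # Hypothetical shared token; in practice this might come from Workbench or an OAuth proxy.
    ACCESS_TOKEN = "REPLACE_ME"
    REQUIRE_AUTH = True  # set False for a fully open portal service

    @app.before_request
    def check_token():
        """Reject requests without the expected bearer token when auth is enabled."""
        if REQUIRE_AUTH and request.headers.get("Authorization") != f"Bearer {ACCESS_TOKEN}":
            abort(401)

    @app.route("/")
    def index():
        return "portal service placeholder"

    if __name__ == "__main__":
        app.run(port=8080)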

 

Current features/components

...