Overview

This page defines test cases for NDS-132. The Dataverse "stack" consists of the required components Dataverse, Postgres, Rserve, and Solr, plus the optional components TwoRavens and iRODS/iCAT.

Global Preconditions

  • Kubernetes is running
  • NDSLabs API server and GUI are running
  • Project has been created
  • User is logged into project
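
Before running the cases below, the first two preconditions can be smoke-checked programmatically. This is a sketch, not part of the official test plan: the API server URL is an assumption, and the HTTP opener is injected so the check can be exercised without a live deployment.

```python
# Pre-flight check for the "API server is running" precondition.
# The base URL is an assumption -- substitute your NDSLabs API endpoint.
import urllib.request
import urllib.error

def api_server_up(base_url, opener=urllib.request.urlopen, timeout=5):
    """Return True if the NDSLabs API server answers an HTTP request."""
    try:
        with opener(base_url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False
```

The same pattern (injected opener, short timeout) works for checking the GUI endpoint.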

Basic Cases

Service List

  • Preconditions:
    • User is logged in to project
  • Expected results: 
    • "Dataverse" appears in the NDSLabs service list
    • Description tooltip is displayed when hovering over the "+" button

Add Stack - Basic (no existing volumes)

  • Pre-Conditions
    • No existing volumes
  • Test Steps
    • User selects "+" button next to Dataverse
    • Introduction page displays with brief description of Dataverse
      • Required services: Dataverse, Postgres, Solr, and Rserve
      • Volumes required for: Dataverse, Postgres, and Solr
    • User does not select optional services (TwoRavens or iCAT)
    • Configuration:
      • Required:
        • Database password
      • Optional configuration
        • SMTP server (smtp.ncsa.illinois.edu)
    • Volumes:
      • Required volumes for Dataverse, Postgres, and Solr
  • Expected results
    • Stack is added with Dataverse, Postgres, Solr, and Rserve
    • Dataverse config is visible
    • Services are stopped; "Delete Stack" and "Launch Stack" buttons are enabled

Add Stack - Basic (existing volumes)

  • Pre-Conditions
    • Volume exists from a previous Dataverse stack (see Delete stack - preserve data)
  • Test Steps
    • User selects "Dataverse" stack from service list, wizard is displayed
    • Same flow as Add Stack - Basic, but use existing Dataverse volume
  • Expected results
    • Stack is created 
    • Existing volume is used for Dataverse service
    • Stack is in a stopped state
    • Optional flow – see Restart stack case (volume is reused)

Start Stack

  • Pre-Conditions
    • Stack has been added
    • Stack is stopped
  • Test Steps
    • Select "Launch Stack"
  • Expected results
    • All services are started without error (~3-5 min – Dataverse service is slow to start)
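
Since the Dataverse service can take 3-5 minutes to come up, an automated version of this case needs a poll-with-timeout rather than a fixed sleep. A generic sketch; `check` stands in for whatever health probe is used (e.g. polling the stack status via the API), and the clock/sleep are injectable for testing:

```python
# Poll a health check until it passes or the deadline expires.
# `check` is any zero-argument callable returning True once healthy.
import time

def wait_until(check, timeout=300, interval=10,
               clock=time.monotonic, sleep=time.sleep):
    """Poll `check` every `interval` seconds for up to `timeout` seconds."""
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False
```

A timeout of 300 seconds matches the upper end of the 3-5 minute window noted above; the same helper applies to the Stop Stack and Restart Stack cases with a shorter timeout.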

View Container Logs

  • Pre-Conditions
    • Stack has been started
  • Test Steps
    • Select "Logs" for each service
  • Expected results
    • Logs are viewable for each service

View Dataverse Interface 

  • Pre-Conditions
    • Stack has been started
  • Test Steps
    • Select endpoint link next to Dataverse service
    • Dataverse login page is displayed (dataverseAdmin/admin)
    • Select "Add Data" > "New Dataset"
    • Enter a title and description text, select a subject, add files ("Add file"), and select "Save Dataset"
  • Expected results
    • User is able to login and upload a file
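
The UI upload above can also be exercised through Dataverse's native API (POST to the dataset's add-file endpoint with an `X-Dataverse-key` header). The sketch below only builds the request pieces so it can be verified without a live server; the DOI and token values are placeholders, and the actual call would be a multipart POST of the file.

```python
# Build the URL and auth header for Dataverse's native add-file API.
# The persistent ID and API token here are placeholders for illustration.
from urllib.parse import urlencode

def add_file_request(base_url, persistent_id, api_token):
    """Return (url, headers) for adding a file to an existing dataset."""
    query = urlencode({"persistentId": persistent_id})
    url = f"{base_url.rstrip('/')}/api/datasets/:persistentId/add?{query}"
    headers = {"X-Dataverse-key": api_token}
    return url, headers
```

Using the native API for the upload step makes the "previously uploaded data is there" check in the Restart Stack case scriptable as well.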

Stop Stack

  • Pre-Conditions
    • Stack is started
  • Test Steps
    • Select "Stop Stack"
  • Expected results
    • All services in the stack are stopped (after ~2-3 minutes)

Restart Stack

  • Pre-Conditions
    • Stack was previously started
    • Stack is stopped
  • Test Steps
    • Select "Launch Stack"
    • Select "Dataverse" endpoint
    • Login dataverseAdmin/admin
  • Expected results
    • Stack is started (~2-3 minutes)
    • Previously uploaded data is still present

Delete stack - preserve data

  • Pre-Conditions
    • Stack is stopped
  • Test Steps
    • Select "Delete stack"
    • Select "Yes, but save the data"
  • Expected results
    • Stack is deleted, but volume remains (see Volumes tab)
    • Optional flow: see Add Stack - Basic (existing volumes)

Delete stack - delete data

  • Pre-Conditions
    • Stack is stopped
  • Test Steps
    • Select "Delete Stack"
    • Select "Yes, and delete the data"
  • Expected results
    • Stack is deleted, volume is removed

Optional Cases

TwoRavens integration

  • Pre-Conditions
    • No existing volumes
  • Test Steps
    • Same as "Add Stack - Basic" case
    • User selects optional TwoRavens service
    • No additional configuration or volumes for TwoRavens
    • Select endpoint link next to "Dataverse"
    • Login to Dataverse (dataverseAdmin/admin)
    • Upload file "fearonLaitin.csv", Save Dataset
    • Select "Explore" button next to file
    • TwoRavens interface is displayed; after ~30 seconds, the graph is displayed
  • Expected results
    • Stack is added with Dataverse, Postgres, Solr, Rserve, and TwoRavens
    • Dataverse config is visible
    • Services are stopped; "Delete Stack" and "Launch Stack" buttons are enabled

iCAT integration


  • Test Steps
    • Start iRODS iCAT instance (this is the "preservation" server)
      • Enable optional Cloudbrowser UI
      • Generate iRODS password
      • Specify "fedZone" for CloudBrowser zone
    • Start Dataverse
      • Enable optional Dataverse iCAT service
      • Config:
        • Required
          • iRODS password: generate
          • iRODS preservation server IP: enter internal IP address of existing iCAT server
      • Basic:
        • Set "iRODS Preservation server hostname" to the hostname of the iCAT server from step 1 (this can be found in the iCAT server logs)
          • The hostname may need to be a fully qualified domain name (FQDN)
      • Launch Dataverse
    • Start separate CloudBrowser UI
      • "Show standalone services" > iRODS CloudBrowser UI
      • Launch stack
    • Upload file to Dataverse
      • Open the Dataverse UI
      • Login (dataverseAdmin/admin)
      • Upload file "fearonLaitin.csv" > Save Dataset
    • Wait 5-10 minutes for the files to copy across services
    • Optionally, exec into the Dataverse instance and run /irsynch.sh, then exec into the dvicat instance and run /opt/dataverse/archive.sh
    • Confirm files have propagated:
      • Open standalone Cloudbrowser UI
        • Host = internal IP of dvicat instance
        • Port = 1247
        • Zone = dvnZone
        • User = dataverse
        • Password = PRESERVATION_PASSWORD from dvicat
        • Confirm dvn_preservation directory is not empty (should contain 10.5072 directory)
      • Open iCAT Cloudbrowser UI
        • User = rods
        • Password = RODS_PASSWORD
        • Select "fedZone" > "dvnZone" > dvn_preservation. Confirm 10.5072 directory exists
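
Because propagation can take 5-10 minutes, a scripted version of the final confirmation should poll rather than check once. In this sketch, `list_dir` stands in for whatever client performs the listing (e.g. an iRODS listing of the dvn_preservation collection) and is injectable so the retry logic can be tested offline:

```python
# Poll a directory listing until the expected entry (the 10.5072
# directory from the steps above) appears, or give up after `attempts`.
import time

def preservation_populated(list_dir, expected="10.5072", attempts=60,
                           interval=10, sleep=time.sleep):
    """Return True once `expected` shows up in the listing returned by `list_dir`."""
    for _ in range(attempts):
        if expected in list_dir():
            return True
        sleep(interval)
    return False
```

With the defaults, this waits up to 10 minutes, matching the propagation window stated in the steps; the same helper covers both the dvicat and iCAT CloudBrowser checks.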
