Release 1.5

Bug Fixes (Highest Testing Priority)

Permissions

  1. SETUP: Is there special setup or configuration needed to make an instance "private"?
  2. Log in to Clowder
  3. Make yourself a SuperAdmin using the dropdown at the top-right
    1. You should now be able to see all datasets on Clowder, including those that you are not a member of
  1. SETUP: Is there special setup or configuration needed to make an instance "private"?
  2. Log in to Clowder
  3. ???
    1. Expectations?
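
The visibility rule exercised by the first test case above can be sketched as follows. This is a hypothetical model of the expected behavior, not Clowder's actual implementation; the function name and data shapes are made up for illustration:

```python
def visible_datasets(all_datasets, user):
    """A SuperAdmin sees every dataset; an ordinary user sees only
    datasets they are a member of (hypothetical model of the rule
    being tested)."""
    if user.get("superadmin"):
        return list(all_datasets)
    return [d for d in all_datasets if user["id"] in d["members"]]

datasets = [
    {"name": "shared-set", "members": ["alice", "bob"]},
    {"name": "private-set", "members": ["bob"]},
]
alice = {"id": "alice", "superadmin": False}
admin = {"id": "alice", "superadmin": True}

print(len(visible_datasets(datasets, alice)))  # 1
print(len(visible_datasets(datasets, admin)))  # 2
```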

Fixes to Authentication Providers

  1. ???
  1. Enable LDAP login in Clowder
  2. Log in with one user via LDAP
  3. Log in with a second user via LDAP
  4. ???
    1. Expectations?

Fixed MongoDB Dataset Service findFileById

  1. Add a file under the path test/a/b/file.
    1. Run the get-paths endpoint on this file.
    2. Delete this file from Clowder.
    3. Expectations?
  2. Add a file under the test dataset.
    1. Run the get-paths endpoint on this file.
    2. Delete this file from Clowder.
    3. Expectations?
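
The lookup being tested can be modeled as a walk over a dataset's folder tree that returns the folder path leading to a file id. This is a hypothetical stand-in for the fixed findFileById / get-paths behavior, with an invented data shape:

```python
def file_paths(dataset, file_id):
    """Return the folder path to the file with the given id, or None
    if the file is not in the dataset (hypothetical sketch)."""
    def walk(folder, trail):
        if file_id in folder.get("files", []):
            return trail
        for name, sub in folder.get("folders", {}).items():
            found = walk(sub, trail + [name])
            if found is not None:
                return found
        return None
    return walk(dataset, [])

dataset = {
    "files": ["root-file"],
    "folders": {"a": {"folders": {"b": {"files": ["file-123"]}}}},
}
print(file_paths(dataset, "file-123"))   # ['a', 'b']
print(file_paths(dataset, "root-file"))  # []
```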

Fixed RabbitMQ Parameter Escaping

  1. Enable the RabbitMQ plugin in your Clowder instance

  2. Register an extractor with Clowder

  3. Manually submit a file or dataset for extraction

  4. View the message in RabbitMQ
    1. You should see that the map of "parameters" is no longer escaped as a string
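
The difference you are checking for can be illustrated with plain JSON serialization. Before the fix, the parameters map was serialized twice, so it arrived as an escaped string instead of a nested object (the parameter keys below are illustrative):

```python
import json

parameters = {"style": "thumbnail", "quality": "high"}

# Broken behavior: the map is serialized first, then embedded as an
# escaped string inside the message body.
escaped = json.dumps({"parameters": json.dumps(parameters)})

# Fixed behavior: the map stays a nested JSON object.
unescaped = json.dumps({"parameters": parameters})

print(escaped)    # the inner braces and quotes are backslash-escaped
print(unescaped)  # the map is a plain nested object

assert isinstance(json.loads(escaped)["parameters"], str)
assert isinstance(json.loads(unescaped)["parameters"], dict)
```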

Fixed Extractors Logging Sensitive Data in Plaintext

  1. Enable the RabbitMQ plugin in your Clowder instance
  2. Register an extractor with Clowder
  3. Manually submit a file or dataset for extraction

    1. Hint: Try to find some case that you know will produce an extractor error
  4. Check the extractor logs / metadata
    1. You should no longer see the secret_key explicitly printed to the logs during errors
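
A minimal sketch of the kind of redaction this fix performs is shown below. The log-line format and the key=value shape are assumptions for illustration; only the secret_key name comes from the test expectation above:

```python
import re

def redact_secrets(message):
    """Replace any secret_key value in a log line with a placeholder
    (hypothetical sketch of the redaction the fix introduces)."""
    return re.sub(r"(secret_key['\"]?\s*[:=]\s*)\S+", r"\1[REDACTED]", message)

line = "ERROR extractor failed: secret_key=abc123 host=localhost"
print(redact_secrets(line))
# ERROR extractor failed: secret_key=[REDACTED] host=localhost
```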

New Views: ExtractorInfo / ExtractorDetails

  1. Enable the RabbitMQ plugin in your Clowder instance
  2. Start an extractor that has the process block defined in extractor_info.json
  3. Log in as an admin user via http://localhost:9000/login
  4. Expand the "Admin" dropdown menu at the top-right (gear icon)
  5. Choose the new option for "Extractors" (not to be confused with "Extractions", which previously existed)
  6. Enable an extractor and click "Update"
  7. Navigate back to the "Extractors" view and refresh the page
  8. Navigate to a per-space extractor list for a space (e.g. spaces/:spaceId/extractors)
  1. Enable the RabbitMQ plugin in your Clowder instance

  2. Start and register an extractor with Clowder (for example, ncsa.image.metadata):

    export PUBLIC_IP="<YOUR_PUBLIC_IP>"
    export CLOWDER_API_KEY="<CLOWDER_API_KEY>"
    docker run -it --rm --net=host \
      -e RABBITMQ_URI="amqp://guest:guest@localhost/%2f" \
      -e REGISTRATION_ENDPOINTS="http://${PUBLIC_IP}:9000/api/extractors?key=${CLOWDER_API_KEY}" \
      clowder/extractors-image-metadata


  3. Navigate to localhost:9000/extractors/:extractorName (for example: http://localhost:9000/extractors/ncsa.image.metadata)

    1. You should see a new view that roughly matches the mockups shown here
    2. The new view should list all/most of the common fields of the underlying ExtractorInfo object (note that these fields were previously hidden everywhere else in the UI)
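
For reference, a minimal extractor_info.json with a process block might look like the following. The field values are illustrative placeholders, and only a subset of the fields a real extractor would declare is shown:

```json
{
  "name": "ncsa.image.metadata",
  "version": "2.0",
  "description": "Extracts EXIF metadata from images",
  "author": "Jane Doe <jane@example.com>",
  "process": {
    "file": ["image/*"]
  }
}
```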

New API: Reverse Proxy

  1. Configure the following in your custom.conf, then run Clowder using that configuration:

    clowder.proxy {
      geopub="https://geoserver.ncsa.illinois.edu/geoserver"
    }


  2. Without logging in, attempt to hit the geoserver API through the proxy - for example: http://localhost:9000/api/proxy/geopub/clowder/wms?service=WMS&version=1.1.0&request=GetMap&layers=clowder:tmppFKK2_.zip12172744&styles=&bbox=365741.5,4434859.5,375398.5,4444630.5&width=506&height=512&srs=EPSG:26916&format=image/png
  3. Log into Clowder via http://localhost:9000/login
  4. Attempt to hit the geoserver API again using the proxy - for example: http://localhost:9000/api/proxy/geopub/clowder/wms?service=WMS&version=1.1.0&request=GetMap&layers=clowder:tmppFKK2_.zip12172744&styles=&bbox=365741.5,4434859.5,375398.5,4444630.5&width=506&height=512&srs=EPSG:26916&format=image/png
  5. Choose an endpoint_key that does not exist - for example: http://localhost:9000/api/proxy/ThisIsNonsense
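
The routing in the steps above can be sketched as: the segment after /api/proxy/ is looked up in clowder.proxy, and the remainder of the path plus the query string is appended to the configured base URL. This is an illustrative model of the expected behavior, not the actual implementation:

```python
proxy_config = {"geopub": "https://geoserver.ncsa.illinois.edu/geoserver"}

def resolve_proxy(path, query=""):
    """Map /api/proxy/<endpoint_key>/<rest> to the configured base URL.
    Returns None when the endpoint_key is unknown (hypothetical sketch)."""
    parts = path[len("/api/proxy/"):].split("/", 1)
    key = parts[0]
    rest = parts[1] if len(parts) > 1 else ""
    base = proxy_config.get(key)
    if base is None:
        return None  # e.g. /api/proxy/ThisIsNonsense
    target = base + "/" + rest if rest else base
    return target + ("?" + query if query else "")

print(resolve_proxy("/api/proxy/geopub/clowder/wms", "service=WMS&request=GetMap"))
print(resolve_proxy("/api/proxy/ThisIsNonsense"))  # None
```
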
  1. Configure your clowder.proxy as follows:

    clowder.proxy {
        authtest="http://username:12345@httpbin.org/basic-auth/username/12345"
    }


  2. Navigate to http://localhost:9000/api/proxy/authtest
  3. Navigate to http://localhost:9000/login and log in to Clowder
  4. Navigate once more to http://localhost:9000/api/proxy/authtest
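
The authtest URL in the config above embeds basic-auth credentials directly in the URL. The standard library shows how those credentials are carried and how they become a Basic Authorization header; no network access is needed for this illustration:

```python
import base64
from urllib.parse import urlsplit

# The proxied URL from the clowder.proxy config above.
url = "http://username:12345@httpbin.org/basic-auth/username/12345"
parts = urlsplit(url)
print(parts.username, parts.password, parts.hostname)  # username 12345 httpbin.org

# The embedded credentials correspond to this Authorization header value.
token = base64.b64encode(f"{parts.username}:{parts.password}".encode()).decode()
print("Authorization: Basic " + token)
```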


Fixed Swagger Documentation Format

  1. View the updated spec in Swagger UI here: https://clowder.ncsa.illinois.edu/swagger/?url=http://localhost:9000/swagger

Notes

You can also serve swagger.yml some other way if you'd like, but CORS (I think) prevented me from linking Swagger UI directly to a raw file in our OpenSource BitBucket.

For example, this did not work: https://clowder.ncsa.illinois.edu/swagger/?url=https%3A%2F%2Fopensource.ncsa.illinois.edu%2Fbitbucket%2Fprojects%2FCATS%2Frepos%2Fclowder%2Fraw%2Fpublic%2Fswagger.yml%3Fat%3Drefs%252Fheads%252Fbugfix%252FCATS-910-fix-swagger-documentation-format

New Features

Tracking Usage Metrics

  1. Enable the RabbitMQ plugin in your Clowder instance
  2. Register the standard image preview extractor with Clowder
    1. NOTE: This will prevent "views" of a file from being counted as a download, which could be confusing to users
  3. Log in to Clowder
  4. Upload a new file
  5. View it in Clowder
    1. You should see that the file has 1 view and 0 downloads
  6. Refresh the page a few times
    1. You should see that the "views" count is incremented for each refresh
  7. Download the file and refresh the page again
    1. You should see now that the file has 1 download
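
The counting behavior exercised above can be modeled simply; the key point is that page loads and downloads increment separate counters, and a preview render does not count as a download. A hypothetical sketch of the expectations, not the actual implementation:

```python
class FileMetrics:
    """Hypothetical per-file counters matching the test expectations:
    each page load increments views, each download increments downloads."""
    def __init__(self):
        self.views = 0
        self.downloads = 0

    def view_page(self):
        self.views += 1

    def download(self):
        self.downloads += 1

m = FileMetrics()
m.view_page()                  # first view after upload
print(m.views, m.downloads)    # 1 0
m.view_page(); m.view_page()   # two refreshes
m.download()                   # one download
print(m.views, m.downloads)    # 3 1
```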

Per-Space Extractors

  1. Enable the RabbitMQ plugin in your Clowder instance
  2. Start an extractor that has the process block defined in extractor_info.json
    1. For example: image-metadata
  3. Create a space (or navigate to an existing space)
  4. Admins should see an "Extractors" button/link on the right of the spaces/:spaceId view - click this link
    1. You should be brought to spaces/:spaceId/extractors
    2. You should see any registered extractors listed here
    3. This view now should roughly match the mockups attached to this ticket (sans the "Show Columns" dropdown)
    4. Each extractor should list the name, description, author, and process triggers, as well as the checkbox to enable or disable them
    5. CATS-908: You shouldn't see any extraneous <br> elements between the page header and the table

  1. Start from the end of the previous test case (CATS-890)
    1. Make sure that the image-metadata extractor is disabled, both globally and in your test space
  2. Upload an image file to your test space
    1. Your extractor should not trigger, and should remain idle
  3. Enable the extractor globally via the new global extractor view (e.g. /extractors)
  4. Upload another test image
    1. This time your extractor should fire, since it has been enabled globally
  5. Disable the extractor globally (via /extractors)
  6. Enable the extractor at the space-level (e.g. spaces/:spaceId/extractors)
  7. Upload one more test image
    1. The extractor should trigger again, since it has been enabled in this space
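
The trigger behavior tested above can be summarized as: an extractor fires when it is enabled globally or in the space the file was uploaded to. A hypothetical sketch of that rule, with invented data structures:

```python
def should_fire(extractor, space, global_enabled, space_enabled):
    """Return True when the extractor is enabled globally or for the
    target space (model of the test expectations above)."""
    return extractor in global_enabled or extractor in space_enabled.get(space, set())

global_enabled = set()
space_enabled = {"my-space": set()}

# Disabled everywhere: the extractor stays idle.
print(should_fire("image-metadata", "my-space", global_enabled, space_enabled))  # False

# Enabled globally: the extractor fires.
global_enabled.add("image-metadata")
print(should_fire("image-metadata", "my-space", global_enabled, space_enabled))  # True

# Disabled globally but enabled for the space: the extractor fires again.
global_enabled.clear()
space_enabled["my-space"].add("image-metadata")
print(should_fire("image-metadata", "my-space", global_enabled, space_enabled))  # True
```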

Configuration / Maintainability Changes

  1. ???
  1. ???
  1. ???