There are several parallel efforts to capture information about Clowder metrics.

The goal is to minimize the number of moving parts needed to capture and store this data. Below is a summary of our discussion from 12/7.


RabbitMQ Queue & Flask API

Use a queue to store data points: not an extractor queue, but a special new system queue.

Flask API design notes: ideally these endpoints also match the calls on the new backend SinkService.
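For concreteness, here is a minimal sketch of what the datapoint endpoint could look like, assuming pika for RabbitMQ; the route and the clowder.metrics queue name are placeholders and should ultimately match the SinkService calls:

```python
# Minimal sketch, not the final API: route and queue names are placeholders.
import json

import pika
from flask import Flask, jsonify, request

app = Flask(__name__)
QUEUE = "clowder.metrics"  # hypothetical system queue (not an extractor queue)


def publish_datapoint(datapoint):
    """Push one datapoint onto the system queue."""
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.basic_publish(
        exchange="",
        routing_key=QUEUE,
        body=json.dumps(datapoint),
        properties=pika.BasicProperties(delivery_mode=2),  # persist to disk
    )
    connection.close()


@app.route("/api/datapoints", methods=["POST"])  # placeholder route
def add_datapoint():
    publish_datapoint(request.get_json(force=True))
    return jsonify({"status": "queued"}), 201
```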


Internal Clowder events service

For user activity (basically Max's reporting part and Mike's Clickstream work), we can call an internal RabbitMQ service for the events we want to capture, generating datapoints.
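As a sketch, the events service could boil down to one thin helper that backend code calls whenever a tracked event occurs; the envelope fields and event names here are assumptions, not a final schema:

```python
# Illustrative only: the envelope fields below are assumptions, not a schema.
import json
from datetime import datetime, timezone

import pika

QUEUE = "clowder.metrics"  # same placeholder system queue as above


def emit_event(component, event_type, user, data):
    """Wrap an event in a common envelope and publish it as a datapoint."""
    envelope = {
        "component": component,    # e.g. "storage", "traffic"
        "event_type": event_type,  # e.g. "file uploaded", "page view"
        "user": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data": data,
    }
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.basic_publish(exchange="", routing_key=QUEUE, body=json.dumps(envelope))
    connection.close()


# e.g. emit_event("storage", "file uploaded", "someuser",
#                 {"fileid": "f1", "datasetid": "d1", "spaceid": "s1", "bytes": 1048576})
```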

Current (frontend) tracking:

Proposed changes:

Clowder health monitor(s)

Bing's external monitor can't call Clowder, because it has to operate even when Clowder is down. Instead, the monitors in different regions can collect their datapoints and post them to the Flask API, which goes around Clowder and into RabbitMQ directly.


We run a service as a Docker container that periodically fetches statistics about the Clowder service, e.g. uptime, response time, and the number of active connections. This data will be stored in a backend service such as InfluxDB (which will need the extra service endpoints), and Grafana will retrieve it and render it on the Grafana site for visualization.

Uptime: tells us whether the Clowder service is live. This metric is collected by pinging the target Clowder website with a certain timeout.

Response time: alongside uptime, we collect statistics on the response time of the ping and the elapsed time to download the Clowder homepage.

Number of connections: it would be good to see how many connections the Clowder website receives. We can measure the number of connections within a period of time by analyzing the NGINX access log.
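A rough sketch of one regional monitor follows, assuming the Flask datapoint endpoint above; the "ping" here is an HTTP GET of the homepage rather than ICMP, and all URLs and intervals are placeholders. The log-scanning helper would run wherever the NGINX log lives, not in the external monitor:

```python
# Rough sketch; deliberately has no dependency on Clowder itself, only on
# the metrics API, so it keeps working while Clowder is down.
import time
from datetime import datetime

import requests

CLOWDER_URL = "https://clowder.example.org/"             # instance to watch
SINK_URL = "https://metrics.example.org/api/datapoints"  # Flask API above
TIMEOUT = 10  # seconds before we consider Clowder down


def check_once(region):
    """Fetch the Clowder homepage and report uptime and response time."""
    start = time.monotonic()
    try:
        up = requests.get(CLOWDER_URL, timeout=TIMEOUT).ok
    except requests.RequestException:
        up = False
    elapsed = time.monotonic() - start
    requests.post(SINK_URL, timeout=TIMEOUT, json={
        "component": "health",
        "event_type": "ping update",
        "data": {"region": region, "up": up, "response_time": elapsed},
    })


def count_connections(access_log, since):
    """Count NGINX requests logged at or after `since` (a tz-aware datetime).

    Assumes the default combined log format with the timestamp in [brackets].
    """
    count = 0
    with open(access_log) as log:
        for line in log:
            raw = line.split("[", 1)[-1].split("]", 1)[0]
            when = datetime.strptime(raw, "%d/%b/%Y:%H:%M:%S %z")
            if when >= since:
                count += 1
    return count


while True:
    check_once(region="us-east")
    time.sleep(60)  # polling interval is arbitrary here
```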



Database monitor(s)

Finally, we need a service to actually pull the messages from RabbitMQ and write them to a database, whether that is MongoDB, InfluxDB, or something else. These could even register with Clowder like extractors, so that each gets a separate queue and multiple monitors can log to different destinations at once.
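A sketch of one such monitor, assuming InfluxDB as the destination and the same placeholder queue name as above:

```python
# Sketch of a database monitor; queue and database names are placeholders.
import json

import pika
from influxdb import InfluxDBClient

influx = InfluxDBClient(host="localhost", port=8086, database="clowder_metrics")


def on_message(channel, method, properties, body):
    """Translate one queued datapoint into an InfluxDB point and ack it."""
    event = json.loads(body)
    influx.write_points([{
        "measurement": event["component"],  # storage / extractions / traffic / health
        "tags": {"event_type": event["event_type"], "user": event.get("user", "")},
        "time": event["timestamp"],
        "fields": event.get("data", {}),
    }])
    channel.basic_ack(delivery_tag=method.delivery_tag)


connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="clowder.metrics", durable=True)
channel.basic_consume(queue="clowder.metrics", on_message_callback=on_message)
channel.start_consuming()
```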


Database design

Let's consider some different types of events. Assume user and timestamp are also captured for all of these.

| component   | event type                     | data captured                        | notes |
| ----------- | ------------------------------ | ------------------------------------ | ----- |
| storage     | file uploaded, file deleted    | fileid, datasetid, spaceid, bytes    |       |
| extractions | extraction event               | message, type (queued or working)    | do we care about data traffic downloaded to the extractor containers? |
| traffic     | page views, resource downloads | url, resourceid, bytes               | do we care about every page view? this is currently tracking which resources are being viewed but without the full url |
| health      | ping update                    | response time, queue length, other?  |       |
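For concreteness, a few datapoints mirroring this table might look like the following; the field names and values are illustrative assumptions, not a final schema:

```python
# Example datapoints mirroring the table above; all values are made up.
file_uploaded = {
    "component": "storage", "event_type": "file uploaded",
    "user": "someuser", "timestamp": "2021-12-07T10:00:00Z",
    "data": {"fileid": "f1", "datasetid": "d1", "spaceid": "s1", "bytes": 1048576},
}
resource_download = {
    "component": "traffic", "event_type": "resource download",
    "user": "someuser", "timestamp": "2021-12-07T10:05:00Z",
    "data": {"url": "/files/f1", "resourceid": "f1", "bytes": 1048576},
}
ping_update = {
    "component": "health", "event_type": "ping update",
    "user": "monitor-us-east", "timestamp": "2021-12-07T10:06:00Z",
    "data": {"response_time": 0.21, "queue_length": 3},
}
```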