  • Design Goal / Requirement

To support auto-scaling of system resources to adapt to the load of external requests to the Brown Dog Data Tiling Service. In general this includes Medici, MongoDB, RabbitMQ, and the extractors, but the current design focuses only on auto-scaling the extractors. Specifically, the system needs to start or use more extractors when certain criteria are met, such as the number of outstanding requests exceeding a threshold, and suspend or stop extractors when other criteria are met. The intention of the scaling-down part is mainly to free resources (CPU, memory, etc.) for other purposes.

...

  • Investigated technologies
    • Olive (olivearchive.org): mainly developed at Carnegie Mellon University (CMU).

...

The Brown Dog VM elasticity project needs to support multiple OSes, so OpenStack seems a viable high-level solution. We are currently planning to use OpenStack, and may consider using Docker on the VMs if needed.
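As a rough illustration of the OpenStack approach, the sketch below uses the legacy python-novaclient interface to start, suspend, and resume extractor VMs; the credentials, endpoint, and VM/image/flavor identifiers are placeholders, not values from this design.

# Sketch only: controlling extractor VMs through the OpenStack Compute API.
# Assumes the legacy python-novaclient credential style; all values are placeholders.
from novaclient import client

nova = client.Client("2", "username", "api_key", "project_id",
                     "http://openstack-host:5000/v2.0")

def start_extractor_vm(name, image_id, flavor_id):
    # Boot a new VM from an image that has the needed extractor installed as a service.
    return nova.servers.create(name=name, image=image_id, flavor=flavor_id)

def suspend_vm(name):
    # Suspend a running VM to free CPU and memory while keeping its state.
    nova.servers.find(name=name).suspend()

def resume_vm(name):
    # Resume a previously suspended VM when more extractor capacity is needed.
    nova.servers.find(name=name).resume()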


...


...

  • Algorithm / Logic

Assumptions:

The following assumptions are made in the design:

  1. each extractor is installed as a service on a VM, so when a VM starts, all the extractors that it contains as services start automatically;
  2. the limiting resource when extractors process input data is CPU, not memory, hard disk I/O, or network I/O, so the design only scales on CPU usage;
  3. multiple OS types need to be supported, including both Linux and Windows;
  4. the entire Brown Dog system will use RabbitMQ as the messaging technology.

...

Algorithm:

This VM elasticity system / module maintains and uses the following data as inputs (a sketch after this list illustrates how the dynamic items can be obtained):

  1. RabbitMQ queue lengths and the number of consumers for the queues;
    Can be obtained using the RabbitMQ management API.
  2. for each queue, the corresponding extractor name;
    Currently hard-coded in the extractor code, so that queue name == extractor name.
  3. for a given extractor, the list of running VMs where an instance of the extractor is running, and the list of suspended VMs where it was running;
    Running VM list: can be obtained using the RabbitMQ management API (queue --> connections --> IP).
    Suspended VM list: when suspending a VM, update the mapping for the given extractor: remove the entry from the running VM list and add it to the suspended VM list.
  4. the number of vCPUs of the VMs;
    This info is fixed for a given OpenStack flavor. The flavor must be specified when starting a VM, and this data can be recorded at that time.
  5. the load averages of the VMs;
    For Linux, this can be obtained by running a command ("uptime" or "cat /proc/loadavg") over ssh (a bit slow; the last test took 12 seconds from the devstack host to an Ubuntu machine, using an ssh public key).
  6. for a given extractor, the list of VM images where the extractor is available.
    This is manual and static data. It can be stored in a config file, a MongoDB collection, or in other ways.
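A minimal sketch of how the dynamic items (1, 3, and 5) could be collected, assuming the RabbitMQ management plugin is enabled and the VMs allow key-based ssh; the host names, credentials, and exact JSON field names are assumptions and may differ across RabbitMQ versions.

import subprocess
import requests

RABBITMQ_API = "http://rabbitmq-host:15672/api"  # placeholder host
AUTH = ("guest", "guest")                        # placeholder credentials

def queue_status(queue):
    # Item 1: queue length and number of consumers, via the management API
    # ("%2F" is the URL-encoded default vhost "/").
    q = requests.get("%s/queues/%%2F/%s" % (RABBITMQ_API, queue), auth=AUTH).json()
    return q["messages"], q["consumers"]

def consumer_ips(queue):
    # Item 3 (running VM list): follow queue --> consumers --> connections --> IP.
    # The field names ("consumer_details", "peer_host") are assumptions.
    q = requests.get("%s/queues/%%2F/%s" % (RABBITMQ_API, queue), auth=AUTH).json()
    return set(c["channel_details"]["peer_host"] for c in q.get("consumer_details", []))

def load_average(host):
    # Item 5: 1-minute load average of a Linux VM, read over ssh.
    out = subprocess.check_output(["ssh", host, "cat", "/proc/loadavg"]).decode()
    return float(out.split()[0])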

In the above data, items 2, 4 and 6 are static (or nearly static); the others are dynamic and change at run time.
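For the static items (2, 4, and 6), one possible arrangement is a small mapping loaded at startup from a config file or a MongoDB collection; the extractor, image, and flavor names below are purely illustrative.

# Sketch of the static / near-static data; all names are illustrative placeholders.
STATIC_CONFIG = {
    # item 2: queue name == extractor name, so no separate mapping is needed yet
    # item 4: OpenStack flavor --> number of vCPUs (recorded when a VM is started)
    "flavor_vcpus": {"m1.small": 1, "m1.medium": 2, "m1.large": 4},
    # item 6: extractor name --> VM images where the extractor is installed
    "extractor_images": {
        "ocr": ["ubuntu-ocr-image"],
        "image-preview": ["ubuntu-preview-image", "windows-preview-image"],
    },
}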

...

 

The system watches the above data:

Periodically (at a configurable interval, such as every minute), the system checks whether it needs to scale up and whether it needs to scale down. These two checks can run in parallel, but if they do, shared data such as the list of running VMs must be protected and synchronized.
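A sketch of this periodic check, assuming the two checks run in separate threads and share the running/suspended VM lists under a lock; check_scale_up and check_scale_down are placeholders for the scaling logic described elsewhere in this design.

import threading
import time

CHECK_INTERVAL = 60  # seconds; configurable

# Shared, mutable state: must be protected if the two checks run in parallel.
state_lock = threading.Lock()
running_vms = {}    # extractor name --> list of running VM IPs
suspended_vms = {}  # extractor name --> list of suspended VM names

def check_scale_up():
    # Placeholder: start or resume VMs when queue lengths exceed the threshold.
    pass

def check_scale_down():
    # Placeholder: suspend idle VMs to free CPU and memory for other purposes.
    pass

def periodic(check):
    # Run one of the scaling checks every CHECK_INTERVAL seconds.
    while True:
        with state_lock:
            check()
        time.sleep(CHECK_INTERVAL)

threading.Thread(target=periodic, args=(check_scale_up,)).start()
threading.Thread(target=periodic, args=(check_scale_down,)).start()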

...