
  • Design Goal / Requirement

BD-703. To add Docker containers as a new granularity in the VM elasticity module, thus expanding its functionality.

  • Design questions and answers

  1. Should an extractor type be managed by both a VM image and Docker, or only one of them?
    A: Only one, for simplicity. This choice therefore needs to be specified in the config file.
  2. Should the module support managing extractors using both VM images and Docker containers at the same time?
    A: Yes.
  3. Should the Docker machines be separate from the other machines, or the same machines?
    A: Separate.

  4. Docker image storage: Docker Hub, or a private registry?
    A: Docker Hub for now. Setting up a private registry takes time, and a secure one requires getting a certificate from a CA. This can be done later when needed. Use ncsa/clowder-ocr, ncsa/clowder-python-base, ncsa/clowder-opencv-closeups, etc. for now. Done.

  5. How do we restart a docker container if the application crashed/stopped?
    A: docker run --restart=always ...
    This retries indefinitely, with a delay that doubles before each retry, starting at 100 ms (0.1 second), to avoid flooding the server. "--restart=on-failure[:max-retries]" could also be used to limit the number of retries, but that could leave a container in the stopped state with no component to restart it. A RabbitMQ server restart usually causes such an error, and the error was observed to persist for about 2 minutes.
  6. How do we scale up an application managed by docker?
    A: see below.
  7. How do we scale down?
    A: see below.
    1. Do we suspend and resume Docker VMs, or always keep them running?
      A: We suspend and resume Docker VMs, but keep at least one running.
  8. A VM image is created and used to start "Docker machines" or "Docker VMs", to host the Docker containers. How do we start them – bootstrap them externally, or start them using the elasticity module?
    A: Add the Docker VM image info to the config file, so the module knows how to start a new Docker VM. Start one at the beginning of the elasticity run; later on, start more as needed.
  9. How do we detect idle extractors managed by Docker?
    A: The same logic using the RabbitMQ API as before. After detection, perform Docker-specific commands to stop the idle extractors.
  10. How do we detect idle Docker machines, i.e., machines with no containers running on them?
    A: Add a data structure for Docker machines. Find Docker VMs that have no extractors running on them and add them to the idle machine list, or otherwise signal that they can be suspended.
  11. How do we specify mapping of docker images with extractors?
    A: Add a [Docker] section in the config file, with line items such as "extr1: dockerimg1". When starting the elasticity module, load the config file and check for errors: an extractor type should be managed by only one method, either Docker or a VM image. If such configuration errors exist, print them out and fall back to a default method such as Docker – also make this default a configurable item in the config file.
  12. Details of the Docker VM image?
    A: Ubuntu 14.04 base image + Docker installed. In the config file [OpenStack Image Info] section:
    docker-ubuntu-trusty = docker, m1.small, ubuntu, NCSA-Nebula, ''
    Use a larger flavor (4 or 8 CPUs), since multiple containers would share one machine.
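As a sketch of the config handling described in Q11 (the [Docker] section and "extr1: dockerimg1" key format come from the answer above; the VM-image section name and the fallback logic are assumptions):

```python
from configparser import ConfigParser

# Hypothetical config fragment; the [Docker] section follows Q11 above,
# and the [VM Image Info] section name is an assumption for illustration.
SAMPLE_CONFIG = """
[Docker]
extr1: dockerimg1
extr2: dockerimg2

[VM Image Info]
extr2: vmimage1
extr3: vmimage2
"""

def load_extractor_mappings(text, default_method="docker"):
    """Load extractor->image mappings and resolve conflicts.

    An extractor type should be managed by exactly one method; if one
    appears in both sections, fall back to the (configurable) default.
    """
    cfg = ConfigParser()
    cfg.read_string(text)
    docker_map = dict(cfg["Docker"]) if cfg.has_section("Docker") else {}
    vm_map = dict(cfg["VM Image Info"]) if cfg.has_section("VM Image Info") else {}
    for extractor in set(docker_map) & set(vm_map):
        print("Config error: %s is managed by both Docker and a VM image; "
              "defaulting to %s" % (extractor, default_method))
        (vm_map if default_method == "docker" else docker_map).pop(extractor)
    return docker_map, vm_map

docker_map, vm_map = load_extractor_mappings(SAMPLE_CONFIG)
```

Here extr2 appears in both sections, so with the default method set to "docker" it stays Docker-managed and is dropped from the VM-image map.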
  • Algorithm / Logic

    • Assumptions:
  1. The module needs to support managing the extractors both using VM images and using Docker containers at the same time;
  2. A VM image is created and used to start "Docker machines" or "Docker VMs";
  3. Dockerized extractors run only on the Docker machines; the extractors managed by VM images do not run on the Docker machines.
  • Additional data structures to add support for Docker:
  1. a map to look up whether an extractor is managed by Docker or a VM image;
  2. a list/map of Docker machines, used to start/suspend/resume Docker machines;
  3. a map of extractors to Docker images, used when adding extractor instances;
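A minimal sketch of these three structures (all names and sample values are illustrative, not the module's actual data):

```python
# 1. Management method per extractor type: "docker" or "vm".
extractor_method = {
    "ncsa.ocr": "docker",
    "ncsa.closeups": "vm",
}

# 2. Docker machines and their lifecycle state, keyed by IP;
#    consulted when starting/suspending/resuming Docker VMs.
docker_machines = {
    "192.168.0.11": {"state": "running", "containers": ["ocr-1"]},
    "192.168.0.12": {"state": "suspended", "containers": []},
}

# 3. Docker image per extractor type, used when adding instances.
extractor_docker_image = {
    "ncsa.ocr": "ncsa/clowder-ocr",
}

def is_docker_managed(extractor):
    """Structure 1: look up the management method of an extractor."""
    return extractor_method.get(extractor) == "docker"

def idle_docker_machines():
    """Structure 2: running Docker VMs with no containers are
    candidates for suspension."""
    return [ip for ip, m in docker_machines.items()
            if m["state"] == "running" and not m["containers"]]
```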
    • Scaling up

Get the list of running queues, and iterate through them:

  1. If the above criterion is reached for a given queue, say q1, then use data item 2 above to find the corresponding extractor (e1). Currently this is hardcoded in the extractors, so that queue name == extractor name.
  2. Look up e1 to find the corresponding running VM list, say (vm1, vm2, vm3).
  3. Go through the list one by one. If there is an open slot on a VM – meaning its #vCPUs > loadavg + <cpu_buffer_room> (configurable, e.g., 0.5); for example, vm1 has #vCPUs == 2 and loadavg == 1.2 – then start another instance of e1 on that VM, finish working on this queue, and go back to Step 1 for the next queue. If there is no open slot on vm1, look at the next VM in the list; once an open slot is found and another instance of e1 is started, finish working on this queue and go back to Step 1 for the next queue.
  4. If we go through the entire list and there is no open slot, or the list is empty, then look up e1 to find the corresponding suspended VM list, say (vm4, vm5). If the list is not empty, resume the first VM in the list; if unsuccessful, go to the next VM in the list. After a successful resumption, look up the other extractors running on that VM and mark them so that this scaling logic skips them, since resuming the VM also resumed them. Finish working on this queue and go back to Step 1 for the next queue.
  5. If the above suspended VM list is empty, then we need to start a new VM to get more e1 instances. Look up e1 to find a VM image that contains it, and start a new VM using that image. As in the previous step, look up the other extractors available on the VM and mark them so that this scaling logic skips them, since starting the VM also started them.

At the end of the above iterations, we could verify whether the expected increase in the number of extractors actually occurred, and print the result.
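The scaling-up steps above can be sketched as follows (the data layout, function names, and recorded action tuples are assumptions for illustration; the real module would issue OpenStack and Docker commands instead of appending to an action list):

```python
CPU_BUFFER_ROOM = 0.5  # configurable headroom, as in Step 3 above

def find_open_slot(vms):
    """Return the first VM with an open slot: #vCPUs > loadavg + buffer."""
    for vm in vms:
        if vm["vcpus"] > vm["loadavg"] + CPU_BUFFER_ROOM:
            return vm
    return None

def scale_up(queue, running_vms, suspended_vms, actions):
    """One iteration of the scaling-up steps for a busy queue.

    Queue name == extractor name (currently hardcoded in the extractors).
    `actions` collects the operations the module would perform.
    """
    extractor = queue
    vm = find_open_slot(running_vms.get(extractor, []))
    if vm is not None:                               # Step 3: open slot found
        actions.append(("start_instance", extractor, vm["name"]))
        return
    for vm in suspended_vms.get(extractor, []):      # Step 4: resume a VM
        actions.append(("resume_vm", vm["name"]))
        return
    actions.append(("start_new_vm", extractor))      # Step 5: boot a new VM

actions = []
running = {"ocr": [{"name": "vm1", "vcpus": 2, "loadavg": 1.2}]}
suspended = {"ocr": []}
scale_up("ocr", running, suspended, actions)  # vm1 has a slot: 2 > 1.2 + 0.5
```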

    • Scaling down
    1. Stop idle extractor instances:
      Find idle queues (no data/activity for a configurable period of time). For each such queue, find the running VMs and the number of extractor instances. A user can specify the minimum number of total running instances for an extractor type in the config file. If the number of instances is above that minimum, stop idle instances down to the minimum, leaving the first instance running on each machine.

    2. Suspend idle VMs.

Get the list of IPs of the running VMs. Iterate through them:

If there has been no RabbitMQ activity on a VM for a configurable time period (say, 1 hour), then there is no work for the extractors on it, so we can suspend the VM to free resources for other tasks. However, if suspending the VM would reduce the number of running instances of any extractor on it below the configured minimum for that extractor type, we do not suspend it and leave it running.

Notes:

  1. This logic is suitable for a production environment. In a testing environment, or a system that is not busy, it could suspend many or even all VMs, since there are few or no requests, leading to a slow start: only on the next periodic check (say, every minute) will the module notice that the number of extractors for a queue is 0 and resume VMs. We could make it configurable whether to keep at least one extractor running for each queue – a trade-off between potential waste of system resources and fast processing startup time.
  • Programming Language and Application Type

Continue with the existing use of Python and a stand-alone program.

  • Testing

  • Input
    Use a script to generate high request rates with the OCR and OpenCV extractors to test the scaling-up part. Stop sending requests to test the scaling-down part.
  • Output
    Use the OpenStack CLI / web UI for the VM part, and the RabbitMQ web UI plus SSH into the Docker machines for the extractor part, to verify that the Docker containers are started/stopped and the Docker VMs are started/resumed/suspended as expected.
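A possible shape for the load-generation script (queue names and payload fields are illustrative, not the real Clowder message format; publishing requires a live broker and a RabbitMQ client such as pika):

```python
import itertools
import json

def make_requests(extractors, n):
    """Build n test payloads, round-robin across the extractor queues."""
    queues = itertools.cycle(extractors)
    return [(next(queues), json.dumps({"id": i, "file": "test-%d.png" % i}))
            for i in range(n)]

reqs = make_requests(["ncsa.ocr", "ncsa.cv.closeups"], 4)
# Publishing each (queue, payload) pair would look roughly like:
#   channel.basic_publish(exchange="", routing_key=queue, body=payload)
```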

 
