It would be good if we could decouple the executors from the engine and move towards the architecture used in Clowder.
The thought is to use RabbitMQ with a queue for each executor type. We could use a specific executor type (java, commandline, etc.) as the routing key, and have an exchange for each DataWolf instance. This would allow us to have specific executors dedicated to specific projects.
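As a broker-free sketch, the proposed topology can be modeled in memory: one direct exchange per DataWolf instance, one queue per executor type, bound with the executor type as routing key. All names here (`datawolf.jobs`, the queue variables, the message fields) are illustrative assumptions, not an existing DataWolf schema:

```python
from collections import defaultdict

class DirectExchange:
    """Minimal in-memory model of a RabbitMQ direct exchange."""

    def __init__(self, name):
        self.name = name
        self.bindings = defaultdict(list)  # routing key -> bound queues

    def bind(self, queue, routing_key):
        self.bindings[routing_key].append(queue)

    def publish(self, routing_key, message):
        # A direct exchange delivers only to queues bound with this exact key.
        for queue in self.bindings[routing_key]:
            queue.append(message)

# One exchange for this DataWolf instance; one queue per executor type.
exchange = DirectExchange("datawolf.jobs")
java_queue, commandline_queue = [], []
exchange.bind(java_queue, "java")
exchange.bind(commandline_queue, "commandline")

# The engine publishes a job using the executor type as the routing key.
exchange.publish("java", {"step": "word-count", "executor": "java"})
print(java_queue)         # only the java executor's queue receives the job
print(commandline_queue)  # remains empty
```

A project-specific executor would simply bind its own queue with its own routing key to the same exchange.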
Executors will simply connect to the queue, pick up jobs, and process them. When finished, they will send back a message listing all the data generated and where that data is located.
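One possible shape for that finished-job message is sketched below; the field names (`job_id`, `outputs`, `location`) are hypothetical, not a defined DataWolf format:

```python
import json

# Hypothetical message an executor publishes when a job completes,
# listing the generated data and where it can be retrieved.
result = {
    "job_id": "1234",
    "status": "finished",
    "outputs": [
        {"name": "output.csv", "location": "hdfs://datasets/1234/output.csv"},
    ],
}

payload = json.dumps(result)       # serialized onto the return queue
received = json.loads(payload)     # what the engine would deserialize
print(received["status"])
print(received["outputs"][0]["location"])
```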
The engine will watch the queue for return messages and launch new jobs based on the data that has become available.
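The engine side could be sketched as: each return message marks some outputs as available, and any workflow step whose inputs are now all present gets launched. The step structure and names below are assumptions for illustration:

```python
def ready_steps(steps, available):
    """Return steps that have not been launched and whose inputs are all available."""
    return [s for s in steps
            if not s["launched"] and set(s["inputs"]) <= available]

available = set()
steps = [
    {"name": "tokenize", "inputs": {"raw.txt"}, "launched": False},
    {"name": "count", "inputs": {"tokens.json"}, "launched": False},
]

# Simulate handling one return message: an executor reports "raw.txt" is ready.
available.add("raw.txt")
launchable = ready_steps(steps, available)
print([s["name"] for s in launchable])  # only "tokenize" has all its inputs
```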
- relates to: WOLF-152 Docker executor (In Progress)