Pattern: Creating Task-Level Workers at Runtime

Overview

Questions on general architecture and app design are appearing more and more frequently on StackOverflow and Quora. We came across one the other day on approaches to queuing and scheduling workers.


A good worker pattern, based on what we've seen and done, is to chunk the work (and create task-level workers) at runtime rather than beforehand. In other words, use a master/slave setup that is event- or time-driven and do the majority of the work using concurrent processes. When the master comes off the queue, it can slice the work/data space into granular tasks/chunks and queue up slave worker jobs, each handling a collection of discrete tasks and data. (If the work or data space is especially large or complicated, the slicing itself can be distributed across a set of master tasks.)
The reason for waiting until runtime to create the slave workers is that viewing jobs in the schedule is much easier when done at a coarse-grained level: the scheduled jobs correspond to the units you're tracking (such as webpages, user profiles, or blocks of data from users, sensors, or other streaming input devices). At this level, you can more easily monitor and inspect the collection of scheduled jobs because they map to your key metrics and inputs. Notifications have a better signal-to-noise ratio and status indicators are much more meaningful.
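
To make this concrete, here's a minimal sketch of a master worker that slices a list of pending work items into chunks at runtime and queues one slave job per chunk. The `enqueue_slave_job` function and the chunk size are placeholders for whatever worker or queue API and sizing you actually use (IronWorker, a message queue, etc.), not a prescribed interface.

```python
# Minimal sketch of a master worker that slices the work space at runtime
# and queues one slave worker job per chunk. enqueue_slave_job() and the
# chunk size are placeholders for whatever worker/queue API and sizing you use.

CHUNK_SIZE = 100  # illustrative; see the batching discussion below


def enqueue_slave_job(name, payload):
    """Placeholder: hand the payload to your worker or queue service."""
    raise NotImplementedError


def run_master(fetch_pending_ids):
    # Pull the full list of pending work items (user IDs, page URLs, etc.).
    ids = fetch_pending_ids()

    # Slice the work space into coarse chunks and queue one slave per chunk.
    for i in range(0, len(ids), CHUNK_SIZE):
        chunk = ids[i:i + CHUNK_SIZE]
        enqueue_slave_job("process_chunk", {"ids": chunk})
```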

Related Reading: Top 10 uses of IronWorker

Fine-Grained Data | Coarse-Grained Workers

The slicing of the work often happens at a fine-grained level, meaning that atomic pieces of data are created or made available for your task-level routines to do their work (such as checking whether a Klout score needs to be updated, or adding explicit likes or clickstream data to a user's existing preference information).

Instead of creating a worker for each data element, however, it's better to have each worker work on a reasonable collection of tasks or data items. We've found that having each worker process multiple tasks (20-1000 data items for example) provides a good balance between:

  • optimizing worker setup (establishing a database connection for example)
  • providing good introspection into the jobs
  • making retries and exception handling more manageable
The number per worker will depend on the type and length of the task. The idea is to have workers execute in minutes as opposed to seconds or hours, so that you have greater visibility into worker performance and so that retries only affect a limited portion of the work space.
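As a rough illustration of that sizing, here is one way to derive an items-per-worker count from an estimated per-item time and a target run time of a few minutes; the numbers and bounds are illustrative, not prescriptive.

```python
# One way to pick an items-per-worker count: aim for workers that run for
# minutes, not seconds or hours. The per-item estimate and bounds are illustrative.

def items_per_worker(seconds_per_item, target_minutes=5, lower=20, upper=1000):
    n = int((target_minutes * 60) / seconds_per_item)
    return max(lower, min(n, upper))

# e.g. ~0.5s per item gives 600 items per worker
print(items_per_worker(0.5))
```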
Using S3 to hold the large data blocks and a NoSQL solution (especially database-as-a-service offerings like MongoHQ or MongoLabs) to hold the data slices makes it easy to track and manage the data slicing and the task-level work.
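
Here's a minimal sketch of that arrangement, assuming boto3 for S3 and pymongo for the NoSQL side; the bucket, connection string, database, and collection names are made up for the example.

```python
# Sketch: large data blocks go to S3, slice metadata goes to a MongoDB
# collection so each slave worker can look up and update its slice.
# Bucket, connection string, database, and collection names are made up.
import json

import boto3
from pymongo import MongoClient

s3 = boto3.client("s3")
slices = MongoClient("mongodb://localhost:27017")["workerdb"]["slices"]


def register_slice(slice_id, records):
    key = "slices/%s.json" % slice_id
    # The bulky data lives in S3...
    s3.put_object(Bucket="my-worker-data", Key=key, Body=json.dumps(records))
    # ...while a small, queryable slice record lives in Mongo for tracking.
    slices.insert_one({"_id": slice_id, "s3_key": key,
                       "count": len(records), "status": "queued"})
```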

Worker Independence

A key part of creating any set of worker processes is doing so in a way that makes them independent of your application environment. This means writing each worker so it can run in a separate app environment, and using callbacks, database flags, and other asynchronous approaches to communicate between the application and the workers.

Doing it this way gives you much greater agility, meaning workers can be modified or new ones created without worrying about application dependencies. This approach also allows the work to run asynchronously and be distributed over an elastic worker system, which is really where you need to go if you have even a modest amount of work.
Just as applications are being written to run on elastic (and disposable) infrastructure, workers also need to be written to run in elastic environments, ones that will increasingly be separate from your application environment.
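
As a sketch of what such an independent worker might look like, the following slave worker receives a plain payload, does its batch of work, and reports back only through a database status flag and a callback URL. The payload shape, the collection, and the `process_item` helper are illustrative assumptions, not a prescribed interface.

```python
# Sketch of an application-independent slave worker: it receives a plain
# payload and communicates back only through a database flag and a callback
# URL. The payload shape, collection, and process_item() are illustrative.
import requests
from pymongo import MongoClient

slices = MongoClient("mongodb://localhost:27017")["workerdb"]["slices"]


def process_item(item_id):
    """Placeholder for the actual per-item work."""
    pass


def run_slave(payload):
    slices.update_one({"_id": payload["slice_id"]},
                      {"$set": {"status": "running"}})
    for item_id in payload["ids"]:
        process_item(item_id)

    # Report completion asynchronously: flip the database flag and hit the callback.
    slices.update_one({"_id": payload["slice_id"]},
                      {"$set": {"status": "done"}})
    requests.post(payload["callback_url"],
                  json={"slice_id": payload["slice_id"], "status": "done"})
```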
Update: We're seeing the need for this pattern more and more, which should explain the anti-pattern (and corresponding blog post) that we came up with.