Worker Dynos vs. Worker Add-ons
Heroku is a cloud platform as a service (PaaS). It lets you deploy and manage applications without worrying about the underlying infrastructure. Heroku is a polyglot platform that supports most standard languages and frameworks, such as Python, Java, Node.js, and Ruby, and it uses the Git version control system as the primary way of deploying applications.
Applications can be scaled quickly, both vertically and horizontally. Continuous delivery can be achieved with Heroku Flow, which lets you set up deployment pipelines and separate environments.
Heroku apps run in virtualized Linux containers called dynos. Heroku allows you to integrate third-party or custom services into your applications through the concept of add-ons.
Add-ons are typically used to provide database services, queue services, background job services, and more. If you are looking for a completely managed worker service or queue service add-on, check out Iron's hosted solutions, available as Heroku add-ons. Now, let's dive into the differences between Heroku Worker Dynos and add-ons.
Understanding Heroku Dynos
An application in Heroku is a collection of source code, the frameworks it requires, a list of dependencies to be installed, and a configuration file called a 'Procfile' that specifies the processes to run to make the application live.
Such applications are deployed in virtualized Linux containers called dynos. These containers are the basic building blocks of Heroku and allow it to scale horizontally and vertically. Heroku provides three kinds of dynos: web dynos, worker dynos, and one-off dynos.
Web dynos are meant for executing web processes that accept HTTP connections. These processes are created from the entry in the Procfile marked as web.
Worker dynos can be used for any process that does not need to accept HTTP requests. They are normally used for background jobs, queueing systems, timed jobs, and the like. The same application can run multiple kinds of worker dynos, as specified in the Procfile.
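As a rough illustration, here is a minimal Procfile for a hypothetical Python app that serves web traffic with Gunicorn and runs background jobs with Celery; the module names (app, tasks) are placeholders, not something prescribed by Heroku.

```
web: gunicorn app:app
worker: celery -A tasks worker --loglevel=info
```

Scaling each process type independently (for example, heroku ps:scale web=2 worker=1) is what gives you separate web and worker dynos.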
One-off dynos are usually used for one-time administration jobs that are performed manually. They are typically started from the Heroku CLI to run tasks like database migrations, console sessions, and so on.
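For instance, a one-off dyno can be started with heroku run; the migration command below is just an illustrative example for a Django-style app, not something every app needs.

```bash
# Run an ad-hoc admin task in a one-off dyno (Django migration as an example)
heroku run python manage.py migrate

# Or open an interactive shell in a one-off dyno
heroku run bash
```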
Since our topic of interest is worker dynos, let us now focus on those.
Worker Dyno
Worker dynos are typically used to execute background processes that would otherwise hurt the response time of the service. Typical examples of background jobs are a process that transfers or fetches data from a different service, or a process with heavy resource requirements like image or video processing.
Consider an image processing task that is triggered by a user action, such as a file upload. The naive implementation is to show the user a busy icon until the file is processed and only return the response for the upload once processing is done. This approach does not scale, and it can lead to a bad user experience as well as high infrastructure requirements.
The scalable approach is to push a message to a queue when the user triggers the action and have a background job handle the processing. The response to the user's input is then just a submission-success status, and once the processing is over, a notification can be sent to let the user know.
Implementing such an architecture requires a queueing system and a background processing system. Worker dynos are meant for this: they offer a scalable environment where queueing systems and background jobs can be built. But the responsibility of actually implementing these systems still rests with the application developer.
The developer has to use the tools available in their preferred language or framework to implement this. For example, in Python you can do this with RabbitMQ and the Celery framework, and in Java with asynchronous workers and RabbitMQ.
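As a minimal sketch of the Python route, assuming a RabbitMQ broker URL exposed through a config var (the RABBITMQ_URL name and the task body are placeholders):

```python
# tasks.py -- minimal Celery sketch; broker URL and task logic are placeholders
import os

from celery import Celery

app = Celery("tasks", broker=os.environ.get("RABBITMQ_URL", "amqp://localhost//"))

@app.task
def process_image(image_url):
    # Download and process the image here; this is only a stand-in.
    return f"processed {image_url}"
```

The web process enqueues work with process_image.delay("https://example.com/photo.jpg"), while a worker dyno runs celery -A tasks worker to consume it, as in the Procfile sketch shown earlier.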
Now that we understand the purpose of worker dynos, let's move on to add-ons.
Iron.io Serverless Tools
Speak to us to learn how IronWorker and IronMQ are essential products for your application to become cloud elastic.
Understanding Heroku Add-ons
Add-ons are Heroku's way of allowing application developers to extend the functionality of their apps using third-party or custom services. Dynos in Heroku are meant to be horizontally scalable and can be restarted or reset at any time based on the application's load or health metrics.
Hence they are not meant to store any state and should be used only for processing. This means there must be other mechanisms for activities like data storage, logging, user activity capture, building content recommendations, and so on. This is where add-ons help.
Add-ons can be provisioned from the Heroku marketplace. All the common use cases, such as relational databases, non-relational databases, error monitoring, and mail-sending services, are covered: MySQL, MongoDB, and many others are available as add-ons. You can check out the complete list of currently supported Heroku add-ons here.
Since worker dynos can run any framework and language, it is logically possible to implement any of the above services yourself by combining them with a persistent storage service like S3. But in that case, the effort of implementing and maintaining that service falls on the application team.
Add-ons differ from worker dynos in that they are hosted solutions built for a specific purpose, and developers can use them as a black box without worrying about what goes on under the hood.
For example, we discussed the possibility of implementing a background queue and processing system using RabbitMQ and Celery for a Python-based application. Imagine having access to an add-on that accomplishes the same use case without you having to worry about RabbitMQ or Celery at all.
The only action needed from the developer's side is to integrate the add-on using the Heroku CLI and manage the configuration variables. Heroku also provides options to implement your own add-on in case you have in-house functionality that needs to be integrated into your applications.
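In practice, provisioning an add-on and inspecting the config vars it sets looks roughly like the following; the add-on slug and plan name here are examples, so check the marketplace listing for the exact values:

```bash
# Provision an add-on for the current app (slug and plan are examples)
heroku addons:create iron_mq:developer

# Add-ons expose their credentials to the app as config vars
heroku config
```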
IronWorker and IronMQ are two such add-ons that can be integrated easily into any Heroku app without worrying about the infrastructure or implementation details. IronWorker is a drop-in replacement for any background-process implementation, and IronMQ can be used for any queueing requirement. You can check out Iron's offerings here.
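As a rough, illustrative sketch, pushing a job payload onto an IronMQ queue from Python might look like the snippet below; the iron_mq client's exact method names and credential handling may differ between versions, and the queue name and credentials shown are placeholders.

```python
# Illustrative IronMQ sketch; credentials would normally come from the
# config vars set by the add-on (exact variable names may differ).
from iron_mq import IronMQ

mq = IronMQ(project_id="YOUR_PROJECT_ID", token="YOUR_TOKEN")
queue = mq.queue("image_jobs")  # hypothetical queue name

queue.post("https://example.com/photo.jpg")  # enqueue a job payload
print(queue.get())                           # a worker would reserve and process it
```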
Differences between Worker Dynos and Add-ons
- Worker dynos provide an environment in which you set up a background job processing service yourself. IronWorker goes one step further and provides a full-fledged, completely managed worker-as-a-service: you only have to worry about the core logic of your jobs.
- IronWorker pricing starts at $19 per month for the smallest plan that supports auto-scaling. Heroku dynos start at $25 per month for the smallest plan with horizontal scaling.
- With worker dynos, you need to decide on the language or framework for implementing your worker logic; for example, if you go with Celery and Python, you will need to write your logic in Python. With IronWorker, you can use virtually any language and package your worker as a container.
Conclusion
Both worker dynos and IronWorker are good choices for implementing background jobs in Heroku. While worker dynos give you an environment in which to set up the background job service yourself, IronWorker provides everything as a service that can be integrated straight away. You can check out IronWorker here.
Unlock the Cloud with Iron.io
Find out how IronWorker and IronMQ can help your application unlock the cloud with fanatical customer support, reliable performance, and competitive pricing.