Migrating from Sidekiq to Iron
I've used Sidekiq for years. It's an absolutely fantastic project and Mike Perham is a shining example of what it means to be a maintainer. I've sent him numerous questions in the past about our installations (I've been both a Pro and an Enterprise user) and he's been extremely quick to respond and we always got to the bottom of any issues.
Years went by and I ended up in the Docker community. I was involved in very large projects where basically all code written and deployed by different departments ended up in CVE audited containers and access dished out by either internal methods or Docker UCP. It was a completely different way of thinking about development at scale in a large organization. I was hooked.
How did organizations handle distribution and processing at scale prior to recent developments in containerized workflows? I'd put my vote in for "horribly". I worked with a Fortune 100 company at one point that distributed a physical CD called "The Golden Image". It supposedly contained a completely secure distro that all internal and external code should run on top of. That was definitely not the case. The web server on the image was almost a year behind the current point release, the distro itself was far behind on security updates, and I was able to fire up Tux Racer and play for a bit before I remembered I had real work to do.
I ended up having Docker containers I needed to run at scale, and I came across Iron. Its Worker product allowed me to upload my image and fire-and-forget invocations in my code. It was like a containerized Sidekiq. There were a few big points that sold me:
Table of Contents
- Language agnostic
- Hosted, Hybrid or On-Premise
- How do I turn my Sidekiq jobs into IronWorker jobs?
Related Reading: IronWorker vs. Sidekiq
Achieve Cloud Elasticity with Iron
Speak to us to find out how you can achieve cloud elasticity with a serverless message queue and background task solution, with free hands-on support.
Why Iron instead of Sidekiq?
When we enabled autoscaling at Iron, we didn't have to worry about scaling up or down. It was done for us. For example, if we were a sports network that experienced large spikes during game time and had to deal with the data in real-time, Iron scaled up resources behind the scenes and absorbed the load. We were also able to set lower and upper thresholds if we wanted to keep resources reined in. It was ideal.
Language agnostic
We were a large organization and didn't write software in one language. We rolled with the "best tool for the job" paradigm and had departments writing Perl, Python, Ruby, Golang, Rust, Erlang, Haskell, C++, .NET... you name it. Each department needed to scale out asynchronous work and ended up using a hodge-podge of services. Once everything was containerized, Iron was able to run all of it for us, which dramatically reduced complexity.
Hosted, Hybrid or On-Premise
Hosted Worker worked great, but we ended up scaling up way higher than we ever thought we would have, and it started to cost a bit more than we anticipated. We were partners with a very popular public cloud provider and had a great deal on infrastructure. Luckily, Iron has a Hybrid deployment integration that allowed us to run the actual jobs on our own infrastructure while the scheduling, authentication, and autoscaling logic resided on their infrastructure. This allowed us to save quite a bit on infrastructure spend.
We didn't dig into the full On-Premise deployment, but it was good peace of mind knowing that an on-premise solution existed. If we needed a HIPAA-compliant installation or something that could stand up on GovCloud, we were good to go.
Iron.io Serverless Tools
Speak to us to learn how IronWorker and IronMQ are essential products for your application to become cloud elastic.
How do I turn my Sidekiq jobs into IronWorker jobs?
Finding the code that does the work
In Sidekiq I'd usually have a /workers directory (or /jobs) containing all the background jobs that needed to run. If a project got large, this /workers directory would often branch into sub-directories. I dug into some old code and found the following job. It represents an inbound email that our application needed to process behind the scenes. Most of the logic hides behind the instance method `process` on a given Message, so the job is aptly named MessageProcessor.
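A minimal sketch of what that job looked like — the class and field names here are my assumptions for illustration, not the original code:

```ruby
# Sketch of the MessageProcessor job described above (names assumed).
# In the real app this class would also `include Sidekiq::Worker`
# and be enqueued with MessageProcessor.perform_async(message.id).
class MessageProcessor
  # Sidekiq calls #perform with the arguments given to perform_async.
  def perform(message_id)
    message = Message.find(message_id)
    message.process # all of the real logic hides behind Message#process
  end
end
```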
"Dockerizing" your worker code
Now, it's not as easy as adding one line to your Gemfile. Well, you still need to do that:
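Assuming you're using Iron's Ruby client, that one line is the `iron_worker_ng` gem:

```ruby
# Gemfile
gem 'iron_worker_ng'
```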
The next few steps are where things get different, but if you're familiar with Docker, this becomes second nature. You need to create a self-contained Docker image with your code and the dependencies needed to run it. For a full walkthrough, I recommend reading through this.
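As a rough sketch (the base image, file names, and entrypoint here are assumptions — the walkthrough linked above is the authoritative version), a Dockerfile for a Ruby worker might look like:

```dockerfile
# Sketch of a self-contained worker image (names assumed)
FROM ruby:2.7-slim
WORKDIR /worker

# Install gems first so Docker can cache this layer
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy the worker code itself
COPY . .

# The image runs itself: the entrypoint is the worker script
ENTRYPOINT ["ruby", "message_processor.rb"]
```

From there it's the usual `docker build` and `docker push`, followed by registering the image with IronWorker (the walkthrough covers the exact CLI steps).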
There are more steps, but it's worth it. You end up with an image that can run itself, and you also get access to the IronWorker API, which gives you plenty of ways to pull runtime details and metrics directly into your application.
Here's a link to the Ruby implementation that illustrates this a lot better than I can.
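To give a flavor of the mapping: instead of calling `perform_async`, you build a payload and create a task. The client calls shown in comments below are from the `iron_worker_ng` gem as I remember it — treat the worker name and payload fields as assumptions:

```ruby
require 'json'

# Sidekiq:    MessageProcessor.perform_async(42)
# IronWorker: create a task for the uploaded 'message_processor' image,
#             passing the same arguments as a JSON payload.
payload = { 'message_id' => 42 }

# With credentials configured (IRON_TOKEN / IRON_PROJECT_ID), the
# iron_worker_ng client call looks roughly like:
#
#   client = IronWorkerNG::Client.new
#   client.tasks.create('message_processor', payload)
#
# The worker then reads this payload back out of its params at runtime.
puts JSON.generate(payload)
```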
Unlock the Cloud with Iron.io
Find out how IronWorker and IronMQ can help your application unlock the cloud with fanatical customer support, reliable performance, and competitive pricing.