A Serverless Message Queue Without the Glue

More and more technologies get involved as systems grow, and it’s sometimes hard to keep track of what’s doing what. Caching layers, message queues, serverless functions, tracing frameworks… the list goes on.  Once you start sprinkling in public cloud services, you may find yourself developing your way into vendor lock-in.  All of a sudden, you’re dealing with one cloud, tons of services, and having to glue everything together just to make those services talk to each other.  One of Iron’s primary goals is to make life easier for developers, and IronMQ’s little-known “Push Queue” feature can save you from having to write that glue.

What are Push Queues?

IronMQ has a built-in feature called Push Queues which, when enabled, fires off an event any time a message gets pushed onto the queue. This comes in extremely handy when you immediately want to “do something” (or many things) with that message. With traditional message queues, you’d usually need to write another process that polls your queues for new messages at a given interval. IronMQ’s push queues can instead fire off events to several types of endpoints, each extremely helpful in its own way.

What types of events can be triggered?

HTTP
When a message gets put onto your push queue, IronMQ can make a POST request (with the message in the request body) to any URL of your choice. This is extremely handy when you want to notify other systems that some sort of event just happened or kick off another process.
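As a rough sketch of the receiving side (the endpoint path and payload format are whatever you choose; IronMQ is assumed here to deliver the message body as the raw POST body), the handler can be as small as this:

<?php
// uploaded.php -- hypothetical endpoint that receives IronMQ push deliveries
$raw = file_get_contents('php://input'); // the pushed message body
$message = json_decode($raw, true);      // assuming you queued JSON messages

if ($message === null) {
    http_response_code(400); // a non-200 response tells IronMQ the push failed, so it will retry
    exit;
}

// ... notify other systems, kick off another process, etc. ...

http_response_code(200); // 200 acknowledges the push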

MQ
Inception! You can have the delivery of a message populate another IronMQ queue. This is helpful if you want to tie multiple queues together or create a dead letter queue, for example.

Worker
IronMQ can connect directly to IronWorker and pass its message as the payload to one of your jobs. How cool is that!?  To show just how cool that actually is, we’ll run through a real-life scenario.

MQ & Worker Example

Let’s say you have a time-sensitive nightly job that processes uploaded CSV files.  It needs to process all of the files uploaded during that day and finish within a set amount of time.   As your system grows and there are more CSV files to process, your nightly process starts running behind schedule.

You realize that a lot of your nightly worker’s time is spent massaging each CSV file into the correct format.  It would make sense to split the process into two distinct stages: formatting and processing.  When a CSV file is received, you send a message to your push queue, which in turn kicks off a “formatting” worker job to pre-process the CSV file into the correct format, as in the sketch below. Your nightly “processing” worker job can then fly through the CSV files because it no longer needs to fix any formatting issues.
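Here’s what that sketch might look like on the upload side, using the iron_mq PHP client (queue and file names are made up, and the class name/namespace depends on your client version; the push queue itself is configured to trigger the “formatting” worker, as shown later in this post):

<?php
require 'vendor/autoload.php'; // iron-io/iron_mq installed via Composer

// older clients expose a global IronMQ class; newer ones may namespace it
$ironmq = new IronMQ(array(
    'token'      => 'YOUR_TOKEN',
    'project_id' => 'YOUR_PROJECT_ID',
));

// When a CSV file is uploaded, drop a message onto the push queue.
// The queue's Worker subscriber then kicks off the "formatting" job immediately.
$ironmq->postMessage('csv_uploads', json_encode(array(
    'file'        => 's3://my-bucket/uploads/report.csv', // hypothetical file location
    'uploaded_at' => time(),
)));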

The beauty here is that you can continue to add more push events to the queue.  When a file is uploaded, maybe you also need to ping another worker that handles OCR, or post an update to an external HTTP endpoint letting it know exactly which file was uploaded.  Without a push queue, you’d be writing a lot of custom code to handle these requests, retries, errors, and so on.  IronMQ’s push queues take care of all of this for you.

How can I configure a Push Queue?

Retries
You can configure your queue to allow for a custom number of retries, a custom delay between retries, and even another queue to store failed push attempts. For example, with an HTTP event, IronMQ will retry a push (3 times by default) every time it receives a non-200 response code.
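For reference, the retry settings live alongside the subscribers in the queue’s push configuration. The field names below follow the IronMQ v3 docs as best we can reproduce them here, so verify them against the current documentation:

{
  "queue": {
    "type": "multicast",
    "push": {
      "retries": 5,
      "retries_delay": 60,
      "error_queue": "failed_pushes",
      "subscribers": [
        { "name": "notify_hook", "url": "http://example.com/csv_uploaded" }
      ]
    }
  }
}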

Timeouts
If your endpoint doesn’t respond within a certain period of time (10 seconds by default), IronMQ will chalk that up as a failed attempt and retry.

Unicast or Multicast?
You can even fire off multiple events from one queue: a multicast queue pushes each message to every subscriber, while a unicast queue pushes each message to just one of them. If you need to trigger an HTTP endpoint and also fire off a Worker job, that’s not a problem.

How do I create a Push Queue?

Creating one is straightforward.  Here’s an example cURL request that creates a multicast Push Queue with an HTTP endpoint as well as a Worker endpoint.
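Exact hostnames, project IDs, and tokens vary per account, and the IronWorker webhook URL below is only a placeholder, so treat this as a sketch against the v3 API and copy the real values from your dashboard and the current docs:

curl -X PUT \
  -H "Content-Type: application/json" \
  -H "Authorization: OAuth YOUR_TOKEN" \
  -d '{
        "queue": {
          "type": "multicast",
          "push": {
            "subscribers": [
              { "name": "notify_hook", "url": "http://example.com/csv_uploaded" },
              { "name": "formatter",   "url": "https://worker-aws-us-east-1.iron.io/2/projects/YOUR_PROJECT_ID/tasks/webhook?code_name=csv_formatter&oauth=YOUR_WORKER_TOKEN" }
            ]
          }
        }
      }' \
  "https://mq-aws-us-east-1-1.iron.io/3/projects/YOUR_PROJECT_ID/queues/csv_uploads"

The two subscribers are what make this queue multicast: every message pushed onto csv_uploads is delivered to both the HTTP endpoint and the Worker job.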

IronMQ has client libraries available in most languages, so you can easily create one programmatically as well.  Here’s an example in PHP:
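This is a rough sketch with the iron_mq PHP client; method names, class namespaces, and option keys can differ between client and API versions, so double-check the client’s README before using it:

<?php
require 'vendor/autoload.php'; // iron-io/iron_mq installed via Composer

$ironmq = new IronMQ(array(
    'token'      => 'YOUR_TOKEN',
    'project_id' => 'YOUR_PROJECT_ID',
));

// Create (or update) a multicast Push Queue with an HTTP and a Worker subscriber.
// The Worker webhook URL is a placeholder -- copy the real one from your dashboard.
$ironmq->createQueue('csv_uploads', array(
    'type' => 'multicast',
    'push' => array(
        'subscribers' => array(
            array('name' => 'notify_hook', 'url' => 'http://example.com/csv_uploaded'),
            array('name' => 'formatter',   'url' => 'https://worker-aws-us-east-1.iron.io/2/projects/YOUR_PROJECT_ID/tasks/webhook?code_name=csv_formatter&oauth=YOUR_WORKER_TOKEN'),
        ),
    ),
));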

Conclusion

With one IronMQ Push Queue, you can make a lot happen. If you were to try to replicate a multicast Push Queue on a traditional message queue, for example, you’d end up writing a lot of custom code to glue everything together. You’d also have to deal with scaling your infrastructure as your message queue needs grew. With IronMQ, you can spend more time on your application’s business logic and less time on glue and infrastructure. For more detailed information about Push Queues, visit the IronMQ documentation.

If you’re interested in knowing more about IronMQ, or want to chat about how we may be able to help, call us anytime at 888-501-4766 or email at support@iron.io.



Webhooks the Right Way™

If you’re a developer, dealing with webhooks is a part of your life. Nowadays almost every subscription service allows for these user-defined callbacks.  For example, when a Lead is added to Salesforce, you may want a task that runs in the background to gather more information about the company they work for.  Maybe you want to receive a request from Stripe when a customer’s payment fails so you can send them dunning emails?  You get the drift.

The most common way to deal with webhooks is to add an endpoint to your application that handles the request and response. There are some benefits to this: no external dependencies, for example, since all your code lives in one place.  However, the cons usually outweigh the pros.

Common problems handling Webhooks

Application downtime

If your application is down, or in maintenance mode, you won’t be able to accept webhooks.  Most external services have retries built in, but many don’t.  You’d need to be OK with missing data from those services.

Request queuing

What happens if you have a ton of webhooks from a bunch of different services all coming in at once?  Your application, reverse proxy, etc. will probably end up queuing those requests alongside other customer requests.  If your application is customer facing, this could result in a degraded user experience or even full-blown timeouts.

Thundering herds and cache stampedes

Even if you’re able to process all of the webhooks coming in at once, your system is going to feel the effects one way or another.  This could result in unwanted resource spikes (CPU/MEM/IO).  Unless you’re set up to autoscale, bad things could happen to your infrastructure.

 

At Iron, many of our customers get around these issues by using IronMQ and IronWorker in conjunction.  Since IronMQ is HTTP-based, highly available, and built to handle thousands of requests a second, it’s a perfect candidate for receiving webhooks.  One of the great things about IronMQ is that it supports push queues: when a message is received, it can push its payload to an external HTTP endpoint, to another queue, or even to IronWorker.
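As a sketch, this is all it takes to land a webhook payload on a queue; in this example a thin relay wraps the provider’s JSON in IronMQ’s standard message envelope, and the host, project ID, token, and queue name are all placeholders:

curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: OAuth YOUR_TOKEN" \
  -d '{ "messages": [ { "body": "{\"event\":\"invoice.payment_failed\",\"customer\":\"cus_123\"}" } ] }' \
  "https://mq-aws-us-east-1-1.iron.io/3/projects/YOUR_PROJECT_ID/queues/stripe_webhooks/messages"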

IronWorker is a container-based, enterprise-ready background job processing system that can autoscale up and down transparently.  We have customers processing hundreds of jobs concurrently one minute, and hundreds of thousands the next.

The beauty of the IronMQ and IronWorker integration is that IronMQ can push its payloads directly to IronWorker.  Your work is then picked up and worked on immediately (or at a specific date and time if required).  You can have a suite of different workers firing off for different types of webhooks, handling the whole process transparently.  This is great for a number of reasons.
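A minimal worker sketch in PHP might look like the following. It assumes the Docker-based IronWorker runtime, where the path of the payload file is exposed through the PAYLOAD_FILE environment variable (older runtimes pass a -payload flag instead), and the event names are just examples:

<?php
// Hypothetical webhook-handling worker. IronMQ pushed the webhook payload to
// IronWorker, and the runtime wrote it to the file named by PAYLOAD_FILE.
$payloadFile = getenv('PAYLOAD_FILE');
$payload = json_decode(file_get_contents($payloadFile), true);

$event = isset($payload['event']) ? $payload['event'] : '';

switch ($event) {
    case 'invoice.payment_failed':
        // queue up a dunning email, flag the account, etc.
        break;
    case 'lead.created':
        // enrich the lead with company information
        break;
    default:
        // log and ignore events we don't care about
        break;
}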

Handling Webhooks the Right Way

Happy application

Now your application is never involved in the process of handling webhooks.  It all happens outside of your normal application lifecycle, so your application servers never have to deal with the kind of excessive load that could lead to infrastructure issues.

Happy users

All the work you need to do to process webhooks now happens in the background and on hardware that your users aren’t interacting with.  This ensures that processing your webhooks won’t affect your user experience.

Using IronMQ and IronWorker to handle incoming Webhooks

This is a pattern that our customers are pretty happy with, and we’re constantly improving both IronMQ and IronWorker to handle their additional needs. For example, programmatic validation of external API signatures and the ability to respond with custom response codes are both on our list.  That said, similar to microservices, this level of service abstraction can introduce its own complexities; dependency and access management come to mind.  We’ve had long conversations about these topics with our customers, and in almost all cases the pros outweigh the cons.  This approach has been a success, and we’re seeing it implemented more and more.

If you have any questions or want to get started with the pattern above, contact us and we’ll be happy to help.



How to Bake Your Own Pi

Baking Your Own Pi

It’s 3/14, and that means it’s International Pi Day! A day when we rejoice over the transcendental number that seems to be everywhere.

So, why am I writing about pi on the Iron.io blog? It turns out pi is the best (read: the absolute best!) way to test out computers. It’s sufficiently random, requires large amounts of memory and CPU, and is easy to check.

I first learned about this aspect of pi while reading the book Here’s Looking at Euclid. There, I also learned that pi beyond 40 digits or so isn’t all that useful. So, why do we know pi into the billions of digits? To quote the many-time world record holder,

“I have no interest as a hobby for extending the known value of pi itself. I have a major interest for improving the performance of computation. [..] Mathematical constants like the square root of 2, e, and gamma are some of the candidates, but pi is the most effective.”

How To Make Pi

I’m on board! I want to make pi myself. If pi is a great way to test any computer, why not use it to test first-class distributed computing solutions like IronWorker?

Humans have known about pi for a while, which is part of what makes it a great computation. We have multiple recipes for baking the same dish, which means it’s easy to check our work by comparing the results of two different algorithms.

So, what goes into pi? How can I cook this dish? Let’s check out a few of the best recipes.