AWS Fargate: Overview and Alternatives

Making software applications behave predictably on different computers is one of the biggest challenges for developers. Software may need to run in multiple environments: development, testing, staging, and production. Differences between these environments can cause unexpected behavior that is very hard to track down.

 

To solve these challenges, more and more developers are using a technology called containers. Each container encapsulates an entire runtime environment. This includes the application itself, as well as the dependencies, libraries, frameworks, and configuration files that it needs to run.

 

Docker popularized containers, and Kubernetes became the standard way to orchestrate them, but they are by no means the only options. In late 2017, Amazon announced that it would expand its container offerings with AWS Fargate. So what is AWS Fargate exactly, and is it worth it for developers?

What is AWS Fargate?

Amazon’s first entry into the container market was Amazon Elastic Container Service (ECS). While many customers saw value in ECS, this solution often required a great deal of tedious manual configuration and oversight. For example, containers that need entirely different CPU and memory resources may still have to be scheduled together onto the same cluster of servers, which you have to size and manage yourself.

 

Performing all this management is the bane of many developers and IT staff. It requires a great deal of resources and effort, and it takes time away from what’s most important: deploying applications.

 

In order to solve these problems, Amazon has introduced AWS Fargate. According to Amazon, Fargate is “a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters.”

 

Fargate separates the task of running containers from the task of managing the underlying infrastructure. Users can simply specify the resources that each container requires, and Fargate will handle the rest. For example, there’s no need to select the right server type, or fiddle with complicated multi-layered access rules.
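
To make that concrete, here’s a minimal boto3 sketch of what “specify the resources and let Fargate handle the rest” looks like. The task family, cluster name, image, and subnet ID are hypothetical placeholders, and a real deployment would typically also need an execution role and log configuration.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Declare what the container needs; no EC2 instance types involved.
ecs.register_task_definition(
    family="web-api",                      # hypothetical task family
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",                  # required for Fargate tasks
    cpu="256",                             # 0.25 vCPU
    memory="512",                          # 512 MB
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",
        "portMappings": [{"containerPort": 80}],
    }],
)

# Run it; Fargate provisions and manages the underlying capacity.
ecs.run_task(
    cluster="demo-cluster",                # hypothetical cluster
    launchType="FARGATE",
    taskDefinition="web-api",
    count=1,
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],   # hypothetical subnet
        "assignPublicIp": "ENABLED",
    }},
)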

AWS Fargate vs. ECS vs. EKS

Besides Fargate, Amazon’s other container orchestration offerings are ECS and EKS (Elastic Container Service for Kubernetes). ECS and EKS are largely for users of Docker and Kubernetes, respectively, who don’t mind doing the “grunt work” of manual configuration.

 

One advantage of Fargate is that you don’t have to commit to it from day one as an AWS customer. Instead, you can begin with ECS or EKS and then migrate your workloads to Fargate if you decide that it’s a better fit.

 

In particular, Fargate is a good choice if you find that you’re leaving a lot of compute power or memory on the table. Unlike ECS and EKS on EC2, where you pay for the cluster instances whether they’re busy or idle, Fargate only charges you for the vCPU and memory that your tasks request while they’re running.

AWS Fargate: Pros and Cons

AWS Fargate is an exciting technology, but does it really live up to the hype? Below, we’ll discuss some of the advantages and disadvantages of using AWS Fargate.

Pro: Less Complexity

These days, tech companies are offering everything “as a service,” taking the complexity out of users’ hands. There’s software as a service (SaaS), infrastructure as a service (IaaS), platform as a service (PaaS), and dozens of other buzzwords.

 

In this vein, Fargate is a Container as a Service (CaaS) technology. You don’t have to worry about where you’ll deploy your containers, or how you’ll manage and scale them. Instead, you can focus on defining the right parameters for your containers (e.g. compute, storage, and networking) for a successful deployment.

Pro: Better Security

Due to their complexity, Amazon ECS and EKS present a few security concerns. Having multiple layers of tasks and containers in your stack means that you need to handle security for each one.

 

With Fargate, however, securing the underlying infrastructure is no longer your concern: AWS patches and maintains the hosts for you, and your focus shifts to securing the containers themselves. You can also combine Fargate with container security vendors such as Twistlock, whose products guard against attacks on applications running in Fargate.

Pro: Lower Costs (Maybe)

If you’re migrating from Amazon ECS or EKS, then Fargate could be a cheaper alternative. This is for two main reasons:

 

  • As mentioned above, Fargate charges you only for the vCPU and memory your containers request while they are actually running. It does not charge you for the total time that the underlying VM instance is up.
  • Fargate does a good job at task scheduling, making it easier to start and stop containers at a specific time.

 

Want some more good news? In January 2019, AWS announced a major Fargate price reduction that cut costs by roughly 35 to 50 percent, depending on configuration.
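
To see what pay-per-use means in practice, here’s a rough, illustrative calculation (the real rates are on AWS’s pricing page and vary by region): at roughly $0.04 per vCPU-hour and $0.004 per GB-hour, a small task sized at 0.25 vCPU and 0.5 GB that runs 10 hours a day costs about 10 × (0.25 × $0.04 + 0.5 × $0.004) ≈ $0.12 per day, and nothing at all for the hours it isn’t running.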

Con: Less Customization

Of course, the downside of Fargate is that you sacrifice customization options for ease of use. As a result, Fargate is not well-suited for users who need greater control over their containers. These users may have special requirements for governance, risk management, and compliance that require fine-tuned control over their IT infrastructure.

Con: Higher Costs (Maybe)

Sure, Fargate is a cost-saving opportunity in the right situation when switching from ECS or EKS. For simpler use cases, however, Fargate may actually end up being more expensive. Amazon charges a higher effective per-hour rate for Fargate capacity than for the equivalent EC2 instances behind ECS or EKS; that premium pays for Amazon taking over the management of your containers’ infrastructure.

 

In addition, running your container workloads in the cloud will likely be more expensive than operating your own infrastructure on-premises. What you gain in ease of use, you lose in flexibility and performance.

Con: Regional Availability

AWS Fargate is slowly rolling out across Amazon’s cloud data centers, but it’s not yet available in all regions. As of January 2019, Fargate is not available for the following Amazon regions:

 

  • São Paulo
  • Paris
  • Stockholm
  • Osaka
  • Beijing
  • Ningxia
  • GovCloud (US-West and US-East)

AWS Fargate Reviews

Even though AWS Fargate is still a new technology, it has earned mostly positive feedback on the tech review platform G2 Crowd. As of this writing, AWS Fargate has received an average score of 4.5 out of 5 stars from 12 G2 Crowd users.

 

Multiple users praise AWS Fargate’s ease of use. One customer says that Fargate “made the job of deploying and maintaining containers very easy.” A second customer praises Fargate’s user interface, calling it “simple and very easy to navigate.”

 

Another reviewer calls AWS Fargate an excellent solution: “I have been working with AWS Fargate for 1 or 2 years, and as a cloud architect it’s a boon for me…  It becomes so easy to scale up and scale down dynamically when you’re using AWS Fargate.”

 

Despite these advantages, AWS Fargate customers do have some complaints:

 

  • One user wishes that the learning curve were easier, writing that “it requires some amount of experience on Amazon EC2 and knowledge of some services.”
  • Multiple users mention that the cost of AWS Fargate is too high for them: “AWS Fargate is costlier when compared with other services”; “the pricing isn’t great and didn’t fit our startup’s needs.”
  • Finally, another user has issues with Amazon’s support: “as it’s a new product introduced in 2017, the quality of support is not so good.”

AWS Fargate Alternatives: AWS Fargate vs. Iron.io

AWS Fargate is a popular solution for container management in the cloud, but it’s far from the only option out there. Mature, feature-rich offerings such as Iron.io provide an alternative to Amazon’s own container management solutions.

 

Iron.io offers IronWorker, a container-based platform with Docker support for performing work on-demand. Just like AWS Fargate, IronWorker takes care of all the messy questions about servers and scaling. All you have to do on your end is develop applications, and then queue up tasks for processing.

 

As of now, Fargate’s container scaling technology is not available for on-premises deployments. On the other hand, one of the main goals of Iron.io is for the platform to run anywhere. Iron.io offers a variety of deployment options to fit every company’s needs:

 

 

  • Shared: Users can run containers on Iron.io’s shared cloud infrastructure.
  • Hybrid: Users benefit from a hybrid cloud and on-premises solution. Containers run on in-house hardware, while Iron.io handles concerns such as scheduling and authentication. This is a smart choice for organizations who already have their own server infrastructure, or who have concerns about data security in the cloud.
  • Dedicated: Users can run containers on Iron.io’s dedicated server hardware, making their applications more consistent and reliable. With Iron.io’s automatic scaling technology, users don’t have to worry about manually increasing or decreasing their usage.
  • On-premises: Finally, users can run IronWorker on their own in-house IT infrastructure. This is the best choice for customers who have strict regulations for compliance and security. Users in finance, healthcare, and government may all need to run containers on-premises.

 

Final Thoughts

Breathless tech reviewers have called AWS Fargate a “game changer” and “the future of serverless computing.” As we’ve discussed in this article, however, it’s certainly not the right choice for every company. It’s true that Fargate often provides extra time and convenience. However, Fargate users will also sacrifice control and incur potentially higher costs.

 

Each organization has a unique situation with different goals and requirements, and only you can say what’s best for your business. Is the task of infrastructure management too onerous for your developers? AWS Fargate may be the right choice. On the other hand, if you need greater control and you’re concerned about costs, you might want to stay with ECS or EKS.

 

Iron.io offers a mature, feature-rich alternative to both Fargate and ECS/EKS. Users can run containers on-premises, in the cloud, or benefit from a hybrid solution. Like Fargate, Iron.io takes care of infrastructure questions such as servers, scaling, setup, and maintenance. This gives your developers more time to spend on deploying code and creating value for your organization.

 

Want to find out more? You can try the advantages of Iron.io for yourself with a no-obligations test drive. Sign up today to request a demo of Iron.io, or start a free, full-featured 14-day trial.

IBM MQ (IBM Message Queue): Overview and Comparison to IronMQ

Wherever two or more people need to wait in line for something, you can guarantee that there will be a queue: supermarkets, bus stops, sporting events, and more.


It turns out, however, that queues are also a useful concept in computer science.

The data structure known as a queue is a collection of objects that are stored in consecutive order. Elements in the queue may be removed from the front of the queue and inserted at the back of the queue (but not vice versa).
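
As a minimal illustration, Python’s collections.deque behaves exactly this way: items are appended at the back and popped from the front, in arrival order.

from collections import deque

line = deque()            # an empty queue
line.append("first")      # enqueue at the back
line.append("second")
line.append("third")

print(line.popleft())     # dequeue from the front -> "first"
print(line.popleft())     # -> "second"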

Queues are a good choice of data structure when you have items that should be processed one at a time in first-in, first-out (FIFO) order. One particularly important application is the message queue. In this article, we’ll discuss IBM MQ, one of the most popular solutions for implementing message queues, and see how it stacks up against Iron.io’s IronMQ software.

What is a Message Queue?

As the name suggests, a message queue is a queue of messages that are sent between different software applications. Messages consist of any data that an application wants to send, as well as a header at the start of the message that describes the data that follows.


Message queues are necessary because different applications consume and produce information at different speeds. For example, one application may sporadically create large volumes of messages at the same time, while another application may slowly process messages, one after another.

The differing speeds at which these two applications operate can pose a problem: because the first application’s messages are all produced at the same time, most of them will be lost unless they can be temporarily stored in a message queue.

A message queue is a classic example of asynchronous communication, in which messages and responses do not need to occur at the same time. The messages that are placed on the queue do not require an immediate response, but can be postponed until a later time. Emails and text messages are other examples of asynchronous communication in the real world.
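
As a small sketch of that idea, the Python snippet below uses the standard library’s thread-safe queue.Queue as the buffer between a bursty producer and a slow consumer; the message counts and timings are made up purely for illustration.

import queue
import threading
import time

q = queue.Queue()  # thread-safe FIFO buffer between the two sides

def producer():
    # Bursty producer: emits a batch of messages all at once.
    for i in range(5):
        q.put(f"message {i}")
    print("producer: burst of 5 messages enqueued")

def consumer():
    # Slow consumer: drains the queue at its own pace.
    while True:
        msg = q.get()
        time.sleep(1)              # simulate slow processing
        print(f"consumer: processed {msg}")
        q.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
q.join()  # the producer finished long ago; the consumer catches up here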


While implementing a basic message queue is a fairly straightforward task, complex IT environments may include communications between separate operating systems and network protocols. In addition, basic message queues may not be resilient when the network goes down, causing important messages to be lost.

For these reasons, many organizations have chosen to use “message-oriented middleware” (MOM): applications that make it easier for different software and hardware components to exchange messages.

There are a variety of MOM products on the market today (like Delayed Job or Sidekiq in the Ruby on Rails world), each one intended for different situations and use cases. In the rest of this article, we’ll compare and contrast two popular options for exchanging data via MOM software: IBM MQ and IronMQ.

What to Consider When Selecting a Message Queue

Message queues are essential to how different applications interact and exchange data within your IT environment. This means, of course, that choosing the right message queue solution is no easy task–picking the wrong one can drastically affect your performance.

Below, we’ll discuss some of the most important factors to consider when choosing a message queue solution.

Features

Depending on the specifics of your IT environment, you may require any number of different features from your message queue. Here’s just a small selection of potential functionality:

  • Pushing and/or pulling: Most message queues include support for both pushing and pulling when retrieving new messages. In the first option, new messages are “pushed” to the receiving application in the form of a direct notification. In the second option, the receiving application chooses to “pull” new messages itself by checking the queue at regular intervals.


  • Delivery options: You may want to schedule messages at a specific time, or send messages more than once in order to make sure that they are delivered. If so, choose a message queue that includes support for these features.
  • Message priorities: Some messages are more critical or urgent than others. In order to receive the information you need in a timely manner, your message queue may need a way of moving important messages up the queue (just like letting late passengers cut in line at the airport).
  • Persistence: Messages that are persistent are written to disk as soon as they enter the queue, while transient messages are only written to disk when the system is using a large amount of memory. Persistence improves the durability of your messages and ensures that they will be processed even in the event of a system crash.


Scalability and Performance

The more complex your IT environment is, the more difficult it is to scale it all at once. Instead, you can scale each application independently, decoupled from the rest of the environment, and use a message queue to communicate asynchronously.

Certain message queue solutions are better-suited for improving the scalability and performance of your IT environment. Look for options that are capable of handling high message loads at a rapid pace.

Pricing

Different message queue solutions may have different price points and pricing models that lead you to choose one over the other.

“As a service” is currently the predominant pricing model for message queues. This means that customers have a “pay as you go” plan in which they are charged by the hours and the computing power that they use. However, there are also prepaid message queue plans with an “all you can eat” pricing model.

Security

With hacks and data breaches constantly in the news, maintaining the security of your message queue should be a primary concern. Malicious actors may attempt to insert fraudulent messages into the queue and use them to exfiltrate data or gain control over your system.

Message queue solutions that use the Advanced Message Queuing Protocol (AMQP) include support for transport-level security. In addition, if the contents of the message itself may be sensitive, you should look for a solution that encrypts messages while in transit and at rest within the queue.


What is IBM MQ (IBM Message Queue)?

IBM Message Queue (IBM MQ) is a MOM product from IBM that seeks to help applications communicate and swap data in enterprise IT environments. IBM MQ calls itself a “flexible and reliable hybrid messaging solution across on-premises and clouds.” It includes support for a variety of different APIs, including Message Queue Interface (MQI), Java Message Service (JMS), REST, .NET, IBM MQ Light, and MQTT.

The IBM MQ software has been around in some form since 1993. Thanks to the widespread demand for real-time transactions on the Internet, IBM MQ and other message queue solutions have enjoyed a renewed popularity in recent years.

The benefits of using IBM MQ include:

  • Support for on-premises, cloud, and hybrid environments, as well as more than 80 different platforms.
  • Advanced Message Security (AMS) for encrypting and signing messages between applications.
  • Multiple modes of operation, including point-to-point, publish/subscribe, and file transfer.
  • A variety of tools for managing and monitoring queues, including MQ Explorer, the MQ Console, and MQ Script Commands.

On websites such as TrustRadius and IT Central Station, IBM MQ users mention a few advantages and disadvantages of the software. Some of the recurring themes in these reviews are:

  • IBM MQ is reliable and does its job well, without any lost messages.
  • The software helps to improve data integrity, availability, and security.
  • The user interface may be a little unintuitive and challenging, especially for first-time users.
  • Tools such as MQ Explorer seem to be “aging” and are not as effective as third-party solutions.
  • IBM MQ lacks certain integrations that would be useful in a modern IT enterprise environment.

IBM MQ vs. IronMQ

There’s no doubt that IBM MQ is a robust, mature message queue solution that fits the needs of many organizations. However, it’s far from the only MOM software out there. Offerings such as Iron.io’s IronMQ are highly viable message queue alternatives, and in many cases may be superior to market leaders such as IBM MQ.

What is IronMQ?

IronMQ is a messaging queue solution from Iron.io, a cloud application services provider based in Las Vegas. According to Iron.io, the IronMQ message queue is “the most industrial-strength, cloud-native solution for modern application architecture.”


The software includes support for all major programming languages and is accessible via REST API calls. Iron.io offers a number of different monthly and annual pricing models for IronMQ, ranging from the hobbyist all the way up to the large enterprise.
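
As a rough sketch of how that looks in code, here’s the basic flow with Iron.io’s iron_mq Python client. The project ID, token, and queue name are placeholders, and method names can differ between client and API versions, so treat this as an outline rather than copy-paste code.

from iron_mq import IronMQ

# Placeholder credentials; real values come from your Iron.io project settings.
mq = IronMQ(project_id="YOUR_PROJECT_ID", token="YOUR_TOKEN")

queue = mq.queue("work-items")      # hypothetical queue name
queue.post("hello from IronMQ")     # enqueue a message
print(queue.get())                  # pull the next message off the queue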

The benefits of IronMQ include:

  • Support for both push and pull queues, as well as “long polling” (holding a pull request open for a longer period of time).
  • The use of multiple clouds and availability zones, making the service highly scalable and resistant to failure. In the event of an outage, queues are automatically redirected to another zone without any action needed on the part of the user.
  • Backing by a high-throughput key/value data store. Messages are preserved without being lost in transit, and without the need to sacrifice performance.
  • Flexible deployment options. IronMQ can be hosted on Iron.io’s shared infrastructure or on dedicated hardware to improve performance and redundancy. In addition, IronMQ can run on your internal hardware in cases where data must remain on-premises.

IBM MQ vs. IronMQ: The Pros and Cons

Both IBM MQ and IronMQ can run in the cloud, which means they enjoy all the traditional benefits of cloud computing: better reliability and scalability, faster speed to market, less complexity, and so on.

Since it was created with the cloud in mind, IronMQ is particularly well-suited for use with cloud deployments. Because IronMQ uses well-known cloud protocols and standards such as HTTP, JSON, and OAuth, cloud developers will find IronMQ exceedingly simple to work with.


IronMQ users enjoy access to an extensive set of client libraries, each one with easy-to-read documentation. The IronMQ v3 update has also made the software faster than ever for customers who need to maintain high levels of performance.

Customers who already use Iron.io’s IronWorker software for task management and scheduling will find IronMQ to be the natural choice. According to one IronMQ user in the software industry, “I can run my Workers and then have them put the finished product on a message queue – which means my whole ETL process is done without any hassle.”

On the other hand, because it’s part of the IBM enterprise software family, IBM MQ is the right choice for organizations that already use IBM applications. If you already have an application deployed on IBM WebSphere, then it will be easier to simply use it together with IBM MQ.

What’s more, IBM MQ is capable of working well in many different scenarios with different technologies, including mainframe systems. However, some customers report that IBM MQ has a clunky, “legacy” feel to it and is difficult to use in an agile IT environment.

While it’s definitely able to compete with IBM MQ, IronMQ also stacks up favorably against other message queue solutions such as RabbitMQ and Kafka. For example, RabbitMQ’s use of the AMQP protocol means that it is more difficult to use and can only be deployed in limited environments. According to various benchmarks, IronMQ is roughly 10 times as fast as RabbitMQ.

IronMQ Customer Reviews

Of course, reading long lists of software features can only go so far–you need customer feedback in order to make sure that the application really does what it says on the tin.

The good news is that IronMQ has a number of happy customers who are all too eager to share their positive experiences. John Eskilsson, technical architect at the engineering firm Edeva, raves about IronMQ in his testimonial on FeaturedCustomers:

“IronMQ has been very reliable and was easy to implement. We can take down the central server for maintenance and still rely on the data being gathered in IronMQ. When we start up the harvester again, we can consume the queue in parallel using IronWorker and be back to real-time quickly.”

In a review on G2, one user working in marketing and advertising praised IronMQ’s reliability and performance:

“My experience with the message queues was a good one. I had no issues and found the message queues to be very reliable. The website has good monitoring showing exactly what is happening in real time.”

The world’s most popular websites may receive millions of page hits per day, and more during times of peak activity. Businesses such as CNN need a robust, feature-rich, highly available message queue solution in order to get the right information to the right people. CNN is one of many enterprise clients that uses IronMQ as its message queue solution.

IBM MQ vs. IronMQ: Which is Right for You?

At the end of the day, no one can tell you which message queue solution is right for your company’s situation. Both IBM MQ and IronMQ have their advantages and drawbacks, and only one may be compatible with your existing IT infrastructure.

In order to make the final decision, draw up a list of the features and functionality that are most important to you in a message queue. These may include issues such as persistence, fault tolerance, high performance, compatibility with existing software and hardware, and more.

Fortunately, you can also try IronMQ before you buy. Want to find out why so many clients are proud to use IronMQ and other Iron.io products? Request a demo of IronMQ, or sign up today for a free, full-feature 14-day trial of the software.

Amazon SQS (Simple Queue Service): Overview and Tutorial

What’s a Queue?  What’s Amazon SQS?


Queues are a powerful way of decoupling the pieces of a software architecture. They allow for asynchronous communication between different systems, and are especially useful when the throughput of those systems is unequal. Amazon offers its version of queues with Amazon SQS (Simple Queue Service).

For example, if you have something like:

  • System A – produces messages periodically in huge bursts
  • System B – consumes messages constantly, at a slower pace

With this architecture, a queue would allow System A to produce messages as fast as it can, and System B to slowly digest the messages at its own pace.

Queues have played an integral role in software architecture for decades, alongside core technology concepts like APIs (Application Programming Interfaces) and ETL/ELT (Extract, Transform, Load and Extract, Load, Transform). With the recent trend toward microservices, they have become more important than ever.

Amazon Web Services

AWS (Amazon Web Services) is one of the leading cloud providers in the world, and anyone writing software is probably familiar with them. AWS offers a wide variety of “simple” services that traditionally had to be implemented in-house (e.g., storage, databases, computing). The advantages offered by cloud providers are numerous, and include:

  • Better scalability – your data center is a drop in their ocean. They’ve got mind-boggling capacity. And it’s spread around the world.
  • Better reliability – they hire the smartest people in the world (oodles of them) to ensure these services work correctly, all the time.
  • Better performance – you can typically harness as much computing horsepower as you’d like with cloud providers, far exceeding what you could build in-house.
  • Better (lower) cost – nowadays, they can usually do all this cheaper than you could in your own data center, especially when you account for all the expertise they bring to the table. And many of these services employ a “pay as you go” model, charging for usage as it occurs. So you don’t have to pay the large up front cost for licenses, servers, etc.
  • Better security – their systems are always up to date with the latest patches, and all their smart brainiacs are also thinking about how to protect their systems.

If you have to choose between building out your own infrastructure, or going with something in the cloud, it’s usually an easy decision.

AWS Simple Queue Service

It comes as no surprise that AWS also offers a queueing service, simply named AWS Simple Queue Service. It touts all the cloud benefits mentioned before, and also features:

  • Automatic scaling – if your volume grows you never have to give a thought to your queuing architecture. AWS takes care of it under the covers.
  • Infinite scaling – while there probably is some sort of theoretical limit here (how many atoms are in the universe?), AWS claims to support any level of traffic.
  • Server side encryption – using AWS SSE (Server Side Encryption), messages can remain secure throughout their lifetime on the queues.

Their documentation is also top-notch. It’s straightforward to get started playing with the technology, and when you’re ready for serious, intricate detail, the documentation goes deep enough to get you there.

Example

Let’s walk through a simple example of using AWS SQS, using the line at the DMV (Department of Motor Vehicles) as the example subject matter. The DMV is notorious for long waits, forcing people to corral themselves into some form of a line. While this isn’t an actual use case anyone would (presumably) solve using AWS SQS, it will allow us to quickly demo their capabilities, with a real-world situation most are all too familiar with.

While AWS SQS has SDK libraries for almost any language you may want to use, I’ll be using their REST interface for this exercise (with my trusted REST sidekick, Postman!).

Authorization

Postman makes it easy to set up all the necessary authorization using Collections. Configure the AWS authorization in the parent collection with the Access Key and Secret Access Key found in the AWS Console:

(Screenshot: AWS authorization configured on the parent Postman collection)

Then reference that authorization in each request:

(Screenshot: an individual request inheriting the collection’s AWS authorization)

Using this pattern, it’s easy to quickly spin up requests and put AWS SQS through its paces.
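
If you’d rather script these calls than click through Postman, the boto3 SDK does the same request signing for you; the sketch below assumes your access keys are already configured in the environment or in ~/.aws/credentials.

import boto3

# boto3 reads the Access Key and Secret Access Key from the environment
# (or ~/.aws/credentials) and signs every request automatically.
sqs = boto3.client("sqs", region_name="us-east-1")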

Creating a Queue

When people first walk in the door, any DMV worth their salt will give them a number to begin the arduous process. This is your main form of identification for the next few minutes/hours (depending on that day’s “volume”), and it’s how the DMV employees think of you (“Number 14 over there sure seems a bit testy!”).

Let’s create our “main queue” now, with the following REST invocation:

Request:

GET https://sqs.us-east-1.amazonaws.com?Action=CreateQueue&DefaultVisibilityTimeout=0&QueueName=MainLine&Version=2012-11-05

Response:

<CreateQueueResponse>
  <CreateQueueResult>
    <QueueUrl>https://sqs.us-east-1.amazonaws.com/612055710376/MainLine</QueueUrl>
  </CreateQueueResult>
  <ResponseMetadata>
    <RequestId>fa178e12-3178-5318-8d90-da20904943f0</RequestId>
  </ResponseMetadata>
</CreateQueueResponse>

Good deal. Now we’ve got a mechanism to track people as they come through the door.
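
For comparison, the equivalent boto3 call is a one-liner (plus the client setup); the queue name and visibility timeout mirror the REST request above.

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Create the standard queue and keep its URL for the calls that follow.
response = sqs.create_queue(
    QueueName="MainLine",
    Attributes={"VisibilityTimeout": "0"},   # mirrors the timeout in the REST call
)
queue_url = response["QueueUrl"]
print(queue_url)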

Standard vs FIFO

One important detail that should be mentioned – there are two types of queues within AWS SQS:

  • Standard – higher throughput, with “at least once delivery”, and “best effort ordering”.
  • FIFO (First-In-First-Out) – not as high throughput, but guarantees on “exactly once” processing, and preserving ordering of messages.

Long story short, if you need things super fast, can tolerate messages out of order, and possibly sent more than once, Standard queues are the answer. If you need absolute guarantees on order of operations, no duplication of work, and don’t have huge throughput needs, then FIFO queues are the best choice.

We’d better make sure we create our MainLine queue as a FIFO queue! While a “mostly in order” guarantee might suffice in some situations, you’d have a riot on your hands at the DMV if people started getting called out of order. Purses swinging, hair pulling – it wouldn’t be pretty. FIFO queue names must end with the “.fifo” suffix, and the FifoQueue flag is passed as a queue attribute. We’ll also enable content-based deduplication so we don’t have to supply a deduplication ID with every message:

Request

https://sqs.us-east-1.amazonaws.com?Action=CreateQueue&QueueName=MainLine.fifo&Version=2012-11-05&Attribute.1.Name=FifoQueue&Attribute.1.Value=true&Attribute.2.Name=ContentBasedDeduplication&Attribute.2.Value=true
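
The same FIFO queue creation in boto3 looks like this; note the required ".fifo" suffix and that attribute values are passed as strings.

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# FIFO queue names must end in ".fifo". Content-based deduplication means we
# don't have to supply an explicit deduplication ID with every message.
response = sqs.create_queue(
    QueueName="MainLine.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
)
fifo_queue_url = response["QueueUrl"]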

Send Message

Now that we’ve got a queue, let’s start adding “people” to it, using the “SendMessage” action. Note that when using REST, we need to URL encode the payload. So something like this:

{
"name": "Ronnie Van Zandt",
"drivers_license_number": "1234"
}

Becomes this:

%7B%0A%20%20%20%22name%22%3A%20%22Ronnie%20Van%20Zandt%22%2C%0A%20%20%20%22drivers_license_number%22%3A%20%221234%22%0A%7D

There are many ways of accomplishing this, I find the urlencoder site to be easy and painless.

Because MainLine.fifo is a FIFO queue, we also have to include a MessageGroupId; messages that share a group ID are delivered in order. Here’s the final result:

Request

https://sqs.us-east-1.amazonaws.com/612055710376/MainLine.fifo?Version=2012-11-05&Action=SendMessage&MessageGroupId=dmv-line&MessageBody=%7B%0A%20%20%20%22name%22%3A%20%22Ronnie%20Van%20Zandt%22%2C%0A%20%20%20%22drivers_license_number%22%3A%20%221234%22%0A%7D

Response:

<SendMessageResponse>
  <SendMessageResult>
    <MessageId>00ad4e10-4394-450f-8902-4a9cf4b96b95</MessageId>
    <MD5OfMessageBody>b9f28edc9c6dc9fe2a86f5ae8efb2364</MD5OfMessageBody>
  </SendMessageResult>
  <ResponseMetadata>
    <RequestId>97a41dd4-5d15-59e0-b9f5-49e02fb4384d</RequestId>
  </ResponseMetadata>
</SendMessageResponse>

After this call, we’ve got young Ronnie standing in line at the DMV. Thanks to AWS’s massive scale and performance, we can leave Ronnie there as long as we’d like. And we can add as many people as we’d like – with AWS SQS’s capacity, we could have a line around the world. But that’s horrible customer service; someone needs to find out what Ronnie needs!
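
Here’s the same send in boto3 for comparison; the SDK handles the URL encoding, and MessageGroupId is the extra parameter a FIFO queue requires.

import json

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/612055710376/MainLine.fifo"

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({
        "name": "Ronnie Van Zandt",
        "drivers_license_number": "1234",
    }),
    MessageGroupId="dmv-line",   # required for FIFO queues
)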

ReceiveMessage

At the DMVs I’ve been to, there’s usually a large electronic sign on the counter that will display the next lucky person’s number. You feel a brief pulse of joy when your number displays, and rush to the counter on a pillow of euphoria, eager to get on with your life. How do we recreate this experience in AWS SQS?

Why, “ReceiveMessage”, of course! (Note we are invoking it using the actual QueueUrl passed back by the CreateQueue call above)

Request

https://sqs.us-east-1.amazonaws.com/612055710376/MainLine.fifo?Action=ReceiveMessage&Version=2012-11-05

Response

<ReceiveMessageResponse>
  <ReceiveMessageResult>
    <Message>
      <MessageId>00ad4e10-4394-450f-8902-4a9cf4b96b95</MessageId>
      <ReceiptHandle>AQEBjq8apWDfLXE0pCbpABh6Wdx70ZbszY0k38t9u8Mrny1Jz+Q522Vwvvf4xLqzQHfjoHQd56JJJEM67LJG5tQ/YSCibFSNCg8jfadyNMbqBH48/WxmpYunI3w1+GbDCL2tlKkDz/Lm9akGasgDZEBtw6U9jw1Bu6XbzNuNiw5jfVzjC99E38KSvxvZMHfmSi3Wo2XOBAcfU0oTpLmGMwccGiRUOp4XtS38nMXHhBdtKSS+U11N38cJAtlnxHQJkXmTAk7ZdvpxJNtnOrXmeGN00vtf6OSyLJzRJJieYHNtxIyxojcGZcnJQ6dTveMWQ1A1FOzschRuavl3wtftDS/YSt5sDNeBcjEOE+Y0QE+18qiWaDZc+nlaetcBvqmt6Hbt</ReceiptHandle>
      <MD5OfBody>b9f28edc9c6dc9fe2a86f5ae8efb2364</MD5OfBody>
      <Body>{ "name": "Ronnie Van Zandt", "drivers_license_number": "1234" }</Body>
    </Message>
  </ReceiveMessageResult>
  <ResponseMetadata>
    <RequestId>6a43b589-940c-52a4-bc62-e1bde75e22e4</RequestId>
  </ResponseMetadata>
</ReceiveMessageResponse>

One thing to keep in mind – ReceiveMessage doesn’t actually REMOVE the item from the queue – the item will remain there until explicitly removed. Visibility Timeout can be used to ensure multiple readers don’t attempt to process the same message.
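
In boto3, receiving a message looks like the snippet below; WaitTimeSeconds enables long polling so the call waits briefly for a message instead of returning empty-handed, and the ReceiptHandle is what you’ll need for the delete that follows.

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/612055710376/MainLine.fifo"

response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    WaitTimeSeconds=10,   # long polling: wait up to 10 seconds for a message
)

for message in response.get("Messages", []):
    print(message["Body"])                      # the JSON payload we sent earlier
    receipt_handle = message["ReceiptHandle"]   # needed to delete the message later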

So how do we permanently mark the item as “processed”? By deleting it from the queue!

DeleteMessage

The DeleteMessage action is what removes items from a queue. There’s not really a good analogy with the DMV here (thankfully, DMV employees can’t “delete” us), so we’ll just go with an example. DeleteMessage takes the ReceiptHandle returned by the ReceiveMessage endpoint as a parameter (once again, encoded):

Request

https://sqs.us-east-1.amazonaws.com/612055710376/MainLine.fifo?Action=DeleteMessage&Version=2012-11-05&ReceiptHandle=AQEBjq8apWDfLXE0pCbpABh6Wdx70ZbszY0k38t9u8Mrny1Jz%2BQ522Vwvvf4xLqzQHfjoHQd56JJJEM67LJG5tQ%2FYSCibFSNCg8jfadyNMbqBH48%2FWxmpYunI3w1%2BGbDCL2tlKkDz%2FLm9akGasgDZEBtw6U9jw1Bu6XbzNuNiw5jfVzjC99E38KSvxvZMHfmSi3Wo2XOBAcfU0oTpLmGMwccGiRUOp4XtS38nMXHhBdtKSS%2BU11N38cJAtlnxHQJkXmTAk7ZdvpxJNtnOrXmeGN00vtf6OSyLJzRJJieYHNtxIyxojcGZcnJQ6dTveMWQ1A1FOzschRuavl3wtftDS%2FYSt5sDNeBcjEOE%2BY0QE%2B18qiWaDZc%2BnlaetcBvqmt6Hbt

Response

<DeleteMessageResponse>
  <ResponseMetadata>
    <RequestId>a69c7042-d0e2-546a-bdf7-2476a30b89df</RequestId>
  </ResponseMetadata>
</DeleteMessageResponse>
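
And the boto3 version of the delete, using the receipt handle returned by ReceiveMessage (truncated here as a placeholder):

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/612055710376/MainLine.fifo"

# The handle below stands in for the full ReceiptHandle returned by ReceiveMessage.
receipt_handle = "AQEBjq8apWDfLXE0pCbpABh6Wdx7..."

sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=receipt_handle)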

And just like that, Ronnie is able to leave the DMV with his newly printed license, all thanks to AWS SQS!


IronMQ vs AWS SQS

While AWS SQS has many strengths, there are advantages to using IronMQ that make it a more compelling choice, including:

Client Libraries

IronMQ features an extensive set of client libraries with clear, straightforward documentation. Getting started with IronMQ is a breeze. After playing with both SDKs, I found the IronMQ experience to be the easier of the two.

Speed

IronMQ is much faster than SQS, and v3 makes it faster and more powerful than ever before. In high-volume systems, bottlenecks in your messaging architecture can bring the whole system to its knees. Faster is better, and IronMQ delivers in this area.

Push Queues

IronMQ offers something called Push Queues, which supercharge your queueing infrastructure with the ability to push messages out. Rather than relying solely on services pulling messages off queues, your queues can proactively send messages to designated endpoints and subscribers. This powerful feature expands the communication options between systems, resulting in faster workflow completion and more flexible architectures.

Features

Check out the comparison matrix between IronMQ and its competitors (including SQS). IronMQ clearly stands out as the most feature-rich offering, with functionality not offered by SQS (or anyone else, for that matter).

In Con-q-sion

Hopefully this simple walkthrough is enough to illustrate some of the possibilities of using AWS SQS for your queuing needs. It is easy to use, incredibly powerful, and its SDKs support a variety of languages. And may your next trip to the DMV be just as uneventful as young Ronnie’s.

Happy queueing!