ECS Alternatives

If you are in the field of software development, you have probably heard of containers. A containerized application has myriad benefits, including efficiency, cost, and portability. One of the big questions with this technology is where and how to host it: in house, in the cloud, or somewhere else? Amazon Web Services (AWS) offers a few options for container hosting. Elastic Container Service (ECS) is one of those offerings. ECS provides robust container management, supercharged with the power of AWS. However, there are other options out there, and an ECS alternative may better fit your needs. An important decision like this justifies some shopping around.

There are several things to consider when choosing a container host. One size does not fit all! Each customer has their own in-house skillset and existing cloud integrations.

This post will illustrate the important things to consider. We will dig into the details of alternatives to ECS, comparing and contrasting the offerings and looking at the pros and cons of each. With this background, you will be better equipped to decide which solution best fits your business needs.


AWS Elastic Container Service

AWS Elastic Container Service (ECS) is Amazon’s main offering for container management. Utilizing ECS allows you to take advantage of AWS’s scale, speed, security, and infrastructure. With this power, you can launch one, tens, or thousands of containers to handle all your computing needs. ECS also ties in with all the other AWS services, including databases, networking, and storage.

ECS offers two main options for containers:

  • AWS Elastic Compute Cloud (EC2): EC2 is AWS’s virtual machine service. Using this option, you are responsible for selecting the servers you want in your container cluster. Once that’s complete, AWS handles the management and orchestration of the servers.
  • AWS Fargate: Fargate abstracts things another level, eliminating the need to manage EC2 instances. Rather, you specify the CPU and memory requirements, and AWS provisions EC2 instances under the covers. This offers all the power of ECS, without worrying about the details of the actual underlying servers.
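The choice between the two launch types shows up in the task definition you register with ECS. Here is a minimal sketch in Python; the field names follow the ECS RegisterTaskDefinition API, while the family name, image, and sizes are illustrative:

```python
# Sketch of an ECS task definition suitable for the Fargate launch type.
# With Fargate you must declare task-level cpu/memory up front; with the
# EC2 launch type those fields are optional, because you size the cluster
# instances yourself.
task_definition = {
    "family": "web-app",                     # illustrative name
    "requiresCompatibilities": ["FARGATE"],  # or ["EC2"]
    "networkMode": "awsvpc",                 # required for Fargate
    "cpu": "256",      # 0.25 vCPU, expressed in CPU units
    "memory": "512",   # MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}

# With boto3, this payload would be passed to:
#   boto3.client("ecs").register_task_definition(**task_definition)
```

Switching launch types is then largely a matter of changing `requiresCompatibilities` (and, for EC2, managing the instances the tasks land on).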

Pros and Cons

Here are some things to consider with the ECS offerings:

  • Integration with AWS: One of the biggest decisions around using ECS is its integration and reliance on AWS. This is either a pro or a con, depending on your circumstances. If you are already using AWS, adding ECS to the mix is a straightforward proposal. However, if you are not currently using AWS, there is a considerable learning curve to get up and running.
  • More Automation: ECS provides layers of automation over your containers. Customers without in-house expertise to manage the lower-level complexities may prefer this. However, it may also bind the hands of someone who wants more control over their container landscape. Fargate takes the automation a step further. Again, that could be good or bad, depending on your situation.
  • Cost: In this age of modern cloud computing, it is typically more cost effective to run everything in the cloud. No more hardware to purchase, networking snafus to resolve, or expertise to hire and retain. However, the cost differences in the container offerings are more nuanced. If you have container expertise in-house, it might be more cost effective to run your own container solution on top of AWS services. If not, you may save money using something like ECS.
  • Deployments: One key drawback to ECS is that it is not available on-premise. While an all-cloud setup may be fine for many businesses, there are cases where maintaining legacy services or closed networks on-premises is preferable, if not mandatory.
  • Vendor lock in: In order to use ECS, you must be on the AWS cloud. That carries the risk of getting locked into a single technology provider unless you take pains to avoid it.

Google Cloud/Kubernetes

Similar to AWS, Google offers “all the things” on its cloud services: servers, storage, databases, networking, and other technologies. Google’s solution for managing containers is Kubernetes, an industry leader in container orchestration. Kubernetes began as a project within Google, which eventually open-sourced it. Since then, it has become one of the strongest options for container orchestration, and all the major cloud providers now offer it as a managed service. Google’s managed offering, comparable to AWS’s ECS, is called Google Kubernetes Engine, or GKE for short.
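Running on GKE means describing workloads in Kubernetes terms. A minimal sketch of a Deployment manifest (built here as a Python dict, the same structure you would serialize to YAML and feed to `kubectl apply`; the names and image are illustrative):

```python
# Sketch of a Kubernetes Deployment you might apply to a GKE cluster.
# The metadata name, labels, and container image are placeholders.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # Kubernetes keeps three pods running
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "nginx:latest",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}
```

The upside of this verbosity is portability: the same manifest works on any conformant Kubernetes cluster, not just GKE.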


Pros and Cons

There are some pros and cons of using Google for your container services:

  • Integration with Google services: Like the AWS decision, you need to consider whether you currently use Google Cloud services. If you are already heavily invested there, adding Kubernetes to the top makes sense. If you are not, then it may introduce a large amount of time and cost to the equation.
  • Familiarity with Kubernetes: This is a big one. If you have in-house expertise with Kubernetes, you’ll feel comfortable running it in Google Cloud. If not, there’s a fairly steep learning curve to get there. Kubernetes is not for the faint-hearted.
  • Less Automation: With Kubernetes, Google puts more power (and responsibility) in the hands of their customers. Some customers may prefer that level of control. Others may not want to worry about these lower-level details.
  • Deployments: As with AWS, a key drawback is that it is not available for on-premise deployments.
  • Vendor lock in: In order to use GKE, you must be on GCP. Again, this means the possibility of getting locked into a single technology provider if steps are not taken to avoid this.

Microsoft Azure


Rounding out the offerings of the “Big Three” cloud providers is Microsoft’s Azure. It offers a few flavors of container management, including the following:

  • Azure Kubernetes Service (AKS): Azure provides hosting for a Kubernetes service, and with it, the same pros and cons. Good for customers with Kubernetes know-how, maybe not for those without.
  • Azure App Service: This is a more limited option, where a small set of Azure-specific application types can run within hosted containers.
  • Azure Service Fabric: Service Fabric allows for hosting an unlimited number of microservices. They can run in Azure, on premises, or within other clouds. However, you must use Microsoft’s infrastructure.
  • Azure Batch: This service runs recurring jobs using containers.

Pros and Cons

Here are some pros and cons of the Azure offerings:

  • Confusion: The list above illustrates the many container-based services Azure offers. There are many “Azure-specific” technologies at play here. It can be hard to differentiate where the containerization stops and the Azure-specific things begin.
  • Integration with Azure Services: If you are already using Azure for other services, using its container offerings makes sense. If not, you’ll need to climb the Azure learning curve. As with the other cloud providers, this introduces time and resource expenses.
  • Less (or More?) Automation: The Azure offerings run the gamut, from unmanaged (Azure Container Registry) to fully managed (Azure App Service and Azure Service Fabric). Once educated on the features, pros, and cons of each, you may find a solution that perfectly meets your needs. Or, you might possibly drown in the details.
  • Deployments: Differing from both AWS and GCP, Azure Service Fabric is actually available on-premise. However (and it’s a big however), you must use Microsoft servers that Azure provides. By going down this route, you are virtually guaranteed to be locked into the Azure/Microsoft technology architecture with no easy way out.
  • Vendor lock in: See above, as with both GCP and AWS, vendor lock-in is difficult to avoid and expensive to leave.

Iron.io


Another ECS alternative that may surprise you is Iron.io. It provides container services but shields customers from the underlying complexities. This may be perfect for customers not interested in developing large amounts of in-house expertise. Iron.io offers a container management solution called Worker. It is a hosted background job solution supporting a variety of computing workloads. Iron.io allows for several deployment options (on its servers, on your servers, in the cloud, or a combination of these). It manages all your containers and provides detailed analytics on their performance. By handling the low-level details, Iron.io allows you to focus on your applications. You focus on your business; they’ll worry about making sure it all runs correctly.
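To make the background-job model concrete, here is a hypothetical sketch of what queuing a job to a Worker-style service could look like. The endpoint, field names, and function are illustrative placeholders, not Iron.io's actual client API:

```python
# Hypothetical sketch of queuing a background job to a Worker-style
# service. The field names ("code_name", "payload", "priority") are
# assumptions for illustration, not Iron.io's real API contract.
import json

def build_job_request(code_name, payload, priority=0):
    """Assemble the JSON body a job-queue endpoint might accept."""
    return {
        "code_name": code_name,          # which packaged worker to run
        "payload": json.dumps(payload),  # data handed to the container
        "priority": priority,            # 0 = normal in this sketch
    }

req = build_job_request("image-resize", {"url": "https://example.com/a.png"})
```

The key idea is that your application only describes *what* to run and with which data; the service decides where and when the container executes.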

Pros and Cons

Here are some things to know about Iron.io:

  • Easy to Use: For customers that want the benefits of containerization without having to worry about the lower-level details, Iron.io is perfect. Focus on your applications and let the pros worry about infrastructure.
  • Flexible: For customers that have Docker/Kubernetes expertise, Iron.io provides its hybrid solution. You host the hardware and run the workers there. Iron.io provides automation, scheduling, and reporting. You don’t have to give up what you already have to gain what Iron.io has to offer. Iron also offers a completely on-premise deployment of Worker. This allows installing Worker in environments with high compliance and security requirements.
  • Powerful: Iron.io can scale from one to thousands of parallel workers, easily accommodating all sizes of computing needs.
  • Deployments: Unique to Iron Worker is the ability to deploy fully on-premise, as well as hybrid and fully cloud.
  • No Vendor lock-in: Another unique aspect of Iron Worker is the ability to avoid being locked into any single vendor. It is cloud agnostic, so it will run on any cloud. Migration is also virtually a one-click process. This means operational expenses are kept to a bare minimum. It also means deploying redundantly to multiple clouds is an easy, efficient process.

Conclusion

Containerization is the future of computing. The need to own and run our own servers (or even our own operating systems) is slowly fading. The big question is where to start. Customers with Docker expertise and existing cloud provider integrations may find that a container solution from a big cloud provider is the best choice. For customers just starting out in this field, or those looking to add management and analytics to an existing solution, Iron.io adds a good deal of power. Iron.io will grow with you, and with initial architectures in place, other options will unfold.

With this information in hand, you’re better prepared to answer some big questions. May your containers go forth and multiply!

Ready to get started with IronWorker?

Start your free 14-day trial: no credit card, no commitment needed. Sign up here.

Introducing: Computerless™

Iron was one of the pioneers of Serverless, so we’re excited to announce that we’ll also be one of the first companies to offer the next generation of compute:  It’s called Computerless™.

Unlike Serverless, this technology removes the physical machine completely.  Our offering piggy-backs off the recent developments in fiber optic technology developed at the University of Oxford.  If you haven’t heard about this breakthrough, we’ll do our best to explain:

Researchers have found a way to control how light travels at the molecular level, thus being in complete control of the resulting attenuation.  Molecular gates can then be created, and state stored in finite wavelengths. It’s somewhat equivalent to qubits in quantum computing, but in the case of optical fiber, it’s a physical reality.

The end result of this technological release allows for computers to be fully encapsulated in fiber optic cable.  The usual components needed are now mapped 1-to-1, via light. This has allowed Iron’s infrastructure to completely change.  While we’ve run our infrastructure on public clouds like AWS and GCP in the past, we’ve been able to leave that all behind. We’re now able to push our entire suite of products into optical cable itself:


Iron’s new and improved infrastructure on a cheap plot of land in Arkansas

In the next few months, we’ll be pushing all of our customers’ sensitive data into the cables shown above, as well as running all Worker jobs through them.  We’re pretty sure the cables we purchased are for multi-tenant applications, so you can probably rest assured that we’re doing the right thing. In fact, NASA has already expressed an interest in licensing this technology from Iron. Other interested parties include the government of French Guiana and defense conglomerate Stark Industries.

Researchers have kind-of concluded that this technology is ready for prime time, and also are quick to state the fact that in 1998, The Undertaker threw Mankind off Hell In A Cell, and plummeted 16 ft through an announcer’s table.

On-Premises or On-Cloud? How to Make the Choice


Introduction

It was once a tech buzzword, but cloud computing has become a mature best practice for companies of all sizes and industries. Even if you don’t know it, it’s highly likely that part or all of your business has already moved into the cloud.

Case in point: A whopping 96 percent of organizations now report that they use cloud computing in some form or fashion.

Despite this lofty statistic, many companies are also choosing to maintain some or all of their technology on-premises. So what exactly is the difference between on-premises and on-cloud? How can you make the choice for yourself? We’ll discuss the answers in this article.

What Is On-Premise Computing?

On-premise computing (also known as “on-premises”) is the traditional (and, until recently, the dominant) model of enterprise IT.

In the on-premises model, organizations buy their own hardware and software, and then run applications and services on their own IT infrastructure. On-premises applications are sometimes called “shrinkwrap.” This refers to the plastic film used to package commercial off-the-shelf software.

The term “on-premises” implies that the technology is physically located on the organization’s own property. This could be in the building itself or at a nearby facility. This grants the organization full control over the technology’s management, monitoring, configuration, and security.

Companies that use the on-premises model usually need to purchase their own software licenses and handle their own tech support. For these reasons, on-premise computing is well-suited for larger enterprises. They are more likely to have sizable IT budgets and skilled IT employees.

What Is Cloud Computing?


Cloud computing is an enterprise IT model in which hardware and/or software are hosted remotely, rather than on company premises.

There are two main types of cloud computing: public cloud and private cloud.

  • In a public cloud, a third party is responsible for providing your business cloud services, software, and storage via the internet. Your data is hosted within the cloud provider’s remote data center, separate from the data of other customers.
  • In a private cloud, your business owns or maintains its own cloud infrastructure. The cloud is provisioned for the use of a specific organization. Like the public cloud, software, storage, and services are provided remotely via the internet.

In addition to the public-private division, there are three different types of cloud computing that you should know about: IaaS, PaaS, and SaaS.

  • IaaS (infrastructure as a service): This is the most bare-bones cloud computing offering. Users rent IT infrastructure such as virtual machines (VMs), servers, networks, and storage, and access it via the internet. However, they are responsible for managing all other aspects of the system: runtime, middleware, operating systems, applications, and data.
  • PaaS (platform as a service): PaaS includes all the services provided by IaaS, as well as the runtime, middleware, and operating system. Users are only responsible for managing the applications and data.
  • SaaS (software as a service): SaaS is an all-in-one offering that includes everything from the VMs and servers to the applications running atop them and the data that they use. A few examples of SaaS products are Dropbox, Google Apps, Microsoft Office 365, and Adobe Creative Cloud.

On-Premise vs. On-Cloud: The Pros and Cons

Rather than jumping on the cloud bandwagon, it’s important to perform a sober evaluation of the pros and cons of both on-premise and on-cloud for your own business needs. Now that we’ve defined both on-premise computing and cloud computing, let’s discuss which one is more convenient in terms of four considerations: cost, scalability, security, and backups.

Cost Model

In terms of cost, the cloud vs. on-premises comparison boils down to two different pricing models: capital expenses and operating expenses.

With on-premise computing, businesses need to make a large capital investment up front when buying hardware and software. They’re also responsible for any support and maintenance costs incurred during a product’s lifetime.

Cloud computing, meanwhile, is usually an operating expense. Businesses pay a monthly or annual subscription fee in order to have continued access to the cloud. Upgrades, support, and maintenance are the vendor’s responsibility and usually baked into the costs of the subscription.

Some companies find that the subscription model of cloud computing is more convenient for their purposes. Subscribing to a new service for a few months can be more cost-efficient for businesses that are looking to experiment, and for smaller businesses that don’t have large amounts of capital available. However, research by firms such as Gartner has shown that both cost models are equivalent over the long term.

Scalability


Scalability refers to a system’s capacity to easily handle increases in load. For example, a website that usually sees very little traffic could suddenly have thousands or millions of visitors if it starts to receive attention on social media.

Cloud computing is able to rapidly scale storage and services up and down during peaks and lulls in activity. This is tremendously helpful for companies that see frequent changes in demand, such as a greeting card e-commerce website that does most of its business during a few holidays.

When compared with the cloud, on-premise computing is fairly brittle and difficult to scale. Businesses that operate on-premises may need to buy powerful hardware that goes unused much of the time, wasting money and resources.

Compliance and Security

For many organizations, concerns about security and compliance are the biggest reason that they haven’t yet moved their data and infrastructure into the cloud. Fifty-six percent of IT security decision-makers say that their company’s on-premises security is better than what they can have in the cloud.

Industries that handle sensitive personal information — such as health care, finance, and retail — have their own regulations about how this data can be stored and processed. In addition, many U.S. federal agencies have chosen to keep some or all of their workloads on-premises.

Nevertheless, despite popular concerns about the security of cloud computing, there has yet to be a large-scale breach of one of the major public cloud providers that was due to a fault in their technology. The breaches involving Amazon Web Services, Microsoft Azure, and Google Cloud Platform have been due to human error. This presents one advantage of on-premise: containment of human errors.

As Adrian Sanabria, director of ThreatCare says: “Since everything in the cloud is virtualized, it’s possible to access almost everything through a console. Failing to secure everything from the console’s perspective is a common (and BIG) mistake. Understanding access controls for your AWS S3 buckets is a big example of this. Just try Googling “exposed S3 bucket” to see what I mean.”

With an on-premise workload, if a person makes a configuration error, the possibility of a breach is lessened because there is no single console for everything in their IT system. So a single error is less likely to result in a data loss, big or small. After all, human errors will persist for the foreseeable future.

As a final note, both on-premises and cloud storage solutions support encryption for data while in transit and at rest.

Backups and Disaster Recovery


One of the biggest selling points of the cloud is the ability to securely back up your data and applications to a remote location.

Whether it’s a natural disaster or a cyberattack, the effects of a catastrophe can devastate organizations that are caught unprepared. According to FEMA, 40 to 60 percent of small businesses never reopen after suffering a disaster.

In an era when threats both natural and virtual are multiplying, it’s critical to have a robust strategy for disaster recovery and business continuity. Customers, employees, and vendors must all be assured that your doors will be reopened as soon as possible.

The benefits of the cloud as a backup strategy are clear. Data stored in the cloud will survive any natural disaster that befalls your physical infrastructure, thanks to its storage at a remote site.

However, this benefit is also a double-edged sword; restoring data from cloud backups is usually slower than restoring data from on-premises. Therefore, organizations that can afford it often choose a two-pronged backup strategy: on-premises backups as the first line of defense, as well as secondary backups in the cloud.
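The two-pronged strategy is simple to automate: archive locally first (for fast restores), then hand the same archive to a cloud uploader (for off-site disaster recovery). A minimal stdlib-only sketch, with the cloud step left as a stub that in practice might call an S3, GCS, or Azure SDK:

```python
# Two-pronged backup sketch: local archive first, cloud copy second.
# The cloud_upload callable is a stand-in for a real SDK call.
import shutil
import tempfile
from pathlib import Path

def backup(source_dir, local_backup_root, cloud_upload):
    """Create a local .tar.gz archive, then pass it to the cloud uploader."""
    archive = shutil.make_archive(
        str(Path(local_backup_root) / Path(source_dir).name),
        "gztar",
        source_dir,
    )
    cloud_upload(archive)  # secondary, off-site copy
    return Path(archive)

# Demo with throwaway directories and a stub uploader that just records
# what it was given.
src = tempfile.mkdtemp()
Path(src, "data.txt").write_text("hello")
dest = tempfile.mkdtemp()
uploaded = []
archive = backup(src, dest, uploaded.append)
```

Restores then come from the local archive whenever it survived the incident, falling back to the cloud copy only when it did not.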

Hybrid Cloud: The Best of Both Worlds?

So far we’ve presented on-premises and the cloud as polar opposites. However, reality isn’t quite that simple. Fifty-one percent of organizations have chosen to pursue a “hybrid” cloud strategy.

In a hybrid cloud solution, businesses use a combination of both on-premises and cloud technology, mixing and matching as best suits their goals and requirements.

For example, the city government of Los Angeles has opted for a hybrid cloud deployment using both public cloud and on-premises infrastructure. Officials decided that data and applications from certain departments — such as emergency services, traffic control, and wastewater management — are too risky to host in the cloud.

Other enterprises are attracted by the features of the cloud, but are still content with their current on-premises deployment. These organizations choose a hybrid strategy for now, slowly migrating to the cloud while replacing their on-premises infrastructure piece by piece.

Still other companies prefer the business agility that a hybrid cloud strategy offers. Different software, data, and components can operate between different clouds and between the cloud and on-premises. These organizations usually have needs and objectives that evolve quickly, making flexibility an essential concern.

Conclusion

Whether you’re staying on-premises for now or you’re totally committed to the cloud, there’s no wrong answer when it comes to on-premises and on-cloud. Instead of blindly following trends, each business needs to examine its own situation to determine the best fit.

Here at Iron.io, we understand that each organization has a unique timeline and different goals for its enterprise IT. That’s why we offer both IronWorker and IronMQ in cloud, hybrid, and on-premise deployments.  IronMQ is a high-performance message queue solution, while IronWorker is a highly flexible container orchestration tool that allows background task processing with ease.

Want to find out more? Get in touch with us to sign up for a free trial of IronWorker and IronMQ today.

Iron Shout-out: Scout APM


At Iron we love programming languages.  We started off with Ruby way back in the day, and eventually moved most of our latency-critical services to Golang.  Internally, our team is made up of some who love TypeScript, some who speak fluent Rust, and myself… I’m a big Erlang nerd at heart, so I’m obviously a big Elixir fan.

While the aesthetics and “pleasantness” of a language are important to its users, each language is a tool, and the right tool should be used for the right job. This often isn’t the case, however, and we rely on tooling to give us more insight into the repercussions of our language choices and implementations.

Enter Scout APM.  Out of all the SaaS applications we use, it’s (by far) the one that has saved us the most money.  It must be noted that it’s not a product you install and hope it magically solves all your performance issues and optimizes your infrastructure for you.  Scout APM is more like a map that shows you all the treasure chests. It’s up to you to go dig them up, however deep they may be buried.


After installing Scout APM for the first time, we looked for the lowest hanging fruit.  These were easy fixes like missing indexes in our database, N+1 queries, or slow external network requests.  These were thrown in our pipeline and resolved quickly, as they’re mostly one-line fixes. The next step for us was to identify the larger picture issues.  
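The N+1 pattern is worth seeing in miniature, since it is the classic "low-hanging fruit" an APM surfaces. A self-contained sketch with a fake database that counts queries (table and function names are illustrative):

```python
# Illustration of the N+1 query pattern an APM like Scout surfaces.
# A fake "database" records each query so the difference is countable.
queries = []

def fetch_users():
    queries.append("SELECT * FROM users")
    return [1, 2, 3]

def fetch_orders_for(user_id):
    queries.append(f"SELECT * FROM orders WHERE user_id = {user_id}")
    return []

def fetch_orders_bulk(user_ids):
    queries.append(f"SELECT * FROM orders WHERE user_id IN {tuple(user_ids)}")
    return {}

# N+1: one query for the users, then one per user -> 4 queries for 3 users.
queries.clear()
for user in fetch_users():
    fetch_orders_for(user)
n_plus_one = len(queries)

# Batched: one query for the users, one IN query for all their orders -> 2.
queries.clear()
fetch_orders_bulk(fetch_users())
batched = len(queries)
```

With 3 users the gap is small; with 30,000 it is the difference between 2 queries and 30,001, which is exactly the kind of spike that shows up on an APM trace.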

Scout APM does a great job of giving you not just the fine details of a particular issue, but also the ability to take a step back and look at things from a bird’s-eye view.  In our case, we found an ActiveRecord construct used in many places in our platform that was causing huge spikes in memory usage, resulting in extreme process bloat.  This bloat led to churn, and… it definitely snowballed from there.

Our platform used to run on way too many machines and they ran hot.  After fixing most of our performance issues we were able to scale down our instance fleet significantly.  This was even after we went from 30 servers to 2 by moving a critical piece of our infrastructure from Ruby to Golang.

At the end of the day, the cost of Scout APM ended up being an insignificant percentage of what we were saving each month.  It took man hours to fix the actual issues themselves, but these performance enhancements ended up flowing into our pipeline like normal technical debt items.  The benefit of these items is that they were directly tied to decreasing operational costs.

We ended up choosing Scout APM for many reasons. One of the biggest was their fantastic customer support.  They went above and beyond to help answer our constant questions when we first started using their platform (and we asked A LOT of questions).  If you aren’t using an APM tool, or aren’t 100% happy with what you’ve got, the engineering team here at Iron highly recommends running with Scout APM.

AWS Fargate: Overview and Alternatives

Making software applications behave predictably on different computers is one of the biggest challenges for developers. Software may need to run in multiple environments: development, testing, staging, and production. Differences in these environments can cause unexpected behavior, yet be very hard to track down.

To solve these challenges, more and more developers are using a technology called containers. Each container encapsulates an entire runtime environment. This includes the application itself, as well as the dependencies, libraries, frameworks, and configuration files that it needs to run.

Docker and Kubernetes were two of the first container technologies, but they are by no means the only options. In late 2017, Amazon announced that it would jump into the container market with AWS Fargate. So what is AWS Fargate exactly, and is AWS Fargate worth it for developers?

What is AWS Fargate?

Amazon’s first entry into the container market was Amazon Elastic Container Service (ECS). While many customers saw value in ECS, this solution often required a great deal of tedious manual configuration and oversight. For example, some containers may have to work together despite needing entirely different resources.

Performing all this management is the bane of many developers and IT staff. It requires a great deal of resources and effort, and it takes time away from what’s most important: deploying applications.

In order to solve these problems, Amazon has introduced AWS Fargate. According to Amazon, Fargate is “a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters.”

Fargate separates the task of running containers from the task of managing the underlying infrastructure. Users can simply specify the resources that each container requires, and Fargate will handle the rest. For example, there’s no need to select the right server type, or fiddle with complicated multi-layered access rules.
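In practice, launching a Fargate task is just naming the task definition and the network to run in; there is no instance type anywhere in the request. A minimal sketch of the parameters (field names follow the ECS RunTask API; the cluster, task family, and subnet ID are illustrative placeholders):

```python
# Sketch of the parameters for launching a task on Fargate. Notice that
# nothing here names a server or instance type; the cpu/memory come from
# the task definition itself. Subnet and names are placeholders.
run_task_params = {
    "cluster": "default",
    "launchType": "FARGATE",
    "taskDefinition": "web-app:1",  # family:revision (illustrative)
    "count": 1,
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc123"],  # placeholder subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
}

# With boto3: boto3.client("ecs").run_task(**run_task_params)
```

Compare this with the EC2 launch type, where the same call only works after you have provisioned and registered container instances into the cluster yourself.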

AWS Fargate vs. ECS vs. EKS

Besides Fargate, Amazon’s other container offerings are ECS and EKS (Elastic Container Service for Kubernetes). ECS and EKS are largely for users of Docker and Kubernetes, respectively, who don’t mind doing the “grunt work” of manual configuration.

One advantage of Fargate is that you don’t have to adopt it from day one as an AWS customer. Instead, you can begin with ECS or EKS and then migrate to Fargate if you decide that it’s a better fit.

In particular, Fargate is a good choice if you find that you’re leaving a lot of compute power or memory on the table. Unlike ECS and EKS, Fargate only charges you for the CPU and memory that you actually use.
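Fargate's billing model is per vCPU-hour plus per GB-hour of memory, for the time the task actually runs. A back-of-the-envelope sketch of the arithmetic (the rates below are illustrative placeholders, not current AWS list prices):

```python
# Back-of-the-envelope Fargate cost estimate. The rates are assumed
# placeholders for illustration; check the AWS pricing page for real ones.
VCPU_PER_HOUR = 0.04   # $ per vCPU-hour (assumed)
GB_PER_HOUR = 0.004    # $ per GB-hour of memory (assumed)

def fargate_cost(vcpus, memory_gb, hours):
    """Cost of a task sized at (vcpus, memory_gb) running for `hours`."""
    return (vcpus * VCPU_PER_HOUR + memory_gb * GB_PER_HOUR) * hours

# A 0.25 vCPU / 0.5 GB task running 8 hours a day for 30 days:
monthly = fargate_cost(0.25, 0.5, 8 * 30)
```

The point of the model: a bursty task that runs a few hours a day costs a fraction of what a continuously running VM sized for the same peak would, which is where Fargate's "pay for what you use" claim comes from.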

AWS Fargate: Pros and Cons

AWS Fargate is an exciting technology, but does it really live up to the hype? Below, we’ll discuss some of the advantages and disadvantages of using AWS Fargate.

Pro: Less Complexity

These days, tech companies are offering everything “as a service,” taking the complexity out of users’ hands. There’s software as a service (SaaS), infrastructure as a service (IaaS), platform as a service (PaaS), and dozens of other buzzwords.

In this vein, Fargate is a Container as a Service (CaaS) technology. You don’t have to worry about where you’ll deploy your containers, or how you’ll manage and scale them. Instead, you can focus on defining the right parameters for your containers (e.g. compute, storage, and networking) for a successful deployment.

Pro: Better Security

Due to their complexity, Amazon ECS and EKS present a few security concerns. Having multiple layers of tasks and containers in your stack means that you need to handle security for each one.

With Fargate, however, the security of your IT infrastructure is no longer your concern. Instead, you embed security within the container itself. You can also combine Fargate with container security companies such as Twistlock. These companies offer products for guarding against attacks on running applications in Fargate.

Pro: Lower Costs (Maybe)

If you’re migrating from Amazon ECS or EKS, then Fargate could be a cheaper alternative. This is for two main reasons:

  • As mentioned above, Fargate charges you only when your container workloads are running inside the underlying virtual machine. It does not charge you for the total time that the VM instance is running.
  • Fargate does a good job at task scheduling, making it easier to start and stop containers at a specific time.

Want some more good news? In January 2019, Fargate users saw a major price reduction that will slash operating expenses by 35 to 50 percent.

Con: Less Customization

Of course, the downside of Fargate is that you sacrifice customization options for ease of use. As a result, Fargate is not well-suited for users who need greater control over their containers. These users may have special requirements for governance, risk management, and compliance that require fine-tuned control over their IT infrastructure.

Con: Higher Costs (Maybe)

Sure, Fargate is a cost-saving opportunity in the right situation when switching from ECS or EKS. For simpler use cases, however, Fargate may actually end up being more expensive. Amazon charges Fargate users a higher per-hour fee than ECS and EKS users to compensate for taking over the management of your containers’ infrastructure.

In addition, running your container workloads in the cloud will likely be more expensive than operating your own infrastructure on-premises. What you gain in ease of use, you lose in flexibility and performance.

Con: Regional Availability

AWS Fargate is slowly rolling out across Amazon’s cloud data centers, but it’s not yet available in all regions. As of January 2019, Fargate is not available for the following Amazon regions:

  • São Paulo
  • Paris
  • Stockholm
  • Osaka
  • Beijing
  • Ningxia
  • GovCloud (US-West and US-East)

AWS Fargate Reviews

Even though AWS Fargate is still a new technology, it has earned mostly positive feedback on the tech review platform G2 Crowd. As of this writing, AWS Fargate has received an average score of 4.5 out of 5 stars from 12 G2 Crowd users.

Multiple users praise AWS Fargate’s ease of use. One customer says that Fargate “made the job of deploying and maintaining containers very easy.” A second customer praises Fargate’s user interface, calling it “simple and very easy to navigate.”

Another reviewer calls AWS Fargate an excellent solution: “I have been working with AWS Fargate for 1 or 2 years, and as a cloud architect it’s a boon for me…  It becomes so easy to scale up and scale down dynamically when you’re using AWS Fargate.”

Despite these advantages, AWS Fargate customers do have some complaints:

  • One user wishes that the learning curve were easier, writing that “it requires some amount of experience on Amazon EC2 and knowledge of some services.”
  • Multiple users mention that the cost of AWS Fargate is too high for them: “AWS Fargate is costlier when compared with other services”; “the pricing isn’t great and didn’t fit our startup’s needs.”
  • Finally, another user has issues with Amazon’s support: “as it’s a new product introduced in 2017, the quality of support is not so good.”

AWS Fargate Alternatives: AWS Fargate vs. Iron.io

AWS Fargate is a popular solution for container management in the cloud, but it’s far from the only option out there. Offerings such as Iron.io are mature and feature-rich, offering an alternative to Amazon’s own container management solutions.

Iron.io offers IronWorker, a container-based platform with Docker support for performing work on-demand. Just like AWS Fargate, IronWorker takes care of all the messy questions about servers and scaling. All you have to do on your end is develop applications, and then queue up tasks for processing.

As of yet, Fargate’s container scaling technology is not available for on-premises deployments. On the other hand, one of the main goals of Iron.io is for the platform to run anywhere. Iron.io offers a variety of deployment options to fit every company’s needs:

  • Shared: Users can run containers on Iron.io’s shared cloud infrastructure.
  • Hybrid: Users benefit from a hybrid cloud and on-premises solution. Containers run on in-house hardware, while Iron.io handles concerns such as scheduling and authentication. This is a smart choice for organizations who already have their own server infrastructure, or who have concerns about data security in the cloud.
  • Dedicated: Users can run containers on Iron.io’s dedicated server hardware, making their applications more consistent and reliable. With Iron.io’s automatic scaling technology, users don’t have to worry about manually increasing or decreasing their usage.
  • On-premises: Finally, users can run IronWorker on their own in-house IT infrastructure. This is the best choice for customers who have strict regulations for compliance and security. Users in finance, healthcare, and government may all need to run containers on-premises.

Final Thoughts

Breathless tech reviewers have called AWS Fargate a “game changer” and “the future of serverless computing.” As we’ve discussed in this article, however, it’s certainly not the right choice for every company. It’s true that Fargate often provides extra time and convenience. However, Fargate users will also sacrifice control and incur potentially higher costs.

Each organization has a unique situation with different goals and requirements, and only you can say what’s best for your business. Is the task of infrastructure management too onerous for your developers? AWS Fargate may be the right choice. On the other hand, if you need greater control and you’re concerned about costs, you might want to stay with ECS or EKS.

Iron.io offers a mature, feature-rich alternative to both Fargate and ECS/EKS. Users can run containers on-premises, in the cloud, or benefit from a hybrid solution. Like Fargate, Iron.io takes care of infrastructure questions such as servers, scaling, setup, and maintenance. This gives your developers more time to spend on deploying code and creating value for your organization.

Want to find out more? You can try the advantages of Iron.io for yourself with a no-obligations test drive. Sign up today to request a demo of IronWorker or IronMQ, or start a free, full-feature trial for 14 days.

Ready to get started with IronWorker?

Start your free 14-day trial. No cards, no commitments needed. Sign up here.

Docker vs Kubernetes – How Do They Stack Up?

Docker and Kubernetes are two hot technologies in the world of software. Most software architectures are using them, or considering them. The question is often asked – Docker vs Kubernetes – which is better? Which one should we be using? As it turns out, this question misrepresents the two. These two technologies don’t actually do the same thing! They do complement each other nicely, however. In this post, we will explore the “Docker vs Kubernetes” question. We will dig into the backgrounds and details of both. We will also show how they differ. With this information, you can better decide how Docker and Kubernetes fit in your architecture. First, some background…
How Did We Get Here?
Before diving into the topic, let’s walk through a brief history of how we got here.
In the Beginning…
 
In the REALLY early days of computing (like, the 1960s), there was time sharing on mainframes. On the surface, this looked nothing like its modern day counterparts. A room full of big iron, and perhaps a primitive text-based terminal. Lots of little lights. Very limited functionality. Yet, the concept is the same: one machine serving many users at once, each isolated from the others. While not practical for today’s needs, this technology planted the seed for the future. Around the 1980s and 1990s, computer workstations began to grow in prominence. Computers no longer required a room full of mainframe hardware. Instead, a server could fit on your desk. One in every home! In the software industry, these workstations became the main workhorses of web serving. This didn’t scale well to a large number of users and services, due to the expensive hardware. For most users, a beefy workstation offered far more capacity than one person required.
Virtual machines
 
Virtual Machines (VMs) offered a solution to this problem. Full virtualization allowed one physical server to host several “VM instances”. Each instance featured its own copy of the Operating System. This allowed “machines” to be rapidly created and deployed. Instead of deploying a physical server each time you needed a computer, a VM could take its place. These VMs were usually not as powerful as a full workstation, but they didn’t need to be. This advance made it much easier to add new machines to a computing environment. It was inefficient and costly, though. Each VM instance required a full operating system, so lots of duplicate code and processes would run on a single VM server, and lots of OS licenses had to be purchased. The industry kept working on better alternatives.
Containers
 
Containers (also known as Operating-System-Level Virtualization) provide a solution to this waste. A single container environment provides the “core” Operating System processes. Each container running in this environment is an isolated “user-space” instance. In other words, the instances share common functionality (file system, networking, etc.). This eliminates the duplicate OS-level processes. As a result, a single physical server can support a much larger volume of containers. Additionally, the cloud computing landscape lends itself very well to container architecture. Customers generally don’t want (or need) to worry about individual machines. It’s all “in the cloud.” Developers can code, test, and deploy containers to the cloud, never worrying about the hardware they are running on. Containers have exploded in popularity with the growth of cloud computing.
Docker
 
Docker (both the company and the product) is a big name in containerization. Docker began as an internal project at dotCloud, a Platform as a Service company. It soon outgrew its creator and debuted to the public in 2013. It is an open source project, and has rapidly become a leader in the Container space. “Google” is synonymous with “search”; you might say, “google it”. The same has almost become true for Docker: “use Docker” often means “use containers”. Docker is available on all major cloud platforms, with rapid growth since its release. Here are some key concepts from the world of Docker:
 
  • Image – the Docker Image is the file that holds everything necessary to run a Container. This includes:
      • the actual application code
      • a run-time environment, with all the OS services the application needs
      • any libraries needed by your application
      • environment variables and config files, such as connection strings and other settings
 
  • Container – a Container is a “copy” of an Image, either running or ready to run in Docker. There can be more than one Container copied from the same Image.
 
  • Networking – Docker allows different Containers to speak to each other (and the outside world). The code running in the Container isn’t “aware” that it’s running within Docker. It simply makes network requests (REST, etc), and Docker routes the calls.
 
  • Volumes – Docker offers Volumes to allow for shared storage between Containers.
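The Image-to-Container relationship above can be illustrated with a toy Python model. This is not the Docker SDK, just a sketch of how many Containers can be stamped out from one shared Image:

```python
# Toy model of the Image -> Container relationship described above.
# Illustrative only; this is not the Docker SDK.
class Image:
    def __init__(self, name, layers):
        self.name = name
        self.layers = layers          # app code, runtime, libs, config

class Container:
    def __init__(self, image):
        self.image = image            # many Containers share one Image
        self.running = False

    def start(self):
        self.running = True

# One Image can back multiple running Containers:
web_image = Image("my-web-app", ["app code", "runtime", "libraries"])
containers = [Container(web_image) for _ in range(3)]
for c in containers:
    c.start()

print(all(c.image is web_image for c in containers))  # → True
```

The key point the model captures: the Image is the immutable template, while Containers are cheap, disposable instances of it.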
 
The Docker “ecosystem” consists of a few main software components:
Docker Engine
 
Docker’s main platform is the Docker Engine. It is the software that hosts and runs the Containers. It runs on the physical host machine, and is the “sandbox” all the containers will live within. The Docker Engine consists of the following components:
 
  • The Server, or Daemon – the Daemon is the “brains” of the whole operation. This is the main process that manages all the other Docker pieces. Those pieces include Images, Containers, Networks, and Volumes.
 
  • REST API – The REST API allows programs to communicate with the Daemon for all their needs. This includes adding/removing Images, stopping/starting Containers, adjusting configuration, etc.
 
  • Command Line Interface (CLI) – allows command line interaction with the Docker Daemon. This is how end users interact with Docker. It uses the Docker REST API under the covers.
Docker Hub
 
Docker Hub is an enormous online library containing vast quantities of “pre-made” images. Like GitHub, except instead of hosting Git repositories, it hosts Docker images. For almost any software need, there is an image on Docker Hub that provides it. For example, you might need:
 
  • a Rails environment for web services
 
  • connected with a MySQL database
 
  • with Redis available for caching.
 
Docker Hub contains “Official” images for these types of things. “Pull” the required images to your local environment, and use them to build Containers. Complex, production-ready environments can be ready within minutes. Companies can also pay for private repositories to host their internal Docker images. Docker Hub offers a centralized location to track and share images, with history tracking, branching, and more. Like GitHub, except for Docker.
Docker Swarm
 
Docker Swarm is Docker’s open source Container Orchestration platform. Container Orchestration becomes important in large scale deployments: environments with tens, hundreds, or thousands of Containers. With this type of volume, manually tracking and deploying Containers becomes cost prohibitive. An Orchestration platform provides a “command center.” It monitors and deploys all the various Containers in an environment. Docker Swarm provides some of the same functionality as Kubernetes. It is simpler and less powerful, but easier to get started with. It uses the same CLI, making its usage familiar to a typical Docker user. We’ll get more into Container Orchestration below.
Alternatives
 
While Docker is the industry leader, there are alternatives. These include:
 
  • CoreOS’s rkt (Rocket) – the “pod-native” container engine. Developed by the CoreOS team with Kubernetes integration in mind, it is a competitor to Docker.
 
  • Cloud Foundry – adds a layer of abstraction on top of Containers. Allows you to provide the application, and not worry about the layering beneath. With this service, you’re not really focused on the Container layer.
 
  • DigitalOcean – a cloud provider whose virtual servers are called “droplets”. Droplets are VMs rather than containers, but like Cloud Foundry, DigitalOcean abstracts away some complexity, and there are Kubernetes options in its control panel.
 
  • “Serverless” services – major cloud providers like AWS and Azure offer “serverless” services. These allow companies to create simple webservices on the fly. No hardware, or hardware virtualization. No worries about the underlying platform. Not technically Containers, but offer support for many of the same use cases.

Kubernetes

Kubernetes is the industry leader in Container Orchestration. First, here’s an overview of what that is…
Container Orchestration
 
Containers are a very powerful tool, but in large environments, they can get out of hand. Different deployment schedules into different environment types. Tracking uptime, and knowing when things fall down. Networking spaghetti. Capacity planning. Tracking all that complexity requires more tools. As this technology has matured, Container Orchestration platforms have grown in importance. These orchestration engines offer some of the following benefits:
 
  • “Dashboard” for all the Containers. One place to watch and manage them all.
 
  • Automatic provisioning and deployment. Rather than individually spinning up Containers, the orchestration engine manages them for you. Push a button, adjust a value, and more Containers spring to life.
 
  • Redundancy – if a Container fails in the wild, an orchestration engine will notice the failure and put a new one in its place.
 
  • Scaling – as your workload grows, you may outgrow what you have. An orchestration engine detects capacity shortages. It adds new Containers to spread the load.
 
  • Resource Allocation – under all those Containers, you’re still dealing with real-life computers. Orchestration engines can manage and optimize those physical resources.
 
While there are several options available, Kubernetes has become the market leader.
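The redundancy and scaling behaviors above boil down to one reconciliation idea: compare the desired state with the actual state and act on the difference. Here is a minimal, purely illustrative sketch of that loop; real orchestrators are far more involved:

```python
# Minimal sketch of an orchestration reconciliation step: compare the
# desired container count with the actual count, and decide what to do.
# Purely illustrative.
def reconcile(desired, actual):
    """Return the actions needed to move `actual` toward `desired`."""
    if actual < desired:
        return [("start", desired - actual)]   # e.g. a Container crashed
    if actual > desired:
        return [("stop", actual - desired)]    # e.g. scaling down
    return []                                  # cluster matches desired state

print(reconcile(desired=5, actual=3))  # → [('start', 2)]
print(reconcile(desired=5, actual=7))  # → [('stop', 2)]
print(reconcile(desired=5, actual=5))  # → []
```

An orchestration engine effectively runs a loop like this continuously, which is how a failed Container gets replaced without anyone lifting a finger.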
Rise of Kubernetes
 
Kubernetes (Greek for “helmsman” or “pilot”) began at Google in 2014. It was heavily influenced by Google’s internal “Borg” system. Borg was an internal tool Google used to manage all their environments. Google open-sourced Kubernetes in 2014 and released version 1.0 in 2015. It has since grown to become one of the largest open source projects on the planet. All the major cloud providers offer Kubernetes solutions. Kubernetes is now the de facto Container Orchestration platform. This post goes into great detail about the growth of Kubernetes over the past couple of years.
Kubernetes architecture
 
At a very high level, Kubernetes helps manage large numbers of Containers. Simple enough, right? At a more granular level, Kubernetes consists of a Cluster managing lots of Nodes. It has one Master Node, and one-to-many Worker Nodes. These Nodes use Pods to deploy Containers to environments. As requirements scale, Kubernetes can deploy more Containers, Pods, and Nodes. Kubernetes tracks all the above, and adds/removes when needed. Here’s a closer look at all the concepts described above:
 
  • Cluster – A Cluster is an instance of a Kubernetes environment. It has a Master node and several Worker nodes.
 
  • Node – A Kubernetes Node is a process that runs on a server (physical or virtual). A node is either a Master node, or a Worker node. Together, Master and Workers manage all the distributed resources, both physical and virtual.
 
  • Master – the Master node is the control center for Kubernetes. It hosts an API server exposing a REST interface used to communicate with Worker nodes. The Master runs the Scheduler, which creates Containers on the various Worker Nodes. It contains the Controller Manager, which manages the current state of the cluster. If the cluster doesn’t match the desired state, the Controller Manager will correct it. For example, if Containers fail, it creates new Containers to take their place.
 
  • Worker – the Worker Nodes carry out the wishes of the Master Node. This includes starting Containers, and reporting back their status. As an environment needs to scale to more machines, Kubernetes adds more Worker Nodes.
 
  • Pods – A Pod is the smallest deployable unit in the Kubernetes object model. It consists of one or more Containers, storage resources, networking glue, and configuration. Kubernetes deploys Pods to Nodes. Docker is the main Container technology Kubernetes uses, but others are available.
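As a rough illustration of the Pod concept, here is a minimal Pod definition expressed as a Python dict mirroring the fields of a Kubernetes YAML manifest. This is simplified; the real schema has many more options:

```python
# A minimal Pod definition, expressed as a Python dict mirroring the
# fields of a Kubernetes manifest. Simplified for illustration.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-pod"},    # hypothetical name
    "spec": {
        "containers": [                 # a Pod holds one or more Containers
            {
                "name": "web",
                "image": "nginx:latest",
                "ports": [{"containerPort": 80}],
            }
        ]
    },
}

print(pod["kind"])                               # → Pod
print(len(pod["spec"]["containers"]))            # → 1
```

In practice you would write this as YAML and hand it to the cluster (e.g. via `kubectl apply`); Kubernetes then schedules the Pod onto a Worker Node for you.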
Alternatives
 
While Kubernetes is the front runner, there are alternative options for Container Orchestration. These include:
 
  • Docker Swarm – already mentioned above, this is Docker’s Container Orchestration offering. This has the advantage of coming from the same team that maintains Docker. It is also considered easier to use by some, and faster to get started. Additionally, Swarm uses the same CLI as Docker. This makes it easy to use for those already familiar with Docker.
 
  • Apache Marathon – a container swarm for Apache Mesos. Not as widespread or popular as Docker/Kubernetes. If you are already invested in the Apache ecosystem, this might be a good choice. This requires a decent level of Linux/Apache expertise to get started.
 
  • Nomad – this is a lightweight orchestration platform. It doesn’t feature all the bells and whistles of more advanced systems. It is simpler, though, which may appeal to some.
In Summary

With all this solid background in place, we are now better poised to make a decision. How to containerize everything? For starters, Docker is a must. While alternatives exist, Docker is the clear front runner. It has become the industry standard, and features extensive tooling and documentation. It is open source, and free to get started. You can’t go wrong using Docker as your container technology. Once things get big enough to orchestrate, you must make a decision. The best two choices seem to be:
 
  1. Docker Swarm – an easy stepping stone from simple Docker, Swarm is worth exploring first. Using the same CLI, you can grow your Docker environment to multiple Containers on several Machines. If you are able to manage everything this way, you might just stop there.
 
  2. Kubernetes – if Swarm doesn’t seem up to the task, it’s probably worth the leap to Kubernetes. It’s the leader in the orchestration space, which offers the same documentation and support advantages. It will grow as big as you need it, and supports the complications that arise with large-scale systems.
Iron.io
If your organization is looking to use Containers in the Cloud, Iron.io can help you get there. Iron.io supports Docker, Kubernetes, and other alternatives. Iron.io’s expert staff will help you intelligently scale your business on any of the major cloud platforms. Iron.io is trusted by brands such as Zenefits, Google, and Untappd. Let them help your business get containerized in the cloud!



Docker Jobs: 11 Awesome Jobs for 2019

If you’re searching for Docker jobs online, it can be a real challenge to find open positions that fit your skills.

A development job presents a fantastic opportunity to work at an innovative company. However, finding positions can be a time-consuming process. Here’s some advice for finding Docker jobs online. You’ll also find 11 open positions to consider.


Docker Skills Are The Next Best Thing

When it comes to automating the creation and deployment for container-based apps, Docker is the go-to technology. That’s why it’s one of the best skills you can possess as a developer in today’s marketplace.

Containers, being a lighter weight type of virtualization, are truly taking over. Docker has the hope of freeing developers from dependencies on software and other types of infrastructure. That means Docker’s approach is able to cut costs and boost efficiency.

Overall demand for DevOps skills has been steadily increasing since the early 2000s. As a developer, you recognize the importance of continuously expanding your skillset. Docker is the new thing you should be looking to sharpen up on.


The Benefits To Expect

Working a top-of-the-line position at an innovative new company means you’ll get to enjoy a number of different benefits. These are perks the general workforce doesn’t yet have access to.

First, innovations in workplace healthcare have brought in-office care to the scene. Other wellness programs are also being further emphasized. New perks are coming to big companies and innovative startups alike. And these are the places currently searching for Docker professionals.

Secondly, you’ll get to enjoy a strong work/life balance. This is thanks to a number of initiatives that larger companies are taking. Businesses are now working hard to support employees in living a healthier lifestyle. This includes paid time off and paid holiday. Oftentimes, sabbaticals are also offered that allow you to truly escape for a while.

Volunteering opportunities and other team-building outlets abound. They can help you find more meaning in your career. You can even find purpose in your personal life thanks to work-sponsored endeavors. This is all part of a widespread effort on behalf of companies. Many companies are trying to be more supportive of employees’ well-being.

Many modern workplaces feature on-site gyms and fitness centers. Personal coaching is often included. It’s also becoming more common for companies to pay for a fitness membership on behalf of workers.

Some companies offer wellness bonuses. So you can even get paid money for keeping yourself in tip-top shape. That’s right, some companies actually monetarily reward employees. Get paid to lose those extra inches or make strides to living a healthier lifestyle.

Of course, this all pays off in the end for the company. Study after study is proving how important work/life balance is. Studies are also proving how motivating it can be for a company to go the extra mile to support workers’ health. This is why newer companies are adopting and offering such neat programs.

If you’re focused on your family, you may even get the joy of parental leave. At the very least, this work perk will allow you to take time off for your family without getting penalized. The best companies even offer paid parental leave. That means you can take time off without adding any financial stress.


11 Docker Jobs to Consider

  1. Senior Software Developer at ThoughtWorks. Work with Fortune 500 clients as you work through business challenges. Your job is to spot poorly written code and fix it. Experience with Docker preferred.
  2. Senior Backend Engineer at AllyO. This is a fast-growing startup looking to build a strong team. They need an experienced and motivated individual. Experience with Docker required.
  3. DevOps / Python Developer at Lore IO. This is a well-funded startup in its early stages. In this position, you’ll be integrating Lore into cloud ecosystems. Experience with Docker preferred.
  4. Senior Python Developer at Mako Professionals. As a senior developer, you’ll spend about 25% of your time coding. You’ll also test systems and work with a collaborative team. Experience with Docker required.
  5. Senior Site Reliability Engineer at Procurant. Support and maintain services in this exciting position. Your position will scale systems using automation. Keeping up with evolving technologies is a must. Experience with Docker required.
  6. Senior Python Developer at Pearson. Lead development initiatives and work closely with scientists in this position. You’ll promote the use of new technologies too. Experience with Docker required.
  7. Senior Python Backend Developer at Mirafra. Design database architecture in this fast-paced environment. Your job includes delivering high performance applications. You’ll focus on scalability too. Experience with Docker preferred.
  8. Senior Database Administrator at Verisys. If you’re fun and energetic, this could be the right position for you. Work to build a next generation platform for healthcare credentialing. Experience with Docker preferred.
  9. Senior DevOps Engineer at Outset Medical. This privately held company has a number of investors backing it. Work on innovative medical technologies in this rewarding position. Experience with Docker required.
  10. Senior Site Reliability Engineer at Guardian Analytics. This company fights fraud in the financial industry. Your job will play a vital role in helping them keep consumers safe. Experience with Docker required.
  11. DevOps Engineer at Arthur Grand Technologies. Design and build automated systems in this high-paying position. Experience with Docker required.


Where to Find Docker Jobs

If you’re looking for Docker jobs, you should be looking on a number of different websites. These are offering a full list of open positions that you could potentially snag.

Indeed is one of the most popular job search platforms. You can also be looking on LinkedIn and other professional networking websites. Oftentimes, you’ll be able to find a great opportunity without ever looking at an official job ad.

If you have the right people in your network, have them put in a good word for you. This way, you could very well be the first person a company contacts. Be front of mind when they start looking for a professional with a strong Docker skillset.

You can also find plenty of new opportunities. Try websites like Monster and other job search platforms. Glassdoor is also a good website to visit. It can help you review a potential company that is hiring and make sure that they are a worthy employer.

On Glassdoor, you’ll often be able to see reviews from previous employees. They will share their experiences working with a particular employer. This information can be vital in helping you avoid bad companies. It can also aid you in understanding more about the company itself and what they are after.

You shouldn’t let a few bad reviews from disgruntled employees shake you. But if a company’s reviews seem genuine, it’s probably a good idea to take them into consideration.

As far as choosing a job search site to use to look for open positions, try using more than one. Many companies cross-post on different platforms to reach the most potential candidates. But some only post on a few select platforms (or even just one). That means looking on multiple sites can reveal the most opportunities to you.

It doesn’t hurt to apply to all of the open positions you find, but you probably won’t be doing that. It takes time and a bit of research to craft a good application. It’s best to follow the below tips and only apply to the positions you really want.


Tips for Applying

When applying for a new position, it’s always best to start by reviewing your resume. Your resume needs to highlight the fact that you’re up-to-date on all the relevant skills.

You should also go the extra mile to tailor your resume for each position you’re applying for. Write a cover letter targeted at each specific company’s offerings too.

For example, if you are reading a job opening ad that mentions X, Y, and Z, you definitely want your resume to reflect that. Don’t waste time on A, B, and C.

Emphasize your proficiencies. Focus on what aligns with the specific skills outlined in the job ad. Place requested skills at the top of any bulleted lists.

Additionally, you should clean up your resume by cutting out unnecessary experience. Positions that simply aren’t relevant to your application aren’t needed. It’s a common mistake to try and list out as much as possible. But, if you’re listing every job you’ve had since you first started working, that’s fluff.

Similarly, avoid adding filler skills like “Microsoft Suite”. You need to shorten things up. Your resume should reflect only your most relevant experience. It should contain only relevant skills so that it’s easy for the recruiter to see your value.

Most recruiters only skim a resume. By taking out all the unnecessary items, you’ll be sure to get their attention instantly. You’ll portray yourself as an exact match. It will be clear that you specialize in what the company requires.

The next step is reviewing your cover letter. Your cover letter is a must to include because it’s your chance to speak to the recruiter. In it, you can detail the information in your resume to explain why you are the perfect fit for the given position.

Again, you’ll want to tailor this to fit the specific job opening you’re going after. When applying, be certain that you include your contact information. There is no need to include references unless you get a call back requesting them.

Most companies today have a multi-step interview process. It typically begins with a phone interview. This gives you the chance to ask questions and explain why you like the position. You’ll also let them know why you’re a great fit.

If you pass the phone interview, the next step is generally an on-site interview. Depending on the size of the company, there may be multiple phone interviews. There may also be multiple on-site interviews. Generally, they will explain the process in the first phone conversation with you.

It may feel like a lot of hoops to jump through. This is especially true if you’re applying at a larger company. However, these are necessary steps that help them see if you’re the right fit for the company. At the same time, they will help you understand whether you think the company is the right fit for you.

One final tip of the application process is to ask questions when given the opportunity. You should formulate questions that articulate your interest in the position. These questions also showcase your understanding of their expectations.

You should do some basic research into the company in order to come up with the right questions. This will enable you to better understand the company. You’ll also get a glimpse of the work environment. It can even help you understand what they are after in the employee they hire.


Next Steps

Now that you have read all about the importance of Docker skills, you should feel inspired. The next step is to begin looking for open positions where you can show off your new skillset.

The job ads you look at should detail what specific skills the company is looking for. Look for a position that best matches your list of current skills. Keep in mind, of course, that not every position will be a good fit for you.

There is increasing emphasis on matching values and other aspects today. So, you may find a company isn’t the right match for you even if you seem like the right match for the company (and vice versa). Put in the effort and you’ll be able to find the right Docker job.

About Iron.io

Iron.io features a suite of developer tools aimed at empowering developers to work smarter. Save time with a suite of cloud-native products, with expert staff standing by every step of the way. With Iron.io, you can intelligently scale your business.

IBM MQ (IBM Message Queue): Overview and Comparison to IronMQ

Wherever two or more people need to wait in line for something, you can guarantee that there will be a queue: supermarkets, bus stops, sporting events, and more.

It turns out, however, that queues are also a useful concept in computer science.

The data structure known as a queue is a collection of objects that are stored in consecutive order. Elements in the queue may be removed from the front of the queue and inserted at the back of the queue (but not vice versa).

Queues are a good choice of data structure when you have items that should be processed one at a time in first-in, first-out (FIFO) order. One particularly important example is the message queue. In this article, we'll discuss IBM MQ, one of the most popular solutions for implementing message queues, and see how it stacks up against Iron.io's IronMQ software.
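In Python, this FIFO behavior can be sketched with the standard library's collections.deque. This is a minimal illustration of the data structure itself, not tied to any message queue product:

```python
from collections import deque

q = deque()

# Elements are inserted at the back of the queue...
q.append("first")
q.append("second")
q.append("third")

# ...and removed from the front, preserving FIFO order
assert q.popleft() == "first"
assert q.popleft() == "second"
```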

What is a Message Queue?

As the name suggests, a message queue is a queue of messages that are sent between different software applications. Messages consist of any data that an application wants to send, as well as a header at the start of the message that contains information about the data below it.

Message queues are necessary because different applications consume and produce information at different speeds. For example, one application may sporadically create large volumes of messages at the same time, while another application may slowly process messages, one after another.

The differing speeds at which these two applications operate can pose a problem. Because the first application's messages are all produced at the same time, the second application can only keep up with one of them; the rest will be lost unless they can be temporarily stored in a message queue.

A message queue is a classic example of asynchronous communication, in which messages and responses do not need to occur at the same time. The messages that are placed on the queue do not require an immediate response, but can be postponed until a later time. Emails and text messages are other examples of asynchronous communication in the real world.
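As a toy sketch of this asynchronous pattern, using Python's standard queue module (the message bodies here are invented for illustration):

```python
import queue

# Producer side: messages are dropped onto the queue and the producer moves on
inbox = queue.Queue()
for text in ("order received", "payment confirmed", "shipped"):
    inbox.put(text)

# Consumer side: messages are processed later, at the consumer's own pace
processed = []
while not inbox.empty():
    processed.append(inbox.get())

assert processed == ["order received", "payment confirmed", "shipped"]
```

The producer never waits for a response; the queue decouples the two sides in time.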

While implementing a basic message queue is a fairly straightforward task, complex IT environments may include communications between separate operating systems and network protocols. In addition, basic message queues may not be resilient when the network goes down, causing important messages to be lost.

For these reasons, many organizations have chosen to use “message-oriented middleware” (MOM): applications that make it easier for different software and hardware components to exchange messages.

There are a variety of MOM products on the market today (like Delayed Job or Sidekiq in the Ruby on Rails world), each one intended for different situations and use cases. In the rest of this article, we’ll compare and contrast two popular options for exchanging data via MOM software: IBM MQ and IronMQ.

What to Consider When Selecting a Message Queue

Message queues are essential to how different applications interact and exchange data within your IT environment. This means, of course, that choosing the right message queue solution is no easy task: picking the wrong one can drastically degrade your performance.

Below, we’ll discuss some of the most important factors to consider when choosing a message queue solution.

Features

Depending on the specifics of your IT environment, you may require any number of different features from your message queue. Here’s just a small selection of potential functionality:

  • Pushing and/or pulling: Most message queues include support for both pushing and pulling when retrieving new messages. In the first option, new messages are “pushed” to the receiving application in the form of a direct notification. In the second option, the receiving application chooses to “pull” new messages itself by checking the queue at regular intervals.

  • Delivery options: You may want to schedule messages at a specific time, or send messages more than once in order to make sure that they are delivered. If so, choose a message queue that includes support for these features.
  • Message priorities: Some messages are more critical or urgent than others. In order to receive the information you need in a timely manner, your message queue may use some way of migrating important messages up the queue (just like letting late passengers cut in front of you at the airport).
  • Persistence: Messages that are persistent are written to disk as soon as they enter the queue, while transient messages are only written to disk when the system is using a large amount of memory. Persistence improves the redundancy of your messages and ensures that they will be processed even in the event of a system crash.
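The priority idea in particular can be sketched with Python's heapq module. This is a hypothetical toy model, not the API of any product discussed here:

```python
import heapq

pq = []       # the queue, kept as a binary heap
counter = 0   # tie-breaker so equal-priority messages stay in FIFO order

def send(priority, body):
    global counter
    heapq.heappush(pq, (priority, counter, body))
    counter += 1

# Lower number = higher priority (a common convention)
send(5, "routine report")
send(1, "urgent alert")
send(5, "another report")

# The urgent message jumps the line, despite being sent second
assert heapq.heappop(pq)[2] == "urgent alert"
```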

Scalability and Performance

The more complex your IT environment is, the more difficult it is to scale it all at once. Instead, you can scale each application independently, decoupled from the rest of the environment, and use a message queue to communicate asynchronously.

Certain message queue solutions are better-suited for improving the scalability and performance of your IT environment. Look for options that are capable of handling high message loads at a rapid pace.

Pricing

Different message queue solutions may have different price points and pricing models that lead you to choose one over the other.

“As a service” is currently the predominant pricing model for message queues. This means that customers have a “pay as you go” plan in which they are charged by the hours and the computing power that they use. However, there are also prepaid message queue plans with an “all you can eat” pricing model.

Security

With hacks and data breaches constantly in the news, maintaining the security of your message queue should be a primary concern. Malicious actors may attempt to insert fraudulent messages into the queue and use them to exfiltrate data or gain control over your system.

Message queue solutions that use the Advanced Message Queuing Protocol (AMQP) include support for transport-level security. In addition, if the contents of the message itself may be sensitive, you should look for a solution that encrypts messages while in transit and at rest within the queue.

What is IBM MQ (IBM Message Queue)?

IBM Message Queue (IBM MQ) is a MOM product from IBM that seeks to help applications communicate and swap data in enterprise IT environments. IBM MQ calls itself a “flexible and reliable hybrid messaging solution across on-premises and clouds.” It includes support for a variety of different APIs, including Message Queue Interface (MQI), Java Message Service (JMS), REST, .NET, IBM MQ Light, and MQTT.

The IBM MQ software has been around in some form since 1993. Thanks to the widespread demand for real-time transactions on the Internet, IBM MQ and other message queue solutions have enjoyed a renewed popularity in recent years.

The benefits of using IBM MQ include:

  • Support for on-premises, cloud, and hybrid environments, as well as more than 80 different platforms.
  • Advanced Message Security (AMS) for encrypting and signing messages between applications.
  • Multiple modes of operation, including point-to-point, publish/subscribe, and file transfer.
  • A variety of tools for managing and monitoring queues, including MQ Explorer, the MQ Console, and MQ Script Commands.

On websites such as TrustRadius and IT Central Station, IBM MQ users mention a few advantages and disadvantages of the software. Some of the recurring themes in these reviews are:

  • IBM MQ is reliable and does its job well, without any lost messages.
  • The software helps to improve data integrity, availability, and security.
  • The user interface may be a little unintuitive and challenging, especially for first-time users.
  • Tools such as MQ Explorer seem to be “aging” and are not as effective as third-party solutions.
  • IBM MQ lacks certain integrations that would be useful in a modern IT enterprise environment.

IBM MQ vs. IronMQ

There’s no doubt that IBM MQ is a robust, mature message queue solution that fits the needs of many organizations. However, it’s far from the only MOM software out there. Offerings such as Iron.io’s IronMQ are highly viable message queue alternatives, and in many cases may be superior to market leaders such as IBM MQ.

What is IronMQ?

IronMQ is a messaging queue solution from Iron.io, a cloud application services provider based in Las Vegas. According to Iron.io, the IronMQ message queue is “the most industrial-strength, cloud-native solution for modern application architecture.”

The software includes support for all major programming languages and is accessible via REST API calls. Iron.io offers a number of different monthly and annual pricing models for IronMQ, ranging from the hobbyist all the way up to the large enterprise.

The benefits of IronMQ include:

  • Support for both push and pull queues, as well as “long polling” (holding a pull request open for a longer period of time).
  • The use of multiple clouds and availability zones, making the service highly scalable and resistant to failure. In the event of an outage, queues are automatically redirected to another zone without any action needed on the part of the user.
  • Backing by a high-throughput key/value data store. Messages are preserved without being lost in transit, and without the need to sacrifice performance.
  • Flexible deployment options. IronMQ can be hosted on Iron.io’s shared infrastructure or on dedicated hardware to improve performance and redundancy. In addition, IronMQ can run on your internal hardware in cases where data must remain on-premises.

IBM MQ vs. IronMQ: The Pros and Cons

Both IBM MQ and IronMQ are cloud-based solutions, which means they enjoy all the traditional benefits of cloud computing: better reliability and scalability, faster speed to market, less complexity, and so on.

Since it was created with the cloud in mind, IronMQ is particularly well-suited for use with cloud deployments. Because IronMQ uses well-known cloud protocols and standards such as HTTP, JSON, and OAuth, cloud developers will find IronMQ exceedingly simple to work with.

IronMQ users enjoy access to an extensive set of client libraries, each one with easy-to-read documentation. The IronMQ v3 update has also made the software faster than ever for customers who need to maintain high levels of performance.

Customers who already use Iron.io’s IronWorker software for task management and scheduling will find IronMQ to be the natural choice. According to one IronMQ user in the software industry, “I can run my Workers and then have them put the finished product on a message queue – which means my whole ETL process is done without any hassle.”

On the other hand, because it’s part of the IBM enterprise software family, IBM MQ is the right choice for organizations that already use IBM applications. If you already have an application deployed on IBM WebSphere, then it will be easier to simply use it together with IBM MQ.

What’s more, IBM MQ is capable of working well in many different scenarios with different technologies, including mainframe systems. However, some customers report that IBM MQ has a clunky, “legacy” feel to it and is difficult to use in an agile IT environment.

While it’s definitely able to compete with IBM MQ, IronMQ also stacks up favorably against other message queue solutions such as RabbitMQ and Kafka. For example, RabbitMQ’s use of the AMQP protocol means that it is more difficult to use and can only be deployed in limited environments. According to various benchmarks, IronMQ is roughly 10 times as fast as RabbitMQ.

IronMQ Customer Reviews

Of course, reading long lists of software features can only go so far. You need customer feedback in order to make sure that the application really does what it says on the tin.

The good news is that IronMQ has a number of happy customers who are all too eager to share their positive experiences. John Eskilsson, technical architect at the engineering firm Edeva, raves about IronMQ in his testimonial on FeaturedCustomers:

“IronMQ has been very reliable and was easy to implement. We can take down the central server for maintenance and still rely on the data being gathered in IronMQ. When we start up the harvester again, we can consume the queue in parallel using IronWorker and be back to real-time quickly.”

In a review on G2, one user working in marketing and advertising praised IronMQ’s reliability and performance:

“My experience with the message queues was a good one. I had no issues and found the message queues to be very reliable. The website has good monitoring showing exactly what is happening in real time.”

The world’s most popular websites may receive millions of page hits per day, and more during times of peak activity. Businesses such as CNN need a robust, feature-rich, highly available message queue solution in order to get the right information to the right people. CNN is one of many enterprise clients that uses IronMQ as its message queue solution.

IBM MQ vs. IronMQ: Which is Right for You?

At the end of the day, no one can tell you which message queue solution is right for your company’s situation. Both IBM MQ and IronMQ have their advantages and drawbacks, and only one may be compatible with your existing IT infrastructure.

In order to make the final decision, draw up a list of the features and functionality that are most important to you in a message queue. These may include issues such as persistence, fault tolerance, high performance, compatibility with existing software and hardware, and more.

Fortunately, you can also try IronMQ before you buy. Want to find out why so many clients are proud to use IronMQ and other Iron.io products? Request a demo of IronMQ, or sign up today for a free, full-feature 14-day trial of the software.

Amazon SQS (Simple Queue Service): Overview and Tutorial

What’s a Queue?  What’s Amazon SQS?


Queues are a powerful way of connecting software systems. They allow for asynchronous communication between different systems and are especially useful when the throughput of those systems is unequal. Amazon offers its version of queues with Amazon SQS (Simple Queue Service).

For example, if you have something like:

  • System A – produces messages periodically in huge bursts
  • System B – consumes messages constantly, at a slower pace

With this architecture, a queue would allow System A to produce messages as fast as it can, and System B to slowly digest the messages at its own pace.
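A rough simulation of this burst-producer/slow-consumer pattern, using Python's thread-safe queue.Queue (the system names and message contents are invented for illustration):

```python
import queue
import threading
import time

line = queue.Queue()

def system_a():
    # System A: produces a burst of messages all at once
    for i in range(5):
        line.put(f"message-{i}")

def system_b(results):
    # System B: drains the queue one message at a time, more slowly
    for _ in range(5):
        results.append(line.get())
        time.sleep(0.01)  # simulate slower processing

results = []
producer = threading.Thread(target=system_a)
consumer = threading.Thread(target=system_b, args=(results,))
producer.start(); consumer.start()
producer.join(); consumer.join()

# The queue absorbed the burst; nothing was lost, and order was preserved
assert results == [f"message-{i}" for i in range(5)]
```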

Queues have played an integral role in software architecture for decades, along with core technology concepts like APIs (Application Programming Interfaces) and ETL/ELT (Extract, Transform, Load / Extract, Load, Transform). With the recent trend toward microservices, they have become more important than ever.

Amazon Web Services

AWS (Amazon Web Services) is one of the leading cloud providers in the world, and anyone writing software is probably familiar with them. AWS offers a wide variety of “simple” services that traditionally had to be implemented in-house (e.g., storage, databases, computing). The advantages offered by cloud providers are numerous, and include:

  • Better scalability – your data center is a drop in their ocean. They’ve got mind-boggling capacity. And it’s spread around the world.
  • Better reliability – they hire the smartest people in the world (oodles of them) to ensure these services work correctly, all the time.
  • Better performance – you can typically harness as much computing horsepower as you’d like with cloud providers, far exceeding what you could build in-house.
  • Better (lower) cost – nowadays, they can usually do all this cheaper than you could in your own data center, especially when you account for all the expertise they bring to the table. And many of these services employ a “pay as you go” model, charging for usage as it occurs. So you don’t have to pay the large up front cost for licenses, servers, etc.
  • Better security – their systems are always up to date with the latest patches, and all their smart brainiacs are also thinking about how to protect their systems.

If you have to choose between building out your own infrastructure, or going with something in the cloud, it’s usually an easy decision.

AWS Simple Queue Service

It comes as no surprise that AWS also offers a queueing service, simply named AWS Simple Queue Service. It touts all the cloud benefits mentioned before, and also features:

  • Automatic scaling – if your volume grows you never have to give a thought to your queuing architecture. AWS takes care of it under the covers.
  • Infinite scaling – while there probably is some sort of theoretical limit here (how many atoms are in the universe?), AWS claims to support any level of traffic.
  • Server side encryption – using AWS SSE (Server Side Encryption), messages can remain secure throughout their lifetime on the queues.

Their documentation is also top-notch. It’s straightforward to get started playing with the technology, and when you’re ready for serious, intricate detail, the documentation goes deep enough to get you there.

Example

Let’s walk through a simple example of using AWS SQS, using the line at the DMV (Department of Motor Vehicles) as the example subject matter. The DMV is notorious for long waits, forcing people to corral themselves into some form of a line. While this isn’t an actual use case anyone would (presumably) solve using AWS SQS, it will allow us to quickly demo their capabilities, with a real-world situation most are all too familiar with.

While AWS SQS has SDK libraries for almost any language you may want to use, I'll be using their REST interface for this exercise (with my trusted REST sidekick, Postman!).

Authorization

Postman makes it easy to set up all the necessary authorization using Collections. Configure the AWS authorization in the parent collection with the Access Key and Secret Access Key found in the AWS Console:

AWS SQS Authorization

Then reference that authorization in each request:

AWS SQS Create Parent Auth

Using this pattern, it’s easy to quickly spin up requests and put AWS SQS through its paces.

Creating a Queue

When people first walk in the door, any DMV worth their salt will give them a number to begin the arduous process. This is your main form of identification for the next few minutes/hours (depending on that day’s “volume”), and it’s how the DMV employees think of you (“Number 14 over there sure seems a bit testy!”).

Let’s create our “main queue” now, with the following REST invocation:

Request:

GET https://sqs.us-east-1.amazonaws.com?Action=CreateQueue&DefaultVisibilityTimeout=0&QueueName=MainLine&Version=2012-11-05

Response:

<CreateQueueResponse>
  <CreateQueueResult>
    <QueueUrl>https://sqs.us-east-1.amazonaws.com/612055710376/MainLine</QueueUrl>
  </CreateQueueResult>
  <ResponseMetadata>
    <RequestId>fa178e12-3178-5318-8d90-da20904943f0</RequestId>
  </ResponseMetadata>
</CreateQueueResponse>

Good deal. Now we’ve got a mechanism to track people as they come through the door.

Standard vs FIFO

One important detail should be mentioned: there are two types of queues within AWS SQS:

  • Standard – higher throughput, with “at least once delivery”, and “best effort ordering”.
  • FIFO (First-In-First-Out) – not as high throughput, but guarantees on “exactly once” processing, and preserving the ordering of messages.

Long story short, if you need things super fast, can tolerate messages out of order, and possibly sent more than once, Standard queues are the answer. If you need absolute guarantees on order of operations, no duplication of work, and don’t have huge throughput needs, then FIFO queues are the best choice.
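As a sketch, here is how a CreateQueue query string might be assembled with Python's standard library. Note two details of the current SQS Query API that differ slightly from the URLs in this walkthrough: queue attributes such as FifoQueue are passed as Attribute.N name/value pairs, and FIFO queue names must end in ".fifo":

```python
from urllib.parse import urlencode

BASE = "https://sqs.us-east-1.amazonaws.com"

def create_queue_url(queue_name, fifo=False):
    # Build the query string for a CreateQueue call (unsigned; real
    # requests also need AWS Signature Version 4 authentication)
    params = {
        "Action": "CreateQueue",
        "QueueName": queue_name,
        "Version": "2012-11-05",
    }
    if fifo:
        # FIFO is requested via an attribute pair, and the
        # queue name must carry the ".fifo" suffix
        params["Attribute.1.Name"] = "FifoQueue"
        params["Attribute.1.Value"] = "true"
    return BASE + "?" + urlencode(params)

url = create_queue_url("MainLine.fifo", fifo=True)
```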

We’d better make sure we create our MainLine queue using FIFO! While a “mostly in order” guarantee might suffice in some situations, you’d have a riot on your hands at the DMV if people started getting called out of order. Purses swinging, hair pulling – it wouldn’t be pretty. Let’s add “FifoQueue=true” to the query string to indicate that the queue should be FIFO:

Request

https://sqs.us-east-1.amazonaws.com?Action=CreateQueue&DefaultVisibilityTimeout=0&QueueName=MainLineFIFO&Version=2012-11-05&FifoQueue=true

Send Message

Now that we’ve got a queue, let’s start adding “people” to it, using the “SendMessage” action. Note that when using REST, we need to URL encode the payload. So something like this:

{
"name": "Ronnie Van Zandt",
"drivers_license_number": "1234"
}

Becomes this:

%7B%0A%20%20%20%22name%22%3A%20%22Ronnie%20Van%20Zandt%22%2C%0A%20%20%20%22drivers_license_number%22%3A%20%221234%22%0A%7D

There are many ways of accomplishing this; I find the urlencoder site to be easy and painless.
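If you'd rather not use a website, Python's standard urllib.parse can do the same encoding (a small sketch using the same example JSON payload as above):

```python
from urllib.parse import quote, unquote

body = """{
   "name": "Ronnie Van Zandt",
   "drivers_license_number": "1234"
}"""

# safe="" forces every reserved character (including "/" and spaces) to be encoded
encoded = quote(body, safe="")

# The encoding is reversible, so the consumer gets back the exact original payload
assert unquote(encoded) == body
assert encoded.startswith("%7B")  # "{" percent-encodes to %7B
```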

Here’s the final result:

Request

https://sqs.us-east-1.amazonaws.com/612055710376/MainLineFIFO?Version=2012-11-05&Action=SendMessage&MessageBody=%7B%0A%20%20%20%22name%22%3A%20%22Ronnie%20Van%20Zandt%22%2C%0A%20%20%20%22drivers_license_number%22%3A%20%221234%22%0A%7D

Response:

<SendMessageResponse>
  <SendMessageResult>
    <MessageId>00ad4e10-4394-450f-8902-4a9cf4b96b95</MessageId>
    <MD5OfMessageBody>b9f28edc9c6dc9fe2a86f5ae8efb2364</MD5OfMessageBody>
  </SendMessageResult>
  <ResponseMetadata>
    <RequestId>97a41dd4-5d15-59e0-b9f5-49e02fb4384d</RequestId>
  </ResponseMetadata>
</SendMessageResponse>

After this call, we've got young Ronnie standing in line at the DMV. Thanks to AWS's massive scale and performance, we can leave Ronnie there as long as we'd like. And we can add as many people as we'd like; with AWS SQS's capacity, we could have a line around the world. But that would be horrible customer service, so someone needs to find out what Ronnie needs!

ReceiveMessage

At the DMVs I’ve been to, there’s usually a large electronic sign on the counter that will display the next lucky person’s number. You feel a brief pulse of joy when your number displays, and rush to the counter on a pillow of euphoria, eager to get on with your life. How do we recreate this experience in AWS SQS?

Why, “ReceiveMessage”, of course! (Note we are invoking it using the actual QueueUrl passed back by the CreateQueue call above)

Request

https://sqs.us-east-1.amazonaws.com/612055710376/MainLineFIFO?Action=ReceiveMessage&Version=2012-11-05

Response

<ReceiveMessageResponse>
  <ReceiveMessageResult>
    <Message>
      <MessageId>00ad4e10-4394-450f-8902-4a9cf4b96b95</MessageId>
      <ReceiptHandle>AQEBjq8apWDfLXE0pCbpABh6Wdx70ZbszY0k38t9u8Mrny1Jz+Q522Vwvvf4xLqzQHfjoHQd56JJJEM67LJG5tQ/YSCibFSNCg8jfadyNMbqBH48/WxmpYunI3w1+GbDCL2tlKkDz/Lm9akGasgDZEBtw6U9jw1Bu6XbzNuNiw5jfVzjC99E38KSvxvZMHfmSi3Wo2XOBAcfU0oTpLmGMwccGiRUOp4XtS38nMXHhBdtKSS+U11N38cJAtlnxHQJkXmTAk7ZdvpxJNtnOrXmeGN00vtf6OSyLJzRJJieYHNtxIyxojcGZcnJQ6dTveMWQ1A1FOzschRuavl3wtftDS/YSt5sDNeBcjEOE+Y0QE+18qiWaDZc+nlaetcBvqmt6Hbt</ReceiptHandle>
      <MD5OfBody>b9f28edc9c6dc9fe2a86f5ae8efb2364</MD5OfBody>
      <Body>{
   "name": "Ronnie Van Zandt",
   "drivers_license_number": "1234"
}</Body>
    </Message>
  </ReceiveMessageResult>
  <ResponseMetadata>
    <RequestId>6a43b589-940c-52a4-bc62-e1bde75e22e4</RequestId>
  </ResponseMetadata>
</ReceiveMessageResponse>

One thing to keep in mind: ReceiveMessage doesn't actually remove the item from the queue. The item will remain there until explicitly removed. The Visibility Timeout can be used to ensure multiple readers don't attempt to process the same message.

So how do we permanently mark the item as “processed”? By deleting it from the queue!

DeleteMessage

The DeleteMessage action is what removes items from a queue. There’s not really a good analogy with the DMV here (thankfully, DMV employees can’t “delete” us), so we’ll just go with an example. DeleteMessage takes the ReceiptHandle returned by the ReceiveMessage endpoint as a parameter (once again, encoded):

Request

https://sqs.us-east-1.amazonaws.com/612055710376/MainLineFIFO?Action=DeleteMessage&Version=2012-11-05&ReceiptHandle=AQEBjq8apWDfLXE0pCbpABh6Wdx70ZbszY0k38t9u8Mrny1Jz%2BQ522Vwvvf4xLqzQHfjoHQd56JJJEM67LJG5tQ%2FYSCibFSNCg8jfadyNMbqBH48%2FWxmpYunI3w1%2BGbDCL2tlKkDz%2FLm9akGasgDZEBtw6U9jw1Bu6XbzNuNiw5jfVzjC99E38KSvxvZMHfmSi3Wo2XOBAcfU0oTpLmGMwccGiRUOp4XtS38nMXHhBdtKSS%2BU11N38cJAtlnxHQJkXmTAk7ZdvpxJNtnOrXmeGN00vtf6OSyLJzRJJieYHNtxIyxojcGZcnJQ6dTveMWQ1A1FOzschRuavl3wtftDS%2FYSt5sDNeBcjEOE%2BY0QE%2B18qiWaDZc%2BnlaetcBvqmt6Hbt

Response

<DeleteMessageResponse>
  <ResponseMetadata>
    <RequestId>a69c7042-d0e2-546a-bdf7-2476a30b89df</RequestId>
  </ResponseMetadata>
</DeleteMessageResponse>

And just like that, Ronnie is able to leave the DMV with his newly printed license, all thanks to AWS SQS!
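To tie the receive/delete semantics together, here is a toy in-memory model of them in Python. This is my own simplified sketch of the behavior, not the SQS API itself:

```python
import time

class ToyQueue:
    """Simplified model of SQS-style receive/visibility/delete semantics."""

    def __init__(self, visibility_timeout=60.0):
        self.visibility_timeout = visibility_timeout
        self._messages = {}  # receipt handle -> [body, invisible_until]
        self._counter = 0

    def send(self, body):
        self._counter += 1
        self._messages[str(self._counter)] = [body, 0.0]

    def receive(self):
        # Return the first visible message and hide it from other readers
        now = time.time()
        for handle, entry in self._messages.items():
            if entry[1] <= now:
                entry[1] = now + self.visibility_timeout
                return handle, entry[0]
        return None

    def delete(self, handle):
        # Only an explicit delete permanently removes a message
        self._messages.pop(handle, None)

q = ToyQueue()
q.send("Ronnie Van Zandt")

handle, body = q.receive()
assert body == "Ronnie Van Zandt"
assert q.receive() is None  # invisible while being processed

q.delete(handle)
assert q.receive() is None  # gone for good
```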


IronMQ vs AWS SQS

While AWS SQS has many strengths, there are advantages to using IronMQ that make it a more compelling choice, including:

Client Libraries

IronMQ features an extensive set of client libraries, with clear, straightforward documentation. Getting started with IronMQ is a breeze. After playing with both SDKs, I found the IronMQ experience to be easier.

Speed

IronMQ is much faster than SQS, with V3 making it faster and more powerful than ever before. In high-volume systems, bottlenecks in your messaging architecture can bring the whole system to its knees. Faster is better, and IronMQ delivers in this area.

Push Queues

IronMQ offers something called Push Queues, which supercharge your queueing infrastructure with the ability to push messages out. Rather than relying solely on services pulling messages off queues, this allows your queues to proactively send messages to designated endpoints and recipients. This powerful feature expands the communication options between systems, resulting in faster workflow completion and more flexible architectures.

Features

Check out the comparison matrix between IronMQ and its competitors (including SQS). It clearly stands out as the most feature-rich offering, with functionality not offered by SQS (or anyone else, for that matter).

IronMQ offers a free 14-day trial so you can see for yourself how it compares to SQS. Sign up here.

In Con-q-sion

Hopefully this simple walkthrough is enough to illustrate some possibilities of using AWS SQS for your queueing needs. It is easy to use, with incredible power, and their SDKs support a variety of languages. And may your next trip to the DMV be just as uneventful as young Ronnie's.

Happy queueing!

Iron at APIdays, see you there?

First off, we’re giving away a few free tickets to the SF APIdays conference on July 31st.  Comment about your favorite API on this post for a chance to win a free ticket!

With the freebie out of the way: we're huge fans of APIdays (and APIs in general) and love to reference this landscape diagram. If the landscapes weren't moving so fast, we'd probably have a copy printed on our office wall alongside the Cloud Native Landscape diagram.

APIs are everywhere

As engineers, most of us are inherently API-minded. Others, not so much. It's only been in the last 5 or 6 years that the idea behind APIs has gained public mindshare. Following Executive Order 13571 in 2011, the Obama administration directed federal agencies to deploy Web APIs, which put APIs in the public spotlight. There's been a lot of progress in the public sector, and now we're holding conferences about APIs in general. These are steps in the right direction.

Iron <3's APIs

We build all of our products with APIs in mind. All of our client libraries for each of our products use our HTTP APIs, and we've received a lot of praise for building API-centric and cloud-agnostic services. Internally, we rely on a lot of APIs as well. We use API management solutions like DreamFactory to coordinate data sources, RingCaptcha for SMS verification, and Zapier to tie disparate services together. We obviously use all of the public cloud APIs directly as well.

What APIs do you use?

There are many other APIs we use that I didn't list. What are some of your favorites? Comment below and you might be sent a free ticket to APIdays. If you're already going, let us know, as we'd be happy to meet up!