Save the Date: DockerCon 2019 Meet-up & Drink-Up

Mark your calendars next week for an awesome 3-day conference that brings the container technology community together to learn, share and connect. DockerCon 2019 will consist of workshops, keynotes, expo hall and of course a few drink-ups. (Our team will be co-hosting one of them!) You can expect a variety of attendees from across the world including C-level executives, systems admins, architects and developers.

Docker and Iron

So, what is Docker to Iron? Iron’s Worker product is a hosted background job solution that lets you run your containers with dynamic scale and detailed analytics. It can run short-lived containers quickly, or containers that need to work across multiple days. Think of it as serverless containers. (Plus, it’s deployable in any environment: cloud, hybrid, or on-premises.) While we have expanded its capabilities, IronWorker was built around Docker containers and has a long-standing relationship with Docker, so it’s only natural for us to be there.

The Drink-up

We’re teaming up with CircleCI and Sauce Labs for a drink-up on Wednesday, May 1st, from 5:00PM – 8:00PM. Whether you are attending DockerCon or will just be in the Bay Area, we would love to meet up. It will not only be a great time, but also a great opportunity to network and chat over drinks. Rumor has it that vintage board games might make an appearance as well.

We look forward to seeing you there!

Don’t forget to RSVP to the drink-up so that we can check you in at the door!

Google Cloud Functions Alternatives


Google Cloud Functions is a serverless environment that many developers use. It enables programmers to write simple functions and attach them to events related to their cloud infrastructure. It’s a fully managed environment, which means there is no need to allocate servers or other equipment in order for your functions to run.

However, Google Cloud Functions is far from the only tool on the market. In fact, many competitors are out there. Some companies have taken note of the features and capabilities that Google Cloud Functions lacks, making for an even better solution. Here’s a look at serverless environments, an overview of Google Cloud Functions, and an alternative worth considering.

Understanding the Concept of Serverless

The solutions featured here all work on the same premise. They are all based in the cloud, without servers. Here’s an explanation of the terminology you’ll come across when seeing these services described.

  • Functions: A function is a simple code snippet that serves just one purpose. Functions get associated with events.
  • Events: A service sets off an event when something changes state.
  • Triggers: Events happen regardless of whether developers define functions and triggers, but the trigger is what connects a function to an event.
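To make the three terms concrete, here is a tiny, purely illustrative Python sketch. The event shape and the names (`on_file_uploaded`, the `triggers` table) are invented for this example, not any provider’s actual API:

```python
# Hypothetical sketch: a function, an event, and a trigger wired together.

def on_file_uploaded(event):
    """Function: a snippet that serves one purpose -- reacting to an upload."""
    return f"processing {event['name']} ({event['size']} bytes)"

# Event: emitted by a service when something changes state (a file appears).
event = {"type": "storage.object.finalize", "name": "report.csv", "size": 2048}

# Trigger: the mapping that connects the event type to the function.
triggers = {"storage.object.finalize": on_file_uploaded}

result = triggers[event["type"]](event)
print(result)  # processing report.csv (2048 bytes)
```

In a managed platform, the trigger table is maintained by the provider; you only supply the function and declare which event type should invoke it.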

These lead into the five principles that should guide any developer who’s adopting a serverless environment.

1. Execute Code on Demand Using a Compute Service


Regardless of the type of serverless environment you choose, all of them serve the same purpose: executing code. With a cloud environment, you will not have to run your own virtual machines (VMs), servers, or containers. Everything stays in the cloud.

In the case of open source architecture, you can run the compute service in your public, private, or hybrid cloud environment. Alternatively, you can pay a vendor monthly and run your cloud environment as a FaaS (function as a service). FaaS means all you have to focus on is custom code. The vendor handles the cloud and coding environment for you.

2. Write Stateless Functions That Serve a Single Purpose

The single responsibility principle (SRP) should guide your work in a serverless environment. Focus on writing functions, or code snippets, that each serve just one purpose. Small, single-purpose functions are easier to create, and this explains the shift to function-based thinking: it allows for much easier testing and debugging.

Compared to the traditional approach of developing an entire app or container all at once, building single functions is an easier process. The focus on microservices allows for greater agility. This granular workflow puts the focus on specific actions, enabling developers to test and launch sooner. In turn, this approach increases efficiency.
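A minimal illustration of the idea, with invented function names: two stateless functions, each with one job, each testable with no environment to set up.

```python
# Illustrative only: two small, stateless functions, each with one purpose.
# Statelessness means the output depends only on the input -- no globals,
# no disk, no shared session -- which is what makes them trivial to test.

def normalize_email(raw: str) -> str:
    """Single purpose: canonicalize an email address."""
    return raw.strip().lower()

def is_valid_email(email: str) -> bool:
    """Single purpose: a minimal validity check."""
    return "@" in email and "." in email.split("@")[-1]

# Each function can be tested in isolation:
assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
assert is_valid_email("alice@example.com") is True
assert is_valid_email("not-an-email") is False
```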

3. Design Pipelines Driven by Events

In these environments, developers build pipelines driven by events, allowing even the most complex computing tasks to run easily. The main purpose of a serverless environment is to allow you to create code that connects various services. These pipelines allow you to do just that, giving you the capacity to make different services interact to give you the results you want.

With an event-driven pipeline that’s set up to be push-based, there should be minimal (if any) human intervention. Everything should run as hands-off as possible, requiring less involvement from users and streamlining the workflow.
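The push-based idea can be sketched in a few lines of plain Python. The stage names and payloads here are hypothetical; a real pipeline would use a provider’s event system rather than a dict of callables:

```python
# A minimal push-based pipeline sketch: each stage emits an event that
# names the next stage, so the whole run needs no human intervention.

def extract(event):
    return {"stage": "transform", "rows": [" a ", " b "]}

def transform(event):
    return {"stage": "load", "rows": [r.strip().upper() for r in event["rows"]]}

def load(event):
    return {"stage": "done", "rows": event["rows"]}

STAGES = {"extract": extract, "transform": transform, "load": load}

def run_pipeline(event):
    """Push each event to the stage it names until the pipeline finishes."""
    while event["stage"] != "done":
        event = STAGES[event["stage"]](event)
    return event

print(run_pipeline({"stage": "extract"}))  # {'stage': 'done', 'rows': ['A', 'B']}
```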

4. Create a Powerful Front End Interface

If you’re creating an entire system that consists of both a back and front end, and hosting it in a serverless environment, then this principle is also important. On the other hand, if you find yourself building a pipeline meant to transform a file or some other kind of system, you might not need to worry about a front end.

However, in situations where a front end interface is applicable, the idea is to make it as smart as possible. This requires developers to allocate as much logic as they can to the front end interface. In other words, this front end should interact directly with services. In turn, this will work to decrease how many serverless functions you need to run in the environment.

Obviously, there will always be situations where you cannot or should not set up direct communication with services. Security and privacy are some major examples of why that might be the case. In those situations, it’s best to use the serverless functions for these particular actions. With that said, when you can put something on the front end, you should definitely opt to.

5. Use Third-Party Applications and Services


The beauty of a serverless environment is that you can connect any number of third-party applications and services to reduce how much custom code you have to create. Obviously, this saves developers a great deal of time. By connecting other services, you’re able to leverage the code they’ve already created and use it for certain elements of your application.

Of course, when using third-party apps, always run tests and consider the disadvantages of doing so. Typically, using a third-party app means giving up control to make things faster. There is always a trade-off in some form. And this may prove worth it in many situations, but the goal is to make the right decision to best use your serverless environment.

Choosing a Cloud Environment

If you think that a serverless environment is the right fit for your development, the first decision you have to make is which provider to opt for. There are many out there, but Google Cloud Functions has quickly become one of the most well-known. Despite having just joined the market in 2017, Google Cloud Functions is widely used.

Amazon was actually the one to introduce the idea of a serverless cloud environment. It did so back in 2014 with the release of Amazon Web Services (AWS) Lambda.

Obviously, these services come with a number of benefits, including added flexibility and reduced cost. But choosing Google Cloud Functions will lead to some disadvantages and challenges, too. At the top of the list are the risky business of vendor lock-in, the cost, and the documentation (or lack thereof).

To help you decide which of the main providers might be the right fit for your business, here’s an overview of Google Cloud Functions next to IronFunctions, a popular alternative.

Overview of Google Cloud Functions

Google describes Cloud Functions as an “event-driven serverless compute platform.” It’s touted as the easiest way to get your code up and running in a cloud environment. Since it’s based in the cloud, it’s also easy to scale and extra reliable. There aren’t any servers to manage, either.

Like other cloud software, you only pay for what you need. In the case of Google Cloud Functions, you’ll only pay for the resources it takes to run your code. The cloud also makes it easy to extend functionality through the connection of other cloud-based services.

Cloud Functions has many use cases, including backends for serverless applications where you can run the backends for your IoT and mobile applications, plus integrate third-party APIs and services. You can also use it for real-time data processing to process files and streams, and use extract, transform, and load (ETL) functions on an event-driven basis.

Additionally, this environment can function as a foundation for intelligent applications. With it, you can set up and run many smart apps, like virtual assistants and chatbots for your business. You can also take advantage of image, video, and sentiment analysis.

The core philosophy behind Cloud Functions is that developers should begin focusing more on small, individual features that they can deploy independently rather than building an entire container or application at once. This is similar to other cloud-based development tools today.

Overview of IronFunctions


IronFunctions is a direct competitor of Google Cloud Functions. This open-source environment also allows for serverless computing with the same capabilities of Cloud Functions. One of the main highlights is that it allows you to avoid vendor lock-in. That’s because IronFunctions works on any cloud, whether public, private, or hybrid.

You can also implement Functions directly into any application you are building. Its design purposely allows for seamless and simple integration into your environment. You’ll find that you’ll spend less time working on tasks thanks to advanced job processing. Plus, good job management allows you to focus on building better software.

Functions enables you to better use the infrastructure you already have. You can also easily integrate other tools you want to use, whether they’re commercial or open source. It supports Docker Swarm, Kubernetes, Mesosphere, and countless other popular applications.

To set up Functions, simply implement it into your app and quickly establish the infrastructure and job processing you need to handle. You can then begin focusing on building your software as tasks that typically overload your CPU begin to run seamlessly in the background.
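As a rough sketch of what a function body can look like in this container-based style, here is a small Python script that reads a JSON payload from STDIN and writes its result to STDOUT, which is how IronFunctions commonly passes data to a containerized function. The `"name"` payload field is invented for illustration and not part of any contract:

```python
# Sketch of a container-friendly function: JSON in on STDIN, JSON out on STDOUT.
# The payload shape ({"name": ...}) is hypothetical.
import json
import sys

def handle(payload: dict) -> dict:
    """The function's single job: build a greeting from the payload."""
    name = payload.get("name", "world")
    return {"message": f"Hello, {name}!"}

def main(stdin=sys.stdin, stdout=sys.stdout):
    raw = stdin.read()
    payload = json.loads(raw) if raw.strip() else {}
    json.dump(handle(payload), stdout)
```

Because the function is just a program reading STDIN and writing STDOUT, you can package it in any Docker image and test it locally by piping JSON into it.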

It’s a simple and free serverless environment. But cost isn’t the only area where IronFunctions competes with Google Cloud Functions. As such, comparing them side by side is a must in order to determine which is right for your business.

Deciding What’s Right for You


There are many pros to using a serverless environment. The main advantage is that it enables developers to focus on the application they’re trying to build instead of worrying about the infrastructure they’re running it in.

Most developers spend a significant amount of time implementing, maintaining, and fixing their environment in between application development. The notion is that, by running a serverless environment, all of that legwork is no longer needed. Thus, developers have more time to focus on what they do best: developing apps.

The scalability of a serverless, cloud-based environment also makes it very appealing and functional. That’s why the likes of Netflix, AOL, Reuters, and countless other companies already run serverless environments. Fortunately, thanks to its scalability (in either direction), the cloud is also very accessible to smaller companies and even individual developers.

In fact, the reduced cost of operating in a serverless environment is definitely one of its highlights. The price of a serverless environment depends entirely on how many executions you run. There’s no pre-purchasing capacity and overpaying. You’ll only pay for what you need. Plus, without needing to manage servers, there is no cost to keep people active 24/7 managing and fixing said servers when things break.

Another major advantage of operating inside of a serverless environment is that it’s very easy to create multiple environments for development. You can do it in just a click. This is different compared to traditional environments that take planning, developing, staging, and test runs prior to being able to use the new architecture.

With all of this in mind, here’s a look at the primary options on the market for running a serverless environment.

Determining which serverless environment is right for your applications can prove overwhelming. Here’s a simple run-through to help you decide.


Cost

Cloud Functions runs on Google’s infrastructure, so you have to pay to use it. Meanwhile, IronFunctions is open source. It’s made to run in any environment, be it public, private, or hybrid. That said, IronFunctions is also hosting some serverless environments for select customers who request it.

If you choose to use Google Cloud Functions, your cost will depend on the number of invocations (flat rate of $0.40 per million), compute time ($0.00001 per GHz-second), and networking (flat rate of $0.12 per GB). When deploying, you’ll need to specify how much memory your function requires.
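Using the rates above, a rough monthly bill can be sketched in a few lines. The workload numbers (invocations, compute per call, egress) are hypothetical, and this ignores the free tier:

```python
# Back-of-the-envelope Cloud Functions bill, using the listed rates.
# Workload figures are made up; the free tier is ignored for simplicity.

INVOCATION_RATE = 0.40 / 1_000_000   # $ per invocation ($0.40 per million)
COMPUTE_RATE = 0.0000100             # $ per GHz-second
NETWORK_RATE = 0.12                  # $ per GB of outbound data

def estimate_monthly_cost(invocations, ghz_seconds, egress_gb):
    return (invocations * INVOCATION_RATE
            + ghz_seconds * COMPUTE_RATE
            + egress_gb * NETWORK_RATE)

# 10M invocations, each using ~0.1 GHz-second of compute, with 5 GB egress:
cost = estimate_monthly_cost(10_000_000, 10_000_000 * 0.1, 5)
print(f"${cost:.2f}")  # $14.60
```

Even a toy model like this makes the pricing levers visible: compute time dominates once functions do real work, while the per-invocation charge stays small.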

With this in mind, you can experiment with Google Cloud Functions for free using the introductory tier. This is a good way to see for yourself whether or not it will work for you before committing a large portion of your budget to it.

Programming Languages

When it comes to supported programming languages, Google Cloud Functions has limited options. It was one of the last serverless environments to join the game, and it shows. The platform continues to support only Node.js 6, Node.js 8 (Beta), and Python 3.7 (Beta).

Meanwhile, IronFunctions leverages containers. This enables you to use any language that Linux or Windows supports. Plus, you can use AWS Lambda format and import any Lambda functions you’ve created before. The CLI tool also makes it easy to create and deploy new functions.


Documentation

As it’s an open-source project, IronFunctions has extensive documentation. There is also a community of developers who are always happy to help if you post your issue or question online.

Google Cloud Functions, on the other hand, has limited documentation. Many have also brought up its community support, or lack thereof, in many reviews. With that said, the paid support for Google Cloud Functions is highly reliable.

User Interface

When it comes to the user interface of either solution, both Google Cloud Functions and IronFunctions have a great layout. With that said, Google Cloud Functions can sometimes feel a bit disjointed as you go about using it.

BigQuery, for instance, uses a slightly different user interface, which can make it feel detached from the core features. Stackdriver, Google Cloud’s logging service, suffers from the same design problem.

The Bottom Line

IronFunctions allows you to take advantage of all the perks of a serverless environment without getting locked in to one vendor or paying hefty fees. You can run it in your own environment, on your own terms. It’s highly flexible and allows for deep integration so that you can produce the best apps possible.

To sum it up, IronFunctions is open-source, budget-friendly, and ready to run your projects. Want to learn more about IronFunctions and everything it can do? Click here.

ECS Alternatives

If you are in the field of software development, you have probably heard of containers. A containerized application has myriad benefits, including efficiency, cost, and portability. One of the big questions with this technology is where and how to host it: in-house, in the cloud, or somewhere else? Amazon Web Services (AWS) offers a few options for container hosting. Elastic Container Service (ECS) is one of those offerings. ECS provides robust container management, supercharged with the power of AWS. However, there are other options out there, and ECS alternatives may better fit your needs. An important decision like this justifies some shopping around.

There are several things to consider when choosing a container host. One size does not fit all! Each customer has their own in-house skillset and existing cloud integrations.

This post will illustrate the important things to consider. We will dig into details around alternatives to ECS. We will compare and contrast the offerings, looking at the pros and cons of each. With this background information, you will be better educated on this decision. You can then decide which solution best fits your business needs.


AWS Elastic Container Service

AWS Elastic Container Service (ECS) is Amazon’s main offering for container management. Utilizing ECS allows you to take advantage of AWS’s scale, speed, security, and infrastructure. With this power, you can launch one, tens, or thousands of containers to handle all your computing needs. ECS also ties in with all the other AWS services, including databases, networking, and storage.

ECS offers two main options for containers:

  • AWS Elastic Compute Cloud (EC2): EC2 is AWS’s virtual machine service. Using this option, you are responsible for selecting the servers you want in your container cluster. Once that’s complete, AWS handles the management and orchestration of the servers.
  • AWS Fargate: Fargate abstracts things another level, eliminating the need to manage EC2 instances. Rather, you specify the CPU and memory requirements, and AWS provisions EC2 instances under the covers. This offers all the power of ECS, without worrying about the details of the actual underlying servers.
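The difference between the two launch types shows up directly in the task definition. Here is a hedged sketch that only builds the request body; the family name and container image are made up, and nothing here actually calls AWS:

```python
# Sketch of an ECS task definition for the Fargate launch type.
# With Fargate you declare CPU/memory instead of choosing EC2 instances.
# The family/image names are hypothetical.

def fargate_task_definition(family, image, cpu="256", memory="512"):
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",   # Fargate tasks require awsvpc networking
        "cpu": cpu,                # CPU units as a string; "256" = 0.25 vCPU
        "memory": memory,          # MiB as a string
        "containerDefinitions": [
            {"name": family, "image": image, "essential": True}
        ],
    }

task = fargate_task_definition("web-api", "example/web-api:latest")
# A dict like this could be passed to boto3's
# ecs_client.register_task_definition(**task).
```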

Pros and Cons

Here are some things to consider with the ECS offerings:

  • Integration with AWS: One of the biggest decisions around using ECS is its integration and reliance on AWS. This is either a pro or a con, depending on your circumstances. If you are already using AWS, adding ECS to the mix is a straightforward proposal. However, if you are not currently using AWS, there is a considerable learning curve to get up and running.
  • More Automation: ECS provides layers of automation over your containers. Customers without in-house expertise to manage the lower-level complexities may prefer this. However, it may also bind the hands of someone who wants more control over their container landscape. Fargate takes the automation a step further. Again, that could be good or bad, depending on your situation.
  • Cost: In this age of modern cloud computing, it is typically more cost effective to run everything in the cloud. No more hardware to purchase, networking snafus to resolve, or expertise to hire and retain. However, the cost differences in the container offerings are more nuanced. If you have container expertise in-house, it might be more cost effective to run your own container solution on top of AWS services. If not, you may save money using something like ECS.
  • Deployments: One key drawback to ECS is that it is not available on-premise. While all cloud may be fine for many businesses, there are instances where maintaining legacy services or closed networks is preferential if not mandatory.
  • Vendor lock in: In order to use ECS, you must be on AWS cloud. This also means the possibility of getting locked into a single technology provider if steps are not taken to painstakingly avoid this.

Google Cloud/Kubernetes

Similar to AWS, Google offers “all the things” on its cloud services. This includes servers, storage, databases, networking, and other technologies. Google’s solution for managing containers is Kubernetes, an industry leader in container orchestration. Kubernetes began as a project within Google, which eventually made it open source and available to the public. Since then, it has become one of the strongest options for container orchestration, and it is a service that all the major cloud providers offer. Google’s managed offering, similar to AWS’s ECS, is called Google Kubernetes Engine, or GKE for short.


Pros and Cons

There are some pros and cons of using Google for your container services:

  • Integration with Google services: Like the AWS decision, you need to consider whether you currently use Google Cloud services. If you are already heavily invested there, adding Kubernetes to the top makes sense. If you are not, then it may introduce a large amount of time and cost to the equation.
  • Familiarity with Kubernetes: This is a big one. If you have in-house expertise with Kubernetes, you’ll feel comfortable running it in Google Cloud. If not, there’s a fairly steep learning curve to get there. Kubernetes is not for the faint-hearted.
  • Less Automation: With Kubernetes, Google puts more power (and responsibility) in the hands of their customers. Some customers may prefer that level of control. Others may not want to worry about these lower-level details.
  • Deployments: As with AWS, a key drawback is that GKE is not available for on-premise deployments.
  • Vendor lock in: In order to use GKE, you must be on GCP. Again, this means the possibility of getting locked into a single technology provider if steps are not taken to avoid this.

Microsoft Azure


Rounding out the offerings of the “Big Three” cloud providers is Microsoft’s Azure. It offers a few flavors of container management, including the following:

  • Azure Kubernetes Service (AKS): Azure provides hosting for a Kubernetes service, and with it, the same pros and cons. Good for customers with Kubernetes know-how, maybe not for those without.
  • Azure App Service: This is a more limited option, where a small set of Azure-specific application types can run within hosted containers.
  • Azure Service Fabric: Service Fabric allows for hosting an unlimited number of microservices. They can run in Azure, on premises, or within other clouds. However, you must use Microsoft’s infrastructure.
  • Azure Batch: This service runs recurring jobs using containers.

Pros and Cons

Here are some pros and cons of the Azure offerings:

  • Confusion: The list above illustrates the many container-based services Azure offers. There are many “Azure-specific” technologies at play here. It can be hard to differentiate where the containerization stops and the Azure-specific things begin.
  • Integration with Azure Services: If you are already using Azure for other services, using its container offerings makes sense. If not, you’ll need to climb the Azure learning curve. As with the other cloud providers, this introduces time and resource expenses.
  • Less (or More?) Automation: The Azure offerings run the gamut. They start with no management (Azure Container Registry) to fully managed (Azure App Service and Azure Service Fabric). Once educated on all the features, pros, and cons of each, you may find a solution that perfectly meets your needs. Or, you might possibly drown in the details.
  • Deployments: Differing from both AWS and GCP, Azure Service Fabric is actually available on-premise. However (and it’s a big however), you must use Microsoft servers that Azure provides. By going down this route, you are virtually guaranteed to be locked into the Azure/Microsoft technology architecture with no easy way out.
  • Vendor lock in: See above, as with both GCP and AWS, vendor lock-in is difficult to avoid and expensive to leave.

Iron.io

Another ECS alternative that may surprise you is Iron.io. It provides container services but shields customers from the underlying complexities. This may be perfect for customers not interested in developing large amounts of in-house expertise. Iron.io offers a container management solution called Worker: a hosted background job solution supporting a variety of computing workloads. Iron.io allows for several deployment options (on its servers, on your servers, in the cloud, or a combination of these). It manages all your containers and provides detailed analytics on their performance. By handling the low-level details, Iron.io allows you to focus on your applications. You focus on your business; they’ll worry about making sure it all runs correctly.

Pros and Cons

Here are some things to know about Iron.io:

  • Easy to Use: For customers that want the benefits of containerization without having to worry about the lower-level details, Iron.io is perfect. Focus on your applications and let the pros worry about infrastructure.
  • Flexible: For customers that have Docker/Kubernetes expertise, Iron.io provides a hybrid solution. You host the hardware and run the workers there, while Iron.io provides automation, scheduling, and reporting. You don’t have to give up what you already have to gain what Iron.io has to offer. Iron also offers a completely on-premise deployment of Worker, which allows installing Worker in environments with high compliance and security requirements.
  • Powerful: Iron.io can scale from one to thousands of parallel workers, easily accommodating all sizes of computing needs.
  • Deployments: Unique to IronWorker is the ability to deploy fully on-premise, as well as hybrid and fully cloud.
  • No Vendor Lock-in: Another unique aspect of IronWorker is the ability to avoid being locked into any single vendor. It is cloud agnostic, so it will run on any cloud, and migration is virtually a one-click process. This means operational expenses are kept to a bare minimum. It also means deploying redundantly to multiple clouds is an easy, efficient process.


Containerization is the future of computing. The need to own and run our own servers (or even our own operating systems) is slowly fading. The big question is where to start. Customers with Docker expertise and existing cloud provider integrations may find a container solution from a big cloud provider to be the best choice. For customers just starting out in this field, or those looking to add management and analytics to an existing solution, Iron.io adds a good deal of power. Iron.io will grow with you, and with initial architectures in place, other options will unfold.

With this information in hand, you’re better prepared to answer some big questions. May your containers go forth and multiply!

Ready to get started with IronWorker?

Start your free 14-day trial: no cards, no commitments needed. Sign up here.

Introducing: Computerless™

Iron was one of the pioneers of Serverless, so we’re excited to announce that we’ll also be one of the first companies to offer the next generation of compute:  It’s called Computerless™.

Unlike Serverless, this technology removes the physical machine completely.  Our offering piggy-backs off the recent developments in fiber optic technology developed at the University of Oxford.  If you haven’t heard about this breakthrough, we’ll do our best to explain:

Researchers have found a way to control how light travels at the molecular level, thus being in complete control of the resulting attenuation.  Molecular gates can then be created, and state stored in finite wavelengths. It’s somewhat equivalent to qubits in quantum computing, but in the case of optical fiber, it’s a physical reality.

The end result of this technological release allows for computers to be fully encapsulated in fiber optic cable.  The usual components needed are now mapped 1-to-1, via light. This has allowed Iron’s infrastructure to completely change.  While we’ve run our infrastructure on public clouds like AWS and GCP in the past, we’ve been able to leave that all behind. We’re now able to push our entire suite of products into optical cable itself:

Iron’s new and improved infrastructure on a cheap plot of land in Arkansas

In the next few months, we’ll be pushing all of our customers’ sensitive data into the cables shown above, as well as running all Worker jobs through them. We’re pretty sure the cables we purchased are for multi-tenant applications, so you can probably rest assured that we’re doing the right thing. In fact, NASA has already expressed an interest in licensing this technology from Iron. Other interested parties include the government of French Guiana and defense conglomerate Stark Industries.

Researchers have kind-of concluded that this technology is ready for prime time, and also are quick to state the fact that in 1998, The Undertaker threw Mankind off Hell In A Cell, and plummeted 16 ft through an announcer’s table.

On-Premises or On-Cloud? How to Make the Choice



It was once a tech buzzword, but cloud computing has become a mature best practice for companies of all sizes and industries. Even if you don’t know it, it’s highly likely that part or all of your business has already moved into the cloud.

Case in point: A whopping 96 percent of organizations now report that they use cloud computing in some form or fashion.

Despite this lofty statistic, many companies are also choosing to maintain some or all of their technology on-premises. So what exactly is the difference between on-premises and on-cloud? How can you make the choice for yourself? We’ll discuss the answers in this article.

What Is On-Premise Computing?

On-premise computing (also known as “on-premises”) is the traditional (and, until recently, the dominant) model of enterprise IT.

In the on-premises model, organizations buy their own hardware and software, and then run applications and services on their own IT infrastructure. On-premises applications are sometimes called “shrinkwrap.” This refers to the plastic film used to package commercial off-the-shelf software.

The term “on-premises” implies that the technology is physically located on the organization’s own property. This could be in the building itself or at a nearby facility. This grants the organization full control over the technology’s management, monitoring, configuration, and security.

Companies that use the on-premises model usually need to purchase their own software licenses and handle their own tech support. For these reasons, on-premise computing is well-suited for larger enterprises. They are more likely to have sizable IT budgets and skilled IT employees.

What Is Cloud Computing?


Cloud computing is an enterprise IT model in which hardware and/or software are hosted remotely, rather than on company premises.

There are two main types of cloud computing: public cloud and private cloud.

  • In a public cloud, a third party is responsible for providing your business cloud services, software, and storage via the internet. Your data is hosted within the cloud provider’s remote data center, separate from the data of other customers.
  • In a private cloud, your business owns or maintains its own cloud infrastructure. The cloud is provisioned for the use of a specific organization. Like the public cloud, software, storage, and services are provided remotely via the internet.

In addition to the public-private division, there are three different types of cloud computing that you should know about: IaaS, PaaS, and SaaS.

  • IaaS (infrastructure as a service): This is the most bare-bones cloud computing offering. Users rent IT infrastructure such as virtual machines (VMs), servers, networks, and storage and access them via the internet. However, they are responsible for managing all other aspects of the system: runtime, middleware, operating systems, applications, and data.
  • PaaS (platform as a service): PaaS includes all the services provided by IaaS, as well as the runtime, middleware, and operating system. Users are only responsible for managing the applications and data.
  • SaaS (software as a service): SaaS is an all-in-one offering that includes everything from the VMs and servers to the applications running atop them and the data that they use. A few examples of SaaS products are Dropbox, Google Apps, Microsoft Office 365, and Adobe Creative Cloud.
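
The responsibility split between the three models can be sketched in a few lines of Python. The layer names and cutoffs below are a simplified illustration, not an official taxonomy:

```python
# A rough sketch of the IaaS/PaaS/SaaS responsibility split described above.
# Layer names are illustrative, not an official taxonomy.
LAYERS = ["hardware", "storage", "networking", "virtualization",
          "operating system", "middleware", "runtime", "data", "applications"]

# Index into LAYERS where vendor responsibility ends for each model.
VENDOR_MANAGES_UP_TO = {"iaas": 4, "paas": 7, "saas": 9}

def customer_managed(model):
    """Return the layers the customer still manages under a given model."""
    cutoff = VENDOR_MANAGES_UP_TO[model]
    return LAYERS[cutoff:]

print(customer_managed("iaas"))  # OS up through applications
print(customer_managed("paas"))  # just data and applications
print(customer_managed("saas"))  # nothing
```

The takeaway: moving from IaaS toward SaaS shifts more of the stack onto the vendor, at the cost of less control.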

On-Premise vs. On-Cloud: The Pros and Cons

Rather than jumping on the cloud bandwagon, it’s important to perform a sober evaluation of the pros and cons of both on-premise and on-cloud for your own business needs. Now that we’ve defined both on-premise computing and cloud computing, let’s discuss which one is more convenient in terms of four considerations: cost, scalability, security, and backups.

Cost Model

In terms of cost, the cloud vs. on-premises comparison boils down to two different pricing models: capital expenses and operating expenses.

With on-premise computing, businesses need to make a large capital investment up front when buying hardware and software. They’re also responsible for any support and maintenance costs incurred during a product’s lifetime.

Cloud computing, meanwhile, is usually an operating expense. Businesses pay a monthly or annual subscription fee in order to have continued access to the cloud. Upgrades, support, and maintenance are the vendor’s responsibility and usually baked into the costs of the subscription.

Some companies find that the subscription model of cloud computing is more convenient for their purposes. Subscribing to a new service for a few months can be more cost-efficient for businesses that are looking to experiment, and for smaller businesses that don’t have large amounts of capital available. For example, Xplenty is a cloud-based ETL (extract, transform, load) tool that offers code-free transformations. The cost of paying for Xplenty’s platform monthly is less expensive than paying a data engineer to handle the process. However, research by firms such as Gartner has shown that both cost models are roughly equivalent over the long term.
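
To make the capex-versus-opex trade-off concrete, here is a toy break-even calculation. All dollar figures are invented for illustration:

```python
# Illustrative capex-vs-opex comparison; all dollar figures are made up.
def months_to_break_even(capex, monthly_maintenance, monthly_subscription):
    """Month at which cumulative on-premises cost drops below cumulative
    cloud cost, or None if the subscription stays cheaper for 10 years."""
    for month in range(1, 121):
        on_prem = capex + monthly_maintenance * month
        cloud = monthly_subscription * month
        if on_prem < cloud:
            return month
    return None

# $50k of hardware plus $500/month upkeep vs. a $2,000/month subscription:
print(months_to_break_even(50_000, 500, 2_000))  # → 34
```

In this made-up scenario the hardware pays for itself in under three years; with a cheaper subscription, the cloud could stay ahead indefinitely, which is why the comparison depends so heavily on your own numbers.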


Scalability

Scalability refers to a system’s capacity to easily handle increases in load. For example, a website that usually sees very little traffic could suddenly have thousands or millions of visitors if it starts to receive attention on social media.

Cloud computing is able to rapidly scale storage and services up and down during peaks and lulls in activity. This is tremendously helpful for companies that see frequent changes in demand, such as a greeting card e-commerce website that does most of its business during a few holidays.

When compared with the cloud, on-premise computing is fairly brittle and difficult to scale. Businesses that operate on-premises may need to buy powerful hardware that goes unused for much of the time, wasting time and resources.

Compliance and Security

For many organizations, concerns about security and compliance are the biggest reason that they haven’t yet moved their data and infrastructure into the cloud. Fifty-six percent of IT security decision-makers say that their company’s on-premises security is better than what they can have in the cloud.

Industries that handle sensitive personal information — such as health care, finance, and retail — have their own regulations about how this data can be stored and processed. In addition, many U.S. federal agencies have chosen to keep some or all of their workloads on-premises.

Nevertheless, despite popular concerns about the security of cloud computing, there has yet to be a large-scale breach of a major public cloud provider caused by a fault in its technology. The breaches involving Amazon Web Services, Microsoft Azure, and Google Cloud Platform have been due to human error. That points to one advantage of on-premise: containment of human errors.

As Adrian Sanabria, director of ThreatCare, says: “Since everything in the cloud is virtualized, it’s possible to access almost everything through a console. Failing to secure everything from the console’s perspective is a common (and BIG) mistake. Understanding access controls for your AWS S3 buckets is a big example of this. Just try Googling ‘exposed S3 bucket’ to see what I mean.”

With an on-premise workload, if a person makes a configuration error, the possibility of a breach is lessened because there is no single console for everything in their IT system. So a single error is less likely to result in a data loss, big or small. After all, human errors will persist for the foreseeable future.
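
As a minimal sketch of the kind of check Sanabria is describing, the snippet below scans an S3 ACL for grants to AWS’s well-known AllUsers group. The response dict mimics the shape returned by boto3’s get_bucket_acl, and the sample data is hypothetical:

```python
# Sketch: flag public grants in an S3 ACL. The dict below mimics the shape of
# boto3's get_bucket_acl response; the grantee URI for "everyone" is AWS's
# well-known AllUsers group.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_grants(acl):
    """Return the permissions an ACL grants to the whole internet."""
    return [g["Permission"] for g in acl.get("Grants", [])
            if g.get("Grantee", {}).get("URI") == ALL_USERS]

sample_acl = {  # hypothetical response for a misconfigured bucket
    "Grants": [
        {"Grantee": {"Type": "Group", "URI": ALL_USERS},
         "Permission": "READ"},
        {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
         "Permission": "FULL_CONTROL"},
    ]
}
print(public_grants(sample_acl))  # → ['READ']
```

A real audit would fetch the ACL for every bucket with boto3 and alert on any non-empty result; the point here is simply that a single misplaced grant is enough to expose a bucket.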

As a final note, both on-premises and cloud storage solutions support encryption for data while in transit and at rest.

Backups and Disaster Recovery


One of the biggest selling points of the cloud is the ability to securely back up your data and applications to a remote location.

Whether it’s a natural disaster or a cyberattack, the effects of a catastrophe can devastate organizations that are caught unprepared. According to FEMA, 40 to 60 percent of small businesses never reopen after suffering a disaster.

In an era when threats both natural and virtual are multiplying, it’s critical to have a robust strategy for disaster recovery and business continuity. Customers, employees, and vendors must all be assured that your doors will be reopened as soon as possible.

The benefits of the cloud as a backup strategy are clear. Data stored in the cloud will survive any natural disaster that befalls your physical infrastructure, thanks to its storage at a remote site.

However, this benefit is also a double-edged sword; restoring data from cloud backups is usually slower than restoring data from on-premises. Therefore, organizations that can afford it often choose a two-pronged backup strategy: on-premises backups as the first line of defense, as well as secondary backups in the cloud.

Hybrid Cloud: The Best of Both Worlds?

So far we’ve presented on-premises and the cloud as polar opposites. However, reality isn’t quite that simple. Fifty-one percent of organizations have chosen to pursue a “hybrid” cloud strategy.

In a hybrid cloud solution, businesses use a combination of both on-premises and cloud technology, mixing and matching as best suits their goals and requirements.

For example, the city government of Los Angeles has opted for a hybrid cloud deployment using both public cloud and on-premises infrastructure. Officials decided that data and applications from certain departments — such as emergency services, traffic control, and wastewater management — are too risky to host in the cloud.

Other enterprises are attracted by the features of the cloud, but are still content with their current on-premises deployment. These organizations choose a hybrid strategy for now, slowly migrating to the cloud while replacing their on-premises infrastructure piece by piece.

Still other companies prefer the business agility that a hybrid cloud strategy offers. Different software, data, and components can operate between different clouds and between the cloud and on-premises. These organizations usually have needs and objectives that evolve quickly, making flexibility an essential concern.


Whether you’re staying on-premises for now or you’re totally committed to the cloud, there’s no wrong answer when it comes to on-premises and on-cloud. Instead of blindly following trends, each business needs to examine its own situation to determine the best fit.

Here at Iron, we understand that each organization has a unique timeline and different goals for its enterprise IT. That’s why we offer both IronWorker and IronMQ in cloud, hybrid, and on-premise deployments. IronMQ is a high-performance message queue solution, while IronWorker is a highly flexible container orchestration tool that allows background task processing with ease.

Want to find out more? Get in touch with us to sign up for a free trial of IronWorker and IronMQ today.

Iron Shout-out: Scout APM


At Iron we love programming languages.  We started off with Ruby way back in the day, and eventually moved most of our latency-critical services to Golang.  Internally, our team is made up of some who love TypeScript, some who speak fluent Rust, and myself… I’m a big Erlang nerd at heart, so I’m obviously a big Elixir fan.

While the aesthetics and “pleasantness” of a language matter to its users, each language is a tool, and the right tool should be used for the right job. That often isn’t the case, though, and we rely on tooling to give us more insight into the repercussions of our language choices and implementations.

Enter Scout APM.  Out of all the SaaS applications we use, it’s (by far) the one that has saved us the most money.  It must be noted that it’s not a product you install and hope it magically solves all your performance issues and optimizes your infrastructure for you.  Scout APM is more like a map that shows you all the treasure chests. It’s up to you to go dig them up, however deep they may be buried.


After installing Scout APM for the first time, we looked for the lowest hanging fruit.  These were easy fixes like missing indexes in our database, N+1 queries, or slow external network requests.  These were thrown in our pipeline and resolved quickly, as they’re mostly one-line fixes. The next step for us was to identify the larger picture issues.  

Scout APM does a great job of giving you not just the fine details of a particular issue, but also the ability to take a step back and look at things from a bird’s-eye view.  In our case, we found ourselves using an ActiveRecord construct in many places in our platform that was causing huge spikes in memory usage, resulting in extreme process bloat.  This bloat led to churn, and… it definitely snowballed from there.

Our platform used to run on way too many machines and they ran hot.  After fixing most of our performance issues we were able to scale down our instance fleet significantly.  This was even after we went from 30 servers to 2 by moving a critical piece of our infrastructure from Ruby to Golang.

At the end of the day, the cost of Scout APM ended up being an insignificant percentage of what we were saving each month.  It took man hours to fix the actual issues themselves, but these performance enhancements ended up flowing into our pipeline like normal technical debt items.  The benefit of these items is that they were directly tied to decreasing operational costs.

It’s worth noting that we chose Scout APM for many reasons. One of the biggest was their fantastic customer support.  They went above and beyond to help answer our constant questions when we first started using their platform (and we asked A LOT of questions).  If you aren’t using an APM tool, or aren’t 100% happy with what you’ve got, the engineering team here at Iron highly recommends running with Scout APM.

Docker vs Kubernetes – How Do They Stack Up?

Docker and Kubernetes are two hot technologies in the world of software. Most software architectures are using them, or considering them. The question is often asked – Docker vs Kubernetes – which is better? Which one should we be using? As it turns out, this question misrepresents the two. These two technologies don’t actually do the same thing! They do complement each other nicely, however. In this post, we will explore the “Docker vs Kubernetes” question. We will dig into the backgrounds and details of both. We will also show how they differ. With this information, you can better decide how Docker and Kubernetes fit in your architecture. First, some background…
How Did We Get Here?
Before diving into the topic, let’s walk through a brief history of how we got here.
In the Beginning…
In the REALLY early days of computing (like, the 1960s), there was time sharing on mainframes. On the surface, this looked nothing like its modern-day counterparts. A room full of big iron, and perhaps a primitive text-based terminal. Lots of little lights. Very limited functionality. Yet, the concept is the same – one machine serving many users at once, each isolated from the others. While not practical for today’s needs, this technology planted the seed for the future. Around the 1980s and 1990s, computer workstations began to grow in prominence. Computers no longer required a room full of mainframe hardware. Instead, a server could fit on your desk. One in every home! In the software industry, these workstations became the main workhorses of web serving. This didn’t scale well to a large number of users and services, due to the expensive hardware. For most users, a beefy workstation offered far more capacity than one person required.
Virtual machines
Virtual Machines (VMs) offered a solution to this problem. Full virtualization allowed one physical server to host several “VM instances”. Each instance featured its own copy of the Operating System. This allowed “machines” to be rapidly created and deployed. Instead of deploying a physical server each time you needed a computer, a VM could take its place. These VMs were usually not as powerful as a full workstation, but they didn’t need to be. This advance made it much easier to add new machines to a computing environment. It was inefficient and costly, though. Each VM instance required a full operating system. Lots of duplicate code and processes would run on a single VM server. Lots of OS licenses had to be purchased. The industry kept working on better alternatives.
Containers
Containers (also known as Operating-System-Level Virtualization) provide a solution to this waste. A single container environment provides the “core” Operating System processes. Each container running in this environment is an isolated “user-space” instance. In other words, the instances share the common functionality (file system, networking, etc.). This eliminates the duplicate OS-level processes. As a result, a single physical server can support a much larger volume of containers. Additionally, the cloud computing landscape lends itself very well to container architecture. Customers generally don’t want (or need) to worry about individual machines. It’s all “in the cloud.” Developers can code, test, and deploy containers to the cloud, never worrying about the hardware they run on. Containers have exploded in popularity with the growth of cloud computing.
Docker
Docker (both the company and product) is a big name in containerization. Docker began as an internal project at dotCloud, a Platform as a Service company. It soon outgrew its creator and debuted to the public in 2013. It is an open source project, and has rapidly become a leader in the Container space. “Google” is synonymous with “Search”. You might say, “google it”. The same has almost become true for Docker – “use docker” means “use containers”. Docker is available on all major cloud platforms, with rapid growth since its release. Here are some key concepts from the world of Docker:
  • Image – the Docker Image is the file that holds everything necessary to run a Container. This includes:
      • the actual application code
      • a run-time environment, with all the OS services the application needs
      • any libraries needed for your application
      • environment variables and config files, such as connection strings and other settings
  • Container – a Container is a “copy” of an Image, either running or ready to run in Docker. There can be more than one Container copied from the same Image.
  • Networking – Docker allows different Containers to speak to each other (and the outside world). The code running in the Container isn’t “aware” that it’s running within Docker. It simply makes network requests (REST, etc), and Docker routes the calls.
  • Volumes – Docker offers Volumes to allow for shared storage between Containers.
The Docker “ecosystem” consists of a few main software components:
Docker Engine
Docker’s main platform is the Docker Engine. It is the software that hosts and runs the Containers. It runs on the physical host machine, and is the “sandbox” all the containers will live within. The Docker Engine consists of the following components:
  • The Server, or Daemon – the Daemon is the “brains” of the whole operation. This is the main process that manages all the other Docker pieces. Those pieces include Images, Containers, Networks, and Volumes.
  • REST API – The REST API allows programs to communicate with the Daemon for all their needs. This includes adding/removing Images, stopping/starting Containers, adjusting configuration, etc.
  • Command Line Interface (CLI) – allows command line interaction with the Docker Daemon. This is how end users interact with Docker. It uses the Docker REST API under the covers.
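To make the CLI-to-REST relationship concrete, here is a small mapping of common commands to the Engine API endpoints they roughly correspond to. Paths are illustrative; real requests carry an API version prefix such as /v1.43:

```python
# How common CLI commands map onto the Docker Engine REST API. Paths follow
# the Engine API conventions; the version prefix (e.g. /v1.43) is omitted.
CLI_TO_REST = {
    "docker ps":         ("GET",    "/containers/json"),
    "docker images":     ("GET",    "/images/json"),
    "docker start <id>": ("POST",   "/containers/<id>/start"),
    "docker stop <id>":  ("POST",   "/containers/<id>/stop"),
    "docker rm <id>":    ("DELETE", "/containers/<id>"),
}

for cli, (method, path) in CLI_TO_REST.items():
    print(f"{cli:20} -> {method} {path}")
```

In other words, anything the CLI can do, a program talking to the Daemon’s REST API can do too.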
Docker Hub
Docker Hub is an enormous online library containing vast quantities of “pre-made” images. Like GitHub, except instead of hosting Git repositories, it hosts Docker images. For almost any software need, there is an image on Docker Hub that provides it. For example, you might need:
  • a Rails environment for web services
  • connected with a MySQL database
  • with Redis available for caching.
Docker Hub contains “Official” images for these types of things. “Pull” the required images to your local environment, and use them to build Containers. Complex, production-ready environments can be ready within minutes. Companies can also pay for private repositories to host their internal Docker images. Docker Hub offers a centralized location to track and share images. History tracking, branching, etc. Like GitHub, except for Docker.
Docker Swarm
Docker Swarm is Docker’s open source Container Orchestration platform. Container Orchestration becomes important in large scale deployments. Large environments, with tens, hundreds or thousands of Containers. With this type of volume, manually tracking and deploying Containers becomes cost prohibitive. An Orchestration platform provides a “command center.” It monitors and deploys all the various Containers in an environment. Docker Swarm provides some of the same functionality as Kubernetes. It is simpler and less powerful, but easier to get started with. It uses the same CLI, making its usage familiar to a typical Docker user. We’ll get more into Container Orchestration below.
While Docker is the industry leader, there are alternatives. These include:
  • CoreOS’s rkt (Rocket) – the “pod-native” container engine. Developed by a team deeply invested in the Kubernetes ecosystem, this is a competitor to Docker.
  • Cloud Foundry – adds a layer of abstraction on top of Containers. Allows you to provide the application, and not worry about the layering beneath. With this service, you’re not really focused on the Container layer.
  • Digital Ocean – a cloud provider whose “droplets” are actually lightweight virtual machines rather than containers. Like Cloud Foundry, it abstracts away some of the complexity beneath your application, and cloud/Kubernetes container options are available in its control panel.
  • “Serverless” services – major cloud providers like AWS and Azure offer “serverless” services. These allow companies to create simple webservices on the fly. No hardware, or hardware virtualization. No worries about the underlying platform. Not technically Containers, but offer support for many of the same use cases.


Kubernetes
Kubernetes is the industry leader in Container Orchestration. First, here’s an overview of what that is…
Container Orchestration
Containers are a very powerful tool, but in large environments, they can get out of hand. Different deployment schedules into different environment types. Tracking uptime, and knowing when things fall down. Networking spaghetti. Capacity planning. Tracking all that complexity requires more tools. As this technology has matured, Container Orchestration platforms have grown in importance. These orchestration engines offer some of the following benefits:
  • “Dashboard” for all the Containers. One place to watch and manage them all.
  • Automatic provisioning and deployment. Rather than individually spinning up Containers, the orchestration engine manages for you. Push a button, adjust a value, more Containers spring to life.
  • Redundancy – if a Container fails in the wild, an orchestration engine will notice the failure and put a new one in its place.
  • Scaling – as your workload grows, you may outgrow what you have. An orchestration engine detects capacity shortages. It adds new Containers to spread the load.
  • Resource Allocation – under all those Containers, you’re still dealing with real-life computers. Orchestration engines can manage and optimize those physical resources.
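The redundancy and scaling behaviors above all reduce to one idea: continuously compare desired state with actual state, then correct the difference. Here is a toy reconciliation loop illustrating that idea; it is a sketch of the concept, not how any real engine is implemented:

```python
# Toy reconciliation: compare desired vs. running container counts and
# compute the corrective actions an orchestration engine would take.
def reconcile(desired, running):
    """desired/running map service name -> container count.
    Returns a list of (service, action, count) corrections."""
    actions = []
    for service, want in desired.items():
        have = running.get(service, 0)
        if have < want:
            actions.append((service, "start", want - have))
        elif have > want:
            actions.append((service, "stop", have - want))
    return actions

# Two web containers died, and a worker is over-provisioned:
print(reconcile({"web": 5, "worker": 2}, {"web": 3, "worker": 3}))
# → [('web', 'start', 2), ('worker', 'stop', 1)]
```

A real engine runs a loop like this continuously, so failures are healed and load changes absorbed without an operator pushing buttons.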
While there are several options available, Kubernetes has become the market leader.
Rise of Kubernetes
Kubernetes (Greek for “helmsman”) began at Google in 2014. It was heavily influenced by Google’s internal “Borg” system. Borg was an internal tool Google used to manage all their environments. Google released and open-sourced Kubernetes in 2015. It has since grown to become one of the largest open source projects on the planet. All the major cloud providers offer Kubernetes solutions. Kubernetes is now the de facto Container Orchestration platform. This post goes into great detail about the growth of Kubernetes over the past couple of years.
Kubernetes architecture
At a very high level, Kubernetes helps manage large numbers of Containers. Simple enough, right? At a more granular level, Kubernetes consists of a Cluster managing lots of Nodes. It has one Master Node, and one-to-many Worker Nodes. These Nodes use Pods to deploy Containers to environments. As requirements scale, Kubernetes can deploy more Containers, Pods, and Nodes. Kubernetes tracks all the above, and adds/removes when needed. Here’s a closer look at all the concepts described above:
  • Cluster – A Cluster is an instance of a Kubernetes environment. It has a Master node and several Worker nodes.
  • Node – A Kubernetes Node is a process that runs on a server (physical or virtual). A node is either a Master node, or a Worker node. Together, Master and Workers manage all the distributed resources, both physical and virtual.
  • Master – the Master node is the control center for Kubernetes. It hosts an API server exposing a REST interface used to communicate with Worker nodes. The Master runs the Scheduler, which creates Containers on the various Worker Nodes. It contains the Controller Manager, which manages the current state of the cluster. If the cluster doesn’t match the desired state, the Controller Manager will correct it. For example, if Containers fail, it creates new Containers to take their place.
  • Worker – the Worker Nodes carry out the wishes of the Master Node. This includes starting Containers, and reporting back their status. As an environment needs to scale to more machines, Kubernetes adds more Worker Nodes.
  • Pods – A Pod is the smallest deployable unit in the Kubernetes object model. It consists of one or more Containers, storage resources, networking glue, and configuration. Kubernetes deploys Pods to Nodes. Docker is the main Container technology Kubernetes uses, but others are available.
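To make the Pod concept concrete, here is a minimal Pod manifest built as a plain Python dict. kubectl accepts JSON as well as YAML, so this could be saved to a file and applied with kubectl apply -f. The image and names are just examples:

```python
import json

# A minimal Pod manifest as a plain dict: one Container plus the metadata
# Kubernetes needs to schedule and track it. Names and image are examples.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "hello-pod", "labels": {"app": "hello"}},
    "spec": {
        "containers": [{
            "name": "hello",
            "image": "nginx:1.25",          # the container image to run
            "ports": [{"containerPort": 80}],
        }]
    },
}
print(json.dumps(pod, indent=2))
```

Even this tiny manifest shows the pattern: you declare the desired state (one nginx Container, listening on port 80), and Kubernetes works out where and how to run it.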
While Kubernetes is the front runner, there are alternative options for Container Orchestration. These include:
  • Docker Swarm – already mentioned above, this is Docker’s Container Orchestration offering. This has the advantage of coming from the same team that maintains Docker. It is also considered easier to use by some, and faster to get started. Additionally, Swarm uses the same CLI as Docker. This makes it easy to use for those already familiar with Docker.
  • Apache Marathon – a container orchestration platform for Apache Mesos. Not as widespread or popular as Docker/Kubernetes. If you are already invested in the Apache ecosystem, this might be a good choice. This requires a decent level of Linux/Apache expertise to get started.
  • Nomad – this is a lightweight orchestration platform. It doesn’t feature all the bells and whistles of more advanced systems. It is simpler, though, which may appeal to some.
In Summary

With all this solid background in place, we are now better poised to make a decision. How to containerize everything? For starters, Docker is a must. While alternatives exist, Docker is the clear front runner. It has become the industry standard, and features extensive tooling and documentation. It is open source, and free to get started. You can’t go wrong using Docker as your container technology. Once things get big enough to orchestrate, you must make a decision. The best two choices seem to be:
  1. Docker Swarm – an easy stepping stone from simple Docker, Swarm is worth exploring first. Using the same CLI, you can grow your Docker environment to multiple Containers on several machines. If you are able to manage everything this way, you might just stop there.
  2. Kubernetes – if Swarm doesn’t seem up to the task, it’s probably worth the leap to Kubernetes. It’s the leader in the orchestration space, which offers the same documentation and support advantages. It will grow as big as you need it, and supports the complications that arise with large-scale systems.
If your organization is looking to use Containers in the Cloud, Iron can help you get there. Iron supports Docker, Kubernetes and other alternatives. Iron’s expert staff will help you intelligently scale your business on any of the major cloud platforms. Iron is trusted by brands such as Zenefits, Google, and Untappd. Allow them to help get your business containerized in the cloud!


Docker Jobs: 11 Awesome Jobs for 2019

If you’re searching for Docker jobs online, it can be a real challenge to find open positions that fit your skills.

A development job presents a fantastic opportunity to work at an innovative company. However, finding positions can be a time-consuming process. Here’s some advice for finding Docker jobs online. You’ll also find 11 open positions to consider.


Docker Skills Are The Next Best Thing

When it comes to automating the creation and deployment of container-based apps, Docker is the go-to technology. That’s why it’s one of the best skills you can possess as a developer in today’s marketplace.

Containers, a lighter-weight form of virtualization, are truly taking over. Docker aims to free developers from dependencies on specific software and infrastructure. That means Docker’s approach can cut costs and boost efficiency.

Overall demand for DevOps skills has been steadily increasing since the early 2000s. As a developer, you recognize the importance of continuously expanding your skillset. Docker is the new thing you should be looking to sharpen up on.


The Benefits To Expect

Working a top-of-the-line position at an innovative new company means you’ll get to enjoy a number of different benefits. These are perks the general workforce has yet to gain access to.

First, innovations in workplace healthcare have brought in-office care to the scene. Other wellness programs are also being further emphasized. New perks are coming to big companies and innovative startups alike. And these are the places currently searching for Docker professionals.

Secondly, you’ll get to enjoy a strong work/life balance. This is thanks to a number of initiatives that larger companies are taking. Businesses are now working hard to support employees in living a healthier lifestyle. This includes paid time off and paid holidays. Oftentimes, sabbaticals are also offered that allow you to truly escape for a while.

Volunteering opportunities and other team-building outlets abound. They can help you find more meaning in your career. You can even find purpose in your personal life thanks to work-sponsored endeavors. This is all part of a widespread effort on behalf of companies. Many companies are trying to be more supportive of employees’ well-being.

Many modern workplaces feature on-site gyms and fitness centers. Personal coaching is often included. It’s also becoming more common for companies to pay for a fitness membership on behalf of workers.

Some companies offer wellness bonuses. So you can even get paid money for keeping yourself in tip-top shape. That’s right, some companies actually monetarily reward employees. Get paid to lose those extra inches or make strides to living a healthier lifestyle.

Of course, this all pays off in the end for the company. Study after study is proving how important work/life balance is. Studies are also proving how motivating it can be for a company to go the extra mile to support workers’ health. This is why newer companies are adopting and offering such neat programs.

If you’re focused on your family, you may even get the joy of parental leave. At the very least, this work perk will allow you to take time off for your family without getting penalized. The best companies even offer paid parental leave. That means you can take time off without adding any financial stress.


11 Docker Jobs to Consider

  1. Senior Software Developer at ThoughtWorks. Work with Fortune 500 clients as you work through business challenges. Your job is to spot poorly written code and fix it. Experience with Docker preferred.
  2. Senior Backend Engineer at AllyO. This is a fast-growing startup looking to build a strong team. They need an experienced and motivated individual. Experience with Docker required.
  3. DevOps / Python Developer at Lore IO. This is a well-funded startup in its early stages. In this position, you’ll be integrating Lore into cloud ecosystems. Experience with Docker preferred.
  4. Senior Python Developer at Mako Professionals. As a senior developer, you’ll spend about 25% of your time coding. You’ll also test systems and work with a collaborative team. Experience with Docker required.
  5. Senior Site Reliability Engineer at Procurant. Support and maintain services in this exciting position. Your position will scale systems using automation. Evolving technologies is a must. Experience with Docker required.
  6. Senior Python Developer at Pearson. Lead development initiatives and work closely with scientists in this position. You’ll promote the use of new technologies too. Experience with Docker required.
  7. Senior Python Backend Developer at Mirafra. Design database architecture in this fast-paced environment. Your job includes delivering high performance applications. You’ll focus on scalability too. Experience with Docker preferred.
  8. Senior Database Administrator at Verisys. If you’re fun and energetic, this could be the right position for you. Work to build a next generation platform for healthcare credentialing. Experience with Docker preferred.
  9. Senior DevOps Engineer at Outset Medical. This privately held company has a number of investors backing it. Work on innovative medical technologies in this rewarding position. Experience with Docker required.
  10. Senior Site Reliability Engineer at Guardian Analytics. This company fights fraud in the financial industry. Your job will play a vital role in helping them keep consumers safe. Experience with Docker required.
  11. DevOps Engineer at Arthur Grand Technologies. Design and build automated systems in this high-paying position. Experience with Docker required.


Where to Find Docker Jobs

If you’re looking for Docker jobs, you should be looking on a number of different websites. These are offering a full list of open positions that you could potentially snag.

Indeed is one of the most popular job search platforms. You can also look on LinkedIn and other professional networking websites. Oftentimes, you’ll be able to find a great opportunity without ever looking at an official job ad.

If you have the right people in your network, have them put in a good word for you. This way, you could very well be the first person a company contacts. Be front of mind when they start looking for a professional with a strong Docker skillset.

You can also find plenty of new opportunities on websites like Monster and other job search platforms. Glassdoor is also a good website to visit. It can help you review a potential company that is hiring and make sure that they are a worthy employer.

On Glassdoor, you’ll often be able to see reviews from previous employees. They will share their experiences working with a particular employer. This information can be vital in helping you avoid bad companies. It can also aid you in understanding more about the company itself and what they are after.

You shouldn’t let a few bad reviews from disgruntled employees shake you. But if a company’s reviews seem genuine, it’s probably a good idea to take them into consideration.

As far as choosing a job search site to use to look for open positions, try using more than one. Many companies cross-post on different platforms to reach the most potential candidates. But some only post on a few select platforms (or even just one). That means looking on multiple sites can reveal the most opportunities to you.

It doesn’t hurt to apply to every open position you find, but you probably won’t have time to. It takes time and a bit of research to craft a good application. It’s best to follow the tips below and only apply to the positions you really want.


Tips for Applying

When applying for a new position, it’s always best to start by reviewing your resume. Your resume needs to highlight the fact that you’re up-to-date on all the relevant skills.

You should also go the extra mile to tailor your resume for each position you’re applying for. Write a cover letter targeted at each specific company’s offerings too.

For example, if you are reading a job opening ad that mentions X, Y, and Z, you definitely want your resume to reflect that. Don’t waste time on A, B, and C.

Emphasize your proficiencies. Focus on what aligns with the specific skills outlined in the job ad. Place requested skills at the top of any bulleted lists.

Additionally, you should clean up your resume by cutting out unnecessary experience. Positions that simply aren’t relevant to your application aren’t needed. It’s a common mistake to try and list out as much as possible. But, if you’re listing every job you’ve had since you first started working, that’s fluff.

Similarly, avoid adding filler skills like “Microsoft Suite”. You need to shorten things up. Your resume should reflect only your most relevant experience. It should contain only relevant skills so that it’s easy for the recruiter to see your value.

Most recruiters only skim a resume. By taking out all the unnecessary items, you’ll be sure to get their attention instantly. You’ll portray yourself as an exact match. It will be clear that you specialize in what the company requires.

The next step is reviewing your cover letter. Your cover letter is a must to include because it’s your chance to speak to the recruiter. In it, you can detail the information in your resume to explain why you are the perfect fit for the given position.

Again, you’ll want to tailor this to fit the specific job opening you’re going after. When applying, be certain that you include your contact information. There is no need to include references unless you get a call back requesting them.

Most companies today have a multi-step interview process. It typically begins with a phone interview. This gives you the chance to ask questions and explain why you like the position. You’ll also let them know why you’re a great fit.

If you pass the phone interview, the next step is generally an on-site interview. Depending on the size of the company, there may be multiple phone interviews. There may also be multiple on-site interviews. Generally, they will explain the process in the first phone conversation with you.

It may feel like a lot of hoops to jump through. This is especially true if you’re applying at a larger company. However, these are necessary steps that help them see if you’re the right fit for the company. At the same time, they will help you understand whether you think the company is the right fit for you.

One final tip of the application process is to ask questions when given the opportunity. You should formulate questions that articulate your interest in the position. These questions also showcase your understanding of their expectations.

You should do some basic research into the company in order to come up with the right questions. This will enable you to better understand the company. You’ll also get a glimpse of the work environment. It can even help you understand what they are after in the employee they hire.


Next Steps

Now that you have read all about the importance of Docker skills, you should feel inspired. The next step is to begin looking for open positions where you can show off your new skillset.

The job ads you look at should detail what specific skills the company is looking for. Look for a position that best matches your list of current skills. Keep in mind, of course, that not every position will be a good fit for you.

There is increasing emphasis on matching values and other aspects today. So, you may find a company isn’t the right match for you even if you seem like the right match for the company (and vice versa). Put in the effort and you’ll be able to find the right Docker job.


About Iron.io

Iron.io features a suite of developer tools. The aim of Iron.io is to empower developers to work smarter. Save time with a suite of Cloud Native products. Expert staff will stand by every step of the way. With Iron.io, you can intelligently scale your business.

IBM MQ (IBM Message Queue): Overview and Comparison to IronMQ

Wherever two or more people need to wait in line for something, you can guarantee that there will be a queue: supermarkets, bus stops, sporting events, and more.

waiting in line

It turns out, however, that queues are also a useful concept in computer science.

The data structure known as a queue is a collection of objects that are stored in consecutive order. Elements in the queue may be removed from the front of the queue and inserted at the back of the queue (but not vice versa).

Queues are a good choice of data structure when you have items that should be processed one at a time in first-in, first-out (FIFO) order. One prime example is the message queue. In this article, we’ll discuss IBM MQ, one of the most popular solutions for implementing message queues, and see how it stacks up against Iron.io’s IronMQ software.
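
As a minimal sketch of that structure, Python’s standard library `deque` captures the behavior in a few lines (the message strings are invented for the example):

```python
from collections import deque

# Elements are inserted at the back and removed from the front,
# giving first-in, first-out (FIFO) order.
queue = deque()
queue.append("first message")   # insert at the back
queue.append("second message")
queue.append("third message")

first_out = queue.popleft()     # remove from the front
second_out = queue.popleft()
```

The element that entered first is always the one that leaves first, which is exactly the property message queues rely on.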

What is a Message Queue?

As the name suggests, a message queue is a queue of messages that are sent between different software applications. Messages consist of any data that an application wants to send, as well as a header at the start of the message that contains information about the data that follows.

message in a bottle

Message queues are necessary because different applications consume and produce information at different speeds. For example, one application may sporadically create large volumes of messages at the same time, while another application may slowly process messages, one after another.

The differing speeds at which these two applications operate can pose an issue. If a burst of messages is produced all at once, the slower application can only handle one at a time, and the rest risk being lost unless they can be temporarily stored within a message queue.

A message queue is a classic example of asynchronous communication, in which messages and responses do not need to occur at the same time. The messages that are placed on the queue do not require an immediate response, but can be postponed until a later time. Emails and text messages are other examples of asynchronous communication in the real world.
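
A small sketch of this buffering, using Python’s thread-safe queue: the producer emits a burst all at once, while the consumer drains it one message at a time at its own slower pace, without losing anything.

```python
import queue
import threading
import time

buffer = queue.Queue()

def producer():
    # System A: produces a burst of messages all at once.
    for i in range(5):
        buffer.put(f"message {i}")

def consumer(results):
    # System B: consumes constantly, at a slower pace.
    for _ in range(5):
        msg = buffer.get()
        time.sleep(0.01)  # simulate slow processing
        results.append(msg)

results = []
threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer, args=(results,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Nothing is lost and order is preserved, despite the speed mismatch.
```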


While implementing a basic message queue is a fairly straightforward task, complex IT environments may include communications between separate operating systems and network protocols. In addition, basic message queues may not be resilient when the network goes down, causing important messages to be lost.

For these reasons, many organizations have chosen to use “message-oriented middleware” (MOM): applications that make it easier for different software and hardware components to exchange messages.

There are a variety of MOM products on the market today (like Delayed Job or Sidekiq in the Ruby on Rails world), each one intended for different situations and use cases. In the rest of this article, we’ll compare and contrast two popular options for exchanging data via MOM software: IBM MQ and IronMQ.

What to Consider When Selecting a Message Queue

Message queues are essential to how different applications interact and exchange data within your IT environment. This means, of course, that choosing the right message queue solution is no easy task: picking the wrong one can drastically affect your performance.

Below, we’ll discuss some of the most important factors to consider when choosing a message queue solution.

Features

Depending on the specifics of your IT environment, you may require any number of different features from your message queue. Here’s just a small selection of potential functionality:

  • Pushing and/or pulling: Most message queues include support for both pushing and pulling when retrieving new messages. In the first option, new messages are “pushed” to the receiving application in the form of a direct notification. In the second option, the receiving application chooses to “pull” new messages itself by checking the queue at regular intervals.

pushing on a train

  • Delivery options: You may want to schedule messages at a specific time, or send messages more than once in order to make sure that they are delivered. If so, choose a message queue that includes support for these features.
  • Message priorities: Some messages are more critical or urgent than others. In order to receive the information you need in a timely manner, your message queue may need some way of moving important messages up the queue (just like letting late passengers cut in front of you at the airport).
  • Persistence: Messages that are persistent are written to disk as soon as they enter the queue, while transient messages are only written to disk when the system is using a large amount of memory. Persistence improves the redundancy of your messages and ensures that they will be processed even in the event of a system crash.
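
As one concrete illustration, message priorities can be sketched with a heap-based priority queue (the messages and priority values here are invented for the example):

```python
import heapq

# Lower number = higher priority; tuples sort by priority first,
# so urgent messages "cut the line" past earlier arrivals.
pq = []
heapq.heappush(pq, (2, "routine status update"))
heapq.heappush(pq, (1, "urgent alert"))
heapq.heappush(pq, (3, "background cleanup"))

first = heapq.heappop(pq)
second = heapq.heappop(pq)
```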


Scalability and Performance

The more complex your IT environment is, the more difficult it is to scale it all at once. Instead, you can scale each application independently, decoupled from the rest of the environment, and use a message queue to communicate asynchronously.

Certain message queue solutions are better-suited for improving the scalability and performance of your IT environment. Look for options that are capable of handling high message loads at a rapid pace.

Pricing

Different message queue solutions may have different price points and pricing models that lead you to choose one over the other.

“As a service” is currently the predominant pricing model for message queues. This means that customers have a “pay as you go” plan in which they are charged by the hours and the computing power that they use. However, there are also prepaid message queue plans with an “all you can eat” pricing model.

Security

With hacks and data breaches constantly in the news, maintaining the security of your message queue should be a primary concern. Malicious actors may attempt to insert fraudulent messages into the queue and use them to exfiltrate data or gain control over your system.

Message queue solutions that use the Advanced Message Queuing Protocol (AMQP) include support for transport-level security. In addition, if the contents of the message itself may be sensitive, you should look for a solution that encrypts messages while in transit and at rest within the queue.

locked door

What is IBM MQ (IBM Message Queue)?

IBM Message Queue (IBM MQ) is a MOM product from IBM that seeks to help applications communicate and swap data in enterprise IT environments. IBM MQ calls itself a “flexible and reliable hybrid messaging solution across on-premises and clouds.” It includes support for a variety of different APIs, including Message Queue Interface (MQI), Java Message Service (JMS), REST, .NET, IBM MQ Light, and MQTT.

The IBM MQ software has been around in some form since 1993. Thanks to the widespread demand for real-time transactions on the Internet, IBM MQ and other message queue solutions have enjoyed a renewed popularity in recent years.

The benefits of using IBM MQ include:

  • Support for on-premises, cloud, and hybrid environments, as well as more than 80 different platforms.
  • Advanced Message Security (AMS) for encrypting and signing messages between applications.
  • Multiple modes of operation, including point-to-point, publish/subscribe, and file transfer.
  • A variety of tools for managing and monitoring queues, including MQ Explorer, the MQ Console, and MQ Script Commands.

On websites such as TrustRadius and IT Central Station, IBM MQ users mention a few advantages and disadvantages of the software. Some of the recurring themes in these reviews are:

  • IBM MQ is reliable and does its job well, without any lost messages.
  • The software helps to improve data integrity, availability, and security.
  • The user interface may be a little unintuitive and challenging, especially for first-time users.
  • Tools such as MQ Explorer seem to be “aging” and are not as effective as third-party solutions.
  • IBM MQ lacks certain integrations that would be useful in a modern IT enterprise environment.

IBM MQ vs. IronMQ

There’s no doubt that IBM MQ is a robust, mature message queue solution that fits the needs of many organizations. However, it’s far from the only MOM software out there. Offerings such as Iron.io’s IronMQ are highly viable message queue alternatives, and in many cases may be superior to market leaders such as IBM MQ.

What is IronMQ?

IronMQ is a messaging queue solution from Iron.io, a cloud application services provider based in Las Vegas. According to Iron.io, the IronMQ message queue is “the most industrial-strength, cloud-native solution for modern application architecture.”

industrial strength

The software includes support for all major programming languages and is accessible via REST API calls. Iron.io offers a number of different monthly and annual pricing models for IronMQ, ranging from the hobbyist all the way up to the large enterprise.

The benefits of IronMQ include:

  • Support for both push and pull queues, as well as “long polling” (holding a pull request open for a longer period of time).
  • The use of multiple clouds and availability zones, making the service highly scalable and resistant to failure. In the event of an outage, queues are automatically redirected to another zone without any action needed on the part of the user.
  • Backing by a high-throughput key/value data store. Messages are preserved without being lost in transit, and without the need to sacrifice performance.
  • Flexible deployment options. IronMQ can be hosted on Iron.io’s shared infrastructure or on dedicated hardware to improve performance and redundancy. In addition, IronMQ can run on your internal hardware in cases where data must remain on-premises.

IBM MQ vs. IronMQ: The Pros and Cons

Both IBM MQ and IronMQ are cloud-based solutions, which means they enjoy all the traditional benefits of cloud computing: better reliability and scalability, faster speed to market, less complexity, and so on.

Since it was created with the cloud in mind, IronMQ is particularly well-suited for use with cloud deployments. Because IronMQ uses well-known cloud protocols and standards such as HTTP, JSON, and OAuth, cloud developers will find IronMQ exceedingly simple to work with.
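
As an illustrative sketch only (the host, project ID, and token below are placeholders, and the exact endpoint shape should be double-checked against the IronMQ v3 documentation), posting a message amounts to a URL, an OAuth header, and a JSON body:

```python
import json

# Placeholder values for illustration; real ones come from
# your Iron.io project dashboard.
HOST = "https://mq-aws-us-east-1-1.iron.io"
PROJECT_ID = "my_project_id"
TOKEN = "my_oauth_token"

def build_post_message_request(queue_name, body):
    """Build (url, headers, payload) for an IronMQ v3-style POST."""
    url = f"{HOST}/3/projects/{PROJECT_ID}/queues/{queue_name}/messages"
    headers = {
        "Authorization": f"OAuth {TOKEN}",
        "Content-Type": "application/json",
    }
    payload = json.dumps({"messages": [{"body": body}]})
    return url, headers, payload

url, headers, payload = build_post_message_request("work_queue", "hello")
```

Because it is all plain HTTP and JSON, any language with an HTTP client can talk to the queue without a special driver.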

software developer

IronMQ users enjoy access to an extensive set of client libraries, each one with easy-to-read documentation. The IronMQ v3 update has also made the software faster than ever for customers who need to maintain high levels of performance.

Customers who already use Iron.io’s IronWorker software for task management and scheduling will find IronMQ to be the natural choice. According to one IronMQ user in the software industry, “I can run my Workers and then have them put the finished product on a message queue – which means my whole ETL process is done without any hassle.”

On the other hand, because it’s part of the IBM enterprise software family, IBM MQ is the right choice for organizations that already use IBM applications. If you already have an application deployed on IBM WebSphere, then it will be easier to simply use it together with IBM MQ.

What’s more, IBM MQ is capable of working well in many different scenarios with different technologies, including mainframe systems. However, some customers report that IBM MQ has a clunky, “legacy” feel to it and is difficult to use in an agile IT environment.

While it’s definitely able to compete with IBM MQ, IronMQ also stacks up favorably against other message queue solutions such as RabbitMQ and Kafka. For example, RabbitMQ’s use of the AMQP protocol means that it is more difficult to use and can only be deployed in limited environments. According to various benchmarks, IronMQ is roughly 10 times as fast as RabbitMQ.

IronMQ Customer Reviews

Of course, reading long lists of software features can only go so far; you need customer feedback in order to make sure that the application really does what it says on the tin.

The good news is that IronMQ has a number of happy customers who are all too eager to share their positive experiences. John Eskilsson, technical architect at the engineering firm Edeva, raves about IronMQ in his testimonial on FeaturedCustomers:

“IronMQ has been very reliable and was easy to implement. We can take down the central server for maintenance and still rely on the data being gathered in IronMQ. When we start up the harvester again, we can consume the queue in parallel using IronWorker and be back to real-time quickly.”

In a review on G2, one user working in marketing and advertising praised IronMQ’s reliability and performance:

“My experience with the message queues was a good one. I had no issues and found the message queues to be very reliable. The website has good monitoring showing exactly what is happening in real time.”

The world’s most popular websites may receive millions of page hits per day, and more during times of peak activity. Businesses such as CNN need a robust, feature-rich, highly available message queue solution in order to get the right information to the right people. CNN is one of many enterprise clients that uses IronMQ as its message queue solution.

IBM MQ vs. IronMQ: Which is Right for You?

At the end of the day, no one can tell you which message queue solution is right for your company’s situation. Both IBM MQ and IronMQ have their advantages and drawbacks, and only one may be compatible with your existing IT infrastructure.

In order to make the final decision, draw up a list of the features and functionality that are most important to you in a message queue. These may include issues such as persistence, fault tolerance, high performance, compatibility with existing software and hardware, and more.

Fortunately, you can also try IronMQ before you buy. Want to find out why so many clients are proud to use IronMQ and other Iron.io products? Request a demo of IronMQ, or sign up today for a free, full-feature 14-day trial of the software.

Amazon SQS (Simple Queue Service): Overview and Tutorial

What’s a Queue?  What’s Amazon SQS?

Now that’s quite a queue!

Queues are a powerful way of connecting software systems. They allow for asynchronous communication between different systems, and are especially useful when the throughput of the systems is unequal. Amazon offers its version of queues with Amazon SQS (Simple Queue Service).

For example, if you have something like:

  • System A – produces messages periodically in huge bursts
  • System B – consumes messages constantly, at a slower pace

With this architecture, a queue would allow System A to produce messages as fast as it can, and System B to slowly digest the messages at its own pace.

Queues have played an integral role in software architecture for decades, along with core technology concepts like APIs (Application Programming Interfaces) and ETL/ELT (Extract, Transform, Load / Extract, Load, Transform). With the recent trend toward microservices, queues have become more important than ever.

Amazon Web Services

AWS (Amazon Web Services) is one of the leading cloud providers in the world, and anyone writing software is probably familiar with them. AWS offers a wide variety of “simple” services that traditionally had to be implemented in-house (e.g., storage, database, computing). The advantages offered by cloud providers are numerous, and include:

  • Better scalability – your data center is a drop in their ocean. They’ve got mind-boggling capacity. And it’s spread around the world.
  • Better reliability – they hire the smartest people in the world (oodles of them) to ensure these services work correctly, all the time.
  • Better performance – you can typically harness as much computing horsepower as you’d like with cloud providers, far exceeding what you could build in-house.
  • Better (lower) cost – nowadays, they can usually do all this cheaper than you could in your own data center, especially when you account for all the expertise they bring to the table. And many of these services employ a “pay as you go” model, charging for usage as it occurs. So you don’t have to pay the large up front cost for licenses, servers, etc.
  • Better security – their systems are always up to date with the latest patches, and all their smart brainiacs are also thinking about how to protect their systems.

If you have to choose between building out your own infrastructure, or going with something in the cloud, it’s usually an easy decision.

AWS Simple Queue Service

It comes as no surprise that AWS also offers a queueing service, simply named AWS Simple Queue Service. It touts all the cloud benefits mentioned before, and also features:

  • Automatic scaling – if your volume grows you never have to give a thought to your queuing architecture. AWS takes care of it under the covers.
  • Infinite scaling – while there probably is some sort of theoretical limit here (how many atoms are in the universe?), AWS claims to support any level of traffic.
  • Server side encryption – using AWS SSE (Server Side Encryption), messages can remain secure throughout their lifetime on the queues.

Their documentation is also top-notch. It’s straightforward to get started playing with the technology, and when you’re ready for serious, intricate detail, the documentation goes deep enough to get you there.


Let’s walk through a simple example of using AWS SQS, using the line at the DMV (Department of Motor Vehicles) as the example subject matter. The DMV is notorious for long waits, forcing people to corral themselves into some form of a line. While this isn’t an actual use case anyone would (presumably) solve using AWS SQS, it will allow us to quickly demo its capabilities with a real-world situation most are all too familiar with.

While AWS SQS has SDK libraries for almost any language you may want to use, I’ll be using their REST interface for this exercise (with my trusted REST sidekick, Postman!).


Postman makes it easy to set up all the necessary authorization using Collections. Configure the AWS authorization in the parent collection with the Access Key and Secret Access Key found in the AWS Console:

AWS SQS Authorization

Then reference that authorization in each request:

AWS SQS Create Parent Auth

Using this pattern, it’s easy to quickly spin up requests and put AWS SQS through its paces.

Creating a Queue

When people first walk in the door, any DMV worth their salt will give them a number to begin the arduous process. This is your main form of identification for the next few minutes/hours (depending on that day’s “volume”), and it’s how the DMV employees think of you (“Number 14 over there sure seems a bit testy!”).

Let’s create our “main queue” now, with the following REST invocation:

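In sketch form (the region and queue name are illustrative, and Postman attaches the SigV4 signature configured above), CreateQueue is a single request against the SQS endpoint:

```python
from urllib.parse import urlencode

# Illustrative region and queue name; Postman signs the request
# with your AWS credentials (SigV4) before sending it.
endpoint = "https://sqs.us-east-1.amazonaws.com/"
params = {"Action": "CreateQueue", "QueueName": "MainLine"}
request_url = endpoint + "?" + urlencode(params)
```

The response includes the QueueUrl, which subsequent calls use to address this specific queue.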




Good deal. Now we’ve got a mechanism to track people as they come through the door.

Standard vs FIFO

One important detail should be mentioned: there are two types of queues within AWS SQS:

  • Standard – higher throughput, with “at least once delivery”, and “best effort ordering”.
  • FIFO (First-In-First-Out) – not as high throughput, but guarantees on “exactly once” processing, and preserving the ordering of messages.

Long story short, if you need things super fast, can tolerate messages out of order, and possibly sent more than once, Standard queues are the answer. If you need absolute guarantees on order of operations, no duplication of work, and don’t have huge throughput needs, then FIFO queues are the best choice.

We’d better make sure we create our MainLine queue using FIFO! While a “mostly in order” guarantee might suffice in some situations, you’d have a riot on your hands at the DMV if people started getting called out of order. Purses swinging, hair pulling – it wouldn’t be pretty. Let’s add “FifoQueue=true” to the query string to indicate that the queue should be FIFO:

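A sketch of that FIFO variant: in the query API, the FifoQueue flag is passed as a queue attribute, and FIFO queue names must end in “.fifo” (details worth double-checking against the SQS documentation):

```python
from urllib.parse import urlencode

# FIFO queue names must end in ".fifo"; the FifoQueue flag is
# passed as a queue attribute in the query string.
params = {
    "Action": "CreateQueue",
    "QueueName": "MainLine.fifo",
    "Attribute.1.Name": "FifoQueue",
    "Attribute.1.Value": "true",
}
query_string = urlencode(params)
```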

Send Message

Now that we’ve got a queue, let’s start adding “people” to it, using the “SendMessage” action. Note that when using REST, we need to URL encode the payload. So something like this:

{
  "name": "Ronnie Van Zandt",
  "drivers_license_number": "1234"
}

Becomes this:


There are many ways of accomplishing this; I find the urlencoder site to be easy and painless.
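
Python’s standard library can do the same encoding, for example:

```python
from urllib.parse import quote, unquote

payload = '{"name": "Ronnie Van Zandt", "drivers_license_number": "1234"}'

# Percent-encode the JSON so it can ride safely in a URL.
encoded = quote(payload, safe="")

# Decoding round-trips back to the original payload.
decoded = unquote(encoded)
```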

Here’s the final result:






After this call, we’ve got young Ronnie standing in line at the DMV. Thanks to AWS’s massive scale and performance, we can leave Ronnie there as long as we’d like. And we can add as many people as we’d like – with AWS SQS’s capacity, we could have a line around the world. But that’s horrible customer service; someone needs to find out what Ronnie needs!

Receive Message

At the DMVs I’ve been to, there’s usually a large electronic sign on the counter that will display the next lucky person’s number. You feel a brief pulse of joy when your number displays, and rush to the counter on a pillow of euphoria, eager to get on with your life. How do we recreate this experience in AWS SQS?

Why, “ReceiveMessage”, of course! (Note we are invoking it using the actual QueueUrl passed back by the CreateQueue call above)






{
  "name": "Ronnie Van Zandt",
  "drivers_license_number": "1234"
}


One thing to keep in mind: ReceiveMessage doesn’t actually REMOVE the item from the queue. The item will remain there until explicitly removed. The Visibility Timeout setting can be used to ensure multiple readers don’t attempt to process the same message.
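
These semantics can be modeled with a toy in-memory queue (an illustrative simulation, not the actual SQS implementation): receiving a message only hides it for the visibility timeout, and it reappears unless explicitly deleted.

```python
import time

class ToyQueue:
    """Toy model of SQS receive semantics: ReceiveMessage hides a
    message for a visibility timeout instead of removing it; only
    an explicit delete removes it for good."""

    def __init__(self, visibility_timeout=0.05):
        self.timeout = visibility_timeout
        self.messages = {}  # receipt handle -> [body, invisible_until]
        self._counter = 0

    def send(self, body):
        self._counter += 1
        self.messages[str(self._counter)] = [body, 0.0]

    def receive(self):
        now = time.time()
        for handle, entry in self.messages.items():
            if entry[1] <= now:
                entry[1] = now + self.timeout  # hide, don't remove
                return handle, entry[0]
        return None

    def delete(self, handle):
        self.messages.pop(handle, None)

q = ToyQueue()
q.send("Ronnie Van Zandt")
handle, body = q.receive()
hidden = q.receive()          # None: message is hidden, not gone
time.sleep(0.06)
reappeared = q.receive()      # visible again after the timeout
q.delete(handle)
gone = q.receive()            # None: deletion is permanent
```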

So how do we permanently mark the item as “processed”? By deleting it from the queue!

Delete Message

The DeleteMessage action is what removes items from a queue. There’s not really a good analogy with the DMV here (thankfully, DMV employees can’t “delete” us), so we’ll just go with an example. DeleteMessage takes the ReceiptHandle returned by the ReceiveMessage endpoint as a parameter (once again, encoded):




And just like that, Ronnie is able to leave the DMV with his newly printed license, all thanks to AWS SQS!

DMV line
It’s time to get out of here!


While AWS SQS has many strengths, there are advantages to using Iron MQ that make it a more compelling choice, including:

Client Libraries

Iron MQ features an extensive set of client libraries with clear, straightforward documentation. Getting started with Iron MQ is a breeze. After playing with both SDKs, I found the Iron MQ experience to be easier.

Performance

Iron MQ is much faster than SQS, with V3 making it faster and more powerful than ever before. And with high volume systems, bottlenecks in your messaging architecture can bring the whole system to its knees. Faster is better, and Iron MQ delivers in this area.

Push Queues

Iron MQ offers something called Push Queues, which supercharge your queueing infrastructure with the ability to push messages OUT. So rather than relying solely on services pulling messages off queues, this allows your queues to proactively send messages to designated endpoints, recipients, etc. This powerful feature expands the communication options between systems, resulting in faster workflow completion and more flexible architectures.

Feature Comparison

Check out the comparison matrix between Iron MQ and its competitors (including SQS). It clearly stands out as the most feature-rich offering, with functionality not offered by SQS (or anyone else, for that matter).

Iron MQ offers a free 14-day trial to see for yourself how it compares to SQS. Sign up here.

In Con-q-sion

Hopefully this simple walkthrough is enough to illustrate some possibilities of using AWS SQS for your queuing needs. It is easy to use, with incredible power, and their SDKs support a variety of languages. And may your next trip to the DMV be just as uneventful as young Ronnie’s.

Happy queueing!