AWS Fargate Pricing and Alternatives


In this article, we will cover AWS Fargate pricing. The areas that will be addressed are:

Schedule a call to find out why Fargate users are moving to IronWorker to run their background tasks. IronWorker offers broad and effective support, excellent service, and competitive pricing.

Introduction to AWS Fargate

Many types of software and vendor services vie for attention in the world of container management. Here, as in other areas of IT, big vendors have rolled out name-brand products to try to grab market share. AWS, for example, pushes AWS Fargate as an option.

AWS Fargate is a serverless container management tool, positioned as a next step beyond managing EC2 instances directly. It allows developers to simply package up a container and deploy it with ease and convenience. Some also use it alongside object storage systems such as S3.

AWS Fargate Pricing Breakdown

There are several factors in how AWS Fargate pricing works: 

  • Amount of CPU resources – Users pay more for tasks that are allocated more vCPU over a longer running time. The CPU charge depends on the running time and the number of vCPUs allocated, billed at a per-vCPU hourly rate. For instance, a task that runs for 5 hours at 0.25 vCPU is billed as 5 hours × 0.25 vCPU × the per-vCPU hourly rate (about $0.071 in this example).
  • Amount of memory resources – Users may also pay more for an AWS Fargate project with larger instances using more RAM. Using the same example above, you would be paying $0.013 for RAM.
  • Time running – Importantly, AWS Fargate charges for the hours when the container workloads are running, not the hours when a virtual machine is running. That means users get savings if their containers run for less time. (Strictly speaking, AWS Fargate bills per second, with a one-minute minimum, but many users estimate on a per-hour basis for convenience.)
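The per-resource charges above can be sketched as a simple calculation. The unit rates below are illustrative placeholders (Fargate rates vary by region and change over time), so treat this as an estimating aid rather than a billing tool:

```python
# Illustrative Fargate unit rates (assumptions; check current regional pricing).
VCPU_PER_HOUR = 0.04048    # USD per vCPU per hour
GB_PER_HOUR = 0.004445     # USD per GB of memory per hour

def fargate_cost(hours: float, vcpus: float, memory_gb: float) -> float:
    """CPU and memory are billed separately:
    each charge is duration x allocated amount x unit rate."""
    cpu_charge = hours * vcpus * VCPU_PER_HOUR
    memory_charge = hours * memory_gb * GB_PER_HOUR
    return round(cpu_charge + memory_charge, 4)

# A task running 5 hours with 0.25 vCPU and 0.5 GB of memory:
print(fargate_cost(5, 0.25, 0.5))
```

At these assumed rates the example task comes to a few cents; the point is that both dimensions (vCPU and memory) accrue independently over time.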

This per-hour pricing can make project estimates difficult. Developers might have to think about considerations like:

  • Whether the serverless container project is just being used for quick cleanups or batch handling, or running in the background 24/7
  • What kind of buy-in they have for long-term operation from different departments that are involved (and how well the top brass understands the pricing model) 
  • What kinds of resources their containers require in terms of CPU and RAM

AWS Compute Savings Plans

Importantly, AWS changed Fargate's costs in 2019, cutting some project costs by as much as 35%, and introduced something called the Compute Savings Plan.

These plans cover EC2 and AWS Lambda as well as AWS Fargate. Amazon describes the commitment this way:

“In exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3-year term. When you sign up for a Savings Plan, you will be charged the discounted Savings Plans price for your usage up to your commitment.”

That doesn’t change the calculus of how AWS Fargate is billed. Third-party analysts have put together detailed graphs showing whether EC2 or AWS Fargate comes out cheaper for a given project, based on the container reservation rate (the share of system resources reserved for the container within the broader system).

Alternatives to AWS Fargate

When you look at alternatives to AWS Fargate such as IronWorker, you can see how significant the savings can be.

Like AWS Fargate, IronWorker is a “compute engine” meant to help developers work with containers without in-depth server knowledge.

IronWorker leads the pack with broad and effective support, excellent service, and competitive pricing. Check out a 14-day free trial of this comprehensive resource for running containers with the versatility that you need.

However, IronWorker charges by the month, with a free 14-day trial, so the initial cost is relatively low and can be markedly lower than AWS Fargate, even with all of Fargate's convenience features built in. At a “hobby price” of $24/month for one concurrency and 5 hours/month with 256 MB of RAM, IronWorker can cost significantly less than AWS Fargate for many types of projects, especially those that run around the clock or close to it.

As a serverless container management compute engine, IronWorker provides significant benefits beyond pricing, such as:

  • Key support features mean that development teams feel well served and can move forward with confidence, whatever their architecture looks like and however they have provisioned their containers.
  • A DevOps philosophy allows for agile workload modernization and bringing systems toward a desired state.
  • The simplicity of IronWorker allows developers to tackle the learning curve right away, and master their environments more quickly.

Modern virtualization projects may have to “wear a lot of hats,” balancing immediate business value with long-term scaling. That’s when you need capable tools with good support, not just the lowest sticker price.

AWS Fargate and IronWorker: Other Context Factors

Some point out that AWS Fargate can be cheaper than EC2. That's true, but it depends on many of the factors above. Dev teams will have to do the math to stay on top of costs and to defend their choices to stakeholders.

Then there’s the option of using AWS Lambda for serverless computing.

This essentially involves running individual functions on the vendor's infrastructure. AWS Lambda is an abstraction that fits some task models, but for other teams it may be “too much, too soon,” and a more gradual strategy built around containerized instances may be the better fit.

IronWorker remains one of the easiest options for agile container systems. The platform makes it easy to scale and easy to build for tomorrow, without an unreasonable impact on either CAPEX or OPEX. It's part of the smart builder's toolkit in the “age of Kubernetes,” where logical hardware systems are evolving quickly, the cloud is becoming a dominant delivery model, and teams are looking for new efficiencies in compute and storage.

Sign up for your free 14-day trial of IronWorker and see what it’s all about.

The Top 7 Container Management Software


In this article, we will review the Top 7 container management solutions in 2020. The services we will be covering are:

  • IronWorker
  • Amazon ECS
  • AWS Fargate
  • Google Kubernetes Engine
  • Apache Mesos
  • Portainer
  • Rancher

IronWorker is a high-performance, feature-rich container management solution that powers some of the world’s top websites. Contact us today to start your 14-day free trial of IronWorker.


Containers are software units that package together an application’s source code with its runtime environment, including libraries, frameworks, and configuration settings. In so doing, containers help your software operate consistently and predictably, no matter which system it’s running on.


Given the appeal of container solutions such as Docker and Kubernetes, it’s no surprise that container adoption and investment continues to grow rapidly among enterprise IT teams. To make the process even easier, many businesses employ container management software to automatically create, deploy, and scale containers.

But which container management software is best for your needs? Below, we’ll discuss 7 of the top container management software tools, so that you can make the choice that’s right for your situation.

The Top 7 Container Management Software

1. IronWorker

We’d be remiss here if we didn’t mention that IronWorker is one of the best container management software tools on the market. IronWorker is a container-based, distributed work-on-demand platform that’s built on top of the Docker container format.

The advantages of IronWorker include its ease of use and excellent support. IronWorker comes with a simple visual dashboard with strong reporting and analytics functionality, giving you insight into both high-level trends and low-level granularities. In addition, IronWorker offers detailed documentation and “white-glove” assistance to clients who need help developing custom configurations.

One of the best features of IronWorker is its tremendous flexibility. IronWorker is capable of deploying in whatever environment you need it to, including:

  • Shared cloud infrastructure
  • Hybrid cloud and on-premises environments
  • Dedicated server hardware
  • On-premises IT infrastructure

IronWorker currently has a rating of 4.6 out of 5 stars on the business software review website G2. One user writes:

“I improved my CSV breakdown work drastically by putting 10 IronWorkers on the job. Up to that point, I was just using larger AWS instances… The UI is very intuitive and gives really good detail about the time each job takes. I can run my Workers and then have them put the finished product on a message queue—which means my whole ETL process is done without any hassle.”

2. Amazon ECS

Amazon Web Services is the most popular public cloud infrastructure platform, and for good reason: it offers a wide range of products and capabilities, from storage and compute to machine learning and data migration. Amazon ECS is a fully managed container orchestration service from AWS that is one of the top container management software tools.

The benefits of Amazon ECS include:

  • Scalability and high performance, with the ability to deploy thousands of containers simultaneously.
  • Access permissions that strictly govern the resources available for each container, helping maintain a high degree of security.
  • Excellent reliability, with guaranteed monthly uptime of 99.99 percent.

G2 reviewers currently give Amazon ECS a rating of 4.3 out of 5 stars. Reviewer Dave B. praises the service’s “power, flexibility, and customizability in setting up multiple containers on a set of EC2 instances. Great to deploy to one or more instances (kinda sorta easily), and fairly comprehensive providing information about the instances to which you’re deploying. It ties in pretty nicely with the rest of the Amazon ecosystem.” However, he also mentions some negatives of the platform, including a high learning curve and a difficult debugging process.

3. AWS Fargate

AWS Fargate is a container management solution that is specifically designed for serverless computing. Serverless computing is a computing paradigm in which the end user doesn’t have to worry about provisioning and managing servers. It’s worth noting that AWS Fargate isn’t the only container management software to use the serverless paradigm: for example, IronWorker also offers serverless capabilities, handling messy behind-the-scenes questions about infrastructure and scaling.

Rather than being its own container management software, AWS Fargate is used in conjunction with Amazon ECS to deploy, manage, and scale containers in the cloud. To get started, users simply have to build the container image, specify the system requirements (including CPU and memory), define the necessary access and network policies, and finally deploy the container. In addition to Amazon ECS, Fargate is also compatible with Amazon EKS, the AWS “Kubernetes as a service” offering.

AWS Fargate currently has a rating of 4.5 out of 5 stars on G2. One user writes: “Fargate’s UI is simple and very easy to navigate. I love that the AWS Fargate product allows for container storage management without the management of servers, as well.” However, common complaints about Fargate include higher costs and a few annoying feature limitations.

4. Google Kubernetes Engine

Not to be outdone by AWS, Google Cloud Platform also offers its own container management software: Google Kubernetes Engine. GKE works with Docker containers and uses the Kubernetes open-source container management system.

The benefits of Google Kubernetes Engine include:

  • Automatic container and environment management, including scaling, repairing nodes, and upgrading Kubernetes.
  • Simple identity and access management.
  • Compliance with data privacy regulations such as HIPAA and PCI DSS.
  • Integration with Google Cloud Platform tools for logging and monitoring.

Google Kubernetes Engine has an average rating of 4.5 out of 5 stars on G2. One reviewer writes that GKE “has a great UI and intuitive integration with the Kubernetes dashboard. Furthermore, adding users to clusters is definitely a lot easier than the AWS solution.” However, the reviewer also mentions that Google’s support is lacking when compared with AWS. Other complaints include a higher learning curve, challenges setting up persistent storage, and even potential security issues.

5. Apache Mesos

Apache Mesos is an open-source cluster management tool that can be used to deploy, manage, and scale Docker images. Initially developed as a research project at UC Berkeley, Apache Mesos is now used by major tech companies from Airbnb and Netflix to Cisco and PayPal.

Mesos ensures that different applications and containers have access to the resources they need to run within a cluster, including CPU, memory, and storage. Frameworks and projects such as Hadoop, Ruby on Rails, Node.js, and Memcached are all compatible with Apache Mesos. Because Mesos is part of the Apache open-source software ecosystem, it integrates well with other Apache tools such as Spark, a large-scale data processing engine.

The Apache Mesos software currently has 4.2 out of 5 stars on G2. Users generally praise Mesos’ efficiency and effectiveness, in particular the ease of use it offers by abstracting away the complicated IT details. However, some users mention that Mesos suffers from strange design choices and configuration options, insufficient documentation, and bugs such as memory leaks.

6. Portainer

Like Apache Mesos, Portainer is an open-source container management software tool. Portainer bills itself as “a lightweight management UI that allows you to easily manage your different Docker environments,” and is compatible with both Docker Swarm clusters and Kubernetes.

The top features of Portainer include:

  • Authentication: Portainer has three different ways to perform user authentication: internal methods, LDAP, or OAuth.
  • Templates: Users can deploy Docker Swarm services and Docker containers using predefined templates, dramatically simplifying the process.
  • Web interface: Portainer has a simple web interface that allows developers to directly inspect containers and check their logs, rather than going through a complex multi-step connection process.

Portainer currently has an average rating of 4.7 out of 5 stars on G2. PHP developer Khaled A. writes: “I’m very satisfied with the interface and how it’s easy to use! It’s very straightforward, easy to understand, and also easy to install.” However, multiple reviewers complain that the tool is challenging to use in clustered mode for new users. Other issues include bugs while using the software, as well as a subscription-based pricing model for software plugins rather than a lifetime purchase.

7. Rancher

Last but not least, Rancher is an open-source container management tool that serves as “a complete software stack for teams managing containers.” Using Rancher, you can deploy, manage, and scale containers and Kubernetes clusters on bare-metal servers, as well as public and private clouds.

The features of Rancher include:

  • A simple UI that centralizes and simplifies the process of deploying, securing, maintaining, and upgrading Kubernetes clusters.
  • Best practices for security and compliance, including encryption, audit logging, and rate limiting.
  • Support for hybrid and multi-cloud environments.
  • Support for DevOps tools such as Jenkins, Gitlab, and Travis.

Rancher has a rating of 4.5 out of 5 stars on G2, where users give it above-average marks for ease of use but below-average marks for ease of setup. One reviewer writes that “Rancher was easy to get set up, reliable, and made container orchestration a breeze.” Still, the tool isn’t without its flaws: some users say that they had problems with bugs and performance issues, while others complain about missing features that the software could benefit from.


In this article, we've gone over 7 of the top container management software tools. So which of these tools is best for your situation? Here are our thoughts:

  • IronWorker: Best if you need a flexible, user-friendly tool that can run in multiple environments: public cloud, hybrid, dedicated, and on-premises.
  • Amazon ECS: Best if you want to leverage the rest of the Amazon Web Services ecosystem.
  • AWS Fargate: Best if you’re specifically looking for a serverless container management solution within AWS.
  • Google Kubernetes Engine: Best if you want to work with Google Cloud Platform.
  • Apache Mesos: Best if you want a robust open-source container management tool, or one that integrates well with the Apache ecosystem.
  • Portainer: Best if you need an open-source tool that’s compatible with both Docker Swarm and Kubernetes.
  • Rancher: Best if you want an end-to-end container management solution for Kubernetes clusters.

Want to enjoy container management software that’s cloud-native, easy to use, and built to scale for your high-performance needs? Give IronWorker a try.

Get in touch with our team today for a chat about your business goals and requirements, and start your free 14-day trial of the IronWorker platform.

AWS Fargate vs. IronWorker



Deciding on a container management solution for your worker system is an important decision for startups and enterprises. This article will compare two industry leading container management services, AWS Fargate and IronWorker, in terms of:

  • Features and Benefits
  • Pricing
  • User Reviews
  • And more…

Schedule a demo today to find out why Fargate users are moving to IronWorker.


The serverless computing paradigm has taken the world of cloud computing by storm. Traditionally, cloud services require you to provision, manage, scale, and shut down servers yourself when running applications. Serverless computing handles this functionality for you, letting you focus on the task at hand rather than the technical details.

AWS Fargate is Amazon Web Services’ serverless offering, allowing users to run containers in the cloud without needing to manage them. In a previous article, we gave an overview of AWS Fargate that looked at some of the most popular AWS Fargate alternatives. This article will dive deeper into the comparison between Fargate and one of its top serverless competitors: IronWorker.

AWS Fargate vs. IronWorker: Features and Benefits

AWS Fargate and IronWorker are both robust, mature, feature-rich solutions for serverless container management. But how do their features stack up against each other?

Both Fargate and IronWorker are tremendously powerful and scalable container management tools. According to the AWS Fargate website, users can “launch tens or tens of thousands of containers in seconds.” IronWorker, too, allows users to spin up thousands of parallel workers at once.

When it comes to different deployment options, however, IronWorker is far ahead of Fargate:

  • AWS Fargate is only capable of running in the public cloud—more specifically, the AWS cloud. This has both advantages and disadvantages. Using Fargate allows you to take advantage of synergies with the rest of the AWS ecosystem, but also limits you in terms of the features and benefits that you can enjoy.
  • IronWorker is able to run not only in the public cloud, but also in a hybrid environment, on a dedicated server, or even on-premises. In a hybrid environment combining the cloud and on-premises, your containers can run on your own hardware, while IronWorker deals with concerns such as authentication and scheduling. Running containers on a dedicated server allows users to benefit from IronWorker’s built-in scaling functionality.

IronWorker’s high degree of flexibility, which makes it practically unique in the container management field, has important repercussions for the choice of AWS Fargate vs. IronWorker. Many businesses need to maintain legacy services on-premises, which makes using AWS Fargate an impossibility. In addition, relying too much on Fargate and the AWS ecosystem may result in vendor lock-in, making you unable to compete and innovate in a constantly evolving cloud landscape.

IronWorker also beats AWS Fargate when it comes to support. On the business software review website G2, IronWorker has an average “quality of support” rating of 9.2 out of 10, which is significantly higher than the industry average of 8.2. Fargate, meanwhile, has an average support rating of 8.2 out of 10, putting it right in the middle of the container management field.

Because IronWorker has a smaller clientele, the team is able to offer “white-glove” service to customers who need extra assistance in getting their containers up and running. In addition, users can benefit from IronWorker’s extensive documentation, complete with training videos and example code repositories.

AWS Fargate vs. IronWorker: Pricing

When it comes to pricing for AWS Fargate vs. IronWorker, which one is more cost-effective for your business? The answer will depend on what you’re looking for, as both tools use different pricing models.

AWS Fargate pricing is based on the CPU and memory resources that you use while running containers, and you pay only for what you consume. Pricing will also depend on the AWS region that you use. For example, as of writing, the AWS Fargate prices for the US East (Northern Virginia) AWS region are as follows:

  • $0.04048 per vCPU per hour
  • $0.004445 per gigabyte of memory per hour

Spot pricing is an option to lower your AWS Fargate costs, if your applications can tolerate the possibility of occasional interruptions. As of writing, the AWS Fargate spot pricing rates are:

  • $0.01255795 per vCPU per hour
  • $0.00137895 per gigabyte of memory per hour
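Using the on-demand and spot rates quoted above, a rough monthly comparison for an always-on task can be sketched as follows. The task size (1 vCPU, 2 GB) and the 730-hour month are assumptions for illustration:

```python
# Rates quoted above for US East (N. Virginia); subject to change.
ON_DEMAND = {"vcpu_hour": 0.04048, "gb_hour": 0.004445}
SPOT = {"vcpu_hour": 0.01255795, "gb_hour": 0.00137895}

def monthly_cost(rates: dict, vcpus: float, memory_gb: float,
                 hours: float = 730) -> float:
    """Cost of running one task continuously for a month (~730 hours)."""
    return hours * (vcpus * rates["vcpu_hour"] + memory_gb * rates["gb_hour"])

on_demand = monthly_cost(ON_DEMAND, vcpus=1, memory_gb=2)
spot = monthly_cost(SPOT, vcpus=1, memory_gb=2)
print(f"on-demand: ${on_demand:.2f}/month, spot: ${spot:.2f}/month")
print(f"spot saving: {1 - spot / on_demand:.0%}")
```

At these rates the spot discount is substantial, but spot tasks can be interrupted, so the comparison only holds for interruption-tolerant workloads.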

For more discussion about AWS Fargate pricing, keep reading for AWS Fargate user reviews.

IronWorker has three separate tiers for organizations of different sizes, as well as a custom enterprise tier, making it easy for every user to find a plan that works for them. The three IronWorker pricing tiers are:

  • Hobby ($259/year): 1 concurrency, 5 hours/month, 256 megabytes of RAM, 60 seconds of runtime.
  • Launch ($1,609/year): 5 concurrencies, 50 hours/month, 512 megabytes of RAM, 60 minutes of runtime.
  • Professional ($10,789/year): 30 concurrencies, 500 hours/month, 512 megabytes of RAM, 60 minutes of runtime, automatic scaling, and organizational support.
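For budgeting purposes, the annual tier prices above work out to the following monthly equivalents. This is a simple division, sketched here using the annual figures listed in this article:

```python
# IronWorker annual tier prices as listed above (USD/year).
TIERS = {"Hobby": 259, "Launch": 1609, "Professional": 10789}

def monthly_price(tier: str) -> float:
    """Spread the annual subscription price evenly over 12 months."""
    return round(TIERS[tier] / 12, 2)

for name in TIERS:
    print(f"{name}: ${monthly_price(name):,.2f}/month")
```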

AWS Fargate vs. IronWorker: Reviews

Thus far, we’ve discussed how AWS Fargate and IronWorker compare in theory, in terms of their features, pros and cons, and technical differences. But how do these two container management tools compare in practice? Let’s look at some AWS Fargate and IronWorker reviews to find out.

AWS Fargate reviews are largely positive, with a current average rating of 4.5 out of 5 stars on the business software review website G2:

  • According to front-end developer Sima I., AWS Fargate is “the best way to run containers in AWS without hours of setup.” However, she also mentions that the cost of AWS Fargate may be prohibitive for smaller businesses: “The pricing isn’t great and didn’t fit our startup’s needs.”
  • Small business founder and CEO Ralph K. writes: “The best thing about Fargate is that you can just start out of the gate without setting up servers… it’s all managed in a black box for you by AWS.” As with the previous review, the main negative was pricing: “Fargate is a good bit pricier than running your own servers.”
  • Other reviews mention problems with Fargate’s learning curve and support, although this may be less of an issue if you’re already used to the AWS ecosystem.

IronWorker reviews on G2 are slightly higher than AWS Fargate, with an average of 4.6 out of 5 stars. Many users praise IronWorker’s ease of setup and use:

  • CTO Daniel M. writes: “IronWorker runs my PHP code straight out of the box. Scheduling jobs is simple. It just works. Love it.”
  • Software engineer Erik J. agrees, writing: “The dashboard makes keeping track of various tasks and workers easy. There’s very little maintenance required once set up.”
  • Reviewer James C. says that IronWorker “helped us harness the efficiency of the cloud,” adding: “We used as a one-stop shop to help us get onto the cloud and tune our services to get cloud efficiencies… The result has been a drastic reduction in the AWS spend we have been getting, as we are finally optimized to take advantage of the elasticity that cloud computing offers.”


Both AWS Fargate and IronWorker are strong alternatives for serverless container management software—but which one is ultimately the right choice for you? When it comes to the question of AWS Fargate vs. IronWorker, the right choice will depend on your unique needs and objectives.

Although Fargate is a leader in the field of serverless computing, it also comes with downsides such as the lack of control and potentially higher costs. Here’s how the decision breaks down:

  • Features: Both AWS Fargate and IronWorker have excellent performance and scalability, with thousands of containers running simultaneously. Unlike Fargate, however, IronWorker offers far more flexibility in terms of deployment options: public cloud, hybrid, dedicated servers, and on-premises. IronWorker also has the advantage when it comes to customer support.
  • Pricing: AWS Fargate uses an à la carte pricing model in which you pay only for what you use. IronWorker uses a subscription-based annual pricing model with usage caps for different tiers.
  • Reviews: Both AWS Fargate and IronWorker have strong user reviews, although IronWorker’s reviews are slightly more positive.

While there’s no solution that’s right for every organization, IronWorker is a highly competitive alternative to AWS Fargate that powers some of the world’s largest brands, including Hotel Tonight, Bleacher Report, and Untappd.

Interested in giving IronWorker a try? Get in touch with our team today for a chat about your business needs and objectives, and a free 14-day trial of the IronWorker platform.

AWS Fargate Reviews

AWS Fargate has received both positive and negative reviews. This article will explore the following:

  • User reviews of AWS Fargate
  • What people like best about AWS Fargate
  • What people mainly dislike about AWS Fargate
  • IronWorker reviews: The AWS Fargate alternative

Enterprises are moving off AWS Fargate to IronWorker to manage their containers. Speak to us to talk about why.

Table of Contents

AWS Fargate Reviews

What People Like Best About AWS Fargate

What People Dislike About AWS Fargate

IronWorker Reviews: The AWS Fargate Alternative

AWS Fargate Reviews

AWS Fargate is still a relatively new player, having entered the market in 2017, but many who have used it are enthusiastic. AWS Fargate has a general rating of 4.5 out of 5 stars. Ease of use, the removal of server management, and straightforward application deployment are among the benefits users cite. Developers have also noted that AWS Fargate can be more secure, because of the ability to embed security within each container.

But AWS Fargate isn’t the only service for managing and scaling your containers. Other CaaS and IaaS options also act as compute engines and help with load balancing and with managing servers or clusters.

What People Like Best About AWS Fargate

Some of the things that people really like about AWS Fargate include:

  • You don’t need any servers or infrastructure to launch your containers
  • Increased ability to speed up the deployment of applications
  • Enhanced security
  • Allows you to focus on building the application
  • Don’t need to manage a cluster of Amazon EC2 instances
  • An alternative to managing Amazon Elastic Container Service (ECS) or EKS clusters directly

The main benefit of AWS Fargate is that you do not have to own any infrastructure to run your containers. Any developer can use AWS Fargate without buying servers. Technology companies can also speed up the production process for all of their applications. AWS Fargate is a good infrastructure management service for dynamically scaling container workloads up or down.

AWS Fargate also works with an existing ECS cluster. This makes the process of building applications much easier: you can separate the application from the underlying resources, which helps make the application more secure and speeds up the deployment process.

AWS Fargate allows you to remain completely focused on designing and building applications. This makes the whole process much easier and more efficient. The user interface of AWS Fargate is quite simple, which makes it easier to navigate.

What People Dislike About AWS Fargate

Some of the things that people dislike about AWS Fargate are:

  • Higher charges
  • Compatibility issues 
  • AWS Fargate is complex and difficult to operate
  • Limited storage for containers
  • Limited regional availability 

The main downside of AWS Fargate is that it is more expensive than other services, both in the short term and in the long run, since you do not own your servers. AWS Fargate users also pay higher fees per hour than Amazon ECS and EKS users. Many technology startups are thus unable to afford AWS Fargate.

The other downside of AWS Fargate is its incompatibility with certain storage technologies, such as EBS volumes. Some users have also noted that using AWS Fargate requires experience and basic knowledge of related services, so it may not be accessible to every user.

There is also limited storage for containers. You can work around this by mounting an Elastic File System (EFS) into the AWS Fargate container, but EFS is more expensive than regular EBS volumes, and EBS volumes are currently not supported by AWS Fargate. Users have also noted that the quality of support for AWS Fargate is not great.

Because you don’t manage any of the underlying infrastructure, it is impossible to choose an operating system for the application. This is a big disadvantage for companies that run and manage sensitive applications.

AWS Fargate is available only in certain regions, although AWS is gradually rolling it out more widely. Currently, the service is not available in regions such as Northern California, Paris, Mumbai, Beijing, London, Stockholm, and Montreal.

IronWorker: The AWS Fargate Alternative

Some of the benefits of IronWorker, compared to AWS Fargate, include:

  • Strong concurrency features for scheduled task execution
  • Easy to scale, easy to set up and very reliable
  • The hybrid service helps the user have greater control
  • Simple to deploy new instances with one command

The three main advantages of IronWorker over AWS Fargate are simplicity, support, and more deployment options. Choose a management service based on your resources, deployment needs, and client satisfaction goals. This will help you minimize costs and maximize efficiency. 

Sign up today and begin your free 14-day trial of IronWorker.

Fargate Container Startup Time Issues


This article will explore delays in AWS Fargate startup time. Factors that contribute to this time loss include:

  • Image extract time
  • Load balancer
  • Container size

The article also explores why IronWorker is the preferred alternative to AWS Fargate.

Fargate’s startup time is slowing you down from getting your information to your users. Find out why IronWorker is the faster container solution for your background jobs. Speak to us to learn more.

Table of Contents

AWS Fargate Container Startup Time Issues

What Causes Delay in Startup Time When Using AWS Fargate?

Does Size of the Container Matter?

Why Choose IronWorker Over AWS Fargate?

AWS Fargate Container Startup Time Issues

  • The extract time for the image
  • The load balancer factor

The main issue with AWS Fargate is container startup time, and the problem is worse when a user is fairly new to the technology. Although AWS Fargate has the advantage that you do not have to worry about the underlying infrastructure, deployments are much slower. Deployments in AWS Fargate may take up to five minutes to complete, and some users report even longer times, up to 10 minutes.

The below image shows how serverless computing works.

The startup time is negatively affected by the size of the Docker image, which needs to be downloaded to the host before the task can launch. The larger the Docker image, the longer the startup time. AWS Fargate has received many positive reviews, but the delayed startup time is a particularly problematic issue. Luckily, it is not the only container management solution.

One great alternative is IronWorker, an industry-leading, container-based solution with Docker support for high-performance, on-demand work. To get this product, sign up and begin a free 14-day trial today.

What Causes Delay in Startup Time When Using AWS Fargate? 

Even a simple container setup can take a long time on AWS Fargate. Many users get frustrated trying to understand the reason for the delay.

The two main reasons include:

  • The extract time for the image, which is dependent on image size
  • The load balancer factor

The delay is even more frustrating considering that Docker containers themselves start up in mere seconds. AWS suggests that reducing the size of the Docker image may help speed up the process, but this is not an optimal solution. The elastic load balancer is responsible for conducting health checks on all instances, and when load balancing is enabled, the startup time will be longer. But many users have found that even without load balancing, startup on AWS Fargate is much slower.

The image below shows the process of creating containers and how they are launched and managed.

Load balancer health checks are subject to a grace period, which is 300 seconds (5 minutes) long. AWS Fargate cannot act on instances until the grace period ends. This further delays startup, especially when an instance is unhealthy; when that happens, AWS Fargate launches a replacement instance.

Does Size of the Container Matter?

The size of the container, or containers, does matter. Uploading a bigger container will take more time. In addition, it will take more time to stop running larger containers. This is because AWS Fargate waits for the grace period, 300 seconds, to end before removing a running container. This delays the startup time when you want to remove a running container and upload a new one.
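To make the arithmetic concrete, here is a rough back-of-the-envelope sketch in Python of how image size and the grace period add up. The pull speed and application start time are illustrative assumptions for the sketch, not published AWS figures.

```python
# Illustrative estimate of the time to replace a running Fargate task.
# Pull speed and app start time are assumptions, not AWS-published numbers.

def estimate_restart_seconds(image_mb, pull_mb_per_sec=25.0,
                             app_start=30.0, grace_period=300.0):
    """Rough time to replace a task: pull the new image, start the
    application, then wait out the health-check grace period."""
    pull_time = image_mb / pull_mb_per_sec
    return pull_time + app_start + grace_period

small = estimate_restart_seconds(image_mb=100)   # 100 MB image
large = estimate_restart_seconds(image_mb=1000)  # 1 GB image
print(f"small image: ~{small/60:.1f} min, large image: ~{large/60:.1f} min")
# → small image: ~5.6 min, large image: ~6.2 min
```

Note how the fixed 300-second grace period dominates the total: even shrinking the image tenfold saves less than a minute here, which matches the observation above that smaller images alone are not an optimal fix.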

The image below shows running containers and how they get processed by the container orchestrator and virtual machines.

The 300-second grace period is only a default, which means you can reduce it. But eliminating or reducing the grace period does not guarantee a faster startup. The nature of the application in the container also affects startup time; some applications simply take longer to start, for various reasons.

One factor is a large number of old tasks that are still running. Removing those tasks from the target group and terminating their containers also takes a lot of time. Consequently, adding new applications and larger containers will take more time. Your application will have to pass all the health checks, especially when receiving traffic. AWS Fargate will also delay startup because of the target registration process: the load balancer has to register the newly added application as a healthy endpoint. All of this takes time, especially when you are new to AWS Fargate.

Why Choose IronWorker Over AWS Fargate?

AWS Fargate has many great qualities, but the drawbacks of using it are also very significant. IronWorker offers all the great qualities of AWS Fargate without slow startup times. Additionally, you should use IronWorker because it has better support, simplicity, and more deployment options. 

IronWorker helps you develop each task differently depending on your specific needs, getting your application or task up and running in no time. IronWorker comes with a simple-to-use dashboard: you don’t have to be an expert or know a lot about the service to use it. Choosing IronWorker also means you get more deployment options.

AWS Fargate does not offer on-premises deployment. With IronWorker, you get a variety of options: shared, hybrid, dedicated, and on-premises deployment.

Switch to IronWorker and get a free 14-day trial by signing up today.

A simple way to offload container-based background jobs

What are container-based background jobs?

Every web application needs to handle background jobs. A “background job” is a process that runs behind the scenes. Great effort goes into making web page responses as fast as possible, which means getting data to the screen, completing the request, and returning control to the user. Background jobs handle tasks that take time to complete or that aren’t critical to displaying results on the screen.

For example, if a query might take longer than a second, developers will want to consider running it in the background so that the web app can respond quickly and free itself up to respond to other requests. If needed, the background job can call back to the webpage when the task has been completed.
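The pattern above can be sketched in a few lines of standard-library Python. The job name and the "work" performed are placeholders for a real email send or slow query, not any particular framework's API.

```python
import queue
import threading

# Minimal background-job pattern: the request handler enqueues slow work
# and returns immediately; a worker thread drains the queue behind the scenes.

jobs = queue.Queue()
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:            # sentinel: shut the worker down
            break
        name, payload = job
        results.append(f"{name} done: {payload}")  # stand-in for real work
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_request(user_email):
    # Fast path: enqueue the slow confirmation email, return control at once.
    jobs.put(("send_email", user_email))
    return "200 OK"

print(handle_request("user@example.com"))  # → 200 OK (before the email is sent)
jobs.join()  # wait only so we can show the completed job below
print(results)
```

The web request finishes as soon as the job is queued; the worker completes it asynchronously, which is exactly the behavior described above.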

Why are container-based background jobs important to developers?

Many things that rely on external services are also suited for running as background jobs. Sending an email as confirmation, storing a photo, creating a thumbnail, or posting to social media services are jobs that don’t need to be run in the front as part of the web page response. The controller in the application can put the email job, image processing, or social media posts into a jobs queue and then return control to a user. Jobs that run on a schedule are also considered background tasks.

Do container-based background jobs help companies scale?

As your application grows, your background-jobs system needs to scale with it, which makes it a perfect match for IronWorker. IronWorker facilitates background job processing with the help of Docker containers. Containers have become part of the infrastructure running just about everything. Almost everyone has their version of containers; the most commonly used is still Docker. IronWorker was among the very first to combine serverless management with containers.

Forgetting about managing a server to react to spikes in traffic or other processing needs greatly simplifies a developer’s job. Tasks and other processes are scaled automatically. At the same time, this allows for detailed analytics. Because the containers are managed by IronWorker, whether they are short-lived or take days, the jobs are completed with minimal developer input after the initial setup.

What SaaS company provides a simple, easy-to-use application to offload container-based background jobs?

IronWorker is the answer. Start running your background jobs using IronWorker today with a free 14-day trial.

Serverless Abstraction with Containers Explained


With the rapid growth of cloud computing, the “as a service” business model is slowly growing to dominate the field of enterprise IT. XaaS (also known as “anything as a service”) is projected to grow at a staggering annual rate of 38 percent between 2016 and 2020. The reasons for the rise of XaaS solutions are simple: in general, they are more flexible, more efficient, more easily accessible, and more cost-effective.

Serverless abstraction and containers are two XaaS cloud computing paradigms that have both become highly popular in recent years. Many articles pit the two concepts against each other, suggesting that businesses can use one but not both.

However, the choice between serverless abstraction and containers is a false dilemma. Both serverless and containers can be used together, enhancing one another and compensating for the other’s shortcomings. In this article, we’ll discuss everything you need to know about serverless abstraction with containers: what it is, what the benefits are, and how you can get started using them within your organization.


What is Serverless Abstraction?

“Serverless abstraction” is the notion in cloud computing that software can be totally separated from the hardware servers that it runs on. Users can execute an application without having to provision and manage the server where it resides.

There are two main types of serverless abstraction:

  • BaaS (backend as a service): The cloud provider handles the application backend, which concerns “behind the scenes” technical issues such as database management, user authentication, and push notifications for mobile applications.
  • FaaS (function as a service): The cloud provider executes the application’s code in response to a certain event, request, or trigger. The server is powered up when the application needs to run, and powered down once it completes.
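The FaaS half of this split can be sketched in plain Python: stateless functions invoked once per event, with the platform reduced to a lookup-and-call step. The event types and handler functions here are invented for illustration, not any provider's real API.

```python
# Sketch of the FaaS model: stateless functions triggered per event.
# Event names and handlers are made up for the example.

def make_thumbnail(event):
    return f"thumbnail for {event['file']}"

def send_welcome_email(event):
    return f"emailed {event['user']}"

HANDLERS = {
    "image.uploaded": make_thumbnail,
    "user.signup": send_welcome_email,
}

def invoke(event):
    # The "server" exists only for the duration of this call: no state
    # survives between invocations, and you pay only for the run time.
    return HANDLERS[event["type"]](event)

print(invoke({"type": "image.uploaded", "file": "cat.png"}))
# → thumbnail for cat.png
```

In a real FaaS platform the provider performs the dispatch and spins compute up and down around each call; the functions themselves stay this small and stateless.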

The FaaS serverless paradigm is akin to the supply of a utility such as electricity in most modern homes. When you turn on a light or a kitchen appliance, your consumption of electricity increases, and it stops automatically when you flip the switch off again. The amount of the utility is infinite in practice for most use cases, and you pay only for the resources you actually consume.

FaaS is a popular choice for several different use cases. If you have an application that serves only static content, for example, FaaS will ensure that the appropriate resources and infrastructure are provisioned, no matter how much load your application is under. The ETL (extract, transform, load) data management process is another excellent use case for FaaS: instead of running 24/7/365, your ETL jobs can spin up when you need to move information into your data warehouse, so you only pay for the run instances you actually need.


What are Containers?

Containers are software “packages” that combine an application’s source code with the libraries, frameworks, dependencies, and settings that are required to use it successfully. This ensures that a software application will always be able to run and behave predictably, no matter in which environment it is executed.

Products such as Docker and Kubernetes have popularized the use of containers among companies of all sizes and industries. 47 percent of IT leaders plan to use containers in a production environment, while another 12 percent already have.

Serverless Abstraction with Containers

The goal of both serverless abstraction and containers is to simplify the development process by removing the need to perform much of the tedious drudgery and technical overhead. Indeed, nothing prevents developers from using both containers and serverless abstraction in the same project.

Developers can make use of a hybrid architecture in which both the serverless and container paradigms complement each other, making up for the other’s shortcomings. For example, developers might build a large, complex application that mainly uses containers, but that transfers responsibility for some of the backend tasks to a serverless cloud computing platform.

In light of this natural relationship, it’s no surprise that there are a growing number of cloud offerings that seek to unite serverless and containers. For example, Google Cloud Run is a cloud computing platform from Google that “brings serverless to containers.”

Google Cloud Run is a fully managed platform that runs and automatically scales stateless containers in the cloud. Each container can be easily invoked with an HTTP request, which means that Google Cloud Run is also a FaaS solution, handling all the common tasks of infrastructure management.

Because Google Cloud Run is still in beta and under active development, it might not be the best choice for organizations looking for maximum stability and security. In this case, companies might turn to a Google Cloud Run alternative: a serverless platform offering a multi-cloud, Docker-based job processing service. The flagship product, IronWorker, is a task queue solution for running containers at scale. No matter what your IT setup, IronWorker can work with you: from on-premises IT to shared cloud infrastructure to a public cloud such as AWS or Microsoft Azure.



Although they’re often thought of as opposing alternatives, the launch of Google Cloud Run and its alternatives proves that serverless abstraction and containers can actually work together in harmony. Interested in learning more about which serverless/container solution is right for your business needs and objectives? Speak with a knowledgeable, experienced technology partner who can help you down the right path.

What is a Docker Image? (And how do you use one with IronWorker?)

What is a Docker image?

Love them or hate them, containers have become part of the infrastructure running just about everything. From Kubernetes to Docker, almost everyone has their version of containers. The most commonly used is still Docker. IronWorker was among the very first to combine serverless management with containers. In this article we will give a high-level overview of what a Docker image is, and how IronWorker uses them.

So, What is a Docker image?

To start, we need an understanding of the Docker nomenclature and environment. There is still no clear consensus on terminology when it comes to containers: what Docker calls one thing, Google calls another, and so on. We will focus only on Docker here (for more on Docker vs. Kubernetes, read here).

Docker has three main components that we should know about in relation to IronWorker:

  1. Dockerfile
  2. Docker image
  3. Docker container

1) Dockerfile

A Dockerfile is the set of instructions used to create a Docker image.

Let’s keep it simple. Dockerfiles are configuration files that “tell” Docker images what to install, update, and so on. Basically, the Dockerfile specifies what to build to produce the Docker image.

2) Docker Image

A Docker image is the result of the build steps outlined in the Dockerfile. It is helpful to think of images as templates created by Dockerfiles. Images are arranged in layers automatically; each layer depends on the layer below it, and each layer is more abstracted than the one beneath.

By abstracting the actual “instructions” (remember the Dockerfile?), an environment is created that can function with its resources isolated. While virtual machines rely on a full set of OS-level resources, containers eliminate this overhead. In turn, this creates a lightweight and highly scalable system. IronWorker takes these images and begins the process of creating and orchestrating complete containers. What exactly is the difference between a Docker image and a Docker container? Let’s see.

3) Docker Containers

Finally, we come to the containers. To simplify, we can say that when a Docker image is instantiated, it becomes a container. By creating an instance that draws on system resources like memory, the container begins to carry out whatever processes are packaged within it. While separate image layers may have different purposes, Docker containers are formed to carry out single, specific tasks. Think of a bee versus a beehive: individual workers carry out asynchronous tasks to achieve a single goal. In short, containers are packages that hold all of the dependencies required to run an application.
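The image-versus-container relationship can be illustrated with a plain-Python analogy. This is an analogy only, not how Docker is actually implemented; the class names and layer labels are invented for the sketch.

```python
# Analogy in plain Python: an image is an inert template; instantiating it
# yields a container that does the actual work. (Not Docker's real internals.)

class Image:
    def __init__(self, layers):
        self.layers = layers          # ordered layers, each built on the one below

    def instantiate(self):
        return Container(self)        # the image itself stays inert

class Container:
    def __init__(self, image):
        self.image = image

    def run(self, task):
        # A live instance drawing on system resources to do one specific job.
        return f"ran '{task}' with layers {self.image.layers}"

img = Image(["base-os", "runtime", "app"])
ctr = img.instantiate()
print(ctr.run("resize-photos"))
# → ran 'resize-photos' with layers ['base-os', 'runtime', 'app']
```

After `instantiate()` the `Image` object is never consulted again except as a reference, mirroring how a Docker image sits inert once its containers are running.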

After the container has run, the Docker image is inert and inactive: it has carried out its purpose and now serves only as a meta reference.

IronWorker and Docker

So, you have your containers configured and everything is ready to go. What next? While Docker containers can function on their own, tasks like scaling workloads are much faster, more reliable, and easier with an orchestrator. IronWorker is one such container orchestrator, with some unique properties.

An orchestrator adds another layer of abstraction to implementing and running containers. This has become known as “serverless” in recent years. While there is no such thing as truly serverless computing, the term simply means there is no server management involved. By this point in the configuration, we have likely all but forgotten about our original Docker image.

Forgetting about managing a server to react to spikes in traffic or other processing needs greatly simplifies a developer’s job. Tasks and other processes are scaled automatically. At the same time, this allows for detailed analytics. Because the containers are managed by IronWorker, whether they are short-lived or take days, the jobs are completed with minimal developer input after the initial setup.

What about migrating to other clouds or on-premise?

Traditionally, containers have been cloud-based. As new options develop beyond just Amazon Web Services, the need for flexible deployment tools increases. DevOps obviously changes frequently, sometimes even daily. One of the key benefits of IronWorker is that exporting your settings (as Docker images) and continuing on, either redundantly or in new iterations, in varying environments is the easiest in the marketplace. This includes deploying fully on-premises. This freedom from vendor lock-in and readiness for future needs is what separates IronWorker from the rest.

Start IronWorker now with a free 14-day trial here.

Google Cloud Run: Review and Alternatives


Google Cloud Run is a new cloud computing platform that’s hot off the presses from Google, first announced at the company’s Google Cloud Next conference in April 2019. Google Cloud Run has generated a lot of excitement (and a lot of questions) among tech journalists and users of the public cloud alike, even though it’s still in beta.

We will discuss the ins and outs of Google Cloud Run in this all-in-one guide, including why it appeals to many Google Cloud Platform customers, what its features are, and how it compares to the alternatives.

What Is Google Cloud Run (And How Does It Work?)

What is serverless computing?

To answer the question “What is Google Cloud Run?,” we first need to define serverless computing.

Often just called “serverless,” serverless computing is a cloud computing paradigm that frees the user from the responsibility of purchasing or renting servers to run their applications on.

(Actually, the term “serverless” is a bit of a misnomer: The code still runs on a server, just not one that the user has to worry about.)

Cloud computing has soared in popularity over the past decade, thanks in large part to its increased convenience and lower maintenance requirements. Traditionally, however, users of cloud services have still needed to set up a server, scale its resources when necessary, and shut it down when they’re done. This has all changed with the arrival of serverless.

The phrase “serverless computing” is applied to two different types of cloud computing models:

  • BaaS (backend as a service) outsources the application backend to the cloud provider. The backend is the “behind the scenes” part of the software for purposes such as database management, user authentication, cloud storage, and push notifications for mobile apps.
  • FaaS (function as a service) still requires developers to write code for the backend. The difference is this code is only executed in response to certain events or requests. This enables you to decompose a monolithic server into a set of independent functionalities, making availability and scalability much easier.

You can think of FaaS serverless computing as like a water faucet in your home. When you want to take a bath or wash the dishes, you simply turn the handle to make it start flowing. The water is virtually infinite, and you stop when you have as much as you need, only paying for the resources that you’ve used.

Cloud computing without FaaS, by contrast, is like having a water well in your backyard. You need to take the time to dig the well and build the structure, and you only have a finite amount of water at your disposal. In the event that you run out, you’ll need to dig a deeper well (just like you need to scale the server that your application runs on).

Regardless of whether you use BaaS or FaaS, serverless offerings allow you to write code without having to worry about how to manage or scale the underlying infrastructure. For this reason, serverless has come into vogue recently. In a 2018 study, 46 percent of IT decision-makers reported that they were using or evaluating serverless.

What are containers?


Now that we’ve defined serverless computing, we also need to define the concept of a container. (Feel free to skip to the next section if you’re very comfortable with your knowledge of containers.)

In the world of computing, a container is an application “package” that bundles up the software’s source code together with its settings and dependencies (libraries, frameworks, etc.). The “recipe” for building a container is known as the image. An image is a static file that is used to produce a container and execute the code within it.

One of the primary purposes of containers is to provide a familiar IT environment for the application to run in when the software is moved to a different system or virtual machine (VM).

Containers are part of a broader concept known as virtualization, which seeks to create a virtual resource (e.g., a server or desktop computer) that is completely separate from the underlying hardware.

Unlike servers or machine virtualizations, containers do not include the underlying operating system. This makes them more lightweight, portable, and easy to use.

When you say the word “container,” most enterprise IT staff will immediately think of one or both of Docker and Kubernetes. These are the two most popular container solutions.

  • Docker is a runtime environment that seeks to automate the deployment of containers.
  • Kubernetes is a “container orchestration system” for Docker and other container tools, which means that it manages concerns such as deployment, scaling, and networking for applications running in containers.

Like serverless, containers have dramatically risen in popularity among users of cloud computing in just the past few years. A 2018 survey found that 47 percent of IT leaders were planning to deploy containers in a production environment, while 12 percent already had. Containers enjoy numerous benefits: platform independence, speed of deployment, resource efficiency, and more.

Containers vs. serverless: A false dilemma

Given the massive success stories of containers and serverless computing, it’s hardly a surprise that Google would look to combine them. The two technologies were often seen as competing alternatives before the arrival of Google Cloud Run.

Both serverless and containers are intended to make the development process less complex. They do this by automating much of the busy work and overhead. But they go about it in different ways. Serverless computing makes it easier to iterate and release new application versions, while containers ensure that the application will run in a single standardized IT environment.

Yet nothing prevents cloud computing users from combining both of these concepts within a single application. For example, an application could use a hybrid architecture, where containers can pick up the slack if a certain function requires more memory than the serverless vendor has provisioned for it.

As another example, you could build a large, complex application that mainly has a container-based architecture, but that hands over responsibility for some backend tasks (like data transfers and backups) to serverless functions.

Rather than continuing to enforce this false dichotomy, Google realized that serverless and containers could complement one another, each compensating for the other one’s deficiencies. There’s no need for users to choose between the portability of containers and the scalability of serverless computing.

Enter Google Cloud Run…

What is Google Cloud Run?

In its own words, Google Cloud Run “brings serverless to containers.” Google Cloud Run is a fully managed platform that is capable of running Docker container images as a stateless HTTP service.

Each container can be invoked with an HTTP request. All the tasks of infrastructure management–provisioning, scaling up and down, configuration, and management–are cleared away from the user (as typically occurs with serverless computing).

Google Cloud Run is built on the Knative platform, which is an open API and runtime environment for building, deploying, and managing serverless workloads. Knative is based on Kubernetes, extending the platform in order to facilitate its use with serverless computing.

In the next section, we’ll have more technical details about the features and requirements of Google Cloud Run.

Google Cloud Run Features and Requirements


Google cites the selling points below as the most appealing features of Google Cloud Run:

  • Easy autoscaling: Depending on light or heavy traffic, Google Cloud Run can automatically scale your application up or down.
  • Fully managed: As a serverless offering, Google Cloud Run handles all the annoying and frustrating parts of managing your IT infrastructure.
  • Completely flexible: Whether you prefer to code in Python, PHP, Pascal, or Perl, Google Cloud Run is capable of working with any programming language and libraries (thanks to its use of containers).
  • Simple pricing: You pay only when your functions are running. The clock starts when the function is spun up, and ends immediately once it’s finished executing.

There are actually two options when using Google Cloud Run: a fully managed environment or a Google Kubernetes Engine (GKE) cluster. You can switch between the two choices easily, without having to reimplement your service.

In most cases, it’s best to stick with Google Cloud Run itself, and then move to Cloud Run on GKE if you need certain GKE-specific features, such as custom networking or GPUs. However, note that when you’re using Cloud Run on GKE, the autoscaling is limited by the capacity of your GKE cluster.

Google Cloud Run requirements

Google Cloud Run is still in beta (at the time of this writing). This means that things may change between now and the final version of the product. However, Google has already released a container runtime contract describing the behavior that your application must adhere to in order to use Google Cloud Run.

Some of the most noteworthy application requirements for Google Cloud Run are:

  • The container must be compiled for Linux 64-bit, but it can use any programming language or base image of your choice.
  • The container must listen for HTTP requests on the IP address, on the port defined by the PORT environment variable (almost always 8080).
  • The container instance must start an HTTP server within 4 minutes of receiving the HTTP request.
  • The container’s file system is an in-memory, writable file system. Any data written to the file system will not persist after the container has stopped.
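A minimal Python sketch of a server honoring the PORT part of this contract follows. This is a sketch only, not an official Cloud Run sample; the response body and helper name are made up.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def resolve_port(environ=os.environ):
    # Cloud Run injects PORT; fall back to the usual 8080 when it is absent.
    return int(environ.get("PORT", "8080"))

class Handler(BaseHTTPRequestHandler):
    # Stateless: nothing is kept between requests, per the contract above.
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from a stateless container\n")

print("would listen on port", resolve_port())
# HTTPServer(("", resolve_port()), Handler).serve_forever()  # blocks; uncomment to serve
```

Because any local state vanishes between container starts, a real service would keep persistent data in an external store rather than in the writable in-memory file system.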

With Google Cloud Run, the container only has access to CPU resources if it is processing a request. Outside of the scope of a request, the container will not have any CPU available.

In addition, the container must be stateless. This means that the container cannot rely on the state of a service between different HTTP requests, because it may be started and stopped at any time.

The resources allocated for each container instance in Google Cloud Run are as follows:

  • CPU: 1 vCPU (virtual CPU) for each container instance. However, the instance may run on multiple cores at the same time.
  • Memory: By default, each container instance has 256 MB of memory. Google says this can be increased up to a maximum of 2 GB.

Cloud Run Pricing


Google Cloud Run uses a “freemium” pricing model: free monthly quotas are available, but you’ll need to pay once you go over the limit. These types of plans frequently catch users off guard, and they end up paying much more than expected. According to Forrester, a staggering 58% of companies surveyed said their costs exceeded their estimates.

The good news for Google Cloud Run users is that you’re charged only for the resources you use (rounded up to the nearest 0.1 second). This is typical of many public cloud offerings.

The free monthly quotas for Google Cloud Run are as follows:

  • CPU: The first 180,000 vCPU-seconds
  • Memory: The first 360,000 GB-seconds
  • Requests: The first 2 million requests
  • Networking: The first 1 GB egress traffic (platform-wide)

Once you bypass these limits, however, you’ll need to pay for your usage. The costs for the paid tier of Google Cloud Run are:

  • CPU: $0.000024 per vCPU-second
  • Memory: $0.0000025 per GB-second
  • Requests: $0.40 per 1 million requests
  • Networking: Free during the Google Cloud Run beta, with Google Compute Engine networking prices taking effect once the beta is over.

It’s worthwhile to note you are billed separately for each resource; for example, the fact that you’ve exceeded your memory quota does not mean that you need to pay for your CPU and networking usage as well.
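Using the prices and free quotas listed above, a quick Python sketch shows how a month's bill would be computed, with each resource metered separately. The usage figures in the example are made up; networking is omitted since it is free during the beta.

```python
# Cloud Run paid-tier cost from the quotas and prices quoted above.
# Usage numbers below are invented for illustration.

FREE = {"vcpu_s": 180_000, "gb_s": 360_000, "requests": 2_000_000}
PRICE = {"vcpu_s": 0.000024, "gb_s": 0.0000025, "requests": 0.40 / 1_000_000}

def monthly_cost(vcpu_s, gb_s, requests):
    # Each resource is billed separately: exceeding one quota does not
    # trigger charges on the others.
    billable = {
        "vcpu_s": max(0, vcpu_s - FREE["vcpu_s"]),
        "gb_s": max(0, gb_s - FREE["gb_s"]),
        "requests": max(0, requests - FREE["requests"]),
    }
    return sum(billable[k] * PRICE[k] for k in billable)

# Example month: 1M vCPU-seconds, 500k GB-seconds, 3M requests.
print(f"${monthly_cost(1_000_000, 500_000, 3_000_000):.2f}")
# → $20.43
```

Staying inside every quota yields a zero bill, which is the "freemium" behavior described above.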

In addition, these prices may not be definitive. Like the features of Google Cloud Run, prices for Google Cloud are subject to change once the platform leaves beta status.

Finally, Cloud Run on GKE uses a separate pricing model that will be announced before the service reaches general availability.

Google Cloud Run Review: Pros and Cons

Because it’s a brand new product that’s still in beta, reputable Google Cloud Run reviews are still hard to find.

Reaction to Google’s announcement has been fairly positive, acknowledging the benefits of combining serverless computing with a container-based architecture. Some users believe that the reasonable prices will be enough for them to consider switching from similar services such as AWS Fargate.

Other users are more critical, however, especially given that Google Cloud Run is currently only in beta. Some are worried about making the switch, given Google’s track record of terminating services such as Google Reader, as well as their decision to alter prices for the Google Maps API, which effectively shut down many websites that could not afford the higher fees.

Given that Google Cloud Run is in beta, the jury is still out on how well it will perform in practice. Google does not provide any uptime guarantees for cloud offerings before they reach general availability.

The disadvantages of Google Cloud Run will likely overlap with the disadvantages of Google Cloud Platform as a whole. These include the lack of regions when compared with competitors such as Amazon and Microsoft. In addition, as a later entrant to the public cloud market, Google can sometimes feel “rough around the edges,” and new features and improvements can take their time to be released.

Google Cloud Run Alternatives

Since this is a comprehensive review of Google Cloud Run, we would be remiss if we didn’t mention some of the available alternatives to the Google Cloud Run service.

In fact, Google Cloud Run shares some of its core infrastructure with two of Google’s other serverless offerings: Google Cloud Functions and Google App Engine.

  • Google Cloud Functions is an “event-driven, serverless compute platform” that uses the FaaS model. Functions are triggered to execute by a specified external event from your cloud infrastructure and services. As with other serverless computing solutions, Google Cloud Functions removes the need to provision servers or scale resources up and down.
  • Google App Engine enables developers to “build highly scalable applications on a fully managed serverless platform.” The service provides access to Google’s hosting and tier 1 internet service. However, one limitation of Google App Engine is that the code must be written in Java or Python and must use Google’s NoSQL database, Bigtable.
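To make the FaaS model concrete, here is a minimal sketch of what an HTTP-triggered function looks like in the Python runtime of Google Cloud Functions. The function name, the `name` query parameter, and the defensive handling of a missing request are our own illustrative choices:

```python
# Minimal sketch of an HTTP-triggered function in the Cloud Functions style:
# the platform invokes the function with a request object; you only write
# the handler, never the server that hosts it.
def hello_world(request):
    name = "World"
    # In the real runtime, `request` is a Flask request; we guard for the
    # case where no query arguments are present.
    if request is not None and getattr(request, "args", None):
        name = request.args.get("name", "World")
    return f"Hello, {name}!"
```

The platform handles provisioning, routing, and scaling; the function only defines what happens per invocation.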

Looking beyond the Google ecosystem, there are other strong options for developers who want to leverage both serverless and containers in their applications.

The most tested Cloud Run alternative is Iron.io, a serverless platform that offers a multi-cloud, Docker-based job processing service. As one of the early adopters of containers, we have been a major proponent of the benefits of both technologies.

The centerpiece of Iron.io’s product line, IronWorker is a scalable task queue platform for running containers at scale. IronWorker offers a variety of deployment options, from shared infrastructure to running the platform in your own in-house IT environment. Jobs can be scheduled to run at a certain date or time, or processed on demand in response to specific events.
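The scheduled-versus-on-demand model described above can be sketched as a priority queue keyed by run time. This is a toy model under our own assumptions, not IronWorker’s client API; the class and method names are illustrative:

```python
import heapq
import time

# Toy model of a task queue supporting both on-demand and scheduled jobs.
# This illustrates the scheduling model only; it is not IronWorker's API.
class TaskQueue:
    def __init__(self):
        self._heap = []  # entries of (run_at_timestamp, sequence, payload)
        self._seq = 0    # tie-breaker preserving FIFO order for equal times

    def enqueue(self, payload, delay=0.0):
        """delay=0 queues the job to run now; delay>0 schedules it for later."""
        heapq.heappush(self._heap, (time.time() + delay, self._seq, payload))
        self._seq += 1

    def due_jobs(self, now=None):
        """Pop and return every job whose scheduled time has arrived."""
        now = time.time() if now is None else now
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[2])
        return ready
```

A worker loop would repeatedly call `due_jobs()` and run each payload in its own container; jobs scheduled for the future stay in the heap until their time arrives.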

In addition to IronWorker, we also provide IronFunctions, an open-source serverless microservices platform that uses the FaaS model. Unlike services such as AWS Lambda, IronFunctions is cloud agnostic and can work with any public, private, or hybrid cloud environment. Indeed, Iron.io allows AWS Lambda users to easily export their functions into IronFunctions, which helps avoid vendor lock-in. IronFunctions uses Docker containers as the basic unit of work, which means you can work with any programming language or library that fits your needs.


Google Cloud Run represents a major development for many customers of Google Cloud Platform who want to use both serverless and container technologies in their applications. However, Google Cloud Run is only the latest entrant into this space, and may not necessarily be the best choice for your company’s needs and objectives.

If you want to determine which serverless + container solution is right for you, speak with a skilled, knowledgeable technology partner like Iron.io who can understand your individual situation. Whether it’s our own IronWorker solution, Google Cloud Run, or something else entirely, we’ll help you get started on the right path for your business.

Introducing: Computerless™

Iron was one of the pioneers of Serverless, so we’re excited to announce that we’ll also be one of the first companies to offer the next generation of compute. It’s called Computerless™.

Unlike Serverless, this technology removes the physical machine completely.  Our offering piggy-backs off the recent developments in fiber optic technology developed at the University of Oxford.  If you haven’t heard about this breakthrough, we’ll do our best to explain:

Researchers have found a way to control how light travels at the molecular level, thus being in complete control of the resulting attenuation.  Molecular gates can then be created, and state stored in finite wavelengths. It’s somewhat equivalent to qubits in quantum computing, but in the case of optical fiber, it’s a physical reality.

The end result of this technological release allows for computers to be fully encapsulated in fiber optic cable.  The usual components needed are now mapped 1-to-1, via light. This has allowed Iron’s infrastructure to completely change.  While we’ve run our infrastructure on public clouds like AWS and GCP in the past, we’ve been able to leave that all behind. We’re now able to push our entire suite of products into optical cable itself:

Iron’s new and improved infrastructure on a cheap plot of land in Arkansas

In the next few months, we’ll be pushing all of our customers’ sensitive data into the cables shown above, as well as running all Worker jobs through them. We’re pretty sure the cables we purchased are for multi-tenant applications, so you can probably rest assured that we’re doing the right thing. In fact, NASA has already expressed an interest in licensing this technology from Iron. Other interested parties include the government of French Guiana and defense conglomerate Stark Industries.

Researchers have kind-of concluded that this technology is ready for prime time, and also are quick to state the fact that in 1998, The Undertaker threw Mankind off Hell In A Cell, and plummeted 16 ft through an announcer’s table.