AWS Fargate: Overview and Alternatives


AWS Fargate is a serverless container management service (container as a service) that allows developers to focus on their applications rather than their infrastructure. While AWS Fargate does simplify container orchestration, it leaves gaps that IronWorker fills.

You should be paying less for your AWS Fargate workloads. Efficiency-minded enterprises are leaving Fargate for IronWorker. Speak to us to find out why.

What are containers?

Before we talk about AWS Fargate, let’s talk about making software and containers. Making software applications behave predictably on different computers is one of the biggest challenges for developers. Software may need to run in multiple environments: development, testing, staging, and production. Differences between these environments can cause unexpected behavior that is very hard to track down.

To solve these challenges, more and more developers are using a technology called containers. Each container encapsulates an entire runtime environment. This includes the application itself, as well as the dependencies, libraries, frameworks, and configuration files that it needs to run.
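
As a rough illustration (the file names and the Python base image here are hypothetical, not taken from the article), a container image bundles the code and its runtime into one portable artifact:

    # Dockerfile: one image carries the runtime environment, the application
    # itself, and the command used to start it.
    FROM python:3.11-slim
    COPY app.py /app/app.py
    CMD ["python", "/app/app.py"]

    # Build the image once, then run it the same way on any machine with Docker.
    docker build -t demo-app .
    docker run --rm demo-app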

Docker popularized containers, and Kubernetes became the best-known way to orchestrate them, but they are by no means the only options. These containers are then run by container management services. For example, IronWorker, Iron.io’s container management service, uses Docker containers.

What is AWS Fargate?

Amazon’s first entry into the container market was Amazon Elastic Container Service (ECS). While many customers saw value in ECS, this solution often required a great deal of tedious manual configuration and oversight. For example, some containers may have to work together despite needing entirely different resources.

Performing all this management is the bane of many developers and IT staff. It requires a great deal of resources and effort, and it takes time away from what’s most important: deploying applications.

In order to solve these problems, Amazon has introduced AWS Fargate. According to Amazon, Fargate is “a compute engine for Amazon ECS that allows you to run containers without having to manage servers or clusters.”

Fargate separates the task of running containers from the task of managing the underlying infrastructure. Users can simply specify the resources that each container requires, and Fargate will handle the rest. For example, there’s no need to select the right server type, or fiddle with complicated multi-layered access rules.
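
As a rough sketch of what that looks like with the AWS CLI (the task family, image name, and sizes below are placeholders, not values from the article):

    # Register a Fargate-compatible task: you state the CPU and memory the
    # container needs, and AWS provisions matching capacity for you.
    aws ecs register-task-definition \
      --family demo-app \
      --requires-compatibilities FARGATE \
      --network-mode awsvpc \
      --cpu "256" --memory "512" \
      --container-definitions '[{"name":"app","image":"demo-app:latest","essential":true}]'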

AWS Fargate vs ECS vs EKS


Besides Fargate, Amazon’s other container offerings are ECS and EKS (Elastic Container Service for Kubernetes). ECS and EKS are largely for users of Docker and Kubernetes, respectively, who don’t mind doing the “grunt work” of manual configuration and container orchestration themselves.

One advantage of Fargate is that you don’t have to start out using it as an AWS customer. Instead, you can begin with ECS or EKS and then migrate to Fargate if you decide that it’s a better fit.

In particular, Fargate is a good choice if you find that you’re leaving a lot of compute power or memory on the table. Unlike ECS and EKS, where you pay for EC2 instances whether or not they’re busy, Fargate charges you only for the CPU and memory your running tasks request.

AWS Fargate: Pros and Cons

AWS Fargate is an exciting technology, but does it really live up to the hype? Below, we’ll discuss some of the advantages and disadvantages of using AWS Fargate.

Pros:

    • Less Complexity
    • Better Security
    • Lower Costs (Maybe)

Cons:

    • Less Customization
    • Higher Costs (Maybe)
    • Regional Availability

Pro: Less Complexity

These days, tech companies are offering everything “as a service,” taking the complexity out of users’ hands. There’s software as a service (SaaS), infrastructure as a service (IaaS), platform as a service (PaaS), and dozens of other buzzwords.

In this vein, Fargate is a Container as a Service (CaaS) technology. You don’t have to worry about where you’ll deploy your containers, or how you’ll manage and scale them. Instead, you can focus on defining the right parameters for your containers (e.g. compute, storage, and networking) for a successful deployment.
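
For instance, launching a previously registered task on Fargate can be as simple as the following (a hedged sketch; the cluster name, task definition, and subnet ID are placeholders):

    # Run the task on Fargate: you supply the networking details, and there are
    # no EC2 instances to choose, patch, or scale.
    aws ecs run-task \
      --cluster demo-cluster \
      --launch-type FARGATE \
      --task-definition demo-app \
      --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123abcd],assignPublicIp=ENABLED}'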

Pro: Better Security

Due to their complexity, Amazon ECS and EKS present a few security concerns. Having multiple layers of tasks and containers in your stack means that you need to handle security for each one.

With Fargate, however, the security of your IT infrastructure is no longer your concern. Instead, you embed security within the container itself. You can also combine Fargate with container security companies such as Twistlock. These companies offer products for guarding against attacks on running applications in Fargate.

Pro: Lower Costs (Maybe)

If you’re migrating from Amazon ECS or EKS, then Fargate could be a cheaper alternative. This is for two main reasons:

    • As mentioned above, Fargate charges you only when your container workloads are running inside the underlying virtual machine. It does not charge you for the total time that the VM instance is running.
    • Fargate does a good job at task scheduling, making it easier to start and stop containers at a specific time.

Want some more good news? In January 2019, Fargate users saw a major price reduction that slashed operating expenses by 35 to 50 percent.

Con: Less Customization

Of course, the downside of Fargate is that you sacrifice customization options for ease of use. As a result, Fargate is not well-suited for users who need greater control over their containers. These users may have special requirements for governance, risk management, and compliance that require fine-tuned control over their IT infrastructure.

Con: Higher Costs (Maybe)

Sure, Fargate can be a cost-saving opportunity in the right situation when switching from ECS or EKS. For simpler use cases, however, Fargate may actually end up being more expensive. Amazon charges Fargate users a higher per-hour rate than ECS and EKS users, in exchange for taking the complexity of managing your containers’ infrastructure off your hands.

In addition, running your container workloads in the cloud will likely be more expensive than operating your own infrastructure on-premises. What you gain in ease of use, you lose in flexibility and performance.

Con: Regional Availability

AWS Fargate is slowly rolling out across Amazon’s cloud data centers, but it’s not yet available in all regions. As of June 2020, Fargate for Amazon EKS is not available in the following regions (regions marked with an asterisk also lack Fargate for Amazon ECS):

    • Northern California
    • Montreal
    • São Paulo
    • GovCloud (US-West and US-East)
    • London
    • Milan*
    • Paris
    • Stockholm
    • Bahrain
    • Cape Town
    • Osaka*
    • Seoul
    • Mumbai
    • Hong Kong
    • Beijing
    • Ningxia

AWS Fargate Reviews


Even though AWS Fargate is still a new technology, it has earned mostly positive feedback on the tech review platform G2 Crowd. As of this writing, AWS Fargate has received an average score of 4.5 out of 5 stars from 12 G2 Crowd users.

Multiple users praise AWS Fargate’s ease of use. One customer says that Fargate “made the job of deploying and maintaining containers very easy.” A second customer praises Fargate’s user interface, calling it “simple and very easy to navigate.”

Another reviewer calls AWS Fargate an excellent solution: “I have been working with AWS Fargate for 1 or 2 years, and as a cloud architect it’s a boon for me…  It becomes so easy to scale up and scale down dynamically when you’re using AWS Fargate.”

Despite these advantages, AWS Fargate customers do have some complaints:

    • One user wishes that the learning curve were easier, writing that “it requires some amount of experience on Amazon EC2 and knowledge of some services.”
    • Multiple users mention that the cost of AWS Fargate is too high for them: “AWS Fargate is costlier when compared with other services”; “the pricing isn’t great and didn’t fit our startup’s needs.”
    • Finally, another user has issues with Amazon’s support: “as it’s a new product introduced in 2017, the quality of support is not so good.”

AWS Fargate Alternatives: AWS Fargate vs Iron.io

While AWS offers Fargate as a serverless container platform running on Docker, Iron.io offers an industry-leading alternative called IronWorker. IronWorker is a container-based platform with Docker support for performing work on demand. Just like AWS Fargate, IronWorker takes care of all the messy questions about servers and scaling. All you have to do on your end is develop your applications and then queue up tasks for processing.

Why select IronWorker over AWS Fargate?

IronWorker has been helping customers grow their businesses since 2015. Despite the similarities between IronWorker and AWS Fargate, IronWorker has the advantage in:

    • Support
    • Simplicity
    • Deployment Options

Support

We understand that every application and project is different. Luckily, Iron.io offers a “white glove” approach, developing custom configurations to get your tasks up and running. No project is too big, so please contact our development team to get your project started. We also understand that documentation is critical to any developer, which is why we’ve built a Dev Center to help answer your questions.

Simplicity

When you start your free 14-day trial, you’ll get to interact with the simple, easy-to-use Iron.io dashboard. Once you have your project running, you’ll receive detailed analytics providing both a high-level synopsis and granular metrics.

Deployment Options

As of June 2020, Fargate’s container scaling technology is not available for on-premises deployments. On the other hand, one of the main goals of Iron.io is for the platform to run anywhere. Iron.io offers a variety of deployment options to fit every company’s needs:

    • Shared
      • Users can run containers on Iron.io’s shared cloud infrastructure.
    • Hybrid
      • Users benefit from a hybrid cloud and on-premises solution. Containers run on in-house hardware, while Iron.io handles concerns such as scheduling and authentication. This is a smart choice for organizations who already have their own server infrastructure, or who have concerns about data security in the cloud.
    • Dedicated
      • Users can run containers on Iron.io’s dedicated server hardware, making their applications more consistent and reliable. With Iron.io’s automatic scaling technology, users don’t have to worry about manually increasing or decreasing their usage.
    • On-premises
      • Finally, users can run IronWorker on their own in-house IT infrastructure. This is the best choice for customers who have strict regulations for compliance and security. Users in finance, healthcare, and government may all need to run containers on-premises.

Conclusion

Like it or not, AWS Fargate is a leader in serverless container management services. As we’ve discussed in this article, however, it’s certainly not the right choice for every company. It’s true that Fargate often provides extra time and convenience. However, Fargate users also sacrifice control and potentially incur higher costs.

As an alternative to AWS Fargate, IronWorker has proven itself an enterprise solution for companies such as Hotel Tonight, Bleacher Report, and Untappd. IronWorker, made by Iron.io, offers a mature, feature-rich alternative to Fargate, ECS, and EKS. Users can run containers on-premises, in the cloud, or in a hybrid of the two. Like Fargate, IronWorker takes care of infrastructure questions such as servers, scaling, setup, and maintenance. This gives your developers more time to spend on deploying code and creating value for your organization.

Looking for an AWS Fargate alternative?

Speak to us to learn about overcoming the issues associated with Fargate.

Save the Date: DockerCon 2019 Meet-up & Drink-Up

Mark your calendars: next week brings an awesome 3-day conference that gathers the container technology community to learn, share, and connect. DockerCon 2019 will consist of workshops, keynotes, an expo hall, and of course a few drink-ups. (Our team will be co-hosting one of them!) You can expect a variety of attendees from across the world, including C-level executives, systems admins, architects, and developers.

Docker and Iron

So, what is Docker to Iron? Iron’s Worker product is a hosted background job solution that lets you run your containers with dynamic scale and detailed analytics. It can run short-lived containers quickly, or even containers that need to work across multiple days. Think of it as serverless containers. (Plus, it’s deployable in any environment: cloud, hybrid, or on-premises.) While we have expanded its capabilities, IronWorker was built around Docker containers and has a long-standing relationship with Docker, so it’s only natural for Iron.io to be there.

The Drink-up

Iron.io is teaming up with CircleCI and Sauce Labs for a drink-up on Wednesday, May 1st, from 5:00 PM to 8:00 PM. Whether you are attending DockerCon or will just be in the Bay Area, we would love to meet up. It will not only be a great time, but also a great opportunity to network and chat over drinks. Rumor has it that vintage board games might make an appearance as well.

We look forward to seeing you there!

Don’t forget to RSVP to the drink-up so that we can check you in at the door!

Top 10 Uses of a Worker System

A worker system is an essential part of any production-scale cloud application. The ability to run tasks asynchronously in the background, process tasks concurrently at scale, or schedule jobs to run on regular schedules is crucial for handling the types of workloads and processing demands common in a distributed application.

At Iron.io, we’re all about scaling workloads and performing work asynchronously, and we hear from our customers on a continuous basis. Almost every customer has a story about how they use the IronWorker platform to gain greater agility, eliminate complexity, or just get things done. We wanted to share a number of these examples so that other developers have answers to the simple questions “How do I use a worker system?” and “What can I do with a task queue?”

Batch Processing: A Tutorial on Workers, Queueing and Gelato


Batch processing is one of the earliest forms of data processing, used by Herman Hollerith’s Tabulating Machine as far back as 1890. Batch processing was developed to take advantage of scarce computing resources: it avoids idling these expensive resources by queueing instructions to process data without manual user intervention, and it can shift workload to times when resources are less scarce [1].

Today, we can leverage modern architectural patterns like worker systems, message queues and the cloud to level-up these advantages and simplify our code. Let’s look at an example of queueing and workers using a calorie-dense metaphor: gelato.


Using our favorite local gelato shop as an example, we explore how architectural concepts like queueing and workers can affect a given task. We chose gelato because:

  • Each order takes time to set up. You must examine the menu and display, choose and order.
  • Each order takes time to process. The time spent preparing an order can vary based on the order size and complexity, just as job size can vary in a worker system.
  • Adding additional workers helps. The queue of customers will be processed faster, in the same way that adding more workers to a particular batch processing job will make the queue shrink faster.
  • Like your infrastructure, gelato must be kept cold, and is delicious when consumed [2].


[1] https://en.wikipedia.org/wiki/Batch_processing

[2] This is not true. Iron.io assumes no responsibility if you eat your infrastructure.

How to Bake Your Own Pi


It’s 3/14, and that means it’s International Pi Day, a day when we rejoice over the transcendental number that seems to be everywhere.

So, why am I writing about pi on the Iron.io blog? It turns out pi is the best (read: the absolute best!) way to test out computers. Its digits are sufficiently random, computing them requires large amounts of memory and CPU, and the results are easy to check.

I first learned about this aspect of pi while reading the book Here’s Looking at Euclid. There, I also learned that pi beyond 40 digits or so isn’t all that useful. So, why do we know pi into the billions of digits? To quote a many-time world record holder:

“I have no interest as a hobby for extending the known value of pi itself. I have a major interest for improving the performance of computation. [..] Mathematical constants like the square root of 2, e, and gamma are some of the candidates, but pi is the most effective.”

How To Make Pi

I’m on board! I want to make pi myself. If pi is a great way to test any computer, why not use it to test a first-class distributed computing solution like IronWorker?

Humans have known about pi for a long time, which is part of what makes it a great computation: we have multiple recipes for baking the same dish, so it’s easy to check our work by comparing the results of two algorithms.
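
For a quick taste of that, here are two classic recipes computed with the standard bc calculator (the digit count is arbitrary); because they are independent formulas, matching output is a good sanity check:

    # Recipe 1: pi = 4 * arctan(1)
    echo "scale=50; 4*a(1)" | bc -l

    # Recipe 2 (Machin's formula): pi = 16*arctan(1/5) - 4*arctan(1/239)
    echo "scale=50; 16*a(1/5) - 4*a(1/239)" | bc -l

    # The two results should agree in essentially every printed digit.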

So, what goes into pi? How can I cook this dish? Let’s check out a few of the best recipes.

The Next Frontier: Learning Microservices in the Classroom


As a Customer Success engineer here at Iron.io, I’ve been fortunate enough to see people using Iron.io in ways I never thought about. It’s actually one of my favorite parts of my job.

Recently, I was chatting with a customer who mentioned his students were using Iron.io in their final project. This piqued my interest, so I interviewed Soumya Ray, an associate professor at National Tsing Hua University in Taiwan, about his experience. Professor Ray’s Service-Oriented Architecture class is an 18-week course that takes students from idea creation to final product. And, as a cherry on top, the class has students create the building blocks of their own startup with zero dollars spent.

Introducing Custom Docker Images, Private Docker Repositories and Environment Variables


We’re happy to announce three awesome new IronWorker features:

  • Custom Docker Images for all and Docker is now the default code packaging mechanism
  • Support for private Docker images on any Docker Registry, including Docker Hub
  • Support for custom environment variables

I’ll explain each of these features in more detail below.

Custom Docker Images for All!

Previously only available to customers on dedicated plans, this is now available to everyone! You can create custom Docker images and run them at scale on the Iron.io Platform. And as usual, you don’t need to think about servers or scaling or managing anything, you just queue up jobs/tasks. Jobs are executed using your Docker image + a message/payload that defines that job. 

To read how to make your own Docker Worker using the language of your choice, please see our Docker Worker GitHub repository for full documentation for most languages. 

There is a 200MB size limit on custom images for our small/free plans, so you’ll definitely want to use the iron/x base images from our examples to keep yours small. If you need bigger images, you can upgrade your account.

Once you’ve pushed your image to Docker Hub, simply let Iron.io know about it:
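
(A rough sketch of the registration command; the image name is a placeholder, and the Dev Center has the exact syntax.)

    # Register the Docker Hub image with IronWorker so tasks can be queued against it.
    iron register myusername/myworker:latest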

Then you can start queuing up jobs for it.
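
(Again a sketch: the payload and worker name are placeholders, and the exact flags may vary by CLI version.)

    # Queue a task against the registered image, passing a JSON payload.
    iron worker queue --payload '{"customer_id": 42}' myusername/myworker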

You can see the full API here and client libraries for the language of your choice here.

Private Docker Repositories

Not only can you use your own custom images, you can also store those images privately and still use them on IronWorker. Obviously, you don’t want other people accessing the code inside your image or any config files you might have put in it (although we recommend using environment variables for those; see the next section), so you can use a private Docker repository to keep everything private.

To use your private images on Iron.io, you need to login like you do with Docker:
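
(A hedged sketch: the username is a placeholder, and the exact command for handing the registry credentials to Iron.io is documented in the Dev Center.)

    # Authenticate against the registry that hosts your private image; you'll be
    # prompted for the password. Iron.io needs these registry credentials so it
    # can pull the image when your tasks run.
    docker login -u myusername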

Then just do everything else like normal.

Environment Variables

Instead of uploading a config file as JSON, we’ve added support for custom environment variables that are passed into your Docker container at runtime. This lets you set options that you don’t want to bake into your Docker image, such as database credentials or variables that change between environments (development vs. production, for instance).

These are set by using -e flags on iron register, for example:
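
(The variable names, values, and image name below are placeholders.)

    # Pass runtime configuration as environment variables rather than baking it
    # into the image.
    iron register \
      -e DATABASE_URL="postgres://db.example.com/mydb" \
      -e LOG_LEVEL="info" \
      myusername/myworker:latest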

That’s it for Now

These new features let you use Docker to its full potential. Now that they’re in place, there are a lot of exciting new things to come that will build on them.

Best Practices and Anti-Patterns for Workers and MQs


Thanks to Ruth Hartnup for the base image! CC BY 2.0

If you’ve been programming for a while, it’s probable that someone, somewhere, has recommended the Gang of Four book.

The book dissects object-oriented programming. It lists numerous ways of royally messing things up, but its claim to fame is that it also lists ways to do it right! These well-tested paths to success often come with explanations of when to use them and why they’re good at avoiding common pitfalls.

These are design patterns. They’re embedded in the culture of programming, and they’re an amazing way to learn from others’ mistakes. At the outset of any project, a lot of paths are open to you. Design patterns help you tell the dark paths apart from the healthy, low-stress approaches.

Today, we’re releasing our very own set of best practices and anti-patterns in the form of a white paper. It’s a quick read and will save you time as you tinker on your own workers and message queues. So, what do you have to lose?


Docker + Iron.io = Super Easy Batch Processing


There are a ton of use cases for batch processing and every business is probably doing it in some way or another. Unfortunately, many of these approaches take much longer than need be. In a world of ever increasing data, the old way can now hinder the flow of business. Some examples of where you’ll see batch processing used are:

    • Image/video processing
    • ETL – Extract Transform Load
    • Genome processing
    • Big data processing
    • Billing (create and send bills to customers)
    • Report processing
    • Notifications (mobile, email, etc.)

We’ve all seen something that was created during a batch process (whether we like it or not).

Now, I’m going to show you how to take a process that would typically take a long time to run, and break it down into small pieces so it can be distributed and processed in parallel. Doing so will turn a really long process into a very quick one.

We’ll use Docker to package our worker and Iron.io to do the actual processing. Why Docker? Because we can package our code and all our dependencies into an image for easy distribution. Why Iron.io? It’s the easiest way to do batch processing: I don’t need to think about servers or deal with distributing my tasks across a bunch of machines.

Alright, so let’s go through how to do our own batch processing.
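
As a preview, the fan-out pattern boils down to splitting the input and queuing one task per piece (a hypothetical sketch; the file names, payload format, and flags are illustrative):

    # Queue one IronWorker task per input chunk so the pieces process in parallel
    # instead of serially on one machine.
    for f in input/chunk_*.csv; do
      iron worker queue --payload "{\"file\": \"$f\"}" myusername/batch-worker
    done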


See How Untappd Processes 100s of 1000s of Updates a Night


Untappd, a mobile check-in app for beer lovers, helps its users share their passion with friends and fellow enthusiasts from around the world.

With millions of users checking-in, tracking and sharing newly discovered beers and beer drinking locations, as well as earning points for coveted badges, the Untappd app’s processing power is tested daily.

Watch the Untappd video to see how the team was able to make the app an integrated part of many beer lovers night out. Continue reading “See How Untappd Processes 100s of 1000s of Updates a Night”