What is a Docker Image? (And how do you use one with IronWorker?)

Love them or hate them, containers have become part of the infrastructure running just about everything. From Kubernetes to Docker, almost everyone has their own version of containers, and the most commonly used is still Docker. IronWorker was among the very first to combine serverless management with containers. In this article, we'll give a high-level overview of what a Docker image is and how IronWorker uses one.

So, What is a Docker image?

To start, we need an understanding of Docker nomenclature and the Docker environment. There is still no clear consensus on terminology when it comes to containers: what Docker calls one thing, Google calls another, and so on. We will focus only on Docker here. (For more on Docker vs. Kubernetes, read here.)

Docker has three main components that we should know about in relation to IronWorker:

  1. Dockerfile
  2. Docker image
  3. Docker container

1) Dockerfile

A Dockerfile is the set of instructions used to create a Docker image.

Let’s keep it simple. A Dockerfile is a plain-text configuration file that “tells” Docker what to install, update, copy, and configure. In other words, the Dockerfile spells out exactly what to build to produce the Docker image.
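
As a minimal sketch, a Dockerfile for a small Python-based worker might look like the following. The file names and packages here are illustrative, not IronWorker requirements:

    # Start from an official, minimal base image
    FROM python:3.7-slim

    # Copy the application code into the image
    WORKDIR /app
    COPY worker.py .

    # Install the dependencies the worker needs
    RUN pip install --no-cache-dir requests

    # The command a container will run when started from this image
    CMD ["python", "worker.py"]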

2) Docker Image

A Docker image is the built result of the steps outlined in the Dockerfile. It is helpful to think of an image as a template created from the Dockerfile. Images are arranged in layers automatically: each layer depends on the layer below it, and each layer is more abstracted than the one beneath it.
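
To make the layering concrete, here is a brief sketch of building an image from a Dockerfile like the one above and then listing its layers. The image name is illustrative:

    # Build an image from the Dockerfile in the current directory
    docker build -t my-worker:latest .

    # Show the layers that make up the image (roughly one per Dockerfile step)
    docker history my-worker:latest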

By abstracting away the actual “instructions” (remember the Dockerfile?), the image defines an environment that can run with its resources isolated. Where virtual machines each carry a full guest operating system, containers share the host’s kernel, which makes for a lightweight and highly scalable system. IronWorker takes these images and begins the process of creating and orchestrating complete containers. What exactly is the difference between a Docker image and a Docker container? Let’s see.

3) Docker Containers

Finally, we come to the containers. To simplify, we can say that when a Docker image is instantiated, it becomes a container. By creating an instance that draws on system resources like memory, the container begins to carry out whatever processes are packaged within it. While separate image layers may serve different purposes, a Docker container is formed to carry out a single, specific task. Think of a bee versus a beehive: individual workers carry out asynchronous tasks to achieve a single goal. In short, containers are packages that hold all of the dependencies required to run an application.

After the container has been run, the Docker image is inert and inactive. The image has carried out its purpose and now serves only as a reference.
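
For instance, instantiating the image built earlier as a container takes a single command (the image name is again illustrative):

    # Create and start a container from the image; --rm removes it when it exits
    docker run --rm my-worker:latest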

IronWorker and Docker

So, you have your containers configured and everything is ready to go. What next? While Docker containers can function on their own, tasks like scaling workloads are much faster, more reliable, and easier with an orchestrator. IronWorker is one such container orchestrator, with some unique properties.

An orchestrator adds another layer of abstraction to implementing and running containers. This has become known as “serverless” in recent years. While there is no such thing as truly serverless computing, the term simply means there is no server management involved. By this point in the configuration, we have likely all but forgotten about our original Docker image.

Not having to manage a server that reacts to spikes in traffic or other processing needs greatly simplifies a developer’s job. Tasks and other processes are scaled automatically, and detailed analytics are available at the same time. Because the containers are managed by IronWorker, whether they are short-lived or run for days, jobs are completed with minimal developer input after the initial setup.

What about migrating to other clouds or on-premise?

Traditionally, containers have been cloud-based. As new options develop beyond Amazon Web Services, the need for flexible, portable tooling increases. DevOps changes frequently, sometimes even daily. One key benefit of IronWorker is that exporting your settings (as Docker images) and continuing on, either redundantly or in new iterations, in varying environments is the easiest in the marketplace. This includes deploying fully on-premises. This freedom from vendor lock-in, both now and for future needs, is what separates IronWorker from the rest.
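
As a hedged sketch of that portability: because the unit of deployment is a standard Docker image, it can be exported from one host and loaded onto any other, whether in another cloud or on-premises:

    # Export the image to a tarball
    docker save -o my-worker.tar my-worker:latest

    # On the destination host, load the image and run it unchanged
    docker load -i my-worker.tar
    docker run --rm my-worker:latest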

Start IronWorker now with a free 14-day trial here.

Iron.io Support for Arm

Since we released our flagship product in early 2011, Iron.io customers have enjoyed hosted solutions tightly coupled with Amazon Web Services (AWS). In addition, customers run Worker on-premises and in their own private clouds.

In the last year, an increasing number of customers have requested support for the Arm architecture, both for on-premises deployments and in the cloud on AWS. Based on that demand, we added Arm support to our roadmap. We’re happy to announce that Worker now supports Arm-based architectures!

Customers that run their own hardware using Worker’s Hybrid deployment method, run Worker completely on-premises, or run on AWS can now start diversifying their container workloads. We already have customers taking advantage of this release, and it greatly increases the variety of workloads that can be run with Worker.

Iron.io Worker Support for AWS EC2 A1

As you might be aware, Amazon announced the new Amazon EC2 A1 instance type in November last year. It is based on AWS Graviton processors. A1 instances deliver significant cost savings for scale-out, Arm-based applications such as web servers, containerized microservices, caching fleets, and distributed data stores that are supported by the extensive Arm ecosystem.

With Arm support, Worker now gives customers the ability to run workloads that require Arm-based binaries. There could also be potential cost savings in moving your current workloads to these new instance types, though that depends on the resource load. It’s a good idea to read through the options (burstability vs. no burstability, pricing, etc.) and test your specific workload before jumping in. Feel free to reach out to us if you’d like to discuss!

Usage

When creating a cluster in Worker, you’ll now see the A1 instance types available. To run your workloads on Arm processors, simply use our new image, iron/runner:arm, rather than our normal iron/runner image. There’s also iron/runner:mplatform for cases where multiple architecture types are in the mix.
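
For example, pulling the Arm runner image and confirming its target architecture might look like this. This is a sketch; the inspect format string is standard Docker tooling, not anything Iron-specific:

    docker pull iron/runner:arm

    # Verify the image targets an Arm architecture
    docker inspect --format '{{.Architecture}}' iron/runner:arm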

CivilMaps with Worker on Arm

CivilMaps is an Iron customer that does edge-based HD mapping and localization for autonomous driving platforms. They run Worker on-premises, which allows for extremely low-latency compute operations. At the end of their complex workflow engine, Worker sits as the data processing backbone, running containerized jobs at high concurrency.

Last year, CivilMaps announced that they’d be moving their internal infrastructure to Arm. A quote about the move:

“Civil Maps is excited to announce that we’ve migrated our edge-based HD mapping and localization solution to the Arm® family of processors. Arm is the licensor to the largest ecosystem of automotive grade system-on-chips (SoC) and system-on-modules (SoM), with its chips already found in 85% of automotive electronic control units (ECU) on the road. Our team sees this as a key step towards building a truly scalable platform for self-driving car developers. The industry still has a long way to go, but we believe that the arrival of cost-effective, production-grade systems for level 4 and 5 autonomous vehicles just got significantly closer.”

Conclusion

In the next few months, we’ll be publishing more blog posts about our Arm support and sharing more customer success stories in depth. Customers are utilizing Worker in many unique ways, and we believe our new support for Arm is going to open the door for many more.

Amazon EKS: Alternatives and Review

Kubernetes is one of the most popular choices for automating and managing Linux container operations. Originally developed by Google, the open-source Kubernetes project is now in use by some of the world’s largest enterprises. These include IBM, Nokia, Comcast, and Samsung.

With the rise of Kubernetes itself, we’ve also seen growth in accompanying services that aim to make it easier for developers to use Kubernetes. Amazon EKS is one such service.

Since its release to the general public in June 2018, Amazon EKS has generated a good deal of buzz among Amazon Web Services customers. But is it worth the hype, and what are the Amazon EKS alternatives out there?

In this blog post, we’ll go over everything you need to know about Amazon EKS: a brief history, the pros and cons, user reviews, and a look at your alternative options.

What is Amazon EKS?

Amazon EKS (Elastic Container Service for Kubernetes) is a managed service from the Amazon Web Services cloud computing platform. Specifically, Amazon EKS aims to make it easier for AWS users to run Kubernetes without needing to install or manage their own Kubernetes clusters.

“What exactly is Kubernetes?” you might ask. Kubernetes is an open-source platform for managing software applications that have been packaged into so-called “containers” along with their libraries, dependencies, and settings. Containers make it easier for developers to ensure that their code behaves predictably even when running in different IT environments.

Amazon EKS takes care of Kubernetes deployment, management, and scaling, freeing users from having to handle these onerous technical details. Microservices, batch processing, and application migration are just a few of the ways that Amazon EKS might help organizations.

Amazon EKS Pros and Cons

Now that we’ve answered the question “What is Amazon EKS?”, we’ll discuss whether the Amazon EKS service actually meets the expectations outlined above. In this section, we’ll go over the pros and cons of Amazon EKS.

Amazon EKS Advantages

  • Good for AWS customers: Amazon EKS may be a wise choice if you’re very sure that you want to stick with AWS well into the future. If you migrate to another public cloud provider like Microsoft or Google, you’ll have to rework your operations all over again.
  • Automated control plane management: The Kubernetes control plane is a collection of processes that are running on a single cluster of computers. Amazon EKS handles the task of control plane management, taking it out of your hands.
  • Serverless architecture: Amazon EKS uses serverless architecture, which means that you don’t have to worry about manually overseeing your server rentals. You can write and deploy code without having to worry about managing or scaling the underlying infrastructure.

Amazon EKS Disadvantages

  • Not “cloud-agnostic”: Amazon EKS is only a solution for those companies that want to perform work on AWS. It’s a poor choice if you want to easily move applications between multiple public cloud providers. You’ll have to handle the task of container orchestration on these other clouds as well.
  • Not dynamic: Even if you want to use Amazon EKS as part of a larger multi-cloud puzzle, you’ll still need to handle the administration part yourself. This can pose challenges for dynamic multi-cloud models, where applications need to move quickly and easily between different cloud providers.
  • No integration: Of course, as an AWS-exclusive service, Amazon EKS doesn’t offer integrations with other managed Kubernetes services, and it isn’t likely to do so any time soon.

Amazon EKS Reviews

In general, Amazon EKS has been well-received by many AWS customers. On the tech review platform G2 Crowd, Amazon EKS reviews currently have an average score of 4.3 out of 5 stars based on 10 user ratings.

According to these reviews, the greatest benefit of Amazon EKS is the ability to abstract away the underlying complexities of Kubernetes. One user says: “The best thing is that I don’t need to install and operate my own Kubernetes control plane. Instead, it makes work easy by giving us an API endpoint from which we can directly connect to the EKS managed control panel.” Another user writes approvingly that Amazon EKS “automatically manages the availability and scalability of the Kubernetes masters.”

However, the Amazon EKS reviews on G2 Crowd also point out two main disadvantages of the service: the pricing and the learning curve. Multiple reviewers note that Amazon EKS can be costly, especially for smaller businesses:

  • “It is a little expensive for business…”
  • “Can get pricy for small businesses”
  • “I dislike the pricing structure – maybe lower prices for smaller-sized businesses and those using it less, so that more could roll it out.”

In addition, some reviewers complain that the Amazon EKS learning curve can be challenging for new users:

  • “It takes a bit of an adjustment to learn the ropes of the whole process and overall general concept.”
  • “Can add more documentation on errors, it was hard to debug some errors. I had to rely on public sites to do it.”
  • “The configuration learning curve can be a bit steep for some.”

Another user frustrated by the Amazon EKS difficulty is Matthew Barlocker, software engineer and CEO at the AWS infrastructure monitoring company Blue Matador. He writes: “I found more negatives than positives… EKS is too complicated to set up to be valuable for newer users, and too fragile to be valuable to a legitimate DevOps person.”

Amazon EKS Alternatives

Given some of the issues discussed above, it’s understandable that some customers might want to find Amazon EKS alternatives.

The two other major cloud players, Microsoft Azure and Google Cloud Platform, both offer Kubernetes services that are very similar to Amazon EKS: Azure Kubernetes Service and Google Kubernetes Engine, respectively. Both offerings are well-reviewed on G2 Crowd, although some users mention having similar learning curve issues.

For the 85 percent of enterprises that operate in multicloud environments, however, services like Amazon EKS and Google Kubernetes Engine may not be enough to keep them satisfied. That’s why Iron.io offers IronWorker. IronWorker is a container-based platform that can easily be configured to work with Kubernetes as well as all the major public cloud providers.

Just like Amazon EKS, IronWorker’s goal is to handle the complicated technical issues with Kubernetes, allowing developers to produce more valuable and meaningful work. IronWorker has a variety of deployment options that can fit the needs of any organization, including shared infrastructure, hybrid cloud, dedicated servers, and on-premises. IronWorker is a mature, feature-rich alternative to Amazon EKS: it lessens the IT burden and lets you focus on higher-quality final products.

Conclusion

Amazon EKS is a popular option for teams that want to simplify their Kubernetes deployments, but it’s not necessarily the best choice for all organizations. For example, companies that are already heavily invested in Microsoft Azure or Google Cloud Platform may opt for the offering from their preferred cloud provider.

Meanwhile, companies that are looking for flexibility across multiple clouds (including private and hybrid cloud setups) would do well to check out services like IronWorker.

Interested in learning more about Iron.io? Give it a test drive with a free, full-feature, no-obligations trial for 14 days. Contact us today to request a demo of IronWorker or IronMQ.

Strategies for Cloud Agnostic Architectures

Introduction

If your business uses cloud computing, as most businesses do these days, it’s very likely that you have at least one public cloud solution. Ninety-one percent of organizations have adopted the public cloud, and a full half of large enterprises now spend more than $1.2 million every year on their public cloud deployments.

The “public cloud” refers to cloud computing services such as storage, software, and virtual machines that are provided by third parties over the internet. Some of the biggest public cloud providers are Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

Increasingly, however, companies are growing interested in a “cloud agnostic” strategy. So what does “cloud agnostic” mean, and how can your own business be cloud agnostic? 

This article has all the answers.

Cloud Agnostic: Definition and Examples

One of the greatest benefits of cloud computing is its flexibility. If you’re running out of storage, for example, your public cloud solution can automatically scale it up for you so that your operations will continue seamlessly.

Being “cloud agnostic” takes this idea of the flexible cloud one step further. As the name suggests, cloud agnostic organizations are those capable of easily running their workloads and applications within any public cloud.

The fact that an organization is “cloud agnostic” doesn’t mean that it’s completely indifferent as to which cloud provider it uses for which workloads. Indeed, the organization will likely have established preferences for its cloud setup, based on factors such as price, region, and the offerings from each provider.

Rather, being cloud agnostic means that you’re capable of switching tracks to a different public cloud provider should the need arise, with minimal hiccups and disruption to your business.

Why Do Companies Want to Be Cloud Agnostic?

It’s hardly surprising that more companies are looking to be cloud agnostic, given that 84 percent of enterprises now use a multi-cloud strategy. This involves using two or more public cloud solutions, allowing you to take advantage of the differentials in features or prices between providers.

Another reason that companies want to be cloud agnostic is to avoid vendor lock-in. Cloud computing has revolutionized the way companies do business by giving them access to more products and services without having to support and maintain their own hardware and infrastructure. However, this increased reliance on cloud computing also comes with the risk of dependency.

Management consulting firm Bain & Company finds that 22 percent of companies see vendor lock-in as one of their top concerns about the cloud. “Vendor lock-in” is the phenomenon in which a business becomes overly dependent on products or services from one of its vendors. This is highly dangerous if the vendor hikes its prices, stops providing a certain offering, or even ceases operations.

The world of cloud computing is rife with vendor lock-in horror stories. One example is that of Nirvanix, a cloud storage firm that went out of business and gave clients only two weeks to move their data. While it might seem impossible for an Amazon or a Google to go out of business, companies like AOL remind us that it’s not unrealistic for a vendor to cut services. By making your company more flexible and adaptable, being cloud agnostic inoculates you against the risk of vendor lock-in.

Cloud Agnostic: Pros and Cons

The Pros of Being Cloud Agnostic

  • No vendor lock-in: As mentioned above, being cloud agnostic makes the risk of vendor lock-in much less likely. Companies that are cloud agnostic can “diversify their portfolio” and become more resilient to failure and changes in the business IT landscape.
  • More customization: Using a strategy that’s cloud agnostic and multi-cloud lets you tweak and adjust your cloud roadmap exactly as you see fit. You don’t have to miss out on a feature that’s exclusive to a single provider just because you’re locked into a different solution.
  • Redundancy: Having systems in place across various clouds means you are covered should any one of them encounter problems.

The Cons of Being Cloud Agnostic

  • Greater complexity: Being cloud agnostic sounds great on paper, but the realities of implementation can be much more difficult. Creating a cloud strategy with portability built in from the ground up generally incurs additional complexity and cost.
  • “Lowest common denominator”: If you focus too much on being cloud agnostic, you may only be able to use services that are offered by all of the major public cloud providers. Even if AWS has a great new feature for your business, for example, you may be reluctant to use it unless you can guarantee that you can replicate it in Microsoft Azure or Google Cloud Platform. While more of a choice in enterprise strategy than a drawback, it is something to be aware of.

Strategies for Being Cloud Agnostic

A number of articles say that being truly cloud agnostic is a “myth.” These pieces argue that “cloud agnostic” is a state that’s not realistic or even desirable for most organizations.

In fact, being entirely cloud agnostic is an ideal that may or may not be achievable for you, and unless you are certain about your future needs, it may not be worth the effort to reach this goal. In large part, the tradeoff comes at the expense of your other IT and cloud objectives.

Nevertheless, there are a number of “low-hanging fruit” technologies that you can adopt on the path toward being cloud agnostic. These will be advantageous for your business no matter where you stand on the cloud agnostic spectrum.

For example, container technologies such as Docker and Kubernetes are an invaluable part of being cloud agnostic. Essentially, a “container” is a software unit that packages source code together with its libraries and dependencies. This allows the application to be quickly and easily ported from one computing environment to another.

Another tactic for being cloud agnostic is to use managed database services. These are public cloud offerings in which the provider installs, maintains, manages, and provides access to a database. The major public clouds, such as AWS, Microsoft Azure, and Google, all offer such services, which at least in theory makes migrating a database between providers practical.

That said, using products such as IronWorker that can deploy on any cloud, including fully on-premises deployments, is the easiest and most cost-effective way to remain cloud agnostic. With virtually one click, you can save your settings and deploy to whatever environment your enterprise wishes. In short, simplicity equals operational cost efficiency.

Conclusion

Technologies such as containers and managed database services will go far toward making your business more flexible and adaptable, even if it never becomes completely cloud agnostic. If you do decide to become a cloud agnostic organization, consider using the services of Iron.io. Set up a consultation with us today to find out how our cloud agnostic IronFunctions platform can help your developers become more productive and efficient.

DockerCon 2019 Unofficial Drink-Up Wrap

In short, it rocked! We had a great time with old and new friends alike. Thanks to everyone who made it awesome, not the least of which were our co-sponsors CircleCI and Sauce Labs. From interesting (and sometimes downright hilarious) conversations, to just shooting some billiards, it was a memorable and enjoyable night.

The sights and sounds at Tabletop Tap House

Iron.io’s swag was off the charts! (Did it have anything to do with our branded flasks being full of bourbon?… Nah, definitely not. No way.) Our shirts were also a hit, so a big shout-out to our creative department for their efforts in creating them. Was it the best at DockerCon? Who knows. Was it pretty cool to give away some free Iron.io stuff to DockerCon attendees and friends? Yes.

We look forward to the next event with our amazing clients, friends and associates! Until next time, so long!

Just look at those beautiful T-Shirts!

Google Cloud Run: Review and Alternatives

Introduction

Google Cloud Run is a new cloud computing platform that’s hot off the presses from Google, first announced at the company’s Google Cloud Next conference in April 2019. Google Cloud Run has generated a lot of excitement (and a lot of questions) among tech journalists and users of the public cloud alike, even though it’s still in beta.

We will discuss the ins and outs of Google Cloud Run in this all-in-one guide, including why it appeals to many Google Cloud Platform customers, what its features are, and how the alternatives to Google Cloud Run compare.

What Is Google Cloud Run (And How Does It Work?)

What is serverless computing?

To answer the question “What is Google Cloud Run?,” we first need to define serverless computing.

Often just called “serverless,” serverless computing is a cloud computing paradigm that frees the user from the responsibility of purchasing or renting servers to run their applications on.

(Actually, the term “serverless” is a bit of a misnomer: The code still runs on a server, just not one that the user has to worry about.)

Cloud computing has soared in popularity over the past decade, thanks in large part to its increased convenience and lower maintenance requirements. Traditionally, however, users of cloud services have still needed to set up a server, scale its resources when necessary, and shut it down when they’re done. This has all changed with the arrival of serverless.

The phrase “serverless computing” is applied to two different types of cloud computing models:

  • BaaS (backend as a service) outsources the application backend to the cloud provider. The backend is the “behind the scenes” part of the software for purposes such as database management, user authentication, cloud storage, and push notifications for mobile apps.
  • FaaS (function as a service) still requires developers to write code for the backend. The difference is this code is only executed in response to certain events or requests. This enables you to decompose a monolithic server into a set of independent functionalities, making availability and scalability much easier.

You can think of FaaS serverless computing as like a water faucet in your home. When you want to take a bath or wash the dishes, you simply turn the handle to make it start flowing. The water is virtually infinite, and you stop when you have as much as you need, only paying for the resources that you’ve used.

Cloud computing without FaaS, by contrast, is like having a water well in your backyard. You need to take the time to dig the well and build the structure, and you only have a finite amount of water at your disposal. In the event that you run out, you’ll need to dig a deeper well (just like you need to scale the server that your application runs on).

Regardless of whether you use BaaS or FaaS, serverless offerings allow you to write code without having to worry about how to manage or scale the underlying infrastructure. For this reason, serverless has come into vogue recently: in a 2018 study, 46 percent of IT decision-makers reported that they were using or evaluating serverless.

What are containers?

Now that we’ve defined serverless computing, we also need to define the concept of a container. (Feel free to skip to the next section if you’re very comfortable with your knowledge of containers.)

In the world of computing, a container is an application “package” that bundles up the software’s source code together with its settings and dependencies (libraries, frameworks, etc.). The “recipe” for building a container is known as the image. An image is a static file that is used to produce a container and execute the code within it.

One of the primary purposes of containers is to provide a familiar IT environment for the application to run in when the software is moved to a different system or virtual machine (VM).

Containers are part of a broader concept known as virtualization, which seeks to create a virtual resource (e.g., a server or desktop computer) that is completely separate from the underlying hardware.

Unlike virtual machines, containers do not include an underlying operating system of their own. This makes them more lightweight, portable, and easy to use.

When you say the word “container,” most enterprise IT staff will immediately think of one or both of Docker and Kubernetes, the two most popular container solutions.

  • Docker is a runtime environment that seeks to automate the deployment of containers.
  • Kubernetes is a “container orchestration system” for Docker and other container tools, which means that it manages concerns such as deployment, scaling, and networking for applications running in containers.

Like serverless, containers have dramatically risen in popularity among users of cloud computing in just the past few years. A 2018 survey found that 47 percent of IT leaders were planning to deploy containers in a production environment, while 12 percent already had. Containers enjoy numerous benefits: platform independence, speed of deployment, resource efficiency, and more.

Containers vs. serverless: A false dilemma

Given the massive success stories of containers and serverless computing, it’s hardly a surprise that Google would look to combine them. The two technologies were often seen as competing alternatives before the arrival of Google Cloud Run.

Both serverless and containers are intended to make the development process less complex. They do this by automating much of the busy work and overhead. But they go about it in different ways. Serverless computing makes it easier to iterate and release new application versions, while containers ensure that the application will run in a single standardized IT environment.

Yet nothing prevents cloud computing users from combining both of these concepts within a single application. For example, an application could use a hybrid architecture, where containers pick up the slack if a certain function requires more memory than the serverless vendor has provisioned for it.

As another example, you could build a large, complex application that mainly has a container-based architecture, but that hands over responsibility for some backend tasks (like data transfers and backups) to serverless functions.

Rather than continuing to enforce this false dichotomy, Google realized that serverless and containers could complement one another, each compensating for the other one’s deficiencies. There’s no need for users to choose between the portability of containers and the scalability of serverless computing.

Enter Google Cloud Run…

What is Google Cloud Run?

In its own words, Google Cloud Run “brings serverless to containers.” Google Cloud Run is a fully managed platform that is capable of running Docker container images as a stateless HTTP service.

Each container can be invoked with an HTTP request. All the tasks of infrastructure management (provisioning, scaling up and down, configuration, and management) are taken off the user’s hands, as typically occurs with serverless computing.
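
As a rough sketch of the workflow at the time of the beta (the service and image names here are hypothetical, and the exact commands may change before general availability), deploying and invoking a service looks something like this:

    # Deploy a container image as a Cloud Run service
    gcloud beta run deploy my-service --image gcr.io/my-project/my-app

    # Invoke the service with a plain HTTP request
    curl https://my-service-abc123-uc.a.run.app/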

Google Cloud Run is built on the Knative platform, which is an open API and runtime environment for building, deploying, and managing serverless workloads. Knative is based on Kubernetes, extending the platform in order to facilitate its use with serverless computing.

In the next section, we’ll have more technical details about the features and requirements of Google Cloud Run.

Google Cloud Run Features and Requirements

Features

Google cites the selling points below as the most appealing features of Google Cloud Run:

  • Easy autoscaling: Depending on light or heavy traffic, Google Cloud Run can automatically scale your application up or down.
  • Fully managed: As a serverless offering, Google Cloud Run handles all the annoying and frustrating parts of managing your IT infrastructure.
  • Completely flexible: Whether you prefer to code in Python, PHP, Pascal, or Perl, Google Cloud Run is capable of working with any programming language and libraries (thanks to its use of containers).
  • Simple pricing: You pay only when your functions are running. The clock starts when the function is spun up, and ends immediately once it’s finished executing.

There are actually two options when using Google Cloud Run: a fully managed environment or a Google Kubernetes Engine (GKE) cluster. You can switch between the two choices easily, without having to reimplement your service.

In most cases, it’s best to stick with Google Cloud Run itself, and then move to Cloud Run on GKE if you need certain GKE-specific features, such as custom networking or GPUs. However, note that when you’re using Cloud Run on GKE, the autoscaling is limited by the capacity of your GKE cluster.

Google Cloud Run requirements

Google Cloud Run is still in beta (at the time of this writing). This means that things may change between now and the final version of the product. However, Google has already released a container runtime contract describing the behavior that your application must adhere to in order to use Google Cloud Run.

Some of the most noteworthy application requirements for Google Cloud Run are:

  • The container must be compiled for Linux 64-bit, but it can use any programming language or base image of your choice.
  • The container must listen for HTTP requests on the IP address 0.0.0.0, on the port defined by the PORT environment variable (almost always 8080).
  • The container instance must start an HTTP server within 4 minutes of receiving the HTTP request.
  • The container’s file system is an in-memory, writable file system. Any data written to the file system will not persist after the container has stopped.

With Google Cloud Run, the container only has access to CPU resources if it is processing a request. Outside of the scope of a request, the container will not have any CPU available.

In addition, the container must be stateless. This means that the container cannot rely on the state of a service between different HTTP requests, because it may be started and stopped at any time.
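
Putting those requirements together, a container that satisfies the contract might be built from a Dockerfile like this minimal, hypothetical sketch. It assumes an app.py exposing a Flask application object named app; none of this is mandated by Cloud Run beyond binding to 0.0.0.0 on $PORT:

    FROM python:3.7-slim
    WORKDIR /app
    COPY app.py .
    RUN pip install --no-cache-dir flask gunicorn

    # Cloud Run injects PORT (usually 8080); the server must bind to 0.0.0.0
    CMD exec gunicorn --bind 0.0.0.0:$PORT app:app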

The resources allocated for each container instance in Google Cloud Run are as follows:

  • CPU: 1 vCPU (virtual CPU) for each container instance. However, the instance may run on multiple cores at the same time.
  • Memory: By default, each container instance has 256 MB of memory. Google says this can be increased up to a maximum of 2 GB.

Cloud Run Pricing

Google cloud run pricing

Google Cloud Run uses a “freemium” pricing model: free monthly quotas are available, but you’ll need to pay once you go over the limit. These types of plans frequently catch users off guard, and they can end up paying much more than expected. According to Forrester, a staggering 58% of companies surveyed said their costs exceeded their estimates.

The good news for Google Cloud Run users is that you’re charged only for the resources you use (rounded up to the nearest 0.1 second). This is typical of many public cloud offerings.

The free monthly quotas for Google Cloud Run are as follows:

  • CPU: The first 180,000 vCPU-seconds
  • Memory: The first 360,000 GB-seconds
  • Requests: The first 2 million requests
  • Networking: The first 1 GB egress traffic (platform-wide)

Once you bypass these limits, however, you’ll need to pay for your usage. The costs for the paid tier of Google Cloud Run are:

  • CPU: $0.000024 per vCPU-second
  • Memory: $0.0000025 per GB-second
  • Requests: $0.40 per 1 million requests
  • Networking: Free during the Google Cloud Run beta, with Google Compute Engine networking prices taking effect once the beta is over.

It’s worth noting that you are billed separately for each resource; for example, exceeding your memory quota does not mean that you must also pay for your CPU and networking usage.
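
To make the rates concrete with a hypothetical example: a service that handles 3 million requests in a month stays within the free quota for the first 2 million and pays $0.40 for the remaining 1 million. If that service also consumes 250,000 vCPU-seconds and 400,000 GB-seconds, it adds (250,000 − 180,000) × $0.000024 ≈ $1.68 for CPU and (400,000 − 360,000) × $0.0000025 = $0.10 for memory.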

In addition, these prices may not be definitive. Like the features of Google Cloud Run, prices for Google Cloud are subject to change once the platform leaves beta status.

Finally, Cloud Run on GKE uses a separate pricing model that will be announced before the service reaches general availability.

Google Cloud Run Review: Pros and Cons

Because it’s a brand new product that’s still in beta, reputable Google Cloud Run reviews are still hard to find.

Reaction to Google’s announcement has been fairly positive, acknowledging the benefits of combining serverless computing with a container-based architecture. Some users believe that the reasonable prices will be enough for them to consider switching from similar services such as AWS Fargate.

Other users are more critical, however, especially given that Google Cloud Run is currently only in beta. Some are worried about making the switch, given Google’s track record of terminating services such as Google Reader, as well as their decision to alter prices for the Google Maps API, which effectively shut down many websites that could not afford the higher fees.

Given that Google Cloud Run is in beta, the jury is still out on how well it will perform in practice. Google does not provide any uptime guarantees for cloud offerings before they reach general availability.

The disadvantages of Google Cloud Run will likely overlap with the disadvantages of Google Cloud Platform as a whole. These include fewer regions compared with competitors such as Amazon and Microsoft. In addition, as a later entrant to the public cloud market, Google can sometimes feel “rough around the edges,” and new features and improvements can take time to be released.

Google Cloud Run Alternatives

Since this is a comprehensive review of Google Cloud Run, we would be remiss if we didn’t mention some of the available alternatives to the Google Cloud Run service.

In fact, Google Cloud Run shares some of its core infrastructure with two of Google’s other serverless offerings: Google Cloud Functions and Google App Engine.

  • Google Cloud Functions is an “event-driven, serverless compute platform” that uses the FaaS model. Functions are triggered to execute by a specified external event from your cloud infrastructure and services. As with other serverless computing solutions, Google Cloud Functions removes the need to provision servers or scale resources up and down.
  • Google App Engine enables developers to “build highly scalable applications on a fully managed serverless platform.” The service provides access to Google’s hosting and tier 1 internet service. However, one limitation of Google App Engine is that the code must be written in Java or Python and must use Google’s NoSQL database Bigtable.

Looking beyond the Google ecosystem, there are other strong options for developers who want to leverage both serverless and containers in their applications.

The most tested Cloud Run alternative: Iron.io

Iron.io is a serverless platform that offers a multi-cloud, Docker-based job processing service. As one of the early adopters of containers, we have been a major proponent of the benefits of both technologies.

The centerpiece of Iron.io’s products, IronWorker is a scalable task queue platform for running containers at scale. IronWorker has a variety of deployment options, from using shared infrastructure to running the platform in your in-house IT environment. Jobs can be scheduled to run at a certain date or time, or processed on demand in response to certain events.

In addition to IronWorker, we also provide IronFunctions, an open-source serverless microservices platform that uses the FaaS model. IronFunctions is a cloud agnostic offering that can work with any public, private, or hybrid cloud environment, unlike services such as AWS Lambda. Indeed, Iron.io allows AWS Lambda users to easily export their functions into IronFunctions. This helps to avoid the issue of vendor lock-in. IronFunctions uses Docker containers as the basic unit of work. That means that you can work with any programming language or library that fits your needs.

Conclusion

Google Cloud Run represents a major development for many customers of Google Cloud Platform who want to use both serverless and container technologies in their applications. However, Google Cloud Run is only the latest entrant into this space, and may not necessarily be the best choice for your company’s needs and objectives.

If you want to determine which serverless + container solution is right for you, speak with a skilled, knowledgeable technology partner like Iron.io who can understand your individual situation. Whether it’s our own IronWorker solution, Google Cloud Run, or something else entirely, we’ll help you get started on the right path for your business.

Save the Date: DockerCon 2019 Meet-up & Drink-Up

Mark your calendars for next week: an awesome three-day conference that brings the container technology community together to learn, share, and connect. DockerCon 2019 will consist of workshops, keynotes, an expo hall, and of course a few drink-ups. (Our team will be co-hosting one of them!) You can expect a variety of attendees from across the world, including C-level executives, systems admins, architects, and developers.

Docker and Iron

So, what is Docker to Iron? Iron’s Worker product is a hosted background job solution that lets you run your containers with dynamic scale and detailed analytics. It can run short-lived containers quickly, or even containers that need to work across multiple days. Think of it as serverless containers. (Plus, it’s deployable in any environment: cloud, hybrid, or on-premises.) While we have expanded its capabilities, IronWorker was built around Docker containers and has a long-standing relationship with Docker, so it’s only natural for Iron.io to be there.

The Drink-up

Iron.io is teaming up with CircleCI and Sauce Labs for a drink-up on Wednesday, May 1st, from 5:00 PM to 8:00 PM. Whether you are attending DockerCon or will just be in the Bay Area, we would love to meet up. It will not only be a great time, but a great opportunity to network and chat over drinks. Rumor has it that vintage board games might make an appearance as well.

We look forward to seeing you there!

Don’t forget to RSVP to the drink-up so that we can check you in at the door!

Google Cloud Functions Alternatives

Google Cloud Functions is a serverless environment that many developers use. It enables programmers to write simple functions and attach them to events related to their cloud infrastructure. It’s a fully managed environment, which means there is no need to allocate servers or other equipment in order for your functions to run.

However, Google Cloud Functions is far from the only tool on the market. In fact, it has many competitors, some of which have taken note of the features and capabilities Google Cloud Functions lacks and built even better solutions. Here’s a look at serverless environments, an overview of Google Cloud Functions, and an alternative worth considering.

Understanding the Concept of Serverless

The solutions featured here all work on the same premise. They are all based in the cloud, without servers. Here’s an explanation of the terminology you’ll come across when seeing these services described.

  • Functions: A function is a simple code snippet that serves just one purpose. Functions get associated with events.
  • Events: A service sets off an event when something changes state.
  • Triggers: Events happen regardless of whether developers define functions and triggers, but the trigger is what connects the function to the event. (A short example follows below.)
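
As a hedged sketch of how the three pieces fit together, using Google Cloud Functions’ own tooling (the function names and bucket are illustrative): the deploy command registers a function along with the trigger that connects it to an event source.

    # Deploy a function triggered by HTTP requests
    gcloud functions deploy hello_world --runtime python37 --trigger-http

    # Or bind a function to an event: fire whenever an object lands in a bucket
    gcloud functions deploy on_upload --runtime python37 \
        --trigger-resource my-bucket --trigger-event google.storage.object.finalize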

These lead into the five principles that should guide any developer who’s adopting a serverless environment.

1. Execute Code on Demand Using a Compute Service

Regardless of the type of serverless environment you choose, all of them serve the same purpose: executing code. With a cloud environment, you will not have to run your own virtual machines (VMs), servers, or containers. Everything stays in the cloud.

In the case of open source architecture, you can run the compute service in your public, private, or hybrid cloud environment. Alternatively, you can pay a vendor monthly and run your cloud environment as a FaaS (function as a service). FaaS means all you have to focus on is custom code. The vendor handles the cloud and coding environment for you.

2. Write Stateless Functions That Serve a Single Purpose

The single responsibility principle (SRP) should guide your work in a serverless environment. You should focus on writing functions, or code snippets, that serve just one purpose. Single-purpose functions are easier to create, test, and debug, which explains the shift toward function-based thinking.

Compared to the traditional approach of trying to develop an entire app or container all at once, developing single functions is an easier process. The focus on microservices allows for greater agility, and this granular workflow puts your attention on specific actions, enabling developers to test and launch sooner. In turn, this approach increases efficiency.

3. Design Pipelines Driven by Events

In these environments, developers build pipelines driven by events, allowing even the most complex computing tasks to run easily. The main purpose of a serverless environment is to allow you to create code that connects various services. These pipelines allow you to do just that, giving you the capacity to make different services interact to give you the results you want.

With an event-driven pipeline that’s set up to be push-based, there should be minimal (if any) human intervention. It should all run as hands-off as possible, requiring less involvement from users and streamlining the workflow.

4. Create a Powerful Front End Interface

If you’re creating an entire system that consists of both a back and front end, and hosting it in a serverless environment, then this principle is also important. On the other hand, if you find yourself building a pipeline meant to transform a file or some other kind of system, you might not need to worry about a front end.

However, in situations where a front end interface is applicable, the idea is to make it as smart as possible. This requires developers to allocate as much logic as they can to the front end interface. In other words, this front end should interact directly with services. In turn, this will work to decrease how many serverless functions you need to run in the environment.

Obviously, there will always be situations where you cannot or should not set up direct communication with services. Security and privacy are some major examples of why that might be the case. In those situations, it’s best to use the serverless functions for these particular actions. With that said, when you can put something on the front end, you should definitely opt to.

5. Use Third-Party Applications and Services

The beauty of a serverless environment is that you can connect any number of third-party applications and services to reduce how much custom code you have to create. Obviously, this saves developers a great deal of time. By connecting other services, you’re able to leverage the code they’ve already created and use it for certain elements of your application.

Of course, when using third-party apps, always run tests and consider the disadvantages of doing so. Typically, using a third-party app means giving up control to make things faster. There is always a trade-off in some form. And this may prove worth it in many situations, but the goal is to make the right decision to best use your serverless environment.

Choosing a Cloud Environment

If you think that a serverless environment is the right fit for your development, the first decision you have to make is which provider to opt for. There are many out there, but Google Cloud Functions has quickly become one of the most well-known. Despite having just joined the market in 2017, Google Cloud Functions is widely used.

Amazon was actually the one to introduce the idea of a serverless cloud environment. It did so back in 2014 with the release of Amazon Web Services (AWS) Lambda.

Obviously, these services come with a number of benefits, including added flexibility and reduced cost. But choosing Google Cloud Functions will lead to some disadvantages and challenges, too. At the top of the list is the risky business of vendor lock-in, along with cost and documentation (or lack thereof).

To help you decide which of the main providers might be the right fit for your business, here’s an overview of Google Cloud Functions next to IronFunctions, a popular alternative.

Overview of Google Cloud Functions

Google describes Cloud Functions as an “event-driven serverless compute platform.” It’s touted as the easiest way to get your code up and running in a cloud environment. Since it’s based in the cloud, it’s also easy to scale and extra reliable. There aren’t any servers to manage, either.

Like other cloud software, you only pay for what you need. In the case of Google Cloud Functions, you’ll only pay for the resources it takes to run your code. The cloud also makes it easy to extend functionality through the connection of other cloud-based services.

Cloud Functions has many use cases, including backends for serverless applications where you can run the backends for your IoT and mobile applications, plus integrate third-party APIs and services. You can also use it for real-time data processing to process files and streams, and use extract, transform, and load (ETL) functions on an event-driven basis.

Additionally, this environment can function as a foundation for intelligent applications. With it, you can set up and run many smart apps, like virtual assistants and chatbots for your business. You can also take advantage of image, video, and sentiment analysis.

The core philosophy behind Cloud Functions is that developers should begin focusing more on small, individual features that they can deploy independently rather than building an entire container or application at once. This is similar to other cloud-based development tools today.

Overview of IronFunctions

IronFunctions is a direct competitor of Google Cloud Functions. This open-source environment also allows for serverless computing with the same capabilities of Cloud Functions. One of the main highlights is that it allows you to avoid vendor lock-in. That’s because IronFunctions works on any cloud, whether public, private, or hybrid.

You can also implement Functions directly into any application you are building. Its design purposely allows for seamless and simple integration into your environment. You’ll find that you’ll spend less time working on tasks thanks to advanced job processing. Plus, good job management allows you to focus on building better software.

Functions enables you to better use the infrastructure you already have. You can also easily integrate other tools you want to use, whether they’re commercial or open source. It supports Docker Swarm, Kubernetes, Mesosphere, and countless other popular applications.

To set up Functions, simply implement it into your app and quickly establish the infrastructure and job processing you need to handle. You can then begin focusing on building your software as tasks that typically overload your CPU begin to run seamlessly in the background.

It’s a simple and free serverless environment. But cost isn’t the only area where IronFunctions competes with Google Cloud Functions. As such, comparing them side by side is a must in order to determine which is right for your business.

Deciding What’s Right for You

There are many pros to using a serverless environment. The main advantage is that it enables developers to focus on the application they’re trying to build instead of worrying about the infrastructure they’re running it in.

Most developers spend a significant amount of time implementing, maintaining, and fixing their environment in between application development. The notion is that, by running a serverless environment, all of that legwork is no longer needed. Thus, developers have more time to focus on what they do best: developing apps.

The scalability of a serverless, cloud-based environment also makes it very appealing and functional. That’s why the likes of Netflix, AOL, Reuters, and countless other companies already run serverless environments. Fortunately, thanks to its scalability (in either direction), the cloud is also very accessible to smaller companies and even individual developers.

In fact, the reduced cost of operating in a serverless environment is definitely one of its highlights. The price of a serverless environment depends entirely on how many executions you run. There’s no pre-purchasing capacity and overpaying. You’ll only pay for what you need. Plus, without needing to manage servers, there is no cost to keep people active 24/7 managing and fixing said servers when things break.

Another major advantage of operating inside a serverless environment is that it’s very easy to create multiple environments for development; you can do it in just a click. This is different from traditional environments, which require planning, developing, staging, and test runs before you can use the new architecture.

With all of this in mind, here’s a look at the primary options on the market. Determining which serverless environment is right for your applications can prove overwhelming, so here’s a simple run-through to help you decide.

Pricing

Cloud Functions runs on Google’s infrastructure, so you have to pay to use it. Meanwhile, IronFunctions is open source and made to run in any environment, be it public, private, or hybrid. That said, Iron.io also hosts serverless environments for select customers who request it.

If you choose to use Google Cloud Functions, your cost will depend on the number of invocations (flat rate of $0.40 per million), compute time ($0.00001 per GHz-second), and networking (flat rate of $0.12 per GB). When deploying, you’ll need to specify how much memory your function requires.
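
As a quick hypothetical illustration (ignoring the free tier): 10 million invocations in a month at $0.40 per million comes to $4.00, and a function that consumes 1 million GHz-seconds of compute adds another 1,000,000 × $0.00001 = $10.00, before any networking charges.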

With this in mind, you can experiment with Google Cloud Functions for free using the introductory tier. This is a good way to see for yourself whether or not it will work for you before committing a large portion of your budget to it.

Programming Languages

When it comes to supported programming languages, Google Cloud Functions has limited options. It was one of the last serverless environments to join the game, and it shows. The platform continues to support only Node.js 6, Node.js 8 (Beta), and Python 3.7 (Beta).

Meanwhile, IronFunctions leverages containers. This enables you to use any language that Linux or Windows supports. Plus, you can use AWS Lambda format and import any Lambda functions you’ve created before. The CLI tool also makes it easy to create and deploy new functions.

Support

As it’s an open-source project, IronFunctions has extensive documentation. There is also a community of developers who are always happy to help if you post your issue or question online.

Google Cloud Functions, on the other hand, has limited documentation, and many reviewers have brought up its community support, or lack thereof. With that said, the paid support for Google Cloud Functions is highly reliable.

User Interface

When it comes to the user interface of either solution, both Google Cloud Functions and IronFunctions have a great layout. With that said, Google Cloud Functions can sometimes feel a bit disjointed as you go about using it.

BigQuery, for instance, uses a slightly different user interface, which can make it feel detached from the core features. Stackdriver, the logging feature for Google Cloud, suffers from the same design issue.

The Bottom Line

IronFunctions allows you to take advantage of all the perks of a serverless environment without getting locked in to one vendor or paying hefty fees. You can run it in your own environment, on your own terms. It’s highly flexible and allows for deep integration so that you can produce the best apps possible.

To sum it up, IronFunctions is open-source, budget-friendly, and ready to run your projects. Want to learn more about IronFunctions and everything it can do? Click here.

ECS Alternatives

If you are in the field of software development, you have probably heard of containers. A containerized application has myriad benefits, including efficiency, cost, and portability. One of the big questions with this technology is where and how to host it: in house, in the cloud, or somewhere else? Amazon Web Services (AWS) offers a few options for container hosting. Elastic Container Service (ECS) is one of those offerings. ECS provides robust container management, supercharged with the power of AWS. However, there are other options out there, and an ECS alternative may better fit your needs. An important decision like this justifies some shopping around.

There are several things to consider when choosing a container host. One size does not fit all! Each customer has their own in-house skillset and existing cloud integrations.

This post will walk through the important things to consider. We will dig into the details of the alternatives to ECS, comparing and contrasting the offerings and looking at the pros and cons of each. With this background, you will be better equipped to decide which solution best fits your business needs.

AWS Elastic Container Service

AWS Elastic Container Service (ECS) is Amazon’s main offering for container management. Utilizing ECS allows you to take advantage of AWS’s scale, speed, security, and infrastructure. With this power, you can launch one, tens, or thousands of containers to handle all your computing needs. ECS also ties in with all the other AWS services, including databases, networking, and storage.

ECS offers two main options for containers:

  • AWS Elastic Compute Cloud (EC2): EC2 is AWS’s virtual machine service. Using this option, you are responsible for selecting the servers you want in your container cluster. Once that’s complete, AWS handles the management and orchestration of the servers.
  • AWS Fargate: Fargate abstracts things another level, eliminating the need to manage EC2 instances. Rather, you specify the CPU and memory requirements, and AWS provisions the underlying capacity under the covers. This offers all the power of ECS without worrying about the details of the actual underlying servers (see the sketch just after this list).
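
To illustrate the Fargate option, here is a minimal sketch using the official boto3 SDK; the cluster name, task definition, and subnet ID are placeholders, not values from this article:

```python
# Launch a one-off Fargate task with boto3 (AWS SDK for Python).
# The cluster, task definition, and subnet below are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.run_task(
    cluster="my-cluster",             # hypothetical ECS cluster
    launchType="FARGATE",             # no EC2 instances to manage
    taskDefinition="my-task:1",       # registered task definition (family:revision)
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```

Notice that no instance types appear anywhere: with the FARGATE launch type, capacity comes from the CPU and memory declared in the task definition.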

Pros and Cons

Here are some things to consider with the ECS offerings:

  • Integration with AWS: One of the biggest decisions around using ECS is its integration and reliance on AWS. This is either a pro or a con, depending on your circumstances. If you are already using AWS, adding ECS to the mix is a straightforward proposal. However, if you are not currently using AWS, there is a considerable learning curve to get up and running.
  • More Automation: ECS provides layers of automation over your containers. Customers without in-house expertise to manage the lower-level complexities may prefer this. However, it may also bind the hands of someone who wants more control over their container landscape. Fargate takes the automation a step further. Again, that could be good or bad, depending on your situation.
  • Cost: In this age of modern cloud computing, it is typically more cost effective to run everything in the cloud. No more hardware to purchase, networking snafus to resolve, or expertise to hire and retain. However, the cost differences in the container offerings are more nuanced. If you have container expertise in-house, it might be more cost effective to run your own container solution on top of AWS services. If not, you may save money using something like ECS.
  • Deployments: One key drawback to ECS is that it is not available on-premise. While an all-cloud setup may be fine for many businesses, there are cases where maintaining legacy services or closed networks is preferable, if not mandatory.
  • Vendor lock-in: In order to use ECS, you must be on the AWS cloud. That means you risk getting locked into a single technology provider unless you take deliberate steps to avoid it.

Google Cloud/Kubernetes

Similar to AWS, Google offers “all the things” on its cloud services: servers, storage, databases, networking, and other technologies. Google’s solution for managing containers is Kubernetes, an industry leader in container orchestration. Kubernetes began as a project within Google, which eventually open-sourced it. Since then, it has become one of the strongest options for container orchestration, and all the major cloud providers now offer it as a service. Google’s managed offering, comparable to AWS’s ECS, is Google Kubernetes Engine, or GKE for short.
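
One upside of GKE being plain Kubernetes is that standard tooling works unchanged. For example, once your kubeconfig points at a GKE cluster (say, via gcloud container clusters get-credentials), a minimal sketch with the official Kubernetes Python client can list every pod:

```python
# List every pod in the cluster with the official Kubernetes Python client.
# Assumes your kubeconfig already points at a GKE (or any other) cluster.
from kubernetes import client, config

config.load_kube_config()   # reads ~/.kube/config
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```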

Pros and Cons

There are some pros and cons of using Google for your container services:

  • Integration with Google services: Like the AWS decision, you need to consider whether you currently use Google Cloud services. If you are already heavily invested there, adding Kubernetes on top makes sense. If you are not, it may add considerable time and cost to the equation.
  • Familiarity with Kubernetes: This is a big one. If you have in-house expertise with Kubernetes, you’ll feel comfortable running it in Google Cloud. If not, there’s a fairly steep learning curve to get there. Kubernetes is not for the faint-hearted.
  • Less Automation: With Kubernetes, Google puts more power (and responsibility) in the hands of their customers. Some customers may prefer that level of control. Others may not want to worry about these lower-level details.
  • Deployments: As with AWS, a key drawback is that it is not available for on-premise deployments.
  • Vendor lock-in: In order to use GKE, you must be on GCP. Again, this means the possibility of getting locked into a single technology provider if steps are not taken to avoid it.

Microsoft Azure

Rounding out the offerings of the “Big Three” cloud providers is Microsoft’s Azure. It offers a few flavors of container management, including the following:

  • Azure Kubernetes Service (AKS): Azure provides hosting for a Kubernetes service, and with it, the same pros and cons. Good for customers with Kubernetes know-how, maybe not for those without.
  • Azure App Service: This is a more limited option, where a small set of Azure-specific application types can run within hosted containers.
  • Azure Service Fabric: Service Fabric allows for hosting an unlimited number of microservices. They can run in Azure, on premises, or within other clouds. However, you must use Microsoft’s infrastructure.
  • Azure Batch: This service runs recurring jobs using containers.

Pros and Cons

Here are some pros and cons of the Azure offerings:

  • Confusion: The list above illustrates the many container-based services Azure offers. There are many “Azure-specific” technologies at play here. It can be hard to differentiate where the containerization stops and the Azure-specific things begin.
  • Integration with Azure Services: If you are already using Azure for other services, using its container offerings makes sense. If not, you’ll need to climb the Azure learning curve. As with the other cloud providers, this introduces time and resource expenses.
  • Less (or More?) Automation: The Azure offerings run the gamut, ranging from no management (Azure Container Registry) to fully managed (Azure App Service and Azure Service Fabric). Once educated on all the features, pros, and cons of each, you may find a solution that perfectly meets your needs. Or you might drown in the details.
  • Deployments: Differing from both AWS and GCP, Azure Service Fabric is actually available on-premise. However (and it’s a big however), you must use the Microsoft servers that Azure provides. Going down this route virtually guarantees being locked into the Azure/Microsoft technology architecture with no easy way out.
  • Vendor lock-in: See above; as with both GCP and AWS, vendor lock-in is difficult to avoid and expensive to leave.

Iron.io

Another ECS alternative that may surprise you is Iron.io. It provides container services but shields customers from the underlying complexities. This may be perfect for customers not interested in developing large amounts of in-house expertise. Iron.io offers a container management solution called Worker. It is a hosted background job solution supporting a variety of computing workloads. Iron.io allows for several deployment options (on its servers, on your servers, in the cloud, or a combination of these). It manages all your containers and provides detailed analytics on their performance. By handling the low-level details, Iron.io allows you to focus on your applications. You focus on your business; they’ll worry about making sure it all runs correctly.
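
To give a feel for the model, a worker is just code in a container that reads its queued payload and does the job. Here is a minimal Python sketch, assuming the payload arrives as a JSON file at the path given by the PAYLOAD_FILE environment variable, as in Iron’s worker examples (check the current IronWorker docs for your runtime):

```python
# A minimal IronWorker-style background job: read the payload, do the work.
# Assumes the payload is a JSON file at the path in PAYLOAD_FILE; verify
# this convention against the current IronWorker documentation.
import json
import os

def main():
    payload = {}
    payload_path = os.environ.get("PAYLOAD_FILE")
    if payload_path and os.path.exists(payload_path):
        with open(payload_path) as f:
            payload = json.load(f)

    # Replace this with your actual background work.
    print(f"Processing job with payload: {payload}")

if __name__ == "__main__":
    main()
```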

Pros and Cons

Here are some things to know about Iron.io:

  • Easy to Use: For customers that want the benefits of containerization without having to worry about the lower-level details, Iron.io is perfect. Focus on your applications and let the pros worry about infrastructure.
  • Flexible: For customers that have Docker/Kubernetes expertise, Iron.io provides its hybrid solution. You host the hardware and run the workers there. Iron.io provides automation, scheduling, and reporting. You don’t have to give up what you already have to gain what Iron.io has to offer. Iron also offers a completely on-premise deployment of Worker. This allows installing Worker in environments with high compliance and security requirements.
  • Powerful: Iron.io can scale from one to thousands of parallel workers, easily accommodating all sizes of computing needs.
  • Deployments: Unique to Iron Worker is the ability to deploy fully on-premise, as well as hybrid and fully cloud.
  • No Vendor lock-in: Another unique aspect of Iron Worker is the ability to avoid being locked into any single vendor. It is cloud agnostic, so it will run on any cloud. Migration is also virtually a one-click process. This means operational expenses are kept to a bare minimum. It also means deploying redundantly to multiple clouds is an easy, efficient process.

Conclusion

Containerization is the future of computing. The need to own and run our own servers (or even our own operating systems) is slowly fading. The big question is where to start. Customers with Docker expertise and existing cloud provider integrations may find a container solution from a big cloud provider to be the best choice. For customers just starting out in this field, or those looking to add management and analytics to an existing solution, Iron.io adds a good deal of power. Iron.io will grow with you, and with the initial architecture in place, other options will unfold.

With this information in hand, you’re better prepared to answer some big questions. May your containers go forth and multiply!

Ready to get started with IronWorker?

Start your free 14-day trial: no cards, no commitments needed. Sign up here.

Introducing: Computerless™

Iron was one of the pioneers of Serverless, so we’re excited to announce that we’ll also be one of the first companies to offer the next generation of compute: it’s called Computerless™.

Unlike Serverless, this technology removes the physical machine completely. Our offering piggy-backs off recent developments in fiber optic technology from the University of Oxford. If you haven’t heard about this breakthrough, we’ll do our best to explain:

Researchers have found a way to control how light travels at the molecular level, putting them in complete control of the resulting attenuation. Molecular gates can then be created, and state stored in finite wavelengths. It’s somewhat equivalent to qubits in quantum computing, but in the case of optical fiber, it’s a physical reality.

The end result of this breakthrough is that computers can be fully encapsulated in fiber optic cable. The usual components are now mapped 1-to-1, via light. This has allowed Iron’s infrastructure to completely change. While we’ve run our infrastructure on public clouds like AWS and GCP in the past, we’ve been able to leave all that behind. We’re now able to push our entire suite of products into the optical cable itself:


Iron’s new and improved infrastructure on a cheap plot of land in Arkansas

In the next few months, we’ll be pushing all of our customers’ sensitive data into the cables shown above, as well as running all Worker jobs through them. We’re pretty sure the cables we purchased are for multi-tenant applications, so you can probably rest assured that we’re doing the right thing. In fact, NASA has already expressed an interest in licensing this technology from Iron. Other interested parties include the government of French Guiana and defense conglomerate Stark Industries.

Researchers have kind of concluded that this technology is ready for prime time, and are also quick to state the fact that in 1998, The Undertaker threw Mankind off Hell In A Cell, and plummeted 16 ft through an announcer’s table.