Best DevOps Tools

DevOps processes help companies overcome organizational challenges in an efficient, robust, and repeatable way. DevOps tools are a collection of complementary, task-specific tools that can be combined to automate processes. IronWorker and IronMQ are two DevOps tools from Iron.io that can help your business save money and scale on demand. Start your free 14-day Iron.io trial today!

The following solutions are some of the best DevOps tools for building and improving your products at a faster pace:

Source Control Management

  1. GitHub is a web-based Git repository hosting service that offers all of Git's distributed revision control and source code management features while adding its own. Unlike Git, it provides a web-based graphical interface along with desktop and mobile integration.
  2. GitLab, similar to GitHub, is a web-based Git repository manager with wiki and issue tracking features. Unlike GitHub, GitLab offers an open-source edition that you can self-host.
  3. JFrog Artifactory is an enterprise-ready repository manager that supports software packages created by any language or technology. It supports secure, clustered, High Availability Docker registries.


Database Lifecycle Management

  1. DBmaestro offers Agile development and Continuous Integration and Delivery for the database. It streamlines development process management and enforces change policy practices.
  2. Delphix is a software company that produces software for simplifying the building, testing, and upgrading of applications built on relational databases.
  3. Flyway is an open-source database migration tool based around six basic commands: Migrate, Clean, Info, Validate, Baseline, and Repair. Migrations support SQL or Java.


Continuous Integration (CI)

  1. Bamboo is a continuous integration server that supports builds in any programming language using any build tool, including Ant, Maven, make, and any command-line tools. 
  2. Travis CI is an open-source continuous integration utility for building and testing projects hosted at GitHub. 
  3. Codeship is a continuous deployment tool focused on being an end-to-end solution for running tests and deploying apps. It supports Rails, Node, Python, PHP, Java, Scala, Groovy, and Clojure.


Recommended reading: DevOps Best Practices

Recommended reading: The Future of DevOps

Software Testing

  1. FitNesse is an automated testing solution for software. It supports acceptance testing rather than unit testing, facilitating a detailed, readable description of system function.
  2. Selenium is a software testing tool for web apps that offers a record/playback solution for writing tests without knowledge of a test scripting language.
  3. JUnit is a unit testing tool for Java. It has been prominent in the development of test-driven development and is a family of unit testing frameworks.
  4. Apache JMeter is an Apache load testing tool for analyzing and measuring the performance of various services, with a focus on web applications.
  5. TestNG is a testing solution for Java inspired by JUnit. TestNG's design aims to cover a broader range of test categories: unit, functional, end-to-end, integration, etc., with easier-to-use functionality.


Configuration Tools

  1. Ansible is an open-source software solution for configuring and managing computers. It offers multi-node software deployment, ad hoc task execution, and configuration management. 
  2. Puppet is an open-source configuration management solution that runs on many Unix-like systems and Microsoft Windows. It provides its own declarative language for describing system configuration.
  3. Salt platform is an open-source configuration management and remote execution application. It supports the “infrastructure-as-code” approach to deployment and cloud management.
  4. Rudder is an open-source audit and configuration management tool that automates system configuration across large IT infrastructures. 


Deployment Tools

  1. Terraform is a utility for building, combining, and launching infrastructure. It can create and compose all the components necessary to run applications, from physical servers to containers to SaaS products.
  2. AWS CodeDeploy is an automation tool for code deployments to any instance, including Amazon EC2 instances and instances running on-premises.
  3. ElasticBox is an agile DevOps tool for defining, deploying and managing application automation agnostic of any infrastructure or cloud.
  4. GoCD is an open-source automation tool for continuous delivery (CD). It automates the build-test-release process from code check-in to deployment, and supports various version control tools, including Git, Mercurial, and Subversion.


Container Tools

  1. Docker is an open-source product that makes it easier to create, deploy, and run applications in containers by providing a layer of abstraction and automation of operating-system-level virtualization on Linux. 
  2. Kubernetes is an open-source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
  3. Apache Mesos is an open-source cluster manager that provides resource isolation and sharing across distributed applications or frameworks.


Release Orchestration

  1. OpenMake is a strategic software delivery utility that deploys to multi-platform servers, clouds, or containers. It simplifies component packaging, database updates, version jumps, and calendaring, and offloads your overworked CI process.
  2. Plutora is an on-demand enterprise IT release management solution built from the ground up to help companies deliver releases that better serve the business.
  3. Spinnaker is an open-source multi-cloud continuous delivery platform for releasing software changes by enabling key features: cluster management and deployment management. 


Cloud Tools

  1. Amazon Web Services (AWS) is a set of web services that Amazon offers as a cloud computing platform in 11 geographical regions across the world. The most prominent of these services are Amazon Elastic Compute Cloud and Amazon S3.
  2. Microsoft Azure is a cloud computing platform for building, deploying, and managing applications and services through a global network of Microsoft-managed datacenters. It supports many different programming languages, tools, and frameworks, including both Microsoft-specific and third-party software and systems.
  3. Google Cloud is a cloud computing solution by Google that offers hosting on the same supporting infrastructure that Google uses internally for end-user products.


Container Management Services

  1. IronWorker is a tool that offers greater real-time insight into compute tasks, so you can better optimize resource allocation and scheduling. It tracks heavily used tasks to help organizations understand the changing nature of their target audience and identify opportunities to streamline their compute.
  2. AWS Fargate is a serverless container management service (container as a service) that allows developers to focus on their application rather than their infrastructure. While AWS Fargate does help with container orchestration, it leaves gaps that IronWorker fills, such as support, simplicity, and deployment options.
  3. Google Cloud Run is a managed platform that takes a Docker container image and runs it as a stateless, autoscaling HTTP service. IronWorker fills the gaps it leaves with key features such as a containerized environment, high-scale processing, and flexible scheduling.


AIOps Tools

  1. Splunk enables searching, monitoring, and analyzing big data via a web-style interface. It can create graphs, reports, alerts, dashboards, and visualizations.
  2. Moogsoft is an AIOps platform that helps organizations streamline incident resolution, prevent outages, and meet SLAs.
  3. Logstash is a solution for managing events and logs. It enables collecting logs, parsing them, and storing them.

Analytics Tools

  1. Datadog is an analytics and monitoring platform for IT infrastructure, operations, and development teams. It gets data from servers, databases, applications, tools, and services to give a centralized view of the applications in the cloud.
  2. Elasticsearch is a search server that enables a distributed full-text search engine with a RESTful web interface and schema-free JSON documents. 
  3. Kibana is a data visualization plugin for Elasticsearch that provides visualization features on top of the Elasticsearch cluster’s content index. 

Monitoring Tools

  1. Nagios is an open-source solution for monitoring systems, networks, and infrastructure. It provides alerting services for servers, switches, applications, and services. 
  2. Zabbix is an open-source monitoring tool for networks and applications. It tracks the status of various network services, servers, and other network hardware.
  3. Zenoss software builds real-time models of hybrid IT environments, providing performance insights that help eliminate outages and reduce downtime and IT spending.

Security Tools

  1. SonarQube is a utility to manage code quality. It can cover new languages, add rules engines, and compute advanced metrics through a robust extension mechanism. More than 50 plugins are available.
  2. Tripwire is an open-source security and data integrity tool for monitoring and alerting on specific file change(s) on a range of systems. 
  3. Fortify reduces software risk by recognizing security vulnerabilities. It determines the root cause of the vulnerability, correlates and prioritizes results, and provides best practices so developers can write more secure code.

Collaboration Tools

  1. Slack is a business communication tool that offers a set of features, including persistent chat rooms arranged by topic, private groups, etc.
  2. Trello is a project management utility that operates on a freemium business model: the basic service is provided free of charge, though a paid Business Class service was launched in 2013.
  3. JIRA is an issue tracking utility that offers bug and issue tracking, as well as project management functions. 


Messaging Queue Tools

  1. IronMQ is an elastic message queue created specifically with the cloud in mind. It's easy to use, runs on industrial-strength cloud infrastructure, and offers developers ready-to-use messaging with highly reliable delivery options and cloud-optimized performance.
  2. AWS SQS is a distributed message queuing solution offered by Amazon. It supports the programmatic sending of messages via web service applications as a way to communicate over the Internet.
  3. RabbitMQ is an open-source message-broker solution for advanced message queuing, with plug-ins for Streaming Text Oriented Messaging Protocol, MQ Telemetry Transport, and other protocols.
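To illustrate the produce/consume pattern that all of these services expose over the network, here is a deliberately simplified, in-memory Python sketch (hypothetical names; real queue services add visibility timeouts, acknowledgements, and durable storage):

```python
from collections import deque

class SimpleQueue:
    """In-memory sketch of the produce/consume pattern that services
    like IronMQ, SQS, and RabbitMQ expose over the network."""

    def __init__(self):
        self._messages = deque()

    def send(self, body):
        # A producer enqueues a message without knowing who will consume it
        self._messages.append(body)

    def receive(self):
        # A consumer dequeues the oldest message, or None if the queue is empty.
        # Real services keep the message invisible until it is acknowledged.
        return self._messages.popleft() if self._messages else None
```

The value of this pattern is decoupling: producers and consumers can scale, fail, and restart independently, with the queue absorbing bursts of traffic between them.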


Organizational transformations need some level of technological or tool-based assistance, and DevOps, continuous integration, and continuous delivery are no exceptions. The tools organizations identify and use in pursuit of their goals are critical to the success of their DevOps strategy. Iron.io offers two critical infrastructure DevOps tools in IronWorker and IronMQ. These tools will save your business money by allowing your teams to focus on application development instead of wasting time maintaining infrastructure. Start your free trial today, and take your business to the next level. Please follow our blog for more articles on DevOps.

Google Cloud Run: Review and Alternatives


Google Cloud Run is a new cloud computing platform that’s hot off the presses from Google, first announced at the company’s Google Cloud Next conference in April 2019. Google Cloud Run has generated a lot of excitement (and a lot of questions) among tech journalists and users of the public cloud alike, even though it’s still in beta.

We will discuss the ins and outs of Google Cloud Run in this all-in-one guide, including why it appeals to many Google Cloud Platform customers, the features it offers, and a comparison of Google Cloud Run alternatives.

What Is Google Cloud Run (and How Does It Work)?

What is serverless computing?

To answer the question “What is Google Cloud Run?,” we first need to define serverless computing.

Often just called “serverless,” serverless computing is a cloud computing paradigm that frees the user from the responsibility of purchasing or renting servers to run their applications on.

(Actually, the term “serverless” is a bit of a misnomer: The code still runs on a server, just not one that the user has to worry about.)

Cloud computing has soared in popularity over the past decade, thanks in large part to its greater convenience and lower maintenance requirements. Traditionally, however, users of cloud services have still needed to set up a server, scale its resources when necessary, and shut it down when they're done. This all changed with the arrival of serverless.

The phrase “serverless computing” is applied to two different types of cloud computing models:

  • BaaS (backend as a service) outsources the application backend to the cloud provider. The backend is the “behind the scenes” part of the software for purposes such as database management, user authentication, cloud storage, and push notifications for mobile apps.
  • FaaS (function as a service) still requires developers to write code for the backend. The difference is this code is only executed in response to certain events or requests. This enables you to decompose a monolithic server into a set of independent functionalities, making availability and scalability much easier.
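As a sketch of the FaaS model, here is a hypothetical event-driven handler in Python (the function name, event fields, and invocation details are illustrative; each platform defines its own handler signature):

```python
# A minimal, generic FaaS-style handler (hypothetical names; real
# platforms such as Google Cloud Functions or AWS Lambda each define
# their own signatures). The platform invokes the function only when
# an event arrives, and bills only for the execution time.

def resize_image_handler(event):
    # 'event' carries the trigger payload, e.g. a file-upload notification
    width, height = event["width"], event["height"]
    # The function does one independent piece of work and returns;
    # no server process lingers between invocations.
    return {"thumbnail_size": (width // 4, height // 4)}
```

Because each function is an independent unit like this, the platform can run as many copies in parallel as incoming events require, which is what makes the FaaS model easy to scale.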

You can think of FaaS serverless computing as like a water faucet in your home. When you want to take a bath or wash the dishes, you simply turn the handle to make it start flowing. The water is virtually infinite, and you stop when you have as much as you need, only paying for the resources that you’ve used.

Cloud computing without FaaS, by contrast, is like having a water well in your backyard. You need to take the time to dig the well and build the structure, and you only have a finite amount of water at your disposal. In the event that you run out, you’ll need to dig a deeper well (just like you need to scale the server that your application runs on).

Regardless of whether you use BaaS or FaaS, serverless offerings allow you to write code without having to worry about how to manage or scale the underlying infrastructure. For this reason, serverless has come into vogue recently. In a 2018 study, 46 percent of IT decision-makers reported that they use and evaluate serverless.

What are containers?


Now that we’ve defined serverless computing, we also need to define the concept of a container. (Feel free to skip to the next section if you’re very comfortable with your knowledge of containers.)

In the world of computing, a container is an application “package” that bundles up the software’s source code together with its settings and dependencies (libraries, frameworks, etc.). The “recipe” for building a container is known as the image. An image is a static file that is used to produce a container and execute the code within it.
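For illustration, a container image is commonly described by a Dockerfile; the following minimal sketch (with hypothetical file names) bundles source code together with its settings and dependencies into one portable package:

```dockerfile
# Minimal, hypothetical image recipe: source code plus settings and
# dependencies bundled into one portable package.
FROM python:3.9-slim

WORKDIR /app

# Install the application's dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Add the source code itself
COPY app.py .

# The command the container runs when started
CMD ["python", "app.py"]
```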

One of the primary purposes of containers is to provide a familiar IT environment for the application to run in when the software is moved to a different system or virtual machine (VM).

Containers are part of a broader concept known as virtualization, which seeks to create a virtual resource (e.g., a server or desktop computer) that is completely separate from the underlying hardware.

Unlike server or machine virtualization, containers do not include the underlying operating system. This makes them more lightweight, portable, and easy to use.

When you say the word “container,” most enterprise IT staff will immediately think of one or both of Docker and Kubernetes. These are the two most popular container solutions.

  • Docker is a runtime environment that seeks to automate the deployment of containers.
  • Kubernetes is a “container orchestration system” for Docker and other container tools, which means that it manages concerns such as deployment, scaling, and networking for applications running in containers.

Like serverless, containers have dramatically risen in popularity among users of cloud computing in just the past few years. A 2018 survey found that 47 percent of IT leaders were planning to deploy containers in a production environment, while 12 percent already had. Containers enjoy numerous benefits: platform independence, speed of deployment, resource efficiency, and more.

Containers vs. serverless: A false dilemma

Given the massive success stories of containers and serverless computing, it’s hardly a surprise that Google would look to combine them. The two technologies were often seen as competing alternatives before the arrival of Google Cloud Run.

Both serverless and containers are intended to make the development process less complex. They do this by automating much of the busy work and overhead. But they go about it in different ways. Serverless computing makes it easier to iterate and release new application versions, while containers ensure that the application will run in a single standardized IT environment.

Yet nothing prevents cloud computing users from combining both of these concepts within a single application. For example, an application could use a hybrid architecture, where containers can pick up the slack if a certain function requires more memory than the serverless vendor has provisioned for it.

As another example, you could build a large, complex application that mainly has a container-based architecture, but that hands over responsibility for some backend tasks (like data transfers and backups) to serverless functions.

Rather than continuing to enforce this false dichotomy, Google realized that serverless and containers could complement one another, each compensating for the other one’s deficiencies. There’s no need for users to choose between the portability of containers and the scalability of serverless computing.

Enter Google Cloud Run…

What is Google Cloud Run?

In its own words, Google Cloud Run “brings serverless to containers.” Google Cloud Run is a fully managed platform that is capable of running Docker container images as a stateless HTTP service.

Each container can be invoked with an HTTP request. All the tasks of infrastructure management (provisioning, scaling up and down, configuration, and management) are taken off the user's hands, as typically occurs with serverless computing.

Google Cloud Run is built on the Knative platform, which is an open API and runtime environment for building, deploying, and managing serverless workloads. Knative is based on Kubernetes, extending the platform in order to facilitate its use with serverless computing.

In the next section, we'll cover more technical details about the features and requirements of Google Cloud Run.

Google Cloud Run Features and Requirements


Google cites the selling points below as the most appealing features of Google Cloud Run:

  • Easy autoscaling: Depending on light or heavy traffic, Google Cloud Run can automatically scale your application up or down.
  • Fully managed: As a serverless offering, Google Cloud Run handles all the annoying and frustrating parts of managing your IT infrastructure.
  • Completely flexible: Whether you prefer to code in Python, PHP, Pascal, or Perl, Google Cloud Run is capable of working with any programming language and libraries (thanks to its use of containers).
  • Simple pricing: You pay only when your functions are running. The clock starts when the function is spun up, and ends immediately once it’s finished executing.

There are actually two options when using Google Cloud Run: a fully managed environment or a Google Kubernetes Engine (GKE) cluster. You can switch between the two choices easily, without having to reimplement your service.

In most cases, it’s best to stick with Google Cloud Run itself, and then move to Cloud Run on GKE if you need certain GKE-specific features, such as custom networking or GPUs. However, note that when you’re using Cloud Run on GKE, the autoscaling is limited by the capacity of your GKE cluster.

Google Cloud Run requirements

Google Cloud Run is still in beta (at the time of this writing). This means that things may change between now and the final version of the product. However, Google has already released a container runtime contract describing the behavior that your application must adhere to in order to use Google Cloud Run.

Some of the most noteworthy application requirements for Google Cloud Run are:

  • The container must be compiled for Linux 64-bit, but it can use any programming language or base image of your choice.
  • The container must listen for HTTP requests on 0.0.0.0, on the port defined by the PORT environment variable (almost always 8080).
  • The container instance must start an HTTP server within 4 minutes of receiving the HTTP request.
  • The container’s file system is an in-memory, writable file system. Any data written to the file system will not persist after the container has stopped.
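A minimal service that satisfies this contract could look like the following sketch, using only the Python standard library (the response text is arbitrary):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_port():
    # Cloud Run injects the port via the PORT environment variable
    # (almost always 8080); fall back to 8080 for local testing
    return int(os.environ.get("PORT", "8080"))

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Stateless: nothing is kept between requests, and nothing is
        # written to disk, since the file system does not persist
        body = b"Hello from Cloud Run"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen on 0.0.0.0 so the container accepts requests from outside
    HTTPServer(("0.0.0.0", get_port()), Handler).serve_forever()
```

Packaged into a Linux 64-bit container image, a server like this starts quickly and holds no state, which is exactly what the runtime contract asks for.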

With Google Cloud Run, the container only has access to CPU resources if it is processing a request. Outside of the scope of a request, the container will not have any CPU available.

In addition, the container must be stateless. This means that the container cannot rely on the state of a service between different HTTP requests, because it may be started and stopped at any time.

The resources allocated for each container instance in Google Cloud Run are as follows:

  • CPU: 1 vCPU (virtual CPU) for each container instance. However, the instance may run on multiple cores at the same time.
  • Memory: By default, each container instance has 256 MB of memory. Google says this can be increased up to a maximum of 2 GB.

Cloud Run Pricing


Google Cloud Run uses a “freemium” pricing model: free monthly quotas are available, but you'll need to pay once you go over the limit. These types of plans frequently catch users off guard, and they end up paying much more than expected. According to Forrester, a staggering 58% of companies surveyed said their costs exceeded their estimates.

The good news for Google Cloud Run users is that you’re charged only for the resources you use (rounded up to the nearest 0.1 second). This is typical of many public cloud offerings.

The free monthly quotas for Google Cloud Run are as follows:

  • CPU: The first 180,000 vCPU-seconds
  • Memory: The first 360,000 GB-seconds
  • Requests: The first 2 million requests
  • Networking: The first 1 GB egress traffic (platform-wide)

Once you bypass these limits, however, you’ll need to pay for your usage. The costs for the paid tier of Google Cloud Run are:

  • CPU: $0.000024 per vCPU-second
  • Memory: $0.0000025 per GB-second
  • Requests: $0.40 per 1 million requests
  • Networking: Free during the Google Cloud Run beta, with Google Compute Engine networking prices taking effect once the beta is over.
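To make the per-resource metering concrete, here is a rough sketch of estimating a monthly bill from the quotas and prices listed above (networking omitted, since it is free during the beta; all figures are subject to change once the platform leaves beta):

```python
# Beta prices quoted above (USD)
CPU_PER_VCPU_SECOND = 0.000024
MEM_PER_GB_SECOND = 0.0000025
PER_MILLION_REQUESTS = 0.40

# Free monthly quotas quoted above
FREE_VCPU_SECONDS = 180_000
FREE_GB_SECONDS = 360_000
FREE_REQUESTS = 2_000_000

def monthly_cost(vcpu_seconds, gb_seconds, requests):
    # Each resource is billed separately, only for usage beyond its own quota
    cpu = max(0, vcpu_seconds - FREE_VCPU_SECONDS) * CPU_PER_VCPU_SECOND
    mem = max(0, gb_seconds - FREE_GB_SECONDS) * MEM_PER_GB_SECOND
    req = max(0, requests - FREE_REQUESTS) / 1_000_000 * PER_MILLION_REQUESTS
    return cpu + mem + req
```

For example, a month with 300,000 vCPU-seconds, 400,000 GB-seconds, and 3 million requests would cost about $2.88 + $0.10 + $0.40 = $3.38, while usage within all three quotas costs nothing.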

It’s worthwhile to note you are billed separately for each resource; for example, the fact that you’ve exceeded your memory quota does not mean that you need to pay for your CPU and networking usage as well.

In addition, these prices may not be definitive. Like the features of Google Cloud Run, prices for Google Cloud are subject to change once the platform leaves beta status.

Finally, Cloud Run on GKE uses a separate pricing model that will be announced before the service reaches general availability.

Google Cloud Run Review: Pros and Cons

Because it's a brand-new product that's still in beta, reputable Google Cloud Run reviews are still hard to find.

Reaction to Google’s announcement has been fairly positive, acknowledging the benefits of combining serverless computing with a container-based architecture. Some users believe that the reasonable prices will be enough for them to consider switching from similar services such as AWS Fargate.

Other users are more critical, however, especially given that Google Cloud Run is currently only in beta. Some are worried about making the switch, given Google’s track record of terminating services such as Google Reader, as well as their decision to alter prices for the Google Maps API, which effectively shut down many websites that could not afford the higher fees.

Given that Google Cloud Run is in beta, the jury is still out on how well it will perform in practice. Google does not provide any uptime guarantees for cloud offerings before they reach general availability.

The disadvantages of Google Cloud Run will likely overlap with the disadvantages of Google Cloud Platform as a whole. These include fewer regions compared with competitors such as Amazon and Microsoft. In addition, as a later entrant to the public cloud market, Google can sometimes feel “rough around the edges,” and new features and improvements can take time to be released.

Google Cloud Run Alternatives

Since this is a comprehensive review of Google Cloud Run, we would be remiss if we didn’t mention some of the available alternatives to the Google Cloud Run service.

In fact, Google Cloud Run shares some of its core infrastructure with two of Google’s other serverless offerings: Google Cloud Functions and Google App Engine.

  • Google Cloud Functions is an “event-driven, serverless compute platform” that uses the FaaS model. Functions are triggered to execute by a specified external event from your cloud infrastructure and services. As with other serverless computing solutions, Google Cloud Functions removes the need to provision servers or scale resources up and down.
  • Google App Engine enables developers to “build highly scalable applications on a fully managed serverless platform.” The service provides access to Google's hosting and tier 1 internet service. However, one limitation of Google App Engine's original environment was that code had to be written in Java or Python and use Google's NoSQL database Bigtable.

Looking beyond the Google ecosystem, there are other strong options for developers who want to leverage both serverless and containers in their applications.

The most tested Cloud Run alternative is Iron.io, a serverless platform that offers a multi-cloud, Docker-based job processing service. As one of the early adopters of containers, we have been a major proponent of the benefits of both technologies.

The centerpiece of Iron.io's products, IronWorker is a scalable task queue platform for running containers at scale. It offers a variety of deployment options, from shared infrastructure to running the platform in your in-house IT environment. Jobs can be scheduled to run at a certain date or time, or processed on demand in response to certain events.

In addition to IronWorker, Iron.io also provides IronFunctions, an open-source serverless microservices platform that uses the FaaS model. Unlike services such as AWS Lambda, IronFunctions is a cloud-agnostic offering that can work with any public, private, or hybrid cloud environment. Indeed, Iron.io allows AWS Lambda users to easily export their functions into IronFunctions, which helps avoid vendor lock-in. IronFunctions uses Docker containers as its basic unit of work, which means you can work with any programming language or library that fits your needs.


Google Cloud Run represents a major development for many customers of Google Cloud Platform who want to use both serverless and container technologies in their applications. However, Google Cloud Run is only the latest entrant into this space, and may not necessarily be the best choice for your company’s needs and objectives.

If you want to determine which serverless + container solution is right for you, speak with a skilled, knowledgeable technology partner like Iron.io who can understand your individual situation. Whether it's our own IronWorker solution, Google Cloud Run, or something else entirely, we'll help you get started on the right path for your business.

How We Went from 30 Servers to 2: Go

When we built the first version of IronWorker, about 3 years ago, it was written in Ruby and the API was built on Rails. It didn't take long for us to start getting some pretty heavy load, and we quickly reached the limits of our Ruby setup. Long story short, we switched to Go. For the long story, keep reading; here's how things went down.