OpenShift Ecosystem: Iron.io Brings a Serverless Experience to OpenShift

There has been a lot of buzz around the Serverless trend lately, about what it really means and what its merits are. At the end of the day, it’s really just a new way to treat certain workloads – background jobs. How does this new pattern fit into the context of developing cloud-native applications and operating container platforms such as Red Hat OpenShift?


Delivering continuous innovation to customers often leads to continuous pressure on the developers to build and ship software… well, continuously. Smart companies are doing all they can to empower their development teams with the right culture to encourage productivity, and the right tools to make it happen. Emerging as the foundational layer for many organizations’ application development efforts is a container application platform, with OpenShift as a leading choice.

As infrastructure resources continue to be commoditized, and as services continue to be exposed as APIs, having a foundational layer is critical to bring everything together. This is especially important when dealing with multiple distributed applications and multiple distributed teams, as containerized applications, workloads, and services need a unifying environment.

PRIMED FOR A DEEPER INTEGRATION

Iron.io has been a long-time partner of Red Hat and OpenShift, with both of its products, IronMQ and IronWorker, available in the OpenShift Hub. We were early adopters of container technologies, and the recent introduction of Iron.io Hybrid makes it easy to integrate with container-based platforms such as OpenShift through an independently deployable runtime service.

With the introduction of OpenShift Primed by Red Hat, customers who wish to enable the container-based job processing capabilities that Iron.io provides can do so with the confidence that the solution is validated and works well in any OpenShift environment, public or private.


Getting started with Iron.io on OpenShift is easy. The following instructions walk through a simple example. To learn more about the process and to get started yourself, Contact Us.

1. Create an OpenShift Application

First, create a project within OpenShift to set up the environment. For this demo, we’ll create a Ruby application with the sample code available from Red Hat.


2. Create a Deployment Configuration

With Iron.io Hybrid, the runtime is an independent service available as a container image on Docker Hub (iron/runner is a private image at the time of writing, while in beta). We refer to this as the “runner”, a service that monitors an IronMQ queue for new jobs. When a job is picked up, the runner grabs the associated Docker image, spins up a fresh container, executes the process, and tears down the container gracefully. Rinse and repeat at massive scale.
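Before registering anything, you need the runner-deployment.yml file itself. Here is a minimal sketch of what it might contain, written against the v1 DeploymentConfig API; the replica count, labels, and structure are illustrative assumptions, not Iron.io-recommended settings:

```shell
# Write a minimal DeploymentConfig for the runner service.
# Replica count and labels are illustrative assumptions.
cat > runner-deployment.yml <<'EOF'
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: runner
spec:
  replicas: 3                # number of runner containers to start
  selector:
    app: runner
  template:
    metadata:
      labels:
        app: runner
    spec:
      containers:
      - name: runner
        image: iron/runner   # private image while in beta
        securityContext:
          privileged: true   # iron/runner requires privileged mode (see step 4)
EOF
```

With the file in place, the oc create command below registers it with OpenShift.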

With the deployment configuration created, it’s time to register it with OpenShift.

$ oc create -f runner-deployment.yml

Before we run the deployment, we need to create an Iron.io cluster so we can set the right environment variables when starting the pod.

3. Create an Iron.io Cluster

If you don’t already have an Iron.io account, you can start a trial here. From the Iron.io Dashboard, you can easily create a cluster that provides you with an id and token value. Go to ‘Profile -> My Clusters -> New Cluster’. Give your new cluster a name, and then select Memory and CPU values, which are the resources to be allocated to each job. Once created, copy the id and token values.
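It helps to keep the copied id and token in shell variables so the oc commands in the next step can reference them. The values here are placeholders, not real credentials:

```shell
# Placeholder values -- substitute the id and token copied from your
# Iron.io dashboard.
CLUSTER_ID="abc123"
CLUSTER_TOKEN="s3cr3t"
export CLUSTER_ID CLUSTER_TOKEN

echo "using cluster $CLUSTER_ID"
```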


4. Deploy the cluster

First, set the environment variables on the deployment configuration.

$ oc env deploymentconfigs runner CLUSTER_ID=%CLUSTER_ID

$ oc env deploymentconfigs runner CLUSTER_TOKEN=%CLUSTER_TOKEN

The iron/runner image runs in privileged mode, which can be enabled in OpenShift by granting the pod’s service account access to a security context constraint with ‘allowPrivilegedContainer: true’, as documented here.

Now we’re ready to deploy our cluster using the OpenShift CLI.

$ oc deploy runner

This will create a pod with the number of runner containers set by ‘replicas’ in the deployment configuration. Iron.io scales based on the number of concurrent containers available to do work, which is a function of the replica count and the resource allocation.
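As a rough illustration of that multiplication (the numbers are assumptions for the example, not Iron.io defaults): three replicas, each with enough Memory and CPU allocated to run four containers at once, yield twelve concurrently executing jobs.

```shell
# Illustrative capacity math; both values are assumed for the example.
replicas=3               # from the deployment configuration
containers_per_runner=4  # depends on the Memory/CPU chosen in step 3
echo "$(( replicas * containers_per_runner )) concurrent jobs"
```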


5. Queue tasks to your new cluster

Your cluster is now ready to do work. To send jobs to the runner pod for execution, all you need to do is pass the cluster_id when using the Iron.io API.

First, register an image with Iron.io:

$ iron register iron/hello

Then queue up a job via the CLI, or use one of our many client libraries in your language of choice. The cluster argument means that the job will be delivered to the cluster you created on OpenShift. Leaving it out means the job will run on the default public cluster operated by Iron.io.

$ iron worker queue --cluster %CLUSTER_ID iron/hello
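Most real jobs carry input. Assuming the CLI accepts a payload option (verify the exact flag name with ‘iron worker queue --help’ on your version), a queued job with a JSON payload might be composed like this; the cluster id is a placeholder:

```shell
# Compose the queue command; CLUSTER_ID is a placeholder and --payload is an
# assumed flag name -- verify against `iron worker queue --help`.
CLUSTER_ID="abc123"
cmd="iron worker queue --cluster $CLUSTER_ID --payload '{\"name\":\"World\"}' iron/hello"
echo "$cmd"
```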


Get Started Now!
As of today, beta users are being accepted for Iron.io on OpenShift. The pairing will provide users with an end-to-end environment for building and deploying applications at scale, without the headaches of complex operations. Flexible, abstracted platforms provide the best of both worlds, a sentiment shared by both Red Hat and Iron.io.
