Docker vs Kubernetes – How Do They Stack Up?

Docker and Kubernetes are two of the hottest technologies in the world of software. Most software architectures use them, or are considering them. The question is often asked: Docker vs Kubernetes – which is better? Which one should WE be using? As it turns out, this question misunderstands the two. These technologies don’t actually do the same thing! However, they do complement each other nicely. In this post, we will explore the “Docker vs Kubernetes” question. We will dig into the background and details of both, and show how they differ. With this information, you can decide how Docker and Kubernetes fit into your architecture.

First, some background…
How Did We Get Here?
Before diving into the topic, let’s walk through a brief history of how we got here.
In the Beginning…
 
In the REALLY early days of computing (like, the 1960s), there was time sharing on mainframes. On the surface, this looked nothing like its modern-day counterparts: a room full of big iron, perhaps a primitive text-based terminal, lots of little lights, very limited functionality. Yet the concept was the same – one machine serving many users at once, each isolated from the others. While not practical for today’s needs, this technology planted the seed for the future.

Around the 1980s and 1990s, computer workstations began to grow in prominence. Computers no longer required a room full of mainframe hardware. Instead, a server could fit on your desk. One in every home! In the software industry, these workstations became the main workhorses of web serving. This didn’t scale well to a large number of users and services, due to the expensive hardware. For most users, a beefy workstation offered far more capacity than one person required!
Virtual Machines
 
Virtual Machines (VMs) offered a solution to this problem. Full virtualization allowed one physical server to host several “VM instances”, each featuring its own copy of the Operating System. This allowed “machines” to be rapidly created and deployed. Instead of deploying a physical server each time you needed a computer, a VM could take its place. These VMs were usually not as powerful as a full workstation. But they didn’t need to be.

This advance made it much easier to add new machines to a computing environment. However, it was inefficient and costly. Each VM instance required a full operating system. Lots of duplicate code and processes would run on a single VM server, and many OS licenses had to be purchased. The industry kept working on better alternatives.
Containers
 
Containers (also known as Operating-System-Level Virtualization) provide a solution to this waste. A single container environment provides the “core” Operating System processes. Each container running in this environment is an isolated “user-space” instance. In other words, the instances share the common functionality (file system, networking, etc.). This eliminates the duplicate OS-level processes. As a result, a single physical server can support a much larger volume of containers. Additionally, the cloud computing landscape lends itself very well to container architecture. Customers generally don’t want (or need) to worry about individual machines. It’s all “in the cloud”. Developers can code, test, and deploy containers to the cloud, never worrying about the hardware they are running on. Containers have exploded in popularity with the growth of cloud computing.
Docker
 
Docker (both the company and the product) is a big name in containerization. Docker began as an internal project at dotCloud, a Platform as a Service company. It soon outgrew its creator, and debuted to the public in 2013. It is an open source project, and has rapidly become a leader in the Container space. “Google” is synonymous with “Search” – you might say, “google it”. The same has almost become true for Docker: “use docker” means “use containers”. Docker is available on all major cloud platforms, and has grown rapidly since its release.

Here are some key concepts from the world of Docker:
 
  • Image – the Docker Image is the file that holds everything necessary to run a Container. This includes:
      • the actual application code
      • a run-time environment, with all the OS services the application needs
      • any libraries needed for your application
      • environment variables and config files, such as connection strings and other settings
 
  • Container – a Container is a “copy” of an Image, either running or ready to run in Docker. There can be more than one Container created from the same Image.
 
  • Networking – Docker allows different Containers to speak to each other (and the outside world). The code running in the Container isn’t “aware” that it’s running within Docker. It simply makes network requests (REST, etc), and Docker routes the calls.
 
  • Volumes – Docker offers Volumes to allow for shared storage between Containers. The sketch below shows several of these concepts in action.
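
To make these concepts concrete, here is a minimal sketch using the Docker SDK for Python (pip install docker). It assumes a local Docker daemon is running; the image and container names are illustrative.

```python
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Pull an Image: the template holding everything needed to run a Container.
image = client.images.pull("redis", tag="latest")

# Run two Containers from the same Image -- independent "copies".
c1 = client.containers.run(image, detach=True, name="cache-1")
c2 = client.containers.run(image, detach=True, name="cache-2")
print(c1.short_id, c2.short_id)

# Clean up.
for c in (c1, c2):
    c.stop()
    c.remove()
```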
 
The Docker “ecosystem” consists of a few main software components:
Docker Engine
 
Docker’s main platform is the Docker Engine. It is the software that hosts and runs the Containers. It runs on the physical host machine, and is the “sandbox” all the containers will live within. The Docker Engine consists of the following components:
 
  • The Server, or Daemon – the Daemon is the “brains” of the whole operation. This is the main process that manages all the other Docker pieces. Those pieces include Images, Containers, Networks, and Volumes.
 
  • REST API – The REST API allows programs to communicate with the Daemon for all their needs. This includes adding/removing Images, stopping/starting Containers, adjusting configuration, etc.
 
  • Command Line Interface (CLI) – allows command line interaction with the Docker Daemon. This is how end users interact with Docker. It uses the Docker REST API under the covers, as the sketch below shows.
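
For instance, here is a hedged sketch of what the CLI does: it talks to the Daemon over its REST API. The sketch uses the Docker SDK for Python’s low-level client and assumes the default Unix socket location.

```python
import docker

# Low-level client that speaks the Docker REST API over the Daemon's
# Unix socket (the default on Linux; adjust the path if yours differs).
api = docker.APIClient(base_url="unix://var/run/docker.sock")

print(api.version())            # roughly what `docker version` asks the Daemon
for c in api.containers():      # roughly what `docker ps` asks the Daemon
    print(c["Id"][:12], c["Image"], c["Status"])
```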
Docker Hub
 
Docker Hub is an enormous online library containing vast quantities of “pre-made” images. It is like GitHub, except instead of hosting Git repositories, it hosts Docker images. For almost any software need, there is an image on Docker Hub that provides it.

For example, you might need:
 
  • a Rails environment for web services
 
  • connected with a MySQL database
 
  • with Redis available for caching.
 
Docker Hub contains “Official” images for these types of things. “Pull” the required images to your local environment, and use them to build Containers. Complex, production-ready environments can be ready within minutes, as sketched below.

Companies can also pay for private repositories to host their internal Docker images. Docker Hub offers a centralized location to track and share images – history tracking, branching, etc. Like GitHub, except for Docker.
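
Here is a minimal sketch of that workflow with the Docker SDK for Python; the repository names follow the example above, and the port mapping is illustrative.

```python
import docker

client = docker.from_env()

# "Pull" the Official images from Docker Hub.
for repo in ("rails", "mysql", "redis"):
    client.images.pull(repo, tag="latest")

# Build a Container from one of them, e.g. Redis for caching.
cache = client.containers.run(
    "redis",
    detach=True,
    ports={"6379/tcp": 6379},  # expose Redis on the host (illustrative)
)
print(cache.name, cache.status)
```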
Docker Swarm
 
Docker Swarm is Docker’s open source Container Orchestration platform. Container Orchestration becomes important in large scale deployments – large environments with tens, hundreds, or thousands of Containers. With this type of volume, manually tracking and deploying Containers becomes cost prohibitive. An Orchestration platform provides a “command center” that monitors and deploys all the various Containers in an environment.

Docker Swarm provides some of the same functionality as Kubernetes. It is simpler and less powerful, but easier to get started with. It uses the same CLI, making its usage familiar to a typical Docker user. We’ll get more into Container Orchestration below.
Alternatives
 
While Docker is the industry leader, there are alternatives. These include:
 
  • CoreOS’s rkt (Rocket) – the “pod-native” container engine, designed around the same Pod concept Kubernetes uses. It is a direct competitor to Docker.
 
  • Cloud Foundry – adds a layer of abstraction on top of Containers. It allows you to provide the application and not worry about the layers beneath. With this service, you’re not really focused on the Container layer.
 
  • Digital Ocean – a cloud provider whose “droplets” are virtual machines rather than containers. Like Cloud Foundry, it abstracts away some complexity, and there are also container and Kubernetes options in its control panel.
 
  • “Serverless” services – major cloud providers like AWS and Azure offer “serverless” services. These allow companies to create simple web services on the fly: no hardware, no hardware virtualization, no worries about the underlying platform. They are not technically Containers, but they support many of the same use cases.

 

Kubernetes

Kubernetes is the industry leader in Container Orchestration. First, here’s an overview of what that is…
Container Orchestration
 
Containers are a very powerful tool, but in large environments, they can get out of hand. Different deployment schedules into different environment types. Tracking uptime, and knowing when things fall down. Networking spaghetti. Capacity planning. Tracking all that complexity requires more tools.

As this technology has matured, Container Orchestration platforms have grown in importance. These orchestration engines offer some of the following benefits:
 
  • “Dashboard” for all the Containers. One place to watch and manage them all.
 
  • Automatic provisioning and deployment. Rather than individually spinning up Containers, the orchestration engine manages them for you. Push a button, adjust a value, and more Containers spring to life.
 
  • Redundancy – if a Container fails in the wild, an orchestration engine will notice the failure and put a new one in its place.
 
  • Scaling – as your workload grows, you may outgrow what you have. An orchestration engine detects capacity shortages. It adds new Containers to spread the load.
 
  • Resource Allocation – under all those Containers, you’re still dealing with real-life computers. Orchestration engines can manage and optimize those physical resources.
 
While there are several options available, Kubernetes has become the market leader.
Rise of Kubernetes
 
Kubernetes (Greek for “helmsman”) began at Google in 2014. It was heavily influenced by Google’s internal “Borg” system, the tool Google used to manage all their environments. Google released and open-sourced Kubernetes in 2015. It has since grown to become one of the largest open source projects on the planet. All the major cloud providers offer Kubernetes solutions, and Kubernetes is now the de facto Container Orchestration platform. This post goes into great detail about the growth of Kubernetes over the past couple of years.
Kubernetes architecture
 
At a very high level, Kubernetes helps manage large numbers of Containers. Simple enough, right?

At a more granular level, Kubernetes consists of a Cluster managing lots of Nodes. It has one Master Node, and one-to-many Worker Nodes. These Nodes use Pods to deploy Containers to environments. As requirements scale, Kubernetes can deploy more Containers, Pods, and Nodes. Kubernetes tracks all of the above, and adds or removes pieces as needed.

Here’s a closer look at all the concepts described above:
 
  • Cluster – A Cluster is an instance of a Kubernetes environment. It has a Master node, and several Worker nodes.
 
  • Node – A Kubernetes Node is a server (physical or virtual) running the Kubernetes node processes. A node is either a Master node or a Worker node. Together, Masters and Workers manage all the distributed resources, both physical and virtual.
 
  • Master – the Master node is the control center for Kubernetes. It hosts an API server exposing a REST interface used to communicate with Worker nodes. The Master runs the Scheduler, which assigns Containers to the various Worker Nodes. It contains the Controller Manager, which manages the current state of the cluster. If the cluster doesn’t match the desired state, the Controller Manager will correct it. For example, if Containers fail, it creates new Containers to take their place.
 
  • Worker – the Worker Nodes carry out the wishes of the Master Node. This includes starting Containers, and reporting back their status. As an environment needs to scale to more machines, Kubernetes adds more Worker Nodes.
 
  • Pods – A Pod is the smallest deployable unit in the Kubernetes object model. It consists of one or more Containers, storage resources, networking glue, and configuration. Kubernetes deploys Pods to Nodes. Docker is the main Container technology Kubernetes uses, but others are available. (The sketch below shows these pieces from a client’s point of view.)
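
As an illustration, here is a minimal sketch using the official Kubernetes Python client (pip install kubernetes); it assumes a kubeconfig pointing at a running Cluster.

```python
from kubernetes import client, config

config.load_kube_config()        # credentials for the Master's API server
v1 = client.CoreV1Api()

# Nodes: the Master and Worker machines that make up the Cluster.
for node in v1.list_node().items:
    print("node:", node.metadata.name)

# Pods: the smallest deployable units, scheduled onto those Nodes.
for pod in v1.list_pod_for_all_namespaces().items:
    print("pod:", pod.metadata.namespace, pod.metadata.name,
          "on", pod.spec.node_name)
```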
Alternatives
 
While Kubernetes is the front runner, there are alternative options for Container Orchestration. These include:
 
  • Docker Swarm – already mentioned above, this is Docker’s Container Orchestration offering. This has the advantage of coming from the same team that maintains Docker. It is also considered easier to use by some, and faster to get started. Additionally, Swarm uses the same CLI as Docker. This makes it easy to use for those already familiar with Docker.
 
  • Apache Marathon – a container orchestration platform for Apache Mesos. It is not as widespread or popular as Docker Swarm or Kubernetes, but if you are already invested in the Apache ecosystem, it might be a good choice. It requires a decent level of Linux/Apache expertise to get started.
 
  • Nomad – HashiCorp’s lightweight orchestration platform. It doesn’t feature all the bells and whistles of more advanced systems, but it is simpler, which may appeal to some.
In Summary

With all this solid background in place, we are now better poised to make a decision. How should we containerize everything?

For starters, Docker is a must. While alternatives exist, Docker is the clear front runner. It has become the industry standard, and features extensive tooling and documentation. It is open source, and free to get started. You can’t go wrong using Docker as your container technology.

Once things get big enough to orchestrate, you must make a decision. The best two choices seem to be:
 
  1. Docker Swarm – an easy stepping stone from simple Docker, Swarm is worth exploring first. Using the same CLI, you can grow your Docker environment to multiple Containers on several Machines. If you are able to manage everything this way, you might just stop there.
 
  2. Kubernetes – if Swarm doesn’t seem up to the task, it’s probably worth the leap to Kubernetes. It’s the leader in the orchestration space, which offers the same documentation and support advantages. It will grow as big as you need it, and supports the complications that arise with large-scale systems.
Iron.io
If your organization is looking to use Containers in the Cloud, Iron.io can help you get there. Iron.io supports Docker, Kubernetes, and other alternatives. Iron.io’s expert staff will help you intelligently scale your business on any of the major cloud platforms. Iron.io is trusted by brands such as Zenefits, Google, and Untappd. Let them help your business containerize in the cloud!

Docker Jobs: 11 Awesome Jobs for 2019

If you’re searching for Docker jobs online, it can be a real challenge to find open positions that fit your skills.

A development job presents a fantastic opportunity to work at an innovative company. However, finding positions can be a time-consuming process. Here’s some advice for finding Docker jobs online. You’ll also find 11 open positions to consider.


Docker Skills Are The Next Best Thing

When it comes to automating the creation and deployment of container-based apps, Docker is the go-to technology. That’s why it’s one of the best skills you can possess as a developer in today’s marketplace.

Containers, a lighter-weight type of virtualization, are truly taking over. Docker promises to free developers from dependencies on specific software and infrastructure, which means Docker’s approach can cut costs and boost efficiency.

Overall demand for DevOps skills has been steadily increasing since the early 2000s. As a developer, you recognize the importance of continuously expanding your skillset. Docker is the new thing you should be looking to sharpen up on.


The Benefits To Expect

Working a top-of-the-line position at an innovative new company means you’ll get to enjoy a number of different benefits – things the general workforce has yet to gain access to.

First, innovations in workplace healthcare have brought in-office care to the scene. Other wellness programs are also being further emphasized. New perks are coming to big companies and innovative startups alike. And these are the places currently searching for Docker professionals.

Secondly, you’ll get to enjoy a strong work/life balance. This is thanks to a number of initiatives that larger companies are taking. Businesses are now working hard to support employees in living a healthier lifestyle. This includes paid time off and paid holidays. Oftentimes, sabbaticals are also offered that allow you to truly escape for a while.

Volunteering opportunities and other team-building outlets abound. They can help you find more meaning in your career. You can even find purpose in your personal life thanks to work-sponsored endeavors. This is all part of a widespread effort by companies to be more supportive of employees’ well-being.

Many modern workplaces feature on-site gyms and fitness centers. Personal coaching is often included. It’s also becoming more common for companies to pay for a fitness membership on behalf of workers.

Some companies offer wellness bonuses. So you can even get paid money for keeping yourself in tip-top shape. That’s right, some companies actually monetarily reward employees. Get paid to lose those extra inches or make strides to living a healthier lifestyle.

Of course, this all pays off in the end for the company. Study after study is proving how important work/life balance is. Studies are also proving how motivating it can be for a company to go the extra mile to support workers’ health. This is why newer companies are adopting and offering such neat programs.

If you’re focused on your family, you may even get the joy of parental leave. At the very least, this work perk will allow you to take time off for your family without getting penalized. The best companies even offer paid parental leave. That means you can take time off without adding any financial stress.


11 Docker Jobs to Consider

  1. Senior Software Developer at ThoughtWorks. Work with Fortune 500 clients as you work through business challenges. Your job is to spot poorly written code and fix it. Experience with Docker preferred.
  2. Senior Backend Engineer at AllyO. This is a fast-growing startup looking to build a strong team. They need an experienced and motivated individual. Experience with Docker required.
  3. DevOps / Python Developer at Lore IO. This is a well-funded startup in its early stages. In this position, you’ll be integrating Lore into cloud ecosystems. Experience with Docker preferred.
  4. Senior Python Developer at Mako Professionals. As a senior developer, you’ll spend about 25% of your time coding. You’ll also test systems and work with a collaborative team. Experience with Docker required.
  5. Senior Site Reliability Engineer at Procurant. Support and maintain services in this exciting position. You will scale systems using automation. Keeping up with evolving technologies is a must. Experience with Docker required.
  6. Senior Python Developer at Pearson. Lead development initiatives and work closely with scientists in this position. You’ll promote the use of new technologies too. Experience with Docker required.
  7. Senior Python Backend Developer at Mirafra. Design database architecture in this fast-paced environment. Your job includes delivering high performance applications. You’ll focus on scalability too. Experience with Docker preferred.
  8. Senior Database Administrator at Verisys. If you’re fun and energetic, this could be the right position for you. Work to build a next generation platform for healthcare credentialing. Experience with Docker preferred.
  9. Senior DevOps Engineer at Outset Medical. This privately held company has a number of investors backing it. Work on innovative medical technologies in this rewarding position. Experience with Docker required.
  10. Senior Site Reliability Engineer at Guardian Analytics. This company fights fraud in the financial industry. Your job will play a vital role in helping them keep consumers safe. Experience with Docker required.
  11. DevOps Engineer at Arthur Grand Technologies. Design and build automated systems in this high-paying position. Experience with Docker required.


Where to Find Docker Jobs

If you’re looking for Docker jobs, you should be looking on a number of different websites. These offer full lists of open positions that you could potentially snag.

Indeed is one of the most popular job search platforms. You should also look on LinkedIn and other professional networking websites. Oftentimes, you’ll be able to find a great opportunity without ever looking at an official job ad.

If you have the right people in your network, have them put in a good word for you. This way, you could very well be the first person a company contacts. Be front of mind when they start looking for a professional with a strong Docker skillset.

You can also find plenty of new opportunities on websites like Monster and other job search platforms. Glassdoor is also a good website to visit. It can help you review a potential company that is hiring and make sure they are a worthy employer.

On Glassdoor, you’ll often be able to see reviews from previous employees sharing their experiences with a particular employer. This information can be vital in helping you avoid bad companies. It can also aid you in understanding more about the company itself and what they are after.

You shouldn’t let a few bad reviews from disgruntled employees shake you. But if a company’s reviews seem genuine, it’s probably a good idea to take them into consideration.

As far as choosing a job search site to use to look for open positions, try using more than one. Many companies cross-post on different platforms to reach the most potential candidates. But some only post on a few select platforms (or even just one). That means looking on multiple sites can reveal the most opportunities to you.

It doesn’t hurt to apply to all of the open positions you find, but you probably won’t be doing that. It takes time and a bit of research to craft a good application. It’s best to follow the tips below and only apply to the positions you really want.


Tips for Applying

When applying for a new position, it’s always best to start by reviewing your resume. Your resume needs to highlight the fact that you’re up-to-date on all the relevant skills.

You should also go the extra mile to tailor your resume for each position you’re applying for. Write a cover letter targeted at each specific company’s offerings too.

For example, if you are reading a job opening ad that mentions X, Y, and Z, you definitely want your resume to reflect that. Don’t waste time on A, B, and C.

Emphasize your proficiencies. Focus on what aligns with the specific skills outlined in the job ad. Place requested skills at the top of any bulleted lists.

Additionally, you should clean up your resume by cutting out unnecessary experience. Positions that simply aren’t relevant to your application aren’t needed. It’s a common mistake to try and list out as much as possible. But, if you’re listing every job you’ve had since you first started working, that’s fluff.

Similarly, avoid adding filler skills like “Microsoft Suite”. You need to shorten things up. Your resume should reflect only your most relevant experience. It should contain only relevant skills so that it’s easy for the recruiter to see your value.

Most recruiters only skim a resume. By taking out all the unnecessary items, you’ll be sure to get their attention instantly. You’ll portray yourself as an exact match. It will be clear that you specialize in what the company requires.

The next step is reviewing your cover letter. Your cover letter is a must to include because it’s your chance to speak to the recruiter. In it, you can detail the information in your resume to explain why you are the perfect fit for the given position.

Again, you’ll want to tailor this to fit the specific job opening you’re going after. When applying, be certain that you include your contact information. There is no need to include references unless you get a call back requesting them.

Most companies today have a multi-step interview process. It typically begins with a phone interview. This gives you the chance to ask questions and explain why you like the position. You’ll also let them know why you’re a great fit.

If you pass the phone interview, the next step is generally an on-site interview. Depending on the size of the company, there may be multiple phone interviews. There may also be multiple on-site interviews. Generally, they will explain the process in the first phone conversation with you.

It may feel like a lot of hoops to jump through. This is especially true if you’re applying at a larger company. However, these are necessary steps that help them see if you’re the right fit for the company. At the same time, they will help you understand whether you think the company is the right fit for you.

One final tip of the application process is to ask questions when given the opportunity. You should formulate questions that articulate your interest in the position. These questions also showcase your understanding of their expectations.

You should do some basic research into the company in order to come up with the right questions. This will enable you to better understand the company. You’ll also get a glimpse of the work environment. It can even help you understand what they are after in the employee they hire.


Next Steps

Now that you have read all about the importance of Docker skills, you should feel inspired. The next step is to begin looking for open positions where you can show off your new skillset.

The job ads you look at should detail what specific skills the company is looking for. Look for a position that best matches your list of current skills. Keep in mind, of course, that not every position will be a good fit for you.

There is increasing emphasis on matching values and other aspects today. So, you may find a company isn’t the right match for you even if you seem like the right match for the company (and vice versa). Put in the effort and you’ll be able to find the right Docker job.


About Iron.io

Iron.io features a suite of developer tools. The aim of Iron.io is to empower developers to work smarter. Save time with a suite of Cloud Native products. Expert staff will stand by every step of the way. With Iron.io, you can intelligently scale your business.

Iron at APIdays, see you there?

First off, we’re giving away a few free tickets to the SF APIdays conference on July 31st.  Comment about your favorite API on this post for a chance to win a free ticket!

With the freebie out of the way: we’re huge fans of APIdays (and APIs in general) and love to reference this landscape diagram.  If the landscape weren’t moving so fast, we’d probably have a copy printed on our office wall alongside the Cloud Native Landscape diagram.

APIs are everywhere

As engineers, most of us are inherently API minded.  Others, not so much.  It’s only been in the last 5 or 6 years that the idea behind APIs has gained public mindshare.  Following Executive Order 13571 in 2011, the Obama administration directed federal agencies to deploy Web APIs, which put APIs in the public spotlight.   There’s been a lot of progress in the public sector, and now we’re holding conferences about APIs in general.  These are steps in the right direction.

Iron <3’s APIs

We build all of our products with APIs in mind.  All of our client libraries for each of our products use our HTTP APIs, and we’ve received a lot of praise for building API-centric and cloud-agnostic services.  Internally we rely on a lot of APIs as well.  We use API management solutions like DreamFactory to coordinate data sources, RingCaptcha for SMS verification, and Zapier to tie disparate services together.  We obviously use all of the public cloud APIs directly as well.

What APIs do you use?

There are many other APIs we use that I didn’t list.  What are some of your favorites?  Comment below and you might be sent a free ticket to APIdays.  If you’re already going, let us know, as we’d be happy to meet up!

Iron Enters the Manifold

We’re happy to announce that Iron’s suite of products is now publicly available on the Manifold marketplace (see the bottom of this post for a coupon code you’ll want to redeem).

If you haven’t heard of Manifold before, you definitely need to check them out.  They’re an application marketplace that lets you provision and manage services without being tied to a specific platform.  For those of you familiar with Heroku, you can think of it as their Add-ons marketplace but without being tied to Heroku the platform.  Manifold itself was founded by ex-Heroku leadership, so there’s plenty of best practice and real-world experience being brought to the table with their offering.

With Manifold, you have the freedom to deploy your apps wherever you’d like.  Tight Terraform and Kubernetes integrations mean you aren’t constrained to a single platform.  This is extremely important: you’re future-proofing your 3rd party integrations and minimizing the operational expenditure hit you’d normally face when migrating to new infrastructure.  This is something we pride ourselves on at Iron as well, and a lot of our customers like having the insurance that if they move off a public cloud elsewhere, we can deploy on-premise and easily move with them.

We’re proud to be a part of the Manifold platform and highly recommend trying it.  As part of this release, they’re offering $25 in Manifold credits if you use the coupon code “IRON”.

It’s extremely easy to get started and we definitely think it’s the way developers will be building applications moving forward.


3 Key Benefits to Container-Based Background Job Processing

Whether deploying applications or providing microservices, being able to get tasks done in the background without user intervention is key to operating efficiently for IT and development teams. One effective way to facilitate background job processing is with the help of containers.

Container-based background job processing comes with a whole host of benefits. Here are some of the key benefits of using container-based background job processing that IT and development teams can leverage.

Enhanced Security

With ever-increasing data breaches and ransomware threats, keeping applications secure during deployment is vital. Managing the deployment of applications often calls for working with several development teams distributed across different locations. Having more people on these teams creates a higher risk of exposure and data breaches due to staff errors or overlooked vulnerabilities.

The great news is that containers offer enhanced security, because significant effort has gone into safeguarding them. For instance, container systems and container management systems, such as Docker and Kubernetes, support container image signing to help ensure your team is deploying containers from trusted sources.

Moreover, container scanning solutions also help enhance security by quickly identifying vulnerabilities that may exist in your containers, including containers that were signed. This helps reduce security risks, including the risk of deploying unsafe containers.

Versatile Background Job Capabilities

Being able to provide on-time delivery to clients is essential for enhancing the customer’s experience. With the help of container-based background job processing, IT and development teams can manage a variety of background tasks.

For instance, tasks such as email delivery, automated scaling, calculating bandwidth, or automating push notifications can be handled by containers. That’s because containers fragment applications into smaller components while enabling communication among developer teams. This also helps facilitate speedy software development and testing. Moreover, using a container-based workload platform from a development tool expert, such as Iron.io, helps enterprises free up staff from managing background job processing so they can focus on more vital tasks, such as testing and developing their software applications. A sketch of one such background job follows.
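
As a hedged sketch of what such a job can look like, here is a minimal email delivery worker that could run inside a container. It reads a JSON task payload from stdin; the payload shape, sender, and SMTP host are placeholders, not Iron.io specifics.

```python
import json
import smtplib
import sys
from email.message import EmailMessage

def main() -> None:
    # e.g. {"to": "...", "subject": "...", "body": "..."}
    payload = json.load(sys.stdin)

    msg = EmailMessage()
    msg["From"] = "noreply@example.com"      # placeholder sender
    msg["To"] = payload["to"]
    msg["Subject"] = payload["subject"]
    msg.set_content(payload["body"])

    with smtplib.SMTP("smtp.example.com") as smtp:  # placeholder host
        smtp.send_message(msg)

if __name__ == "__main__":
    main()
```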

Flexible Deployment

Thanks to containers’ portability, enterprises can leverage flexible deployment options, including the shared, on-premise, dedicated, or hybrid options offered by a reliable container-based workload platform, such as Iron.io’s Worker. That means enterprise leaders can choose a deployment option that’s customized to their needs.

For instance, development teams working in enterprises that often deal with classified or highly sensitive data or personal information, such as banks, hospitals or federal agencies, often have to follow several compliance regulations. Having the ability to use on-premise deployment solutions can help support background tasks in a secure manner.

At the same time, enterprises that must stay in compliance with enterprise and federal rules while supporting a distributed team may find a hybrid deployment approach more feasible. This option is ideal for handling secure background job processing for tasks such as scheduling and authentication, while letting development teams run their containers on-premise.

Final Thoughts

From flexible deployment options to versatile background task processing capabilities, containers offer much for development teams to leverage. While containers provide several benefits, it’s important to also use reputable platforms and professional teams that have the experience and expertise in managing and implementing containers to support container-based background jobs.  By leveraging containers and the platforms that support them, enterprises can better serve their clients for an enhanced customer experience.

Iron’s East/West Coast Drink-up

A bunch of Iron employees will be out and about in April, looking to meet up with customers to chat about our upcoming platform changes.  Beer (or wine, or cocktails, or <insert drink here>) will be on us! We’re sticking to the east and west coasts for now, and our current plans are:

April 5th,    San Francisco
April 14th,  Boston
April 15th,  NYC
April 17th,  Los Angeles

If you’re interested in attending, fill out the form below.  We’ll be in touch with the details once we have them confirmed on our end.  Cheers!

A Serverless Message Queue Without the Glue

More and more technologies get involved as systems grow, and it’s sometimes hard to keep track of what’s doing what. Caching layers, message queues, serverless functions, tracing frameworks… the list goes on.  Once you start sprinkling in public cloud services, you may find yourself developing your way into vendor lock-in.  All of a sudden, you’re dealing with one cloud, tons of services, and having to glue everything together to make the services talk to each other.  One of Iron’s primary goals is to make life easier for developers, and IronMQ’s little-known “Push Queue” feature can help prevent you from having to write the glue.

What are Push Queues?

IronMQ has a built-in feature called Push Queues which, when enabled, fires off an event any time a message gets pushed onto a queue. This comes in extremely handy when you immediately want to “do something” (or many things) with that message. With traditional message queues, you’d usually need to write another process that polls your queues for messages at a given interval. MQ’s push queues can instead fire off events to different types of endpoints, each extremely helpful in its own way.

What types of events can be triggered?

HTTP
When a message gets put onto your push queue, IronMQ can make a POST request (with the message in the request body) to any URL of your choice. This is extremely handy when you want to notify other systems that some sort of event just happened or kick off another process.

MQ
Inception! You can have the delivery of a message populate another IronMQ queue. This is helpful if you want to tie multiple queues together or create a dead-letter queue, for example.

Worker
MQ can connect directly to IronWorker and pass its message as the payload to one of your jobs. How cool is that!?  To exemplify how cool that actually is, we’ll run through a real-life scenario.

MQ & Worker Example

Let’s say you have a time-sensitive nightly job that processes uploaded CSV files.  It needs to process all of the files uploaded during that day and finish within a set amount of time.   As your system grows and there are more CSV files to process, your nightly process starts running behind schedule.

You realize that a lot of the time spent in your nightly worker goes to formatting the CSV files into the correct format.  It would make sense to split this process into two distinct stages: formatting and processing.  When a CSV file is received, you could send a message to your push queue, which in turn will kick off a “formatting” worker job to pre-process the CSV file into the correct format (a sketch follows below). Your nightly “processing” worker job will then be able to fly through the CSV files because it no longer needs to fix any formatting issues.
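
As a hedged illustration, the formatting stage might look something like this. The payload shape, the PAYLOAD_FILE convention, and the normalization rules are all assumptions for the example; check the IronWorker docs for the specifics of your setup.

```python
import csv
import json
import os

def normalize_csv(src_path: str, dst_path: str) -> None:
    """Rewrite a CSV into the canonical format the nightly job expects."""
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            # Illustrative cleanup: strip stray whitespace from every cell.
            writer.writerow(cell.strip() for cell in row)

if __name__ == "__main__":
    # Worker jobs receive their payload as a JSON file whose location is
    # passed in by the platform (assumed here via PAYLOAD_FILE).
    with open(os.environ["PAYLOAD_FILE"]) as f:
        payload = json.load(f)
    normalize_csv(payload["uploaded_path"], payload["formatted_path"])
```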

The beauty here is that you can continue to add more push events to the queue.  When a file is uploaded, maybe you also need to ping another worker that handles OCR, or post an update to an external HTTP endpoint letting it know exactly “what” file was uploaded.  Without a push queue, you’d be adding a lot of custom code to handle these requests, retries, errors, etc.  IronMQ’s push queues take care of all of this for you.

How can I configure a Push Queue?

Retries
You can configure your queue to allow for a custom number of retries, a custom delay between retries, and even another queue to store failed push attempts. For example, using an HTTP event, MQ will retry pushes (3 times by default) every time it receives a non-200 response code.

Timeouts
If your event never receives a response after a certain period of time (10 seconds by default), it will chalk that up as a failed attempt and retry.

Unicast or Multicast?
You can even fire off multiple events from one queue. If you need to trigger one HTTP endpoint and also fire off a Worker job, that’s not a problem.

How do I create a push queue?

Creating one is straightforward.  Here’s a sketch of the API request that creates a multicast Push Queue with an HTTP endpoint as well as a Worker endpoint.
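
This is a hedged sketch against IronMQ’s v3 REST API using Python’s requests library. The host, project ID, token, subscriber URL schemes, and payload shape are assumptions to verify against the IronMQ documentation.

```python
import requests

PROJECT_ID = "your_project_id"                # placeholder
TOKEN = "your_token"                          # placeholder
HOST = "https://mq-aws-us-east-1-1.iron.io"   # example region host

queue_config = {
    "queue": {
        "type": "multicast",
        "push": {
            "subscribers": [
                # HTTP endpoint subscriber.
                {"name": "http-endpoint", "url": "https://example.com/webhook"},
                # Worker subscriber (assumed ironworker:// URL scheme).
                {"name": "worker", "url": "ironworker:///my_worker"},
            ],
            "retries": 3,
            "retries_delay": 60,
        },
    }
}

resp = requests.put(
    f"{HOST}/3/projects/{PROJECT_ID}/queues/my_push_queue",
    json=queue_config,
    headers={"Authorization": f"OAuth {TOKEN}"},
)
resp.raise_for_status()
print(resp.json())
```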

IronMQ also has client libraries available in most languages, so you can create and configure push queues programmatically without hand-building the HTTP requests.

Conclusion

With one IronMQ Push Queue, you can make a lot happen. If you were to try to replicate a multicast Push Queue with a traditional message queue, you’d end up writing a lot of custom code to glue everything together. You’d also have to deal with scaling your infrastructure as your message queue needs grew. With IronMQ, you can spend your time and money on your application’s business logic, and less time on glue and infrastructure. For more detailed information about Push Queues, visit the IronMQ documentation.

If you’re interested in knowing more about IronMQ, or want to chat about how we may be able to help, call us anytime at 888-501-4766 or email at support@iron.io.



Docker, Inc isn’t Dead

Chris Short recently wrote up a piece entitled Docker, Inc is Dead, with a prediction that the company would no longer exist sometime in 2018.  It’s well written and he does a good job of running through some of Docker’s history in recent years.  Although I agree with some of his sentiments, I don’t think Docker, Inc will exit the stage anytime soon.  Here are some reasons I think Docker, Inc will live a healthy life in 2018.

Docker is Good Software

This was the first point in Chris’ piece, and he’s right.  Docker definitely helped widen the spotlight on *n?x kernels.  Discussions around namespaces, cgroups, lxc, zones, jails, etc. lit up across communities in different disciplines.  Docker’s simple interface lowered the barrier of entry for non-administrators, and the developer community immediately added it to their workflows.  Docker released EE/UCP, and larger organizations jumped on board.  It “is” good software for developers, SMBs, and large organizations, and Docker, Inc isn’t slowing down development efforts.

Docker Has Friends

“I’m really excited to welcome Solomon and Docker to the Kubernetes community.”  Brendan Burns (of Microsoft, Lead Engineer of Kubernetes) definitely made me do a double take when he said that on stage at DockerCon EU a few months ago.  Many people I spoke to at the conference referenced that statement and saw it as a big blow to Docker.  “Who’s joining whose community?”  The thing is, the real purpose of Brendan’s talk was the collaboration between companies, and the effort to make our lives as developers and administrators better.  The whole “it takes a village to raise a child” saying.  This village is composed of some of the brightest engineers from many of the world’s largest companies, and they’re all striving to make things better.  Docker and Kubernetes worked together, and the Kubernetes integration into UCP made perfect sense.

Docker Has Business

They don’t lack coherent leadership.  They’ve raised a ton of money, their marketing is great, and they’re acting like what they are: a rapidly growing company moving into the enterprise market.  Were some of their keynotes awkward at DockerCon EU this year?  Yes.  Were there fantastic sessions from customers who shared real-life Docker success stories?  Yes.  Have they made some mistakes here and there?  Yes.  Have they moved past those and grown?  Yes.  If you’ve been around the block and watched small companies rapidly grow into behemoths, this is all typical.  Growing isn’t easy.  Their “Modernizing Enterprise Applications” mantra is perfect.  There are countless technical budgets from Fortune 10,000 companies that Docker, Inc will capitalize on.  The best part is that they’ll actually be making a positive difference.  They are not snake-oil salesmen.  These companies will probably see real ROI in their engagements.

Conclusion

Docker, Inc isn’t going to be acquired (yet) or close their doors.  There is a lot going on at Docker, Inc right now, but these aren’t the signs of a company that is getting ready for a sale.

It’s a company that’s based on OSS with a lot of opportunity in the market.  While one of the products at Iron is Docker-based, we use a wide variety of software from many companies with roots in OSS.  We’re happy to pay for a higher level of support and features for OSS software backed by a business.  For other projects, we often donate through Open Collective to help maintainers and small development teams.  Docker’s donation of containerd was a great move and I think it is a project that fits perfectly into CNCF’s charter.

While Docker, Inc is moving upstream, they haven’t at all abandoned their real users: developers.   We use Docker daily, contribute back when we can, and are optimistic about its trajectory as a business and a product.  Docker, Inc has a lot of room to grow, and in 2018, it will.



Webhooks the Right Way™

If you’re a developer, dealing with webhooks is a part of your life. Nowadays almost every subscription service allows for these user-defined callbacks.  For example, when a Lead is added to Salesforce, you may want a task that runs in the background to generate more information about the company they work for.  Maybe you want to receive a request from Stripe when a customer’s payment fails so you can send them dunning emails?  You get the drift.

The most common way to deal with webhooks is adding an endpoint to your application that handles the request and response. There are some benefits to this: no external dependencies, for example, since all your code is in one place.  However, the cons usually outweigh the pros.

Common problems handling Webhooks

Application downtime

If your application is down, or in maintenance mode, you won’t be able to accept webhooks.  Most external services have retries built in, but there are many that don’t.  You’d need to be OK with missing data sent from these services.

Request queuing

What happens if you have a ton of webhooks from a bunch of different services all coming in at once?  Your application, reverse proxy, etc. will probably end up queuing the requests along with other customer requests.  If your application is customer facing, this could result in a degraded user experience or even full-blown timeouts.

Thundering herds and Cache stampedes

Even if you’re able to process all of the webhooks coming in at once, your system is going to feel the effects one way or another.  This could result in unwanted resource spikes (CPU/MEM/IO).  Unless you’re set up to autoscale, bad things could happen to your infrastructure.

 


 

At Iron, many of our customers get around these issues by using IronMQ and IronWorker in conjunction.  Since IronMQ is HTTP-based, highly available, and built to handle thousands of requests a second, it’s a perfect candidate for receiving webhooks.  One of the great things about IronMQ is that it supports push queues.  When a message is received, it can push its payload to an external HTTP endpoint, to another queue, or even to IronWorker.

IronWorker is a container-based, enterprise-ready background job processing system that can autoscale up and down transparently.  We have customers processing hundreds of jobs concurrently one minute, while the next minute the number is up in the hundreds of thousands.

The beauty of the IronMQ and IronWorker integration is that IronMQ can push its payloads directly to IronWorker.  Your work is then picked up and worked on immediately (or at a specific date and time if required).  You can have a suite of different workers firing off for different types of webhooks, handling the process transparently (see the sketch below).  This is great for a number of reasons.
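
For a concrete picture, here is a hedged sketch of the receiving side: the service’s webhook URL simply posts the payload onto a push queue via IronMQ’s v3 REST API. The host, project ID, token, and queue name are placeholders to verify against the IronMQ docs.

```python
import requests

PROJECT_ID = "your_project_id"                # placeholder
TOKEN = "your_token"                          # placeholder
HOST = "https://mq-aws-us-east-1-1.iron.io"   # example region host

def enqueue_webhook(raw_body: str) -> None:
    """Drop an incoming webhook body onto a (push) queue for processing."""
    resp = requests.post(
        f"{HOST}/3/projects/{PROJECT_ID}/queues/webhooks/messages",
        json={"messages": [{"body": raw_body}]},
        headers={"Authorization": f"OAuth {TOKEN}"},
    )
    resp.raise_for_status()
```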

Handling Webhooks the Right Way

Happy application

Now your application is never involved in the process of handling webhooks.  This all happens outside of your normal application lifecycle.  Your application machines will never have to deal with the excessive load that could cause infrastructure issues.

Happy users

All the work you need to do to process webhooks now happens in the background and on hardware that your users aren’t interacting with.  This ensures that processing your webhooks won’t affect your user experience.

Using IronMQ and IronWorker to handle incoming Webhooks

This is a pattern that our customers are pretty happy with, and we’re constantly improving both IronMQ and IronWorker to handle their additional needs.  For example, being able to programmatically validate external API signatures and the ability to respond with custom response codes are on our list.  That said, similar to microservices, this level of service abstraction can also introduce its own complexities; dependency and access management come to mind.  We’ve had long conversations about these topics with our customers, and in almost all cases, the pros outweigh the cons.  This approach has been a success and we’re seeing it implemented more and more.

If you have any questions or want to get started with the pattern above, contact us and we’ll be happy to help.



AWS EC2: P2 vs P3 instances

Amazon announced its latest generation of general-purpose GPU instances (P3) the other day, almost exactly a year after the launch of its first general-purpose GPU offering (P2).  While the CPUs on both suites of instance types are similar (both Intel Broadwell Xeons), the GPUs definitely improved.  Note that the P2/P3 instance types are well suited to tasks with heavy computation needs (Machine Learning, Computational Finance, etc.), and that AWS provides G3 and EG1 instances specifically for graphics-intensive applications.

The P2s sport NVIDIA GK210 GPUs, whereas the P3s run NVIDIA Tesla V100s.  Without digging too deep into the GPU internals, the Tesla V100 is a huge leap forward in design that specifically targets the needs of those running computationally intensive machine learning operations.  Tesla V100s tout “Tensor Cores”, which increase the performance of floating point computations, and the larger of the P3 instance types support NVIDIA’s “NVLink”, which allows multiple GPUs to share intermediate results at high speeds.

While the P3s are more expensive than the P2s, they fill in the large gaps in on-demand pricing that existed when just the P2s were available.  That said, if you’re running a ton of heavy GPU computation through EC2, you might find the P3s that offer NVLink a better fit, and picking them up off the spot market might make a lot of sense (they’re quite expensive).  Here’s what the pricing landscape looks like now, with the older generation in yellow and the latest in green:

 

When the P2s first came out, Iraklis Mathiopoulos had a great blog post where he ran Hashcat (a popular password “recovery” tool) with GPU support against the largest instance size available… the p2.16xlarge.  Just a few days ago he repeated the test against the largest of the P3 instances, the p3.16xlarge.  If you’ve ever played around with Hashcat on your local machine, you’ll quickly realize how insanely fast one p3.16xlarge can compute.  Iraklis’ test on the p2.16xlarge cranked out 12,275.6 MH/s (million hashes per second) against SHA-256, while the p3.16xlarge hit 59,971.8 MH/s.  The author’s late-2013 MBP clocks in at a whopping 121.7 MH/s.  The p3.16xlarge instance type is about to get some heavy usage by AWS customers who are concerned with results rather than price.

Of course, the test above is elementary and doesn’t exactly show the benefits of the NVIDIA Tesla V100 vs the NVIDIA GK210 in regard to ML/AI and neural network operations.  We’re currently testing different GPUs in our Worker product and hope to have some benchmarks we can share soon based on real customer workloads in the ML/AI space.  The performance metrics and graphs that Worker produces will give a great visual on model building/teaching, and we’re excited to share our recent work with our current ML/AI customers.

While most of our ML/AI customers are on-premise, we’ll soon be looking to demonstrate Iron’s integration with P2 and P3 instances for GPU compute in public forums. In the meantime, if you are considering on-premise or hybrid solutions for ML/AI tasks, or looking to integrate the power of GPU compute, reach out and we’d be happy to help find an optimal strategy based on your needs.