With the rapid growth of cloud computing, the "as a service" business model is steadily coming to dominate enterprise IT. XaaS (also known as "anything as a service") is projected to grow at a staggering 38 percent annual rate between 2016 and 2020. The reasons for the rise of XaaS solutions are simple: in general, they are more flexible, more efficient, more easily accessible, and more cost-effective.
Serverless abstraction and containers are two XaaS cloud computing paradigms that have both become highly popular in recent years. Many articles pit the two concepts against each other, suggesting that businesses must choose one or the other.
However, the choice between serverless abstraction and containers is a false dilemma. Serverless and containers can be used together, each enhancing the other and compensating for its shortcomings. In this article, we'll discuss everything you need to know about serverless abstraction with containers: what it is, what the benefits are, and how you can get started using them within your organization.
What is Serverless Abstraction?
“Serverless abstraction” is the notion in cloud computing that software can be totally separated from the hardware servers that it runs on. Users can execute an application without having to provision and manage the server where it resides.
There are two main types of serverless abstraction:
BaaS (backend as a service): The cloud provider handles the application backend, which concerns “behind the scenes” technical issues such as database management, user authentication, and push notifications for mobile applications.
FaaS (function as a service): The cloud provider executes the application’s code in response to a certain event, request, or trigger. The server is powered up when the application needs to run, and powered down once it completes.
The FaaS serverless paradigm is akin to the supply of a utility such as electricity in a modern home. When you turn on a light or a kitchen appliance, your consumption of electricity increases, and it stops automatically when you flip the switch off again. For most use cases the supply is practically unlimited, and you pay only for the resources you actually consume.
FaaS is a popular choice for several different use cases. If you have an application that serves only static content, for example, FaaS will ensure that the appropriate resources and infrastructure are provisioned no matter how much load your application is under. The ETL (extract, transform, load) data management process is another excellent use case for FaaS: instead of running 24/7/365, your ETL jobs can spin up when you need to move information into your data warehouse, so that you pay only for the run time you actually use.
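To make this concrete, here's a minimal sketch of an ETL-style job written in FaaS style. The handler signature and helper functions are illustrative stand-ins, not any particular provider's API:

```python
# A hedged sketch of an event-driven ETL job in FaaS style. The (event, context)
# signature mirrors common FaaS platforms but is generic here; the extract,
# transform, and load helpers are toy stand-ins, not a real pipeline.

def extract(source):
    # Stand-in for reading rows from the source named in the event.
    return [{"id": 1, "amount": "42"}]

def transform(rows):
    # Stand-in transformation: cast the amount field to an integer.
    return [{**row, "amount": int(row["amount"])} for row in rows]

def load(rows, target):
    # Stand-in for writing the transformed rows into the data warehouse.
    print(f"loaded {len(rows)} rows into {target}")

def handle(event, context=None):
    # The platform invokes this once per trigger (say, "new export file
    # landed"), then tears the instance down: you pay only for this run.
    rows = transform(extract(event["source"]))
    load(rows, event["target"])

if __name__ == "__main__":
    handle({"source": "s3://example-bucket/export.csv", "target": "warehouse.sales"})
```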
What are Containers?
Containers are software "packages" that combine an application's source code with the libraries, frameworks, dependencies, and settings required to run it successfully. This ensures that a software application will always run and behave predictably, no matter which environment it executes in.
Products such as Docker and Kubernetes have popularized the use of containers among companies of all sizes and industries. In one 2018 survey, 47 percent of IT leaders said they planned to use containers in a production environment, while another 12 percent already did.
Serverless Abstraction with Containers
The goal of both serverless abstraction and containers is to simplify the development process by eliminating much of the tedious drudgery and technical overhead. Indeed, nothing prevents developers from using both containers and serverless abstraction in the same project.
Developers can make use of a hybrid architecture in which both the serverless and container paradigms complement each other, making up for the other’s shortcomings. For example, developers might build a large, complex application that mainly uses containers, but that transfers responsibility for some of the backend tasks to a serverless cloud computing platform.
In light of this natural relationship, it’s no surprise that there are a growing number of cloud offerings that seek to unite serverless and containers. For example, Google Cloud Run is a cloud computing platform from Google that “brings serverless to containers.”
Google Cloud Run is a fully managed platform that runs and automatically scales stateless containers in the cloud. Each container can be easily invoked with an HTTP request, which means that Google Cloud Run is also a FaaS solution, handling all the common tasks of infrastructure management.
Because Google Cloud Run is still in beta and under active development, it might not be the best choice for organizations that are looking for maximum stability and security. In this case, companies might turn to Google Cloud Run alternatives such as Iron.io.
Iron.io is a serverless platform offering a multi-cloud, Docker-based job processing service. The flagship Iron.io product, IronWorker, is a task queue solution for running containers at scale. No matter what your IT setup looks like, IronWorker can work with it: from on-premises infrastructure to shared cloud infrastructure to a public cloud such as AWS or Microsoft Azure.
Although they're often thought of as opposing alternatives, the launch of Google Cloud Run and alternatives such as Iron.io proves that serverless abstraction and containers can actually work together in harmony. Interested in learning more about which serverless/containers solution is right for your business needs and objectives? Speak with a knowledgeable, experienced technology partner like Iron.io who can guide you down the right path.
Love them or hate them, containers have become part of the infrastructure running just about everything. From Kubernetes to Docker, almost everyone has their own version of containers; the most commonly used is still Docker. IronWorker was among the very first to combine serverless management with containers. In this article, we'll give a high-level overview of what a Docker image is and how IronWorker uses Docker images.
So, What is a Docker image?
To start, we need an understanding of the Docker nomenclature and environment. There is still no clear consensus on terminology when it comes to containers: what Docker calls one thing, Google calls another, and so on. We will focus only on Docker here. (For more on Docker vs. Kubernetes, read here.)
Docker has three main components that we should know about in relation to IronWorker:
1) Dockerfile
Let's keep it simple. A Dockerfile is a configuration file that tells Docker what to install, update, and configure. Basically, the Dockerfile specifies the build whose result is the Docker image.
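For example, a minimal Dockerfile might look like the following. This is a sketch, not from any particular project; the base image, file names, and commands are illustrative:

```dockerfile
# Start from a small official base image (an illustrative choice).
FROM python:3-slim

# Copy the application and install its dependencies.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Tell Docker what a container built from this image should run.
CMD ["python", "app.py"]
```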
2) Docker Image
A Docker image is the result of executing the set of steps outlined in the Dockerfile. It is helpful to think of images as templates created from Dockerfiles. Images are arranged in layers automatically, with each layer depending on, and abstracting over, the layer below it.
By abstracting away the actual "instructions" (remember the Dockerfile?), the image provides an environment that can run with its resources isolated. Whereas a virtual machine carries a full guest operating system, a container shares the host's kernel and isolates only what it needs; this makes for a lightweight and highly scalable system. IronWorker takes these images and begins the process of creating and orchestrating complete containers. What exactly is the difference between a Docker image and a Docker container? Let's see.
3) Docker Containers
Finally, we come to the containers. To simplify, we can say that when a Docker image is instantiated, it becomes a container. By creating an instance that draws on system resources like memory, the container begins to carry out whatever processes it holds. While separate image layers may serve different purposes, a Docker container is formed to carry out a single, specific task. Think of a bee versus a beehive: individual workers carry out asynchronous tasks in service of a single goal. In short, containers are packages that hold all of the required dependencies to run an application.
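To see the image/container distinction in practice, you build an image once and can then instantiate any number of containers from it (the image and container names here are illustrative):

```shell
# Build an image from the Dockerfile in the current directory.
docker build -t myapp:latest .

# Instantiate two independent containers from the same image.
docker run -d --name worker-1 myapp:latest
docker run -d --name worker-2 myapp:latest
```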
Once the container has been run, the Docker image is inert and inactive: it has carried out its purpose and now serves only as a meta reference.
IronWorker and Docker
So, you have your containers configured and everything is ready to go. What next? While Docker containers can function on their own, tasks like scaling workloads are much faster, more reliable, and easier with an orchestrator. IronWorker is one such container orchestrator, with some unique properties.
An orchestrator adds another layer of abstraction to implementing and running containers, which in recent years has become known as "serverless." While there is no such thing as truly serverless computing, the term simply means there is no server management involved. By this point in the configuration, we have likely all but forgotten about our original Docker image.
Not having to manage a server that reacts to spikes in traffic or other processing needs greatly simplifies a developer's job. Tasks and other processes are scaled automatically, and detailed analytics are available at the same time. Because the containers are managed by IronWorker, whether jobs are short-lived or take days, they are completed with minimal developer input after the initial setup.
What about migrating to other clouds or on-premise?
Traditionally, containers have been cloud-based. As new options develop beyond just Amazon Web Services, the need for flexible deployment tools increases. DevOps needs change frequently, sometimes daily. A key benefit of IronWorker is how easily you can export your settings (as Docker images) and continue on, either redundantly or in new iterations, across varying environments; this includes deploying fully on-premises. This freedom from vendor lock-in, and readiness for future needs, is what separates IronWorker from the rest.
Google Cloud Run is a new cloud computing platform that’s hot off the presses from Google, first announced at the company’s Google Cloud Next conference in April 2019. Google Cloud Run has generated a lot of excitement (and a lot of questions) among tech journalists and users of the public cloud alike, even though it’s still in beta.
We will discuss the ins and outs of Google Cloud Run in this all-in-one guide, including why it appeals to many Google Cloud Platform customers, what features Google Cloud Run offers, and how it compares with the alternatives.
Often just called “serverless,” serverless computing is a cloud computing paradigm that frees the user from the responsibility of purchasing or renting servers to run their applications on.
(Actually, the term “serverless” is a bit of a misnomer: The code still runs on a server, just not one that the user has to worry about.)
Cloud computing has soared in popularity over the past decade, thanks in large part to its increased convenience and lower maintenance requirements. Traditionally, however, users of cloud services have still needed to set up a server, scale its resources when necessary, and shut it down when they're done. This has all changed with the arrival of serverless.
The phrase “serverless computing” is applied to two different types of cloud computing models:
BaaS (backend as a service) outsources the application backend to the cloud provider. The backend is the “behind the scenes” part of the software for purposes such as database management, user authentication, cloud storage, and push notifications for mobile apps.
FaaS (function as a service) still requires developers to write code for the backend. The difference is that this code is executed only in response to certain events or requests. This enables you to decompose a monolithic server into a set of independent functionalities, making availability and scalability much easier.
You can think of FaaS serverless computing as like a water faucet in your home. When you want to take a bath or wash the dishes, you simply turn the handle to make it start flowing. The water is virtually infinite, and you stop when you have as much as you need, only paying for the resources that you’ve used.
Cloud computing without FaaS, by contrast, is like having a water well in your backyard. You need to take the time to dig the well and build the structure, and you only have a finite amount of water at your disposal. In the event that you run out, you’ll need to dig a deeper well (just like you need to scale the server that your application runs on).
Regardless of whether you use BaaS or FaaS, serverless offerings allow you to write code without having to worry about how to manage or scale the underlying infrastructure. For this reason, serverless has come into vogue recently: in a 2018 study, 46 percent of IT decision-makers reported that they were using or evaluating serverless.
What are containers?
Now that we’ve defined serverless computing, we also need to define the concept of a container. (Feel free to skip to the next section if you’re very comfortable with your knowledge of containers.)
In the world of computing, a container is an application “package” that bundles up the software’s source code together with its settings and dependencies (libraries, frameworks, etc.). The “recipe” for building a container is known as the image. An image is a static file that is used to produce a container and execute the code within it.
One of the primary purposes of containers is to provide a familiar IT environment for the application to run in when the software is moved to a different system or virtual machine (VM).
Containers are part of a broader concept known as virtualization, which seeks to create a virtual resource (e.g., a server or desktop computer) that is completely separate from the underlying hardware.
Unlike servers or virtual machines, containers do not include the underlying operating system. This makes them more lightweight, portable, and easy to use.
When you say the word "container," most enterprise IT staff will immediately think of one or both of Docker and Kubernetes, the two most popular container solutions.
Docker is a runtime environment that seeks to automate the deployment of containers.
Kubernetes is a “container orchestration system” for Docker and other container tools, which means that it manages concerns such as deployment, scaling, and networking for applications running in containers.
Like serverless, containers have dramatically risen in popularity among users of cloud computing in just the past few years. A 2018 survey found that 47 percent of IT leaders were planning to deploy containers in a production environment, while 12 percent already had. Containers enjoy numerous benefits: platform independence, speed of deployment, resource efficiency, and more.
Containers vs. serverless: A false dilemma
Given the massive success stories of containers and serverless computing, it’s hardly a surprise that Google would look to combine them. The two technologies were often seen as competing alternatives before the arrival of Google Cloud Run.
Both serverless and containers are intended to make the development process less complex. They do this by automating much of the busy work and overhead. But they go about it in different ways. Serverless computing makes it easier to iterate and release new application versions, while containers ensure that the application will run in a single standardized IT environment.
Yet nothing prevents cloud computing users from combining both of these concepts within a single application. For example, an application could use a hybrid architecture in which containers pick up the slack if a certain function requires more memory than the serverless vendor has provisioned for it.
As another example, you could build a large, complex application that mainly has a container-based architecture, but that hands over responsibility for some backend tasks (like data transfers and backups) to serverless functions.
Rather than continuing to enforce this false dichotomy, Google realized that serverless and containers could complement one another, each compensating for the other one’s deficiencies. There’s no need for users to choose between the portability of containers and the scalability of serverless computing.
Enter Google Cloud Run…
What is Google Cloud Run?
In its own words, Google Cloud Run “brings serverless to containers.” Google Cloud Run is a fully managed platform that is capable of running Docker container images as a stateless HTTP service.
Each container can be invoked with an HTTP request. All the tasks of infrastructure management (provisioning, scaling up and down, and configuration) are abstracted away from the user, as typically occurs with serverless computing.
Google Cloud Run is built on the Knative platform, which is an open API and runtime environment for building, deploying, and managing serverless workloads. Knative is based on Kubernetes, extending the platform in order to facilitate its use with serverless computing.
In the next section, we’ll have more technical details about the features and requirements of Google Cloud Run.
Google Cloud Run Features and Requirements
Google cites the selling points below as the most appealing features of Google Cloud Run:
Easy autoscaling: Depending on light or heavy traffic, Google Cloud Run can automatically scale your application up or down.
Fully managed: As a serverless offering, Google Cloud Run handles all the annoying and frustrating parts of managing your IT infrastructure.
Completely flexible: Whether you prefer to code in Python, PHP, Pascal, or Perl, Google Cloud Run is capable of working with any programming language and libraries (thanks to its use of containers).
Simple pricing: You pay only when your functions are running. The clock starts when the function spins up and stops the moment it finishes executing.
There are actually two options when using Google Cloud Run: a fully managed environment or a Google Kubernetes Engine (GKE) cluster. You can switch between the two choices easily, without having to reimplement your service.
In most cases, it’s best to stick with Google Cloud Run itself, and then move to Cloud Run on GKE if you need certain GKE-specific features, such as custom networking or GPUs. However, note that when you’re using Cloud Run on GKE, the autoscaling is limited by the capacity of your GKE cluster.
Google Cloud Run requirements
Google Cloud Run is still in beta (at the time of this writing). This means that things may change between now and the final version of the product. However, Google has already released a container runtime contract describing the behavior that your application must adhere to in order to use Google Cloud Run.
Some of the most noteworthy application requirements for Google Cloud Run are:
The container must be compiled for Linux 64-bit, but it can use any programming language or base image of your choice.
The container must listen for HTTP requests on the IP address 0.0.0.0, on the port defined by the PORT environment variable (almost always 8080).
The container instance must start an HTTP server within 4 minutes of receiving the HTTP request.
The container’s file system is an in-memory, writable file system. Any data written to the file system will not persist after the container has stopped.
With Google Cloud Run, the container only has access to CPU resources if it is processing a request. Outside of the scope of a request, the container will not have any CPU available.
In addition, the container must be stateless. This means that the container cannot rely on the state of a service between different HTTP requests, because it may be started and stopped at any time.
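Putting the contract together, here is a minimal sketch of a service that satisfies it, using only the Python standard library (the module layout and response body are our own illustration, not Google's sample code):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    # Stateless: nothing is kept between requests, so the instance can be
    # started and stopped at any time, as the contract requires.
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from Cloud Run\n")

if __name__ == "__main__":
    # Cloud Run supplies the port via the PORT environment variable
    # (almost always 8080); bind to 0.0.0.0 so external requests arrive.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```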
Each container instance in Google Cloud Run is allocated 1 vCPU (virtual CPU), though the instance may run on multiple cores at the same time.
Google Cloud Run uses a "freemium" pricing model: free monthly quotas are available, but you'll need to pay once you go over the limit. These types of plans frequently catch users off guard, and they end up paying much more than expected: according to Forrester, a staggering 58 percent of companies surveyed said their costs exceeded their estimates.
The good news for Google Cloud Run users is that you’re charged only for the resources you use (rounded up to the nearest 0.1 second). This is typical of many public cloud offerings.
The free monthly quotas for Google Cloud Run are as follows:
CPU: The first 180,000 vCPU-seconds
Memory: The first 360,000 GB-seconds
Requests: The first 2 million requests
Networking: The first 1 GB egress traffic (platform-wide)
Once you bypass these limits, however, you’ll need to pay for your usage. The costs for the paid tier of Google Cloud Run are:
CPU: $0.000024 per vCPU-second
Memory: $0.0000025 per GB-second
Requests: $0.40 per 1 million requests
Networking: Free during the Google Cloud Run beta, with Google Compute Engine networking prices taking effect once the beta is over.
It's worth noting that you are billed separately for each resource; for example, the fact that you've exceeded your memory quota does not mean that you need to pay for your CPU and networking usage as well.
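To make the arithmetic concrete, here is a back-of-the-envelope estimate using the beta prices above. The workload numbers are invented for illustration:

```python
# Hypothetical month: 5M requests, each ~200 ms on 1 vCPU with 256 MB of memory.
requests = 5_000_000
vcpu_seconds = requests * 0.2        # 1 vCPU for 0.2 s per request
gb_seconds = vcpu_seconds * 0.25     # 256 MB = 0.25 GB

# Subtract the free monthly quotas; each resource is billed separately.
billable_cpu = max(0, vcpu_seconds - 180_000)   # free: 180,000 vCPU-seconds
billable_mem = max(0, gb_seconds - 360_000)     # free: 360,000 GB-seconds
billable_req = max(0, requests - 2_000_000)     # free: 2 million requests

total = (billable_cpu * 0.000024                # $ per vCPU-second
         + billable_mem * 0.0000025             # $ per GB-second
         + billable_req / 1_000_000 * 0.40)     # $ per million requests
print(f"${total:.2f}")  # ~$20.88: memory stays within its free quota
```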
In addition, these prices may not be definitive. Like the features of Google Cloud Run, prices for Google Cloud are subject to change once the platform leaves beta status.
Finally, Cloud Run on GKE uses a separate pricing model that will be announced before the service reaches general availability.
Google Cloud Run Review: Pros and Cons
Because it's a brand new product that's still in beta, reputable Google Cloud Run reviews are still hard to find.
Reaction to Google’s announcement has been fairly positive, acknowledging the benefits of combining serverless computing with a container-based architecture. Some users believe that the reasonable prices will be enough for them to consider switching from similar services such as AWS Fargate.
Other users are more critical, however, especially given that Google Cloud Run is currently only in beta. Some are worried about making the switch, given Google’s track record of terminating services such as Google Reader, as well as their decision to alter prices for the Google Maps API, which effectively shut down many websites that could not afford the higher fees.
Given that Google Cloud Run is in beta, the jury is still out on how well it will perform in practice. Google does not provide any uptime guarantees for cloud offerings before they reach general availability.
The disadvantages of Google Cloud Run will likely overlap with the disadvantages of Google Cloud Platform as a whole. These include the lack of regions when compared with competitors such as Amazon and Microsoft. In addition, as a later entrant to the public cloud market, Google can sometimes feel “rough around the edges,” and new features and improvements can take their time to be released.
Google Cloud Run Alternatives
Since this is a comprehensive review of Google Cloud Run, we would be remiss if we didn’t mention some of the available alternatives to the Google Cloud Run service.
In fact, Google Cloud Run shares some of its core infrastructure with two of Google’s other serverless offerings: Google Cloud Functions and Google App Engine.
Google Cloud Functions is an “event-driven, serverless compute platform” that uses the FaaS model. Functions are triggered to execute by a specified external event from your cloud infrastructure and services. As with other serverless computing solutions, Google Cloud Functions removes the need to provision servers or scale resources up and down.
Google App Engine enables developers to "build highly scalable applications on a fully managed serverless platform." The service provides access to Google's hosting and tier 1 internet service. However, one limitation of Google App Engine is that the code must be written in Java or Python and use Google's NoSQL database Bigtable.
Looking beyond the Google ecosystem, there are other strong options for developers who want to leverage both serverless and containers in their applications.
The most tested Cloud Run alternative: Iron.io
Iron.io is a serverless platform that offers a multi-cloud, Docker-based job processing service. As one of the early adopters of containers, we have been a major proponent of the benefits of both technologies.
The centerpiece of Iron.io's product line, IronWorker, is a scalable task queue platform for running containers at scale. IronWorker has a variety of deployment options: anything from using shared infrastructure to running the platform in your in-house IT environment is possible. Jobs can be scheduled to run at a certain date or time, or processed on demand in response to certain events.
In addition to IronWorker, we also provide IronFunctions, an open-source serverless microservices platform that uses the FaaS model. Unlike services such as AWS Lambda, IronFunctions is a cloud-agnostic offering that can work with any public, private, or hybrid cloud environment; indeed, Iron.io allows AWS Lambda users to easily export their functions into IronFunctions, helping to avoid vendor lock-in. IronFunctions uses Docker containers as the basic unit of work, which means you can work with any programming language or library that fits your needs.
Google Cloud Run represents a major development for many customers of Google Cloud Platform who want to use both serverless and container technologies in their applications. However, Google Cloud Run is only the latest entrant into this space, and may not necessarily be the best choice for your company’s needs and objectives.
If you want to determine which serverless + container solution is right for you, speak with a skilled, knowledgeable technology partner like Iron.io who can understand your individual situation. Whether it’s our own IronWorker solution, Google Cloud Run, or something else entirely, we’ll help you get started on the right path for your business.
Iron was one of the pioneers of Serverless, so we’re excited to announce that we’ll also be one of the first companies to offer the next generation of compute: It’s called Computerless™.
Unlike Serverless, this technology removes the physical machine completely. Our offering piggy-backs off the recent developments in fiber optic technology developed at the University of Oxford. If you haven’t heard about this breakthrough, we’ll do our best to explain:
Researchers have found a way to control how light travels at the molecular level, thus being in complete control of the resulting attenuation. Molecular gates can then be created, and state stored in finite wavelengths. It’s somewhat equivalent to qubits in quantum computing, but in the case of optical fiber, it’s a physical reality.
The end result of this technological release allows for computers to be fully encapsulated in fiber optic cable. The usual components needed are now mapped 1-to-1, via light. This has allowed Iron’s infrastructure to completely change. While we’ve run our infrastructure on public clouds like AWS and GCP in the past, we’ve been able to leave that all behind. We’re now able to push our entire suite of products into optical cable itself:
In the next few months, we'll be pushing all of our customers' sensitive data into the cables shown above, as well as running all Worker jobs through them. We're pretty sure the cables we purchased are for multi-tenant applications, so you can probably rest assured that we're doing the right thing. In fact, NASA has already expressed an interest in licensing this technology from Iron. Other interested parties include the government of French Guiana and defense conglomerate Stark Industries.
Researchers have kind of concluded that this technology is ready for prime time, and are also quick to state the fact that in 1998, The Undertaker threw Mankind off Hell In A Cell, and plummeted 16 ft through an announcer's table.
It was an honor to give a talk on the future of Serverless at goto Chicago, an enterprise developer conference running from May 24 to 25, 2016. As you can see from the full room, containers, microservices and serverless are popular topics with developers, and this interest extends across a wide swath of back-end languages, from Java to Ruby to node.js. Unfortunately, the talk was not recorded, so I’m providing these notes (and my slide deck) for those who could not attend.
The Evolution of Deployed Applications
Before we look forward into the future of Serverless, let’s look back. We’ve seen a historical evolution in deployed applications at multiple different levels. Whereas before the unit of scale was measured by how many servers you could deploy, we’ve moved through rolling out virtual machines to the current pattern of scaling our containerized infrastructure. Similarly, we’ve seen a shift from monolithic architectures deployed through major releases to containerized, continuously-updated microservices. This paradigm is Iron.io’s “sweet spot,” and we’re leading the enterprise towards a serverless computing world.
Travis Reeder, the co-founder and CTO of Iron.io, spoke at last night’s Docker NYC meetup about Microcontainers. In addition, Hermann Hesse of Sumo Logic spoke about Logging in Docker.
Iron.io is a big proponent of microcontainers: minimalistic Docker containers that can still process full-fledged jobs. We've seen microcontainers gaining traction amongst software architects and developers because their minimal size makes them easy to download and distribute via a Docker registry. Microcontainers are also easier to secure: the small amount of code, libraries, and dependencies reduces the attack surface and makes the OS base more secure.
My previous post, Distinguished Microservices: It’s in the Behavior, made a comparison between two types of microservices – real-time requests (“app-centric”) and background processes (“job-centric”). As a follow up, I wanted to take a deeper look at job-centric microservices as they set the stage for a new development paradigm — serverless computing.
Of course, this doesn’t mean we’re getting rid of the data center in any form or fashion — it simply means that we’re entering a world where developers never have to think about provisioning or managing infrastructure resources to power workloads at any scale. This is done by decoupling backend jobs as independent microservices that run through an automated workflow when a predetermined event occurs. For the developer, it’s a serverless experience.
Microservices is more than just an academic topic. It was born out of the challenges of running distributed applications at scale, enabled by recent advancements in cloud-native technologies. What started as a hot topic among developers, operators, and architects alike is now catching on within the enterprise because of what the shift in culture promises: the ability to deliver software quickly, effectively, and continuously. In today's fast-paced and ever-changing landscape, that is more than just desirable; it's required to stay competitive.
Culture shifts alone are not enough to make a real impact, so organizations embarking down this path must also examine what it actually means for the inner workings of their processes and systems. Dealing with immutable infrastructure and composable services at scale means investing in operational changes. While containers and their surrounding tools provide the building blocks through an independent, portable, and consistent workflow and runtime, there’s more to it than simply “build, ship, run.”
Docker enables you to package up your application along with all of its dependencies into a nice self-contained image. You can then use that image to run your application in containers. The problem is that you usually package up a lot more than you need, so you end up with a huge image and therefore huge containers. Most people who start using Docker will use Docker's official repositories for their language of choice, but unfortunately, if you use them, you'll end up with images the size of the Empire State Building when you could be building images the size of a birdhouse. You simply don't need all of the cruft that comes along with those images. If you build a Node image for your application using the official Node image, it will be a minimum of 643 MB, because that's the size of the official Node image.
I created a simple Hello World Node app and built it on top of the official Node image; it weighs in at 644 MB.
That's huge! My app is less than 1 MB with dependencies, and the Node.js runtime is ~20 MB, so what's taking up the other ~620 MB? We must be able to do better.
What is a Microcontainer?
A Microcontainer contains only the OS libraries and language dependencies required to run an application and the application itself. Nothing more.
Rather than starting with everything but the kitchen sink, start with the bare minimum and add dependencies on an as needed basis.
Taking the exact same Node app above, using a really small base image, and installing just the essentials (namely Node.js and its dependencies), it comes out to 29 MB: a full 22 times smaller!
Try running both of those yourself right now if you'd like: docker run --rm -p 8080:8080 treeder/tiny-node:fat, then docker run --rm -p 8080:8080 treeder/tiny-node:latest. Exact same app, vastly different sizes.
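For the curious, the microcontainer recipe is roughly as follows. This Dockerfile is a hedged sketch (the tags and file names are illustrative), not the exact build behind treeder/tiny-node:

```dockerfile
# Start from a tiny base image and add only the runtime: no compilers,
# docs, or other cruft.
FROM alpine:3.4

# Install just Node.js from the Alpine package repository.
RUN apk add --no-cache nodejs

WORKDIR /app
COPY server.js .

EXPOSE 8080
CMD ["node", "server.js"]
```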
Last night’s meetup, which was hosted by Betable, included two presentations and two lightning talks rounding out a solid evening for the GoSF group. Topics included identity on the web, safe storage of tokens (beyond ENV vars), and even the debut of a new Go-inspired embedded systems language.