
Monday, June 30, 2014

Introducing IronMQ Enterprise

Cloud-Native Message Queuing Now Available On-Premise

Message Queuing for Public, Private, and Hybrid Clouds
Today we’re excited to release IronMQ Enterprise, our cloud-native message queuing technology for large enterprise and carrier-grade workloads.

IronMQ Enterprise features a more advanced backend persistence layer, improved message throughput and latency, and, for the first time, deployment options for on-premise, private and hybrid cloud configurations. With IronMQ, enterprises can move the power of the cloud behind the firewall.

Since 2011, Iron.io has handled billions of messages each month for production-scale customers like AAA, Mentor Graphics, Code for America, Bleacher Report, YouNow, and Hotel Tonight. Our largest customers transmit millions of messages and process tens of thousands of compute hours daily, making Iron.io one of the largest cloud message queuing and async processing platforms available today.

Building on this experience has allowed us to improve on previous generations of messaging solutions. IronMQ Enterprise is the result of a major upgrade to the backend persistence layer, which now uses a highly performant embeddable key/value database. The new release also benefits from streamlined authentication access and an improved API. In addition, IronMQ Enterprise eliminates outside dependencies, creating an exceptionally tight, carrier-grade messaging solution that can be quickly deployed in horizontally scalable configurations across multiple zones and regions.


Critical Cloud Messaging Features

IronMQ delivers advanced messaging options including push queues and long-polling along with simple and secure access using HTTP/REST APIs and OAuth authentication. Built to be distributed across multiple availability zones and geographic regions, IronMQ provides reliable message delivery and fault tolerance through message persistence, redundant systems, and automated failover. Additional features include advanced real-time monitoring, message retries, error queues, and queue triggers. IronMQ Enterprise also provides improved gateways for supporting other messaging protocols including Amazon’s SQS messaging protocol and OpenStack’s Marconi project, increasing cloud compatibility and interoperability and reducing vendor lock-in.

Public, Private, and Hybrid Clouds

Introducing IronMQ’s on-premise availability is a huge step for us as a company and for our technology, making Iron.io the only message queue provider with high-availability services for public, private, and hybrid clouds. IronMQ Enterprise gives organizations more options for creating responsive, scalable, and fault-tolerant systems using readily accessible cloud technologies. Single-datacenter, multi-datacenter, and carrier-grade options are available and can include managed hosting and 24/7 global support.




Download and Availability 
IronMQ Enterprise is written in Go, and a single-server evaluation version can be installed from binary files or as a Docker image. Go to the IronMQ Enterprise page for access: www.iron.io/mq-enterprise.

The IronMQ Enterprise single server evaluation version is free to use, installs in minutes, and provides message persistence, multi-tenancy, one-time guaranteed delivery, and the same features, capabilities, and access methods that can be found in the public cloud version of IronMQ.

For more information on pricing of high availability multi-cluster versions, please contact us at www.iron.io/mq-enterprise.




For More Technical Details
For more information on the innovations in this release and the core message queuing engine, please check out our post below.

IronMQ Enterprise: Powered by IronMQ v3

Delivering Improved Performance + On-Premise and Hybrid Cloud Options

Today we announced IronMQ Enterprise, a set of offerings that adds more flexible configuration options, including deployment on-premise and within private clouds, as well as improved message throughput and latency.

At the heart of this release is IronMQ v3, our latest version of IronMQ. A lot of work went on behind the scenes to improve the core messaging engine and so we wanted to use this post to give you some of the details on the efforts by the Iron.io team.

A good part of the effort on IronMQ v3 was focused on improving the backend persistence layer. Persistence is a key requirement within any production-scale messaging solution: if a server or the message queue goes down, you don’t want messages to be lost. With some message queues, however, especially open source versions, message persistence has to be configured explicitly, comes at considerable cost, and can entail serious performance hits.

IronMQ offers persistence by design – meaning we don’t offer it in non-persistent form. As a result, the persistent storage layer receives a lot of attention and is something we make significant efforts to get right. In addition to the work on persistence, other efforts that went into IronMQ v3 included changes to the authentication layer as well as improvements to our APIs. All this work resulted in solid performance improvements along with greater ease of deployment and operation. At the end of this post we share performance comparisons against RabbitMQ.

Improved Backend Persistence Layer

IronMQ v3 moves to a modular format that can make use of embeddable key/value databases for the backend persistence layer, replacing the prior version that had been based on a NoSQL database implementation. The evaluation version uses RocksDB, an embeddable open-source key/value database that is a fork of LevelDB. The move to this type of key/value database provides a persistence layer that is far better suited to the needs of message queuing and the deployment needs of distributed cloud technologies. A few of these benefits include:

Read/Write Optimizations

A key/value DB is better suited for MQs
The read/write patterns for a queue are different from most other transactional or data storage use cases. The most common queue pattern is write once, read once, and then delete. While additional metadata gets stored and additional accesses go on behind the scenes, the pattern of messages in and messages out means data is continually recycled over short lifetimes. A key/value database handles frequent deletions more gracefully than our prior NoSQL solution, performing cleanup in multiple background threads so that live traffic sees little to no performance degradation. The key/value database also uses lookup optimizations to reduce the time for most “get” operations and is further optimized in conjunction with a large in-memory cache.
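
As a rough illustration of this lifecycle, here is a minimal Go sketch of the write-once/read-once/delete pattern on top of a generic key/value store. The in-memory kvStore type is only a stand-in for an embedded database such as RocksDB; this is not IronMQ's actual implementation.

    // Minimal sketch (not IronMQ's code): a queue built on a generic
    // key/value store. Messages are written once under increasing keys,
    // read once by seeking to the lowest key, then deleted for good.
    package main

    import (
        "fmt"
        "sort"
        "sync"
    )

    // kvStore stands in for an embeddable key/value database such as RocksDB.
    type kvStore struct {
        mu   sync.Mutex
        data map[string]string
    }

    func newKVStore() *kvStore { return &kvStore{data: map[string]string{}} }

    func (s *kvStore) put(k, v string) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.data[k] = v
    }

    // firstKey returns the smallest key, mimicking an ordered iterator seek.
    func (s *kvStore) firstKey() (string, bool) {
        s.mu.Lock()
        defer s.mu.Unlock()
        if len(s.data) == 0 {
            return "", false
        }
        keys := make([]string, 0, len(s.data))
        for k := range s.data {
            keys = append(keys, k)
        }
        sort.Strings(keys)
        return keys[0], true
    }

    func (s *kvStore) get(k string) string { s.mu.Lock(); defer s.mu.Unlock(); return s.data[k] }
    func (s *kvStore) delete(k string)     { s.mu.Lock(); defer s.mu.Unlock(); delete(s.data, k) }

    func main() {
        q := newKVStore()

        // Write once: enqueue messages under monotonically increasing keys.
        q.put("queue:jobs:000001", "resize image 42")
        q.put("queue:jobs:000002", "send welcome email")

        // Read once: seek to the head of the queue...
        if k, ok := q.firstKey(); ok {
            fmt.Println("dequeued:", q.get(k))
            // ...then delete, so the key never needs to be read again.
            q.delete(k)
        }
    }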

Locking Optimizations

Improved read/writes reduce locks
IronMQ v3’s database layer is logically partitioned into a read-only path and a read-write path, and its writes scale concurrently across queues. The separate paths and concurrent writes drastically reduce the number of locks on write operations, which is critical for supporting high-throughput transactions. While the database guarantees atomicity and durability, we added our own level of granularity on top of the database to provide locking for guarantees such as FIFO, one-time message delivery, and other consistent operations. This means we can optimize our locks on a per-queue basis, taking advantage of the write concurrency to avoid unnecessary locks on unrelated queues.
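
The sketch below (illustrative Go, not IronMQ's source) shows the general idea of per-queue locking: a broker-level lock guards only the queue registry, while each queue carries its own mutex, so ordering guarantees on one queue never block writes to another.

    // Per-queue locking sketch: pushes to unrelated queues never contend.
    package main

    import (
        "fmt"
        "sync"
    )

    type queue struct {
        mu       sync.Mutex // per-queue lock: FIFO ordering costs nothing to other queues
        messages []string
    }

    type broker struct {
        mu     sync.Mutex // guards only the map of queues
        queues map[string]*queue
    }

    func newBroker() *broker { return &broker{queues: map[string]*queue{}} }

    func (b *broker) queue(name string) *queue {
        b.mu.Lock()
        defer b.mu.Unlock()
        q, ok := b.queues[name]
        if !ok {
            q = &queue{}
            b.queues[name] = q
        }
        return q
    }

    func (b *broker) push(name, body string) {
        q := b.queue(name)
        q.mu.Lock() // only this queue is locked; others keep writing concurrently
        defer q.mu.Unlock()
        q.messages = append(q.messages, body)
    }

    func main() {
        b := newBroker()
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                b.push(fmt.Sprintf("queue-%d", i%4), "hello")
            }(i)
        }
        wg.Wait()
        fmt.Println("queue-0 length:", len(b.queue("queue-0").messages))
    }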


Storage Optimizations

An optimized DB means faster data access
Multicore servers and new storage options such as SSD flash drives are enabling storage IOPS on the order of millions of requests per second. Database software that can make use of the IOPS offered by flash storage can perform much faster than unoptimized DBs across random reads, writes, and bulk uploads. The switch to a more modern key/value database provides greater upside for gains on write workloads, bulk uploads, and pure random read workloads. Multi-threaded compaction can also provide gains over single-threaded compaction on IO-bound workloads, translating into fewer write stalls and more consistent latency.


Streamlined Authentication Access 

An embedded auth DB reduces latency
IronMQ v3 also contains streamlined authentication. IronMQ is a multi-tenant message queue that uses HTTP/REST protocols and OAuth authentication. Every API operation gets authenticated, which means there is a trip to either the auth database or a cache for every operation. The same improvements made to the backend persistence layer described above were also made to the auth componentry within IronMQ v3: the prior database was replaced with an efficient key/value store, so the auth layer inherits the same performance characteristics – not just reduced auth overhead but also greater consistency in response times. The change within the auth access layer also includes a more modular architecture, which makes it easier to support additional non-OAuth access methods such as PKI signatures.
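
As a rough illustration of why per-operation auth overhead matters, the sketch below (in Go, and not Iron.io's actual auth code) fronts a token store with an in-memory cache so the common case of a repeated, valid token never touches the auth database.

    // Sketch: authenticate every request, but serve repeat tokens from cache.
    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    type authStore interface {
        lookup(token string) (projectID string, err error)
    }

    // cachedAuth fronts a slower persistent store with an in-memory map.
    type cachedAuth struct {
        store authStore
        mu    sync.RWMutex
        cache map[string]string
    }

    func newCachedAuth(s authStore) *cachedAuth {
        return &cachedAuth{store: s, cache: map[string]string{}}
    }

    func (c *cachedAuth) authenticate(token string) (string, error) {
        c.mu.RLock()
        project, ok := c.cache[token]
        c.mu.RUnlock()
        if ok {
            return project, nil // hot path: no trip to the auth database
        }
        project, err := c.store.lookup(token)
        if err != nil {
            return "", err
        }
        c.mu.Lock()
        c.cache[token] = project
        c.mu.Unlock()
        return project, nil
    }

    // fakeStore stands in for the embedded key/value auth database.
    type fakeStore struct{ tokens map[string]string }

    func (f fakeStore) lookup(t string) (string, error) {
        if p, ok := f.tokens[t]; ok {
            return p, nil
        }
        return "", errors.New("invalid token")
    }

    func main() {
        auth := newCachedAuth(fakeStore{tokens: map[string]string{"abc123": "project-1"}})
        for i := 0; i < 2; i++ { // the second call is served from the cache
            p, err := auth.authenticate("abc123")
            fmt.Println(p, err)
        }
    }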

IronMQ and HTTP/REST APIs

One thing to highlight here is that IronMQ Enterprise uses HTTP as the transport protocol for connecting to the service. This is in contrast to RabbitMQ, which uses AMQP. We believe there are distinct advantages to using HTTP. One of the more pressing reasons to favor HTTP over AMQP is that AMQP is a separate application-layer protocol from the one developers already use every day, and it carries a significant amount of complexity. Everyone can easily speak HTTP, but it takes special effort to speak AMQP. Another distinction is that certain cloud application hosts don’t allow socket connections to and from their virtual environments, but they do allow HTTP requests. Additionally, HTTP and HTTPS are always open on most enterprise firewalls, but special ports for AMQP may not be. You can read more of our reasons to favor HTTP over AMQP here.
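
To show how little ceremony “speaking HTTP” requires, here is a minimal Go sketch that posts a message to a queue using only the standard library. The host, project ID, queue name, and token below are placeholders; the exact endpoint comes from your IronMQ project configuration and the v3 API documentation.

    // Enqueue a message over plain HTTP; no special client or wire protocol.
    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    func main() {
        // Placeholder host/project/queue; consult your IronMQ configuration.
        url := "https://mq-host.example.com/3/projects/PROJECT_ID/queues/jobs/messages"
        body := []byte(`{"messages":[{"body":"hello from plain HTTP"}]}`)

        req, err := http.NewRequest("POST", url, bytes.NewReader(body))
        if err != nil {
            panic(err)
        }
        req.Header.Set("Content-Type", "application/json")
        req.Header.Set("Authorization", "OAuth YOUR_TOKEN") // OAuth token in a normal header

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status) // any HTTP client or firewall-friendly port works
    }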


API Modifications | Support for Other Messaging Protocols



Improved HTTP/REST API consistency
The process of upgrading to IronMQ v3 also allowed us to make changes to the IronMQ APIs to incorporate what we’ve learned from users of a cloud-native message queue. The HTTP/REST calls didn’t change drastically, but we were able to address a few idiosyncrasies of the prior API structure as well as introduce greater consistency within the command set.

One of the changes to the API is switching the IronMQ ‘get’ operation from an HTTP GET to an HTTP POST to be consistent with REST conventions. Another change is the introduction of a reservation ID when getting a message, which must be supplied when deleting the message so as to avoid race conditions (which can occur when a message times out but is still in use by the client).
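
The Go sketch below illustrates the reserve-then-delete flow described above. The endpoint paths, field names, and host are illustrative placeholders based on the general shape of the v3 API; check the v3 API documentation (or a v3 client library) for the exact contract.

    // Reserve a message, work on it, then delete it using its reservation ID.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    const base = "https://mq-host.example.com/3/projects/PROJECT_ID/queues/jobs"

    func call(method, url string, body []byte) (*http.Response, error) {
        req, err := http.NewRequest(method, url, bytes.NewReader(body))
        if err != nil {
            return nil, err
        }
        req.Header.Set("Content-Type", "application/json")
        req.Header.Set("Authorization", "OAuth YOUR_TOKEN")
        return http.DefaultClient.Do(req)
    }

    func main() {
        // 1. Reserve a message: a POST, not a GET, since it mutates queue state.
        resp, err := call("POST", base+"/reservations", []byte(`{"n":1,"timeout":60}`))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var out struct {
            Messages []struct {
                ID            string `json:"id"`
                Body          string `json:"body"`
                ReservationID string `json:"reservation_id"`
            } `json:"messages"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil || len(out.Messages) == 0 {
            fmt.Println("no message reserved")
            return
        }
        msg := out.Messages[0]
        fmt.Println("working on:", msg.Body)

        // 2. Delete with the reservation ID: if the reservation has already timed
        //    out and been handed to another client, the delete fails instead of
        //    silently discarding someone else's in-flight message.
        del := []byte(fmt.Sprintf(`{"reservation_id":%q}`, msg.ReservationID))
        delResp, err := call("DELETE", base+"/messages/"+msg.ID, del)
        if err != nil {
            panic(err)
        }
        delResp.Body.Close()
        fmt.Println("delete status:", delResp.Status)
    }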

Developers should not see any real difference in the APIs as most users make use of client libraries to interface with the service. (The client libraries add an abstraction layer to the APIs to make it easier to use the service within a specific language or app framework.) The use of IronMQ v3 does require the use of v3 specific client libraries, which you can find available on GitHub.

More Flexible Protocol Support

Improved API gateways enhance protocol support

Also included alongside the API modifications are gateways to support other protocols such as the AWS SQS protocol and the OpenStack Marconi protocol. SQS is a popular message queuing service provided by Amazon that uses proprietary approaches for authentication and API commands. Marconi is a cloud messaging service project within OpenStack (Iron.io participated in and assisted with developing its specification). IronMQ is compatible with almost all the features of both services, and the more flexible gateway in IronMQ v3 makes it easier to support these two messaging protocols.


IronMQ v3 – A Better Cloud MQ

Our development team put a lot of hard work into improving IronMQ and releasing IronMQ Enterprise. This work has paid off with solid performance gains, simpler componentry, and the elimination of outside dependencies. The result is a fully featured but very tight, high-performance messaging solution that can be more easily deployed in high availability configurations across clouds.




Downloading IronMQ v3

IronMQ is written in Go. A single server evaluation version can be installed from binary files or as a Docker image from here: www.iron.io/mq-enterprise.

The IronMQ single server evaluation version is free to use, installs in minutes, and provides message persistence, multi-tenancy, one-time guaranteed delivery, and the same features, capabilities, and access methods that can be found in the public cloud version of IronMQ.



Performance Measurements

IronMQ vs RabbitMQ
The work described above has resulted in significantly higher throughput for writes as well as performance increases for reads over prior versions of IronMQ.

Below are performance tests for IronMQ and RabbitMQ. We chose RabbitMQ as a base comparison given how popular and well-established RabbitMQ is. The results are indicative of the performance characteristics of IronMQ v3.

A total of 5 tests were performed. The first 4 tests were performed in “transient” mode for RabbitMQ, meaning no message persistence – all messages exist only in memory. The last test was performed in “persistent” mode for RabbitMQ; note that performance drops significantly in this mode. IronMQ is always persistent (meaning that messages are persisted to disk and cannot be lost in the event of a server or MQ crash).

Test Specifications

The code for the test suite can be found at github.com/rdallman/iron-maiden. A simplified sketch of a throughput measurement loop follows the spec list below.

  • Both RabbitMQ and IronMQ were run on separate AWS m3.2xlarge boxes with the following specs:
    • 8 vCPU clocked at 2.5GHz
    • 30 GB RAM
    • 2 x 80 GB SSD storage
  • Databases for each MQ were cleared before each benchmark
  • Producers and consumers ran on a single AWS m1.small box in the same datacenter
  • Each message body was a 639-character phrase

  • 4 tests were performed in “transient” mode for RabbitMQ (no persistence)
  • 1 test was performed in “persistent” mode for RabbitMQ (resulting in performance hits)
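
For reference, here is the general shape of a throughput measurement loop in Go. It is a simplified illustration only, not the iron-maiden suite itself; enqueue is a placeholder for a single API request carrying a batch of messages.

    // Simplified throughput loop: N producers, each sending batched requests.
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func enqueue(batch int) {
        // Placeholder for an HTTP POST carrying `batch` messages to the queue.
        time.Sleep(time.Millisecond)
    }

    func main() {
        const (
            producers = 8   // concurrent producers
            requests  = 500 // API requests per producer
            batch     = 100 // messages per API request
        )

        start := time.Now()
        var wg sync.WaitGroup
        for p := 0; p < producers; p++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for i := 0; i < requests; i++ {
                    enqueue(batch)
                }
            }()
        }
        wg.Wait()

        elapsed := time.Since(start).Seconds()
        total := float64(producers * requests * batch)
        fmt.Printf("%.0f messages in %.2fs => %.0f msgs/sec\n", total, elapsed, total/elapsed)
    }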

Processing 1 Message at a Time (1 message per API request)

Processing 100 Messages at a Time (100 messages per API request)


Processing 100 Messages at a Time w/Single Consumer/Producer


Processing 10,000 Messages per Queue, 100 Queues, 1 Message at a Time

Processing 1 Message at a Time with Persistence Turned on for RabbitMQ


Future Benchmarks

We’re doing more benchmarks against other MQs and configurations, so stay tuned. For now, though, we’re celebrating a bit and then getting back to work. There’s a lot more performance we believe we can squeeze out.

Salud!








Wednesday, June 25, 2014

Getting Pushy with Symfony2! (guest post)


This is a guest post from Keith Kirk, VP of Engineering at Underground Elephant.

Message queues are not a new concept – neither are push notifications, nor HTTP POSTs for that matter. However, when you combine these ideas you have a very flexible queueing system.

Sprinkle in a little bit of Symfony's EventDispatcher... suddenly your Symfony application starts to feel a whole lot more responsive. And that feels great.

The Lead Up...
Underground Elephant is an online performance marketing company that provides customer acquisition software for its clients. We're constantly weighing the benefits of hosting our own services vs using cloud-based alternatives. More often than not, in our fast-paced environment, we have bigger fish to fry than administering, scaling, and maintaining simple services that, while important, are not core to our business.

So, when it came to finding ways to increase the responsiveness of our Symfony2 applications and move non-essential processes to the background, we started exploring job queues and messaging.

Admittedly, spinning up an instance of RabbitMQ or ØMQ ain't hard, neither is writing a script that will keep a socket open waiting for new messages.

I guess, frankly, we just didn't want to do it. It was another moving part that my teams had to maintain outside of their applications and more dependencies and infrastructure in our deployment.

Queue Your Cake and Eat it Asynchronously
What I wanted was to keep my dependencies light and in code – maintained by composer. I wanted a way to write my worker code as simple services within my Symfony application - all maintained within the same repository and deployed just as easily.

I wanted persistence in my queue and a level of failover in case of errors. I wanted hands off scaling and distribution - and I really wanted someone else to manage it.

So, I wanted a lot, actually.

Enter the Symfony2 QPush Bundle, integrated with IronMQ, Iron.io's cloud message queue as a service.

The bundle integrates push queue providers directly into your Symfony application allowing you to create and manage multiple queues. Subscribers are configurable, allowing differing and/or multiple subscribers per queue.

You can publish messages easily...
src/My/Bundle/ExampleBundle/Controller/MyController.php

The bundle leverages the EventDispatcher to dispatch a MessageEvent when your published messages are received from your queue. Your services are called automatically based on simple tagging, which gives you a lot of flexibility in chaining services or handling multiple queues in a single service.

Because it utilizes simple services in Symfony, it's very easy to reuse your existing code. For us, this made adoption incredibly easy.

Handle Events in Your Services
src/My/Bundle/ExampleBundle/Service/ExampleService.php


Check out the bundle and the documentation on ReadTheDocs for more information on how to incorporate this into your application.

Push Queues are Awesome!
Push Queues may not appeal to everyone or fit each use case, but there is a ton of upside to them.

You know that queue you have that's not heavily utilized, but where processing each message as soon as possible is incredibly important? Yeah, the one you're polling on the same 5-second interval at 3am as at the heaviest time of day. By pushing notifications directly to your application, you can remove that wasted compute, the wasted API calls, and the wasted money.

No more daemon, no more cron.

For PHP specifically, threading has always been a sore spot (read, "non-existent"). However, with Push Queues, you can utilize your web server (Apache, Nginx, etc) to handle the threading for you. This also means you can easily scale horizontally by either registering more subscribers or utilizing a cluster of web servers behind a load balancer.

Wrapping Up
The QPush Bundle is open source and openly available to use. We also welcome contributions and feedback. If you have any questions, please visit us on GitHub!

About the Author
Keith Kirk is VP of Engineering at Underground Elephant. Underground Elephant delivers enterprise marketing software solutions. The company was founded in 2008 and is headquartered in San Diego, California.

Sunday, June 22, 2014

Evan Shaw speaking at Gopher SummerFest

Evan Shaw, one of Iron.io's top systems engineers, is speaking at the Gopher SummerFest on June 23rd. The event is produced by GoSF, the largest Go language user group in the world, and sponsored by Google and Iron.io.

GoSF is approaching 1,200 members and comprises CTOs, engineering heads, system architects, and developers from around the San Francisco Bay Area. (If there's a group more experienced with modern system architectures, we'd like to see it.)

The list of speakers and topics includes many of the leading voices in the Go community.

  •  Talk 1: Andrew Gerrand, Google – The State of Go
  •  Talk 2: Derek Collison – Go at Apcera
  •  Talk 3: Evan Shaw – Go at Iron.io
  •  Talk 4: Blake Mizerany – Go at CoreOS
  •  Talk 5: Matt Aimonetti – Go at Splice

Evan will be talking about testing Go applications. Specifically, he'll go over the use of the testing/quick package, a useful tool in a developer's testing toolbox, demonstrate an extension to shrink test cases, and share how we're using it to find bugs in Iron.io's distributed architecture.

About the Speaker – Evan Shaw
Evan is a systems and lead engineer at Iron.io. He has a background in embedded software, systems programming, and C/C++ programming. In Go's early years, he was the top non-Google contributor to the Go programming language. He splits his time at Iron.io between building system-level features and improving the performance and durability of components.

Tuesday, June 3, 2014

How HotelTonight uses Iron.io and AWS Redshift to create a Ruby-based ETL pipeline (repost)

Creating an ETL pipeline with Iron.io and Redshift
Operating at scale in the cloud almost always equates to having a highly distributed system architecture in place to handle workloads by auto-scaling components out horizontally.

Harlow Ward is a developer at HotelTonight, and he put together a great post on how they handle issues of scale. In it he talks about their use of Iron.io and Amazon's Redshift offering to create a simple, highly scalable ETL pipeline.

Here's an excerpt from his article.
As the data requirements of our Business Intelligence team grow, we’ve been leveraging Iron.io’s IronWorker as our go-to platform for scheduling and running our Ruby-based ETL (Extract, Transform, Load) worker pipeline. 
Business units within HotelTonight collect and store data across multiple external services. The ETL pipeline is responsible for gathering all the external data and unifying it into Amazon’s excellent Redshift offering. 
Redshift hits a sweet spot for us as it uses familiar SQL query language, and supports connections from any platform using a Postgres adapter. 
This allows our Ruby scripts to connect with the PG Gem, our Business Intelligence team to connect with their favorite SQL Workbench, and anyone in our organization with Looker access to run queries on the data. 
HotelTonight's Dashboard of Workers
The team at Iron.io have been a great partner for us while building the ETL pipeline. Their worker platform gives us a quick and easy mechanism for deploying and managing all our Ruby workers. 
The administration area boasts excellent dashboards for reporting worker status and gives us great visibility over the current state of our pipeline. 
Read more >>

––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––

About the Author/Developer
Harlow Ward is a developer at HotelTonight. Prior to that, he was at thoughtbot (creators of Paperclip, Factory Girl, Shoulda, Airbrake, and more). He's co-author of "Ruby Science," and enjoys writing technical articles focused on sharing development techniques throughout the community. (@futuresanta)