Wednesday, February 26, 2014

Iron.io Launches on Pivotal Cloud Foundry Platform

Iron.io is proud to announce that IronMQ and IronWorker are now available as add-on services on Pivotal’s web-based Platform-as-a-Service (PaaS), which is available at http://run.pivotal.io and runs the open source Cloud Foundry platform.

Run.pivotal provides app developers with a powerful option to rapidly deploy and scale new applications. The recent launch of Pivotal CF – a commercial distribution of Cloud Foundry from Pivotal that is deployable on VMware’s vSphere IaaS platform – adds an industrial-strength option for deploying applications on cloud infrastructure, providing choice for business owners who want a combination of on-premises, cloud, and hybrid application hosting solutions.

James Watters
Cloud Foundry at Pivotal
“IronMQ and IronWorker add a proven suite of developer-focused tools to the Cloud Foundry ecosystem,” said James Watters, Head of Product, Cloud Foundry at Pivotal. “This is a great win for developers who want to use best-of-breed tools to build the next generation of web and mobile apps. It augments the breadth of options available today, such as the current AMQP-based message services, with additional message queueing and worker services designed for the way developers build products.”


About IronMQ and IronWorker

IronMQ and IronWorker are elastic cloud services that scale to handle whatever messages and workloads you send them.

IronMQ is a reliable message queueing service perfect for building multi-tier applications. The service features push queues, error queues, message retries, alerts, and a number of other capabilities that are critical for separating internal app components and interfacing with third-party services. IronMQ supports asynchronous messaging, work dispatch, load buffering, database offloading, and more. Accessible through HTTP/REST API calls and client libraries, IronMQ is easy to use, highly available, and requires no setup or maintenance.
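To make the HTTP/REST claim concrete, here is a minimal sketch of pushing a message onto an IronMQ queue and pulling one back off using Python and the requests library. The host, API version, endpoint paths, and payload shapes shown are assumptions for illustration – check the IronMQ API documentation or a client library for the exact formats.

import requests

# Assumed values -- replace with your own project credentials.
HOST = "https://mq-aws-us-east-1.iron.io/1"
PROJECT_ID = "YOUR_PROJECT_ID"
TOKEN = "YOUR_TOKEN"
HEADERS = {"Authorization": "OAuth " + TOKEN, "Content-Type": "application/json"}

def queue_url(queue_name):
    # Assumed path shape for queue-level endpoints.
    return "{}/projects/{}/queues/{}".format(HOST, PROJECT_ID, queue_name)

# Push a message onto the "tasks" queue.
resp = requests.post(queue_url("tasks") + "/messages",
                     json={"messages": [{"body": "resize image 1234"}]},
                     headers=HEADERS)
resp.raise_for_status()

# Pull a message back off the queue.
resp = requests.get(queue_url("tasks") + "/messages", headers=HEADERS)
print(resp.json())

In practice the client libraries wrap these calls, but the interaction stays this simple: a queue is just an HTTP endpoint you post messages to and get messages from.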

IronWorker is an elastic task queue / worker service that scales out the processing to let you focus on building applications that scale. Every production-scale application needs to do work in the background. IronWorker gives you an easy and reliable way to run tens, hundreds, or thousands of tasks at once. Queue tasks from your app, run tasks via webhooks, or schedule jobs to run later. IronWorker supports all common languages (including binary packages), and offers task retries and sophisticated task monitoring.


The Growth of Multi-Tier Architectures

Whether it’s deploying, monitoring, scaling, or making things fail-safe, the base cloud stack has long been one where app servers and storage solutions are the core. This view is a valid one, but it is only a partial picture, because cloud applications have become much more complex.

Instead of starting with a two-tier application – the application tier and the database tier – developers are building multi-tier architectures from the onset. They are including components such as message queues, worker systems, key-value data caches, job schedulers, and other services to offload workloads from the main request/response loop and allow applications to be more responsive and do more processing.

Multi-Tier Architectures Increase Scale and Agility
Production-scale cloud applications, for example, use message queues to provide ways to connect processes within systems, interface with other systems, buffer activity to databases, and power service-oriented architectures. They use worker systems to offload processing to the background, scale out processing across many concurrent tasks, or run tasks on regular schedules. Examples of these types of workflows include creating thumbnails, sending emails and notifications, or hitting multiple APIs to get data to display.


The Advantages of Cloud Services

Ready-to-use cloud-based services for message queueing and task processing create tremendous efficiencies and agility. By plugging into elastic cloud services, developers no longer have to stand up and maintain these infrastructure components. They do not have to make them redundant or provision them in multiple zones and regions.

Making message queuing and task processing readily available for Pivotal developers means that they get to build advanced processing capabilities into their applications from the start. With simple API calls, they can create queues, send and receive messages, and process hundreds or thousands of tasks, not just from day one but from minute one.

And they can do it without having to worry about managing servers or dealing with infrastructure or system concerns. The benefits of cloud-based messaging and background/async processing include:

  • Speed to market: applications and systems can be built much more quickly
  • Reduced complexity: reduced risk/overhead in critical but non-strategic areas
  • Increased scalability: ability to seamlessly scale throughput and functionality


Chad Arimura
CEO, Iron.io
"Iron.io offers high-scale HTTP-based messaging and task processing services that accelerate the way cloud developers build distributed systems and create service-oriented architectures. These capabilities alongside the Pivotal Cloud Foundry platform is a powerful combination for developers creating production-scale applications."


Pivotal Cloud Foundry + Iron.io = A Powerful Combination

Just as VMs have made it easier to create new applications, elastic on-demand message queues and asynchronous processing will power another era – large-scale distributed cloud-based systems where message queuing and high-scale task processing are abstracted away from servers, and where ease of use, reliability, monitoring, and features specific to the cloud are key.

Developers win because they will be able to build and scale applications much more quickly, at a lower cost, and with far less complexity. Iron.io is honored to be partnering with Pivotal: we share the same mission to drive this shift in computing and deliver greater ease and much higher value.


Wednesday, February 19, 2014

Iron.io Announces Alerts for IronMQ

Alerts can now be set on queues to trigger actions.
Iron.io is pleased to announce the release of alerts for IronMQ. IronMQ is a cloud-based message queuing service that offers high scale and high availability. It provides pull and push queues – meaning processes can get messages and events (pull), or the queue can push messages to processes and other endpoints (push).

Alerts have now been incorporated into IronMQ. This feature lets developers control actions based on the activity within a queue. With alerts, actions can be triggered when the number of messages in a queue reaches a certain threshold. These actions can support things like auto-scaling, failure detection, load monitoring, and system health checks.


An Important Feature for Production-Scale Applications

IronMQ has been designed to be the message queue for the cloud. It can serve as a simple buffer between processes but it is also meant for more complex use. It offers push queues, HTTP/REST access, guaranteed one-time delivery, FIFO and now alerts. As a result, it’s even easier to build production-scale applications on cloud infrastructure. 

Instead of a monolithic app structure consisting of a bunch of app servers and a database, applications can be built right from the start as distributed systems ready to scale as needed to handle increasing workloads. Processes can be separated and scaled up and down effortlessly. More automated workflows can be created to deal with a varying number of request/response loops and the workloads they generate on the backend.


Flexible Control of Alerts

Because alerts are so important, we put a flexible alert mechanism in place, giving developers fine-grained control over how they want to be alerted and under what circumstances. Users can select the trigger (or size of the message queue) as well as whether it should be a fixed or a progressive alert (one time or on a scaled basis every x messages). In the case of a progressive trigger, users can choose whether it’s ascending or descending. There’s also a snooze parameter that lets users limit the number of alerts within a certain period of time. 

Alerts are sent to a notification endpoint, which is an IronMQ queue that you define. This queue can be configured to trigger one or more actions. You can push to a single endpoint or you can fan out to several (up to 100 if you want). You can also kick off workers in IronWorker from this alert queue or send messages to other queues.

This flexibility in settings, and using a queue to deliver the alerts, means that you can send the alert to a variety of processes and services. You can send messages to workers using ascending alerts, for example, to launch more servers to handle increasing workloads. (Alternatively, you can scale your servers down with descending alerts.) You can send notifications via SMS or email using Twilio or SendGrid, for example, or you can hit services like PagerDuty. Because an alert queue can be a push queue, you can communicate with any service that accepts a webhook. And in a world where webhooks are becoming pretty magical, this capability opens up a lot of possibilities that even we can’t predict.
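As a rough illustration of the webhook idea, here is a minimal sketch of a receiver that an alert push queue could hit. It is built with Flask; the payload fields it reads and the scale_up() action are illustrative assumptions, not the documented alert message format.

from flask import Flask, request

app = Flask(__name__)

def scale_up():
    # Placeholder action: launch more workers or servers, page someone, etc.
    print("scaling up workers")

@app.route("/alerts", methods=["POST"])
def handle_alert():
    alert = request.get_json(force=True) or {}
    # Assumed fields for illustration: the source queue and its current size.
    print("alert from queue:", alert.get("source_queue"), "size:", alert.get("queue_size"))
    scale_up()
    return "ok", 200

if __name__ == "__main__":
    app.run(port=8080)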



How to Use Alerts in IronMQ

One or more alerts can be set on a pull queue. Within the API, alerts are added to a queue by making a POST to that queue’s alerts endpoint:

POST /projects/{Project ID}/queues/{Queue Name}/alerts/

URL Parameters
  • Project ID: The project that the queue belongs to.
  • Queue Name: The name of the queue the alerts are being set on.
Body Parameters
  • Alerts: An array of alert hashes, each containing the required "type", "direction", "queue", and "trigger" fields and the optional "snooze" field. The maximum number of alerts per queue is 5.

Acceptable fields of an alert hash are:
  • type - required - "fixed" or "progressive". When the type is set to "fixed", the alert is triggered when the queue size passes the value set by the trigger parameter. When the type is set to "progressive", alerts are triggered when the queue size passes any of the values calculated as trigger * N where N >= 1. (For example, if the trigger is set to 10, alerts are triggered at queue sizes 10, 20, 30, ...)
  • trigger - required. Used to calculate the actual queue sizes at which an alert is triggered; see the type field description. The trigger must be an integer value greater than 0.
  • direction - required - "asc" or "desc". Sets the direction in which the queue size must be changing when it passes the trigger value. If the direction is "asc", the queue size must be growing to trigger the alert; if it is "desc", the queue size must be decreasing.
  • queue - required. The name of the queue to which alert messages are posted.
  • snooze - optional. The number of seconds between alerts. If an alert would be triggered while the snooze delay is still active, it is omitted. Snooze must be an integer value greater than or equal to 0.

Note:  The IronMQ client libraries will follow a similar approach in terms of the array and hash fields. See the client library for the language of your choice for more specifics.


Sample Settings for Alerts

Setting up Auto-scaling for a Queue
To have a pull queue auto-scale the processing of messages, you can use a progressive alert. For example, set a progressive alert with a trigger of 1000 and an ascending direction on the queue being processed, posting alert messages to a queue entitled “worker_push_queue”. This pattern would send an alert to the “worker_push_queue”, which can then trigger additional workers and allow for seamless auto-scaling.

  {
    "type": "progressive",
    "trigger": 1000,
    "direction": "asc",
    "queue": "worker_push_queue"
  }
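As a rough sketch, the alert above could be added to a pull queue by wrapping it in the "alerts" array described under Body Parameters and POSTing it to the endpoint shown earlier. The host, API version, and auth header format below are assumptions for illustration.

import requests

HOST = "https://mq-aws-us-east-1.iron.io/1"   # assumed API host and version
PROJECT_ID = "YOUR_PROJECT_ID"
TOKEN = "YOUR_TOKEN"

alert = {
    "type": "progressive",
    "trigger": 1000,
    "direction": "asc",
    "queue": "worker_push_queue",
}

# POST /projects/{Project ID}/queues/{Queue Name}/alerts/
resp = requests.post(
    "{}/projects/{}/queues/my_pull_queue/alerts/".format(HOST, PROJECT_ID),
    json={"alerts": [alert]},
    headers={"Authorization": "OAuth " + TOKEN},
)
resp.raise_for_status()
print(resp.json())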

Tuesday, February 18, 2014

Top 10 Uses of a Worker System

A worker system is an essential part of any production-scale cloud application. The ability to run tasks asynchronously in the background, process tasks concurrently at scale, or schedule jobs to run on regular schedules is crucial for handling the types of workloads and processing demands common in a distributed application.

At Iron.io, we’re all about scaling workloads and performing work asynchronously and we hear from our customers on a continuous basis. Almost every customer has a story on how they use the IronWorker platform to get greater agility, eliminate complexity, or just get things done. We wanted to share a number of these examples so that other developers have answers to the simple question “How do I use a worker system?” or "What can I do with a task queue?"

The following list is a pretty powerful set of examples. We’re confident there are uses here that every developer can benefit from. If you see any common ones that are missing, though, be sure to let us know and we'll add to the list.


1.  Image Processing

Process Images in the Background
Pictures are a critical piece in consumer applications. If you’re not making use of them in your app, then you’re missing out on ways to capture users and increase engagement. Nearly every use of photos requires some element of image processing, whether that’s resizing, rotating, sharpening, watermarking, thumbnails, or otherwise. Image processing is, more often than not, compute-heavy, asynchronous in nature, and linearly scaling (more users mean more processing). These aspects all make it a great fit for the flexible and elastic nature of IronWorker.

The most common libraries for image processing we see in IronWorker are ImageMagick, GraphicsMagick, and LibGD. These packages are easy to use and provide some incredible capabilities. It’s easy to include them within a worker and then upload it to IronWorker. The beauty of this use case is that image processing is typically an atomic operation. An image is uploaded, processed, and then stored in S3 or another datastore. There may be call-backs to the originating client, or another event might be triggered, but the processing is isolated and perfect for running within a distributed and virtual environment. Scaling something like this in IronWorker is as simple as sending IronWorker more tasks – very little additional work for developers and, in return, almost limitless scale.
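As a rough sketch of the pattern, here is what the body of a thumbnail worker might look like, shelling out to ImageMagick’s convert command. The payload handling, image URL, and upload step are illustrative assumptions; a real worker would read its payload from IronWorker and push the result to S3 or another datastore.

import subprocess
import urllib.request

def make_thumbnail(image_url, out_path="thumb.jpg", size="128x128"):
    # Download the source image to local disk.
    local_path, _ = urllib.request.urlretrieve(image_url, "source.jpg")
    # Resize it with ImageMagick's convert CLI.
    subprocess.run(["convert", local_path, "-thumbnail", size, out_path], check=True)
    return out_path

if __name__ == "__main__":
    thumb = make_thumbnail("https://example.com/photo.jpg")  # assumed payload value
    print("wrote", thumb)  # here you would upload to S3 or notify the app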


2.  Web Crawling  |  Data Extraction

Access and Crawl Data at Scale
The web is full of data — from social, to weather, to real estate, to bitcoin transactions, data is available to access, extract, share, create derivatives, and transform in any number of ways. But crawling and extracting data from the web requires lots of concurrent processes that run on a continual or frequent basis. Another great fit for background processing and IronWorker.

Several great code libraries exist to help with web crawling, including packages such as PhantomJS, CasperJS, Nutch, and Nokogiri – all of which run seamlessly on the IronWorker platform. As with image processing, web crawling is essentially a matter of including these packages within your worker, uploading them to IronWorker, and then crawling at will.

There might be a sequence of steps – grab a page, extract links, get various page entities, and then process the most important ones – in which case, additional workers can be created and chained together. To give you a good idea of what’s possible here, we've written several examples and blog posts that you can find here and here.


3.  Sending Push Notifications

Coordinate Push Notifications
A push notification is a message sent from a central server (publisher) to an end device (subscriber). The two most common platforms for sending push notifications are the Apple Push Notification service (APNS) for iOS and Google Cloud Messaging (GCM) for Android.

Push notifications tend to go out in batches. For example, a breaking news alert might be sent to millions of subscribers. Notice of a flight delay might be sent to thousands of flyers. Sending these notifications out in serial batches takes way too long. A better architecture is to use IronWorker to deliver these push notifications through APNS and GCM in parallel. This approach also lends itself to processing the lists on the fly to either dedup lists or offer customized messages.

With a news alert, for example, you could spawn 1,000 workers in parallel, each sending a batch of 1,000 notifications serially. This would reach over a million news subscribers in the time it takes to process a single batch. That is a huge advantage in delivery speed and a capability that would be hard to create and manage on your own. With IronWorker, it’s a relatively simple matter to get this type of concurrency and throughput.
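A minimal sketch of that fan-out pattern: split the subscriber list into fixed-size batches and hand each batch to its own worker task. The queue_worker_task() helper is a placeholder assumption – in practice it would queue an IronWorker task (or hit a worker webhook) with the batch as its payload.

def chunk(items, size):
    # Yield successive fixed-size slices of a list.
    for i in range(0, len(items), size):
        yield items[i:i + size]

def queue_worker_task(payload):
    # Placeholder: enqueue a push-notification worker with this payload.
    print("queued worker for", len(payload["tokens"]), "device tokens")

subscribers = ["device-token-%d" % n for n in range(1000000)]

for batch in chunk(subscribers, 1000):
    queue_worker_task({"message": "Breaking news!", "tokens": batch})

# ~1,000 tasks run in parallel; each sends its 1,000 notifications serially.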

4.  Mobile Compute Cloud

Process Mobile Events in the Background
Mobile applications push a lot of the processing off the device and into the background. Services and frameworks like Parse and Firebase allow for rapid mobile app development by providing backend services such as user management and mobile app-centric datastores.

But these frameworks don’t work so well when it comes to providing processing capabilities. (Parse Cloud Code, as an example, provides a number of capabilities but falls short in many ways). Processing lots of evented data is where IronWorker shines.

Data can be put on a message queue directly from mobile apps and then workers in IronWorker can be running to continually process events from the queue. The processing that’s performed is entirely dependent on the needs of the app.

Alternatively, the mobile frameworks mentioned above also allow connections to HTTP webhooks. You can point these endpoints at workers, which can then be kicked off to perform actions. Using IronWorker as an asynchronous (and almost serverless) processing engine makes building powerful mobile applications a breeze.

5.  Data Processing

“Big data” is certainly a hot topic these days and Hadoop is a common answer. But all data is not “big” and even when it is, many “big-data” problems don’t work well with a map-reduce model. A couple of supporting articles on this theme can be found here and here.

Process Data on a Continual Basis
In the end, a large number of “big data” use cases essentially boil down to large-scale “data processing”, and IronWorker is made for this. Let’s say you have a big list of zip codes and need to pull weather data from a weather API as well as population data from a different API that times out after 10 concurrent connections. Traditional “big data” solutions are simply too complex to manage situations like this. IronWorker provides a flexible but still massively parallel way to accomplish this.

Or Scale Out Your Processing with Task-Level Granularity
You can run tasks in parallel as well as perform complex workflows. High concurrency can be brought to bear so that thousands of processes can run at the same time. Alternatively, you can put constraints on the processing so that only a limited number of workers run at once. In the case above, setting a max concurrency would ensure that you don’t exceed the 10-connection limit on the population API.

As with web crawling, tasks can be chained together and results stored in a cache or other datastore or placed on a queue for additional processing or aggregating results. The Iron.io platform is flexible and powerful enough to process almost any type of data – big, small, hot, cold, or anywhere in between.


Monday, February 17, 2014

How One User Automated a Research Study with IronWorker and Twilio (a repost from Usability Panda)


Katarzyna Stawarz, a PhD student at University College London, wrote a really nice blog post on using Twilio and IronWorker to communicate via SMS. She was conducting a study that tested a method of habit formation research and needed a way to send out reminders to participants at specific times during the day, every day, for 4 weeks.

Originally, Katarzyna was going to do it manually but then decided to dust off her programming skills and automate the whole thing. Brilliant!

She used Twilio for the SMS, of course, along with IronWorker for scheduling and async processing, and after just a bit of coding she was up and running in no time flat – receiving and responding to several thousand study responses and managing 1,000+ reminders.

Katarzyna explains just how easy it was to use IronWorker with Twilio:
I needed a process running on my server to fire up reminders at a specific time and since I haven’t touched servers and their settings for at least 6 years, I wasn’t very keen to suddenly do that. So I used IronWorker to trigger my Twilio code. It was free and surprisingly easy to use, with everything nicely explained.
Here's another quote we couldn't pass up.
Twilio + IronWorker (with code!)
If you want to run a study that requires sending or receiving SMS (or both!), the Twilio + IronWorker combo is a great solution. It’s easy to set up and affordable. One of my colleagues already re-used my code to run her study (although she triggered her messages manually) and was quite happy with the tech. So yay for Twilio and iron.io :-)



To read more about Katarzyna's use case, visit her blog Usability Panda or find her on Twitter @falkowata!



To learn more about how IronWorker can help your app effortlessly perform work in the background asynchronously, please visit Iron.io today.

Friday, February 14, 2014

Iron.io Drinkup – Booze Queues' Edition

In keeping with all the love going around today, we wanted to let you know about an Iron.io Drinkup we're hosting with our friends at Keen IO next week (Wed, Feb 19th). It'll be at our offices at Heavybit. In keeping with the theme, here are the details in the form of some JSON. Drink up.

  {
    "event": {
      "name": "Booze Queues' Happy Data Hour",
      "type": "meetup",
      "pretty_timestamp": "Wednesday, February 19th, 6:30pm-8:30pm",
      "location": {
        "venue": "Heavybit",
        "street_address": "325 9th Street",
        "city": "San Francisco",
        "state": "CA",
        "zip": 94103
      },
      "beer": true,
      "snacks": true,
      "good_times": true,
      "host": "Iron.io & Keen IO"
    }
  }



Other Events Next Week

We also have a couple other events going on next week with our Iron faithful.
  1. Wednesday, Feb 19th GoSF meetup at Heroku
    • Food & Drink!
    • Talk 1: Building Distributed Systems with Mesos + Go
    • Talk 2: Stream Multiplexing in Go 
    • Talk 3: Dependency Management



  2. Thursday, Feb 20th SFRails meetup at Blurb
    • Food & Drink!
    • Talk 1: Caching and HTTP Acceleration with Varnish
    • Talk 2: Introduction to Docker + Tips on using Docker
    • Tech Talk: GitHub repo discovery via Sourcegraph


So, join us and introduce yourself to one of our evangelists (@yaronsadka and @stephenitis)...they might have a couple shirts to dish out.

Monday, February 10, 2014

Go Sessions: Teaching Go to Experienced Devs (via GoSF)


As a result of the growing interest in Go, Travis Reeder and other GoSF organizers have decided to create a program within GoSF to teach Go programming concepts and fundamentals. We're calling them Go Sessions and they consist of evening guided pair-programming sessions.

Iron.io is one of the main organizers of the GoSF meetup, and we've seen the group grow tremendously over the past year. It's not too surprising given the number of developers in the SF Bay Area who are looking at Go as a solution to address scalability and core server-side processing needs. The number of companies that are using Go in production is pretty impressive. Besides Iron.io, the list includes Heroku, Apcera, Bitly, Sourcegraph, Canonical, Pivotal, and many more.

About Go Sessions

The Go Sessions are targeted towards experienced developers – developers who have built production systems and who know their way around server-side components. Expert Go developers will lead the sessions and run participants through Go compiling, dependency management, program structure, concurrency, and more.

Sessions will be held on a regular basis, with at-large spots available as well as slots for workgroups and companies that are exploring the use of Go in production.

Go Session #1

The first session is sponsored by Rackspace and Airbrake and will be held on Wed, March 5th at Sourcegraph. Quinn Slack, Beyang Liu, and Yin Wang from Sourcegraph will host and run the first Go Session. Travis Reeder and Sidney Zhang from Iron.io will be on hand as well to assist.

You can see more details on the GoSF event page. Given the popularity of Go, we have no doubt they'll be popular events.

Wednesday, February 5, 2014

Iron.io Launches IronMQ in Europe

IronMQ is now available in Europe
Iron.io is happy to announce today the launch of IronMQ services in Europe. IronMQ EU provides the full functionality of IronMQ currently available in the US and is open to the public in general release.

IronMQ offers two endpoints in Europe – AWS EU-West and Rackspace-LON. These endpoints join services in AWS US-East (N.Virginia) and Rackspace-Ord (Chicago). Switching regions and clouds is as simple as changing endpoints within messaging clients and can be accomplished on the fly.

IronMQ is one of the leading message queuing services in the cloud and provides developers with an easy and reliable way to create distributed applications and operate at scale. Customers are using IronMQ to create service-oriented architectures, process streaming data, connect mobile apps, sync with legacy systems, and handle other core messaging and event-handling needs.


Reduced Message Latency + European Data Locality


The launch is in response to increased demand from European customers who want to have message queues available in the same region as their applications. With the release, these customers in the EU can now benefit from reduced message latency as well as having all message data retained within the European region.

The API servers, MQ servers, and data persistence layer for IronMQ EU all run in European datacenters. IronMQ EU is running the most current release of IronMQ and will continue to do so on the same schedule as Iron.io’s US-based datacenters.

Features available in IronMQ EU include one-time guaranteed delivery (no duplicate messages), secure OAuth gateways, HTTP/REST access, FIFO (first-in, first-out), push queues, multiple language bindings, and more.

Reduced message latency and European data locality are the biggest drivers for our push to launch IronMQ in Europe. It’s important that applications running in Europe avoid the cost of transmitting messages to and from US regions. This release answers these needs and provides startups and production-scale applications with easy and reliable access to state-of-the-art message queuing services.
– Chad Arimura, CEO, Iron.io

Connecting to IronMQ EU


Changing your cloud is as simple as selecting the host you want. You can set the host in your Iron.io configuration files and connect to the service via REST APIs or client libraries. Each of the official IronMQ client libraries allows you to change a configuration setting to set the host the library connects to. Client libraries are available for almost all common languages.

IronMQ also has support for a growing number of application frameworks via third-party integrations. These include bindings for Laravel, Drupal, Celery, Yii, the .NET Framework, and others. Using these frameworks with IronMQ EU is also a matter of specifying the appropriate endpoints within the configuration files.

Cloud                       Host
AWS US-East (Virginia)      mq-aws-us-east-1.iron.io
AWS EU-West (Ireland)       mq-aws-eu-west-1.iron.io
Rackspace ORD (Chicago)     mq-rackspace-ord.iron.io
Rackspace LON (London)      mq-rackspace-lon.iron.io
Note: Certain elements of the backend persistence layer for IronMQ EU reside in the Rackspace European datacenter.
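Switching a running application to the EU region is then just a matter of pointing at the EU-West host from the table above. The sketch below reuses the same illustrative REST-style call as earlier; the path shape and auth header are assumptions, and the client libraries expose the same switch through a host setting in their configuration files.

import requests

HOST = "https://mq-aws-eu-west-1.iron.io/1"   # EU-West endpoint from the table above
PROJECT_ID = "YOUR_PROJECT_ID"
TOKEN = "YOUR_TOKEN"

resp = requests.post(
    "{}/projects/{}/queues/orders/messages".format(HOST, PROJECT_ID),
    json={"messages": [{"body": "hello from the EU"}]},
    headers={"Authorization": "OAuth " + TOKEN},
)
print(resp.status_code)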


IronMQ – Built for the Cloud and Geographic Distribution


A messaging layer is key to creating reliable and scalable distributed systems. It is a primary structural component in production-scale applications and provides work dispatch, load buffering, synchronicity, database offloading, and many other core needs.

IronMQ provides a durable, high-performance message queue accessible through API calls and open-standard message protocols. It is built specifically for the cloud and for running independently in datacenters around the world.

Message queues can now be created in Europe.
The service is easy to use, highly available, and requires no setup, no maintenance, and no ops. This translates into reduced complexity, greater speed to market, and increased reliability and scalability.

Iron.io services are being used to power social media, ecommerce, mobile, transportation, and industrial applications. With the release of IronMQ EU, developers in Europe now have increased flexibility and increased performance as they move from monolithic application structures and adopt more distributed and scalable service-oriented architectures.


Safe Harbor Compliance


Iron.io is certified to the US-EU and US-Swiss Safe Harbor Frameworks as set forth by the U.S. Department of Commerce. These frameworks govern the collection, use, and retention of personal information from EU member countries and Switzerland.
Specifically, Iron.io has certified that it adheres to the Safe Harbor Privacy Principles of notice, choice, onward transfer, security, data integrity, access, and enforcement. For more information, please refer to the relevant sections in our privacy policy and the associated links to the Safe Harbor specifications.


We Want Your Feedback


IronMQ service in Europe is available now, so you can sign up and get running in seconds. If you have comments or questions about the Iron.io European region, or wish to discuss private or enterprise plans, please contact us at support (at) iron.io.