
Monday, July 28, 2014

Iron.io Increases Message Size in IronMQ

Message Size Increases from 64KB to 256KB
Large message sizes are now available within IronMQ. We have increased the message size from 64KB to 256KB.

This increase was originally in response to some use cases around XML, but it also allows the service to handle nearly any messaging need. The increased sizes are available on Developer plans and above.

To try IronMQ for free, sign up for an account at Iron.io. You can also contact our sales team if you have any questions on high-volume use or on-premise installations.




Note that it's a good design practice to pass IDs in message payloads when dealing with large blocks of data such as images and files. Store the data in IronCache (which supports values up to 1MB) or in a service like AWS S3, and then pass the key as part of the message.
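This pattern is sometimes called the claim check: store the large payload out of band and send only its key through the queue. Here's a minimal sketch, with in-memory stand-ins for the blob store (IronCache/S3) and the queue (IronMQ):

```python
# Claim-check sketch: store the blob out of band, pass only its key
# through the queue. blob_store and queue are in-memory stand-ins.
import json
import uuid

blob_store = {}   # stand-in for IronCache or S3
queue = []        # stand-in for an IronMQ queue

def enqueue_large_payload(data: bytes) -> str:
    """Store the blob, then enqueue a small message holding only its key."""
    key = str(uuid.uuid4())
    blob_store[key] = data
    queue.append(json.dumps({"blob_key": key, "size": len(data)}))
    return key

def process_next_message() -> bytes:
    """Consumer side: read the key from the message, then fetch the blob."""
    msg = json.loads(queue.pop(0))
    return blob_store[msg["blob_key"]]

enqueue_large_payload(b"x" * 10_000_000)  # far larger than 256KB
assert len(process_next_message()) == 10_000_000
```

The message that actually crosses the queue stays tiny regardless of how large the underlying file is.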

Monday, July 21, 2014

Iron.io Releases Dedicated Worker Clusters

IronWorker Now Offers Dedicated Worker Clusters
Dedicated worker clusters are now available within IronWorker. Sets of workers can be provisioned to run tasks on a dedicated basis for specific customers. The benefits include guaranteed concurrency and tighter latency bounds on task execution.

This capability is designed for applications that have a steady stream of mission-critical work or organizations that have greater requirements around task execution and job latency.

The IronWorker service is a multi-tenant worker system that uses task priorities to modulate execution across millions of tasks a day. Each priority has a different targeted max time in queue: 15 seconds for p2, two minutes for p1, and 15 minutes for p0.

The average latencies are far less than the targets (for example, even most p0 tasks run in seconds). On occasion, when under heavy load, the latencies can stretch to these windows and beyond. If low latencies are critical or if usage patterns warrant allocating specific sets of worker resources, then we suggest looking at one or more clusters of dedicated workers. 
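To make the priority targets concrete, here's a small illustrative sketch (the helper function is ours, not part of the IronWorker API) that picks the lowest priority whose targeted max queue time still meets a given deadline:

```python
# Targeted max time in queue per IronWorker priority, per the post above.
TARGET_MAX_QUEUE_SECONDS = {
    2: 15,    # p2: 15 seconds
    1: 120,   # p1: two minutes
    0: 900,   # p0: 15 minutes
}

def lowest_priority_meeting(deadline_seconds):
    """Pick the lowest (cheapest) priority whose target meets the deadline."""
    for priority in (0, 1, 2):
        if TARGET_MAX_QUEUE_SECONDS[priority] <= deadline_seconds:
            return priority
    raise ValueError("deadline tighter than any shared-cluster target; "
                     "consider a dedicated cluster")

assert lowest_priority_meeting(1000) == 0  # generous deadline -> p0 suffices
assert lowest_priority_meeting(600) == 1   # 10-minute deadline -> needs p1
assert lowest_priority_meeting(60) == 2    # 1-minute deadline -> needs p2
```

Anything tighter than the p2 target is exactly the case where a dedicated cluster comes in.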

The way it works is that a set number of dedicated workers is allocated on a monthly basis. Additional capacity can be added on demand as needed (usually on a day-by-day basis, with or without advance notice).

Clusters can be in units of 25 workers starting at 100 workers. On-demand allocations are also typically provisioned in units of 25 although this can be adjusted as necessary. 
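The sizing rules above reduce to simple arithmetic; a quick sketch:

```python
import math

# Allocation rules from the post: dedicated clusters start at 100
# workers and grow in units of 25.
MIN_CLUSTER = 100
UNIT = 25

def dedicated_cluster_size(workers_needed):
    """Round a concurrency requirement up to a valid dedicated-cluster size."""
    return max(MIN_CLUSTER, math.ceil(workers_needed / UNIT) * UNIT)

assert dedicated_cluster_size(60) == 100    # below the minimum
assert dedicated_cluster_size(130) == 150   # rounded up to the next unit of 25
```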


A Few Use Cases for Dedicated Workers

Here are just a few use cases for dedicated workers.

Push Notifications

A number of media sites are using dedicated workers to send out push notifications for fast-breaking news and entertainment. These media properties have allocated a specific number of dedicated workers giving them guaranteed concurrency to handle the steady flow of notifications. They augment the set number by adding on-demand clusters in anticipation of large events or when breaking news hits. When a news event takes place, they queue up thousands of workers to run within the worker clusters. The dedicated nature of the clusters means they’re able to meet their demanding targets for timely delivery. 


Continuous Event Processing

Another use for dedicated workers is asynchronously processing a continual stream of events. This can take the form of offloading tasks from the main app response loop so as to benefit from concurrent execution and reduce response times to users. Several customers, for example, use IronWorker as part of a location check-in process. Each event might trigger several related actions, such as posting to Twitter or Facebook or, in the case of one customer, kicking off processes that return real-time location-based recommendations.

Another example involves sensors and other devices in Internet of Things applications. A continual stream of data inputs is sent to a message queue, and workers then either perform mass inserts into a datastore or process the inputs on the fly to create aggregate and derived values in near real time.
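The on-the-fly aggregation half of that pattern can be sketched as a worker maintaining running aggregates per sensor, rather than holding raw rows in memory. This is an illustrative stand-in for whatever a real consumer task would do:

```python
# Sketch of near-real-time aggregation: a worker consumes sensor
# readings and keeps running aggregates per sensor instead of
# storing every raw row.
from collections import defaultdict

class RunningAggregates:
    def __init__(self):
        self.count = defaultdict(int)
        self.total = defaultdict(float)
        self.maximum = defaultdict(lambda: float("-inf"))

    def ingest(self, sensor_id, value):
        self.count[sensor_id] += 1
        self.total[sensor_id] += value
        self.maximum[sensor_id] = max(self.maximum[sensor_id], value)

    def mean(self, sensor_id):
        return self.total[sensor_id] / self.count[sensor_id]

agg = RunningAggregates()
for sensor_id, value in [("t1", 20.0), ("t1", 22.0), ("t2", 5.0)]:
    agg.ingest(sensor_id, value)
assert agg.mean("t1") == 21.0
assert agg.maximum["t2"] == 5.0
```

Each reading is folded in with constant memory per sensor, which is what makes a continual stream tractable for a long-running worker.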

In these cases, it can make sense to use dedicated clusters. Even though standard IronWorker tasks will generally meet the performance requirements, dedicated clusters can provide added assurances that tasks will execute at a continual pace and with finer latency constraints.

 

Getting Access to Dedicated Worker Clusters

Making use of dedicated workers is as simple as passing in a dedicated cluster option when you queue a task. When tasks run, they'll be processed within the dedicated cluster.
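As a rough sketch of what that looks like, here's a hypothetical request builder for queueing a task onto a dedicated cluster. The endpoint shape and the `cluster` field name are assumptions for illustration — check the IronWorker API documentation for the exact parameters your account uses:

```python
# Hypothetical sketch of queuing a task onto a dedicated cluster.
# The URL and the "cluster" field name are assumptions, not the
# documented API -- consult the IronWorker docs before using.
import json

def build_queue_request(project_id, code_name, payload, cluster=None):
    """Build a request body for queueing one task, optionally on a cluster."""
    task = {"code_name": code_name, "payload": json.dumps(payload)}
    if cluster:
        task["cluster"] = cluster   # route the task to the dedicated cluster
    return {
        "url": "https://worker-aws-us-east-1.iron.io/2/projects/%s/tasks" % project_id,
        "body": {"tasks": [task]},
    }

req = build_queue_request("my_project", "send_push",
                          {"msg": "breaking news"},
                          cluster="my_dedicated_cluster")
assert req["body"]["tasks"][0]["cluster"] == "my_dedicated_cluster"
```

Without the cluster option the task simply runs on the shared multi-tenant pool as usual.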

To get access to dedicated worker clusters, check in with our sales team and we'll get you up and running in no time.

What are you waiting for? High-scale processing awaits.





To learn more about what you can do with a worker system and see more details on the use cases above, check out this article on top uses of IronWorker.

To try IronWorker for free, sign up for an account at Iron.io.


Wednesday, July 16, 2014

Iron.io Releases High Memory Workers

IronWorker Can Now Handle Larger Tasks
We're pleased to announce the availability of high-memory workers within IronWorker. This new capability will provide greater processing power to tackle even more ambitious tasks.

The added boost to the IronWorker environment is perfect for tasks that consume large amounts of computing resources – tasks that might include big data processing, scientific analysis, image and document processing, and audio and video encoding.


A High-Performance Worker System Gets Better

Worker systems are key for scaling applications and building distributed systems – whether it's handling tasks sent to the background, pre-processing data to speed up load times, scaling out processing across large segments of data, or consuming streams of continuous events. A good worker system can handle a variety of tasks and worker patterns and address the majority of the work.

A certain number of tasks, however, might not fit the typical worker system and therefore might need isolated setups with specific machine configurations. Examples might include processing large media files, doing computational analysis over large sets of data, or running other jobs that require greater machine resources, complicated code packages, or dedicated resources.

Memory issues can be elusive to address, especially in a worker system. Depending on the language, when a worker runs out of memory it can behave strangely – timing out (Node), segfaulting (Ruby 1.9), or simply dying without much indication (Ruby 2.1).

High-memory workers extend IronWorker's current capabilities so that you can pass it a greater set of application workloads. The early use cases we're seeing are for image and audio processing but you can use it for just about anything where larger in-memory resources will be a benefit.


More Memory and Faster Networking Speeds

The standard worker configuration provides 330 MB of memory and enough CPU power for almost all general application tasks. (This is especially true if work is split up across a number of workers and various worker patterns are employed such as master-slave or task-chaining.)
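To illustrate why splitting work up often avoids the need for more memory in the first place, here's a minimal chunking sketch — the chunking and the per-chunk work are shown locally, standing in for what would be one queued task per chunk:

```python
# Sketch of splitting a job across standard workers: each worker
# handles one chunk, so no single process holds the whole dataset.
def chunk(items, size):
    """Yield successive fixed-size chunks of a work list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def process_chunk(numbers):
    """Stand-in for the work one queued worker task would do."""
    return sum(n * n for n in numbers)

work = list(range(1, 101))
# Instead of one high-memory task over all of `work`, queue one task per chunk:
partials = [process_chunk(c) for c in chunk(work, 25)]
assert sum(partials) == sum(n * n for n in work)
```

When the work can't be split this way – one huge file, one large in-memory model – that's where the high-memory configuration below comes in.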

The high-memory worker configuration provides 1.5GB+ of RAM, which translates into much more in-memory processing and little to no swapping to disk. High-memory workers also provide faster I/O and networking capabilities, which means faster job execution, faster file transfers, and shorter run times.


Getting Started with High-Memory Workers

Using our high-memory clusters is as easy as passing in a hi-mem cluster option when you queue a task. When tasks run, they'll be processed within a high-memory cluster of runners. (The feature is just starting to roll out into production so we'll need to enable your account for access.)

To get started with high-memory workers, check in with our sales team and we'll get you up and running in no time.

What are you waiting for? High-memory awaits.





To learn more about what you can do with a worker system, check out this article on top uses of IronWorker.

To try IronWorker for free, sign up for an account at Iron.io.


Tuesday, July 15, 2014

Iron.io Adds Derek Collison as an Advisor – Former SVP/Chief Architect at Tibco, Architect of Cloud Foundry

We're happy to announce that we recently added Derek Collison to the Iron.io advisory board. Derek is Founder/CEO of Apcera and an industry veteran and pioneer in large-scale distributed systems and enterprise computing. He has held executive positions at Google, VMware, and TIBCO Software, and so is a great resource for technical and business insight.

Since starting Apcera, Derek has been dedicated to delivering composable technology for modern enterprises to innovate faster. He was previously CTO at VMware where he designed and architected the industry's first open PaaS, Cloud Foundry. Prior to that, he was one of two Technical Directors at Google where he co-founded the AJAX APIs group. He also spent over 10 years at TIBCO Software, where he designed and implemented a wide range of messaging products, including Rendezvous and EMS. As TIBCO’s SVP and Chief Architect, Derek led technical and strategic product direction and delivery.

Aside from his wealth of knowledge and experience in the tech industry, Derek is also a big proponent of Go and speaks often on its use within cloud systems. (Here's a slide presentation from his talk at the recent GopherCon conference.) We're also big users of Go and have written several articles on the subject (you can find them here and here), so it's a great meeting of like minds.

Our goal has always been to provide the best cloud infrastructure services for message queuing and task processing. As we grow the company and expand our offerings into larger organizations, we're grateful to have not only a strong team but also a solid set of investors and advisors to help guide the way.

As someone who has done much in technology and in high-performance cloud systems, Derek Collison is a great addition to our advisory board and we couldn't be more pleased to have him as a part of our team.