

Friday, May 30, 2014

Real-time Logging for IronWorker with Logentries (repost)

Real-time logging of worker tasks is a popular feature in IronWorker. We provide an interface to third-party logging components and services, which allows you to send your log output to any syslog endpoint and see the output in real time.

Prior to this, you had to wait for your task to finish before being able to view a log file within the IronWorker system. To capture and view the data, you can run your own syslog server or you can use logging services such as Papertrail, Loggly, and Logentries.

Quentin Rousseau wrote a great post that shows the steps to connect with Logentries, a leading SaaS service for log management and real-time analytics. It's a simple process that takes about as long to read as it does to get connected and start viewing log files.

Here's an excerpt from the post.
When you are executing a job with the IronWorker service and writing logs to STDOUT, you have to wait until the job has finished to read the log file. Not very convenient if you want to see your logs in real time, right?
The Iron.io blog wrote a post about how to set up real-time logging with Papertrail.
This post shows how to set up real-time logging with Logentries.

About the Author/Developer

Quentin Rousseau is a full-stack engineer working at DOWN in San Francisco, CA. He's an IT engineering graduate of Télécom Bretagne (Brest, FR) and worked at Onefeat for two years prior to his current role. (@quentinrousseau)

Tuesday, May 13, 2014

Schedule Email with SendGrid and Iron.io (repost)

Using IronWorker to Schedule Emails in Node.js, Ruby, and PHP

Nick Quinlan from SendGrid just put out a really nice post on using IronWorker to schedule email with SendGrid.

And what's even better is he shows how to do it in three languages:

  • Node.js
  • PHP
  • Ruby

SendGrid offers an industrial-strength service that solves the challenge of email delivery by delivering emails on behalf of companies. They eliminate the complexity of sending email while providing reliable delivery to the inbox.

Given that the goals of SendGrid and Iron.io are similar – to provide highly reliable services that make it easy for developers to get work done – it only makes sense to show how they can be used together.

Here's an excerpt from the post:
There are tons of reasons to schedule an email: maybe you want to send an email daily or weekly, or perhaps you just want to send an email in the morning rather than the evening. Luckily, with Iron.io and SendGrid this is easy. Iron.io is a cloud platform that gives developers tools to solve many common problems. One of these tools is IronWorker, a way to asynchronously run code in a number of languages. Workers are run by “tasks” which can be queued, scheduled in advance, or even repeated. Tasks can contain JSON payloads which the worker can then process.
What's great about the post is that it walks you through the steps and includes the scripts and code blocks needed. (He covers three different languages, so the code appears in triplicate in spots.)

We love posts like this at Iron.io.

Nicely done, Nick.

About the Author

Nick Quinlan is a SendGrid Developer Evangelist based out of San Francisco. He works to make developers lives easier by providing them with the help they need to grow their ideas. Give him a shout, @YayNickQ.

Friday, May 9, 2014

Laracon, Laracast, LaravelSF – Oh My!

Laracon 2014
Laracon is invading New York City on May 15-16th, and we're calling all Artisans!

Laravel is a modern PHP framework built for large enterprise applications as well as simple JSON APIs. It's possible to write powerful controllers or slim RESTful routes. Laravel is the perfect framework for jobs of all sizes.

Laracon 2014 is the place for all things Laravel. Iron.io is a sponsor and will be there providing our support as well as having a bit of fun.

Tweet #IronLaracon to @yaronsadka and/or @stephenitis to connect with us and get day-of-event updates.

Pre-Conference Drinkup (Wed, 5/14)
What's a conference without a bit of socializing? Iron.io is teaming up with Laracasts and SendGrid for some pre-Laracon fun and networking. Join us at the Whiskey Tavern for drinks, appetizers, and an all-around good time.

Pre-Laracon Drinkup at Whiskey Tavern
Come hang out with Yaron Sadka and Stephen Nguyen from Iron.io, Jeffrey Way from Laracasts, and a host of developers and friends in the community.

Get details on Eventbrite!
Iron Q/A (Thursday/Friday, 5/15-16)
11:10-11:40am – During the break, the Iron.io team will be available to answer questions about our platform and give a quick walkthrough of IronMQ and IronWorker. Come early and receive some sweet Iron swag!

LaravelSF is Starting Up
Join the local San Francisco Laravel community and meet fellow developers at LaravelSF, the meetup group in the Bay Area specifically for Laravel.

The first meetup will be on May 27th. Come join us and get in on the ground floor of the next revolution in PHP.

Tuesday, May 6, 2014

IronMQ Long Polling

Another one of our most requested features is now out in the wild: long polling. Long polling reduces the number of requests you need to make on an empty queue by not returning immediately when there are no messages available. Instead, IronMQ will wait until a message becomes available or until the "wait" time has passed (maximum 30 seconds).

This feature is really easy to use: just add a "wait" parameter to your GET messages request. The wait parameter is a number between 0 and 30, representing how many seconds to wait. For example, using the Ruby client:
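A minimal sketch using the iron_mq gem (the queue name is a placeholder, and credentials are assumed to come from an iron.json file or environment variables):

```ruby
require 'iron_mq'  # gem install iron_mq

# Credentials are read from iron.json or the IRON_PROJECT_ID /
# IRON_TOKEN environment variables; "my_queue" is a placeholder.
client = IronMQ::Client.new
queue  = client.queue("my_queue")

# Long poll: blocks for up to 30 seconds waiting for a message
# to arrive instead of returning immediately on an empty queue.
msg = queue.get(wait: 30)
puts msg.body if msg
```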

Other clients/languages will make use of a similar parameter.

You can read more about long polling in the API Documentation.

Bonus Feature:  Get-Delete as One Operation

We've also added another often requested feature. You can now get and delete messages in one request by passing "delete=true" in the URL of your request.

Typical usage of IronMQ is to get a message in one request and then delete it in a separate request to acknowledge that you're done with it. In some instances, however, developers do not need the assurance that a message has been successfully processed and want to avoid the extra delete request, and so now they can.

Using the Ruby client, it would look like the following:
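A sketch along these lines, assuming the iron_mq gem passes the option through as the `delete=true` query parameter:

```ruby
require 'iron_mq'  # gem install iron_mq

# Placeholder queue; credentials from iron.json or the environment.
client = IronMQ::Client.new
queue  = client.queue("my_queue")

# Retrieves the message and deletes it in the same request,
# skipping the separate acknowledgement step.
msg = queue.get(delete: true)
puts msg.body if msg
```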

Warning: Don't use this if you need to ensure a message has been processed. If that's the case, stick with the two-step get-delete paradigm.

As always, let us know what you think in the comments below or shoot us a note via our support channel.

Friday, May 2, 2014

Building an Analytics Engine using MongoDB, Go, and Iron.io

Building a Relevancy Engine Using MongoDB and Go
On the heels of a recent post on powering intelligent traffic systems using MongoDB and Iron.io comes a presentation on building an analytics engine using MongoDB, Go, and Iron.io.

William Kennedy gave a presentation on his recent work at GopherCon and friends of ours from Sourcegraph were kind enough to write up details on the talk.

Here's an excerpt from their summary.
The search for a solution
The first version that Bill built used a SQL database. But whenever they wanted to make a new feed, they’d have to build a new table, populate it with data, etc. That would take a long time. This solution wasn’t working out for them. 
They needed a system that could easily and quickly create dynamic feeds based on all of the data. They also wanted to be able to write rules to alter the overall content of a user’s feed quickly (for example, to create custom Valentine’s Day suggestions). 
Their system needed to allow them to: 
  • Write rules that can be updated and applied at runtime
  • Pass variables to filter and pinpoint relevance
  • Use data aggregation techniques to filter and group data
  • Build tests around aggregated datasets
  • Build tests against multiple aggregated datasets
  • Publish data from offer/deal feed and other internal feeds 
After evaluating a number of other tools, they settled on Go, Linux, MongoDB, beego, mgo, and Iron.io. (They chose beego over other Go web frameworks because they liked its MVC architecture.)
They used a denormalized schema for their feed data and kept it updated using workers running tasks on Iron.io.
Q: How well do you think this will scale? A: I went with MongoDB because of its scalability. It is scaling much better than the previous SQL database solution. Iron.io is also super helpful in allowing this system to scale easily.
To read the full summary, go here.

Here's the video:


About the Developer
William writes Going Go Programming and is one of the authors of Go in Action, so he knows what he's talking about when it comes to using Go. His posts are well written and full of great technical detail, so we highly recommend spending time with them.

Thursday, May 1, 2014

Going (Almost) Serverless with Iron.io

The NoisyTwit App
This is a guest post by Dieter Van der Stock, a full-stack developer in Antwerp, Belgium. In it, he talks about his experience building NoisyTwit and how the combination of HTML/JavaScript, PHP, Iron.io, and OAuth.io made for a simple but scalable solution.

Modern app development doesn't need to be complicated, as this post details.

Building NoisyTwit in a Few Easy Steps

A few weeks ago I set out on a project I've been wanting to make for a while now. The idea is both silly and simple – I wanted to know who the noisiest people in my Twitter feed were. Usually you know who these are, obviously, but with accounts that retweet a lot it can be difficult to estimate.

The idea for the app was more of an excuse though. What I really wanted to do was create a back-endless web application. A service that performed some logic, without me having to actually run any server.

I'm a back-end engineer by trade, so it's not that I'm afraid of servers or Apache configs or anything like that. But a server is nonetheless a moving part in your stack, and I wanted to get rid of them wherever possible. The fewer moving parts there are in your stack, the easier the maintenance and the lower the chance you'll have to be called in to fix things.

Being familiar with the Iron.io stack, I figured IronWorker could do the 'heavy lifting', that is, get the tweets, analyze them, and produce results. The front-end could be pure HTML/JavaScript hosted on Amazon S3, which means no moving parts needed for the front-end either.

The OAuth communication with Twitter was the last piece of the puzzle, and that's where OAuth.io came in. It's a great library that handles the complex OAuth details by acting as a proxy. All you need to do is call their JavaScript methods and you're pretty much good to go.

(Before we go too far, yes, "serverless" may be somewhat of a misnomer because servers are still needed. It's just that by using cloud services, I don't have to mess with them. Somebody else does and that's fine by me.)

The Application Flow

For every user trying the service, a number of things need to happen in sequence. Here's what I came up with for the event handling:
  1. On the front-end: let the user authorize NoisyTwit to read their Twitter information
  2. Push the access keys we receive after authorization to an IronWorker (via a webhook)
  3. In the worker: get the tweets from their timeline and analyze them (count the times any user showed up in the timeline)
  4. Push the results to an IronCache slot
  5. Again in the front-end: show the results to the user
Since the front-end can't know when the work is done, it polls IronCache periodically and only renders the results once there are any. An AJAX spinner is shown to let the user know that something is actually happening.
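The counting in step 3 could be sketched roughly as follows. This is a hypothetical illustration, not code from the app: the tweet shape (`:author`, `:retweeted_by`) is made up for the example, and retweets are credited to the retweeter, since those are the "noisy" accounts the app is after.

```ruby
# Tally how often each account shows up in a timeline,
# attributing a retweet to the account that retweeted it.
def noisiest(tweets)
  counts = Hash.new(0)
  tweets.each { |t| counts[t[:retweeted_by] || t[:author]] += 1 }
  counts.sort_by { |_, n| -n }  # most frequent first
end

timeline = [
  { author: "alice" },
  { author: "bob", retweeted_by: "alice" },
  { author: "carol" }
]
puts noisiest(timeline).inspect
# alice appears twice (one tweet, one retweet), so she tops the list
```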