

Tuesday, December 31, 2013

#IronHack Showcase

During these holidays, we here at Iron.io challenged programmers everywhere (sipping eggnog, snowed in, stuck in airports, hanging out in sweatpants, or mellowing out to a third marathon of Home Alone) to hack on a project for the holidays.

These are some of the projects we wanted to share with the public!

HN Sentiment Analysis
Searches Hacker News for posts containing a keyword you enter, parses the text of each post (if any) and its comments, then analyzes them for positive/negative keywords to gauge the community's sentiment toward a given topic.
Martin Gingras

Iron TicTacToe
Simple hackathon project to implement "parallel game tree search" for TicTacToe using IronWorker, IronCache, IronMQ, and SendGrid.
David Jones

IronUnfollow
The purpose of IronUnfollow is to declutter the user's Twitter account.
Matthias Sieber

Ping Transmit
This past summer I was maintaining a test environment where the most reliable thing was the internet connection. I started with a ping script that stored pings locally, in a text file.
Bill Eichin

GitHub Monitor
Adds a GitHub project to monitor.
Borja Roux

An online game of Telephone Pictionary. A player is texted by the previous player and prompted to submit a phrase or a drawing based on what the previous player wrote or said.
Feather Knee

An innovative bookmarking 2.0 service that lets you 'gather' links as you browse the web.

(Use "gather" as the invite code)
S Sriram

Remote judging is scheduled to conclude at the end of this week, with results to follow.

How Kuhcoon uses IronWorker and Node.js to improve social media advertising

[This post is part of a series of customer success stories that Chad Arimura is putting together highlighting key customers and how they are using Iron.io to do some pretty big things.]

Kuhcoon offers a suite of tools built around the management and optimization of social media advertising for businesses. They provide two tools – a basic social media management application that is used for managing, publishing, and tracking earned social media campaigns, and an intelligent Facebook Ads Platform that lets users easily create and manage paid social ad campaigns while receiving automated help and spending optimization. Both services are growing in popularity and require constant account updates and voluminous data collection.

Charles Szymanski
CTO, Kuhcoon
Chad Arimura caught up with Charles (CJ) Szymanski, CTO of Kuhcoon, to talk about his challenges and how Iron.io helped him build a reliable and scalable worker system. Here's what CJ had to say:

Tell us about the challenges you were facing with Node.js worker queues.

One of the major problems we had was finding or developing an easy-to-implement solution for creating worker queues using Node.js. All of the open-source worker queues we could find were unstable or performed poorly. We needed a way to create a scalable data-processing backend without sinking hundreds of hours of development time into building our own. Compared to IronWorker, all other solutions were unstable and untrackable.

Why choose Iron.io?

Iron.io provided an easy-to-use dashboard to track workers, errors, program speed, and other metrics. This improved the development process for workers and allowed for a foolproof product backend that could easily be managed by a small team. With a SaaS product in the digital marketing space, companies depend on the reliability of code, seeing as one mistake could mean a massive marketing disaster. Iron.io met this reliability standard and also provided email updates and log data to instantly track issues and fix any problems.

How does IronWorker help you process social data?

IronWorker is used to collect and process data from numerous outside sources such as Facebook, Twitter, Google, and other social marketing data sources. It then processes and saves the data to be used on the front end.

The system is controlled by a single worker that is scheduled to continually run and schedule other smaller workers to perform specific tasks at different time periods. This set up has provided a way to easily collect millions of data points daily without a single issue.

How much data do you process and how does Iron.io help you scale?

Kuhcoon now processes thousands of user accounts daily, collecting and combing through millions of data points. The Iron.io platform allows our development team to develop, test, and push production worker code in a fraction of the time that older systems used to take.

Kuhcoon Improves Social Marketing
Our worker processes use Node.js to make asynchronous calls to multiple API endpoints at once and can now collect hundreds of data points during a single worker cycle that takes less than 2 seconds.
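The concurrent-call pattern CJ describes can be sketched in a few lines of Node.js. This is an illustrative sketch only; the endpoint URLs, data shapes, and helper names are hypothetical, not Kuhcoon's actual code:

```javascript
// Illustrative sketch: fan out requests to several endpoints at once,
// then merge the results into one list of data points.
async function collectDataPoints(fetchJson, endpoints) {
  // Fire every request concurrently and wait for all of them together.
  const perEndpoint = await Promise.all(endpoints.map((url) => fetchJson(url)));
  // Each endpoint returns an array of data points; flatten into one list.
  return perEndpoint.flat();
}

// A stubbed fetcher stands in for a real HTTP client here.
const stubFetch = async (url) => [{ source: url, value: 1 }];
```

Because the calls run concurrently, the worker cycle takes roughly as long as the slowest endpoint rather than the sum of all of them.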

With the scalability of the platform and the ability to run thousands of concurrent workers at any given time, Iron.io truly does deliver the perfect asynchronous worker platform.

Nicely said, CJ.

To learn more about how Kuhcoon can help your business manage its social media advertising, visit the Kuhcoon website.

To learn more about how IronMQ and IronWorker can help your app effortlessly scale to thousands of concurrent workers, please visit Iron.io today.

Friday, December 27, 2013

How a simple Node.js project turned into Iron Scheduler (guest post)

This is a guest post from David Hessing, Director of Data Analytics at Appirio, on how a simple project to teach himself Node.js turned into Iron Scheduler, a powerful scheduling tool built on top of IronWorker. 

The Power of Task Processing in the Cloud

[David Hessing] I remember coming across Iron.io's services and immediately being intrigued. IronWorker in particular looked pretty awesome. While I had come across other cloud-based message queueing and caching tools, I hadn’t seen anything that was a pure cloud task processor.

The idea that I could fire off independent jobs into the cloud where they would run, where I could scale them up infinitely and in parallel, and where I would never have to worry about underlying resources is incredible. 

The potential to break away from a tight app structure and move to something more distributed also appealed to me. IronWorker would force me into good code design because my tasks had to be stateless and entirely self-contained.

Building a Flexible Scheduler in Node.js

When I started working on an independent side project, I looked at how I might leverage the Iron.io toolset. (The project started as a way to teach myself Node.js. Now I’m actually using it, but that’s another story.) This project needed a recurring job-scheduling component, so IronWorker seemed a natural fit.

Reading through the documentation, I came across recommendations for how to design and schedule workers. As the docs acknowledge, the built-in scheduling functionality in IronWorker is pretty basic and any complex scheduling requires custom code.

For my project, I wanted jobs to run only at selected hours of the day, and at those particular hours to run multiple times in quick bursts. I obviously needed some custom scheduler logic, and that led me to create Iron Scheduler, a simple tool that can handle some of these more complex scheduling scenarios.

How Iron Scheduler Works

At its core, Iron Scheduler is pretty straightforward: whenever it runs, it looks at a set of scheduled tasks that you give it. Each task has three main parameters associated with it: first, a regular expression defining the schedule; second, a number N defining the number of instances to schedule; and third, an interval specifying the delay between each one.

More Granular Scheduling
(written in Node.js on IronWorker)
To decide whether a task should be scheduled, Iron Scheduler simply checks the regex against the current UTC time string. If it matches, Iron Scheduler will schedule the task to run N times in IronWorker, with the nth instance delayed by n times the specified interval.

So, for example, if the number is set to 3 and the interval is set to 20, Iron Scheduler will queue three instances of the task: the first with a delay of 0, the second with a delay of 20, and the third with a delay of 40.
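The matching-and-queueing logic described above can be approximated in a few lines of Node.js. This is a simplified sketch only; the parameter names and the exact time-string format are assumptions, not Iron Scheduler's actual code:

```javascript
// Simplified sketch of the matching logic: given a list of tasks and the
// current time, return { name, delay } pairs that a real scheduler would
// hand off to IronWorker.
function scheduleTasks(tasks, now) {
  const timeString = now.toUTCString(); // e.g. "Tue, 31 Dec 2013 09:05:00 GMT"
  const queued = [];
  for (const task of tasks) {
    // A task fires only when its schedule regex matches the current time.
    if (!new RegExp(task.schedule).test(timeString)) continue;
    // Queue N instances, the nth delayed by n times the interval.
    for (let n = 0; n < task.count; n++) {
      queued.push({ name: task.name, delay: n * task.interval });
    }
  }
  return queued;
}

// A task that runs 3 times, 20 seconds apart, during the 09:00 UTC hour:
const tasks = [{ name: "sync", schedule: " 09:", count: 3, interval: 20 }];
```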

Iron Scheduler can itself be run as a worker on IronWorker, or you can use it directly within your own Node.js projects. More detailed documentation about all the parameters and how Iron Scheduler runs is available on the project page.

Final Thoughts

It's worth mentioning that Iron Scheduler does have a few limitations, the main one being that there is currently no support for automatic timezone changes. So if, say, you need a job to always run at 9:00 AM standard time, you'll need to remember to adjust the scheduling regex when daylight saving time rolls back around.

Iron Scheduler has helped me leverage the power of IronWorker, and I hope it’s of use to others. It's open source, so I certainly welcome enhancements and fixes!

About the Author
David Hessing is Director of Data Analytics at Appirio. He started his career designing and writing back-office trade accounting logic at Bridgewater Associates. He joined TopCoder in 2008 and eventually assumed the role of Chief Architect. (TopCoder was recently acquired by Appirio.) In addition to managing delivery on TopCoder's largest accounts, he developed and evolved many of the processes and tools that comprise the TopCoder platform. David's current focus is on taking TopCoder's algorithm competitions to the next level — stay tuned in 2014 for amazing stuff.


Part 2: Using Iron Scheduler (excerpt)

This is an excerpt from the project page on BitBucket. The scheduler can be uploaded and run as either a straight worker task within IronWorker or as an NPM (Node Packaged Module). 

Using as a Worker
To use as a worker, create and upload Iron Scheduler as its own task in IronWorker. Then schedule it to run as often as you like to check whether it should queue the tasks you configured.

Friday, December 20, 2013

The Holiday Hack

In honor of our hackathon post, we're hosting a holiday hackathon.
What we ask of you: build something useful, beautiful, informative, or just plain cool.

Why Mentor Graphics replaced SQS with IronMQ

[This post is part of a series of customer success stories that Chad Arimura is putting together highlighting key customers and how they are using Iron.io to do some pretty big things.]

Mentor Graphics [NASDAQ: MENT] is a leader in electronic design and automation software. They enable companies to develop better electronic products faster and more cost-effectively. Their innovative products and solutions help engineers conquer design challenges in the increasingly complex worlds of board and chip design.

Chad Arimura recently spoke with Keith Childers, Principal Architect at Mentor Graphics, about why they chose to replace Amazon's SQS with IronMQ. Here's what he had to say.
Keith Childers
Mentor Graphics

What is your group responsible for?

Our group is responsible for all customer-facing web properties for Mentor Graphics Corporation, a leading software and solutions provider in the EDA space, including the company's main corporate and community sites.

What are you doing and where does IronMQ fit in?

In our current architecture, we launch infrastructure on behalf of customers using Amazon CloudFormation. The CloudFormation service broadcasts status update messages through Amazon’s SNS message publisher service, which dispatches the messages to an IronMQ push queue. The IronMQ push queue then uses the fanout pattern to deliver the messages to our various listeners in real time.

Why did you switch from SQS?

While Amazon SQS is generally reliable and has acceptable performance, it doesn’t support FIFO ordering or guaranteed once-only delivery. We had spent a couple dozen developer hours identifying duplicate messages, but the lack of FIFO was seriously hobbling those efforts, so we began looking for another messaging vendor. Additionally, we were polling Amazon SQS, which resulted in a lot of wasted resources. Switching to IronMQ push queues has allowed us to fan messages out directly to our various listeners in real time.

How did the switch from SQS benefit Mentor?

IronMQ thus far has shown far lower latency than SQS. In addition, we’ve been able to redirect “workaround” effort into real development, to the tune of ~30-40 developer hours, accelerating our project roadmap. We’ve also had other project teams adopt Iron.io services since we scaled up to Iron.io's Professional Tier plan, as we now have a very large allocation of API requests and extremely fast message processing with the isolated endpoints. Our initial data shows that even with messaging traffic traversing from us-west-2 to us-east-1, messages are delivered anywhere from 30 to 60% faster in IronMQ than with Amazon SQS.

Any final thoughts?

Ultimately, we have a much higher degree of confidence that we are getting the shortest possible wait times for our customers, and that we are not messaging them redundantly or prematurely. In the end, our customers' satisfaction is what technology choices are all about, right?

That's exactly how we feel too, Keith!

To learn more about IronMQ push queues and our Professional Tier plans, please visit our website or give us a call (888-939-4623). We're also happy to set up a time to talk about our enterprise offerings and other full service capabilities.

Tuesday, December 17, 2013

Hackathons: Beyond the Prizes

Here in San Francisco, hackathons are commonplace; you can find one most every weekend. The basic premise of a hackathon is to show up, build an app in 24-48 hours, and go home. All food and drink is taken care of, and sleep is frowned upon, though a couple hours of nap time is suggested. The draw of hackathons, other than the lack of sleep and free food, is the prizes you can win, which range anywhere from a few hundred dollars to a million.

For those of you looking to attend a hackathon for the first time, or even you savvy veterans that are looking for another win, here are a few things I’ve picked up during my time at various events:

  1. Ignore the prize(s)
  2. Use pull requests
  3. Test out TDD
  4. Talk to people, not just yourself
  5. Learn a new technology
Many of you will call me crazy for this list, but read on and then cherry-pick the ones you want to try out.

1. Ignore the prize(s)

Seriously, don’t think about prizes during the hackathon.
‘But Yaron! That’s why I go.’
Is it? Or would you rather build a killer app and win the thing? It sounds nuts, but you’ll probably build a much better app if you don’t revolve it around a specific technology. Too often we steer ourselves toward the money even though the odds are stacked against us and we don’t love the idea wholeheartedly.

At the Launch Hackathon we originally went in with an idea to show users what was in stock nearby, because we wanted to win $5000 from Kohl’s. We weren’t set on the idea, but there was $5000 on the line, so we started wire-framing. Right before we were ready to begin coding we discovered… even our layouts were the same. Game, set, and match. Our excitement was depleted and we scrapped everything. A few minutes later we came up with a new idea and never looked back.

Our team @ Launch Hackathon (L to R): Yaron, Fab, Nate
Similarly, at another hackathon, I was on a team that built a stock market for NFL rookies. Venmo was present, so we decided to implement it as a way for users to transfer funds easily. Integration was taking a little too long, so we went with a simpler product for the purposes of the demonstration. We knew we’d lose the Venmo prize, but we didn’t care; we could taste victory.

In the end, without even realizing what we’d done, we had won the $500 Venmo prize.
‘What? How?’
We had used Braintree, Venmo’s parent company. Boom!
The main reason I say to forget the prize is that judging comes down to an individual, usually the company’s evangelist, and it’s all subjective. Did they like your app or not? Does your app measure up to the ones they’ve seen in the past? Is there a real use of their service, or did you integrate it just for the prize? Who knows what is going on up there. If you build something you love, your passion will show, and the excitement you exude will be contagious.

2. Use pull requests

The first hackathon I ever attended was the Emirates Hackathon. It taught me the most and showed me the value of using pull requests. Profession aside, when working under a time constraint we tend to make mistakes; when all you want to do is push code, you tend to ignore the details and best practices.
Basic breakdown of a pull request (i.e. PR)
The main benefit of using pull requests is that the more eyes that see the code the better the quality. With a five person team, excess energy drinks, and everyone freely pushing changes we were bound to run into errors — and we did.

Our Emirates Hackathon team (L to R): Jared, Dexter, Yaron, Nate, Stephen
The lack of communication between team members caused errors to sprout left and right. On the front end, element ids and classes were constantly being overridden, driving the JavaScript to throw errors. With pull requests, someone must read each line to understand what is going on and can spot an error that was glazed over due to tunnel vision. A second set of eyes is the first line of defense against errors reaching the master branch, and it also reduces the likelihood of code being overwritten by a teammate or of work being repeated. Discussion on a pull request is valuable too; communication is rarely a bad thing.

3. Test out TDD

After pull requests, tests are your second line of defense. Yes, they can be time-consuming, but features are notorious for breaking right before presentations thanks to that last-minute push to production.

So close, just one last push…
Talk to anyone in the industry and they’ll tell you that tests have saved them more than once, not to mention all the time saved by not having to hunt for the bug.

In fact, at a hackathon, the biggest return a test provides is not having to spend time finding the error. Sure, you’ll spend a couple minutes writing it, but the future payoffs are enormous when you're in a crunch. Not only that, but you’ll get faster at testing the more you do it. Testing will become second nature and your boss will thank you for it. *sniff* *sniff* I smell a raise!!

Never used TDD or have trouble writing tests? Pseudo-code your goals first, then build the few tests that would be most beneficial. Precision isn’t necessary; it will come with time. For you Rails people, I use a gem called SimpleCov that gives you a quick-and-dirty breakdown of your test coverage.

4. Talk to people

Hackathons are as much about building new connections as building a new product. It’s so easy to get deep into your code and ignore everything around you, but take a break once in a while and chat up your fellow programmers and event sponsors. Talk about what you’re building and the technologies you’ve used. By sharing this information you’ll get tips, suggestions, and feedback that you wouldn’t have gotten otherwise. You might even be able to integrate a sponsor’s technology and potentially win their prize!

While at the Launch Hackathon, we used import.io to scrape data from the web. At the end of it all I shot an email to Andrew Fogg, their CDO, and told him about what we had built with the help of his product. He happened to be at the hackathon and asked us to demo the product for him. He loved it, invited us to their offices for lunch, and blogged about our app. In a sense, by walking around and talking to people, we’d created our own prize!

5. Learn new technology

Hackathons are a perfect place to try something new. The stuff you build will likely never be used, and if it is, a rebuild is probably in order. Take this time to learn a technology you’ve had your eye on or one you have little experience with. The best way to learn something new is through immersion and pressure, and I think 24 hours to build an app definitely qualifies.

My friend Robbie decided to fly solo during a hackathon and built an app in Node.js. He’d had Node on his radar for some time and decided the hackathon was as good a time as any. After 24 hours he pitched his final product and, though it wasn’t the prettiest, it worked. When all was said and done, he attracted the most attention at the end of the event and had people lining up to talk to him.
Pick no more than one or two new technologies to work with, ideally ones that someone else on the team has some experience with. Though not a large challenge, I chose to learn CoffeeScript during my last hackathon and now use it regularly. Why? Here’s an example:

Left: CoffeeScript — Right: JavaScript

Side note: Integrating a sponsor’s product doesn’t count as new technology if you won’t use it again after the hackathon. For example, I rarely build apps for wearable devices, so learning Plantronics’ technology wouldn’t be of much use beyond the 24 hours. However, learning to use IronWorker or Twilio has benefited me on many projects.


So, all in all, build something you would get behind wholeheartedly. Put a little more effort into making sure it functions properly, tell people about it, and don’t focus too short-term. You’ll have a lot more fun and get some amazing training in. You might even build a solid product that becomes a new side project. By doing things a bit differently, you’ll find you can get much more out of hackathons than just free food and no sleep. Try it!

Thanks for reading! You can find me on Twitter - @yaronsadka

In honor of this post, Iron.io is hosting a holiday hackathon! The hackathon will begin on Monday, Dec. 23 and end on Monday, Dec. 30. Click on the link for more info and registration!

Tuesday, December 10, 2013

How Munzee Keeps Gameplay Real with IronMQ

This post is part of a series of customer success stories highlighting key customers and how they are using Iron.io to do some pretty big things.

Munzee is a 21st century scavenger hunt that utilizes iOS, Android, and Windows Mobile apps to create a modern-day, high-tech adventure game. Munzee QR Codes and NFC Tags are captured or deployed by players allowing them to earn points, unlock badges, and compete in clans against each other.
Scott Foster
Co-Founder / VP of Technology

What problem did Munzee face before Iron.io?

An initial concern with the launch of Munzee was an increasing demand for background processing to handle tasks secondary to active gameplay. We needed a way to handle outbound email, validation checks, and other asynchronous processing associated with gameplay. This needed to be done without slowing the user down in their effort to capture and deploy munzees.

Our previous solution required a tremendous amount of time and effort to maintain, optimize, and monitor.  This eventually caused reliability issues for our small team whose main focus was to provide a great gaming experience for our users.

Where does Iron.io fit and how does it help?

While exploring solutions, we discovered IronMQ and instantly fell in love with it! Being able to offload messages via a RESTful API to a third-party provider has saved our company time, money, and the headaches and hassles of constant maintenance.

Seeing all of our messages in real-time allows us to scale when we need to. IronMQ has great client libraries for processing and delivering messages, which is always a plus for any cloud-based messaging queue service.

What have the results been like?

On Cyber Monday of this year (Dec 2nd, 2013), we put IronMQ to the real test with our Cyber Monday Holiday sale. We anticipated a heavy increase in traffic with sudden spikes, but weren't confident in which queues would be used the most. When we “opened the floodgates” for our Holiday sale, we went from a few messages at a time to hundreds of messages in a matter of seconds.

Queuing up all of these messages has let us scale only the resources we need to process the gaming workloads. This gave us tremendous time and cost savings, while still allowing us to get all the messages and events processed as soon as possible.

Any final thoughts? 

I tip my hat to you IronMQ! You’ve allowed Munzee access to better insight of what we need to manage and keep running server-wise. Now we can spend our time focusing on the important things – like getting people out in the real world to have fun with the Munzee game!

For those interested in exploring the world in a whole new way, you can play Munzee by downloading the free app from the App Store, Windows Store, or Google Play. (@munzee)

Thursday, December 5, 2013

Message Queues, Background Processing and the End of the Monolithic App (reposted from Heroku blog)

Here's a post of ours on message queuing and background processing that we published on the Heroku blog the other day. It's definitely worth checking out if you believe, like us, that distributed multi-tier architectures are the future of production-scale cloud applications.
Guest Post on Heroku
Message Queues, Background Processing and the End of the Monolithic App
Platform as a Service has transformed the use of cloud infrastructure and drastically increased cloud adoption for common types of applications, but apps are becoming more complex. There are more interfaces, greater expectations on response times, increasing connections to other systems, and lots more processing around each event. The next shift in cloud development will be less about building monolithic apps and more about creating highly scalable and adaptive systems.
Don’t get us wrong, developers are not going to go around calling themselves systems engineers any time soon but at the scale and capabilities that the cloud enables, the title is not too far from the truth.
Platforms as Foundation
It makes sense that platforms are great for framework-suited tasks (receiving requests and responding to them), but as the web evolves, more complex architectures are called for, and these architectures haven't yet been packaged as neatly as all-encompassing, framework-centered applications.
By way of example, apps are rapidly evolving away from a synchronous request/response model toward a more asynchronous, evented model. The reason is that users are demanding faster response times and more immediate data, and more actions are being triggered by each event.
Rather than thinking of the request and response as the lifecycle of your application, many developers are thinking of each request loop as just another set of input/output opportunities. Your application is always-on, and by building your architecture to support events and process responses asynchronously and highly concurrently, you can increase throughput and reduce operational complexity and rigidity substantially.
Evented Flow: Shorter Response Loops + Async Processing and Callbacks

 For the rest of the article, go here >>

Monday, December 2, 2013

GoSF Talk: Travis Reeder on Go after 2 Years in Production

Travis Reeder, Iron.io's CTO, gave a talk recently at a GoSF meetup on Iron.io's use of Go. We're big proponents of Go, using it for all our backend components, and we've written and talked about our experiences in a number of places (here, here, and here). Travis' talk is a compilation of a number of those thoughts and contains a good rundown of why Go works for us.
Golang Series
Talk 1: Travis Reeder, from Iron.io, provides some in-depth details on why Go turned out to be the right choice for the Iron.io backend. He talks about issues related to performance, memory usage, concurrency, reliability, and ease of deployment, and goes through key areas in the architecture where Go made the difference.
Talk 2: Quinn Slack, from Sourcegraph, shows off a tool for navigating GitHub to everywhere a Go function is used or a Go interface is implemented. He also shows how to use it to make your own open source projects better. These talks were recorded at the GoSF meetup at Cisco SF.

Travis Reeder at GoSF Meetup (Nov 6th, 2013)

The talk was produced by our friends at g33ktalk. They do a great job of keeping developers up on the latest news and events on things tech.

Note: Check out the pretty good presentation glitch at 8:20 in.