Queuing in node.js
Now, using Redis as a queue, let's imagine we have a server with lots of jobs to run. These might include sending out emails or running database queries to update the stats on some widget on our site. We have so many of these operations that if we ran them all at once we'd crash our server. We'd also like to scale our architecture out so that multiple servers can perform these operations.
In this example, we're going to push 15 jobs onto a queue, then simulate two separate node instances popping jobs from the queue and executing them. We'll use the very basic code below, which simply logs the id of each job; we can pretend it does some really complex and expensive task instead of a console.log.
Note: As I'm sure you're aware, node.js is asynchronous, meaning that one function can start before another finishes. While we could simulate synchronous execution in node, we won't, for the sake of simplicity.
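Before looking at the worker, it helps to see why LPUSH and RPOP together behave as a queue. Here's a minimal sketch of those semantics using a plain array in place of Redis (hypothetical, no Redis server needed): LPUSH adds to the left (head) of the list, RPOP removes from the right (tail), so jobs come out in the order they were pushed.

```javascript
// Hypothetical sketch of the queue semantics -- a plain array stands in for Redis.
var queue = [];

// Seed the queue with 15 jobs, as a producer would with LPUSH.
for (var i = 1; i <= 15; i++) {
    queue.unshift("job-" + i); // analogous to: redis.lpush("queue", "job-" + i)
}

// A worker takes the oldest job, as with RPOP.
console.log(queue.pop()); // prints "job-1"
```

Because new jobs land at the head and workers pop from the tail, the oldest job is always taken first (FIFO).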
// Worker: pop a job off the queue at a random interval (3-12 seconds).
var redis = require("redis").createClient();

var interval = Math.floor((Math.random() * 10) + 3);
console.log("executing job every " + interval + " seconds");

setInterval(function () {
    // RPOP returns null when the queue is empty.
    redis.rpop("queue", function (err, reply) {
        if (!err && reply !== null) {
            console.log("executing job:", reply);
        }
    });
}, interval * 1000);
Below are the results of running this as two separate processes.
In the real world, these jobs would all have different execution times. The great thing about Redis is that many processes can connect to it and start popping off jobs whenever they are available to run them. Redis acts as the gatekeeper/single source of truth and makes this easy to manage.
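Because a Redis server executes commands one at a time, two workers calling RPOP can never receive the same job. That property can be sketched with a plain array standing in for Redis and two hypothetical workers draining it (in real life each worker would be its own process):

```javascript
// Hypothetical sketch: two "workers" draining a shared queue.
// In Redis, RPOP is atomic, so each job is delivered to exactly one worker.
var queue = [];
for (var i = 1; i <= 15; i++) {
    queue.unshift("job-" + i);
}

var workerA = [];
var workerB = [];

// Each worker pops whenever it's free; here they simply alternate.
while (queue.length > 0) {
    var jobA = queue.pop(); // worker A's RPOP
    if (jobA !== undefined) workerA.push(jobA);
    var jobB = queue.pop(); // worker B's RPOP
    if (jobB !== undefined) workerB.push(jobB);
}

console.log(workerA.length + workerB.length); // 15 jobs, each handled exactly once
```

Every job ends up with exactly one worker, and adding a third worker is just another process calling RPOP on the same key.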