Node.js Scaling - Architecting for Zero Downtime


One of the advantages of running multiple processes is that your app never has to go down: it can remain available to your users even through crashes and deployments. This is called zero downtime.



Sometimes our apps go down. It could be a mysterious, obscure bug, it could be high traffic, or sometimes we just need to update the code and restart the process. In a cluster, when a single instance fails, traffic is served by the remaining worker instances, and the main process can detect worker failures and automatically restart those workers. When you need to update your app, you no longer have to tell your customers that the website will be down for maintenance. You can simply program your cluster to restart each instance with the updated code.
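The "restart each instance with the updated code" idea can be sketched as a rolling restart: replace workers one at a time, so the remaining workers keep serving traffic while each replacement boots. Below, `rollingRestart` is a hypothetical helper (not a cluster API); `fork` would be something like `() => cluster.fork()` in a real setup.

```javascript
// Sketch of a rolling restart: assumes each worker object emits
// 'listening' when ready and 'exit' when gone, and supports
// disconnect() for a graceful stop (as cluster workers do).
function rollingRestart(workers, fork, done) {
  const queue = workers.slice()
  function next() {
    const oldWorker = queue.shift()
    if (!oldWorker) return done && done()
    const replacement = fork()
    // wait until the replacement is accepting connections...
    replacement.on('listening', () => {
      oldWorker.disconnect()     // ...then stop the old worker gracefully
      oldWorker.on('exit', next) // and move on once it has exited
    })
  }
  next()
}
```

Because each old worker is only stopped after its replacement is listening, the cluster never drops below full capacity minus one worker during the update.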


const http = require('http')
const cluster = require('cluster')
const numCPUs = require('os').cpus().length

if (cluster.isMaster) {
  console.log('this is the master process: ', process.pid)
  for (let i=0; i<numCPUs; i++) {
    cluster.fork()
  }

  cluster.on('exit', worker => {
    console.log(`worker process ${worker.process.pid} has died`)
    console.log(`starting new worker`)
    cluster.fork()
  })

} else {
  console.log(`started a worker at ${process.pid}`)
  http.createServer((req, res) => {
    res.end(`process: ${process.pid}`)
    if (req.url === '/kill') {
      process.exit()
    } else if (req.url === '/') {
      console.log(`serving from ${process.pid}`)
    }
  }).listen(3000)
}