Clean shutdown of the Aerospike client for a Node.js Express server

I intend to do a clean shutdown of the Aerospike connection when the Node.js Express server shuts down, regardless of the cause. In your experience, will handling the ‘exit’, ‘SIGINT’ and ‘uncaughtException’ events on the process cover all cases, or have you encountered stale connections resulting from something else as well?

In order to do a clean shutdown, you’ll have to make sure that you call client.close() when Node.js shuts down. You can do that by setting up an ‘exit’ event handler on process. If you want special handling for uncaught exceptions, interrupts, etc., you can set up dedicated event handlers for those cases as well. But note that the exit handler will also be called in those cases, so you will need to take care that you do not call client.close() twice.
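The guard against a double close can be sketched with a simple flag. The client below is a stub standing in for the real Aerospike client (the real code would obtain it via Aerospike.client(...)), so the pattern can be shown in isolation:

```javascript
// Stub standing in for the real Aerospike client; closeCalls just lets
// us observe how often close() actually runs.
const client = {
  closeCalls: 0,
  close () {
    this.closeCalls += 1
    console.log('Connection closed.')
  }
}

let closed = false

function closeConnection () {
  if (closed) return // another handler already closed the connection
  closed = true
  client.close()
}

// Both handlers can fire for the same shutdown — after the SIGINT handler
// lets the process wind down, the 'exit' handler still runs. The flag
// makes sure client.close() is only called once.
process.on('SIGINT', closeConnection)
process.on('exit', closeConnection)
```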

Another thing to watch out for is that you need to call client.close() if you want to shut down your application without waiting for one of these external events. (Though that’s normally not the case for an Express application.) Unless you call client.close(), the Node.js event loop will never be “finished”, so Node.js will not shut down.

Here is a simple demo that shows all these ideas:

const Aerospike = require('aerospike')
let client = Aerospike.client({ /* optional client config */ })
let outcome = process.argv[2] // first command line argument

client.connect(error => {
  if (error) throw error // connection failed – don't continue
  console.log('Connected to Aerospike cluster.')
  setTimeout(() => {
    switch (outcome) {
      case 'exit':
        console.log('Exiting process.')
        process.exit(1)
      case 'exception':
        console.log('Throwing error.')
        throw new Error('Uh, oh!')
      case 'interrupt':
        console.log('Sending interrupt signal.')
        process.kill(process.pid, 'SIGINT')
        break
      default:
        console.log('Time\'s up! Shutting down.')
        closeConnection()
    }
  }, 1000)
})

function closeConnection () {
  if (client.isConnected(false)) {
    console.info('Closing Aerospike connection.')
    client.close()
  }
}

process.on('uncaughtException', error => {
  console.warn(`Uncaught exception: ${error}`)
  closeConnection()
})
process.on('SIGINT', signal => {
  console.warn(`Received ${signal} signal.`)
  closeConnection()
})
process.on('exit', code => {
  console.warn(`About to exit with code ${code}.`)
  closeConnection()
})

Let’s try out all four cases:

  1. Shutting down using process.exit()
$ node . exit
Connected to Aerospike cluster.
Exiting process.
About to exit with code 1.
Closing Aerospike connection.
  2. Shutting down due to an uncaught exception
$ node . exception
Connected to Aerospike cluster.
Throwing error.
Uncaught exception: Error: Uh, oh!
Closing Aerospike connection.
About to exit with code 0.
  3. Handling SIGINT
$ node . interrupt
Connected to Aerospike cluster.
Sending interrupt signal.
Received SIGINT signal.
Closing Aerospike connection.
About to exit with code 0.
  4. Application terminates on its own
$ node .
Connected to Aerospike cluster.
Time's up! Shutting down.
Closing Aerospike connection.
About to exit with code 0.

Hope this helps!

Thanks @Jan for the reply. We are using forever to start and stop our server, which by default sends “SIGKILL”, a signal on which one cannot install a listener. Thus none of the above would work.

Having said that, what are you using to start/restart your server? Or are you overriding the kill signal using --killSignal for forever?

REF: https://nodejs.org/api/process.html

One more thing: is anyone handling a clean close for the server in the case of a forced kill, as this will emit “SIGHUP”? We sometimes use systemctl to force a restart, which internally sends SIGHUP.

If you are using SIGKILL to terminate your application, then it’s impossible for the application to perform any clean-up actions. That’s the purpose of the SIGKILL signal after all. You might want to consider using SIGTERM instead, to give your application a chance to perform a clean shut-down. I’m not sure if that is possible with forever?

I am not sure I understand your question regarding the SIGHUP signal. The SIGHUP signal is sent to a process when its controlling terminal is closed. But it’s also used by many daemon processes to trigger reloading config files and/or reopen logfiles.
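To illustrate that second use of SIGHUP: a daemon-style process can install a SIGHUP handler that reloads configuration instead of shutting down. A sketch on a POSIX system; the loadConfig helper and the config shape are hypothetical:

```javascript
// Hypothetical config loader; a real application would re-read its
// config file from disk here.
let reloads = 0
function loadConfig () {
  reloads += 1
  console.log('Configuration reloaded.')
  return { logLevel: 'debug' }
}

let config = { logLevel: 'info' }

// SIGHUP triggers a reload rather than a shutdown.
process.on('SIGHUP', () => {
  config = loadConfig()
})

// Demo only: send ourselves SIGHUP and keep the event loop alive long
// enough for the signal to be processed.
process.kill(process.pid, 'SIGHUP')
setTimeout(() => console.log(`Log level is now ${config.logLevel}.`), 100)
```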

Thanks once again for the reply @Jan

  1. When I tried overriding the kill signal in ‘forever’, the application logs came out fine and the server also restarted, but the command executing the restart never exited. I couldn’t find any logs to explain its hung state.
  2. Apart from forever, we also looked at using systemctl, which uses SIGHUP to kill. The same issue occurred here as well: the restart happens and the application logs come out fine, but the systemctl command never exited and ended up timing out. (‘journalctl -u service-name’ gave only application logs; /var/log/syslog also carried no traces of it.)
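For the systemd case, both the stop signal and the timeout behaviour are configurable in the unit file. A hedged sketch (the unit name, paths and timeout value are made up for illustration) that sends SIGTERM instead of SIGHUP and bounds how long systemctl waits before giving up:

```ini
# /etc/systemd/system/my-express-app.service (illustrative name and paths)
[Service]
ExecStart=/usr/bin/node /opt/my-express-app/index.js
# Send SIGTERM on stop/restart so the app's signal handler can run
# client.close() before the process ends.
KillSignal=SIGTERM
# How long systemctl waits for a clean exit before escalating to SIGKILL.
TimeoutStopSec=10
```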

Also, I eventually saw these connections being released even when a clean shutdown wasn’t completed. What are the implications of such a scenario?