Most efficient connection handling?

Hello,

I am currently in the process of testing Aerospike (we are currently using Redis) to see whether it could become our main KV store.

The biggest issue with stores like this under PHP-FPM is, of course, the inability to pool connections. Given that, what would your key tuning / performance recommendations be while using PHP-FPM?

The following is what I have currently:

  • Two namespaces
  • The first namespace is in-memory and for testing has been assigned 110 GB of memory
  • The second namespace is in-memory / disk-persistent and is allocated 10 GB of memory and 48 GB of disk (SSD)
  • During testing, I disable the firewall on the Aerospike server (as at our load it starts to drop packets)
  • We do not seem to have an issue with FD limits, as the ulimit was set to 1M.
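For reference, the two namespaces described above might be configured along these lines in aerospike.conf (the namespace names and device path are placeholders, and this uses the pre-7.0 configuration syntax; only the sizes come from the list above):

```
namespace cache {
    replication-factor 2
    memory-size 110G
    storage-engine memory          # pure in-memory namespace
}

namespace persistent {
    replication-factor 2
    memory-size 10G
    storage-engine device {
        device /dev/sdb1           # placeholder path to the 48 GB SSD partition
        data-in-memory true        # keep a full copy in memory, persist to disk
    }
}
```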

My test consists of giving a percentage of traffic the ability to make a single Aerospike connection / write / close. This shows that we can get to ~13k client connections, after which it starts backing up and struggles, sitting around ~9k client connections.

Our Aerospike server does not even go above 2% load while the test is being run, so we presume the server itself is not the issue; it appears we have a networking bottleneck.
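With a connect/write/close per request, a ceiling around ~13k connections often points at the client side rather than the server: sockets pile up in TIME_WAIT and exhaust the ephemeral port range. A sketch of kernel settings worth checking on the FPM hosts (the values are illustrative, not tuned recommendations):

```
# /etc/sysctl.d/99-kv-clients.conf (illustrative values, apply with `sysctl --system`)
net.ipv4.ip_local_port_range = 10240 65535   # widen the ephemeral port range
net.ipv4.tcp_tw_reuse = 1                    # reuse TIME_WAIT sockets for outbound connects
net.core.somaxconn = 4096                    # larger listen backlog
net.core.netdev_max_backlog = 8192           # larger per-NIC receive queue
```

That said, persistent connections (discussed below in this thread) sidestep most of this by not churning through sockets in the first place.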

Any advice?

Edit:

If I am using PHP-FPM, set pm.max_requests to a higher number, and use persistent connections with Aerospike, will the connection then stay alive for the life of the process (e.g. pm.max_requests = 500 → the process serves 500 requests before being recycled)?
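To make the question concrete, these are the pool settings in play (pool values are placeholders, not recommendations):

```
; /etc/php-fpm.d/www.conf (illustrative)
pm = static
pm.max_children = 200
; each child serves this many requests before FPM recycles it;
; a persistent connection would live for the child's lifetime
pm.max_requests = 500
```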

Hi Ewan,

I’ve answered similar questions regarding the way connections are handled in a dynamic language / FastCGI context.

  1. PHP-fpm creating too many connections
  2. Aerospike server got very high CPU usage when using with the php-client
  3. Connections stuck in CLOSE_WAIT
  4. The Client Creates 17 Threads for Each PHP Process

FPM tuning is something straight out of the late 90s, back when PHP leaked memory and the mod_php process inside Apache would bloat to a dangerous level. PHP has been very stable on memory for a long time, and the idea of killing the process after only 500 requests is a head scratcher. You're potentially going to be doing thousands of requests per second - why would you choose to kill and fork a new process after so few requests?

It’s not just the overhead of FPM forking a new PHP process. There’s also an overhead to initializing the Aerospike client. It connects to the first node it can reach in a list of hostnames, and from that seed node it:

  • Learns the IP addresses of the other nodes.
  • Establishes connections to the service port (3000) on each of those nodes.
  • Grabs the partition map.
  • Starts a cluster tending thread which periodically (by default every 1s) checks on the cluster.

For this reason, we recommend using persistent connections when you’re in any multi-process context (FPM, Apache prefork, etc.). The Aerospike object is stored in the process scope and does not get destroyed at the end of the request (PHP_RSHUTDOWN). The next request executing the same code will attempt to reuse this object. To amortize the overhead of initializing the client, you should raise pm.max_requests as high as you can. Track the PHP processes for memory bloat and keep raising the value; the higher, the better. If you notice a memory leak, please report it as a bug on the aerospike/aerospike-client-php repo on GitHub.
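The recommendation above can be sketched in a few lines (the seed host is a placeholder; the second constructor argument is what enables persistent connections in aerospike-client-php):

```php
<?php
// Placeholder seed host; the client discovers the rest of the cluster from it.
$config = ["hosts" => [["addr" => "10.0.0.10", "port" => 3000]]];

// true = persistent connection: the Aerospike object survives PHP_RSHUTDOWN
// and is reused by the next request handled by this FPM child process.
$client = new Aerospike($config, true);

if (!$client->isConnected()) {
    error_log("Aerospike failed to connect [{$client->errorno()}]: {$client->error()}");
    exit(1);
}

$key = $client->initKey("test", "demo", "user-1234");
$client->put($key, ["greet" => "hello"]);

// Note: do not tear the client down per-request; keeping the sockets open
// across requests is the whole point of the persistent connection.
```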

Thanks for the comment. I have been reading your other posts - there were just some small tidbits I was missing, which you seem to have resolved with your reply.

The only issue I am seeing now is that Aerospike is causing PHP-FPM to segfault as soon as we step up the number of instructions being sent.