Very slow puts of conflicting keys with the CREATE_ONLY record exists action


Hello, I’ve written a simple program in Go that puts single-bin records. I want only the first write for a given key to succeed, so I created a write policy with the CREATE_ONLY record exists action. When I run the program for the first time to put 90000 records, it runs very fast. However, when I run it a second time and try to put the same records while they already exist, it runs 10x slower. Here’s the relevant part of the Go code:

func (s *tilesServiceServer) TakeTile(ctx context.Context, request *pb.TakeTileRequest) (*pb.TakeTileResponse, error) {
	key, err := aerospike.NewKey(s.namespace, s.set, tileCoordinatesToKey(request.X, request.Y))
	if err != nil {
		return nil, err
	}

	bin := aerospike.NewBin("ownership", 1)
	err = s.client.PutBins(s.writepolicy, key, bin)
	if ae, ok := err.(ast.AerospikeError); ok && ae.ResultCode() == ast.KEY_EXISTS_ERROR {
		// The record already exists: this take attempt fails.
		return &pb.TakeTileResponse{Result: false}, nil
	} else if err != nil {
		return nil, err
	}

	return &pb.TakeTileResponse{Result: true}, nil
}
func newServer() *tilesServiceServer {
	var err error
	s := new(tilesServiceServer)
	s.namespace = "test"
	s.set = "ownerships"
	s.writepolicy = aerospike.NewWritePolicy(0, -1)
	s.writepolicy.RecordExistsAction = aerospike.CREATE_ONLY // removing this makes it faster
	s.client, err = aerospike.NewClient("", 3000)
	if err != nil {
		log.Fatal(err)
	}
	return s
}

(the program is an RPC server that is invoked from a Java client, but it shouldn’t matter here)

If I remove the line which sets the record exists action, everything is very fast again, but it isn’t a solution because it’d allow the server to overwrite previously added records (I’m planning to have more bins in each record). I also tried using GenerationPolicy, but I encountered the same issue.
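For completeness, here is a minimal sketch of what the GenerationPolicy-based variant of put-if-absent looks like, assuming the v1 Go client’s EXPECT_GEN_EQUAL constant; the idea is that a record which does not exist yet passes an expected-generation-0 check, so only the first write succeeds (this is a policy-configuration fragment, not a full program):

	// Alternative put-if-absent via a generation check instead of
	// RecordExistsAction: EXPECT_GEN_EQUAL with Generation 0 lets the
	// write through only when the record does not exist yet; a write to
	// an existing record fails with GENERATION_ERROR instead of
	// KEY_EXISTS_ERROR.
	wp := aerospike.NewWritePolicy(0, -1)
	wp.GenerationPolicy = aerospike.EXPECT_GEN_EQUAL
	wp.Generation = 0
	// then: err := s.client.PutBins(wp, key, bin)

Either way the effect is the same, and so was the slowdown.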

Is it supposed to be like that? Maybe there’s something wrong with my code?

Best regards, Michal


I ran an experiment: I wrote a Java program that implements the same ‘put if absent’ behaviour, and I didn’t notice the slowdown I saw with Go when putting already existing keys. So either there’s something wrong with the Go client, or there’s something wrong with my Go code. Can anyone help? Here’s the Java code for reference:

  private static class TilesServiceImpl {
        private final AsyncClient asClient;

        public TilesServiceImpl() {
          AsyncClientPolicy policy = new AsyncClientPolicy();
          policy.asyncWritePolicyDefault.expiration = -1;
          policy.asyncWritePolicyDefault.generation = 0;
          policy.asyncWritePolicyDefault.recordExistsAction = RecordExistsAction.CREATE_ONLY;
          asClient = new AsyncClient(policy, "", 3000);
        }

        public void takeTile(TakeTileRequest request, StreamObserver<TakeTileResponse> responseObserver) {
          Key key = new Key("test", "ownerships", tileCoordinatesToKey(request.getX(), request.getY()));
          Bin bin = new Bin("ownership", 1);
          asClient.put(null, new WriteListener() {
            public void onSuccess(Key key) {
              TakeTileResponse reply = TakeTileResponse.newBuilder().setResult(true).build();
              responseObserver.onNext(reply);
              responseObserver.onCompleted();
            }
            public void onFailure(AerospikeException exception) {
              if (exception.getResultCode() == ResultCode.KEY_EXISTS_ERROR) {
                TakeTileResponse reply = TakeTileResponse.newBuilder().setResult(false).build();
                responseObserver.onNext(reply);
                responseObserver.onCompleted();
              } else {
                responseObserver.onError(exception);
              }
            }
          }, key, bin);
        }
  }



Sorry for the late reply. I suspected this was a server problem, but I was wrong. Thank you for your persistence and very helpful report.

This is a subtle Go Client bug that prevents connections from going back to the pool.
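In a toy sketch (hypothetical names, not the actual client internals), this failure mode looks like the following: the error path returns before putting the connection back, so repeated failing writes drain the pool and every later call has to open a fresh connection, which is slow.

	package main

	import "fmt"

	// pool is a toy stand-in for a connection pool.
	type pool struct{ conns chan int }

	func (p *pool) get() (int, bool) {
		select {
		case c := <-p.conns:
			return c, true
		default:
			return 0, false // pool exhausted: caller would have to dial anew
		}
	}

	func (p *pool) put(c int) { p.conns <- c }

	// leakyPut simulates a write whose error branch (e.g. KEY_EXISTS_ERROR)
	// forgets to return the connection to the pool.
	func leakyPut(p *pool, keyExists bool) {
		c, ok := p.get()
		if !ok {
			return // would have to open a new connection here
		}
		if keyExists {
			return // bug: c is never returned to the pool
		}
		p.put(c)
	}

	func main() {
		p := &pool{conns: make(chan int, 2)}
		p.conns <- 1
		p.conns <- 2
		leakyPut(p, true)
		leakyPut(p, true)
		fmt.Println("connections left in pool:", len(p.conns)) // 0: pool drained
	}

That is why only the conflicting-key runs were slow: the success path returned connections correctly.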

I’ll release the fixes in a few hours today.


The new version is released and is in github:master now.


Yes, it’s fixed. Thanks!