Insert of data using Java client is too slow using Docker

Hi all, I am trying to insert data using the Java client. The server is configured as a cluster with 2 nodes, and the insert of 100 records took 20 minutes?!

Why are you creating the client object twice? (OK, my misread, and thanks for putting the code in a readable format now.)

Thanks for the reply.

Nothing obvious pops out. Is there a network issue between where this code is running (the client machine) and the server node? Going over slow wifi? Did you try running the client test code and server on the same machine (localhost)?

Also, try putting your start time and end time around the write loop. Don't include the time taken to connect to the cluster, although that should be seconds/milliseconds and not minutes!
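Something like this rough sketch with the Java client (just an illustration of where to place the timers; the "test"/"demo"/"id" names and the 127.0.0.1 host are placeholders, swap in your own):

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;

public class WriteTimingTest {
    public static void main(String[] args) {
        // Connect first; connection time is NOT part of the measurement.
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

        long start = System.nanoTime();  // start the clock right before the writes
        for (int i = 0; i < 100; i++) {
            Key key = new Key("test", "demo", i);   // a different key per iteration
            Bin bin = new Bin("id", i);
            client.put(null, key, bin);             // default (synchronous) write policy
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("100 writes took " + elapsedMs + " ms");

        client.close();
    }
}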

I am using Docker with a swarm manager. When I have one node, everything is fine.

OK, I must bow out. I don't have experience with Docker.

Is this due to migration of partitions?

I suggest changing the title of the post to include "docker" and describing your setup in the post first. Someone with Docker experience may help you out.

You're right, thanks a lot.

Migration of partitions occurs when nodes are added or removed from the cluster, and it should not affect client-side writes or reads to the point that latency goes to minutes. It may go up by a few milliseconds … but not minutes.

Thanks. PS: could you tell me why I get Error Code 14: Hot key

And also: Client timeout: timeout=70000 iterations=2 lastNode=BB90600FF0A4202 192.168.99.107 3000

Hot key means you are either reading or writing the same key repeatedly. The transaction-pending limit is 20 by default.

Two issues:

  1. Hot key: You're writing the SAME record and just replacing bins (the equivalent of fields/columns in an RDBMS). Since all of your operations are on the same record, you have a "hot" key (index). There are some parameters you can tune to queue up more hot-key operations, but hot keys will still occur based on how you're using it (see the sketch after this list). To resolve the root of the issue, don't do all of your work in a single record, or change to an in-memory-only storage engine for your really hot records.
  2. General performance issue: What are you running on? What's the hardware behind your Docker daemon? What's the hardware behind your client? What is the relationship between client and server (different machines over LAN? Wifi? The Internet? The same machine?)
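As a rough illustration of point 1 (the namespace, set, key, and bin names below are made up, and you only actually hit the limit when several threads or async clients queue writes on the record concurrently), compare hammering one record with spreading the writes across distinct keys. Raising the pending-transaction limit mentioned above only postpones the problem; changing the access pattern fixes it.

// Pattern that can produce Error Code 14 when run concurrently:
// every write lands on the same record, so pending transactions pile up
// on that one index (more than the default 20 and the server rejects them).
Key hotKey = new Key("test", "demo", "the-one-record");
for (int i = 0; i < 50000; i++) {
    client.put(null, hotKey, new Bin("field" + i, i));  // same record every time
}

// Iterating the key spreads the load over 50,000 separate records instead.
for (int i = 0; i < 50000; i++) {
    Key key = new Key("test", "demo", i);               // new record each iteration
    client.put(null, key, new Bin("id", i));
}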

He is iterating on i, so it's a different record each write. The Hot Key error must be from some other part of the code.

Yes, they are different records. My test for university has to insert 50,000 records, so I am iterating on i to generate a new record each time.

University

Are you running everything on a 2/4 core laptop with a 5400rpm HDD?

The client is on macOS.

If this is your particular iMac model, then you have a Fusion Drive.

The Fusion Drive behaves like a hybrid SSHD. That is, you have no explicit control over which files are SSD-accelerated: either the drive itself makes the caching decision (in the case of SSHDs) or your OS does (as in the Fusion Drive). In the case of Docker containers, where the raw storage for each container is hidden from you in a file buried deep in the host filesystem, this creates even more variability in whether the underlying data gets accelerated.

I treat these types of storage as no different from HDDs from Aerospike's operational point of view.

In short, do not expect performance anywhere near what you'd see with an SSD. A typical HDD would be lucky to reach 100 IOPS. However, 20 minutes for 100 record inserts is still concerning. What do the logs say? docker logs <your aerospike container>

If you can replicate this, please attach your namespace config and a screenshot of Activity Monitor. Or, preferably, use docker-machine ssh to log in to a Swarm node and run top in boot2docker:

$ tce-load -wi ncurses
$ TERM=vt100 top -c