Intel S3500 480G SSD only writes 20,000 TPS, is this OK?


#1

Hi all,

We use the Intel S3500 in our servers:

cpu:  Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
ssd: Intel S3500 480G * 2
raid controller:  Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)

We set up a 10-node cluster. Here is some latency info:

Monitor> latency
	====writes_master====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    15:26:50-GMT->15:27:00   19732.4   0.73   0.18    0.09
192.168.3.53:3000    15:26:58-GMT->15:27:08   20556.8   0.27   0.17    0.11
192.168.3.54:3000    15:26:53-GMT->15:27:03   20136.4   0.20   0.18    0.11
192.168.3.55:3000    15:26:54-GMT->15:27:04   18756.7   0.27   0.24    0.15
192.168.3.56:3000    15:26:57-GMT->15:27:07   19104.4   0.32   0.27    0.12
192.168.3.57:3000    15:26:55-GMT->15:27:05   20008.0   0.39   0.35    0.18
192.168.3.58:3000    15:26:51-GMT->15:27:01   19202.8   0.35   0.32    0.14
192.168.3.59:3000    15:26:50-GMT->15:27:00   21004.8   0.28   0.27    0.14
192.168.3.60:3000    15:26:51-GMT->15:27:01   19832.9   0.33   0.31    0.19
192.168.3.61:3000    15:26:58-GMT->15:27:08   19469.2   0.39   0.36    0.22

	====writes_reply====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    15:26:50-GMT->15:27:00   19732.4   0.73   0.18    0.09
192.168.3.53:3000    15:26:58-GMT->15:27:08   20556.8   0.27   0.17    0.11
192.168.3.54:3000    15:26:53-GMT->15:27:03   20136.4   0.20   0.18    0.11
192.168.3.55:3000    15:26:54-GMT->15:27:04   18756.7   0.27   0.24    0.15
192.168.3.56:3000    15:26:57-GMT->15:27:07   19104.2   0.32   0.27    0.12
192.168.3.57:3000    15:26:55-GMT->15:27:05   20008.0   0.39   0.35    0.18
192.168.3.58:3000    15:26:51-GMT->15:27:01   19203.0   0.35   0.32    0.14
192.168.3.59:3000    15:26:50-GMT->15:27:00   21004.6   0.28   0.27    0.14
192.168.3.60:3000    15:26:51-GMT->15:27:01   19833.0   0.33   0.31    0.19
192.168.3.61:3000    15:26:58-GMT->15:27:08   19469.0   0.39   0.36    0.22

	====reads====
                                   timespan   ops/sec    >1ms   >8ms   >64ms
192.168.3.52:3000    15:26:50-GMT->15:27:00     790.7   10.55   0.89    0.35
192.168.3.53:3000    15:26:58-GMT->15:27:08     850.0    0.73   0.28    0.13
192.168.3.54:3000    15:26:53-GMT->15:27:03     832.2    0.36   0.25    0.13
192.168.3.55:3000    15:26:54-GMT->15:27:04     763.5    0.37   0.29    0.14
192.168.3.56:3000    15:26:57-GMT->15:27:07     810.4    1.09   0.38    0.14
192.168.3.57:3000    15:26:55-GMT->15:27:05     853.9    0.55   0.45    0.19
192.168.3.58:3000    15:26:51-GMT->15:27:01     757.3    0.50   0.41    0.22
192.168.3.59:3000    15:26:50-GMT->15:27:00     861.3    0.35   0.29    0.14
192.168.3.60:3000    15:26:51-GMT->15:27:01     799.7    0.44   0.35    0.20
192.168.3.61:3000    15:26:58-GMT->15:27:08     819.8    0.63   0.55    0.27

Is this good enough?


#2

Do you have 10 identical servers?

Is the raid controller being used to provide raid functionality?

How many clients do you have running benchmarks?

Just wanted to clarify your setup a bit.

Thanks Petter


#3

Yes, we have 10 identical servers. This is not a benchmark; it is our production environment. We use about 100 clients to import data into the cluster, and the data looks like this:

cli -h 127.0.0.1 -n NAMESPACE -s SETNAME -o get -k 10009_a558d1268f26f79aaf43d96d8abb21a8
{'k': '10009_a558d1268f26f79aaf43d96d8abb21a8', 'v': '1411313176957524'}

We also set up SSD over-provisioning using partitions:

C9:~> fdisk -H 32 -S 32 /dev/sdc

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/sdc: 479.6 GB, 479559942144 bytes
32 heads, 32 sectors/track, 914688 cylinders
Units = cylinders of 1024 * 512 = 524288 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005a2a8

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-914688, default 1): 2
Last cylinder, +cylinders or +size{K,M,G} (2-914688, default 914688): 722600

Command (m for help): p

Disk /dev/sdc: 479.6 GB, 479559942144 bytes
32 heads, 32 sectors/track, 914688 cylinders
Units = cylinders of 1024 * 512 = 524288 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005a2a8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               2      722600   369970688   83  Linux
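As a sanity check on the over-provisioning, the partition above spans cylinders 2 through 722600 of 914688, at 512 KiB per cylinder, which leaves roughly a fifth of the drive unpartitioned; a quick back-of-the-envelope calculation:

```shell
# Partition covers cylinders 2..722600 of 914688 (512 KiB each, per the
# fdisk output above), so the unpartitioned share left for over-provisioning is:
awk 'BEGIN { printf "%.1f%%\n", 100 * (1 - (722600 - 2 + 1) / 914688) }'
```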

And here is some read latency:

Monitor> latency -k reads


	====reads====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    04:25:45-GMT->04:25:55    9080.8   0.67   0.64    0.43
192.168.3.53:3000    04:25:37-GMT->04:25:47    9963.9   0.58   0.50    0.39
192.168.3.54:3000    04:25:41-GMT->04:25:51    9518.6   0.86   0.81    0.58
192.168.3.55:3000    04:25:41-GMT->04:25:51    9047.0   0.21   0.19    0.15
192.168.3.56:3000    04:25:36-GMT->04:25:46    9358.4   0.34   0.31    0.21
192.168.3.57:3000    04:25:42-GMT->04:25:52    9706.8   0.86   0.80    0.56
192.168.3.58:3000    04:25:37-GMT->04:25:47    9050.6   0.39   0.38    0.30
192.168.3.59:3000    04:25:37-GMT->04:25:47    9988.6   0.28   0.26    0.19
192.168.3.60:3000    04:25:38-GMT->04:25:48    9196.8   0.23   0.21    0.15
192.168.3.61:3000    04:25:45-GMT->04:25:55    9250.7   0.29   0.26    0.17




Monitor> latency -k reads


	====reads====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    04:25:45-GMT->04:25:55    9080.8   0.67   0.64    0.43
192.168.3.53:3000    04:25:47-GMT->04:25:57    9627.4   0.26   0.21    0.15
192.168.3.54:3000    04:25:41-GMT->04:25:51    9518.6   0.86   0.81    0.58
192.168.3.55:3000    04:25:41-GMT->04:25:51    9047.0   0.21   0.19    0.15
192.168.3.56:3000    04:25:46-GMT->04:25:56    9076.7   0.47   0.44    0.33
192.168.3.57:3000    04:25:42-GMT->04:25:52    9706.8   0.86   0.80    0.56
192.168.3.58:3000    04:25:37-GMT->04:25:47    9050.6   0.39   0.38    0.30
192.168.3.59:3000    04:25:37-GMT->04:25:47    9988.6   0.28   0.26    0.19
192.168.3.60:3000    04:25:38-GMT->04:25:48    9196.8   0.23   0.21    0.15
192.168.3.61:3000    04:25:45-GMT->04:25:55    9250.7   0.29   0.26    0.17

Is the read latency a little high?

And what does "the raid controller being used to provide raid functionality" mean?


#4

Hi Petter,

The SSDs do NOT use the controller's RAID functionality; they are direct-attached:

scsi 0:0:1:0: Direct-Access     ATA      INTEL SSDSC2BB48 0370 PQ: 0 ANSI: 6
scsi 0:0:1:0: SATA: handle(0x000a), sas_addr(0x4433221107000000), phy(7), device_name(0x55cd2e404be1f138)
scsi 0:0:1:0: SATA: enclosure_logical_id(0x56c92bf0000a197d), slot(7)
scsi 0:0:1:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
scsi 0:0:1:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(7), cmd_que(1)
scsi 0:0:2:0: Direct-Access     ATA      INTEL SSDSC2BB48 0370 PQ: 0 ANSI: 6
scsi 0:0:2:0: SATA: handle(0x000b), sas_addr(0x4433221106000000), phy(6), device_name(0x55cd2e404be1f143)
scsi 0:0:2:0: SATA: enclosure_logical_id(0x56c92bf0000a197d), slot(6)
scsi 0:0:2:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
scsi 0:0:2:0: qdepth(32), tagged(1), simple(1), ordered(0), scsi_level(7), cmd_que(1)

But they show up as “Direct-Access ATA”. Which mode should I change to?


#5

Hi there,

Could you post your aerospike.conf file?

Thanks Petter


#6

Hi Petter,

Here is the aerospike.conf:

service {
	user root
	group root
	paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
	pidfile /var/run/aerospike/asd.pid
	service-threads 20
	transaction-queues 20
	transaction-threads-per-queue 3
	proto-fd-max 15000
}

logging {
	# Log file must be an absolute path.
	file /data/logs/aerospike/aerospike.log {
		context any info
	}
}

network {
	service {
		address any
		port 3000
	}

	heartbeat {
#		mode multicast
#		address 239.1.99.222
#		port 9918

		# To use unicast-mesh heartbeats, comment out the 3 lines above and
		# use the following 4 lines instead.
		mode mesh
		port 3002
		mesh-address 192.168.3.54 
		mesh-port 3002

		interval 150
		timeout 10
	}

	fabric {
		port 3001
	}

	info {
		port 3003
	}
}

namespace disk-base {
        replication-factor 2
        memory-size 10G
        #default-ttl 30d # 30 days, use 0 to never expire/evict.

#        storage-engine memory

        # To use file storage backing, comment out the line above and use the
        # following lines instead.
       storage-engine device {
               file /data/aerospike/data/disk-base.dat
               filesize 100G
               data-in-memory false # Do not also store data in memory.
       }
}

namespace base {
        replication-factor 2
        memory-size 10G
        #default-ttl 30d # 30 days, use 0 to never expire/evict.

#        storage-engine memory

        # To use file storage backing, comment out the line above and use the
        # following lines instead.
       storage-engine device {
               file /data/aerospike/data/base.dat
               filesize 20G
               data-in-memory true # Store data in memory in addition to file.
       }
}

namespace cmmapping {
	replication-factor 1  # ONLY ONE
	memory-size 230G 
	default-ttl 365d  

	storage-engine device {
		device /dev/sdb
		device /dev/sdc

		scheduler-mode noop
		write-block-size 128K

		data-in-memory false
	}
}

We only use the cmmapping namespace. Here is the node info:

Monitor> info
===NODES===
2015-01-02 22:33:13.413405
Sorting by IP, in Ascending order:
ip:port               Build   Cluster      Cluster   Free   Free   Migrates              Node         Principal      Replicated    Sys
                          .      Size   Visibility   Disk    Mem          .                ID                ID         Objects   Free
                          .         .            .    pct    pct          .                 .                 .               .    Mem
192.168.3.52:3000    3.3.21        10         true     68     74      (0,0)   BB900007F9D790B   BB900007FE8138B   1,076,737,640     74
192.168.3.53:3000     3.4.0        10         true     69     74      (0,0)   BB900007FA813CB   BB900007FE8138B   1,058,426,436     74
192.168.3.54:3000     3.4.0        10         true     71     75      (0,0)   BB900007FA0BD4B   BB900007FE8138B   1,008,715,367     75
192.168.3.55:3000     3.4.0        10         true     68     73      (0,0)   BB900007F7B828B   BB900007FE8138B   1,097,633,688     73
192.168.3.56:3000     3.4.0        10         true     70     75      (0,0)   BB900007FBBD30B   BB900007FE8138B   1,045,347,442     74
192.168.3.57:3000     3.4.0        10         true     66     72      (0,0)   BB900007FB5040B   BB900007FE8138B   1,162,928,093     72
192.168.3.58:3000     3.4.0        10         true     68     73      (0,0)   BB900007FE8138B   BB900007FE8138B   1,118,536,393     73
192.168.3.59:3000     3.4.0        10         true     68     73      (0,0)   BB900007F3D5DCB   BB900007FE8138B   1,097,554,832     73
192.168.3.60:3000     3.4.0        10         true     70     75      (0,0)   BB900007FC9328B   BB900007FE8138B   1,045,329,331     74
192.168.3.61:3000     3.4.0        10         true     71     76      (0,0)   BB900007F4C840B   BB900007FE8138B     993,002,627     76
Number of nodes displayed: 10


 ===NAMESPACE===
Total (unique) objects in cluster for disk-base : 0
Total (unique) objects in cluster for base : 0
Total (unique) objects in cluster for cmmapping : 10,704,211,906
Note: Total (unique) objects is an under estimate if migrations are in progress.


ip/namespace              Avail   Evicted          Master     Repl     Stop       Used   Used      Used   Used    hwm   hwm
                            Pct   Objects         Objects   Factor   Writes       Disk   Disk       Mem    Mem   Disk   Mem
                              .         .               .        .        .          .      %         .      %      .     .
192.168.3.57/cmmapping       55         0   1,162,928,095        1    false   277.26 G     39   69.32 G     31     50    60
192.168.3.58/cmmapping       57         0   1,118,536,386        1    false   266.68 G     38   66.67 G     29     50    60
192.168.3.59/cmmapping       57         0   1,097,554,850        1    false   261.68 G     37   65.42 G     29     50    60
192.168.3.55/cmmapping       57         0   1,097,633,677        1    false   261.70 G     37   65.42 G     29     50    60
192.168.3.52/cmmapping       58         0   1,076,737,662        1    false   256.71 G     37   64.18 G     28     50    60
192.168.3.53/cmmapping       59         0   1,058,426,452        1    false   252.35 G     36   63.09 G     28     50    60
192.168.3.60/cmmapping       59         0   1,045,329,328        1    false   249.23 G     35   62.31 G     28     50    60
192.168.3.56/cmmapping       59         0   1,045,347,442        1    false   249.23 G     35   62.31 G     28     50    60
192.168.3.61/cmmapping       61         0     993,002,648        1    false   236.75 G     34   59.19 G     26     50    60
192.168.3.54/cmmapping       61         0   1,008,715,366        1    false   240.50 G     34   60.12 G     27     50    60

And here is the latency:

Monitor> latency
	====writes_master====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    14:34:35-GMT->14:34:45     502.0   0.28   0.22    0.14
192.168.3.53:3000    14:34:35-GMT->14:34:45     508.2   0.65   0.57    0.45
192.168.3.54:3000    14:34:44-GMT->14:34:54     458.4   0.26   0.24    0.17
192.168.3.55:3000    14:34:36-GMT->14:34:46     505.8   0.26   0.24    0.18
192.168.3.56:3000    14:34:41-GMT->14:34:51     476.9   0.31   0.29    0.21
192.168.3.57:3000    14:34:42-GMT->14:34:52     550.4   0.94   0.91    0.64
192.168.3.58:3000    14:34:37-GMT->14:34:47     538.8   0.09   0.07    0.06
192.168.3.59:3000    14:34:36-GMT->14:34:46     512.0   0.25   0.25    0.18
192.168.3.60:3000    14:34:43-GMT->14:34:53     499.0   0.20   0.20    0.14
192.168.3.61:3000    14:34:43-GMT->14:34:53     466.4   0.17   0.13    0.13

	====writes_reply====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    14:34:35-GMT->14:34:45     502.0   0.28   0.22    0.14
192.168.3.53:3000    14:34:35-GMT->14:34:45     508.2   0.65   0.57    0.45
192.168.3.54:3000    14:34:44-GMT->14:34:54     458.4   0.26   0.24    0.17
192.168.3.55:3000    14:34:36-GMT->14:34:46     505.8   0.26   0.24    0.18
192.168.3.56:3000    14:34:41-GMT->14:34:51     476.9   0.31   0.29    0.21
192.168.3.57:3000    14:34:42-GMT->14:34:52     550.4   0.94   0.91    0.64
192.168.3.58:3000    14:34:37-GMT->14:34:47     538.8   0.09   0.07    0.06
192.168.3.59:3000    14:34:36-GMT->14:34:46     512.0   0.25   0.25    0.18
192.168.3.60:3000    14:34:43-GMT->14:34:53     499.1   0.20   0.20    0.14
192.168.3.61:3000    14:34:43-GMT->14:34:53     466.4   0.17   0.13    0.13

	====reads====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    14:34:35-GMT->14:34:45    9536.9   0.20   0.17    0.11
192.168.3.53:3000    14:34:35-GMT->14:34:45    9229.5   0.71   0.61    0.43
192.168.3.54:3000    14:34:44-GMT->14:34:54    8620.5   0.19   0.17    0.13
192.168.3.55:3000    14:34:36-GMT->14:34:46    9673.7   0.23   0.21    0.14
192.168.3.56:3000    14:34:41-GMT->14:34:51    8958.8   0.44   0.41    0.31
192.168.3.57:3000    14:34:42-GMT->14:34:52    9926.0   0.70   0.66    0.49
192.168.3.58:3000    14:34:37-GMT->14:34:47    9749.2   0.10   0.08    0.07
192.168.3.59:3000    14:34:36-GMT->14:34:46    9630.2   0.33   0.30    0.23
192.168.3.60:3000    14:34:43-GMT->14:34:53    8922.2   0.20   0.18    0.11
192.168.3.61:3000    14:34:43-GMT->14:34:53    8448.6   0.23   0.20    0.13

#7

Your config shows that NCQ is available (good), and avoiding the RAID card is good. Having 99% of read latencies under 1 ms is good, but the fraction > 64ms is poor.

Your storage configuration shows that you are using a file system. I suspect this is the problem.

For low latency applications, we suggest avoiding the file system and using the devices directly. This is exactly because of the number of high latency responses that all filesystems we’ve tested generate.

I strongly suspect that if you put your data directly on a raw partition (doesn’t have to be a whole drive) you will see a DRAMATIC reduction in worst case latency — we are better tuned than even the best flash-optimized file systems.

I realize this might be operationally difficult for you, but you can do it one server at a time. Wait for data migration to complete before reconfiguring the next server, and you will be able to make this change without any operational downtime.

Check this page, and look at the Recipe for an SSD Storage Engine. https://www.aerospike.com/docs/operations/configure/namespace/storage/
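For illustration, moving off the file system means swapping a file-backed storage stanza for a raw-device one; a sketch using only parameters already shown in this thread (the device name is an example):

```
storage-engine device {
	device /dev/sdb          # raw device or partition, no filesystem on it

	scheduler-mode noop
	write-block-size 128K

	data-in-memory false
}
```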


#8

Hi bbulkow,

We don't use a file system. For now we ONLY use this one namespace, cmmapping:

namespace cmmapping {
	replication-factor 1  # ONLY ONE
	memory-size 230G 
	default-ttl 365d  

	storage-engine device {
		device /dev/sdb
		device /dev/sdc

		scheduler-mode noop
		write-block-size 128K

		data-in-memory false
	}
}

It uses two whole SSDs.


#9

Ok. That’s a good configuration.

The first namespace in your configuration (disk-base), which is file-backed but not data-in-memory, is not a good config, but if you don’t use it, I guess that’s OK.

If this is the namespace you tested against in your previous post, the situation is interesting because the read and write latency are very similar - but reads are reading through to storage, and writes are buffering. This would direct our attention away from the storage system and toward other parts of the configuration.

Again, the fact that you are 99% < 1ms is very good and expected. The fact that you have so many > 64ms — and identical for reads and writes — is surprising.

I can’t see your Linux distribution & version. Can you please provide that?

Let me check with some of the ops guys and see what next to ask.


#10

Hi bbulkow,

Here is some system info:

C53:~> uname -a
Linux 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
C53:~> cat /etc/issue
CentOS release 6.4 (Final)
Kernel \r on an \m

C53:~> cat /proc/cpuinfo |grep name |uniq -c
     24 model name	: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
C53:~> getconf LONG_BIT
64
C53:~> free -g
             total       used       free     shared    buffers     cached
Mem:           252         66        185          0          0          0
-/+ buffers/cache:         66        186
Swap:          127          0        127

And here is iostat:

C53:~> iostat -x 2 10
Linux 2.6.32-358.el6.x86_64   	_x86_64_	(24 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.87    0.00    2.67    0.98    0.00   90.47

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.50    0.01    0.92     0.28    11.34    12.50     0.02   25.01   7.27   0.68
sdb               0.00  1083.93 2012.99   12.33  2094.97  3074.74     2.55     0.85    0.42   0.09  18.83
sdc               0.00  1083.93 2010.87   12.33  2092.92  3074.87     2.55     0.87    0.43   0.09  18.86

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.31    0.00    2.74    1.61    0.00   90.33

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb               0.00     0.00 3472.00    1.00  3472.00   256.00     1.07     0.34    0.10   0.08  27.25
sdc               0.00     0.00 3593.50    1.00  3593.50   256.00     1.07     0.33    0.09   0.08  28.25

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.32    0.00    2.79    1.70    0.00   90.19

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb               0.00     0.00 3682.00    0.50  3682.00   128.00     1.03     0.36    0.10   0.08  30.00
sdc               0.00     0.00 3606.50    1.50  3989.00   384.00     1.21     0.34    0.10   0.08  28.90

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.25    0.00    2.99    1.67    0.00   90.09

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     1.00    0.00    1.00     0.00    16.00    16.00     0.01   10.50  10.50   1.05
sdb               0.00     0.00 3694.00    1.00  3821.50   256.00     1.10     0.35    0.09   0.08  28.40
sdc               0.00     0.00 3611.50    1.00  3739.00   256.00     1.11     0.33    0.09   0.08  28.20

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.25    0.00    3.03    1.65    0.00   90.07

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.50    0.00    1.00     0.00    12.00    12.00     0.01    9.50   9.50   0.95
sdb               0.00     0.00 3551.00    1.00  3678.50   256.00     1.11     0.35    0.10   0.08  29.45
sdc               0.00     0.00 3567.00    1.50  3822.00   384.00     1.18     0.35    0.10   0.08  28.90

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.25    0.00    2.89    1.65    0.00   90.21

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb               0.00     0.00 3529.00    1.00  3529.00   256.00     1.07     0.34    0.10   0.08  28.55
sdc               0.00     0.00 3615.50    1.50  3998.00   384.00     1.21     0.35    0.10   0.08  27.85

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.29    0.00    2.80    1.65    0.00   90.26

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    1.00     0.00     8.00     8.00     0.04   22.50  12.50   1.25
sdb               0.00     0.00 3534.50    1.00  3662.00   256.00     1.11     0.34    0.10   0.08  28.50
sdc               0.00     0.00 3612.50    1.00  3612.50   256.00     1.07     0.37    0.10   0.08  30.05

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.36    0.00    2.74    1.61    0.00   90.29

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    2.00     0.00    16.00     8.00     0.02   16.75   6.25   1.25
sdb               0.00     0.00 3397.50    1.50  3907.50   384.00     1.26     0.31    0.09   0.08  25.65
sdc               0.00     0.00 3526.00    1.00  3526.00   256.00     1.07     0.33    0.09   0.08  27.95

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.27    0.00    2.80    1.61    0.00   90.31

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb               0.00     0.00 3447.00    0.50  3447.00   128.00     1.04     0.33    0.10   0.08  27.45
sdc               0.00     0.00 3542.50    1.50  4052.50   384.00     1.25     0.36    0.10   0.08  28.65

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.23    0.00    2.78    1.57    0.00   90.42

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     1.00    0.00    0.50     0.00    12.00    24.00     0.01   17.00  17.00   0.85
sdb               0.00     0.00 3431.00    1.00  3431.00   256.00     1.07     0.33    0.10   0.08  26.95
sdc               0.00     0.00 3424.00    1.50  3551.50   384.00     1.15     0.31    0.09   0.08  25.95

#11

Hi bbulkow,

We found that CPU usage is not balanced:

Cpu0  :  8.1%us, 13.2%sy,  0.0%ni, 52.5%id,  3.1%wa,  0.0%hi, 23.1%si,  0.0%st
Cpu1  :  1.3%us,  1.0%sy,  0.0%ni, 97.0%id,  0.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.3%us,  0.3%sy,  0.0%ni, 99.0%id,  0.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni, 99.7%id,  0.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  : 98.0%us,  1.0%sy,  0.0%ni,  1.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  9.7%us, 12.4%sy,  0.0%ni, 54.4%id, 17.8%wa,  0.0%hi,  5.7%si,  0.0%st
Cpu7  :  3.4%us,  3.4%sy,  0.0%ni, 88.1%id,  5.1%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu8  :  1.0%us,  1.0%sy,  0.0%ni, 96.3%id,  1.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu9  :  0.0%us,  0.3%sy,  0.0%ni, 99.3%id,  0.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu10 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu11 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu12 :  7.1%us, 10.5%sy,  0.0%ni, 72.4%id,  9.9%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu13 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu14 :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu15 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu16 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu17 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu18 :  3.0%us,  4.0%sy,  0.0%ni, 89.6%id,  3.4%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu19 :  0.7%us,  0.0%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu20 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu21 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu22 :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu23 :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  264504088k total, 70192616k used, 194311472k free,   234408k buffers
Swap: 134217720k total,        0k used, 134217720k free,   539680k cached

#12

Although the load is only about 1251 TPS, the latency is still high:

Monitor> latency
	====writes_master====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    03:02:31-GMT->03:02:41     233.4   0.00   0.00    0.00
192.168.3.53:3000    03:02:21-GMT->03:02:31     227.8   0.09   0.09    0.09
192.168.3.54:3000    03:02:22-GMT->03:02:32     218.7   0.09   0.09    0.09
192.168.3.55:3000    03:02:21-GMT->03:02:31     227.9   0.13   0.04    0.04
192.168.3.56:3000    03:02:28-GMT->03:02:38     230.4   0.09   0.09    0.09
192.168.3.57:3000    03:02:26-GMT->03:02:36     244.6   0.12   0.12    0.04
192.168.3.58:3000    03:02:29-GMT->03:02:39     253.1   0.32   0.28    0.20
192.168.3.59:3000    03:02:25-GMT->03:02:35     239.1   0.29   0.25    0.13
192.168.3.60:3000    03:02:28-GMT->03:02:38     229.2   0.09   0.09    0.04
192.168.3.61:3000    03:02:29-GMT->03:02:39     225.2   0.31   0.13    0.13

	====writes_reply====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    03:02:31-GMT->03:02:41     233.4   0.00   0.00    0.00
192.168.3.53:3000    03:02:21-GMT->03:02:31     227.8   0.09   0.09    0.09
192.168.3.54:3000    03:02:22-GMT->03:02:32     218.7   0.09   0.09    0.09
192.168.3.55:3000    03:02:21-GMT->03:02:31     227.9   0.13   0.04    0.04
192.168.3.56:3000    03:02:28-GMT->03:02:38     230.4   0.09   0.09    0.09
192.168.3.57:3000    03:02:26-GMT->03:02:36     244.6   0.12   0.12    0.04
192.168.3.58:3000    03:02:29-GMT->03:02:39     253.1   0.32   0.28    0.20
192.168.3.59:3000    03:02:25-GMT->03:02:35     239.1   0.29   0.25    0.13
192.168.3.60:3000    03:02:28-GMT->03:02:38     229.2   0.09   0.09    0.04
192.168.3.61:3000    03:02:29-GMT->03:02:39     225.2   0.31   0.13    0.13

	====reads====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    03:02:31-GMT->03:02:41    1251.3   0.27   0.27    0.26
192.168.3.53:3000    03:02:21-GMT->03:02:31    1227.0   0.23   0.21    0.16
192.168.3.54:3000    03:02:22-GMT->03:02:32    1165.4   0.33   0.33    0.29
192.168.3.55:3000    03:02:21-GMT->03:02:31    1272.9   0.28   0.23    0.16
192.168.3.56:3000    03:02:28-GMT->03:02:38    1211.2   0.32   0.30    0.23
192.168.3.57:3000    03:02:26-GMT->03:02:36    1359.3   0.17   0.16    0.15
192.168.3.58:3000    03:02:29-GMT->03:02:39    1292.4   0.32   0.25    0.22
192.168.3.59:3000    03:02:25-GMT->03:02:35    1289.4   0.26   0.19    0.16
192.168.3.60:3000    03:02:28-GMT->03:02:38    1207.5   0.27   0.17    0.17
192.168.3.61:3000    03:02:29-GMT->03:02:39    1145.3   0.23   0.17    0.15

#13

Hi —

The overall numbers are good (you’re getting 99.7% under 1 millisecond), but the tail numbers > 64ms are way too high. Again, this points at the problem NOT being the storage layer; these disks don’t respond that way.

I have brought this entry to our support staff so they can give some suggestions about where these > 64ms requests might be coming from, and isolating whether the issue is the aerospike server or some other part of your infrastructure.


#14

Could you grep your logs for “{cmmapping}” while the same load as before is running?

 tail -n 2000 /var/log/aerospike/aerospike.log | grep "{cmmapping}"

#15

Hi kporter,

Here is the log:

Jan 10 2015 12:28:01 GMT: INFO (nsup): (thr_nsup.c::1221) {cmmapping} nsup start
Jan 10 2015 12:36:18 GMT: INFO (nsup): (thr_nsup.c::1139) {cmmapping} Records: 1072973311, 0 0-vt, 70224(149378589) expired, 0(0) evicted, 0(0) set deletes, 0(0) set evicted. Evict ttls: 0,0,0.000. Waits: 0,0,0. Total time: 496442 ms
Jan 10 2015 12:36:19 GMT: INFO (nsup): (thr_nsup.c::1221) {cmmapping} nsup start
Jan 10 2015 12:44:49 GMT: INFO (nsup): (thr_nsup.c::1139) {cmmapping} Records: 1072988417, 0 0-vt, 68829(149447418) expired, 0(0) evicted, 0(0) set deletes, 0(0) set evicted. Evict ttls: 0,0,0.000. Waits: 0,0,0. Total time: 509790 ms
Jan 10 2015 12:44:50 GMT: INFO (nsup): (thr_nsup.c::1221) {cmmapping} nsup start
Jan 10 2015 12:53:12 GMT: INFO (nsup): (thr_nsup.c::1139) {cmmapping} Records: 1073001868, 0 0-vt, 71105(149518523) expired, 0(0) evicted, 0(0) set deletes, 0(0) set evicted. Evict ttls: 0,0,0.000. Waits: 0,0,0. Total time: 502051 ms
Jan 10 2015 12:53:13 GMT: INFO (nsup): (thr_nsup.c::1221) {cmmapping} nsup start
Jan 10 2015 13:01:42 GMT: INFO (nsup): (thr_nsup.c::1139) {cmmapping} Records: 1073013200, 0 0-vt, 71669(149590192) expired, 0(0) evicted, 0(0) set deletes, 0(0) set evicted. Evict ttls: 0,0,0.000. Waits: 0,0,0. Total time: 509203 ms
Jan 10 2015 13:01:43 GMT: INFO (nsup): (thr_nsup.c::1221) {cmmapping} nsup start

And the TPS is:

Monitor> latency
	====writes_master====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    13:13:37-GMT->13:13:47     573.1   1.03   0.91    0.49
192.168.3.53:3000    13:13:35-GMT->13:13:45     595.2   0.67   0.60    0.25
192.168.3.54:3000    13:13:36-GMT->13:13:46     553.2   2.10   1.95    1.28
192.168.3.55:3000    13:13:36-GMT->13:13:46     612.6   1.18   1.11    0.75
192.168.3.56:3000    13:13:35-GMT->13:13:45     580.4   2.02   1.96    1.59
192.168.3.57:3000    13:13:36-GMT->13:13:46     633.2   0.66   0.54    0.35
192.168.3.58:3000    13:13:35-GMT->13:13:45     605.4   1.30   1.17    0.69
192.168.3.59:3000    13:13:33-GMT->13:13:43     621.2   1.82   1.75    1.32
192.168.3.60:3000    13:13:37-GMT->13:13:47     588.2   1.43   1.34    0.87
192.168.3.61:3000    13:13:38-GMT->13:13:48     542.7   1.58   1.55    1.05

	====writes_reply====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    13:13:37-GMT->13:13:47     573.1   1.03   0.91    0.49
192.168.3.53:3000    13:13:35-GMT->13:13:45     595.2   0.67   0.60    0.25
192.168.3.54:3000    13:13:36-GMT->13:13:46     553.2   2.10   1.95    1.28
192.168.3.55:3000    13:13:36-GMT->13:13:46     612.6   1.18   1.11    0.75
192.168.3.56:3000    13:13:35-GMT->13:13:45     580.4   2.02   1.96    1.59
192.168.3.57:3000    13:13:36-GMT->13:13:46     633.2   0.66   0.54    0.35
192.168.3.58:3000    13:13:35-GMT->13:13:45     605.4   1.30   1.17    0.69
192.168.3.59:3000    13:13:33-GMT->13:13:43     621.2   1.82   1.75    1.32
192.168.3.60:3000    13:13:37-GMT->13:13:47     588.2   1.43   1.34    0.87
192.168.3.61:3000    13:13:38-GMT->13:13:48     542.7   1.58   1.55    1.05

	====reads====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    13:13:37-GMT->13:13:47   10023.4   0.30   0.20    0.16
192.168.3.53:3000    13:13:35-GMT->13:13:45    9858.0   0.20   0.16    0.14
192.168.3.54:3000    13:13:36-GMT->13:13:46    9353.1   0.34   0.20    0.18
192.168.3.55:3000    13:13:36-GMT->13:13:46   10229.3   0.21   0.16    0.14
192.168.3.56:3000    13:13:35-GMT->13:13:45    9570.0   0.25   0.18    0.16
192.168.3.57:3000    13:13:36-GMT->13:13:46   10943.8   0.25   0.14    0.12
192.168.3.58:3000    13:13:35-GMT->13:13:45   10300.6   0.25   0.17    0.15
192.168.3.59:3000    13:13:33-GMT->13:13:43   10174.5   0.24   0.15    0.14
192.168.3.60:3000    13:13:37-GMT->13:13:47    9729.3   0.27   0.16    0.15
192.168.3.61:3000    13:13:38-GMT->13:13:48    9211.9   0.27   0.20    0.18

#16

NSup is taking ~500 seconds per run. Since this is longer than the nsup-period, it is effectively always running, which is why one of your cores is always consumed. I would like to see your latency characteristics when nsup isn’t running; could we change nsup to run once an hour instead of every two minutes?

asadm -e "set-config:context=service;nsup-period=3600"

Wait about 15 minutes and run

asadm -e "show latency"

#17

OK, we will change the setting and check the latency.


#18

hi, kporter

we changed the setting on node 192.168.3.55 and found that this node’s latency is VERY GOOD.

Monitor> latency -k reads

	====reads====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    04:45:57-GMT->04:46:07   11037.1   0.31   0.20    0.18
192.168.3.53:3000    04:45:58-GMT->04:46:08   10837.6   0.22   0.15    0.13
192.168.3.54:3000    04:45:55-GMT->04:46:05   10289.2   0.29   0.21    0.19
192.168.3.55:3000    04:46:01-GMT->04:46:11   11482.8   0.11   0.00    0.00
192.168.3.56:3000    04:46:00-GMT->04:46:10   10907.2   0.35   0.17    0.15
192.168.3.57:3000    04:45:58-GMT->04:46:08   11875.9   0.24   0.17    0.15
192.168.3.58:3000    04:46:04-GMT->04:46:14   11717.0   0.28   0.14    0.13
192.168.3.59:3000    04:45:54-GMT->04:46:04   11196.7   0.27   0.15    0.13
192.168.3.60:3000    04:45:56-GMT->04:46:06   10676.4   0.22   0.15    0.13
192.168.3.61:3000    04:45:59-GMT->04:46:09   10239.0   0.25   0.17    0.17

Monitor> latency -k reads

	====reads====
                                   timespan   ops/sec   >1ms   >8ms   >64ms
192.168.3.52:3000    04:46:07-GMT->04:46:17   11429.8   0.30   0.17    0.15
192.168.3.53:3000    04:46:08-GMT->04:46:18   11174.2   0.27   0.19    0.17
192.168.3.54:3000    04:46:05-GMT->04:46:15   10653.6   0.18   0.13    0.11
192.168.3.55:3000    04:46:01-GMT->04:46:11   11482.8   0.11   0.00    0.00
192.168.3.56:3000    04:46:00-GMT->04:46:10   10907.2   0.35   0.17    0.15
192.168.3.57:3000    04:46:08-GMT->04:46:18   12237.7   0.22   0.15    0.13
192.168.3.58:3000    04:46:04-GMT->04:46:14   11717.0   0.28   0.14    0.13
192.168.3.59:3000    04:46:04-GMT->04:46:14   11490.1   0.36   0.20    0.19
192.168.3.60:3000    04:46:06-GMT->04:46:16   11045.2   0.24   0.16    0.14
192.168.3.61:3000    04:46:09-GMT->04:46:19   10501.6   0.32   0.21    0.19

So can we disable the automatic nsup, and run nsup manually at 2 AM via crontab?


#19

So can we disable the automatic nsup, and run nsup manually at 2 AM via crontab?

There isn’t an option to run nsup on demand. To simulate an on-demand run, you could have the cronjob set the nsup-period to 1 second, sleep for 10 seconds, and then set it back to 24 hours.

We also do not recommend such a setting: with nsup not running between scheduled passes, expired records will not be removed, and you could find your disk filling with expired objects. You will want to monitor your namespace’s free-pct-disk and trigger nsup manually if used-pct-disk is unexpectedly approaching high-water-disk-pct.

Example script for cronjob:

# Drop nsup-period so nsup kicks off a pass immediately...
asinfo -v "set-config:context=service;nsup-period=1"
# ...give it a moment to start...
sleep 10
# ...then restore the long period so it stays quiet until the next cron run.
asinfo -v "set-config:context=service;nsup-period=1d"
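If you go with the 2 AM approach, the script above could be wired into cron with an entry along these lines (the script path and log file here are hypothetical, not from this thread):

```shell
# crontab entry: trigger an nsup pass at 02:00 every day
# m h dom mon dow  command
0 2 * * * /usr/local/bin/trigger-nsup.sh >> /var/log/trigger-nsup.log 2>&1
```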

#20

hi, kporter, thx.
we will monitor the free-pct-disk.
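Not something from this thread, but a minimal sketch of such monitoring: `asinfo -v "namespace/<ns>"` returns semicolon-separated `name=value` stats, so free-pct-disk can be pulled out with standard tools. The namespace name `cmmapping` is taken from the logs above; the alert threshold is an assumption you would tune against your high-water-disk-pct.

```shell
#!/bin/sh
# Hypothetical free-pct-disk monitor sketch.
THRESHOLD=20   # assumed alert threshold (percent), tune to your config

# Extract the free-pct-disk value from a semicolon-separated stats string.
free_pct_disk() {
    echo "$1" | tr ';' '\n' | awk -F= '$1 == "free-pct-disk" { print $2 }'
}

# In production this would be:  stats=$(asinfo -v "namespace/cmmapping")
# Here a sample stats string stands in so the sketch is self-contained.
stats="objects=1073013200;free-pct-disk=45;used-pct-disk=55"

free=$(free_pct_disk "$stats")
if [ "$free" -lt "$THRESHOLD" ]; then
    echo "WARNING: free-pct-disk=$free below $THRESHOLD, trigger nsup"
fi
```

A cronjob running a check like this could invoke the nsup-period toggle script whenever free space gets low, instead of waiting for the nightly run.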

PS: why does nsup have such a big impact?
Can we improve it?