Benchmark always begins with a spike

The benchmark always begins with a spike, as shown below (see the values in bold). This interferes with our test use cases. Is there a way to configure the tool to suppress this behaviour?

[root@aspike benchmarks]# ./run_benchmarks -h -p 3000 -n test -k 10000000 -b 1 -o B:1400 -w RU,80 -g 1000 -T 50 -z 8
Benchmark: 3000, namespace: pcrf, set: testset, threads: 8, workload: READ_UPDATE
read: 80% (all bins: 100%, single bin: 0%), write: 20% (all bins: 100%, single bin: 0%)
keys: 10000000, start key: 0, transactions: 0, bins: 1, random values: false, throughput: 1000 tps
read policy:
    socketTimeout: 50, totalTimeout: 50, maxRetries: 2, sleepBetweenRetries: 0
    consistencyLevel: CONSISTENCY_ONE, replica: SEQUENCE, reportNotFound: false
write policy:
    socketTimeout: 50, totalTimeout: 50, maxRetries: 2, sleepBetweenRetries: 500
    commitLevel: COMMIT_ALL
Sync: connPoolsPerNode: 1
bin[0]: byte[1400]
debug: false
2017-06-28 17:45:35.516 INFO Thread main Add node BB9A689B63E16FA 3000
2017-06-28 17:45:35.535 INFO Thread main Add node BB9ACB2B93E16FA 3000
2017-06-28 17:45:36.593 write(tps=**1960** timeouts=0 errors=0) read(tps=**8129** timeouts=0 errors=0) total(tps=10089 timeouts=0 errors=0)
2017-06-28 17:45:37.593 write(tps=211 timeouts=0 errors=0) read(tps=797 timeouts=0 errors=0) total(tps=1008 timeouts=0 errors=0)
2017-06-28 17:45:38.593 write(tps=192 timeouts=0 errors=0) read(tps=822 timeouts=0 errors=0) total(tps=1014 timeouts=0 errors=0)
2017-06-28 17:45:39.594 write(tps=214 timeouts=0 errors=0) read(tps=802 timeouts=0 errors=0) total(tps=1016 timeouts=0 errors=0)
2017-06-28 17:45:40.594 write(tps=179 timeouts=0 errors=0) read(tps=832 timeouts=0 errors=0) total(tps=1011 timeouts=0 errors=0)
2017-06-28 17:45:41.595 write(tps=198 timeouts=0 errors=0) read(tps=816 timeouts=0 errors=0) total(tps=1014 timeouts=0 errors=0)

This seems to come from the throughput throttling. Would you prefer an over-damped settling toward the -g 1000 target? In that case the rate would reach 1000 tps only over several intervals. Since this is a steady-state benchmark, and also to account for JIT compilation in the Java benchmark, why not discard the first minute of data as an artifact of the tool? There is not enough data here, but the behavior looks slightly under-damped. Either way it is a tradeoff: it is hard to get a step-function start.
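To illustrate the over-damped/under-damped tradeoff, here is a minimal, hypothetical sketch (not the benchmark tool's actual throttle) of a proportional rate governor: each interval the rate is corrected by a fraction of the error against the target. A gain below 1 settles smoothly without oscillating (over-damped), a gain above 1 overshoots and rings (under-damped), and only a gain of exactly 1 would give a step-function start.

```java
// Hypothetical throttle-settling sketch; NOT the benchmark tool's
// actual implementation. All numbers below are illustrative.
public class SettlingDemo {

    // Apply one proportional correction per measurement interval:
    //   rate <- rate + gain * (target - rate)
    static double settle(double rate, double target, double gain, int intervals) {
        for (int i = 0; i < intervals; i++) {
            rate += gain * (target - rate);
        }
        return rate;
    }

    public static void main(String[] args) {
        double target = 1000.0;  // the -g 1000 target
        double rate = 10000.0;   // first-interval burst, roughly as in the log
        double gain = 0.9;       // gain < 1: over-damped, no oscillation
        for (int i = 0; i < 6; i++) {
            System.out.printf("interval %d: tps=%.0f%n", i, rate);
            rate = settle(rate, target, gain, 1);
        }
    }
}
```

With a gain of 0.9 the simulated rate falls from 10000 to 1900, then 1090, 1009, and so on toward 1000 — a single spike followed by a quick, monotonic settle, qualitatively like the log above. Rerunning with a gain of, say, 1.3 instead makes the rate oscillate above and below 1000 before converging, which is the under-damped alternative.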