Migration of Aerospike cluster without downtime

@kporter @Albot Sorry for the delay. Below are the logs from the new node.

Jul 11 2017 14:44:15 GMT+0530: DEBUG (fabric): (fabric.c:1263) asking about node bb9245fbd565000 good read 33346 good write 33346
Jul 11 2017 14:44:15 GMT+0530: DEBUG (fabric): (fabric.c:1263) asking about node bb94179bd565000 good read 33346 good write 33346
Jul 11 2017 14:44:15 GMT+0530: DEBUG (paxos): (paxos.c:2040) PAXOS message with ID 9 received from node bb9ef48bd565000
Jul 11 2017 14:44:15 GMT+0530: DEBUG (paxos): (paxos.c:2743) unwrapped | received paxos message from node bb9ef48bd565000 command SYNC (9)
Jul 11 2017 14:44:15 GMT+0530: DEBUG (paxos): (paxos.c:3076) received sync message from bb9ef48bd565000
Jul 11 2017 14:44:15 GMT+0530: DEBUG (paxos): (paxos.c:428) SYNC getting cluster key 54a8295ed4c6c4d2
**_Jul 11 2017 14:44:15 GMT+0530: INFO (partition): (partition.c:235) DISALLOW MIGRATIONS_**
Jul 11 2017 14:44:15 GMT+0530: INFO (paxos): (paxos.c:147) cluster_key set to 0x54a8295ed4c6c4d2
Jul 11 2017 14:44:15 GMT+0530: DEBUG (paxos): (paxos.c:442) setting succession[0] = bb9ef48bd565000 to alive 

When I checked the machines in the existing cluster, I found that the fabric process is bound to a different NIC:

tcp        0      0 10.40.0.151:3001            10.40.0.158:3064            ESTABLISHED 0          317516582  -
tcp        0      0 10.40.0.151:3001            10.40.0.153:47604           ESTABLISHED 0          156429404  -
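
For reference, the output above came from something like the following on one of the existing nodes (ss -tnp would give similar information on newer systems):

netstat -antpe | grep 3001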

ifconfig output from a machine in the existing cluster:

eth0      Link encap:Ethernet  HWaddr 00:50:56:BD:0F:2E
          inet addr:10.40.0.151  Bcast:10.40.3.255  Mask:255.255.252.0
          inet6 addr: fe80::250:56ff:febd:f2e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9043214405 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7371972367 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2543220222540 (2.3 TiB)  TX bytes:2551073592952 (2.3 TiB)

eth1      Link encap:Ethernet  HWaddr 00:50:56:BD:1E:78
          inet addr:172.20.21.185  Bcast:172.20.23.255  Mask:255.255.248.0
          inet6 addr: fe80::250:56ff:febd:1e78/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6609006486 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3849719449 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1966068229402 (1.7 TiB)  TX bytes:1002592992809 (933.7 GiB)

eth2      Link encap:Ethernet  HWaddr 00:50:56:BD:0C:3B
          inet addr:10.20.0.150  Bcast:10.20.3.255  Mask:255.255.252.0
          inet6 addr: fe80::250:56ff:febd:c3b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1521778656 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:92570021828 (86.2 GiB)  TX bytes:760 (760.0 b)

eth3      Link encap:Ethernet  HWaddr 00:50:56:BD:78:25
          inet addr:10.30.0.159  Bcast:10.30.3.255  Mask:255.255.252.0
          inet6 addr: fe80::250:56ff:febd:7825/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1521768774 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:92567914320 (86.2 GiB)  TX bytes:718 (718.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:702660367 errors:0 dropped:0 overruns:0 frame:0
          TX packets:702660367 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:350155915456 (326.1 GiB)  TX bytes:350155915456 (326.1 GiB)

which is not accessible from the new machine (telnet hangs at "Trying..." and never connects):

[root@fcp-lgpaerospike3 aerospike]# telnet 10.40.0.151 3001
Trying 10.40.0.151...
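
Before rebinding, I can at least confirm that the eth1 address is reachable from the new machine (assuming ICMP is not blocked between the subnets):

ping -c 3 172.20.21.185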

Now I need to change the config of the existing nodes to bind the fabric process to eth1 (172.20.21.185); the sketch below shows roughly what I have in mind. My only concern is that we recently deleted a lot of entries from the cluster, and I read somewhere that restarting a node can make those entries reappear (Expired/Deleted data reappears after server is restarted).
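
My understanding is that the fabric bind address is set in the network stanza of aerospike.conf, roughly like this (a sketch based on my reading of the docs; please correct me if the parameter name differs for our server version):

network {
    fabric {
        # bind fabric to eth1 so the new node can reach it
        # (172.20.21.185 is the eth1 address from the ifconfig output above)
        address 172.20.21.185
        port 3001
    }
}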

So my question is: is there any way to verify that all the deleted entries have been flushed to disk, so that they won't reappear if I restart the node?