What is the right way to delete sets completely?

In my workplace we need to keep multiple versions of data, so we are planning to store each version in a different set. But then we have to delete stale, unused sets. What is the best way to do this?

As there is a limit of 1023 sets per namespace, we need to get rid of the empty sets completely. This needs a reboot.

So if I mark a set for deletion and restart Aerospike on each node one by one, will the set (along with its name and data) go away?

My concern is that deleted data might come back (after the reboot) if the data on disk (SSD) is not updated, as mentioned in https://www.aerospike.com/docs/guide/FAQ.html under the section “How do deletes work?”.

From what I understand, the foolproof way to delete stuff would be to set a very low TTL on each record and then delete each record, or mark the entire set for deletion.
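A minimal sketch of that low-TTL approach with the Python client. The helper, the `touch()`-then-`remove()` sequence, and the namespace/set names are illustrative assumptions, not from the original post:

```python
# Sketch only: reset each record's TTL to a very low value, then delete it.
# 'client' is a connected aerospike.Client, e.g.
#   client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()

def expire_and_delete(client, namespace, set_name, user_keys, ttl=1):
    """Rewrite each record's TTL to `ttl` seconds, then remove it.

    Returns the number of records processed."""
    removed = 0
    for user_key in user_keys:
        key = (namespace, set_name, user_key)
        client.touch(key, ttl)   # touch() rewrites only metadata (the TTL)
        client.remove(key)       # then drop the index entry
        removed += 1
    return removed
```

Against a live cluster this would be called as `expire_and_delete(client, 'ssd', 'tp2', keys)`; as the test later in this thread shows, even this does not guarantee the deletes survive a cold start.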

This suggestion also has a significant risk of deletes coming back on a cold start, but it does reduce the odds a bit.

In this scenario the only option I can provide is to delete the set as you have already done, but since you have already rebooted a node, this will need to be done again.

With the set deleted:
  For each node:
    stop Aerospike on that node
    zeroize the disks associated with the namespace with the deleted set
    start Aerospike on that node
    wait for migrations to complete
    continue to the next node
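The “wait for migrations to complete” step of the loop above can be watched from the Python client. A sketch, assuming the client’s `info_all()` call (which returns a dict of node address to `(error, response)` tuples) and a `migrate_partitions_remaining` statistic; the statistic name varies across server versions (older servers expose `migrate_progress_send`/`migrate_progress_recv` instead):

```python
import time

def parse_info(response):
    """Parse an Aerospike info response like 'a=1;b=2' into a dict."""
    return dict(p.split('=', 1) for p in response.split(';') if '=' in p)

def migrations_remaining(stats_by_node):
    """Sum outstanding migrate partitions across all nodes.

    `stats_by_node` has the shape returned by client.info_all('statistics'):
    a dict of node address -> (error, response) tuples."""
    total = 0
    for _node, (err, response) in stats_by_node.items():
        if err not in (None, 0) or not response:
            continue  # skip nodes that did not answer
        stats = parse_info(response)
        total += int(stats.get('migrate_partitions_remaining', 0))
    return total

def wait_for_migrations(client, poll_secs=5):
    """Block until the cluster reports no pending migrations.

    Call this between node restarts; requires a connected aerospike.Client."""
    while migrations_remaining(client.info_all('statistics')) > 0:
        time.sleep(poll_secs)
```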

zeroize the disks associated with the namespace with the deleted set

Thanks for the response. What does this mean?

By zeroize, I mean to set all of the bits on your disk to zero.

For SSDs you can do this with blkdiscard:

sudo blkdiscard /dev/<INSERT DEVICE NAME HERE>

On other disks you will need to use dd:

dd if=/dev/zero of=/dev/<INSERT DEVICE NAME HERE> bs=1M

But this will delete the entire namespace, right?

http://www.aerospike.com/docs/guide/FAQ.html

Records whose TTL have expired will never be reindexed after a reboot. Setting a low TTL value will ensure the expected deletion behavior.

What does this mean?

You in your previous comment mentioned that

This suggestion also has a significant risk of deletes coming back on a coldstart but it does reduce the odds a bit.

Under what cases will expired records be re-indexed?

I tested this and realized this is not the case. I added 30,000 records to a set.

  • then deleted 10k of them
  • set a ttl of 10 secs and deleted the next 10k
  • set a ttl of 10 secs for the other 10k

Before the restart I did a get on all of them and they gave the expected results. Then I restarted Aerospike, and almost all of them came back. This is the code I used:

# import the module
import aerospike

# Configure the client
config = {
    'hosts': [('127.0.0.1', 3000)]
}

# Create a client and connect it to the cluster
client = aerospike.client(config).connect()

# Records are addressable via a tuple of (namespace, set, key)
set_name = 'tp2'
def putDelete():
    policy = {'key': aerospike.POLICY_KEY_SEND}  # store the key along with the record
    meta = {'ttl': 0}
    # client.put(key, bins, meta, policy)
    # bins = {'name': 'John Doe','age': 33}
    # client.put(key, bins)
    #
    for i in xrange(3 * (10 ** 4)):
        key = ('ssd', set_name, str(i))
        bins = {'age': i}
        client.put(key, bins, meta, policy)
        if i % 10 ** 3 == 0:
            print client.get(key)[1:]
    print "==========="
    # Read a record
    key = ('ssd', set_name, '100')
    (key, metadata, record) = client.get(key)
    print (key, metadata, record)
    # Close the connection to the Aerospike cluster
    for i in xrange(10 ** 4):
        key = ('ssd', set_name, str(i))
        client.remove(key)
        if i % 10 ** 3 == 0:
            print client.get(key)[1:]
    print "==========="
    for i in xrange(10 ** 4, 2 * (10 ** 4)):
        key = ('ssd', set_name, str(i))
        meta = {'ttl': 10}
        bins = {}
        client.put(key, bins, meta)
        client.remove(key)
        if i % 10 ** 3 == 0:
            print client.get(key)[1:]
    print "==========="
    for i in xrange(2 * (10 ** 4), 3 * (10 ** 4)):
        key = ('ssd', set_name, str(i))
        meta = {'ttl': 10}
        bins = {}
        client.put(key, bins, meta)
        if i % 10 ** 3 == 0:
            print client.get(key)[1:]
    print "==========="


def read():
    for i in xrange(10 ** 4):
        key = ('ssd', set_name, str(i))
        if i % 10 ** 3 == 0:
            print client.get(key)[1:], key
    print "==========="
    for i in xrange(10 ** 4, 2 * (10 ** 4)):
        key = ('ssd', set_name, str(i))
        if i % 10 ** 3 == 0:
            print client.get(key)[1:], key
    print "==========="
    for i in xrange(2 * (10 ** 4), 3 * (10 ** 4)):
        key = ('ssd', set_name, str(i))
        if i % 10 ** 3 == 0:
            print client.get(key)[1:], key
    print "==========="

# putDelete()
read()

client.close()

Delete operations only do an in-memory index deletion for the records. The records on the disk are asynchronously removed by a separate defragmentation process. https://www.aerospike.com/docs/guide/FAQ.html

What is this process? How often does it run? Is there any setting which increases its frequency (some performance hit is okay)? Is there a way to know whether this process ran or not?

Yes, I requested that this be purged from our FAQ because it is incorrect. Doing this introduces a small chance that a record will not return.

Knowing these details will not help… Each device associated with a namespace continually processes a defrag queue for its underlying device.

The only way for a record to be truly deleted from Aerospike’s persistence layer is for the copy of the record with the largest TTL value to expire, or for all copies of a given record to be overwritten. Because at any given moment there can be multiple untracked dead copies of a tracked live record on the storage devices, we cannot efficiently purge these copies from persistent storage. Copies are generated by updates to a record, or by the record’s container block being defragmented.

It will delete all the data for that node. Assuming replication factor > 1, the data will also exist on other nodes. The procedure I provided says to wait for migrations to complete, which means the data will be migrated back to this node (without the data that was deleted).

As of version 3.12.1, released in April 2017, set-delete is deprecated. The new feature of deleting all the data in a set or namespace is now supported in the database.

See the truncate documentation in the info command reference:

See Managing Sets in a Namespace: http://www.aerospike.com/docs/operations/manage/sets#truncating-a-set-in-a-namespace

Truncate can also be executed from the client APIs. Here is the java documentation: http://www.aerospike.com/apidocs/java/com/aerospike/client/AerospikeClient.html
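For completeness, a sketch of calling truncate from the Python client against a 3.12.1+ server. The wrapper function is illustrative (the client method itself is `truncate(namespace, set, nanos)`), and the namespace/set names reuse the ones from the test script above:

```python
# Sketch: truncate a set via the Python client (server 3.12.1+).
# 'client' is a connected aerospike.Client; names are placeholders.

def truncate_set(client, namespace, set_name, cutoff_secs=None):
    """Truncate records in `set_name` last updated before `cutoff_secs`
    (Unix time in seconds); None means everything written before now.

    Returns the nanosecond cutoff passed to the server."""
    nanos = 0 if cutoff_secs is None else int(cutoff_secs * 10**9)
    client.truncate(namespace, set_name, nanos)
    return nanos
```

For example, `truncate_set(client, 'ssd', 'tp2')` drops every record currently in the set. Note that, as discussed below, truncate is only durable across a cold start on the Enterprise Edition.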


In order to completely remove sets in a namespace, we first truncated all the sets in the namespace, and the show sets command showed all sets with 0 objects. But after immediately rebooting the Aerospike server, the deleted data came back! Now I wonder how I can remove empty sets…

The process you describe will only work on Aerospike Enterprise because truncate is not durable on Community Edition.

囧rz, I found out that not only is truncated data not durable, but deleted data isn’t either… I would say that without this feature, Aerospike Community Edition is meaningless…

Durable deletes are a very new feature; most users utilize expiration/eviction, which costs fewer resources than durable deletes.

Meaningless is subjective to your use case. Aerospike has been in use for years before durable deletes, quite successfully. The question is whether you cold-start a node regularly. If so, this might be one feature you want to look to the Enterprise Edition for.

You are right, that’s what can be done without durable deletion.

The question is whether you cold-start a node regularly.

It hurts if you need to restart the server (and you will need to someday); it has nothing to do with frequency.

Aerospike has been in use for years before durable deletes, quite successfully.

The reason is, as @kporter says above, that most users utilize expiration/eviction in place of durable deletes, or use Aerospike as a pure in-memory cache without persistence.

@arganzheng Very few people use Aerospike as a pure in memory system without persistence. I can’t think of a commercial customer - a single one - that uses Aerospike without persistence. Most do have some data in RAM-only configurations, but they back everything up with disk for fast restarts without having cache warm-up problems. You might quibble and say that’s still a cache case; it’s hard to say. A more reliable cache? An expiration database? Just words.

There are several application patterns I see:

  • Cases where a delete is “advisory” and is not a hard requirement from a data management and lifecycle perspective. This is the kind of case where you never expect the application to request the data again, and if the application does happen to request it, behavior doesn’t appreciably change depending on whether that request returns “not found” or a data record with an old timestamp. These cases are remarkably prevalent.

  • Behavioral analytics, in which older and older data just gets less useful, but expiration is the desired policy.

Remember, Aerospike is already an AP database. Plenty of people need a really fast AP database capable of terabytes.

The other reality is that maintaining tombstones has a cost. In the Enterprise Edition, we support both EXPUNGE (which is what Community has) and DURABLE deletes. In many, many cases you look at the data model and realize you don’t want to pay the cost of tombstones and tombstone management.
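On Enterprise, a durable delete is just a write-policy flag on the delete call. A sketch with the Python client; the `durable_delete` policy key and the `remove()` keyword usage are assumptions about your client version (it requires an Enterprise server, 3.10+):

```python
# Sketch: a durable delete writes a tombstone to the persistence layer
# instead of only dropping the in-memory index entry, so the delete
# survives a cold start. Enterprise Edition only.

def durable_remove(client, key):
    """Delete `key` durably; `client` is a connected aerospike.Client."""
    client.remove(key, policy={'durable_delete': True})
```

On Community Edition the flag has no durable effect; deletes remain EXPUNGE.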

I can’t comment at the moment regarding where we are going regarding this and similar features, but we certainly do have interesting plans coming up. Aerospike is not standing still in this regard.