How do I remove a set?

Since the number of sets is limited to 1023, is it possible to remove a set once created, either through the API or through asinfo? I tried the following command: asinfo -v "set-config:context=namespace;id=blocked;set=testset;set-delete=true;"

I see that only the records are removed; the set itself remains. How do we remove a set?

Regards Suthindran


Hi Suthindran,

Removing a set dynamically is not supported in Aerospike. Once created, a set persists in the system for its lifetime. Once you mark set-delete as true, the flag resets after all data has been removed from the set.

The only way to remove an empty set is to restart the server.

EDIT 1: The only way to remove a set is to cold restart the server after the contents of that set have expired or the disks the set may be stored on have been wiped.



Let me point out that if you need this many sets, we would like to offer assistance with your database design; needing this many sets would be an extremely unusual use case that I would personally like to know more about.

Thanks Petter

Thanks pratyyy for your response. The restart also does not drop the set. Could you please confirm once again? If required, I can share screenshots.

Regards Suthindran

Petter, thanks for your response. We are at a very high traffic level and expect to grow several times over in the coming months. A set is created every hour (keyed on the hour of the day and the day) to capture the data flow for analysis and to provide some realtime hourly consolidation. We would not mind archiving the data after a month. We may not create 1023 sets; however, once we cross a month, if we do not drop the sets we will exceed the limit.

I understand that there are many alternatives for doing this; however, I think it would be appropriate to have an interface for dropping a set.

Regards Suthindran


If you could describe your use case in more detail, perhaps we could help you with something that might work for you.

Thanks Petter


Here is some additional information from our labs:

Thanks Petter

Hi Petter, precisely: the content is misleading. It says it deletes sets and data, but it only deletes the data, not the set. I suggest changing the documentation or providing an interface to drop a set.

Hope this helps. I am working on an alternative solution for the time being.

Regards Suthindran


Did you find an alternative solution? It would be nice to know how it is possible to permanently delete a set.

Br Ripa

The workaround is to reuse the sets, with expiration configured so that all records are cleared before a set is ready for reuse. For example, to store data on a daily basis, the set name is generated as a function of (day id % 3), with the record expiration set to 24 hours. So on the third day, the first day's set will be empty and ready for reuse the next day.
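A minimal sketch of this rotation in Python (the `daily_` set-name prefix, the rotation size of 3, and the use of the ordinal day number are my own illustrative assumptions, not from the original post):

```python
# Rotate writes across a fixed pool of sets so that, by the time a
# set name comes around again, its previous records have expired.
from datetime import date

NUM_SETS = 3           # size of the rotation pool
RECORD_TTL_HOURS = 24  # each record expires after a day

def set_name_for(day: date) -> str:
    """Pick the set for a given day: day id modulo the rotation size."""
    day_id = day.toordinal()
    return f"daily_{day_id % NUM_SETS}"

# Six consecutive days cycle through the same three names, so day 4
# reuses day 1's (by then empty) set.
names = [set_name_for(date.fromordinal(738000 + i)) for i in range(6)]
```

The client would then write each record into `set_name_for(today)` with a 24-hour TTL, and the previous occupant of that set name has already expired by the time the name is reused.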

Hope this helps. Regards Suthindran

That is clever, reusing the sets. Documentation about “drop set” varies a little, and I have to test a bit more whether it is possible to drop a set at all (restarting the service, etc.).

Thanks for comments! Br.ripa

Here is one way I observe that a set is dropped:

  1. Insert records (through the Java client) into a set, with an expiration set for each record.
  2. After all the records have expired, use asinfo to set the ‘set-delete’ flag to true.
  3. Restart Aerospike.
  4. The set will be dropped.
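The steps above might look roughly like this on the command line (the namespace name `test` and set name `testset` are placeholders; adapt them to your cluster):

```
# 2. After all records in the set have expired, mark the set for deletion:
asinfo -v "set-config:context=namespace;id=test;set=testset;set-delete=true;"

# 3. Cold restart the Aerospike service on each node:
sudo service aerospike coldstart

# 4. Verify the set no longer appears in the set metadata:
asinfo -v "sets/test"
```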

When I tried to manually insert and delete records using aql, the set was not dropped. However, with the steps above, I observed the set being dropped.

I am not sure if this is a feature or a bug. Please try it out and let me know.

Regards Suthindran


Could you post your namespace context configuration for when you were running this?

Thanks Petter

Please find the namespace configuration below:

```
namespace archive {
    replication-factor 2
    memory-size 4G
    default-ttl 0 # 30 days, use 0 to never expire/evict.
    storage-engine device {
        file /data/aerospike/archive.dat
        filesize 16G
        data-in-memory false # Store data in memory in addition to file.
    }
}
```

Regards Suthindran


The above method will work only if the records in the set have expired (NOT been evicted or deleted) across the entire cluster; otherwise it will not work. This is because during a cold start the server checks each record's TTL first, and if a record has not expired it rebuilds the index from it; this includes evicted and deleted records, which are assumed to still be valid.

Please note that this is intended behavior of Aerospike and not a bug.

Thanks Petter

Thanks Petter for the explanation. However, as a consumer I would ideally look for a solution that drops an empty set once it is marked for delete, irrespective of whether the records expired or were deleted/evicted. If you could raise this as a change request, that would be great.

Regards Suthindran

I cannot drop any set or its records. I tried that method: the data disappeared and the set was marked for deletion, but after a cold restart all the data reappeared and the set was not removed. I am using the Enterprise version.

Also, I can’t understand how to delete wrong records in a set if I have lost the key. The only solution I’ve found is scanning the records on the client, identifying them by some marker (a bin value or similar), and removing them by the scanned key.

How can data be removed after expiration if the TTL was 0? That data will never expire.

Please help.

I think this will help:

Many thanks!

I was not aware of the “cold-start-empty” parameter in the namespace config and performed a cold start simply with the command “service aerospike coldstart”.

So after setting cold-start-empty to true, I really was spared the annoying junk data. But the junk sets are still present and are not marked set-delete true. How can I delete them too?

If you need to purge the data, then use the set-delete instructions discussed above.

But I assume you are trying to get rid of the set metadata (such as the set name as listed in asinfo -v "sets/"). First, I wouldn’t be concerned about rogue sets unless you are near the 1024 set-name limit, as removing them is an involved process. That said, to get rid of the set metadata, you will need to purge the data from the set as discussed above. Assuming replication-factor > 1, you will then need to CAREFULLY do a rolling cold restart with cold-start-empty set to true. After each restart, wait for the data to fully replicate before continuing to the next node.
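For reference, a sketch of where cold-start-empty would go in aerospike.conf (file path and size copied from the configuration posted earlier; to the best of my knowledge this parameter is scoped to the storage-engine device block):

```
namespace archive {
    replication-factor 2
    storage-engine device {
        file /data/aerospike/archive.dat
        filesize 16G
        cold-start-empty true # on cold start, ignore existing device data and start empty
    }
}
```

Remember to revert cold-start-empty to false after the rolling restart, or a future cold start will silently discard the node's data.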