I’m using the lset large data type to store a large number of unique integers, where the same value may be submitted repeatedly. It’s effectively a list of ids that have accessed the system recently.
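For context, each write looks roughly like the sketch below (using the Java client’s LargeSet API; the host, namespace, set, and bin names are placeholders):

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Value;
import com.aerospike.client.large.LargeSet;

public class LsetWrite {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
        // One record holds the set; the "ids" bin is the lset.
        Key key = new Key("test", "recent", "tracker-1");
        LargeSet lset = client.getLargeSet(null, key, "ids", null);
        // Adding an id that is already present raises the
        // "LDT-Unique Key or Value Violation" error described below.
        lset.add(Value.get(123456789L));
        client.close();
    }
}
```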
However, any time the same id gets stored, an “LDT-Unique Key or Value Violation” error is raised. I hardly consider this an error, and it produces large amounts of meaningless log output. Is there a way to suppress this message without disabling UDF logs altogether? I could always check for existence before attempting the put, but that seems very inefficient.
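For reference, the workaround I’d like to avoid looks something like this (continuing the sketch above, with `lset` as the LargeSet handle from the previous snippet):

```java
// Inefficient workaround: an extra round trip per write just to
// dodge the uniqueness error on duplicate ids (and it isn’t atomic,
// so a concurrent writer can still trigger the error anyway).
Value id = Value.get(123456789L);
if (!lset.exists(id)) {
    lset.add(id);
}
```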
Also, what are the scaling limits of this? The documentation seems to suggest a maximum set size of 2 GB, but the DEFAULT_SMALL_CAPACITY setting is only 500k. If I’m dealing with, say, 8-byte values, should I expect to be able to fit 10^7 (or even 10^8) entries in the set, or is that not going to work?
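Back-of-envelope (ignoring any LDT sub-record overhead): 10^7 entries × 8 bytes ≈ 80 MB, and 10^8 entries × 8 bytes ≈ 800 MB. Both are under the 2 GB figure, but both are far past a 500k DEFAULT_SMALL_CAPACITY, so it isn’t clear to me which limit actually governs.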
Finally, what is the recommended way to insert large amounts of data quickly? Should we write our own asynchronous wrapper and maintain many connections, or is there a better way to do this? (Neither the execute call nor the LargeSet class appears to have an asynchronous version.)
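To make “our own asynchronous wrapper” concrete, I mean something like the sketch below: a fixed thread pool fanning the synchronous add calls out in parallel (pool size and names are placeholders). I assume these mostly serialize on the record lock server-side anyway, since every add targets the same record, which is partly why I’m asking whether there’s a better approach:

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Value;
import com.aerospike.client.large.LargeSet;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BulkLsetLoader {
    // Fan synchronous lset.add calls out over a fixed pool of workers.
    static void loadAll(AerospikeClient client, Iterable<Long> ids)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(32); // placeholder size
        for (long id : ids) {
            final long value = id;
            pool.submit(() -> {
                // The client is thread-safe, so all workers share one instance.
                Key key = new Key("test", "recent", "tracker-1");
                LargeSet lset = client.getLargeSet(null, key, "ids", null);
                lset.add(Value.get(value));
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}
```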