A few questions on Aerospike

Hi,

We are thinking of exploring Aerospike for our use case. We are not really looking for a key-value store; our use case will basically involve lots of SQL-like queries, with indexes etc. On initial investigation, we are stuck on the following.

  1. Does Aerospike's query support most query operations like “EQUALS”, “NOT IN”, “NOT EQUALS”, “GREATER THAN”, “LESS THAN”, etc.?

  2. Does Aerospike hit all nodes/partitions for its SQL-like queries and get the data back, or can it find out, based on an index and some custom logic, how to query data on only one or two partitions (if we managed to store the data on one or two partitions only)?

  3. Can we specify a custom partition key so that all the data of a particular type is stored on only one node? For example, in a students table, instead of partitioning by studentId, we would partition by class, because all queries include the class.

  4. If our data size will be approximately 1 TB, is it better to store it on only one or two machines, or is it better to spread it evenly, like 50 GB on each machine?

  5. Is Aerospike a good option for this use case, where we need to make lots of SQL-based queries with lots of indexes, or is it mostly good for key-value use cases where the key is known in advance? In other words, is it closer to MySQL or closer to NoSQL?

Thanks Tuco

YES, not sure, NO, YES, YES. Some functionality can be extended by using predicate expressions, though.

[quote=“tuco.ramirez, post:1, topic:4096”]
Does Aerospike hit all nodes/partitions for its SQL-like queries and get the data back, or can it find out, based on an index and some custom logic, how to query data on only one or two partitions (if we managed to store the data on one or two partitions only)?
[/quote]

If it's a PK query, it only executes against one node by default. For reads, you can optionally have this hit a random node that has a replica of the key you want. For queries, it is scatter-gather. By default, though, a query should only operate on the master copy of the data; you need not worry about replicas.
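To make the difference concrete, here is a minimal sketch with the Aerospike Java client, assuming a hypothetical `test` namespace and `students` set and a secondary index already created on the `class` bin. The single-record `get` goes to one node; the secondary-index query is scattered to all nodes and gathered on the client.

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.query.Filter;
import com.aerospike.client.query.RecordSet;
import com.aerospike.client.query.Statement;

public class PkVsQuery {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

        // PK read: the client hashes the key, finds the owning partition/node,
        // and sends the request to that single node.
        Key key = new Key("test", "students", "student-42");
        Record byPk = client.get(null, key);
        System.out.println("PK read: " + byPk);

        // Secondary-index query: scatter-gather across all nodes.
        // Assumes a secondary index exists on the "class" bin
        // (created beforehand via client.createIndex or aql).
        Statement stmt = new Statement();
        stmt.setNamespace("test");
        stmt.setSetName("students");
        stmt.setFilter(Filter.equal("class", "grade-7"));

        RecordSet rs = client.query(null, stmt);
        try {
            while (rs.next()) {
                System.out.println("Query hit: " + rs.getRecord());
            }
        } finally {
            rs.close();
        }

        client.close();
    }
}
```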

Why would you want to do that? It’s kind of possible, in a way, but not like you’re thinking… I would need to know the use case to know how to help answer.

By default, Aerospike will shard out your data evenly across all nodes. This splits load evenly across the cluster, which has always been preferable in my use cases…

Yes. Aerospike is the best thing ever :wink: How you model your data will depend on your number of records, cardinality, and access patterns, though.

Relevant reading:

  - Secondary Index (SI) Query | Aerospike Documentation
  - Predicate Filtering (Obsolete) | Aerospike Documentation

Thanks.

The use case is that we will have a huge item table, and we need to do all sorts of querying/searching/sorting on it, using indexes. If possible, we would want to do joins with other tables, but that can be achieved by denormalizing the table. Basically we need a very fast, in-memory database which can support SQL-like queries with indexes; hopefully it will also support joins. Considering it could be a huge distributed table, let’s say with 5-10 nodes, we will have to issue 5-10 node-level queries per logical query. Further, the response time will be the slowest of those 5-10 nodes, and one slow node could be a bottleneck.

Also, all our queries will have a typeId and clientId, so we were thinking that if we could make “typeId:clientId” the partition key, all the items for that typeId and clientId would be stored in the same partition. Then for each query, once we know the typeId and clientId, we could query that single partition and execute one query instead of N.

Thanks

Joins with other tables are usually done with some kind of reference value or query. I don’t see any reason why you shouldn’t be able to do this with Aerospike.
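As a rough sketch of what a reference-value “join” looks like in practice (the namespace, sets, and bin names here are hypothetical): store the related record’s user key in a bin, then follow the reference with a second read.

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

public class ReferenceJoin {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

        // Read the "order" record; one of its bins holds the user key of the
        // customer record it refers to.
        Record order = client.get(null, new Key("test", "orders", "order-1001"));
        String customerId = order.getString("customerId");

        // Follow the reference with a second PK read -- the application-side
        // equivalent of a join on a foreign key.
        Record customer = client.get(null, new Key("test", "customers", customerId));
        System.out.println("Order " + order + " belongs to " + customer);

        client.close();
    }
}
```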

What kind of data sets are you planning on storing, and how are you accessing them? Can you elaborate on exactly what you need to do and on the number of records and cardinality?

Think of an individual record as the basic unit of storage in Aerospike. These records have a key and a value and are distributed randomly across all nodes of the Aerospike cluster using a hash of the record’s key. The value is a collection of one or more “bins” of allowed types.

Aerospike’s use of the term “partition” refers to grouping this hash value into 4,096 groups, each group having a partition ID, which is the value of specific 12 bits of the 20-byte hash. The partition table is just a table of which group (partition) lives on which node. The user has no say in defining partitions or in allocating partitions to nodes; all of that happens under the covers. The net effect for the user is a random and uniform distribution of records across all nodes of the cluster.

I think your usage or understanding of “partition key” and “partition table” is not in sync with Aerospike terminology.
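To illustrate the description above (this is an illustration of the idea, not the client’s exact internals): the Java client exposes the 20-byte RIPEMD-160 digest on the `Key` object, and the partition ID is derived from 12 bits of that digest, giving 4,096 possible partitions.

```java
import com.aerospike.client.Key;

public class PartitionSketch {
    public static void main(String[] args) {
        // The client computes a 20-byte RIPEMD-160 digest from the set name + user key.
        Key key = new Key("test", "students", "student-42");
        byte[] digest = key.digest;

        // Illustration only: take 12 bits of the digest to get a partition ID
        // in the range 0..4095. The user cannot influence this mapping.
        int partitionId = ((digest[0] & 0xFF) | ((digest[1] & 0xFF) << 8)) & 0x0FFF;
        System.out.println("Record lands in partition " + partitionId + " of 4096");
    }
}
```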

Thank you, that helped.

The reason we are looking at a custom hash key is that we want to avoid querying all nodes: querying every node for every query will not achieve horizontal scaling for querying, although it will for data storage.

thanks

You might be boxing yourself in by looking for a pre-defined solution. Instead, present the problem and see “if” and “how” it can be solved by various options, Aerospike being one of them. If you can present some representative yet contrived record data format and the queries you want to run on it, perhaps folks on this forum can help with both “if” it can be done on Aerospike and “how” it can be done on Aerospike, and then with how to benchmark the performance you can expect on Aerospike and get a baseline, before you firm up on the solution.

Sure, let me try to put something here. We have an item with various fields, and when a request is made for the type of item best suited to a particular case, we find the best items for that query. Right now, the number of items is small, so we keep some reverse indexes in the application’s local memory to find the candidate items, over which we iterate to find the top N items, etc.

Now, let’s say our items are expected to grow a lot, so keeping them in the server’s memory becomes infeasible, as does iterating over any large set (because a subset of a huge set of items will still be large). Further, we need to serve items within 10 ms.

Basically, once we move away from in-process, in-memory data, we will need a distributed in-memory database. Considering we need to find the top 5 items, etc., we need to query. So we are looking at a distributed in-memory database with indexes.

Another option is to try a distributed in-memory database where we don’t query but do a compute. I am not sure whether Lua could be used, but a solution like Apache Ignite exists.

Regardless of whether we do the compute or not, we would need the querying capabilities. The query could be of the form `SELECT * FROM Item WHERE clientId = ? AND isActive = ? AND category = ? AND itemId NOT IN (?, ?, ?) AND itemSize = ? AND tags = ? ORDER BY rank DESC LIMIT 5`, etc.
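For what it’s worth, here is a rough sketch of how a query like that might map onto Aerospike with the Java client (the namespace, set, and bin names mirror the SQL above and are assumptions): one secondary-index filter (say on `clientId`), a filter expression for the remaining predicates (the modern replacement for the predicate filtering linked earlier), and the `ORDER BY rank DESC LIMIT 5` done on the client, since the server does not sort or limit query results.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Record;
import com.aerospike.client.exp.Exp;
import com.aerospike.client.policy.QueryPolicy;
import com.aerospike.client.query.Filter;
import com.aerospike.client.query.RecordSet;
import com.aerospike.client.query.Statement;

public class ItemQuery {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

        // Secondary-index filter: a query statement takes exactly one SI filter,
        // so pick the most selective indexed bin (clientId here).
        Statement stmt = new Statement();
        stmt.setNamespace("test");
        stmt.setSetName("Item");
        stmt.setFilter(Filter.equal("clientId", 12345));

        // Remaining predicates as a filter expression, evaluated on each node.
        // (itemSize, tags, and the NOT IN clause would follow the same pattern.)
        QueryPolicy policy = new QueryPolicy();
        policy.filterExp = Exp.build(
            Exp.and(
                Exp.eq(Exp.intBin("isActive"), Exp.val(1)),
                Exp.eq(Exp.stringBin("category"), Exp.val("shoes")),
                Exp.ne(Exp.stringBin("itemId"), Exp.val("item-17"))));

        // ORDER BY rank DESC LIMIT 5 is done client-side after the scatter-gather.
        List<Record> matches = new ArrayList<>();
        RecordSet rs = client.query(policy, stmt);
        try {
            while (rs.next()) {
                matches.add(rs.getRecord());
            }
        } finally {
            rs.close();
        }

        matches.sort(Comparator.comparingLong((Record r) -> r.getLong("rank")).reversed());
        List<Record> top5 = matches.subList(0, Math.min(5, matches.size()));
        top5.forEach(r -> System.out.println(r));

        client.close();
    }
}
```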

Frankly, the queries can get a bit more complex as requirements grow, but I am quite sure we need dynamic querying capabilities. The expected number of items can go up to 100 million+ (over the next few years, of course). Normally, in almost all distributed solutions for this, SQL queries go to all the partitions/nodes. We are keen to avoid that, because then we don’t get horizontal scaling for querying, and one slow node can make the whole system slow (remember we need the data in 10-20 ms). Some solutions provide a custom partition key (which is not the primary key), but they still hit every node per query.

I am wondering if Aerospike is used for solving this kind of problem. I could see some ad stacks mentioned on your site that could have similar item-related use cases.

There are a few challenges here:

  1. 100 million records.
  2. The query is dynamic (which implies canned User Defined Functions can’t be used).
  3. Secondary index on clientId - what is the cardinality? 100 million will drop down to how many candidate records for the rest of the filtering criteria? This is very important to know.
  4. Post-query, the results have to be ordered and then the top 5 picked.

I can think of a data model where all of this can be achieved in Aerospike if the cardinality of the secondary-index attribute is high. But as for getting the 10 ms lookup over 100 million records - I don’t know representative numbers off the top of my head. You will have to test with representative hardware, network bandwidth, and cardinality of the SI attribute.

I would be interested to know what other solutions can meet the 10 ms dynamic query over 100 million records with a secondary index, followed by a sort by rank and a limit.

@pgupta Did you find an answer for ranking?

Hi @ayyappa99, this thread is 3 years old; I would suggest opening a new thread and describing what it is you are looking for. There may be better APIs available now that didn’t exist 3 years ago.