AQL select with PK

pk
aql
java
query

#1

I’m using a Java application to write and read records. It inserts a record with the key in the form of a byte array, as illustrated by the following code, where client is an AerospikeClient object:

void insertRow(byte[] keyValue)
{
    Key newKey = new Key("namespace", "setName", keyValue);
    client.put(null, newKey, new Bin("someBinName", 10));
}

In the client’s default write policy, I have sendKey = true, which allows me to see the PK column in the AQL shell with a select, as follows:

aql> select * from namespace.setName
+-------------------------------------------------+-------------+
| PK                                              | someBinName |
+-------------------------------------------------+-------------+
| ED D9 9E 1B 49 87 AF 4D BA 5C F3 5A 26 30 E6 13 | 10          |
+-------------------------------------------------+-------------+

With sendKey=false, the PK column was absent. So it’s cool that I can now see the PK column.

But I can’t do a “select where” with the PK.

aql> select * from namespace.setName where PK='EDD99E1B4987AF4DBA5CF35A2630E613'
Error: (2) AEROSPIKE_ERR_RECORD_NOT_FOUND

I’ve tried every format I can come up with: spaces between the bytes (as shown in the first select result), no spaces, surrounding double quotes, single quotes, no quotes, etc. No dice.

Does anyone know if it’s even possible? Is the PK that is displayed in AQL not the actual PK (is it a “user” key?) and therefore not the one I should be using? In that case there’s not really any point in setting sendKey=true, at least for my purposes: I don’t need to see the PK column in AQL, I just need to be able to query with the known key values that I’m using to insert, and I thought sendKey might allow that.


#2

I’m also an Aerospike user. I’m just wondering whether 'EDD99E1B4987AF4DBA5CF35A2630E613' is the DIGEST rather than the PK. Can you get the record using: select * from namespace.setName where DIGEST='EDD99E1B4987AF4DBA5CF35A2630E613' ?


#3

Tried that. Doesn’t work, unfortunately.


#4

PK is an internal designation used by AQL for the user-specified primary key. For example, if you insert a record in AQL as follows (the default KEY_SEND is false):

aql> INSERT INTO test.demo (PK, foo, bar) VALUES ('key1', 123, 'abc')
OK, 1 record affected.
aql> select * from test.demo
+-----+-------+
| foo | bar   |
+-----+-------+
| 123 | "abc" |
+-----+-------+
1 row in set (0.588 secs)

Internally, the record is stored under its digest, which is a hash of the set name plus your key. AQL can read the record using the digest.

aql> set output json
aql> set record_print_metadata true
aql> select * from test.demo
[
  {
    "digest": "7JEZLUt/jONdXXjTS8ply6qqyWA=",   _<== digest in Base64_
    "ttl": 2591505,
    "gen": 1,
    "bins": {
      "foo": 123,
      "bar": "abc"
    }
  }
]

Use EDIGEST for reading records with the digest in Base64:

aql> select * from test.demo where edigest="7JEZLUt/jONdXXjTS8ply6qqyWA="
[
  {
    "digest": "AAAAAAAAAAAAAAAAAAAAAAAAAAA=",   _<== incorrect value, AQL bug._
    "ttl": 2591487,
    "gen": 1,
    "bins": {
      "foo": 123,
      "bar": "abc"
    }
  }
]

What is the digest in hexadecimal? Here is how to see it.

aql> explain select * from test.demo where PK='key1'
[
  {
    "SET": "demo",
    "DIGEST": "EC 91 19 2D 4B 7F 8C E3 5D 5D 78 D3 4B CA 65 CB AA AA C9 60",
    "NAMESPACE": "test",
    "PARTITION": 492,
    "STATUS": 0,
    "UDF": "FALSE",
    "KEY_TYPE": "STRING",
    "POLICY_REPLICA": "AS_POLICY_REPLICA_MASTER",
    "NODE": "BB96E3DD1290C00",
    "POLICY_KEY": "AS_POLICY_KEY_DEFAULT",
    "TIMEOUT": 1000
  }
]

Now use DIGEST, and carefully trim the spaces out of the hexadecimal digest string after pasting. (AQL command keywords are case insensitive; namespace and set names etc. are case sensitive.)

aql> select * from test.demo where digest="EC91192D4B7F8CE35D5D78D34BCA65CBAAAAC960"
[
  {
    "digest": "AAAAAAAAAAAAAAAAAAAAAAAAAAA=", _<== incorrect value, AQL bug._
    "ttl": 2591369,
    "gen": 1,
    "bins": {
      "foo": 123,
      "bar": "abc"
    }
  }
]
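As a side note, the Base64 digest shown in the JSON output and the hexadecimal digest from explain are the same 20 bytes in two encodings. A quick stdlib-only Java check (Java 17+, for HexFormat), using the digest values from this example, confirms they match:

```java
import java.util.Base64;
import java.util.HexFormat; // Java 17+

public class DigestFormats {
    public static void main(String[] args) {
        // Base64 digest as shown in AQL's JSON output
        String base64Digest = "7JEZLUt/jONdXXjTS8ply6qqyWA=";
        byte[] digest = Base64.getDecoder().decode(base64Digest);

        // Render the same 20 bytes as the hex string "explain" shows (minus spaces)
        String hex = HexFormat.of().withUpperCase().formatHex(digest);
        System.out.println(hex); // EC91192D4B7F8CE35D5D78D34BCA65CBAAAAC960
    }
}
```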

Now, if you set KEY_SEND to true first and then insert a record via AQL, AQL (which uses the C client internally) sets the write policy attribute sendKey to true. Your key (“key2” below) is then stored with the record.

aql> SET KEY_SEND TRUE
aql> INSERT INTO test.demo (PK, foo, bar) VALUES ('key2', 456, 'abc')
OK, 1 record affected.
aql> select * from test.demo where pk='key2'
[
  {
    "PK": "key2",  _<== yourkey stored with the record_
    "digest": "AAAAAAAAAAAAAAAAAAAAAAAAAAA=",  _<== Ignore, AQL bug with WHERE_
    "ttl": 2591981,
    "gen": 1,
    "bins": {
      "foo": 456,
      "bar": "abc"
    }
  }
]

In your example above, you passed null as the write policy in the client, so it is not sending your key with the record. Internally, Aerospike does not store your key, it only stores the digest. Obviously, you cannot recover your key from the digest.

Hope this clears it.


#5

“In your example above, in the client, you have used write policy as null. So it is not sending your key to the record.”

That’s incorrect, because, as I said above, “In the client’s default write policy, I have sendKey = true, which allows me to see the PK column in the AQL shell with a select, as follows:”

As my table from an aql select command shows, there is a PK column, so clearly my client is sending my key.

I have discovered that when I use a string as my user key instead of a byte array, I can issue a successful aql select command with my user key like so:

aql> select * from namespace.setName
+----------------------------------+-------------+
| PK                               | someBinName |
+----------------------------------+-------------+
| 5323805b7cf2c64b823cc7f2d7d33592 | 10          |
+----------------------------------+-------------+

aql> select * from namespace.setName where pk="5323805b7cf2c64b823cc7f2d7d33592"
+-------------+
| someBinName |
+-------------+
| 10          |
+-------------+

However, my user keys need to be byte arrays, and therefore show up like this in aql:

aql> select * from namespace.setName
+-------------------------------------------------+-------------+
| PK                                              | someBinName |
+-------------------------------------------------+-------------+
| ED D9 9E 1B 49 87 AF 4D BA 5C F3 5A 26 30 E6 13 | 10          |
+-------------------------------------------------+-------------+

aql> select * from namespace.setName where pk="EDD99E1B4987AF4DBA5CF35A2630E613"
Error: (2) AEROSPIKE_ERR_RECORD_NOT_FOUND

As you can see, simply removing the spaces between the bytes doesn’t work.
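For what it’s worth, the spaced form aql displays and the unspaced form are just two renderings of the same 16 bytes, as this stdlib-only Java sketch (Java 17+ HexFormat, using the byte values from above) shows. The catch is that even the correct hex string, when quoted in an AQL where clause, is parsed as a string key, which hashes to a different digest than the original byte[] key:

```java
import java.util.HexFormat; // Java 17+

public class KeyHex {
    public static void main(String[] args) {
        // The byte-array key as aql displays it in the PK column
        String spaced = "ED D9 9E 1B 49 87 AF 4D BA 5C F3 5A 26 30 E6 13";
        byte[] keyValue = HexFormat.ofDelimiter(" ").parseHex(spaced);

        // The same bytes without spaces -- still not a usable AQL PK,
        // because a quoted value is hashed as a *string* key
        String unspaced = HexFormat.of().withUpperCase().formatHex(keyValue);
        System.out.println(unspaced); // EDD99E1B4987AF4DBA5CF35A2630E613
    }
}
```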


#6

See below, it may help. (Hex 0xDEADFACE = Decimal 3735943886)

aql> INSERT INTO test.demo (PK, foo, bar) VALUES (0xDEADFACE, 678, 'abc')
OK, 1 record affected.
aql> select * from test.demo where PK=0xDEADFACE   _<== Omit the quotes_
+------------+-----+-------+--------------------------------+---------+-------+
| PK         | foo | bar   | {digest}                       | {ttl}   | {gen} |
+------------+-----+-------+--------------------------------+---------+-------+
| 3735943886 | 678 | "abc" | "AAAAAAAAAAAAAAAAAAAAAAAAAAA=" | 2591977 | 1     |
+------------+-----+-------+--------------------------------+---------+-------+
1 row in set (0.000 secs)

The digest value returned by AQL when using WHERE is incorrect (an AQL bug) when record_print_metadata is set to true (which is what I have in the above example).


#7

Darn, thought that might work, but it didn’t. I actually think that’s one of the ways I originally tried (using the “0x” with no quotes).

The problem is that

INSERT INTO namespace.setName (PK, someBinName) VALUES (0xDEADFACE, 10)

isn’t equivalent to

byte[] keyValue = 
{
    (byte)0b11011110, // 0xDE
    (byte)0b10101101, // 0xAD
    (byte)0b11111010, // 0xFA
    (byte)0b11001110  // 0xCE
}; 
Key newKey = new Key("namespace", "setName", keyValue);

// *Note* that default write policy has sendKey=true
client.put(null, newKey, new Bin("someBinName", 10)); 

Because after running both of those, in aql I see this:

aql> select * from namespace.setName
+-------------+-------------+
| PK          | someBinName |
+-------------+-------------+
| 3735943886  | 10          |
| DE AD FA CE | 10          |
+-------------+-------------+

The PK=3735943886 row is from the aql INSERT command, while the PK=“DE AD FA CE” row is from my application.

From the “explain select…” command, I can somewhat see the issue.

The KEY_TYPE for the PK=3735943886 row is INTEGER, but for the PK=“DE AD FA CE” row, it’s BASE64_DIGEST.

Also, for the PK=3735943886 row, POLICY_KEY=AS_POLICY_KEY_SEND, while for the PK=“DE AD FA CE” row, it’s AS_POLICY_KEY_DIGEST.


#8

Not sure AQL supports keys other than strings and numerics. AQL is not meant to implement the full Aerospike client API. It uses the C client underneath and exposes limited functionality for basic interaction with Aerospike.

You can enter lists in bins by using the uppercase JSON keyword with decimal values. It does not seem to parse lowercase json or hexadecimal values in bins. Not sure AQL can handle byte[] keys at all.

See various tests below.

aql> INSERT INTO test.demo (PK, listBin, foo, bar) VALUES ('key5','json[0xde, 0xad, 0xfa, 0xce]', 6789, 'abc')
OK, 1 record affected.
aql> select * from test.demo where pk='key5'
+--------+--------------------------------+------+-------+--------------------------------+---------+-------+
| PK     | listBin                        | foo  | bar   | {digest}                       | {ttl}   | {gen} |
+--------+--------------------------------+------+-------+--------------------------------+---------+-------+
| "key5" | "json[0xde, 0xad, 0xfa, 0xce]" | 6789 | "abc" | "AAAAAAAAAAAAAAAAAAAAAAAAAAA=" | 2591992 | 1     |
+--------+--------------------------------+------+-------+--------------------------------+---------+-------+
1 row in set (0.000 secs)

_==> It interpreted listBin value as a string when using lower case json._

aql> INSERT INTO test.demo (PK, listBin, foo, bar) VALUES ('key6','JSON[0xde, 0xad, 0xfa, 0xce]', 6789, 'abc')
Error: (-1) Invalid JSON value: listBin [0xde, 0xad, 0xfa, 0xce]

_==> does not parse hex values._

aql> INSERT INTO test.demo (PK, listBin, foo, bar) VALUES ('key6','JSON[1,2,3]', 6789, 'abc')
OK, 1 record affected.

aql> select * from test.demo where pk='key6'
+--------+--------------------+------+-------+--------------------------------+---------+-------+
| PK     | listBin            | foo  | bar   | {digest}                       | {ttl}   | {gen} |
+--------+--------------------+------+-------+--------------------------------+---------+-------+
| "key6" | LIST('[1, 2, 3]')  | 6789 | "abc" | "AAAAAAAAAAAAAAAAAAAAAAAAAAA=" | 2591994 | 1     |
+--------+--------------------+------+-------+--------------------------------+---------+-------+
1 row in set (0.000 secs)

Trying to use the JSON construct in the key specification did not work for me.


#9

You could probably write a small standalone Java application to read records with byte array keys, in case that is useful for your development.
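A minimal sketch of such a reader, assuming the Aerospike Java client is on the classpath and using the namespace/set names from this thread; the host, port, and key bytes are placeholders to adjust:

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

public class ByteKeyReader {
    public static void main(String[] args) {
        // Placeholder host/port -- point at your own cluster
        try (AerospikeClient client = new AerospikeClient("127.0.0.1", 3000)) {
            // The byte-array key exactly as the writing application built it
            byte[] keyValue = {(byte) 0xDE, (byte) 0xAD, (byte) 0xFA, (byte) 0xCE};
            Key key = new Key("namespace", "setName", keyValue);

            // null policy => client defaults; reads don't need sendKey
            Record record = client.get(null, key);
            System.out.println(record == null ? "not found" : record.bins);
        }
    }
}
```

Because the client computes the digest from the same byte[] value, this reads the record directly, with no need to go through AQL at all.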