Issue: record-too-big error not cascading to aerospike-spark connector

Server Version: 6.3

Spark Version: 3.1.2

I am writing records via Spark using the aerospike-spark connector. The writes are not reflected on the Aerospike server, but no errors are observed in the Spark job.

        <dependency>
            <groupId>com.aerospike</groupId>
            <artifactId>aerospike-spark_2.12</artifactId>
            <version>4.5.1-spark3.1-scala2.12-allshaded</version>
        </dependency>

I can see the record-too-big error in the Aerospike server logs but nothing on the client side. How can this error be cascaded to the Spark connector so that it eventually fails the Spark job?

INFO (info): (ticker.c:1051) {test} special-errors: key-busy 0 record-too-big 1 lost-conflict (0,0)
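
For comparison, the same oversized write fails on the client side when issued through the plain Aerospike Java client (AerospikeException with result code 13, RECORD_TOO_BIG), which is the behaviour I would expect the connector to surface. A minimal sketch, assuming a local single-node cluster and the default 1 MiB write-block-size (the namespace config below does not override it):

    import com.aerospike.client.{AerospikeClient, AerospikeException, Bin, Key, ResultCode}
    import com.aerospike.client.policy.WritePolicy

    // Sketch: write one record whose payload exceeds the (default 1 MiB)
    // write-block-size and observe the error the plain Java client raises.
    object RecordTooBigCheck {
      def main(args: Array[String]): Unit = {
        val client = new AerospikeClient("localhost", 3000)
        try {
          val key = new Key("test", "test_set", "too-big-pk")
          // A ~2 MiB bin, larger than the default 1 MiB write-block-size.
          val oversized = new Bin("payload", new Array[Byte](2 * 1024 * 1024))
          client.put(new WritePolicy(), key, oversized)
        } catch {
          case e: AerospikeException if e.getResultCode == ResultCode.RECORD_TOO_BIG =>
            // Result code 13: the record exceeds the namespace's write block size.
            println(s"Got RECORD_TOO_BIG from the server: ${e.getMessage}")
        } finally {
          client.close()
        }
      }
    }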

Namespace config:

namespace test {
  default-ttl 30d # use 0 to never expire/evict.
  memory-size 1G
  nsup-period 120
  replication-factor 1
  storage-engine device {
    data-in-memory false # if true, in-memory, persisted to the filesystem
    file /opt/aerospike/data/test.dat
    filesize 4G
    read-page-cache true
  }
}
Spark connector options:

{
  "aerospike.seedhost": "localhost",
  "aerospike.port": "3000",
  "aerospike.namespace": "test",
  "aerospike.set": "test_set",
  "aerospike.updateByKey": "pk",
  "aerospike.keyColumn": "pk",
  "aerospike.write.mode": "REPLACE",
  "aerospike.sendKey": "true",
  "aerospike.commitLevel": "CommitLevel.COMMIT_ALL"
}
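
For reference, these options are handed to the DataFrame writer roughly as below (a sketch; the "aerospike" format name and SaveMode.Overwrite are assumptions for illustration and should match whatever the actual job uses):

    import org.apache.spark.sql.{DataFrame, SaveMode}

    // Sketch of the write path: the JSON options above are passed to the
    // DataFrameWriter unchanged. "aerospike" as the format short name and
    // SaveMode.Overwrite are assumptions for illustration.
    def writeToAerospike(df: DataFrame, options: Map[String, String]): Unit = {
      df.write
        .format("aerospike")
        .options(options)
        .mode(SaveMode.Overwrite)
        .save()
    }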

Thanks

@mcoberly I believe it might be related to this:

  • [CONNECTOR-205] - Filter out records that breach write block size in Aerospike via Spark Connector.

Where could I find more on CONNECTOR-205?
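
Until then, one way to approximate what CONNECTOR-205 describes is to filter oversized rows out of the DataFrame before the write. A rough sketch: the estimate below only sums string/binary column lengths, so it is a proxy rather than the exact on-device record size (which also includes per-record and per-bin overhead), with the 1 MiB default write-block-size as the threshold.

    import org.apache.spark.sql.{Column, DataFrame}
    import org.apache.spark.sql.functions.{coalesce, col, length, lit}

    // Drop rows whose bin payloads would clearly exceed the write block size
    // (1 MiB by default for this namespace) before handing them to the connector.
    def filterOversizeRows(df: DataFrame, maxBytes: Long = 1024L * 1024L): DataFrame = {
      val estimatedSize: Column = df.schema.fields
        .filter(f => f.dataType.typeName == "string" || f.dataType.typeName == "binary")
        .map(f => coalesce(length(col(f.name)).cast("long"), lit(0L)))
        .reduceOption(_ + _)
        .getOrElse(lit(0L))
      df.filter(estimatedSize <= maxBytes)
    }

Rows above the threshold could also be diverted to a side output for inspection rather than silently dropped.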

Thanks