Assuming you are a CE user and don’t have access to EE features, you have to manage the shipping of deletes yourself - it inevitably means the client takes over that responsibility. When you delete a record in CE, its primary index entry in RAM is removed, so a query with a predicate filter on LUT (last-update-time) will not find it. In Enterprise Edition, you can opt to store a tombstone (durable deletes) or use the Cross-Datacenter Replication (XDR) feature.
So, either: 1) as @Albot suggested, do a simultaneous delete in the “data-warehouse” cluster from the client app, or 2) depending on your delete frequency, how often you run the data-warehouse query, write-block-size, etc. - set up one or a few records on the transaction cluster that store the digests of deleted records as a list in a bin, i.e. key: deleteme, bin: [digest1, digest2, …] (each digest is 20 bytes). Then, every time you delete a record in the transaction cluster, append its digest to the deleteme list record. When you run the data-warehouse update, delete the record for each digest there and clear the deleteme list. Or, consider EE.
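A minimal sketch of option 2), the “deleteme digest list” bookkeeping. Plain dicts stand in for the two clusters here so the logic is self-contained; in a real deployment these would be Aerospike client put/remove and list-append operate calls, and the digest would be the record digest the client computes (the `sha1` call and the reverse `digest_to_key` map below are stand-ins for illustration only):

```python
import hashlib

txn_cluster = {}        # stands in for the transaction cluster: key -> record
warehouse = {}          # stands in for the data-warehouse cluster: key -> record
DELETEME = "deleteme"   # dedicated record holding digests of deleted records

def digest(key):
    # Aerospike derives a 20-byte digest from set + key; sha1 (also
    # 20 bytes) is used here purely as an illustrative stand-in.
    return hashlib.sha1(key.encode()).digest()

digest_to_key = {}      # reverse map, needed only by this simulation

def delete_record(key):
    """Delete on the transaction cluster, then log the digest."""
    txn_cluster.pop(key, None)
    d = digest(key)
    digest_to_key[d] = key
    # In Aerospike this would be a list_append operate() on the
    # deleteme record; here we append to a plain list bin.
    txn_cluster.setdefault(DELETEME, []).append(d)

def sync_deletes_to_warehouse():
    """At data-warehouse update time: replay logged deletes, clear the list."""
    for d in txn_cluster.get(DELETEME, []):
        warehouse.pop(digest_to_key[d], None)
    txn_cluster[DELETEME] = []
```

The point of the sketch is the shape of the bookkeeping: delete plus append on the transaction side, replay plus clear on the warehouse side.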
Second question - it looks like one client modifies a record and you want another client to be notified so it can take some additional action. Aerospike keeps no log of transactions, nor can it be made to emit signals on updates. So again, you may have to create your own “log of modified digests” in a few dedicated records and have the other client visit them - similar to the delete solution above. The client has to do double the work. And since every transaction is now a multi-record update - the map you want to modify plus logging the update - there is always the chance that the first write succeeds and the second fails. Here, though, retrying the second write on failure will not adversely impact the method.
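The two-write pattern with a retry on the logging write can be sketched like this. Again, a dict stands in for the cluster and the `FlakyLog` class simulates a transient failure so the retry path is exercised; none of these names come from the Aerospike API - a real client would raise an exception on a failed write, which is what the `IOError` stands in for:

```python
import hashlib

store = {}            # stands in for the transaction cluster: key -> record
MODLOG = "modlog"     # dedicated record listing digests of modified records

class FlakyLog:
    """Simulates a log write that fails transiently (illustration only)."""
    def __init__(self, failures):
        self.failures = failures
    def append(self, d):
        if self.failures > 0:
            self.failures -= 1
            raise IOError("transient write failure")
        store.setdefault(MODLOG, []).append(d)

log = FlakyLog(failures=1)   # first append attempt will fail

def digest(key):
    # Stand-in for Aerospike's 20-byte record digest.
    return hashlib.sha1(key.encode()).digest()

def update_record(key, bins, retries=3):
    store[key] = bins                    # write 1: the map update itself
    d = digest(key)
    for _ in range(retries):             # write 2: log the digest, retry on failure
        try:
            log.append(d)
            return
        except IOError:
            continue
    raise RuntimeError("modification not logged after retries")

def consume_modifications():
    """The watching client visits the log record, then clears it."""
    changed = store.get(MODLOG, [])
    store[MODLOG] = []
    return changed
```

A duplicate append caused by a retry only means the consumer re-reads an already-seen record, which is why the retry is harmless here.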