One-to-many relationship with filter

Well, that is great - but also be aware of the pitfalls of both approaches.

Both approaches give you accurate results on a healthy network with a stable cluster. With the filtering approach, if the cluster undergoes a change - a node drops out or a new node joins - you may get partial or duplicate results. You can use the failOnClusterChange flag to catch that and re-run the query. The downside is the high latency you have already measured in your tests.

In the "index on the fly" approach, each record update becomes two separate record updates, and you must think through all the possible failure patterns.

Which record should I update first - the UserSet or the HobbySet? If one write succeeds and the other fails, what is my strategy? You must catch write failures and timeouts on each write and decide based on your data model. Is it better to write HobbySet first? Then, if the UserSet write never completes, I may subsequently read a record that is missing data - can my reads tolerate that? The other way around: if I update UserSet but fail to update HobbySet and abandon the write, that entry may never be indexed. Can I run a background job that periodically validates the HobbySet records? Or should I keep retrying the write until it succeeds?
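One of those strategies - HobbySet first, then UserSet, with failed writes parked for a background retry job - can be sketched as follows. The `db` dict, `write` function, and `WriteError` are in-memory stand-ins for the real client, its put call, and its timeout/failure exceptions; they only illustrate the ordering and recovery logic, not the Aerospike API.

```python
# In-memory stand-in for the two sets; the real writes go to Aerospike.
db = {"HobbySet": {}, "UserSet": {}}
retry_queue = []  # failed writes, to be re-driven by a background job

class WriteError(Exception):
    """Stand-in for a write failure or timeout from the client."""

def write(set_name, key, value, fail=False):
    if fail:  # simulate a timeout / write failure
        raise WriteError(f"write to {set_name}:{key} failed")
    db[set_name].setdefault(key, set()).add(value)

def add_hobby(user, hobby, fail_user_write=False):
    # 1. Index write first: a reader may briefly see a HobbySet entry whose
    #    UserSet record is not updated yet - reads must tolerate that.
    write("HobbySet", hobby, user)
    try:
        # 2. Then the primary record.
        write("UserSet", user, hobby, fail=fail_user_write)
    except WriteError:
        # Strategy: park the failed write and let a background job retry it.
        retry_queue.append(("UserSet", user, hobby))

def drain_retries():
    """Background job: re-drive parked writes until they succeed."""
    while retry_queue:
        set_name, key, value = retry_queue.pop()
        write(set_name, key, value)

add_hobby("alice", "chess")                      # happy path: both writes land
add_hobby("bob", "chess", fail_user_write=True)  # UserSet write fails
# "bob" is indexed but his UserSet record is stale until the retry job runs.
drain_retries()
```

The key design choice is that the index is written first, so the failure mode is a dangling index entry (detectable by a validation job) rather than an unindexed record that nothing will ever find.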

Secondly, if you are using the Community Edition, you can only use AP mode. What happens if the cluster splits? You may write a brand-new index record in HobbySet on one side of the split and lose all previous user-list entries when the cluster heals: regardless of how the two versions get conflict-resolved, the user-lists won't get merged. One mitigation is to never create the record if it does not exist - first initialize each HobbySet record with a placeholder entry. That way, if the cluster splits, you will not create a new HobbySet record in the split sub-cluster.

If you are using the Enterprise Edition, use Strong Consistency mode and it will automatically protect you against this scenario. So while this multi-record update will work for the most part, in the rare split-cluster scenario caused by networking failures you may corrupt your index (HobbySet). If you have a background strategy for rebuilding the HobbySet records, that would be a plausible way to deal with it; again, it depends on how critical your data model's needs are. For any multi-record update model, though, I would strongly recommend the Enterprise Edition's Strong Consistency mode of operation.
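The "pre-create with a placeholder, then update-only" guard can be sketched like this. In the real clients this corresponds to an update-only write policy (e.g. `RecordExistsAction.UPDATE_ONLY` in the Java client); here the in-memory `db` and `RecordNotFound` are stand-ins used purely to show the behavior.

```python
# In-memory stand-in for the HobbySet; real writes go through the client
# with an update-only (never-create) write policy.
db = {"HobbySet": {}}

class RecordNotFound(Exception):
    """Stand-in for the client error when an update-only write finds no record."""

def init_hobby(hobby):
    """Pre-create every HobbySet record with a placeholder entry."""
    db["HobbySet"].setdefault(hobby, {"users": [], "placeholder": True})

def add_user_update_only(hobby, user):
    """Update-only write: refuse to create a brand-new record.

    In a split sub-cluster that never ran init_hobby(), this raises instead
    of silently creating a divergent copy of the index record that would
    clobber the original user-list when the cluster heals.
    """
    if hobby not in db["HobbySet"]:
        raise RecordNotFound(f"HobbySet:{hobby} does not exist here")
    db["HobbySet"][hobby]["users"].append(user)

init_hobby("chess")
add_user_update_only("chess", "alice")       # succeeds: record was pre-created
try:
    add_user_update_only("sailing", "bob")   # no placeholder -> rejected
    rejected = False
except RecordNotFound:
    rejected = True
```

Note this only prevents the create-in-a-split failure; updates to an existing record on both sides of a split can still conflict, which is exactly what Strong Consistency mode is designed to prevent.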