Distributed session store

#1

Hi all!

We want to deploy several instances of an application into a cloud, all of them should use a common distributed session store.

The number of instances should be scalable (up and down) during ongoing operation, from one to many. That is, if an instance is taken offline (e.g. for maintenance), all sessions must remain available to the other instances. Likewise, if instances are added, they should have immediate access to all sessions.

Can this use case be implemented with Aerospike? If so, what needs to be considered and which functionalities must be used? Doable with the community edition?

Thanks, Armin

#2

Sounds like a great use case for Aerospike. Do you have any idea what your data model, record size, and number of records might be? Also, any idea how many reads/writes per second? Community edition works for a lot of use cases, but: scale is limited (max number of CE nodes in a cluster is 8, vs. 128 for EE), some desirable features are missing, and most of all you don’t get enterprise support (they are AWESOME).

So if you have a basic idea of sizing, and how you plan on using the data - we can give you a better idea of how it’ll work and if CE would be good.

#3

Thanks for your reply, @Albot.

Our application is based on a multi-processing architecture to handle concurrent requests. Normally, up to 1024 processes per node will have simultaneous access to the session store in order to enforce session-based access control.

We have a 1:n relation between a session and its session items. The session record itself holds properties of primitive types (ID, timeout, read/write stats, etc.). A session item can be referenced by the corresponding session ID and holds serialized data.

A session record is normally a few KiB, whereas a session item can grow to several MiB. Several thousand sessions with multiple items each are common.
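To make the data model concrete, here is a minimal in-memory sketch of that session/session-item key scheme. All names (set names, bin names, the `sessionID:itemNo` key pattern) are hypothetical illustrations, not confirmed design choices; a real deployment would talk to Aerospike through a client library instead of these dicts.

```python
import time

# Stand-in for a key-value store: (set_name, key) -> record (a dict of "bins").
store = {}

def put_session(session_id, timeout_s):
    # Session record: small, primitive-typed properties only.
    store[("session", session_id)] = {
        "id": session_id,
        "timeout": timeout_s,
        "created": time.time(),
        "reads": 0,
        "writes": 0,
        "item_count": 0,
    }

def put_item(session_id, item_no, payload: bytes):
    # Session item: referenced via the session ID, holds serialized data.
    store[("item", f"{session_id}:{item_no}")] = {"data": payload}
    store[("session", session_id)]["item_count"] += 1

def get_items(session_id):
    # Resolve all items of a session through the 1:n key pattern.
    n = store[("session", session_id)]["item_count"]
    return [store[("item", f"{session_id}:{i}")]["data"] for i in range(n)]

put_session("s42", timeout_s=1800)
put_item("s42", 0, b"serialized blob 0")
put_item("s42", 1, b"serialized blob 1")
```

The point of the composite item key is that any instance can reconstruct a session's items from the session ID alone, which matches the requirement that newly added instances immediately see all sessions.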

The number of nodes depends on how much a customer wants to scale the application. However, in a cluster of 4 nodes, up to 50,000 TPS (usually 80% reads, 20% writes) must be possible.
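The stated numbers can be sanity-checked per node, assuming keys are distributed evenly across the 4-node cluster:

```python
total_tps = 50_000
nodes = 4
read_share, write_share = 0.80, 0.20

per_node_tps = total_tps / nodes        # load per node with even key distribution
reads_per_s = total_tps * read_share    # cluster-wide reads per second
writes_per_s = total_tps * write_share  # cluster-wide writes per second

print(per_node_tps, reads_per_s, writes_per_s)  # 12500.0 40000.0 10000.0
```

So each node would need to sustain roughly 12,500 TPS, which is well within what a single well-sized Aerospike node can typically handle.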

Thanks in advance if you can provide more information to this use case.

#4

That sounds like it’ll work. I’m not sure what the total data size is (avg object size × record count), how big the nodes you’re planning on using are, or whether you’ll store data in memory vs. on disk, so I can’t say for sure it’ll fit in CE - but I think your use case fits great.