I'm currently evaluating SharedCache. First, congrats on the great job you've done.
I'm just wondering how a key can be locked across the whole cluster.
I'd like to lock a specific key so that I would be the sole process able to modify the object bound to that key at any given time.
Here's what I'm looking to do:
- lock a given key
- modify the object bound to this key (using IndexusDistributionCache.SharedCache.Add)
- release the lock so that other processes may modify it.
As far as I can tell, such a lock/release mechanism does not exist in your framework.
Did I miss something? Is there a better way to get this kind of exclusive access? If it's missing, is it something you plan to develop in an upcoming release?
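To make the intended workflow concrete, here is a minimal sketch of the lock → modify → release pattern I have in mind. This is not the SharedCache API: the cluster, the `lock`/`release` calls, and the owner IDs are all hypothetical stand-ins, written in Python just to illustrate the semantics I'm asking for.

```python
import threading

class FakeCluster:
    """In-memory stand-in for the cache cluster, for illustration only."""
    def __init__(self):
        self._data = {}
        self._locks = {}           # key -> id of the process holding the lock
        self._mutex = threading.Lock()

    def lock(self, key, owner):
        """Try to take a cluster-wide lock on key; returns True on success."""
        with self._mutex:
            if self._locks.get(key, owner) != owner:
                return False       # another process holds the lock
            self._locks[key] = owner
            return True

    def add(self, key, value, owner):
        """Plays the role of IndexusDistributionCache.SharedCache.Add."""
        with self._mutex:
            holder = self._locks.get(key)
            if holder is not None and holder != owner:
                raise RuntimeError("key is locked by another process")
            self._data[key] = value

    def release(self, key, owner):
        """Release the lock so other processes may modify the key again."""
        with self._mutex:
            if self._locks.get(key) == owner:
                del self._locks[key]

cluster = FakeCluster()
assert cluster.lock("user:42", owner="proc-A")          # 1. lock the key
cluster.add("user:42", {"name": "x"}, owner="proc-A")   # 2. modify the object
cluster.release("user:42", owner="proc-A")              # 3. release the lock
```

While proc-A holds the lock, any other process's `add` on the same key fails; after `release`, another process can take the lock normally.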
Mar 22, 2009 at 10:33 AM
Thanks for your post.
A full lock-down scenario across the whole cluster means a different kind of data handling than what we have implemented so far, and I think it will also behave differently between distributed caching and replicated caching.
In both cases we would have to extend our protocol with lock information, which means each network message grows and server memory usage grows as well. Such locking will be implemented once we build a .NET session provider, which is planned for one of our upcoming releases. Another issue we have to think about is the additional calls between clients and the server, and their specific handling in the case of locks.
I would suggest that your first two points (lock & modify) be implemented in one single call, like this:
Add(string key, object data, bool lock); -> the protocol needs to be extended to carry the lock flag
Get(string key); -> in case the item is locked there needs to be a proper result for this; maybe we can adapt the IndexusException object for this behavior
The third extension you would have to implement is the unlock/release of the item.
On the server side we would implement a list that contains all locks; every time data is accessed, the server checks whether the item is already locked.
If it is locked, we can return an IndexusException, or we could manage this over the message status.
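The server-side handling described above could be sketched roughly as follows. This is a hypothetical illustration, not SharedCache code: the message actions, status codes, and client IDs are all made up, and Python is used only to show how a lock table would gate Add/Get requests and report a locked item via the message status rather than an exception.

```python
OK, LOCKED = "OK", "LOCKED"

class LockTable:
    """Server-side list of all locks; checked on every incoming message."""
    def __init__(self):
        self._locks = {}   # key -> id of the client holding the lock

    def handle(self, action, key, client, store=None, value=None):
        """Process one client message; returns a (status, payload) tuple."""
        holder = self._locks.get(key)
        if action == "LOCK":
            if holder is None or holder == client:
                self._locks[key] = client
                return (OK, None)
            return (LOCKED, holder)
        if action == "ADD":
            if holder is not None and holder != client:
                return (LOCKED, holder)    # reported via the message status
            store[key] = value
            return (OK, None)
        if action == "GET":
            if holder is not None and holder != client:
                return (LOCKED, holder)
            return (OK, store.get(key))
        if action == "RELEASE":
            if holder == client:
                del self._locks[key]
            return (OK, None)

store = {}
table = LockTable()
assert table.handle("LOCK", "k", "client-1") == (OK, None)
assert table.handle("ADD", "k", "client-2", store, 5) == (LOCKED, "client-1")
assert table.handle("ADD", "k", "client-1", store, 5) == (OK, None)
assert table.handle("RELEASE", "k", "client-1") == (OK, None)
```

The design choice here is that a locked item yields a distinct status in the reply message, which a client library could then translate into an IndexusException, a retry, or a blocking wait.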
I think we need some further discussion about this issue, because it involves a number of important changes to our core system.
If you are going to implement this, it would be interesting to know your time horizon for it.