Intro

Shared Cache is a high-performance, distributed memory object caching system. It is generic in nature but intended to speed up dynamic web and/or Windows applications by alleviating database load. Don't forget to visit us at http://www.sharedcache.com

Release: 3.0.5.1 (released on March 9th, 2009)

Full release notes are available here: Release Notes 3.0.5.1

Help with a small code sample

Help others get started with Shared Cache quickly by providing some code samples. The solution I maintain on the following site will be expanded with your code samples:
http://www.ronischuetz.com/code/SharedCacheSamples.html
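As a starting point, here is a minimal sketch of what such a sample could look like. Note that the provider facade name (IndexusDistributionCache.SharedCache) and the Add/Get calls reflect my understanding of the client API; please verify them against the samples on the page above before relying on them.

    using System;
    using SharedCache.WinServiceCommon.Provider.Cache; // assumed client namespace

    [Serializable] // cached objects travel across the network, so they need to be serializable
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class SampleUsage
    {
        public static void Main()
        {
            var customer = new Customer { Id = 1, Name = "John Doe" };

            // store the object in the distributed cache under a unique key (assumed API)
            IndexusDistributionCache.SharedCache.Add("customer:1", customer);

            // read it back; a miss returns null (assumed API)
            Customer cached = IndexusDistributionCache.SharedCache.Get<Customer>("customer:1");

            Console.WriteLine(cached != null ? cached.Name : "cache miss");
        }
    }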

General Overview

Shared Cache is a high-performance, distributed and replicated cache system built in .Net for enterprise applications running in server farms.

Shared Cache provides a distributed, replicated cache to minimize database load. It uses two or more servers in a farm and replicates all data within the cluster. The big plus is simple: your cached data is available on every server. If one of your servers gets restarted, it automatically receives all cached data again from its parent node.

Shared Cache is 100% managed code, written in .Net C#.

Why should you consider using Shared Cache?

There is no more efficient way to increase the scalability and performance of applications than using caching to unload the deeper layers of your architecture.

Shared Cache Topologies

Shared Cache provides a rich set of topologies to let you pick the option that suits your requirements. Connectivity between the nodes happens at Layer 4 (Transport).

Distributed Caching - partitioned

  • your requirement:
    • extreme scalability
  • our solution:
    • a shared-nothing architecture that automatically partitions data across all cluster members (a conceptual sketch follows the diagram below)
  • your result:
    • Linear scalability: by partitioning the data evenly, the port-to-port throughput (the maximum amount of work that can be performed by each server) remains constant as servers are added, up to the extent of the switched fabric.
  • your benefits:
    • Partitioned: the size of the cache and the processing power available grow linearly with the size of the cluster.
    • Load Balanced: the responsibility for managing the data is automatically load-balanced across the cluster.
    • Point-to-Point: the communication for the partitioned cache is all point-to-point enabling linear scalability.
  • the limits:
    • the combined RAM of all members in your cluster
  • your uses:
    • caches of any RAM size, scaling with the size of the cluster

dist.gif
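The partitioning idea can be illustrated with a small conceptual sketch: every key is hashed, and the hash decides which cluster member owns the entry. This is not the Shared Cache implementation, only an illustration; the node endpoints and the modulo-based selection are hypothetical (in Shared Cache the member list comes from configuration).

    using System;
    using System.Collections.Generic;

    public static class PartitionExample
    {
        // hypothetical cluster members
        private static readonly List<string> Nodes = new List<string>
        {
            "192.168.1.10:48888",
            "192.168.1.11:48888",
            "192.168.1.12:48888"
        };

        // each key maps to exactly one member, so cache size and load grow with the cluster
        public static string OwnerOf(string key)
        {
            int hash = key.GetHashCode() & 0x7FFFFFFF; // non-negative hash
            return Nodes[hash % Nodes.Count];
        }

        public static void Main()
        {
            Console.WriteLine(OwnerOf("customer:1"));
            Console.WriteLine(OwnerOf("customer:2"));
        }
    }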

Replicated Caching

  • your requirement:
    • Extreme Performance
  • our solution:
    • All data will be fully replicated to all cluster members.
    • Fully replicating the data to all members of the cluster achieves zero-latency access and meets your extreme performance requirements.
  • your result:
    • Zero-latency access: since the data is replicated to each member, it is available on every member without network latency and without waiting time.
  • your benefits:
    • This provides the best possible speed for accessing data in your cache.
  • the limits:
    • Cost per update: updating a replicated cache requires pushing the new version of the data to all other cluster members, which will limit scalability if there is a high frequency of updates per member (see the sketch below).
    • Cost per entry: the data is replicated to every cluster member, so CLR heap space is used on each member, which will impact performance for large caches.

rep.gif
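The cost-per-update limit becomes clear in a conceptual sketch: a replicated write has to be pushed to every member of the cluster, so the work per update grows with cluster size. Again, this is illustrative only and not the actual Shared Cache replication code; the node list and the PushToNode helper are hypothetical.

    using System;
    using System.Collections.Generic;

    public static class ReplicationExample
    {
        // hypothetical cluster members
        private static readonly List<string> Nodes = new List<string>
        {
            "192.168.1.10:48888",
            "192.168.1.11:48888",
            "192.168.1.12:48888"
        };

        // every member receives the new value, which is what makes reads latency-free
        // but also makes each update more expensive as the cluster grows
        public static void ReplicatedAdd(string key, byte[] serializedValue)
        {
            foreach (string node in Nodes)
            {
                PushToNode(node, key, serializedValue);
            }
        }

        // stand-in for the real node-to-node communication
        private static void PushToNode(string node, string key, byte[] value)
        {
            Console.WriteLine("sending {0} ({1} bytes) to {2}", key, value.Length, node);
        }
    }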

Single Instance Caching

  • your requirement:
    • build a scalable system architecture
  • our solution:
    • There are no limitations on growth as your system grows.
  • your result:
    • Linear scalability: by partitioning the data onto your Shared Cache instance, the port-to-port throughput (the maximum amount of work that can be performed by each server) remains constant as servers are added, up to the extent of the switched fabric.
  • your benefits:
    • This provides the best possible speed for accessing data in your cache.
    • As you grow, no refactoring is required, since everything is handled through configuration.
  • the limits:
    • The size of available RAM

sin.gif

General Information

  • Using Shared Cache does NOT prevent you from continuing to use System.Cache as well; the two can be combined, as the sketch below shows.
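One common pattern is to check the local in-process cache first and only go to the distributed cache on a miss. The sketch below uses the ASP.NET cache via HttpRuntime.Cache and reuses the Customer class from the sample further up; GetFromSharedCache is a hypothetical helper standing in for your Shared Cache client call.

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class TwoLevelCache
    {
        public static Customer GetCustomer(string key)
        {
            // 1) try the local in-process ASP.NET cache first
            var local = HttpRuntime.Cache[key] as Customer;
            if (local != null)
            {
                return local;
            }

            // 2) fall back to the distributed cache (hypothetical helper wrapping the Shared Cache client)
            Customer shared = GetFromSharedCache(key);
            if (shared != null)
            {
                // keep a short-lived local copy to avoid repeated network round trips
                HttpRuntime.Cache.Insert(key, shared, null,
                    DateTime.Now.AddSeconds(30), Cache.NoSlidingExpiration);
            }

            return shared;
        }

        // hypothetical wrapper around the Shared Cache client API
        private static Customer GetFromSharedCache(string key)
        {
            return null; // placeholder for the real client call
        }
    }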

Donate

Why Donate
http://www.ronischuetz.com/donate.html


softpedia_free_award_f.gif

How-To for 64Bit Installer

Step 1:

step1.PNG

Step 2:

step2.PNG
Set DefaultLocation to: [ProgramFiles64Folder]\[ProductName] (so the application installs under the 64-bit Program Files folder)
