Posts Tagged ‘AppFabric Caching’

Decision Template for Velocity (AppFabric Caching):-

  • Cache Servers: How many cache servers are needed? See the Capacity Planning section on MSDN.
  • Security: Check security permissions, firewall rules, and domain settings. For details, see the Security section on MSDN.
  • What moves to the distributed cache: Plan what you are moving to the cache (static data, catalog data, and so on). Prefer data that changes infrequently.
  • Cache configuration and provider: Consider SQL Server for the cache cluster configuration store.
  • Cache client: .NET 3.5 SP1 or .NET 4.0?
  • Cache pre-loading: Do we need to preload the cache?
  • Cache loading: Can it be done in parallel?
  • Local cache mode and notifications: Consider local cache and cache notifications.
  • Named caches: Which named caches are needed?
  • Regions and tags: Use a region if related items need to live in the same physical location and be managed as a single operation; tags can then be used to query for items carrying a specific tag (see the sketch after this list).
  • Bulk operations: None.
  • Versioning and concurrency: What type of versioning and concurrency (optimistic or pessimistic) is needed?
  • ChannelOpenTimeout: ?
  • RequestTimeout: ?
  • MaxConnectionsToServer: ?
  • BufferSize: ?
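To make the region/tag and concurrency decisions above concrete, here is a minimal client sketch. The named cache "CatalogCache", the region "Products", and the keys are illustrative assumptions only; the named cache would already have to exist on the cluster.

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class RegionTagConcurrencySketch
{
    static void Main()
    {
        // "CatalogCache" is a hypothetical named cache created beforehand on the cluster.
        DataCacheFactory factory = new DataCacheFactory();   // reads the dataCacheClient config section
        DataCache cache = factory.GetCache("CatalogCache");

        // Region: keeps related items on one cache host so they can be managed together.
        cache.CreateRegion("Products");
        cache.Put("sku-1001", "Widget", new[] { new DataCacheTag("featured") }, "Products");

        // Tag query: fetch only the items in the region that carry a specific tag.
        foreach (var kv in cache.GetObjectsByTag(new DataCacheTag("featured"), "Products"))
            Console.WriteLine("{0} = {1}", kv.Key, kv.Value);

        // Optimistic concurrency: the Put succeeds only if the version is still current.
        DataCacheItemVersion version = cache.GetCacheItem("sku-1001", "Products").Version;
        cache.Put("sku-1001", "Widget v2", version, "Products");

        // Pessimistic concurrency: lock, update, then unlock.
        DataCacheLockHandle handle;
        cache.GetAndLock("sku-1001", TimeSpan.FromSeconds(30), out handle, "Products");
        cache.PutAndUnlock("sku-1001", "Widget v3", handle, "Products");
    }
}
```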

 

Communication between cache servers and cache clients uses the WCF channel model with the net.tcp binding. Cache servers use TCP port 22233 (the default) to communicate with cache clients.
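If the client is configured in code rather than in app.config, the endpoints and timeouts from the checklist map onto DataCacheFactoryConfiguration roughly as in this sketch; the host names are placeholders, and 22233 is the default cache port mentioned above.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

class EndpointConfigSketch
{
    static DataCacheFactory CreateFactory()
    {
        // Placeholder host names; 22233 is the default cache port.
        var servers = new List<DataCacheServerEndpoint>
        {
            new DataCacheServerEndpoint("CacheServer1", 22233),
            new DataCacheServerEndpoint("CacheServer2", 22233)
        };

        var config = new DataCacheFactoryConfiguration
        {
            Servers = servers,
            ChannelOpenTimeout = TimeSpan.FromSeconds(15), // the default; lower it to fail fast
            RequestTimeout = TimeSpan.FromSeconds(10)      // the default; never set it to zero
        };
        return new DataCacheFactory(config);
    }
}
```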

  • Tracing: default (I think it is not enabled by default); see the tracing sketch after this list.
  • Protection level: Check access and permissions.
  • Running account: Network Service or a dedicated service account?
  • Test a preload.
  • Analyze the results and decide on future steps.
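For the tracing and protection-level items, a minimal sketch of the programmatic knobs; the levels chosen here are illustrative assumptions for testing, not a recommendation.

```csharp
using System.Diagnostics;
using Microsoft.ApplicationServer.Caching;

class TracingAndSecuritySketch
{
    static DataCacheFactoryConfiguration Configure()
    {
        // Raise client-side tracing while testing (the post notes it does not
        // appear to be enabled by default).
        DataCacheClientLogManager.ChangeLogLevel(TraceLevel.Verbose);

        return new DataCacheFactoryConfiguration
        {
            // Transport security with signed and encrypted traffic; relax the
            // protection level only inside a trusted network.
            SecurityProperties = new DataCacheSecurity(
                DataCacheSecurityMode.Transport,
                DataCacheProtectionLevel.EncryptAndSign)
        };
    }
}
```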

 

 

————End——–

 

 

Best Practice Recommendations

  • Even though having just one cache server in the client config would suffice, it is recommended to list as many cache server host names as possible in the config file. When a cache client connects to one of the cache servers, it receives the routing table, which contains the partitioning logic needed to reach the other cache servers. Listing more cache servers gives more resiliency for the initial connection. If the server list is built in code, a host lookup service works better.
  • Instantiating a DataCacheFactory object creates several internal data structures (DRM, ThickClient), so minimize the number of DataCacheFactory objects and create them in advance (on a separate thread). A singleton pattern should work for most scenarios; do not create one DataCacheFactory object per cache operation. When different policy settings are needed, for example when local cache is required only for a subset of named caches, separate DataCacheFactory objects are appropriate (see the sketch after this list).
  • The default ChannelOpenTimeout is 15 seconds; set it much lower if you want the application to fail fast while opening the channel.
  • The default RequestTimeout is 10 seconds; do not set it to 0, or your application will see a timeout on every cache call. Changes to the default value need to take into account the workload and physical resources (client machine configuration, cache server configuration, network bandwidth, object size, ratio of GETs to PUTs, number of concurrent operations, usage of regions, and so on).
  • Setting maxConnectionsToServer=1 (the default) will work in most situations. When a single shared DataCacheFactory has many threads posting on that connection, there may be a need to increase it, so for high-throughput scenarios increasing this value beyond 1 is recommended. Also be aware of the connection math: with 5 cache servers in the cluster, 3 DataCacheFactories in the application, and maxConnectionsToServer=3, each client machine would open 9 outbound TCP connections to each cache server, 45 in total across all cache servers.
  • Local cache: for best performance, enable local cache only for objects that change infrequently. Using local cache for frequently changing data increases the chance that the client will work with stale objects. Although you could lower the ttlValue and refresh the local cache more often, the increased load on the cluster may outweigh the benefit of having the local cache. For frequently changing data, it is best to disable local cache and pull data directly from the cluster.
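A sketch of the shared-factory pattern described above, assuming three cache hosts and a named cache called "CatalogCache"; the host names and local-cache numbers are illustrative only.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

// A single shared DataCacheFactory, along the lines of the bullets above.
static class CacheClient
{
    // Created once and reused; do not create a factory per cache operation.
    private static readonly DataCacheFactory Factory = CreateFactory();

    private static DataCacheFactory CreateFactory()
    {
        var config = new DataCacheFactoryConfiguration
        {
            // List every cache host you can, so the first connection is resilient.
            Servers = new List<DataCacheServerEndpoint>
            {
                new DataCacheServerEndpoint("CacheServer1", 22233),
                new DataCacheServerEndpoint("CacheServer2", 22233),
                new DataCacheServerEndpoint("CacheServer3", 22233)
            },

            // 1 is the default; raise it only for high-throughput, many-thread scenarios.
            MaxConnectionsToServer = 1,

            // Local cache only because this data changes infrequently: up to 10,000
            // objects, invalidated by timeout after 10 minutes.
            LocalCacheProperties = new DataCacheLocalCacheProperties(
                10000,
                TimeSpan.FromMinutes(10),
                DataCacheLocalCacheInvalidationPolicy.TimeoutBased)
        };
        return new DataCacheFactory(config);
    }

    public static DataCache GetCatalogCache()
    {
        // Getting a cache from the shared factory is cheap; creating factories is not.
        return Factory.GetCache("CatalogCache");
    }
}
```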

· Expiration times in Velocity should not be set to very small values; that leads to increasing memory consumption, not decreasing it. This happens because of garbage-collection delays: old data expires but remains in memory while a new copy of the data is added to memory (see the sketch below these notes).

· Velocity may scale better than linearly when scaling out from a small number of memory-poor nodes used to store the most-used data. Increasing the number of Velocity nodes increases the amount of memory available for the most-used data and eliminates the database bottleneck.
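A small sketch for the expiration note; the 30-minute value is only an illustration of "not very small", not a recommendation for any particular workload.

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class ExpirationSketch
{
    static void CachePrice(DataCache cache, string sku, decimal price)
    {
        // Give items a reasonable time-to-live. Very small values do not free memory
        // any faster (expired items wait for cleanup) and just churn the cache.
        cache.Put(sku, price, TimeSpan.FromMinutes(30));
    }
}
```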

For the cache client: see "Preparing the Cache Client Development Environment" on MSDN.
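A minimal client sketch for that environment, assuming the AppFabric client assemblies are referenced and a default cache already exists on the cluster.

```csharp
// Reference Microsoft.ApplicationServer.Caching.Client.dll and
// Microsoft.ApplicationServer.Caching.Core.dll (installed with the AppFabric client).
using Microsoft.ApplicationServer.Caching;

class MinimalClient
{
    static void Main()
    {
        // Uses the dataCacheClient section in app.config/web.config for host names.
        var factory = new DataCacheFactory();
        DataCache cache = factory.GetDefaultCache();

        cache.Put("greeting", "hello from AppFabric");
        System.Console.WriteLine(cache.Get("greeting"));
    }
}
```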

High-Availability Resources

The cluster configuration storage location can be a single point of failure for your distributed cache system. For this reason, we recommend that you use Windows Server 2008 Failover Clustering (http://go.microsoft.com/fwlink/?LinkId=130692) when you can, to optimize the availability of your cluster’s configuration data. Consider which “clustered” resources are available to your application (in your environment) and balance that with the degree of availability required for your distributed cache system to decide which storage option is best for you.

For example, your infrastructure may already have a “clustered” SQL Server database available to store your configuration settings. Alternatively, there may be a “clustered” folder available for you to deploy a shared folder-based cluster configuration.

· Employ a large number of cache hosts.

· Deploy your distributed cache system within the perimeter of a firewall, with all servers being members of the same domain, including the cache clients, cache hosts, primary data source server, and the server hosting the cluster configuration storage location.

· Use SQL Server or a custom provider to store the cache cluster configuration settings.

· Use SQL Server or a custom provider to perform the cluster management role. For more information, see Lead Hosts and Cluster Management (Windows Server AppFabric Caching).

· When possible, use Microsoft Windows Server 2008 Failover Clustering (http://go.microsoft.com/fwlink/?LinkId=130692) to host a “clustered” database resource for the cache cluster configuration storage location.

· Minimize costly configuration changes that require stopping the cluster. When possible, re-create named caches instead of stopping the entire cache cluster to make cache configuration changes in the cluster configuration settings.

· Always use the Stop-CacheHost command to stop the cache service before rebooting a server. When lead hosts perform the cluster management role, the Stop-CacheHost cmdlet will not succeed if stopping that cache service would cause the entire cache cluster to shut down (because a majority of lead hosts would no longer be running).

REF:-

http://msdn.microsoft.com/en-us/library/ee790934.aspx

Cache Preload :-

Must read: http://msdn.microsoft.com/en-us/magazine/ff714581.aspx

Preloading, parallel loading, and related techniques; a parallel preload sketch follows.
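A rough parallel-preload sketch along those lines, assuming a .NET 4.0 client; LoadCatalogFromDatabase is a hypothetical loader standing in for the real data source.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.ApplicationServer.Caching;

class PreloadSketch
{
    // Hypothetical loader; in practice this would query the primary data source.
    static IEnumerable<KeyValuePair<string, object>> LoadCatalogFromDatabase()
    {
        yield return new KeyValuePair<string, object>("sku-1001", "Widget");
        yield return new KeyValuePair<string, object>("sku-1002", "Gadget");
    }

    static void Preload(DataCache cache)
    {
        // Push items into the cache in parallel so a large preload does not run
        // as one long serial loop. The DataCache client is safe for concurrent use.
        Parallel.ForEach(LoadCatalogFromDatabase(), item =>
        {
            cache.Put(item.Key, item.Value);
        });
    }
}
```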

 

Cluster settings :- http://msdn.microsoft.com/en-us/library/ee790895.aspx

Velocity benchmarking: https://files.griddynamics.net/VelocityBenchmarkWhitePaper20090901.pdf and http://toddrobinson.com/appfabric/appfabric-cache-feature-comparisons/

With MVC: http://cgeers.wordpress.com/2010/07/04/windows-server-appfabric-caching/

AppFabric Caching vs. IBM eXtreme Scale: http://channel9.msdn.com/Shows/Endpoint/endpointtv-AppFabric-Caching-vs-IBM-eXtreme-Scale-benchmark and here is the benchmark with code: http://msdn.microsoft.com/en-us/netframework/ff923354.aspx

Dynamic router with AppFabric Caching: http://tinyurl.com/37jcctv