#caching
3 posts

An overview of caching methods

  • The most common caching methods are browser caching, application caching and key-value caching.
  • Browser caching is a collaboration between the browser and the web server and you don’t have to write any extra code. For example - in Chrome when you reload a page you have visited before, the date specified under ‘expires’ in the 'Responses' header determines whether the browser loads resources directly from cache (from your first visit) or requests the resources again from the server. The server uses the headers passed by the browser (headers like If-modified-since or If-none-match or Etag) to determine whether to send the resources afresh or ask the browser to load from its cache.
  • Application-level caching, also called memoization, is useful when a method in your program is slow - think of cases where you are reading a file and extracting data from it, or requesting data from an API. The result of the slow method is stored in an instance variable and returned on subsequent calls, which speeds the method up. The downsides are that you lose the cache when the application restarts and you cannot share the cache between multiple servers.
  • Key-value data caching takes memoization a step further with dedicated cache stores like Memcached or Redis. This allows cached data to persist across user requests (enabling data sharing) and application reboots, but it introduces a new dependency to your application and another component to monitor.
  • To determine the best method for you, start with browser caching as the baseline. Then identify your hotspots with an application profiling tool before choosing which method to use for a second layer of caching.
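
The conditional-request handshake described in the browser-caching bullet can be sketched server-side. This is a minimal illustration, not the API of any particular framework; the header names are standard HTTP, everything else (function and variable names) is made up for the example:

```python
def respond(request_headers, resource_etag, resource_body):
    """Decide between a 304 (reuse the browser's cache) and a full 200.

    `request_headers` is a plain dict of the headers the browser sent.
    """
    if request_headers.get("If-None-Match") == resource_etag:
        # The browser's cached copy is still current: a 304 with no body
        # tells it to load the resource from its own cache.
        return 304, {"ETag": resource_etag}, b""
    # Otherwise send the resource afresh, with a validator for next time.
    return 200, {"ETag": resource_etag}, resource_body
```

A matching request with `If-None-Match: "v1"` against an unchanged resource tagged `"v1"` gets a 304 and an empty body; any other request gets the full 200 response.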
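
The memoization bullet above can be shown in a few lines. A hypothetical sketch in Python - the slow work is simulated with a sleep, and the cached result lives in an instance variable exactly as described, so it disappears when the process restarts:

```python
import time

class Report:
    def expensive_data(self):
        # The first call does the slow work (file parsing, API call, ...);
        # subsequent calls return the cached instance variable.
        if not hasattr(self, "_data"):
            time.sleep(0.01)  # stand-in for the slow operation
            self._data = {"rows": 42}
        return self._data
```

Python's standard library offers `functools.lru_cache` for the same pattern on plain functions; either way the cache is per-process, which is exactly the limitation the post points out.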
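
The key-value step up from memoization is usually the cache-aside pattern: check the shared store first, compute and store on a miss. A sketch under the assumption that the cache client exposes `get`/`set` like the redis-py client does; a dict-backed stand-in is used here so the example runs without a Redis server:

```python
import json

def cached_fetch(cache, key, compute):
    """Cache-aside lookup against a shared key-value store."""
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)        # served from the shared cache
    value = compute()                 # cache miss: do the slow work
    cache.set(key, json.dumps(value)) # redis-py also accepts ex= for a TTL
    return value

class FakeRedis:
    """Stand-in for redis.Redis() with the two calls used above."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value):
        self._store[key] = value
```

Because the store lives outside the application process, the cached value survives reboots and can be shared between servers - at the cost of the extra dependency the post mentions.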

Full post here, 7 mins read

All about prefetching

  • The Network Information API lets you apply different strategies for different connection types to improve prefetching performance.
  • Four common prefetching strategies are as follows; many websites use a combination of them.

1. Interaction-driven - uses mouse and scroll activities as signals

2. State-driven - uses current page or URL to prefetch next logical step

3. User-driven - matches each specific user’s patterns of past usage, account information, etc.

4. Download everything - prefetch all the links on a page, or all the bundles for an app.

  • The prefetch resource hint works in all major browsers except Safari. You can add prefetch hints manually (with no extra dependencies), have build tools generate them from developer configuration, or use dedicated prefetching tools such as quicklink, Guess.js or Gatsby.
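
Combining the two ideas above - connection-aware behaviour plus a choice of strategy - might look like the policy function below. A hypothetical sketch: in the browser, `navigator.connection` (the Network Information API) reports an `effectiveType` like `'4g'` or `'2g'` and a Save-Data flag; the thresholds chosen here are illustrative, not taken from the post:

```python
def prefetch_strategy(effective_type, save_data=False):
    """Pick a prefetching strategy from connection information."""
    if save_data or effective_type in ("slow-2g", "2g"):
        return "none"                # don't spend the user's data budget
    if effective_type == "3g":
        return "interaction-driven"  # prefetch only on strong signals
    return "download-everything"     # fast connection: prefetch all links
```

On a fast connection the aggressive "download everything" strategy is cheap; on slow or metered connections it would waste the user's bandwidth, so the policy backs off.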

Full post here, 7 mins read

Three strategies for designing caching in large-scale distributed systems

  • Always design distributed systems to be ‘two mistakes high’ - handle failures at two levels so that there is at least one chance to recover instead of the system failing outright on a single mistake.
  • Place the web cache container in a sidecar arrangement with each instance of your server/web service container. Modifications to the cache container then do not affect the decoupled service.
  • Place the cache above the service containers (or app replicas) so that all the containers can access the same cache replicas, and the cache can call the service in case of a miss.
  • The above two approaches work for stateless services. If state is a significant factor for your app and there are many concurrent connections, sharded caching serves better.
  • Use consistent hashing to distribute the load across multiple cache shards that show up as a single cache proxy to the user.
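
The consistent-hashing bullet can be made concrete with a small hash ring. A minimal sketch (real deployments tune the number of virtual nodes and add replication; the shard names and vnode count here are illustrative):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map keys to cache shards so that adding or removing a shard
    moves only the keys that hashed to that shard's ring positions."""

    def __init__(self, shards, vnodes=100):
        self._ring = []  # sorted list of (hash, shard) virtual nodes
        for shard in shards:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{shard}#{i}"), shard))
        self._ring.sort()

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def shard_for(self, key):
        # Walk clockwise to the first virtual node at or after the key's
        # hash, wrapping around the end of the ring.
        i = bisect.bisect(self._ring, (self._hash(key), "")) % len(self._ring)
        return self._ring[i][1]
```

To the caller this behaves like a single cache proxy: every lookup of the same key lands on the same shard, and removing one shard only remaps the keys that were on it, which is what keeps a shard failure from invalidating the whole cache.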

Full post here, 5 mins read