Scaling to 100k users
- When you first build an application, the API, the database, and the client may all reside on a single machine/server. As you scale up, you can split the database layer out into a managed service.
- As you grow further and build for multiple platforms (web, mobile web, Android, iOS, desktop apps, third-party services, etc.), treat the client as a separate entity from the API.
- As you grow to about 1000 users, you might add a load balancer in front of the API to allow for horizontal scaling.
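Round-robin is the simplest balancing strategy the load balancer might use. A minimal sketch, assuming a fixed pool of API servers; the addresses are hypothetical placeholders, and a real deployment would use nginx, HAProxy, or a managed cloud load balancer rather than application code:

```python
import itertools

class RoundRobinBalancer:
    """Hand each incoming request to the next server in a fixed pool,
    wrapping around when the pool is exhausted."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)  # endless rotation over the pool

    def next_server(self):
        return next(self._cycle)

# Hypothetical server addresses for illustration only.
balancer = RoundRobinBalancer(["api-1:8000", "api-2:8000", "api-3:8000"])
```

Adding capacity is then just adding another address to the pool, which is what makes horizontal scaling straightforward once the API itself is stateless.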
- At around 10,000 users, serving and uploading resources starts to overload the servers. Move static content to a CDN, which many cloud storage services bundle in, so the API no longer needs to handle this load.
- At around 100,000 users, you might scale out the data layer itself, with relational database systems such as PostgreSQL or MySQL.
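One common way to scale out a relational data layer is read replicas: writes go to the primary, reads are spread across replicas. A minimal routing sketch under that assumption; the connection names are hypothetical and the SELECT check is deliberately naive (a real router would also consider transactions and replication lag):

```python
import itertools

class ReplicaRouter:
    """Send writes to the primary and rotate reads across replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)  # round-robin over read replicas

    def for_query(self, sql: str) -> str:
        # Naive rule: only plain SELECTs are safe to serve from a replica.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._replicas)
        return self.primary

# Hypothetical connection names for illustration only.
router = ReplicaRouter("db-primary", ["db-replica-1", "db-replica-2"])
```

Since most web workloads are read-heavy, spreading SELECTs across replicas often buys significant headroom before sharding becomes necessary.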
- You might also add a cache layer using an in-memory key-value store such as Redis or Memcached, so that repeated reads are served from cached data instead of hitting the DB. Cache services are also easier to scale out than databases themselves.
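The usual pattern for such a cache layer is cache-aside: check the cache first, fall back to the DB on a miss, then populate the cache. A minimal sketch in which a plain dict stands in for Redis or Memcached; the `db_fetch` callback and TTL are illustrative assumptions:

```python
import time

class CacheAside:
    """Cache-aside reads: serve from cache when fresh, otherwise query
    the DB and store the result with an expiry time."""

    def __init__(self, db_fetch, ttl_seconds=60):
        self._db_fetch = db_fetch   # callback standing in for the real DB query
        self._ttl = ttl_seconds
        self._store = {}            # key -> (value, expiry_timestamp); stand-in for Redis

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.time():
            return entry[0]                      # cache hit
        value = self._db_fetch(key)              # cache miss: go to the DB
        self._store[key] = (value, time.time() + self._ttl)
        return value
```

With a real Redis behind it, the dict lookups become GET/SETEX calls, but the control flow is the same: only misses and expired entries ever reach the database.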
- Finally, you might split out services to scale them independently, with, say, a load balancer exclusively for the WebSocket service. Depending on your service, you might also need to partition and shard the DB, and you will want to set up monitoring.
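The simplest sharding scheme routes each key to a partition by hashing it modulo the shard count. A sketch under that assumption; the shard count is illustrative, and note that changing it remaps most keys, which is why production systems often prefer consistent hashing:

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Map a key to a shard index with a stable hash.

    Uses SHA-256 rather than Python's built-in hash(), which is
    randomized per process and so unsuitable for routing.
    """
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards
```

Every server that computes `shard_for("user:42", 4)` gets the same answer, so application nodes can route queries without coordinating.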
Full post here, 8 min read