#deployment
8 posts

Deploys at Slack

Getting a peek into an engineering organization's deploy process is always an interesting exercise. With Slack, it's no different. The process today is straightforward:

  1. Create a release branch to tag the Git commit and allow developers to push hotfixes (if required).
  2. Deploy to a staging setup which is a production environment that doesn't accept any public traffic.
  3. Phased roll-outs to canary servers. This is tightly coupled with monitoring to see any spikes in errors.
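The phased roll-out in step 3 can be sketched as a gate that only widens the deployment while error rates stay flat. This is a minimal Python sketch, not Slack's actual tooling; the phase percentages and the `deploy`/`error_rate` hooks are hypothetical stand-ins:

```python
def canary_rollout(servers, deploy, error_rate, threshold=0.01,
                   phases=(0.01, 0.1, 0.5, 1.0)):
    """Deploy to a growing fraction of servers, aborting on an error spike.

    Returns True if all phases completed, False if the roll-out was aborted.
    """
    done = 0
    for phase in phases:
        target = int(len(servers) * phase)
        for server in servers[done:target]:
            deploy(server)
        done = target
        # Monitoring is checked after each phase; a spike halts the roll-out
        # while most servers still run the old version.
        if error_rate() > threshold:
            return False
    return True
```

Because each phase is followed by a monitoring check, an error spike caught at the 1% phase affects only a handful of servers rather than the whole fleet.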

Some of the core principles for such a process are:

  • Fast deploys. All deployments are pull-based rather than push-based. The build server updates a key in Consul, which in turn pings N servers to pull the latest code.
  • Atomic deploys. During deployment, a "cold" directory is created that pulls in the new code. The server is then drained of traffic and a symlink is switched between the "hot" and "cold" directories.
  • Phased roll-outs lend a lot of confidence in reliability, as they allow teams to catch errors early with less impact.
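The atomic deploy above hinges on the symlink switch being a single filesystem operation. A minimal sketch, assuming a POSIX filesystem where renaming over an existing link is atomic (the directory names are illustrative):

```python
import os

def atomic_switch(release_dir, current_link):
    # Build the new symlink under a temporary name, then rename it over the
    # live link. os.replace() is a single atomic rename on POSIX, so requests
    # never observe a missing or half-updated "current" directory.
    tmp = current_link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(release_dir, tmp)
    os.replace(tmp, current_link)
```

After the switch, the previous "hot" directory is still intact on disk, so rolling back is just another symlink swap.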

Full post here, 6 mins read

Kubernetes deployment strategies

  • The standard for Kubernetes is the rolling deployment, which replaces pods of the previous version with new ones without cluster downtime. Kubernetes probes new pods for readiness before scaling down old ones, so you can abort a deployment without bringing down the cluster.
  • In a recreate deployment, all old pods are killed at once and replaced with new ones.
  • A blue/green or red/black deployment runs the old and new versions side by side: users have access only to the green (old) version while your QA team applies test automation to the blue (new) version. Once blue passes, the service switches over and the green version is scaled down.
  • Canary deployments are similar to blue/green but use a controlled progressive approach, typically when you want to test new functionality on the backend or with a limited subset of users before a full rollout.
  • Dark deployments or A/B testing are similar to canary deployments but are used for front-end rather than backend features.
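The rolling strategy above maps directly onto a Deployment spec. A minimal manifest sketch (the names, image, and probe path are illustrative) showing the readiness probe that gates the scale-down of old pods:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during the roll
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:2.0
          readinessProbe:   # old pods are scaled down only after this passes
            httpGet:
              path: /healthz
              port: 8080
```

Aborting mid-roll (for example with `kubectl rollout undo deployment/web`) keeps the cluster up, because old pods are only removed once their replacements report ready.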

Full post here, 5 mins read

Blue-green deployment: a microservices antipattern

  • Blue-green deployment is a technique that reduces downtime for your application by running two identical production environments called Blue & Green.
  • At any time, one environment is live and the other is idle. The live environment serves all production traffic while your team deploys and tests in the idle one. Once the new build runs fine, you switch the router to swap the live and idle environments.
  • Adopting this approach with microservices discards their key property of being independently deployable.
  • All microservices in a release need to be mutually compatible, because the entire application is released in one go to the new environment.
  • "…this creates a distributed monolith whose pace of evolution is limited by the slowest-developing microservice."
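The live/idle swap described above can be modeled as a single router-level pointer flip. A hypothetical sketch; the environment names and upstream addresses are placeholders:

```python
class BlueGreenRouter:
    """Routes all production traffic to the live environment; the other
    environment stays idle for deploying and testing the next release."""

    UPSTREAMS = {
        "blue": "http://blue.internal:8080",    # placeholder addresses
        "green": "http://green.internal:8080",
    }

    def __init__(self, live="green"):
        self.live = live

    @property
    def idle(self):
        return "blue" if self.live == "green" else "green"

    def swap(self):
        # The cutover is one pointer flip for the whole application, which is
        # exactly why every microservice in the idle environment must already
        # be mutually compatible before the swap.
        self.live = self.idle
        return self.UPSTREAMS[self.live]
```

The all-or-nothing nature of `swap()` is the antipattern the post warns about: no individual microservice can go live on its own schedule.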

Full post here, 4 mins read

Learnings from the journey to continuous deployment

  • Incremental changes result in easily maintainable products.
  • Releasing with smaller changes at regular intervals brings value to customers faster and provides early feedback on future tasks.
  • Improve code quality by writing quality tests and setting up comprehensive test strategies for the entire build and deploy pipeline.
  • Improve integration testing in the staging environment to detect issues related to dependencies.
  • Monitoring critical parameters such as system load, API latency, and throughput is vital to assessing the health of the software.
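The monitoring bullet can be made concrete as a post-deploy health gate over the parameters the post names. A minimal sketch; the thresholds and metric keys are hypothetical:

```python
def deploy_health_gate(metrics,
                       max_load=0.8,
                       max_latency_ms=250,
                       min_throughput_rps=100):
    """Check post-deploy metrics against thresholds.

    Returns the list of failed checks; an empty list means healthy.
    """
    failures = []
    if metrics["system_load"] > max_load:
        failures.append("system_load")
    if metrics["api_latency_ms"] > max_latency_ms:
        failures.append("api_latency_ms")
    if metrics["throughput_rps"] < min_throughput_rps:
        failures.append("throughput_rps")
    return failures
```

Running a gate like this after every small, frequent release is what turns monitoring into the early feedback the post describes.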

Full post here, 5 mins read

Automated continuous deployment at Heroku

  • When adding new automation, start small.
  • Make it easy to onboard. Configuring pipelines and listing alerts to monitor should be easier than manually deploying.
  • If a manual process was working fine apart from speed, let the automated process use the same model.
  • Isolate early pipeline stages by team environment to avoid affecting other teams right away.
  • Teach deployers to use context the way a human would: scan for open incidents and check whether changes have been merged (otherwise they might deploy incomplete work).
  • Use internal feature flags to separate the deployment of changes and the enabling of features.
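The last bullet, separating deployment from enablement, boils down to shipping new code paths dark behind a flag. A minimal sketch; the flag store and checkout functions are hypothetical:

```python
FLAGS = {"new_checkout": False}  # code is deployed, feature is still off

def legacy_checkout(cart):
    return sum(cart)

def new_checkout(cart):
    # Hypothetical new pricing path, shipped dark behind the flag.
    return sum(cart) * 0.9

def checkout(cart, flags=FLAGS):
    # Flipping the flag enables the feature with no new deploy;
    # flipping it back is an instant rollback.
    if flags.get("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

Because the deploy and the flag flip are independent events, an automated pipeline can ship continuously while humans decide when each feature actually turns on.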

Full post here, 8 mins read