#kubernetes
11 posts

Tips for running scalable workloads on Kubernetes

  • You must set resource requests & limits so the Kubernetes scheduler can ensure workloads are spread across nodes evenly.
  • The scheduler can also use configured affinities & anti-affinities as additional hints about which node is best to assign your pod to.
  • In Kubernetes, a readinessProbe signals that a pod is ready to receive requests, while a livenessProbe signals that a pod is running as expected. Setting both ensures that requests to a service always go to a container that can process them.
  • Nodes in a Kubernetes cluster can and do disappear, so configure a pod disruption budget to ensure a deployment always has a minimum number of ready pods.
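The points above can be sketched in a single pair of manifests. This is a hypothetical example, not from the original post: the `web` name, image, ports, and thresholds are all placeholders.

```yaml
# Hypothetical Deployment showing resource requests/limits and probes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # placeholder image
          resources:
            requests:          # what the scheduler uses to place the pod
              cpu: 100m
              memory: 128Mi
            limits:            # hard cap enforced at runtime
              cpu: 500m
              memory: 256Mi
          readinessProbe:      # gate traffic until the pod can serve
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
          livenessProbe:       # restart the container if it stops responding
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10
---
# Keep at least two pods ready during voluntary disruptions (e.g. node drains).
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
```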

Full post here, 13 mins read

“Let’s use Kubernetes!” Now you have problems

  • If yours is a small team, Kubernetes may bring a lot of pain and not enough benefits for you.
  • If you need to scale, you need at least 3-4 virtual machines, and that means twice as many actual machines at a minimum.
  • The codebase is heavy: 580,000 lines of Go code at its heart as of March 2020, and large sections have minimal documentation and lots of dependencies.
  • Setting up and deploying Kubernetes is complex - architecturally, operationally, conceptually and in terms of configurations, compounded by confusing default settings, missing operational controls and implicitly defined security parameters.
  • Your application becomes hard to run locally: you need VMs or nested Docker containers to begin with, plus staging environments, proxying a local process into the cluster or a remote process onto a local machine, and so on.
  • You are tempted to write lots of microservices, but distributed applications are hard to write correctly and hard to debug. If you have more microservices than developers, you are doing it wrong.

Full post here, 6 mins read

Why Kubernetes is the new application server

  • Figuring out how to connect to a service is easy and available out of the box with Kubernetes. You get configuration information from the runtime environment without it having to be hardcoded in the application.
  • Kubernetes ensures reliability and availability for your applications by providing elasticity through ReplicaSets, which control the number of app replicas that should run at any time.
  • Because Kubernetes runs many replicas of a containerized application and auto-scales them, logging and monitoring matter even more than usual, and Kubernetes has observability built in. Note that you must store your logs outside the container to ensure they persist across runs.
  • Kubernetes is resilient. It ensures that your specified number of pod replicas are consistently deployed across the cluster. This automatically handles any possible node failures.
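The service-discovery point above can be sketched with a minimal Service manifest. This is a hypothetical example (the `mydb` name, selector, and port are placeholders): Kubernetes gives the Service a stable DNS name, and also injects environment variables into pods so nothing is hardcoded in the application.

```yaml
# Hypothetical Service: pods can reach it at the DNS name "mydb",
# or read the injected env vars MYDB_SERVICE_HOST / MYDB_SERVICE_PORT.
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  selector:
    app: mydb          # routes traffic to pods carrying this label
  ports:
    - port: 5432       # port the Service exposes
      targetPort: 5432 # port on the backing pods
```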

Full post here, 12 mins read

Tips for building and managing containers

  • Curate a set of Docker base images for your container because these base images can be reused as many apps share dependencies, libraries, and configurations. Docker Hub and Google Container Registry have thousands of pre-configured base images for download.
  • However, don’t trust arbitrary base images: always use a vulnerability scanner, and incorporate static analysis into your pipeline and run it for all your containers. If you do find a vulnerability, rebuild your base image rather than just patching it, then redeploy the image as immutable.
  • Optimize your base image: start with the leanest viable one and build your packages on top of it to reduce overhead, build faster, use less storage, pull images faster, and minimize the potential attack surface.
  • Use only one (parent) process per container. As a rule, each container should have the same lifecycle as the app itself.
  • Avoid embedding secrets inside containers, even if you keep the images private. Store sensitive data outside containers in Kubernetes Secrets objects, and use the Secrets abstraction to expose them inside containers as mounted volumes or as environment variables.
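The last point can be sketched as a Secret plus a pod that consumes it both ways. This is a hypothetical example; the names, image, and values are placeholders, not real credentials.

```yaml
# Hypothetical Secret; the API server stores the values base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: appuser
  password: s3cr3t
---
# Pod consuming the Secret as an env var and as a mounted volume.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
      env:
        - name: DB_PASSWORD        # expose a single key as an env var
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
      volumeMounts:
        - name: creds              # expose the whole Secret as files
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: db-credentials
```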

Full post here, 7 mins read

The two most important challenges with an API gateway when adopting Kubernetes

  • When building an API gateway using a microservices pattern that runs on Kubernetes, you must think about scaling the management of hundreds of services and their associated APIs and ensuring the gateway can support a broad range of microservice architectures, protocols and configurations across all layers of the edge stack.
  • The challenges of managing the edge increase with the number of microservices deployed, which also means an increased number of releases, so it is best that you avoid a centralized approach to operations and let each team manage their services independent of other teams’ schedules.
  • Encourage diverse implementations on top of consolidated tooling to support architectural flexibility. At the same time, take advantage of a consolidated underlying platform and offer a ‘buffet’ of vetted implementation options rather than letting developers build bespoke ones: this is better for security, and it is a more manageable and scalable approach too.

Full post here, 5 mins read