#lambda
6 posts

How can we apply the principles of chaos engineering to AWS Lambda

  • Identify weaknesses before they manifest in system-wide aberrant behaviours: improper fallback settings when a service is unavailable, retry storms from poorly tuned timeouts, outages when a downstream dependency gets too much traffic, cascading failures, etc.
  • Lambda functions have specific vulnerabilities. There are many more functions than services, and you need to harden boundaries around every function and not just the services. There are more intermediary services with their own failure modes (Kinesis, SNS, API Gateway) and more configurations to get right (timeout, IAM permissions).
  1. Apply stricter timeout settings for intermediate services than those at the edge.
  2. Check for missing error handling that allows exceptions from downstream services to escape.
  3. Check for missing fallbacks when a downstream service is unavailable or experiences an outage.
  • Monitor metrics carefully, especially client-side metrics, which show how the user experience is affected.
  • Design controlled experiments to probe the limits of your system (one way to inject failures is sketched after this list).
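
As a purely illustrative example of such an experiment, a Lambda handler can be wrapped so that latency or failures are injected only when environment variables are set. The decorator and the variable names (CHAOS_DELAY_MS, CHAOS_ERROR_RATE) are assumptions of this sketch, not something prescribed by the original post:

```python
import os
import random
import time
from functools import wraps


def inject_chaos(handler):
    """Wrap a handler so controlled latency and failures can be switched on
    via environment variables (both names are illustrative)."""
    @wraps(handler)
    def wrapper(event, context):
        delay_ms = int(os.environ.get("CHAOS_DELAY_MS", "0"))
        error_rate = float(os.environ.get("CHAOS_ERROR_RATE", "0"))

        if delay_ms > 0:
            # Simulate a slow downstream dependency to exercise timeout settings.
            time.sleep(delay_ms / 1000.0)

        if random.random() < error_rate:
            # Simulate an unavailable dependency to exercise error handling and fallbacks.
            raise RuntimeError("chaos experiment: injected failure")

        return handler(event, context)
    return wrapper


@inject_chaos
def handler(event, context):
    # Normal business logic goes here.
    return {"statusCode": 200, "body": "ok"}
```

Turning the variables on for a single function lets you check that callers' timeouts, retries and fallbacks behave as intended before a real outage tests them for you.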

Full post here, 4 mins read

AWS Lambda - how best to manage shared code

For functions that are highly cohesive and organized into the same repository, share code via a module inside that repository. To share code between functions across service boundaries in general, you can use shared libraries (perhaps published as private NPM packages available only to your team) or encapsulate the business logic into a service; both options are sketched after the list below. To choose between them, consider:

  • Visibility: A dependency on a shared library is explicitly declared, whereas a dependency on a service often is not, so you need logging or explicit tracing to see it.
  • Deployment: With a shared library, you rely on consumers to update when you publish a new version. With a service, you decide when to deploy and can control deployment better.
  • Versioning: There will be times when multiple versions of the library are active. With services, you control when and how to run multiple versions.
  • Backward compatibility: With a shared library, you communicate compatibility through semantic versioning (a major version bump signals a breaking change). With a service, it is up to you whether and how to maintain compatibility for existing consumers.
  • Isolation: You expose more of the internal workings with a shared library. With a service, you exercise more control over what is exposed.
  • Failure: When a library fails, you know your code has failed and stack traces show what’s wrong. With a service, it may be an actual failure or a timeout (the consumer cannot distinguish between the service being down and being slow), which can be a problem if the action is not idempotent, and partial failures require elaborate rollbacks.
  • Latency: You get significantly higher network latency with a service.
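
As a minimal sketch of how the two options differ from a consumer's point of view, consider a hypothetical pricing concern; the package name, service URL and function names below are illustrative only:

```python
import json
import urllib.request

# Option 1: shared library -- the dependency is explicit in your packaging
# (a requirements.txt entry here, or a private NPM package in the Node world)
# and runs in-process, so failures surface as ordinary stack traces.
# from pricing import total_with_tax

# Option 2: service -- the dependency hides behind a network call, so it needs
# its own timeout, error handling and tracing. The URL is a placeholder.
PRICING_SERVICE_URL = "https://pricing.internal.example.com/total"


def total_with_tax_via_service(amount_cents: int) -> int:
    request = urllib.request.Request(
        PRICING_SERVICE_URL,
        data=json.dumps({"amount_cents": amount_cents}).encode(),
        headers={"Content-Type": "application/json"},
    )
    # The short timeout and the explicit failure path are what distinguish the
    # service call from the in-process library call above.
    with urllib.request.urlopen(request, timeout=2) as response:
        return json.loads(response.read())["total_cents"]
```

The in-process import keeps failures visible and adds no network latency; the service call buys you independent deployment and isolation at the cost of the timeout, tracing and latency concerns listed above.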

Full post here, 9 mins read

Serverless pitfalls: issues with running a startup on AWS Lambda

  • Lambda allocates CPU power in proportion to RAM, so functions with less RAM also get slower CPUs. Take both into account (as well as the related costs: you save no money once execution time drops below the 100ms billing increment) when allocating resources to Lambda functions.
  • Hosting your backend behind an API gateway can result in latency issues. If you want <50ms response times, you need dedicated infrastructure.
  • Lambdas inside a VPC cannot connect to outside services such as S3 unless you add a NAT gateway, with the associated charges. It’s best to run either completely inside or completely outside a VPC, even if that means accepting less security or more latency and bandwidth costs.
  • Because of its distributed queues, AWS can execute a Lambda more than once per request, so design your functions to be idempotent (one approach is sketched after this list).
  • Functions that hang or deadlock are hard to identify and debug because they are silently terminated once they exceed the timeout limit; you’ll need to look in CloudWatch to find them.
  • Invoking a Lambda from another Lambda is slow. Either launch a task whose only job is to fan out a large number of other tasks, or use threads to launch multiple tasks simultaneously.
  • To work around the dreaded cold start, you can move end-user-facing front-end requests off Lambda and onto containers.
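
One common way to make a handler idempotent is a conditional write against a deduplication table, so that a repeated delivery of the same event becomes a no-op. The table name, key schema and the assumption that each event carries a unique request_id are illustrative:

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
IDEMPOTENCY_TABLE = "processed-requests"  # table name is an assumption


def handler(event, context):
    request_id = event["request_id"]  # assumes the caller supplies a unique id

    try:
        # The conditional put succeeds only the first time this request_id is
        # seen; a duplicate execution of the same request fails the condition.
        dynamodb.put_item(
            TableName=IDEMPOTENCY_TABLE,
            Item={"request_id": {"S": request_id}},
            ConditionExpression="attribute_not_exists(request_id)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return {"statusCode": 200, "body": "duplicate invocation ignored"}
        raise

    # ...perform the non-idempotent side effect exactly once here...
    return {"statusCode": 200, "body": "processed"}
```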

Full post here, 10 mins read

Cold start/warm start with AWS Lambda

  • Programming language can impact the duration of a cold start in Lambda: Java and C# are typically slower to initialize than Go, Python or Node but they perform better on warm calls.
  • Adding a framework to structure the code deployed in Lambda increases execution time with cold calls, which can be minimized by using a serverless-oriented framework as opposed to a web framework. Typically, frameworks don’t impact warm calls.
  • In serverless applications, one way to avoid cold starts is to keep Lambda warm beyond its fixed 5-minute lifetime by preventing it from being unloaded. You can do this by setting up a cron to invoke the function at regular intervals (a handler that recognises such pings is sketched after this list). However, AWS Lambda will still reset the instance about every 4 hours, and autoscaling must be taken into account.
  • To avoid cold starts when autoscaling results in concurrent calls, keep a pool of Lambda instances warm in the same way, but you will need to determine the optimal pool size to avoid wasting resources.
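
A minimal sketch of a handler that recognises keep-warm pings from a scheduled rule; the "warmup" marker in the payload is an assumption of this sketch, not an AWS convention:

```python
import boto3

# Heavy initialisation (SDK clients, connections) lives outside the handler so
# it is paid for once per cold start and reused on warm calls.
s3 = boto3.client("s3")


def handler(event, context):
    # A cron rule can invoke the function with a marker payload; returning
    # early keeps the ping cheap and away from real business logic.
    if isinstance(event, dict) and event.get("warmup"):
        return {"warmed": True}

    # Normal request path.
    buckets = s3.list_buckets()["Buckets"]
    return {"statusCode": 200, "bucket_count": len(buckets)}
```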

Full post here, 11 mins read

5 tips for building apps with the Serverless framework and AWS Lambda

  • Serverless works well with a microservice-style architecture. You should limit the scope of services and functions you use.
  • Lambda functions shouldn’t persist any data or session information in the environment beyond the lifetime of a single request.
  • However, Lambda may reuse your function instances as a performance optimization, so you should optimize your functions for reuse (see the sketch after this list).
  • Cold starts are a problem with AWS Lambda. Reduce latency by keeping containers warm.
  • Use dependency injection to make your functions easily testable. Write integration tests, both locally and on deployments.
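
A small sketch that combines instance reuse with dependency injection: the DynamoDB table handle is created once per container and reused on warm invocations, while an optional parameter lets tests inject a stub. The table name, key and stub are illustrative:

```python
import boto3

_default_table = None


def _get_table():
    # Created lazily and cached for the life of the container, so warm
    # invocations reuse it; it is an optimisation, not a place to keep
    # request or session state.
    global _default_table
    if _default_table is None:
        _default_table = boto3.resource("dynamodb").Table("users")
    return _default_table


def handler(event, context, table=None):
    # Lambda itself only passes (event, context), so the real table is used at
    # runtime; tests can pass a stub instead.
    table = table or _get_table()
    item = table.get_item(Key={"user_id": event["user_id"]}).get("Item")
    return {"statusCode": 200 if item else 404, "body": item}


# A stub is enough to unit-test the handler without touching AWS:
class FakeTable:
    def get_item(self, Key):
        return {"Item": {"user_id": Key["user_id"], "name": "test"}}


def test_handler_returns_user():
    result = handler({"user_id": "42"}, None, table=FakeTable())
    assert result["statusCode"] == 200
```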

Full post here, 6 mins read