#testing

How to test serverless apps

  • Most of what goes wrong in serverless architectures lies in the configuration of functions: event sources, timeouts, memory, IAM permissions, etc. Because functions are stateless, the number of integration points also increases, so you need more integration tests than unit or end-to-end tests.
  • The first stage of testing should be local tests. You can run the Node.js function inside a wrapper; invoke functions locally using tools such as the Serverless Framework or AWS SAM Local; use docker-lambda to simulate the AWS Lambda environment locally; or use LocalStack to simulate AWS services locally. However, none of these simulate IAM permissions or API authentication.
  • The second stage is unit tests. If you have a complex piece of business logic, encapsulate it into a module and test it as a unit (see the sketch after this list).
  • Use integration tests to test code against the external services it depends on, such as DynamoDB or S3. Run these tests against real DynamoDB tables or S3 buckets, not mocks and stubs, and keep the tests’ assumptions consistent with the code’s.
  • Once the local tests have checked your code, move on to acceptance testing: verify that functions have the right permissions, timeout settings, memory, API Gateway event sources, etc. Do this after deploying.
  • Finally, if your serverless application is used by a UI client, directly or indirectly, make sure your changes are compatible with the client. You can have a QA team test this manually or use an automated test framework.
  • Once deployed, you should still use robust monitoring and error-reporting tools to catch issues that develop in production.
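
To make the unit-test stage concrete, here is a minimal TypeScript sketch, assuming a Jest test runner; the `calculateDiscount` module, the handler shape and the numbers are all hypothetical. The point is that the business logic lives in a plain module, so the unit test needs no AWS plumbing at all:

```typescript
// discount.ts - hypothetical business logic, extracted into a plain module
export function calculateDiscount(total: number, isReturningCustomer: boolean): number {
  if (total <= 0) return 0;
  const rate = isReturningCustomer ? 0.1 : 0.05;
  return Math.round(total * rate * 100) / 100;
}

// handler.ts - a thin Lambda handler that only parses the event and delegates
import { calculateDiscount } from './discount';

export async function handler(event: { body: string }) {
  const { total, isReturningCustomer } = JSON.parse(event.body);
  return {
    statusCode: 200,
    body: JSON.stringify({ discount: calculateDiscount(total, isReturningCustomer) }),
  };
}

// discount.test.ts - the unit test exercises the module directly, no AWS involved
import { calculateDiscount } from './discount';

test('returning customers get the higher discount rate', () => {
  expect(calculateDiscount(100, true)).toBe(10);
  expect(calculateDiscount(100, false)).toBe(5);
});
```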

Full post here, 6 mins read

Things to remember before you say yes to automation testing

  • Not all tests can or should be automated. Know which tests, if automated, will stop finding bugs, and keep those out of the automation list.
  • Work on well-thought-out and well-defined test cases before starting to build test automation.
  • Use a programming language your testers are familiar with so that the learning curve is not too steep.
  • If test automation gives ambiguous results, don’t trust them: there must be a problem with the test script or test plan, and you should focus on solving that problem.
  • Break your test suite into smaller chunks of independent test cases, so that no test case affects the results of another (see the sketch after this list).
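
A minimal sketch of that last point, assuming Jest and a hypothetical `Cart` class: each test builds its own fixture in `beforeEach`, so no test depends on state left behind by another and the suite can run in any order:

```typescript
import { beforeEach, describe, expect, test } from '@jest/globals';

// Hypothetical class under test
class Cart {
  private items: { sku: string; price: number }[] = [];
  add(sku: string, price: number) { this.items.push({ sku, price }); }
  total(): number { return this.items.reduce((sum, i) => sum + i.price, 0); }
}

describe('Cart', () => {
  let cart: Cart;

  beforeEach(() => {
    cart = new Cart(); // fresh fixture per test: no shared state between cases
  });

  test('an empty cart totals zero', () => {
    expect(cart.total()).toBe(0);
  });

  test('the total sums item prices', () => {
    cart.add('book', 12.5);
    cart.add('pen', 2.5);
    expect(cart.total()).toBe(15);
  });
});
```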

Full post here, 6 mins read

Continuous testing - creating a testable CI/CD pipeline

For continuous testing, focus on confidence, implementation, maintainability, monitoring and speed (CIMMS):

  1. For greater confidence, pair testers with developers as they write code to review unit tests for coverage and to add service tests for business logic and error handling.
  2. To implement, use tools that support rapid feedback by running repeatable tests fast. For service-level tests, inject specific responses/inputs into Docker containers or pass stubbed responses from integration points (see the sketch after this list). For integration tests, run both services in paired Docker containers within the same network. Limit full-environment tests.
  3. Ensure tests are maintained and up to date. Create tests with human-readable logging, meaningful naming and commented descriptions.
  4. To monitor, use testing tools that integrate with CI/CD pipeline tools to make failures and successes visible and even send out emails automatically. In production, labeling logs to trace a user’s path and capturing system details of the user’s environment allow easier debugging.
  5. For speed, keep the test suite minimal. Let each test focus on only one thing, and split tests to run in parallel if need be. Run tests only for changed areas and skip areas with no cross-dependencies.
  • Avoid automating everything. Run manual exploratory tests at each stage to understand new behaviours and determine which of those need automated tests.
  • When pushing to a new environment, test the rollback of that environment: reversing changes should not impact users or affect data integrity. Test the rollout process itself for production and run smoke tests. Continue to monitor by triggering known error conditions and ensuring monitoring captures them with sufficient information for easy debugging.
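
As an illustration of point 2, here is a TypeScript sketch, again assuming Jest; the `PaymentGateway` interface and `checkout` function are hypothetical. The service-level test passes a stubbed response from the integration point instead of calling a real payment service, which keeps the test fast and repeatable:

```typescript
// The integration point, expressed as an interface the service depends on
interface PaymentGateway {
  charge(amountCents: number): Promise<{ ok: boolean; reason?: string }>;
}

// Hypothetical service logic, including the error handling the test should exercise
async function checkout(gateway: PaymentGateway, amountCents: number): Promise<string> {
  if (amountCents <= 0) throw new Error('invalid amount');
  const result = await gateway.charge(amountCents);
  return result.ok ? 'confirmed' : `declined: ${result.reason}`;
}

test('checkout surfaces a declined charge', async () => {
  // Stubbed response injected in place of the real integration point
  const stubGateway: PaymentGateway = {
    charge: async () => ({ ok: false, reason: 'insufficient funds' }),
  };
  await expect(checkout(stubGateway, 500)).resolves.toBe('declined: insufficient funds');
});
```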

Full post here, 7 mins read

An introduction to load testing

  • Load testing is done by running software on one machine (or cluster of machines) that generates a large number of requests to the web server on a second machine (or cluster).
  • Common parameters to test include server resources (CPU, memory, etc.) for handling anticipated loads; speed of response for the user; efficiency of the application; the need for scaling up hardware or scaling out to multiple servers; particularly resource-intensive pages or API calls; and maximum requests per second.
  • In general, a higher number of requests implies higher latency, but it is good practice to test multiple times at different request rates. Though a website may take 2-5 seconds to load, web-server latency should typically be around 50-200 milliseconds. Remember that even ‘imperceptible’ improvements add up in the aggregate to a better UX.
  • As a first step, monitor resources - mostly CPU load and free memory.
  • Next, find the maximum response rate of your web server by setting the desired concurrency (100 is a safe default, but check settings like MaxClients, MaxThreads, etc. for your server) and test duration in any load-testing tool. If your tool only handles one URL at a time, run the test with a few different URLs with varying resource requirements. This should push CPU idle time to 0% and raise response times beyond real-world expectations.
  • Then dial back the load and test how your server performs when not pushed to its absolute limit: specify exact requests per second, starting at half the maximum found in the previous step, and step the rate up or down by half the difference each time until you reach the maximum rate at which latency stays acceptable (measured at the 99th or even 99.999th percentile). A sketch of this stepping follows the list.
  • Load-testing tools you can explore include ab (ApacheBench), JMeter, Siege, Locust, and wrk2.
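
To make the rate-stepping procedure concrete, here is a rough TypeScript sketch, assuming Node 18+ for the global `fetch`; `TARGET_URL`, the durations and the 5% stopping threshold are placeholders. It is not a substitute for the tools above, only an illustration of halving the saturation rate and then stepping by half the difference until the 99th-percentile latency is acceptable:

```typescript
const TARGET_URL = 'http://localhost:8080/'; // placeholder

// Fire requests at a steady rate and return the 99th-percentile latency in ms.
async function measureP99(requestsPerSecond: number, durationSeconds: number): Promise<number> {
  const latenciesMs: number[] = [];
  const intervalMs = 1000 / requestsPerSecond;
  const inFlight: Promise<void>[] = [];

  for (let i = 0; i < requestsPerSecond * durationSeconds; i++) {
    const started = performance.now();
    inFlight.push(
      fetch(TARGET_URL)
        .then((res) => res.arrayBuffer()) // drain the body
        .then(() => { latenciesMs.push(performance.now() - started); })
        .catch(() => { /* a real harness would count errors separately */ })
    );
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // hold the rate steady
  }
  await Promise.all(inFlight);

  latenciesMs.sort((a, b) => a - b);
  return latenciesMs[Math.floor(latenciesMs.length * 0.99)];
}

// Start at half the saturation rate, then step by half the difference each time.
async function findMaxAcceptableRate(saturationRps: number, maxP99Ms: number): Promise<number> {
  let low = 0;
  let high = saturationRps;
  let best = 0;
  while (high - low > saturationRps * 0.05) { // stop once steps become small
    const rate = Math.round((low + high) / 2); // first probe is saturation / 2
    const p99 = await measureP99(rate, 10);
    if (p99 <= maxP99Ms) { best = rate; low = rate; } else { high = rate; }
  }
  return best;
}
```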

Full post here, 13 mins read

The best ways to test your serverless applications

  • For the serverless functions you write, test for each of the following risks: configuration (databases, tables, access rights), technical workflow (parsing and using incoming requests, handling of successful responses and errors), business logic and integration (reading incoming request structures, storage order in databases).
  • Break up functions into hexagonal architecture (ports and adapters) with separation of concerns through layers of responsibility.
  • For unit tests, use a local adapter or a mock as the adapter to test the function’s business layer in isolation (see the sketch after this list).
  • Use adapters that simulate third-party end services to test integration with them. Save memory and time by testing file-storage integration with an in-memory adapter rather than running full integration tests.
  • For proper monitoring of integrations, use back-end tools such as IOpipe, Thundra, Dashbird, Epsagon, etc., and front-end tools such as Sentry or Rollbar. You can also use an open-source error tracking app such as Desole that you install in your AWS account.
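
Here is a minimal TypeScript sketch of the ports-and-adapters split described above, assuming Jest; the `FileStorage` port, the `InMemoryStorage` adapter and the `archiveOrder` function are hypothetical. The business layer depends only on the port, so the unit test swaps in the in-memory adapter and never touches AWS:

```typescript
// The "port": an interface the business layer depends on. A production
// adapter would implement it with S3; tests use the in-memory adapter.
interface FileStorage {
  save(key: string, body: string): Promise<void>;
  load(key: string): Promise<string | undefined>;
}

// Test adapter: same contract, no AWS calls
class InMemoryStorage implements FileStorage {
  private files = new Map<string, string>();
  async save(key: string, body: string) { this.files.set(key, body); }
  async load(key: string) { return this.files.get(key); }
}

// Business layer, written against the port only
async function archiveOrder(storage: FileStorage, orderId: string, payload: string) {
  if (!orderId) throw new Error('orderId is required');
  await storage.save(`orders/${orderId}.json`, payload);
}

test('archiveOrder writes the order under the expected key', async () => {
  const storage = new InMemoryStorage();
  await archiveOrder(storage, '42', '{"total": 99}');
  await expect(storage.load('orders/42.json')).resolves.toBe('{"total": 99}');
});
```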

Full post here, 10 mins read