#testing

The Myth of Advanced TDD

This is an excellent post on using tests to drive design decisions. TDD is a much-touted, much-hated and much-loved paradigm in the software world. This post shows how using mock objects in a test case allows us to expose design flaws in our codebase.

  • Listen to the test cases to refactor and improve your code design.
  • Mock objects and test doubles expose design risks early. If the code seems very tightly coupled to the test case, change the design (see the sketch after this list).
  • If the test has duplicate assertions for the same field, that points to a missing abstraction.
  • If a lot of code in your flow seems tightly interconnected, consider using event sourcing to reduce the coupling. It will also make the tests easier to write.
  • Advanced TDD is indistinguishable from diligent TDD.
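
Here is a minimal sketch of the kind of feedback the post describes, written in TypeScript with Jest. The OrderService, PaymentGateway and the other names are hypothetical, not from the post: when a test needs this many doubles just to assert one behaviour, the test is telling you the class has too many responsibilities.

// Hypothetical types, purely for illustration.
interface PaymentGateway { charge(orderId: string, amount: number): void }
interface InventoryClient { reserve(orderId: string): void }
interface Mailer { send(to: string, body: string): void }
interface AuditLog { record(event: string): void }
interface Order { id: string; customerEmail: string; total: number }

class OrderService {
  constructor(
    private payments: PaymentGateway,
    private inventory: InventoryClient,
    private mailer: Mailer,
    private audit: AuditLog,
  ) {}

  placeOrder(order: Order): void {
    this.inventory.reserve(order.id);
    this.payments.charge(order.id, order.total);
    this.mailer.send(order.customerEmail, 'Thanks for your order');
    this.audit.record(`order ${order.id} placed`);
  }
}

// Four test doubles for a single assertion: that noise is the design
// feedback. Extracting the notification and audit concerns (for example
// behind a single "order placed" event) shrinks both the class and the test.
test('placing an order charges the customer', () => {
  const payments: PaymentGateway = { charge: jest.fn() };
  const inventory: InventoryClient = { reserve: jest.fn() };
  const mailer: Mailer = { send: jest.fn() };
  const audit: AuditLog = { record: jest.fn() };

  new OrderService(payments, inventory, mailer, audit)
    .placeOrder({ id: '42', customerEmail: 'ada@example.com', total: 100 });

  expect(payments.charge).toHaveBeenCalledWith('42', 100);
});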

I've been trying to incorporate more TDD in my professional projects. This post was the trigger to be more conscious and diligent about it. Do you practise TDD in your daily flow? Any other resources I should read on this topic to get better?

Full Post here, 13 mins read

test && commit || revert

[Comic: TDD. Courtesy: Geek and Poke]

This post by the legendary Kent Beck describes an extreme form of TDD. In all fairness, I haven't tried it and your mileage may vary.

My biggest takeaway from this article, though, was:

"I hated the idea so much that I had to try it"

The idea behind the TCR (test && commit || revert) strategy is simple. Run a loop that commits the code the moment all the test cases pass. If the tests don't pass, revert the code to the last commit where they passed.

# TCR loop: "test" stands in for your project's test command
while true; do
  git pull --rebase
  # commit on green, otherwise throw the uncommitted changes away
  (test && git commit -am "working") || git reset --hard
  git push
done
  • This strategy ensures that all changes are done incrementally in small batches. No big diffs.
  • Fewer conflicts between developers on the team because they are constantly pulling each other's code.
  • Insane out-of-the-box idea that might actually work.

Obviously, give it a shot in a small toy project first before trying this on your 2-million-line codebase that powers the stock market. I know I am.

Full Post here, 3 mins read

The Pyramid of Unit Testing Benefits

Effective unit test coverage on your codebase is the holy grail of software development. It takes effort, and at times the benefits aren’t entirely clear. This article details the benefits of unit testing very effectively.

  • Validates your code by giving you immediate feedback. Silly mistakes and bugs can be caught early through this process.
  • Forcing your code to be testable generally leads to better design and explicit dependencies.
  • You can use the test suite as up-to-date documentation of the software. If the tests pass, that’s what the code is supposed to do (a small example follows this list).
  • Prevent regressions by building an extensive test suite.
  • We are all constantly refactoring our code. Having a good test suite acts as a good safety net to ensure that we still deliver the functionality that was promised.
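
A tiny illustration of the documentation and regression points, assuming Jest and a hypothetical formatPrice helper (neither is from the article):

// A small pure function whose behaviour the tests document.
export function formatPrice(cents: number, currency = 'USD'): string {
  return `${currency} ${(cents / 100).toFixed(2)}`;
}

// Each test reads as a statement of intended behaviour. If a refactor
// breaks the rounding or the default currency, the suite fails immediately.
test('formats whole-dollar amounts with two decimals', () => {
  expect(formatPrice(1500)).toBe('USD 15.00');
});

test('respects an explicit currency code', () => {
  expect(formatPrice(999, 'EUR')).toBe('EUR 9.99');
});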

Full post here, 3 mins read

How to test serverless apps

  • Most of what goes wrong in a serverless architecture lies in the configuration of functions: event sources, timeouts, memory, IAM permissions, etc. With functions being stateless, the number of integration points also increases, so you need more integration tests than unit or end-to-end tests.
  • The first stage of testing should be local tests, for which you can run the Node.js function inside a wrapper, invoke functions locally using tools such as the Serverless Framework or AWS SAM Local, use docker-lambda to simulate the AWS Lambda environment locally, or use LocalStack to simulate AWS services locally. However, none of these simulate IAM permissions or API authentication.
  • The second stage is unit tests. If you have a complex piece of business logic, you should encapsulate it into a module and test it as a unit.
  • Use integration tests to check your code against the external services it depends on, such as DynamoDB or S3. Run these tests against real DynamoDB tables or S3 buckets, not mocks and stubs, and keep the same assumptions as the code (a sketch follows this list).
  • Once the local tests have checked your code, move to acceptance testing: whether functions have the right permissions, timeout settings, memory, API Gateway event sourcing, etc. Do this after deploying.
  • Finally, if your serverless application is used by a UI client, directly or indirectly, make sure your changes are compatible with the client. You can have a QA team test this manually or use an automated test framework.
  • Once deployed, you should still use robust monitoring and error-reporting tools to catch issues that develop in production.
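
As a rough sketch of the integration-testing point, here is a Jest test in TypeScript that runs a hypothetical Lambda handler against a real DynamoDB table. The table name, the create-user handler and the use of AWS SDK v3 are my assumptions, not the article's:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand } from '@aws-sdk/lib-dynamodb';
import { handler } from '../src/create-user'; // hypothetical Lambda handler under test

// Talks to a real table created for the test stage, not a mock, so the test
// exercises the same data shapes and assumptions as the production code.
const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = process.env.USERS_TABLE ?? 'users-test';

test('create-user handler writes the user to DynamoDB', async () => {
  const response = await handler({ body: JSON.stringify({ id: 'u-123', name: 'Ada' }) });
  expect(response.statusCode).toBe(201);

  const stored = await ddb.send(new GetCommand({ TableName: TABLE, Key: { id: 'u-123' } }));
  expect(stored.Item?.name).toBe('Ada');
});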

Full post here, 6 mins read

Things to remember before you say yes to automation testing

  • Not all tests can or should be automated. Know which tests, once automated, will stop finding bugs, and keep those out of the automation list.
  • Work on well-thought-out, well-defined test cases before you start building test automation.
  • Use a programming language your testers are familiar with, so the learning curve isn't too steep.
  • If a test gives ambiguous results when automated, don't rely on it; there is probably a problem with the test script or test plan, so solve that problem first.
  • Break your test suite into smaller chunks of independent test cases that don't affect the results of other test cases (see the sketch after this list).
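
A small sketch of what independent test cases can look like, assuming Jest and a hypothetical in-memory repository:

// Hypothetical in-memory repository used to keep tests independent.
class CartRepository {
  private carts = new Map<string, string[]>();
  add(cartId: string, item: string): void {
    this.carts.set(cartId, [...(this.carts.get(cartId) ?? []), item]);
  }
  items(cartId: string): string[] {
    return this.carts.get(cartId) ?? [];
  }
}

let repo: CartRepository;

// A fresh repository per test: no test depends on state left behind by
// another, so the cases can run in any order without affecting each other.
beforeEach(() => {
  repo = new CartRepository();
});

test('adding an item stores it in the cart', () => {
  repo.add('cart-1', 'book');
  expect(repo.items('cart-1')).toEqual(['book']);
});

test('an empty cart has no items', () => {
  expect(repo.items('cart-1')).toEqual([]);
});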

Full post here, 6 mins read