# Issue 21
3 posts

Testing the boundaries of collaboration

Two experiments that break the rules succeed spectacularly. Lessons learned from the Limbo model - of tiny changes, instantly deployed - can pave the way for real-time software development collaboration.

  • Small changes are usually safe, so roll them out straight away and save the slow, costly code reviews for the small percentage of riskier, bigger changes that really need them.
  • Try a test, commit, revert (TCR) workflow: create a small commit every time a test passes, and delete your changes whenever a test fails. The automatic erasure of mistakes incentivizes you to make changes smaller, and smaller changes are less likely to fail (see the sketch after this list).
  • When teams collaborate using the Limbo model, your collaborators may be supplying corrections for your mistakes without even knowing what you were doing.
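A minimal sketch of what a TCR loop could look like, assuming a git repository and pytest as the test command; the post describes the workflow rather than any specific tooling, so the script and its names are illustrative:

```python
# tcr.py - illustrative "test, commit, revert" helper (not from the post).
# Assumption: run inside a git repository; pytest stands in for any test command.
import subprocess
import sys


def tcr(message: str = "TCR: passing change") -> None:
    """Run the tests; commit the change if they pass, revert it if they fail."""
    if subprocess.run(["pytest", "-q"]).returncode == 0:
        # Tests pass: keep the change as a tiny commit.
        subprocess.run(["git", "add", "-A"], check=True)
        # Commit only if something is actually staged (git commit fails on an empty index).
        if subprocess.run(["git", "diff", "--cached", "--quiet"]).returncode != 0:
            subprocess.run(["git", "commit", "-m", message], check=True)
    else:
        # Tests fail: erase the change so the next attempt starts from green.
        subprocess.run(["git", "checkout", "--", "."], check=True)


if __name__ == "__main__":
    tcr(sys.argv[1] if len(sys.argv) > 1 else "TCR: passing change")
```

Because every failed attempt disappears, the cheapest way to keep your work is to make each change small enough to pass on the first try.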

Full post here, 9 mins read

Software testing anti-patterns

  • The two anti-patterns - unit tests without integration tests, and integration tests without unit tests - both stem from problematic assumptions about the time required, the complexity of integration tests, and the difficulty of setting up the test environment. Unit tests are faster to set up, run, and fix, and can recreate outlier scenarios more easily; but some issues only surface in integration.
  • Avoid testing for the wrong functionality or a less relevant one by building a mental model of the product or service as critical functions, core functions and everything else. Focus on critical functions, and code that breaks or changes often.
  • Tying tests tightly to the code itself is an anti-pattern. Don't write tests that verify internal implementation details; test the functionality of features instead.
  • Not converting production bugs into tests is a mistake. It is not enough to simply fix a bug that slipped through; write a test for it too, so that future releases are safeguarded (a sample regression test follows this list).
  • Do not write tests without reading all the documentation for your testing framework. Most frameworks will suffice for most jobs and you should not need to write much (or any) custom code or reinvent the wheel where standard code and best practices exist.
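As an illustration of turning a production bug into a test, here is a hypothetical pytest regression test; `parse_price`, the `pricing` module, and the bug scenario are invented for the example and are not from the post:

```python
# test_regression_price_parsing.py - hypothetical regression test for a fixed production bug.
import pytest

from pricing import parse_price  # assumed module where the bug was fixed


def test_parse_price_handles_comma_decimal_separator():
    # Production bug: "1.299,50" was parsed as 1.29950 instead of 1299.50.
    # The test pins the intended behaviour so the bug cannot silently return.
    assert parse_price("1.299,50") == pytest.approx(1299.50)


def test_parse_price_rejects_empty_input():
    # Edge case reported with the bug: empty input should fail loudly, not return 0.
    with pytest.raises(ValueError):
        parse_price("")
```

Note that the tests exercise the observable behaviour of the feature rather than its internal implementation, in line with the earlier anti-pattern.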

Full post here, 41 mins read

Cognitive bias in tests: The most human side of testing

Types of cognitive bias that affect testers and how to avoid them:

  • To avoid the sunk cost fallacy, analyze the ROI of the current solution & compare it against alternatives. Continuously refactor existing systems to stay up to date.
  • Mitigate the anchoring effect (excessive reliance on early information) by ensuring testers are familiar with the project’s context and placing testers directly in touch with clients or decision-makers.
  • Counter confirmation bias (favoring information that confirms what we already believe) by relying on measurable, objective values rather than opinion. Use A/B testing and have team members cross-test each other's work.
  • Negativity bias makes past negative experiences weigh too heavily on decision-making and can make one pessimistic about the current release. Mitigate it by developing suites of automatic checks, by categorizing bugs by severity & impact, and by doing a trend analysis of bug reports and solutions over time (a toy trend-analysis sketch follows this list).
  • Inattentional blindness can be due to fatigue or tunnel vision from hyper-focusing on specific areas. Use peer reviews, pair testing, cross-testing, & automated checks, and encourage exploratory rather than script-based testing.
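To make the trend-analysis suggestion concrete, here is a toy sketch that buckets bug reports by month and severity; the data shape and records are invented for illustration, and a real analysis would read a bug-tracker export:

```python
# Toy sketch: trend analysis of bug reports by month and severity.
from collections import Counter
from datetime import date

# Illustrative records only; real input would come from a bug-tracker export.
bug_reports = [
    (date(2019, 5, 3), "critical"),
    (date(2019, 5, 17), "minor"),
    (date(2019, 6, 2), "major"),
    (date(2019, 6, 21), "minor"),
]

# Count bugs per (month, severity) bucket.
trend = Counter((reported.strftime("%Y-%m"), severity) for reported, severity in bug_reports)

for (month, severity), count in sorted(trend.items()):
    print(f"{month}  {severity:<8} {count}")
```

Seeing severities trend flat or downward across releases gives the team an objective counterweight to memories of past failures.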

Full post here, 12 mins read