Issue #52
3 posts

From API craftsmanship to API landscaping

  • Don’t let the fear of having too many APIs limit you. Some APIs will die while others will flourish through natural selection.
  • The effectiveness of your APIs should be felt and not seen. Changes in how consumers use APIs should be invisible to producers and vice versa.
  • Moat your APIs with a robust, organization-wide security strategy.
  • Allow your APIs to be discovered depending on whether they’re public, partner-facing, or private.
  • Use a sound versioning strategy (see the sketch after this list).
  • Build your API ecosystem so that it keeps working even if one API breaks.
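
As a concrete illustration of the versioning point, here is a minimal sketch assuming a small Flask service; the route names and response fields are hypothetical examples, not taken from the original post. It shows URL-path versioning, where /v1 and /v2 evolve independently so existing consumers are never broken.

```python
# Minimal URL-path versioning sketch using Flask.
# The /v1 and /v2 routes and payload fields are hypothetical examples.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/users/<int:user_id>")
def get_user_v1(user_id):
    # Original contract: a flat name field.
    return jsonify({"id": user_id, "name": "Ada Lovelace"})

@app.route("/v2/users/<int:user_id>")
def get_user_v2(user_id):
    # New contract: a structured name; v1 consumers are unaffected.
    return jsonify({"id": user_id, "name": {"first": "Ada", "last": "Lovelace"}})

if __name__ == "__main__":
    app.run(port=8080)
```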

Full post here, 5 mins read

Learnings from the journey to continuous deployment

  • Incremental changes result in easily maintainable products.
  • Releasing with smaller changes at regular intervals brings value to customers faster and provides early feedback on future tasks.
  • Improve code quality by writing quality tests and setting up comprehensive test strategies for the entire build and deploy pipeline.
  • Improve integration testing in the staging environment to detect issues related to dependencies.
  • Monitoring critical parameters such as system load, API latency, and throughput is vital to assessing the health of the software (a latency-probe sketch follows this list).
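
To make the monitoring point tangible, here is a minimal latency-probe sketch using only the Python standard library. The /health URL and the 500 ms budget are assumptions for the example, not values from the post.

```python
# Minimal latency probe: time a request to a (hypothetical) health endpoint
# and flag slow responses. The URL and threshold are illustrative assumptions.
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"  # hypothetical endpoint
LATENCY_BUDGET_MS = 500                      # assumed budget for the example

def probe(url: str) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000  # elapsed milliseconds

if __name__ == "__main__":
    latency_ms = probe(HEALTH_URL)
    status = "OK" if latency_ms <= LATENCY_BUDGET_MS else "SLOW"
    print(f"{status}: {HEALTH_URL} answered in {latency_ms:.1f} ms")
```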

Full post here, 5 mins read

Back-end performance, those metrics we should care about

  • The latency requirement should correspond to the specific service type.
  • There is a strong correlation between throughput and latency in a performance test: latency increases as throughput grows.
  • Normally, errors caused by network issues such as congestion should not exceed 5% of total requests, and application-caused errors should not exceed 1%.
  • Since the CPU largely determines a server’s performance, a high sy (system CPU time, as reported by tools like top) means the server switches between user mode and kernel mode too often, which hurts overall performance (see the sketch after this list).
  • Frequent disk reads or writes can cause high latency and low throughput.
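
To make the CPU point concrete, here is a minimal sketch using the psutil library (a common choice, not a tool the post prescribes). It samples how CPU time splits between user and system (kernel) mode; a persistently high system share roughly corresponds to the high sy reading mentioned above. The 30% threshold is a heuristic for the example only.

```python
# Sample the split of CPU time between user and kernel (system) mode.
# A persistently high "system" share corresponds to the high "sy" reading
# from top/vmstat mentioned above. psutil is one common option, not the
# tool the original post prescribes.
import psutil

def sample_cpu(interval_s: float = 1.0) -> None:
    times = psutil.cpu_times_percent(interval=interval_s)
    print(f"user: {times.user:.1f}%  system: {times.system:.1f}%  idle: {times.idle:.1f}%")
    # Heuristic threshold for this example only.
    if times.system > 30.0:
        print("warning: high system (sy) time - frequent user/kernel mode switches")

if __name__ == "__main__":
    for _ in range(5):
        sample_cpu()
```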

Full post here, 10 mins read