• Latency requirements should match the specific type of service being tested.
  • Throughput and latency are strongly correlated in a performance test: latency rises as throughput grows (see the first sketch after this list).
  • As a rule of thumb, errors caused by network issues such as congestion should not exceed 5% of total requests, and errors caused by the application itself should not exceed 1%.
  • CPU usage largely determines a server’s performance; a high sy (system CPU time) means the server is switching between user mode and kernel mode too often, which hurts overall performance (see the monitoring sketch after this list).
  • Frequent disk reads or writes can cause high latency and low throughput.
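
A minimal sketch of how the throughput, latency, and error-rate guidelines above might be checked against load-test results. The RequestResult record, its fields, and the helper functions are assumptions for illustration, not from the original post.

```python
from dataclasses import dataclass

# Hypothetical per-request record from a load-test run; field names are assumptions.
@dataclass
class RequestResult:
    latency_ms: float
    network_error: bool = False      # e.g. a timeout or connection reset under congestion
    application_error: bool = False  # e.g. an HTTP 5xx returned by the application

def summarize(results: list[RequestResult], duration_s: float) -> None:
    """Report throughput and tail latency for one load level; comparing levels
    shows latency climbing as throughput grows."""
    latencies = sorted(r.latency_ms for r in results)
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    print(f"throughput: {len(results) / duration_s:.0f} req/s, p95 latency: {p95:.1f} ms")

def within_error_budget(results: list[RequestResult]) -> bool:
    """Check the 5% (network) and 1% (application) error-rate rules of thumb."""
    total = len(results)
    network_rate = sum(r.network_error for r in results) / total
    app_rate = sum(r.application_error for r in results) / total
    print(f"network errors: {network_rate:.1%} (limit 5%), "
          f"application errors: {app_rate:.1%} (limit 1%)")
    return network_rate <= 0.05 and app_rate <= 0.01
```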
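
A minimal monitoring sketch for the CPU and disk observations above, assuming the third-party psutil package is available; it samples the user/system CPU split (the sy column that top and vmstat report) and per-second disk read/write rates.

```python
import psutil  # third-party: pip install psutil

def sample_server_health(interval_s: float = 1.0) -> None:
    """Print the CPU user/system split and disk I/O rates over one interval.

    A high 'system' share (the 'sy' column) suggests excessive switching between
    user mode and kernel mode; fast-growing read/write counts point at disk pressure.
    """
    io_before = psutil.disk_io_counters()
    cpu = psutil.cpu_times_percent(interval=interval_s)  # blocks for interval_s
    io_after = psutil.disk_io_counters()

    reads_per_s = (io_after.read_count - io_before.read_count) / interval_s
    writes_per_s = (io_after.write_count - io_before.write_count) / interval_s
    print(f"CPU user: {cpu.user:.1f}%  sy: {cpu.system:.1f}%  "
          f"disk reads/s: {reads_per_s:.0f}  writes/s: {writes_per_s:.0f}")

if __name__ == "__main__":
    for _ in range(5):  # take a handful of samples
        sample_server_health()
```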

Full post here, 10 mins read