Artificial intelligence (AI) has permeated nearly every aspect of our lives, and software testing is no exception. AI is reshaping traditional testing practices, making them more efficient, effective, and adaptable to the demands of modern software development. By leveraging AI, teams can deliver high-quality software that meets the evolving needs of both users and businesses. There has been ongoing debate about whether AI will eventually render manual and automated testing obsolete. In my opinion, that is unlikely in the near future, particularly with regard to manual testing. AI is making significant strides in automated testing, although the results are not yet at a satisfactory level. For now, the greatest value of AI lies in supporting everyday testing activities, freeing up testing teams to focus on high-value tasks that require test engineering expertise and product domain knowledge.
Performance testing is critical for understanding how applications behave under different levels of load, but interpreting the results remains a complex challenge. Traditional evaluation methods, especially those built on binary pass/fail criteria, fail to capture the nuanced reality of modern software systems. As part of Continuous Integration and Continuous Deployment (CI/CD) pipelines, performance tests must provide actionable, reliable insights without manual intervention.

In this post, I'll share my approach to evaluating performance testing results. It is the first part of a series aimed at achieving fully autonomous continuous performance testing.

Why Evaluation Is Critical for CPT and Performance Testing

Performance testing is no longer a one-time activity executed before release. With Continuous Performance Testing (CPT), performance checks are embedded throughout the software delivery lifecycle. This integration demands fast, reliable decision-making. But performance data—resp...
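To make the binary pass/fail limitation above concrete, here is a minimal Python sketch. The function names, the 300 ms threshold, and the sample data are my own hypothetical choices, not taken from any specific tool: it contrasts a classic gate that reduces a whole run to a single verdict with a richer summary of the same measurements.

```python
"""Minimal sketch: a binary pass/fail gate vs. a richer summary.

All names, thresholds, and sample values are hypothetical.
"""
import statistics


def binary_gate(response_times_ms: list[float], p95_limit_ms: float) -> bool:
    """Classic CI/CD-style check: pass only if the 95th percentile
    of response times stays under a fixed limit."""
    # quantiles(n=100) returns 99 cut points; index 94 is the 95th percentile.
    p95 = statistics.quantiles(response_times_ms, n=100)[94]
    return p95 <= p95_limit_ms


def nuanced_summary(response_times_ms: list[float]) -> dict:
    """Richer view of the same data: several percentiles plus spread,
    so a reviewer (or a later automated step) can see *how close*
    the run came to its limits, not just a pass/fail verdict."""
    q = statistics.quantiles(response_times_ms, n=100)
    return {
        "p50_ms": q[49],
        "p95_ms": q[94],
        "p99_ms": q[98],
        "stdev_ms": statistics.stdev(response_times_ms),
    }


if __name__ == "__main__":
    # Mostly fast responses with a recurring slow outlier.
    samples = [120, 130, 128, 145, 410, 133, 125, 139, 142, 131] * 10
    print("binary verdict:", binary_gate(samples, p95_limit_ms=300))
    print("summary:", nuanced_summary(samples))
```

The point of the contrast: the gate only answers "did p95 stay under 300 ms?", while the summary preserves the shape of the run, which is exactly the kind of nuance an autonomous evaluation step needs to work with.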