Our work on methods marks its 27th anniversary in 2022.
Improved quality begins with work on improved methods.
An improved quality assurance process requires thorough problem analysis followed by work on methods for quality assurance.
Here follows the history behind three methods for improved Quality Assurance that we introduced before today's terminology was established:
Test automation since 1995.
At the Stockholm Stock Exchange, Quality Assurance and methods for Quality Assurance were regarded as critical for the success of the new trading system introduced in 1999.
This was addressed with two methods that were well ahead of their time: test automation and continuous delivery.
Frequent execution of hundreds of function test cases was considered impossible to do manually for practical reasons. It would have required a large staff of testers and test equipment, in addition to the work of specifying the test cases. Furthermore, there was no guarantee that manually executed test cases were run identically every time, and there were no trace files to verify how they had been executed. Could the results of manual tests, repeated week after week in regression testing, therefore be trusted?
The only realistic solution was to build a test tool for automated execution of test cases, which would ensure identically executed regression tests. This was the background to our first test tools, developed for a trading system.
Other requirements were:
- A user-friendly interface to improve the speed and quality of test case creation.
- Test case definitions that were easy for non-programmers, i.e. experts on trading rules, to understand (see the sketch below).
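As an illustration of the second requirement, the sketch below shows how a keyword-driven test case might look and be interpreted. It is a minimal modern re-creation in Java, not the original tool; the keywords, instrument names and prices are hypothetical examples.

```java
// Minimal sketch (not the original tool): a keyword-driven test case format
// that trading-rules experts could read, plus a tiny interpreter for it.
// Keywords such as ENTER_ORDER and EXPECT_TRADE are hypothetical examples.
import java.util.List;

public class KeywordTestCaseSketch {

    // A test case written in plain, domain-oriented keywords.
    static final List<String> TEST_CASE = List.of(
        "ENTER_ORDER  BUY  100 ERIC  LIMIT 52.50",
        "ENTER_ORDER  SELL 100 ERIC  LIMIT 52.50",
        "EXPECT_TRADE      100 ERIC  PRICE 52.50"
    );

    public static void main(String[] args) {
        for (String step : TEST_CASE) {
            String[] fields = step.trim().split("\\s+");
            switch (fields[0]) {
                case "ENTER_ORDER" -> System.out.println("Sending order: " + step);
                case "EXPECT_TRADE" -> System.out.println("Verifying trade: " + step);
                default -> throw new IllegalArgumentException("Unknown keyword: " + fields[0]);
            }
        }
    }
}
```

The point of such a format is that a trading-rules expert can read and review the steps without knowing anything about how the interpreter is implemented.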
The test tool was also an early example of Java development; Java made it possible to build a GUI for test case definitions.
The test tool made it possible for testers to focus entirely on creating test cases, since no time was needed for manual test execution. It consequently eliminated all manpower costs for manual execution of test cases.
Continuous delivery since 1997.
With the introduction of automated function tests it became easy to follow the quality trend of the tested system.
After six months of weekly tests and test reports we could see a pattern of error counts rising and falling every second week. The conclusion was that the pattern reflected the programmers' way of working: adding new functionality one week and cleaning up the code the following week.
It became apparent that this pattern of adding functionality one week and fixing the bugs the next was very inefficient, simply because bug fixing took much longer when the programmer had spent the past week on other functionality by the time the bugs were reported.
To speed up bug fixing, errors had to be reported while the programmers still remembered the code, i.e. on the following morning.
This was accomplished by daily deliveries to QA in order to keep software quality under day-to-day control. Every evening at 20:00 the code of the tested system was checked out, built, and tested with over 900 function test cases. The test results were published on the project's intranet the following morning, when the development team returned to work.
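The sketch below illustrates the idea of such a nightly cycle: check out, build, run the function test suite, and publish the result for the next morning. It is a modern re-creation, not the original project's setup; the commands, the test-engine.jar name and the report path are placeholder assumptions, and the 20:00 scheduling itself would be handled by an external scheduler such as cron.

```java
// Minimal sketch of a nightly delivery cycle: check out, build, run the
// automated function tests, and publish a one-line report for the team.
// Commands and paths below are placeholders, not the original setup.
import java.io.IOException;
import java.nio.file.*;
import java.time.LocalDateTime;

public class NightlyTestRun {

    static int run(String... command) throws IOException, InterruptedException {
        return new ProcessBuilder(command)
                .inheritIO()           // show build/test output in the console
                .start()
                .waitFor();            // exit code 0 means success
    }

    public static void main(String[] args) throws Exception {
        run("git", "checkout", "main");                 // 1. check out the tested system
        run("git", "pull");
        int buildStatus = run("mvn", "-q", "package");  // 2. build
        int testStatus  = run("java", "-jar", "test-engine.jar", "suites/all");  // 3. run the function test suite

        // 4. publish the result where the team will see it the next morning
        String report = LocalDateTime.now() + "  build=" +
                (buildStatus == 0 ? "OK" : "FAILED") + "  tests=" +
                (testStatus == 0 ? "PASSED" : "FAILED") + System.lineSeparator();
        Files.createDirectories(Path.of("intranet"));
        Files.writeString(Path.of("intranet", "nightly-report.txt"), report,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```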
The new delivery pattern required the test engine to be modified to write the outcome of every execution step in a test case to a log file, making it easy to identify the reasons for failed test cases.
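A minimal sketch of this kind of per-step logging is shown below; the class name, method and file layout are illustrative assumptions, not the original engine. After each executed step, the outcome is appended with a timestamp to the test case's log file, so a failed test case can be diagnosed from the log alone.

```java
// Sketch of per-step logging: every executed step, its outcome, and a
// timestamp are appended to the test case's own log file.
import java.io.IOException;
import java.nio.file.*;
import java.time.Instant;

public class StepLogger {
    private final Path logFile;

    public StepLogger(String testCaseId) throws IOException {
        this.logFile = Path.of("logs", testCaseId + ".log");
        Files.createDirectories(logFile.getParent());
    }

    /** Append one executed step and its outcome to the test case log. */
    public void logStep(String step, boolean passed, String detail) throws IOException {
        String line = Instant.now() + "  " + (passed ? "PASS" : "FAIL")
                + "  " + step + (detail.isEmpty() ? "" : "  (" + detail + ")")
                + System.lineSeparator();
        Files.writeString(logFile, line,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```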
Another advantage was that the QA group could save many hours by not having to write test reports anymore.
This was, we believe, one of the first cases of the "continuous delivery" practice, and the quality improvements were remarkable.
Performance snapshots since 2013.
Continuous deliveries to QA and test automation speed up bug fixing while new functionality is being added to the code. However, performance measurements of a tested system require more time than is normally available in a fast-paced development project.
A problem with continuous delivery is that performance problems can be introduced in any build of a system, and with scattered performance tests it is almost impossible to find the causes (there can be many) of reduced performance.
To solve these problems and monitor performance in every build, we introduced "Performance snapshots" at a customer in 2013.
The method means that performance is monitored with short but frequent measurements, for example after every build of the product. If performance degraded in a particular build, it was easy to pinpoint the cause of the degradation.
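The sketch below illustrates the principle of a per-build performance snapshot: run a short, fixed workload immediately after the build and append the measured throughput to a history file, so a drop between two consecutive builds points directly at the changes in the later build. The workload and build-id handling are placeholder assumptions, not the customer's actual system.

```java
// Minimal sketch of a per-build performance snapshot: time a short, fixed
// workload and append the throughput to a history file, one line per build.
import java.io.IOException;
import java.nio.file.*;

public class PerformanceSnapshot {

    // Placeholder for the real workload, e.g. sending N orders through the system.
    static void runFixedWorkload(int operations) {
        double x = 0;
        for (int i = 0; i < operations; i++) {
            x += Math.sqrt(i);                 // stand-in work to keep the sketch self-contained
        }
        if (x < 0) System.out.println(x);      // prevents the loop from being optimized away
    }

    public static void main(String[] args) throws IOException {
        String buildId = args.length > 0 ? args[0] : "unknown-build";
        int operations = 5_000_000;

        long start = System.nanoTime();
        runFixedWorkload(operations);
        double seconds = (System.nanoTime() - start) / 1e9;
        double opsPerSecond = operations / seconds;

        // The history file shows exactly in which build the throughput dropped.
        Files.writeString(Path.of("performance-snapshots.log"),
                String.format("%s  %.0f ops/s%n", buildId, opsPerSecond),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```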
The "Performance snap shots" method also put the focus on system performance on a "per build basis", which in this case resulted in a capacity increase of 20 times the original requirements on the same platform.