Keeping Quality Transparency Throughout the Organization

Software testers have a challenging job. Testing is often the last significant activity before a product ships, and since the terms “software” and “late” are nearly synonymous, it is the testers who catch the ire of the whole business as they try to test at the end. They are pressured to finish faster and to declare the product a release candidate before they have had enough time to feel comfortable doing so. To make matters worse, when bugs surface after release, everyone turns to the testers and asks, “Why didn’t you catch those bugs?” The testers did not cause the bugs, yet they bear part of the blame for the ones that escaped.

Having worked in agile development for a while, I can say that testers there have, in some ways, an even harder time. Agile development breaks software delivery into short cycles, typically one to four weeks long.

This means a tester in an agile environment may feel end-of-project pressure every few weeks rather than every few months. The good news for those testers is that the agile pressure cooker has spawned some unique and inventive strategies for handling that pressure. The first insight a smart team must embrace is that the entire team is accountable for the software’s quality.

Second, before the whole team can take responsibility for quality, it must first understand what quality means. Testers frequently have the best view of what quality means for the product and are in a strong position to keep the team informed about it.

Subjective Quality and Test Depth

Stating whether quality is good or bad is not easy. For example, if the team only had time for a basic first pass over a certain product area and found no defects, was the quality good or poor? It is hard to say. A good start, then, is for testers to report both the “depth” of testing they have been able to accomplish and their subjective evaluation of quality.

Example: picture a horizontal scale running from Poor Quality on the left, through Average Quality, to Great Quality on the right. Each tester (Dan, Ron, and Dave in the original figure) marks the point on the scale where he believes the product currently sits.

To describe testing depth, use a value from one to five, where one indicates a shallow initial pass and five indicates extensive testing of every part of the software, including boundaries and extreme failure conditions. If you are testing as a group, have each colleague pick a number from one to five representing how thoroughly he believes the application has been tested. When everyone has a number, have them all reveal it at once by raising that many fingers on one hand.

It is like a game of rock-paper-scissors in which no one loses. If not everyone agrees, which is likely, discuss the discrepancies as a group. For example, if I pick one and you pick three, we can talk about why you think the testing was of average depth while I think it was shallow. After some discussion, vote again to see where everyone stands. If the group still cannot agree after a few rounds, go with the most pessimistic evaluation.
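If you want to automate the bookkeeping for this exercise, a few lines of code can tally each round and apply the fallback rule. The sketch below is a minimal illustration in Python; the three-round limit and the unanimity test are assumptions layered on the process described above, not part of it.

```python
def depth_vote(rounds, max_rounds=3):
    """Resolve fist-of-five depth votes (1 = shallow pass, 5 = exhaustive).

    `rounds` is a list of vote lists, one list per voting round.
    A round where everyone shows the same number wins outright;
    otherwise, after the allowed rounds, fall back to the most
    pessimistic (lowest) score from the final round.
    """
    considered = rounds[:max_rounds]
    for votes in considered:
        if len(set(votes)) == 1:   # unanimous round: done
            return votes[0]
    return min(considered[-1])     # still split: take the shallowest view

# Example: Dan, Ron, and Dave vote twice without reaching agreement.
print(depth_vote([[1, 3, 3], [2, 3, 3]]))  # -> 2 (most pessimistic, round 2)
```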

Use a similar mechanism to assess quality, this time with a thumb: up for good quality, down for bad, and sideways for middling. This brief, collaborative exercise also gives everyone on the team a shared notion of depth and quality. Report the pair of evaluations to the team as a one-to-five depth score and a high, medium, or low quality assessment. You might use a row of stars to represent depth and a happy, neutral, or frowning face to represent quality.
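Rendering that pair of evaluations is purely a presentation choice. Here is a minimal sketch, assuming filled and hollow stars for depth and a small set of face symbols for quality:

```python
FACES = {"high": "🙂", "medium": "😐", "low": "☹"}

def render_assessment(depth: int, quality: str) -> str:
    """Format a (depth, quality) pair as stars plus a face."""
    stars = "★" * depth + "☆" * (5 - depth)
    return f"{stars} {FACES[quality]}"

print(render_assessment(3, "medium"))  # -> ★★★☆☆ 😐
print(render_assessment(5, "high"))    # -> ★★★★★ 🙂
```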

Report the New Feature Quality

At the end of a development cycle, it is common practice on agile projects to hold a product demo, where each new feature is shown to the entire team. That session is a natural place for the testers to report their depth and quality assessment for each new feature.

In the past, the testing team might have grumbled about not having enough time to test adequately. There is no need to complain anymore: the depth of testing is stated plainly, and it is up to the entire team to decide what to do about it. After seeing this technique in action, I have heard engineers agree that they should finish coding earlier in the sprint to give testers more time. I am not exaggerating.

Maintain Visible Open Bug Counts

The number of bugs found is not, by itself, a good indicator of quality. Shallow testing may turn up only a few bugs, while deep testing is likely to turn up many more. And even a low defect count can accompany a poor subjective quality assessment: “I didn’t do much testing, but everything I tried broke.”

Even so, the team should know about every open defect and do everything possible to address defects as soon as they are discovered. You can keep the number of known bugs visible at all times with a graphic that resembles an agile burn chart: the vertical axis of this two-dimensional chart shows the number of known bugs, and the horizontal axis shows time. Post a new version of the graph every morning, on the team-room wall if it is a physical chart. The curve gradually climbs over time and occasionally lurches downward as the development team fixes defects to drive the count back down.
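If the daily counts live in a spreadsheet or an issue-tracker export, a short script can regenerate the chart each morning. The sketch below uses matplotlib with invented numbers; pull the real counts from wherever your team records open defects.

```python
import matplotlib.pyplot as plt

# Hypothetical daily snapshots of the open-bug count over one cycle.
days = list(range(1, 15))
open_bugs = [2, 4, 5, 7, 9, 9, 6, 8, 10, 12, 12, 7, 4, 3]

plt.plot(days, open_bugs, marker="o")
plt.xlabel("Day of development cycle")
plt.ylabel("Known open bugs")
plt.title("Open bug count, updated every morning")
plt.savefig("bug_count.png")  # print it for the wall, or post it on the wiki
```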

Keeping the total low has become a source of pride for the development team, which now saves a few days at the end of each development cycle to concentrate on defects. They do not want the cycle to end with a high defect count: they know the figure is visible to everyone, and a large number of open defects will drag down the team’s quality assessment.

The Overall Quality of the Product

As development progresses, more and more features are added. Testers continue to test both the newly added features and the product as a whole, broadening their coverage and learning how all the features work together. At the end of each cycle, they assess the quality of the complete product to keep the team up to date on how they perceive it.

Because the product under development is vast, a single evaluation for the whole product would not be enough. Fortunately, this product divides into a dozen functional areas, and the test team provides a depth and quality assessment for each one. The result is a kind of “score card” made up of depth scores and quality assessments, and the team’s overall objective is to push both as high as possible.
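The score card itself can be as simple as a table keyed by functional area. A minimal sketch, with hypothetical area names and scores:

```python
# Hypothetical functional areas with (depth 1-5, quality) assessments.
scorecard = {
    "Checkout":      (4, "high"),
    "Search":        (2, "medium"),
    "User accounts": (1, "low"),
}

for area, (depth, quality) in scorecard.items():
    stars = "★" * depth + "☆" * (5 - depth)
    print(f"{area:<14} {stars}  quality: {quality}")
```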

Closing

At release time, the product manager ultimately decides whether or not to ship, but the entire group understands the quality of the product being shipped. Deciding to ship a product with a shallow level of testing and poor quality would still be risky, but that risk is now transparent to everyone.