In a 2009 MSDN blog post, “Testing sucks,” Dr. James Whittaker suggested that all managers need to ask themselves what they’ve done lately to make their testers (software engineers, systems analysts, and anyone else involved in testing activities) more creative.
Dr. Whittaker, formerly a Director of Test Engineering at Google (see “Why I left Google” and “Why I joined Microsoft”), offers five insights in a free ebook that he hopes will make your team of testers more effective.
Let’s take a look at a couple.
Insight 3: Take your testing up a level from test cases to techniques
Dr. Whittaker’s reasons:
I think test cases—and most discussions about them—are generally meaningless. I propose we talk in more high-level terms about test techniques instead. For example, when one of my testers finds a bug, they often come to my office to demo it to me (I am a well-known connoisseur of bugs and these demos are often reused in lectures I give around campus).
What are test cases? Here are two very interesting definitions.
According to Wikipedia, “a test case is usually a single step, or occasionally a sequence of steps, to test the correct behaviour/functionality, features of an application. An expected result or expected outcome is usually given.”
According to SQAtester.com, “a test case is a specific set of steps, it has an expected result, along with various additional pieces of information. These optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated.”
The success of testing centers largely on how test cases are designed and written. Your testers have to think about the methods and techniques for designing test cases so that your organization gets maximum coverage from an optimal set of test cases. There are various such methods and techniques.
For example, test design techniques include:
- Specification-based/black-box techniques
- Structure-based/white-box techniques
- Experience-based techniques
You can find information on these techniques that any tester can—and should—use here.
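As a small illustration of a specification-based (black-box) technique, boundary value analysis derives test cases from the edges of an input range rather than from the code’s internals. The `validate_age` function and its limits below are hypothetical, a minimal sketch of the idea only.

```python
# Hypothetical system under test: accepts ages 18 through 65 inclusive.
def validate_age(age: int) -> bool:
    return 18 <= age <= 65

# Boundary value analysis: test just below, at, and just above each boundary.
boundary_cases = [
    (17, False),  # just below lower bound
    (18, True),   # lower bound
    (19, True),   # just above lower bound
    (64, True),   # just below upper bound
    (65, True),   # upper bound
    (66, False),  # just above upper bound
]

for age, expected in boundary_cases:
    assert validate_age(age) == expected, f"age {age}: expected {expected}"
print("all boundary cases passed")
```

Six targeted cases cover the errors most likely at range edges (off-by-one mistakes), which is the point of the technique: high coverage of likely defects from a small, deliberate set of inputs.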
Microsoft has apparently begun using the concept of test tours as a way to categorize test techniques.
Insight 5: Testing without innovation is a great way to lose talent
Testing is an immature science. There are a lot of insights that a thinking person can make without inordinate effort. By ensuring that testers have the time to take a step back from their testing effort and find insights that will improve their testing, teams will benefit. Not only are such insights liable to improve the overall quality of the test, but the creative time will improve the morale of the testers involved.
One idea may be simply to automate your manually documented and proven test cases so they can auto-run as part of any future regression testing cycles. Of course, automating at least a portion of your standard tests may help you avoid an apparently common excuse.
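For instance, a manually documented test case (numbered steps plus an expected result) can be captured as an automated check that every regression cycle re-runs unattended. The cart example, the `Cart` class, and the test-case ID below are all hypothetical, a sketch of the idea rather than any specific tool.

```python
# Hypothetical manual test case TC-042, "Add item to cart", automated so it
# can run in every future regression cycle without a human following the steps.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, sku: str, qty: int) -> None:
        self.items.append((sku, qty))

    def total_quantity(self) -> int:
        return sum(qty for _, qty in self.items)

def test_tc042_add_item_to_cart():
    # Step 1: start with an empty cart (precondition).
    cart = Cart()
    # Step 2: add two units of one product.
    cart.add("SKU-123", 2)
    # Expected result: the cart reports exactly two units.
    assert cart.total_quantity() == 2

test_tc042_add_item_to_cart()
print("TC-042 passed")
```

Once the documented steps live as code like this, any test runner can execute them nightly, and the manual script is needed only when the behaviour itself changes.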
Comments are welcome. For example, you may be thinking along the following lines:
- You think none of the tasks performed by human testers can be replaced by automation or a robot (see Robots Can’t Replace Humans Software Testers, Right?)
- Your organization believes testing needs to happen throughout the life cycle and not be left to the end
- You believe it is important to have a related policy and chart of authority that together cover such things as the testing process, the related templates and tools, and who reviews and decides on outstanding defects before a product is approved to go live
- You use unattended nightly scripts to run your tests, perhaps the results include summary metrics that are fed into a central repository that supports analysis from one test cycle to the next to determine if a product is stabilizing and to predict the test cycles and time required before the product is ready to go live
- You use a tool for defect management, perhaps the same tool supports logging and running of manual and automated tests
- Your efforts to take testing up a level include doing different types of tests, such as load testing (perhaps you make use of a related cloud offering, an example here)
- Your test efforts include independent verification and validation by a third party
- You use a matrix or tool for test planning and traceability and are able to tie each requirement to related test cases, code, results and other test documentation (e.g., testing strategy, testing plans, test lab and resource requirements, test readiness training materials, etc.)
- You draw input from particular studies and reports (e.g., Gartner studies testing tools and publishes findings such as the Magic Quadrant for Application Life Cycle Management and the Magic Quadrant for Integrated Software Quality Suites)
- You use open-source testing tools, maintain a central repository for all test assets, apply metrics to zero in on areas to test, apply a tool for testing or analyzing code coverage, enjoy exploratory testing, ensure developers do unit testing, write test code before product code, have a very senior role responsible for a testing center of excellence, have a favorite list of open-source testing tools, or have particular testing techniques (e.g., drawing from CMMI, SearchSoftwareQuality, etc.) you could share to help readers
- You’ve applied some type of new philosophy or cultural approach that promotes better communication between teams (e.g., DevOps)
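To make the nightly-metrics idea in the list above concrete, here is a sketch: each unattended run appends a summary record to a central repository, and comparing pass rates from one cycle to the next hints at whether the product is stabilizing. The record fields, the numbers, and the “strictly rising” rule are all hypothetical, assumptions for illustration only.

```python
# Hypothetical summary records fed into a central repository, one per nightly cycle.
cycles = [
    {"cycle": 1, "passed": 180, "failed": 40},
    {"cycle": 2, "passed": 200, "failed": 25},
    {"cycle": 3, "passed": 214, "failed": 11},
]

def pass_rate(record) -> float:
    total = record["passed"] + record["failed"]
    return record["passed"] / total

rates = [pass_rate(r) for r in cycles]

# Crude stabilization signal: the pass rate rose in every consecutive pair of cycles.
stabilizing = all(earlier < later for earlier, later in zip(rates, rates[1:]))
print(f"pass rates: {[round(r, 2) for r in rates]}, stabilizing: {stabilizing}")
```

A real repository would track far more (new vs. reopened defects, coverage, cycle duration), but even this minimal trend line supports the go-live prediction the bullet describes.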
I asked Peter Shih, Director of Community Management at uTest, “How does uTest help organizations take testing up a level?”
Through our community of 70,000+ professional testers from 190 countries, uTest is able to ensure that web, desktop and—especially—mobile applications work as well in the hands of end users as they do in the lab. By moving a portion of their testing beyond the confines of the lab across operating systems, devices, carriers and locations, uTest enables companies to be confident their apps will work as intended for users—the first time, and every time.
By the way, congrats to the folks at uTest on their recent major announcement:
Applause is a new type of app analytics tool—one that analyzes more than 50 million reviews & ratings across 1 million iOS and Android apps—enabling companies to monitor and measure mobile app quality and user satisfaction.
You can read about it at blog.utest.com and blog.applause.com.
In closing, as some additional food for thought for taking testing up a level, here is an excerpt from a small ebook I wrote called Inherent Quality Simplicity.
One means to improve quality is for development teams and test teams to gain knowledge of the business through business requirements provided by business analysts. To support this, business analysts should develop business scenarios for creation of test cases, which can either be manually executed or entered into a test suite for automated execution. Developers and testers can watch for “holes” in the business design, as they consider the business requirements using the business scenarios as a guide. — Blaine Bey, I.S.P., Sierra Systems, CIPS 2007 Volunteer of the Year
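One way to act on the quote is to write each business scenario so it maps directly onto an executable test case, whether run manually or by an automated suite. The invoice-discount scenario, the function name, and the figures below are hypothetical, chosen only to show the scenario-to-test mapping.

```python
# Hypothetical business scenario from a business analyst:
#   "An order of $1,000 or more receives a 5% volume discount."
def invoice_total(order_amount: float) -> float:
    if order_amount >= 1000:
        return order_amount * 0.95
    return order_amount

# Test cases derived directly from the scenario. A developer reading these
# alongside the requirement can spot "holes" in the business design, e.g.,
# the scenario says nothing about negative order amounts.
def test_discount_applies_at_threshold():
    assert invoice_total(1000) == 950.0

def test_no_discount_below_threshold():
    assert invoice_total(999) == 999

test_discount_applies_at_threshold()
test_no_discount_below_threshold()
print("business-scenario tests passed")
```

Because each test names the scenario it encodes, traceability from business requirement to test result falls out of the structure rather than being maintained separately.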
Extending particular concepts within the quote, one opportunity to enhance inherent quality is to use innovative means to build a strong automated regression suite. To elaborate: at no additional cost, it may be possible to do so while gaining the full-time equivalent (FTE) of one or more engineers. For example, secure management agreement for a one-year initiative in which each engineer is allocated to a quality-control automation assignment for one or more months, using any number of test tools or life-cycle suites. If 12 engineers were employed and each given a one-month quality-control automation assignment, this would represent one FTE even before factoring in other benefits and savings generated by the initiative.
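The arithmetic behind that claim can be checked in a few lines; the staffing figures are simply the ones from the text.

```python
# Rotation from the text: 12 engineers, each taking a one-month
# quality-control automation assignment over a one-year initiative.
engineers = 12
months_each = 1
months_per_fte_year = 12

engineer_months = engineers * months_each           # 12 engineer-months total
ftes_gained = engineer_months / months_per_fte_year # one full engineer-year
print(f"{engineer_months} engineer-months = {ftes_gained:.0f} FTE")
```

The same formula scales to other team sizes or rotation lengths, e.g., six engineers at two months each yields the same one FTE.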
As a related implementation approach, rotate one engineer each month (or each agreed period) onto quality-control automation work. Note: leverage the initiative to move developers increasingly toward providing test scripts and documentation with product code, as part of the configuration-controlled, versioned intellectual-property packages. As each engineer takes a turn producing test scripts and documentation for the code the others produce, it is likely all engineers will come to see the need to provide test scripts and documentation with the product code they themselves produce. The FTE savings could be used to add a long-term resource able to help in various regards.
Sample justifications for this opportunity include:
(i) automation improves test depth, breadth, speed, and consistency, while reducing the need to grow the quality-control group of testers (reducing the QC group of testers could, for many organizations, mean using far fewer business resources to conduct tests, while improving automation could result in fewer test cycles);
(ii) automation, combined with business process optimization, can help ensure particular tests are completed but not unnecessarily repeated (the end result could free business resources and better allocate windows of time to particular types of testing, while the ideal depth and breadth of automation is best achieved with deep tool, development, and product skills, i.e., by utilizing experienced engineers rather than typical, less-skilled test resources);
(iii) teaming subject-matter functional experts with business analysts can move the business knowledge closer to the developer and further help ensure requirements traceability to code and all deliverables (including automated tests), while constructing the tests prior to the code can help better ensure the systems will work as intended;
(iv) teaming the developer and the business analyst can move the business knowledge closer to the code and therefore help to better identify the required tests, while a small QC team of testers can run engineer established test suites and report related results (while potentially also performing basic manual checks, recommending additional test scripts needed, coordinating customer acceptance tests, and otherwise enhancing their skills in various regards, such as for potential future development or engineer roles);
(v) achieving higher levels of quality, productivity, customer acceptance, profitability and intellectual property completeness, while enabling significant business value and concurrent quality initiatives for enhanced product quality (via quality control automation) and enhanced process quality (via dedicated process quality focus to address quality assurance and improvement, including inherent QMS adjustments to ensure agility or demonstrated traceability and compliance to the rigor of ITIL, CobiT, ISO 9001, CMMI, Six Sigma and so on).
“As far back as the days of Juran and Deming, quality products were manufactured efficiently as a result of inherent quality. So why do we all too often ignore the same in software development and systems implementation projects?”
Question provided by a Director of IT and Project Management Audit. Remaining anonymous, he adds,
“I believe it is due to the fact that construction and manufacturing are older professions than systems development. That said, since the beginning of time, man has been developing systems of the non-computer sort, and this makes me think of a great quote by Machiavelli on risk that is so relevant to projects we often deliver. It basically says there is nothing more difficult to plan, more doubtful of success, nor more dangerous to manage than the creation of a new system, for the initiator has the enmity of all who profit by the preservation of the old institution.”
Looking within the quote, one may begin to see the present need for policing and ethics enforcement concurrent with maturing the profession and pursuing various opportunities for innate improvement. As an example of the latter, tying administrative tasks to job performance may be a good idea. A better idea, however, may be to identify which administrative tasks would benefit from replacement by automation, such as a software program that eliminates the human data-gathering and processing issue, potentially adding a self-correcting feature that could respond, in AI fashion, to the decisions the program recommends, based on the information it generated from the raw data and its embedded reasoning logic. Less ambitiously, even if a software program were created simply to gather the data and produce an informative report, the administrative task could be streamlined and replaced with a higher-purpose human task of more value to the organization, one that could be tied to job, department and corporate performance.
Ron Richard, I.S.P., ITCP/IP3P
LinkedIn Profile and Personal Website