He has been travelling the world teaching the craft of software testing to many testers, and specializes in Rapid Software Testing, a course he co-authored with James Bach.

He is also a prolific writer, and his publications include hundreds of articles, papers and columns.

We spoke with Michael Bolton this time and learned many interesting things.

Read on in this exclusive interview…

We are curious to know about your journey in software testing. Where and how did you start? Are there any milestones you find worth mentioning?

I started testing, as all people do, pretty much the moment I was born. For several years, I tested happily and learned quickly. Then I started going to school.

In school, some teachers taught me that questions had one and only one right answer. The activity they called “testing” in school tended to emphasize that. Testing, in school, is mostly focused on confirming that you’ve learned to provide the right answer to a specific question. I had a very good memory, which sometimes helped to disguise the fact that I hadn’t really learned the material; I could simply remember the right answers. To my dismay, we sometimes do software testing in exactly that way, focusing on correctness. I was lucky to have had some teachers, and my parents, who taught me that the world was more complicated than that. Some questions have ambiguous answers. Some answers have lots of viable alternatives. There’s more than one way to do things, and there’s more than one way that things can go wrong. In light of those ideas, I think we testers have to focus on exploring, discovering, investigating, learning, discussing, and reporting, rather than on single right answers. We need to think about what else might be going on in the system, and about looking for problems.

Although I didn’t realize it as I was doing it, I prepared myself for a career in software development by working in the theatre. In theatre, people prepare a model of an alternate reality in a few weeks. Talented and sometimes headstrong people figure out how to work together creatively. That turns out to be very similar to software development, which combines technical and engineering work with something that, if we paid attention, would look more like art. Though I trained as an actor, I preferred the technical end of things, and eventually became a stage manager. A lot of the skills that I learned there (in particular, self-management, adapting to very rapid change, and coordinating people on a deadline) carried over into software development.

As with many people in testing, I was a programmer before I became a tester. If you learn to program you’ll quickly realize how important it is to test your work. It may surprise you, but I believe that most programmers are terrific testers. If you want evidence, watch how they test someone else’s code. Like the rest of us, though, programmers have systematic blind spots that allow them to miss subtle but important problems in their own work from time to time.

When I was a programmer, I tended to find out about mistakes sooner or later, and the feedback helped me to become a better programmer—and a tester.

After a couple of years doing fairly routine programming, I started working for a company called Quarterdeck. I started as a technical support person, working in the Canadian office for three years. I was given my first job in testing by Skip Lahti. He brought me to California and put me in charge of coordinating the testing work on DESQview/X, which was an implementation of the X Window System for DOS (two technologies you don’t hear much about these days). After a while, I became a program manager for our flagship products QEMM-386 (a memory manager) and DESQview (a multitasker), which at that time were among the bestselling pieces of software in the world. At Quarterdeck, the program manager coordinated the technical aspects of product development, but that always felt like a testing position to me. I was also the program manager for the first couple of versions of CleanSweep, which was also quite remarkably successful in its day.

Towards the end of my tenure at Quarterdeck, I tried to go back to being a programmer for a little while, but the program management work kept pulling me back in. Due to the extreme growth and extreme contraction over a couple of years, I left Quarterdeck in July 1998. I’ve been an independent consultant, mostly specializing in testing, ever since.

Please tell us about your experience with Rapid Software Testing.

James and I had met a few times starting in 2001, and in 2004 he asked me to learn how to teach his class, which I did for the first time in Bangalore, India. The class really appealed to me, because it was rooted in the kind of fast, responsive work that I had done in commercial software. At the time, he also felt that he needed new exercises. I eventually brought the dice game and the Pattern exercise into the class, and I did a lot of work on developing the version of the Wason Selection Task that we teach. In 2006, James made me a full partner and co-author in the development of the class, which was very gratifying. We both teach the class in our different ways, and now we’ve added a third person, Paul Holland, who teaches the class in his own way, informed by his career as a test manager. It’s important, we believe, that each tester owns his or her own approach to testing, so we give each other a lot of freedom and latitude to try things, to learn things, and to develop new exercises.

That’s a very creative process, and it’s very strongly influenced by the participants in the class, too. We learn a ton from them.

People who know very little about RST often ask, ‘How do we follow RST? What do testers do differently (compared to the traditional way of testing) when they test the RST way?’ Would you help our readers understand RST in brief?

Rapid Software Testing is fast, inexpensive, credible, and accountable, intended to fulfil the mission of testing completely. If you want a good description of Rapid Testing in a nutshell, have a look at the map here (http://www.satisfice.com/images/RapidTestingFramework.pdf) and the premises of the class detailed on my blog, starting here.

Rapid Testing is focused on software development as a human activity. It’s centred not on documents or processes but on the skillset and mindset of the individual tester. Testing sometimes generates artefacts, but testing is not about the artefacts. It’s like music: you can have sheet music, which notates what the music would be like if it were played, but until it is actually performed, no music happens. So Rapid Testing emphasizes the tester performing the testing. If it’s important to do other stuff, if it’s part of the mission, we do that. If we’re asked to produce comprehensive documentation, we do that. If we are asked to test within a particular context or to do highly formal testing, we do that. But we’ll always question the cost and value of what we’re doing, because we’re always focused on trying to find important problems quickly, to add clarity, to reduce waste, and to speed up the successful completion of the product.

What is your opinion on test metrics and their usefulness? Are there any good or bad metrics? Why do executives ask for them? And how should testers and test managers tackle such demands?

Anthropologists have told us that when people feel they are in the presence of forces that they can’t control and don’t understand, they resort to magic and ritual to try to get a feeling of control over them.

So when managers and executives don’t understand how to make the product better, or how to make the development better, or how to make the testing better, they ask for numbers, and then hope that they can make the numbers look better.

Most managers and most testers haven’t studied measurement, and don’t pay much attention to the question of whether they’re measuring what they think they’re measuring. Moreover, they don’t recognize the effects that measurement can have on motivation and behaviour. To get started on addressing the first issue, read “Software Engineering Metrics: What Do They Measure and How Do We Know?” by Cem Kaner and Walter P. Bond (it’s easy to find on the Web). In that paper, there’s a list of ten questions that you can ask to evaluate the quality of the measurement you’re making. Read the material cited in the paper, too. And read Quality Software Management, Volume 2: First-Order Measurement by Jerry Weinberg. For the second issue, have a look at Measuring and Managing Performance in Organizations by Robert Austin.

Quite frankly, I think testers have been very bad at explaining their work and how it’s done. That’s partly because we’re in a fairly new craft. The craft has also been stuck in ideas that were first raised somewhere in the late 1960s and early 1970s. These ideas gave rise to the idea of the testing factory; that testing is an ordinary technical problem; that testing is about verification and validation; that testing is about checking to see that something meets expectations.

I resist that really strongly. Testing is far more than determining whether something meets expectations. Testing means telling a story about products, the kind of story an investigative reporter might tell: learning about products, investigating them, building new questions from that knowledge, and letting the expectations fall where they may. Some think of testing in terms of a game: preparing a set of test cases that the program has to pass, and then keeping score as to how many are passing and how many failing. Testing isn’t a game; it’s an investigation. Numbers can illustrate the story, sometimes, but the numbers aren’t the story.

How should testers develop their critical thinking? Any tips?

There are a bunch of different books you can read: Tools of Critical Thinking by David Levy, or Thinking, Fast and Slow by Daniel Kahneman. But to get really good at critical thinking, you have to practice it. You have to practice asking yourself, “What else could this be?” You have to practice asking yourself, “How might I be fooled?” You have to hang out with other people and invite challenges from them. Critical thinking is really thinking about thinking, with the object of not being fooled.

What are the things about the current state of software testing that make you happy and things that worry you?

A few years ago, it was difficult to find people who were interested in doing software testing really well and really skillfully. That was true in India, and it was true in the rest of the world too. These days, things are very exciting. We’re discovering that there are lots of people who are interested in sharpening their skills and in taking an expansive view of what testing is and what it could be. I’m also very excited that some large institutions are starting to pay attention.

I’m less excited about the fact that there are still people who want to trivialize our craft by selling bogus certifications and holding the threat of unemployment over the unwary. I’m still surprised at the number of people who don’t want to study testing and the things that surround it—how we make mistakes, how we model, how we learn, how we can use technology productively and expertly, how we can think critically. We’ve got fabulously interesting jobs.

We know people who want to follow the RST and CDT approach in their project work, but they can’t, since they don’t have support from management. Most organizations treat experiments and risks alike. What would you advise them?

Rapid software testing is not focused on the organization. It’s focused on the mindset and the skill set of the individual tester. You can choose to test rapidly in any context, mostly by exploring, discovering, investigating, learning and reporting about the product. One of the big deals in rapid testing is optimizing your own work. You can do this yourself, in your own little corner of the world, in those little snippets of unsupervised time that you can afford to waste without getting into trouble.

We call this “disposable time”. If your management is doing things that are wasteful and unhelpful even to them, they’re probably not paying enough attention to notice that you’re doing anything differently anyway. On the other hand, they might notice when you’re providing more information, and more valuable information, to them. So try little experiments. Share ideas with other testers in your community or your organization. Keep track of how much time you’re spending on excessive scripted work. Learn to use lightweight, flexible tools, and study some basic programming. Write little tools that save you time or tedium. Read. Read anything and everything connected to software or testing. Develop your skill with Excel, or with databases. Keep track of how much time you’re spending on test design and execution, on bug investigation and reporting, on setup, and on administrivia. Make that visible; graph it out. A small sketch of that kind of tool follows.
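As one illustration of the kind of “little tool” described above, here is a minimal sketch (not part of the original interview) in Python. It assumes a hypothetical timelog.csv with lines of date, activity, and minutes; the file name, format, and function names are assumptions for the example, not anything prescribed by Rapid Software Testing.

    # A hypothetical "little tool": tally how testing time is spent, then graph it.
    # Assumes a CSV file like:
    #   2014-03-01,test design and execution,90
    #   2014-03-01,bug investigation and reporting,45
    import csv
    from collections import defaultdict

    import matplotlib.pyplot as plt

    def load_minutes(path):
        """Sum minutes per activity from a date,activity,minutes CSV."""
        totals = defaultdict(int)
        with open(path, newline="") as f:
            for _date, activity, minutes in csv.reader(f):
                totals[activity] += int(minutes)
        return totals

    def plot_minutes(totals):
        """Make the split between testing work and administrivia visible."""
        activities = list(totals)
        plt.bar(activities, [totals[a] for a in activities])
        plt.ylabel("minutes")
        plt.title("Where the testing time actually goes")
        plt.tight_layout()
        plt.savefig("testing-time.png")

    if __name__ == "__main__":
        plot_minutes(load_minutes("timelog.csv"))

Run it over a week or two of entries and the resulting chart is the kind of visible, graphed record of time spent that the answer above suggests sharing with management.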

Organizations that regulate themselves to the degree that they’re unwilling to take some little risks or tolerate small experiments will tend to blow up or be surpassed by the competition. Look at The Black Swan and Antifragile by Nassim Taleb for some terrific ideas along those lines.

‘Value add makes the client happy’ is a common belief in testing circles. What value do you think testers can add to a project that will make the customer happy, and how?

I’ve never heard the expression, “value add makes the client happy”, so I don’t exactly know what it means. Much of the value that testers add to the project is often indirect and, for some people, hard to observe. Here’s what we do add: awareness of what’s going on, particularly in the product, but also in the project and the business.

I sometimes hear people complain that the testing is the bottleneck, saying things like “we can’t ship the product until the testing is done.” I disagree; they could ship the product whenever they liked. I think what they really mean is that they can’t ship the product until they believe the development work is done—and can’t decide whether it is done. So it’s not that there’s a bottleneck; the problem is that no one knows what’s really in the bottle. Testing helps to address that problem.

We watched you acting in ‘The Angel and Devil of Testing’. Where did the idea come from? Would you like to share that experience with our readers?

I don’t exactly remember how the angel and devil piece got started. I remember doing a talk called “Two Futures of Software Testing” in which I played the devil side for the first half and something more like an angel for the second half, but I did that on my own. I believe Lee Copeland put Jonathan Kohl together with me to do the two-handed presentation. Since Jonathan’s outward appearance is more angelic than mine, I got to play the devil. Jonathan’s wife Elizabeth did some terrific graphics for it, too.

What is your opinion about Tea-time with Testers? Any message for our readers?

I think Tea-time with Testers is a wonderful initiative, and congratulations to you for starting it. To your readers: Read on, but read critically. And when you notice something worth writing about, write!

Disclaimer: This interview has been originally published in past editions of Tea-time with Testers. The author’s opinions on the topic(s) may or may not have changed.