
Over A Cup Of Tea With Dr Cem Kaner – Part 3

The evolution of the art and craft of testing cannot be discussed without crediting the work Dr Cem Kaner has done over the last few decades. His contribution to the field has had a significant influence on the way testing is done by thinking, adaptive testers today.

Conversing with Dr Cem about testing, his journey, and his opinions on various subjects in the field had been on our wish list for a long time. I recently got to speak with him about various aspects of his life, expertise, opinions and suggestions, and this series is the result of what we have been discussing for quite some time.

In part 2 of this interview with Dr Cem Kaner, we mainly discussed how depth and diversity can help testers improve their prospects in software testing. In part 1, you read about his career journey, with some interesting stories and lessons learned. We also discussed his views on getting a testing education from commercial training courses and from universities.

To catch up, you can read part 1 and part 2. Read on to find out what invaluable insights he shares this time.

You have said that testers are at risk of becoming irrelevant. What is your concern?

Many of our favourite methods for designing and running tests are the same as they were in the 1970s and 1980s. I think that many of our discussions of the testing process are also stuck in the 1980s. We’re still complaining about the same conflicts and trying to deal with them in, largely, the same ways.

The problem is that the world around us has changed. We used to call a 10,000-line program “big”. These days, many programs have millions of lines of code. Programmers work in new languages, with better development environments and much richer libraries. The libraries let them add much more capability—from a tester’s point of view, much more complexity—much more quickly.

If your testing approach has not gotten much more efficient, your testing staff has not gotten much larger, but the software you are testing is more than 100 times longer and more complex, then your testing has to have a smaller impact than it used to have. The more quickly that programmer capability rises, compared to tester capability, the less relevant the testing is to the final product.

Have we learned so little about how to test better over the last 30 years?

I think we’ve learned an enormous amount about how to test. I think we understand a lot more about the ways that programs fail. We’ve developed many new techniques for predicting where errors will be, for noticing errors more quickly, for verifying that revised code has not changed in unexpected ways, for tracking changes in data, for creating, executing and monitoring the results of tests and for finding patterns in failures.

The problem is not in the state of the art. It is in the state of the practice.

What is the difference between the state of the art and the state of the practice?

The “state of the art” is about the leading edge of the field. What do the most knowledgeable people know? What can the most skilled people do?

The “state of the practice” is about normal practice in the field. What do most testers know and do?

I think the state of the art in testing has evolved enormously over the past 30 years. I think the state of the practice has barely budged. We keep running around the same circle and telling ourselves we are making changes, we are moving, therefore we must be making progress. We invent new names for ideas that were popular 15 years ago, then sell them as new ideas.

Is this unique to testing?

I hear this question most often from freshers, people who have recently gotten a job in testing or who are trying to get one. My impression of the typical person who asks this question is of someone intelligent, ambitious, willing to work hard, maybe university educated but without many skills that are well enough developed to be of practical value in the workplace. This person is at the start of a journey of self-improvement that will require many years of work.

Every field repeats its past, cycling through the same few basic strategies for addressing its basic problems.

This is not necessarily a bad thing. If you have new knowledge, if you have solved a few other problems, then an approach that didn’t work the last time you tried it might finally work.

However, all areas of computing seem to suffer from amnesia. If an idea hasn’t been popular in the last ten years, very few people will remember how popular it was before that. Honest people can honestly sell the idea as “new” because they are so ignorant of their field’s history. We spend very little time reading the history of what we do.

Some consultants get very impatient with me when I tell them that they are selling something as “new” that is very similar to an old idea. Because they didn’t learn from history, they had to reinvent the idea or learn it from other people who reinvented it. They don’t see the similarities. And the superficial things are different. The words people use to describe the idea are different. The words people use to describe the problem that this idea tries to solve are different. The social context—how programmers and project managers and marketing people and customers work together—is different too. For someone who doesn’t want to recognize a historical pattern, these superficial differences make repetition easy to ignore.

This kind of problem is more general than testing, more general than computing. I see it running through the social sciences, through psychology and economics for example. However, I think we are less well-read in computing than in the social sciences, less familiar with our history.

Why is this a problem?

If you don’t realize that you are trying an old approach, you won’t ask why that approach failed when people tried it before. You won’t revise your version of the approach to avoid the old mistakes and overcome the old barriers.

You started by talking about how testers progress more slowly than programmers, but here you are saying that testers and programmers have the same problem. What is the difference?

I think that programmers and testers are similar in that both groups are blind to their history. This isn’t just a problem among practitioners. University degree programs, at least the typical programs in North America, are much more focused on the current ideas, the current technology, the current practices, than on the history of them or the controversies that led to the evolution of the current forms.

What is different about programmers is that they incorporate what they are learning into their technology. They create new tools, new libraries, and new languages. They package what they are learning into a form that other people can use and that will still be usable (and useful) even after the ideas that inspired the technology have (again) become unfashionable.

The other difference, I think the essential difference, between the evolutions of programming and testing, is that programmers don’t just incorporate what they learn into their technology. They incorporate their technology into what they learn.

What do you mean?

When we teach people how to develop software, we don’t just teach them “about” programming. We teach them how to actually do programming. We give them programming assignments. The assignments might be fairly easy in the introductory programming course but they get harder. To do the assignments, the student must learn more skills, work with more tools, and learn new ways to solve problems. We intentionally give students assignments that force them to try new approaches, to learn new ways of thinking about computing.

Students don’t just study “programming”. They take courses like “data structures”, “algorithms”, “design methods”, “user interface programming”, “user interface design”, “computer organization, machine language and assembly language”, “thread-safe programming”, “network design”, “design of programming languages”, and so on. They learn many languages that demand that they think about problems in fundamentally different ways. The typical university computer science student learns an object-oriented language (like Java), a traditional structured language (like C or FORTRAN), an assembler, a scripted programming language (like Ruby or Perl), and perhaps a functional language (like Haskell). They learn to write code, and use development tools, in at least two operating systems. They learn to rely on programmers’ libraries, to find new libraries online (essentially, new collections of commands for their current language) and to save their own good code into libraries that they can reuse on later projects.

We don’t go into this type of depth in testing.

Is that because we don’t have university degree programs in software testing?

No.

Then what’s the difference?

Most commercial courses in software testing stick to easy topics. They teach definitions. They teach corporate politics and let people discuss (complain about) their situations. They present simplified examples. If they give students in-class assignments, the assignments are designed to be straightforward, so that everyone can finish in a reasonable time. Very few classes have homework and, of those, very few have hard homework problems. Testing courses make you feel good.

We have a remarkable number of introductory courses in software testing. People drift from one course to another, thinking they are learning exciting new things when they’re really getting a new batch of vocabulary, exposure to a new collection of political attitudes (attitudes about the allocation of power and responsibility in software projects), a superficial introduction to ideas from some other field (maybe psychology, maybe engineering quality control, maybe statistics, maybe programming, maybe something else), and a collection of fun stories and fun activities.

But people come out of these courses believing that they’ve learned a lot?

It is easy to foster the illusion that we are covering important topics.

For example, we can have arguments that feel deep and meaningful about the proper definitions of words. We can introduce “profound” new definitions for familiar words and “deprecate” the old definitions. I don’t think definitions are important. I think we have conflicting definitions in the field and so we have to learn to listen to each other. I think that debates about the “right” definition lead nowhere. However, they can give the participants the illusion of progress. People can feel as though they’ve done something important even though they haven’t done any real work and they haven’t learned how to actually do anything differently.

As another example, we can work through simple examples. For every testing technique, there are simple, straightforward examples. To teach this way, you describe the technique in a lecture, then work through one or two examples in the lecture. Next, pick a student exercise that is very similar to the lecture examples. You can make it sound different by changing the setting—for example, you can design a test for the boundary of the size of a data file instead of the size of an input data item, but both boundaries can be integers that are specified as part of the problem. The students will have a successful experience and feel confident that they know the technique. They’ll feel ready to move on to the next technique because they won’t realize that they probably can’t apply this technique to problems that are even slightly more difficult.
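To make the kind of classroom exercise described here concrete, consider a minimal sketch in Python. The function under test, its name, and its 1-to-100 range are hypothetical stand-ins for whatever a lecture problem might specify.

    # Hypothetical function under test: it accepts an integer quantity
    # whose valid range (1 to 100) is specified as part of the problem.
    def accept_quantity(value: int) -> bool:
        return 1 <= value <= 100

    # Classic boundary tests: probe each edge of the valid range and the
    # first invalid value on either side of it.
    def test_boundaries():
        assert accept_quantity(1)        # lower boundary, valid
        assert accept_quantity(100)      # upper boundary, valid
        assert not accept_quantity(0)    # just below the lower boundary
        assert not accept_quantity(101)  # just above the upper boundary

    if __name__ == "__main__":
        test_boundaries()
        print("All boundary tests passed.")

An exercise like this succeeds precisely because the boundaries are handed to the student. It says nothing about the harder skill of discovering where the boundaries of a real program actually lie.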

As a third example, we can show how something in our field is like a problem studied in a different field, and then look at a model that people developed in that other field. I think this is a good start on teaching people to expand their thinking, but what does the course do with it? Do the students ever actually use the model? Do they learn to use it on the job? How much practice do they get in applying it?

If the course doesn’t give students enough experience with a concept or model, if it doesn’t give them experience actually using it under circumstances that are similar enough to the student’s work-life experiences, then the student won’t be able to apply it when they get back to work.

We have plenty of experience going to places where a person at the front of the room gives you information that you can tell people about but cannot put to practical use. We have a special name for this type of information. We call it “entertainment.”

There’s nothing wrong with going somewhere for entertainment. But don’t confuse it with education or training.

If you’re going to spend your time on topics that are presented so superficially that you won’t be able to apply what you’ve learned, from a what-are-you-learning point of view, you’d be as well off (and probably better entertained) listening to jokes and watching magic tricks.

Are you saying that people should avoid introductory courses?

You have to start somewhere. It’s good to have an introduction. But an introduction is just an introduction.

It’s a starting point. It’s just one small first step on a path to competence in a field.

But if people take the right introductory course, they can write an exam and be certified.

If you can learn what you need for a “certification” from an introductory course, that certification is not a measure of your knowledge. It’s a sales tool for the people who are selling the introductory course.

The fact that you can get “certified” after a few days of training is one of the problems in our field. This tells people that you can learn the basic knowledge and skills of the field in three to five days. That you can take this course, and then pass this multiple-choice exam, and then you can stop for a while because you have the basics.

This illustrates the difference that I was talking about between testing education and programming education. If I told you that I could teach you how to program in three days, well enough to get a job as a programmer and keep that job, you would probably laugh at me. Anyone who is any good at programming knows that you can’t get the basic skills that you need in a matter of three days. And that you can’t demonstrate your skill as a programmer by answering simple questions on a multiple-choice exam.

We should know the same thing about testing, but as a field, we don’t. A huge portion of our field takes training that is specifically designed to prepare people for these silly exams.

But some certification exams are hard to pass. Doesn’t that mean they have high standards?

No.

Why not?

The pass rate tells you very little about what the students actually learned, how useful the material was, or even how hard the material was.

When I teach a course at university, I can make my exams as easy or as hard as I choose. If I want to pass 10% of my students, I can write an exam to do that. If I want to pass 90% of my students, I can write an exam to do that too.

I can fail 90% of my class even if the underlying material is easy and I can pass 90% even if the course material is very hard.

Learning how to set the difficulty of an exam is one of the skills that professional teachers learn.

Do you think the people who teach the primary certification course are dishonest?

No.

I think they (ISTQB, ASQ, and QAI) have the wrong vision of testing education but I think that they are trying very hard to provide a good service (good courses, fairly-written exams) that is consistent with their vision.

I know much of QAI’s history but I am not familiar with the quality control processes of the new owners of QAI, so I am not comfortable making additional statements about QAI.

Regarding ISTQB and ASQ, I am very impressed with the efforts they make to evaluate and improve the quality of their exams. I respect the professionalism of their work. I just think they’re taking the field in the wrong direction.

How do you think testing education has to change?

That’s what I came to the university to think about. I was unhappy with the impact of my teaching as a commercial instructor. I felt that my colleagues and I were providing too much entertainment and too little skill development, that we were making too small a difference in the lives of our students.

So I joined the computer science faculty at a good school, to see how they taught novice programmers to win programming competitions and land good jobs. As I learned the basics of university teaching, I proposed a project to the National Science Foundation, to combine what I had learned from commercial training in software testing with good practices in university-level teaching of mathematics and programming. The NSF gave me three grants, over 10 years, to develop my approach. This is the work that led to the BBST course series (http://bbst.info).

Would you like to tell our readers about your overall experience with BBST courses?

Rebecca Fiedler and I designed these courses, with a lot of help from other people. Becky had been a professional teacher for almost 20 years—while she worked on BBST, she also did other research that led to her PhD in education and she became a professor of education. Much of the BBST instructional theory (as opposed to the testing theory and practice) came from Becky.

Our goal is to use technology to draw students into a deeper and more intense learning experience, helping them understand complex issues, develop complex cognitive skills, and develop real-world skill in the use of key software testing techniques.

One of the weaknesses of commercial training is that it rarely involves significant assessment. “Assessment” means the measurement of what the student knows or can do. To assess student work, we have to design good tests, good projects and so on.

Good assessment serves four purposes:

a) Assessment tells the students what they do and don’t know

b) Students learn from participating in the assessment activities. People learn what they do. The assessment activities structure what they do, and thus what they learn.

c) Assessment tells the instructor what each student knows (or doesn’t know)

d) Assessment tells the instructor what parts of the course are weak.

When you’re teaching a course, assessments force you to confront the weaknesses of your work. There are many excuses for not doing assessments or for making assessments so easy or so simplistic that you don’t learn much from them. However, if you want to improve as a teacher, studying what your students have learned is how you discover your effectiveness.

Our typical student spends 12-15 hours per week on each BBST course for four weeks (48-60 hours in total). Of that, about six hours are lectures. The rest is the student’s own work: doing assignments, participating in discussions, preparing for and writing quizzes and exams, and processing the feedback they get on their work. People learn what they do. We provide a structure in which students do a lot and (those who do) learn a lot.

What are the students learning?

In the first course, BBST-Foundations, students learn the basics of black box software testing. This is our version of the introductory course. We see it as the starting point for the BBST educational experience, not as the primary experience. The next course, BBST-Bug Advocacy, is my favourite course. Students learn how to report bugs effectively. They learn a lot about

  • troubleshooting (how to demonstrate the failure and its consequences)
  • market research (how to demonstrate that some aspect of the program’s design reduces the program’s value)
  • persuasive technical writing (how to describe the bug precisely and in a way that motivates people to fix it)
  • decision theory (how people make decisions about the importance of the bug and the value to the project of the bug reporter (i.e., you))

The third course, BBST-Test Design, is a fast march through the field’s main test techniques. We peek at about 100 different techniques and focus on six. We do small projects using two of the techniques and larger, harder projects on two others. The course overwhelms many students with work—we provide hundreds of references, cross-referenced to the course topics so that students can come back to individual topics later when they need to apply one on the job.

Bug Advocacy and Test Design convey more practical knowledge—how to do the work of testing—while Foundations presents an overview.

We would like to know about your book on Domain Testing. Why did you decide to write a book dedicated to a single test technique?

The Test Design course introduces students to several techniques and gives them a brief experience with them. We try to give experiences that are realistically complex for a few techniques, including Domain Testing, but one course that surveys many techniques can only go into limited depth.

The Domain Testing Workbook goes into depth. This is not just an introduction to our field’s most commonly practised technique. Our goal is to help readers develop professional-level skill. The book’s 450 pages present an overview and the theory of the technique and then work through the analytical process in detail. What kinds of information do you use, and how do you use it, to create an excellent set of domain tests? We built the book around 30 worked examples that present different issues and challenges that someone who regularly uses this technique will have to face. For more on this book, see http://www.amazon.com/Domain-TestingWorkbook-Cem-Kaner/dp/0989811905.
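As a rough illustration of the analytical process the workbook teaches (this sketch is ours, not an example from the book): domain testing partitions a variable’s possible values into equivalence classes and then selects a best representative, usually a boundary value, from each class.

    # An illustrative domain analysis for a single integer variable:
    # a month field that must fall between 1 and 12.
    def is_valid_month(value: int) -> bool:
        # Hypothetical function under test.
        return 1 <= value <= 12

    # The classical domain-testing table: each equivalence class gets a
    # best representative, usually a value at or just past a boundary.
    partitions = [
        # (equivalence class,           representative, should it be accepted?)
        ("below the valid range",       0,              False),
        ("lower boundary of the range", 1,              True),
        ("upper boundary of the range", 12,             True),
        ("above the valid range",       13,             False),
    ]

    for description, representative, expected in partitions:
        outcome = "ok" if is_valid_month(representative) == expected else "FAIL"
        print(f"{outcome}: {description} (value {representative})")

The book’s 30 worked examples tackle situations far messier than this single, fully specified variable.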

Will this turn into another BBST course?

Yes. We wrote this as a textbook for a BBST course and created BBST-Domain Testing to present its material.

Your first three courses used videos and slides only, with a few readings but no textbook. Is it different working with a textbook?

The impact of the Domain Testing Workbook on that course was overwhelmingly positive. It inspired us to create workbooks for the other BBST courses. The Foundations workbook is published now (see http://www.amazon.com/dp/0989811921). The workbook for Bug Advocacy is almost done; we use the draft version in our Bug Advocacy course. The final version of that book and the Test Design Workbook will be published by the end of 2015.

These books significantly update the courses, providing a new set of assignments and a detailed commentary on the lectures. We’re now recreating the course videos to reflect the lessons we’ve learned teaching with them over the past ten years.

Do you expect to write new in-depth workbooks like Domain Testing?

I hope so. Becky and I have been writing sections of the Scenario Testing Workbook for years and I’ve been accumulating notes toward the Risk-Based Testing Workbook and the Specification-Based Testing Workbook.

Why are these in-depth presentations important?

These deeper books illustrate what I think our field most needs. We need more training on:

  • how to design great tests,
  • how to run them,
  • how to evaluate the results,
  • how to report the results, and
  • how to use computational skills to automate more aspects of these tests and to implement testing methods that are too hard to do manually.

I don’t think we need more surveys that talk about these topics. We need to create courses that teach the nuts and bolts of how to do these things, how to do them well, and how to do them efficiently.

This is the foundation that I think we need to make fundamental progress in the state of the practice in software testing. The feuds and the definitional wars are distractions—marketing tools for the businesses that promote them—that I think are increasingly irrelevant to the development of high-quality software. We need to improve our skills and our efficiency, not our politics.

That was Dr Cem Kaner on various topics around Software Testing. We hope you liked the series.

Disclaimer: This interview was originally published in past editions of Tea-time with Testers. The author’s opinions on the topic(s) may or may not have changed.
