As we close out the year that was 2017, we welcome Gwen Dobson to The Testing Show to talk about Hiring Testers and Test Managers, the challenges and changes that are happening in the market, the issues and frustrations many of us face as we look for new opportunities, and all that that entails. Matt Heusser, Jessica Ingrassellino and Michael Larsen also join in with their own takes on entering the testing world through the side door, having conversations about money and skill, and not selling short the very real experiences and opportunities gained in other roles, even when they don’t necessarily fit on a bullet list of testing skills.
Also, in our news segment, in several states and metropolitan areas, it is now illegal for employers to ask about your salary history. How do you approach a salary discussion if that’s the case?
This week we are joined by Katrina Clokie, author of “A Practical Guide to Testing in DevOps” to talk about the growth of DevOps, which organizations are actually doing it (and how well/completely they are), and the strange three-way handshake that happens with Development, Operations and Testing. If you have been curious about the ins and outs of testing in DevOps, we have you covered this week.
It’s a common expression to hear people say that we should “get involved in the broader testing community” but what does that actually mean? In today’s episode, Jessica Ingrassellino, Matthew Heusser and Michael Larsen get into the specifics of that topic with Melissa Tondi, president of Software Quality Association of Denver (SQuAD). As all of the above are veterans of either participating in or hosting community meetups, we talk about how to make sure that you are meeting the expectations of your members, ways to keep them engaged and to help grow those community ties.
Also, in Software Testing news, have you taken steps to protect yourself from the KRACK attacks? If not, you may want to take a look at your WPA2 devices and remedy that.
We conclude our two-part series by talking about Code Review as a Service, who might need such a thing, what it promises, and how it corresponds with what organizations are actually doing. As was the case last week, we share opinions and talk about the fact that marketing often drives perception, but the devil is always in the details; the details may not be as compelling or flashy, but they are relevant and more often than not tell a fuller story of what is going on.
For this set of two shows, we decided to do a forum of just us regulars, and we were going to look at a couple of news stories. Those stories turned out to be the bulk of an extended conversation. We realized that the theme in all of them was the gap between the claims companies make and what actually happens in workplaces and organizations.
Needless to say, this is an opinionated set of shows this go around as we discuss the promise of Machine Learning and what it actually delivers. We look at it in the light of other promises made over the past fifty years and how, often, it’s not the best idea that wins the day, but the first idea to gain traction that does.
We are back with Peter Varhol for Part 2 of our discussion on Machine Learning and AI. In this episode, we pick up with Peter and discuss the pitfalls of machine learning algorithms, where they can help us and where they often fail us. Also, what is the role of the tester and testing in machine learning and AI? How intimately involved with the problem domain do testers need to be? How can they learn to understand the analytical parts and what they need to accomplish within that problem domain?
QualiTest is celebrating its 20th Anniversary this month. To celebrate, founder and global CEO Ayal Zylberman joined Matt Heusser, Jessica Ingrassellino, Perze Ababa, Justin Rohrman and Michael Larsen to talk about the ups and downs, and the growth and changes that QualiTest has been through. Ayal discusses how the climate has changed, the ways in which testing at QualiTest has changed, and what he sees as interesting opportunities for the future. Also, Ayal weighs in with us on the Equifax breach and what it might mean for the quality reputation of Equifax and other companies in the future.
This is the first of a two-parter with Peter Varhol on both the promises and the hype surrounding AI and Machine Learning. Matt, Perze and Michael go down the rabbit hole on the Machine Learning topic with Peter as we try to wrap our heads around the realities of Machine Learning and AI and the unique testing challenges such systems present, from Facebook’s chatbots negotiating an agreement to systems making predictive suggestions in ways that are both intriguing and creepy. There is a lot to the machine learning puzzle that we are just starting to understand and also prepare ourselves to effectively test. Hint: the algorithms themselves are only part of the puzzle.
In an age when computing and communications are happening on anything and everything imaginable, Matt and Michael reached out to Paul Grizaffi of Magenic to discuss the proliferation of the Internet of Things, as well as the convergence of so many devices that used to be discrete components requiring space, attention and connections that had to be closely monitored. With the Internet of Things and, more to the point, the spread of Ubiquitous Computing that goes even beyond the Internet of Things, we chatted and geeked out on some developments in tech that have changed our everyday realities (we went on a tangent on how these technologies are filtering into musical instruments).
We conclude our conversation with Adam Goucher with some discussion about the goals organizations hope to achieve and the fact that many of the problems encountered with CD are people problems as opposed to toolchain problems. We talk about using CD approaches even when your organization has decided not to push several times a day. Additionally, what can you do when there is a long-term goal on the table, but neither the goal itself nor the path to reach it is clearly defined? Adam’s been there and he has plenty to say about it.
Continuous Integration, Continuous Deployment, Continuous Delivery, and a host of other continuous options abound out there. Do you know the difference? Would you like to? We asked Adam Goucher to come out and discuss with us the variations on the theme between these three distinct disciplines, what they mean, how they are implemented, and where testing and testers fit into the processes. This is the first part of a two-part interview. We will conclude this interview in our next episode.
Also, in the news segment, do you trust the idea of an open source autonomous automobile? Also, what happened when stock prices for many big technology companies all read the same stock price at the exact same time? If you are guessing a bug in the system, you’d be right, and we definitely have a few things to say about it.
Brick and Mortar stores have a lot of software in their operations. From supply chain to ordering, point of sale, inventory management, customer procurement and customer satisfaction, that’s a lot of moving pieces to keep track of. It’s a challenging endeavor to keep it all together, and QualiTest’s Mike Hershkovitz joins Matt and Jess to talk about the ins and outs of testing for the retail space.
Also, in a separately recorded segment, the Amazon purchase of Whole Foods and what it might mean is discussed by Matt, Perze, Justin and Michael. Is it really an additional 400-plus Amazon distribution centers, or is there something else going on here?
You know the feeling. Someone is breathing down your neck, saying that we have to get the release out on this date at this time or else… well, it won’t be pretty, let’s just leave it at that! Sound at all familiar? Yeah, we feel your pain, and we talk about it quite a bit. Deadlines are a reality. Sometimes they are essential and necessary. Often they are nebulous and ill-defined. Regardless, testers deal with them, and the Testing Show panel shares a few of our experiences and how we managed, or didn’t manage, those expectations.
Also, eClinicalWorks got to see firsthand that untested, buggy and underperforming software can cost more than lost sales. In this case, it got them into $155 million worth of legal trouble.
Cassandra Leung and Pete Walen join us today in a discussion about Requirements. What are they? Do we get enough of them? Do we understand the ones we do get? Can we make them better, and if so, how can we help that process? If you’ve ever struggled with trying to make sense of a story, or fear that programmers are just implementing things for the sake of implementing them, and there’s no rhyme or reason, you may not be alone. It’s possible that you really are dealing with a severe case of Requirements Deficiency. Fortunately, we are here to help, or at the very least, give it a spirited try.
Also, in the news, Unified Windows Platform and Software Verification Competition. Yes, apparently, these are both “things”, and we pontificate on both of them.
For many testers, Selenium is a well known tool and a familiar friend. For others, it may be something you are curious about but haven’t had a chance to do much with yet. All of our panel has had some level of experience with Selenium, and Brian Van Stone visits us again to tell us what he’s been up to and how his Adventures with Selenium have informed his automation processes and overall testing.
Also on the Selenium and testing tools front, what is up with the VC community making big bets on software testing tools? Is it Silicon Valley business as usual, or is there something else going on here? We investigate, or pontificate, or at least we offer an opinion or four.
This week we are joined by Kim Knup, who is with Songkick and tells us a tale of intrigue and guile, and the behavior of concert attendees. Wait, what? OK, not quite that juicy, but she does work with Songkick, she does test and monitor performance, and it turns out that different audiences and different fans of different performers have distinctly different approaches to how they source and buy tickets through Songkick, and Kim shares some of those examples with us. Also, in our news segment, when Apple Support is down, do we care as much as when AWS is down? In other words, do we grade quality on a curve?
Sometimes, you can find experts on topics in unusual places. This week we discuss security and privacy with Doug Traser, an Information Security Manager with Five9. He’s also the guitar player for Michael’s band, Ensign Red (or is Michael Doug’s singer? We’re never entirely sure). Regardless, if you have questions about security, OWASP, policies that drive you crazy, or whether any of this makes any sense, Doug has some answers, and maybe raises a few more questions.
Also, in our news segment: what happens when an Amazon Echo might hold the key to a murder trial? Can your personal digital home assistant testify against you in a court of law?
Have you found yourself looking at deals and services online that seem too good to be true? Wondering “where’s the catch?” You’re not alone. There are lots of ways that software manipulates us into giving up details about ourselves, paying for services we didn’t really want, or sharing information about ourselves and our habits that we’d rather keep private. These practices are grouped together under the phrase “Dark Patterns”, and Emma Keaveny has made it a point to learn about them and warn others. We discuss several varieties of Dark Patterns and debate where on the spectrum they fall, whether they be nuisances, poor design or an outright breach of ethics.
Also, where were you when Amazon’s S3 services went down on February 28, 2017? Did it affect you? It affected some of us, and at the time we recorded this episode it was a very fresh memory, so we had plenty to say about it.
How well do we know the work that we do as testers? Do we understand what it is we do? Really understand it? Jon Bach thinks we can do better at figuring out what it is that we do in our roles as testers and in the roles that support and offer service to people in our organizations. Much of what we do is implicit, and carries responsibilities, expectations and even contracts for what we do and how we act. In today’s episode, Jon helps us break down both traditional and not so traditional roles that we may find ourselves in, and ways that we can leverage both explicit and implicit knowledge of what we do, and maybe what we can stop doing.
Also, in an unconventional news segment this go around, friend of the show Anna Royzman tells us about Test Masters Academy and a fresh take on testing conferences, one geared towards testing leaders (the Testing Leadership Conference is May 1-3, 2017 in New York City) and one towards emerging topics and technologies (the New Testing Conference is coming this fall, also to New York City). It’s a wild ride!
Have you been to a testing conference? Wanted to go? Wondered which ones you should attend? Matt Heusser, Jessica Ingrassellino, and Michael Larsen have been to more than a few as participants and presenters. We discuss our favorites, the pros and cons of various conferences, and what makes each of them worthy destinations to consider.
Also, putting a different spin on the News Segment this time around, Michael shares his enthusiasm for and about “The Privacy Paradox”.
The truth is, no one will care as much or be as interested in your developing testing career as you are. There are studies that say that Software Testing is one of the Happiest Jobs there is. Does that sound weird, or does that sound spot on? In this episode of “The Testing Show” we welcome back Alex Schladebeck and welcome for the first time QualiTest’s Elle Gee to discuss software testing careers and how they differ or are similar depending on the organization in question. Regulars Perze Ababa and Justin Rohrman also riff along with Matt Heusser on the unique challenges in developing and sustaining a career in software testing.
Also, in our news segment, what happens when automation and a software glitch make it impossible to do a task we often take for granted? 900 Shell stations in Malaysia discovered exactly that, and we certainly have opinions about that, too.
This is Part Two of our discussion with Alex Schladebeck and Joel Montvelisky. We tackle regression testing and share a few stories from the trenches (and a ghostly Michael even makes a contribution to this topic), discuss the idea that perhaps continuous testing is a concept whose time really has come, and look to see if we can possibly break out of the “hardening” process at the end of sprints in favor of more testing up front, so that discoveries can actually be addressed sooner rather than pushed off to later.
It seems that 2017 is shaping up to be the year of the two-parter, as we are back with another two-part episode. This is Part One, in which the Testing Show regulars chat with Alex Schladebeck and Joel Montvelisky about the way that testing is practiced globally. Joel has some insights on this: he steers the State of Testing Questionnaire that runs in January and February each year and gathers statistics about how testers actually work. We look at some issues that surfaced through the survey, such as how many organizations claim to do automation versus how many are actually making a solid go of it, as well as where those organizations choose to, or choose not to, apply their efforts. Also, in the news, what happens when TSA’s computers go out on one of the busiest travel days of the year (the day after New Year’s)? The Testing Show panel and their guest weigh in, and they have plenty to say, both on the outage and the process in general.
We continue our conversation with Angie Jones about ways that automation can be put first in stories (yes, really) and ways that she has been able to get team buy-in and cooperation to make that process effective. Also, we have a mailbag question that we answer in depth, or as much as we can… is it possible to be paid as much as a developer or an SDET if you are just a manual tester? The answer is “it depends”, but we go into a lot more about why that is the case.
Have you wondered how your team could better utilize its automation resources? Does your definition of “Done” include new automation efforts for stories that are in flight? How about when changes to functionality (or new additions) cause your old tests to stop working? Do we play continuous catch-up, or is there a better way to apply automation efforts?
Angie Jones of LexisNexis joins us to talk about better ways to have those automation discussions, who should be responsible for what, and how everyone on the team can contribute to automation efforts (hint: you don’t need to be a coder to help make great automation, but it certainly helps).
Also, this week we delve into Spotify taking over hard drives with continuous writes that could shave years off of their operation life, and are Uber’s autonomous vehicles even close to ready for prime time?
This is part one of a two part series. Come back in two weeks when we continue our conversation with Angie.