Cassandra Leung and Pete Walen join us today in a discussion about Requirements. What are they? Do we get enough of them? Do we understand the ones we do get? Can we make them better, and if so, how can we help that process? If you’ve ever struggled with trying to make sense of a story, or fear that programmers are just implementing things for the sake of implementing them, and there’s no rhyme or reason, you may not be alone. It’s possible that you really are dealing with a severe case of Requirements Deficiency. Fortunately, we are here to help, or at the very least, give it a spirited try.
Also, in the news, the Universal Windows Platform and the Software Verification Competition. Yes, apparently, these are both “things”, and we pontificate on both of them.
For many testers, Selenium is a well known tool and a familiar friend. For others, it may be something you are curious about but haven’t had a chance to do much with yet. All of our panel has had some level of experience with Selenium, and Brian Van Stone visits us again to tell us what he’s been up to and how his Adventures with Selenium have informed his automation processes and overall testing.
Also on the Selenium and testing tools front, what is up with the VC community making big bets on software testing tools? Is it Silicon Valley business as usual, or is there something else going on here? We investigate, or pontificate, or at least we offer an opinion or four.
This week we are joined by Kim Knup, who is with Songkick and tells us a tale of intrigue and guile, and the behavior of concert attendees. Wait, what? OK, not quite that juicy, but she does work with Songkick, she does test and monitor performance, and it turns out that different audiences and different fans of different performers have distinctly different approaches to how they source and buy tickets through Songkick, and Kim shares some of those examples with us. Also, in our news segment, when Apple Support is down, do we care as much as when AWS is down? In other words, do we grade quality on a curve?
Sometimes, you can find experts on topics in unusual places. This week we discuss security and privacy with Doug Traser, an Information Security Manager with Five9. He’s also the guitar player for Michael’s band, Ensign Red (or is Michael Doug’s singer? We’re never entirely sure). Regardless, if you have questions about security, OWASP, policies that drive you crazy, or whether any of this makes any sense, Doug has some answers, and maybe raises a few more questions.
Also, in our news segment: what happens when an Amazon Echo might hold the key to a murder trial? Can your personal digital home assistant testify against you in a court of law?
Have you found yourself looking at deals and services online that seem too good to be true? Wondering “where’s the catch?” You’re not alone. There are lots of ways that software manipulates us into giving up details about ourselves and our habits, or into paying for services we never really wanted. These practices are grouped together under the phrase “Dark Patterns”, and Emma Keaveny has made it a point to learn about and warn about them. We discuss several varieties of Dark Patterns and debate where on the spectrum they fall, whether they are nuisances, poor design, or an outright breach of ethics.
Also, where were you when Amazon’s S3 services went down on February 28, 2017? Did it affect you? It affected some of us, and at the time we recorded this episode it was a very fresh memory, so we had plenty to say about it.
How well do we know the work that we do as testers? Do we understand what it is we do? Really understand it? Jon Bach thinks we can do better at figuring out what it is that we do in our roles as testers and in the roles that support and offer service to people in our organizations. Much of what we do is implicit, and carries responsibilities, expectations and even contracts for what we do and how we act. In today’s episode, Jon helps us break down both traditional and not so traditional roles that we may find ourselves in, and ways that we can leverage both explicit and implicit knowledge of what we do, and maybe what we can stop doing.
Also, in an unconventional news segment this go around, friend of the show Anna Royzman tells us about Test Masters Academy and a fresh take on testing conferences geared towards testing leaders (the Testing Leadership Conference, May 1-3, 2017 in New York City) and emerging topics and technologies (the New Testing Conference, coming this fall to New York City). It’s a wild ride!
Have you been to a testing conference? Wanted to go? Wondered which ones you should attend? Matt Heusser, Jessica Ingrassellino, and Michael Larsen have been to more than a few as participants and presenters. We discuss our favorites, the pros and cons of various conferences, and what makes each of them worthy destinations to consider.
Also, putting a different spin on the News Segment this time around, Michael shares his enthusiasm for and about “The Privacy Paradox”.
The truth is, no one will care as much or be as interested in your developing testing career as you are. There are studies that say that Software Testing is one of the Happiest Jobs there is. Does that sound weird, or does that sound spot on? In this episode of “The Testing Show” we welcome back Alex Schladebeck and welcome for the first time QualiTest’s Elle Gee to discuss software testing careers and how they differ or are similar depending on the organization in question. Regulars Perze Ababa and Justin Rohrman also riff along with Matt Heusser on the unique challenges in developing and sustaining a career in software testing.
Also, in our news segment, what happens when automation and a software glitch make it impossible to do a task we often take for granted? 900 Shell stations in Malaysia discovered exactly that, and we certainly have opinions about that, too.
This is Part Two of our discussion with Alex Schladebeck and Joel Montvelisky. We tackle regression testing and share a few stories from the trenches (and a ghostly Michael even makes a contribution to this topic), discuss the idea that perhaps continuous testing is a concept whose time has really come, and look to see if we can possibly break out of the “hardening” process at the end of sprints in favor of more testing up front, so that discoveries can actually be addressed sooner rather than pushed off to later.
It seems that 2017 is shaping up to be the year of the two-parter, as we are back with another two part episode. This is Part One, in which the Testing Show regulars chat with Alex Schladebeck and Joel Montvelisky about the way that testing is practiced globally. Joel has some insights on this in that he steers the State of Testing Questionnaire that runs in January and February each year, and gathers statistics about how testers actually work. We look at some issues that were discovered with the survey, such as how many organizations claim to do automation versus how many are actually making a solid go at it, as well as where those organizations choose to, or choose not to, apply their efforts. Also, in the news, what happens when TSA’s computers go out on one of the busiest travel days of the year (the day after New Year’s)? The Testing Show panel and their guests weigh in, and they have plenty to say, both on the outage and the process in general.
We continue our conversation with Angie Jones about ways that automation can be put first in stories (yes, really) and ways that she has been able to get team buy-in and cooperation to make that process effective. Also, we have a mailbag question that we answer in depth, or as much as we can… is it possible to be paid as much as a developer or an SDET if you are just a manual tester? The answer is “it depends”, but we go into a lot more about why that is the case.
Have you wondered how your team could better utilize its automation resources? Does your definition of “Done” include new automation efforts for stories that are in flight? How about when changes to functionality (or new additions) cause your old tests to stop working? Do we play continuous catch up, or is there a better way to apply automation efforts?
Angie Jones of LexisNexis joins us to talk about better ways to have those automation discussions, who should be responsible for what, and how everyone on the team can contribute to automation efforts (hint, you don’t need to be a coder to help make great automation, but it certainly helps).
Also, this week we delve into Spotify taking over hard drives with continuous writes that could shave years off their operational life, and ask whether Uber’s autonomous vehicles are even close to ready for prime time.
This is part one of a two part series. Come back in two weeks when we continue our conversation with Angie.
It’s a new year, and it’s that classic time for people to make New Year’s Resolutions, as well as quickly run out of steam trying to actually succeed at them. We discuss ways in which we have set goals, or not set them, how we have been stymied in the past, or how we have pushed on regardless of failing, and the fact that failing is often the key element that helps us progress and ultimately succeed.
Also, in the news, we look back on the fifth anniversary of the death of Christopher Hitchens, how a typo may well have been the root cause of “The Russian Hack” regarding the U.S. Presidential election, and will all exploratory testers be replaced in five years by AI, neural networks and machine learning? We have opinions on all of those.
As we come to the end of 2016, we consider ways that software testing can go beyond just testing products, and look at ways that we can use our super powers for good in the world. Abby Bangser works with ThoughtWorks, which makes this a core part of its mission with ThoughtWorks University, a combination of training and integration of all roles, including software testing, as participants learn how to work with the ThoughtWorks model. Additionally, they take on a variety of projects that focus on social and economic justice, bringing groups of people from all over the world to Pune, India to research and work on a specific problem to help those in the immediate area and beyond.
Also, are you getting a lot of Skype spam lately? You are definitely not alone. We talk about what’s causing it and how to fix it.
For centuries, the Liberal Arts education was the gold standard on which all educational endeavors were based. The idea of a “Renaissance Person”, someone with skills and abilities in a variety of fields, grew out of the classic Liberal Arts education. Some say that it’s a bygone piece of history, but many feel that it is a vital part of working and interacting with people, and in many ways, it’s perhaps the most vital of underpinnings for success as a software tester. We welcome back Jess Ingrassellino to talk with us about the value of a classic Liberal Arts education, how it can be applied effectively to a software testing career, and how those who are so focused on automation as the be-all and end-all might be missing a few things.
Also, we take a look at the growth of online testing conferences, the UK National Health Service email bomb and Visual Studio for Mac… wait, what?!!
Accessibility and Inclusive Design are two approaches to help make software available to the broadest possible group of people. Today, we welcome Alicia Jarvis, a Toronto-based Accessibility advocate, to discuss her own Accessibility advocacy, her experiences in software testing around accessibility, and how we need to look beyond a checklist for compliance, and think about Accessibility as a core part of our design approach from the beginning. Also: Samsung seems to be having issues with exploding batteries, both with the Note 7 and the Galaxy J5.
This week, we are joined by Jess Ingrassellino and Garry Heon to talk about where testing is going, and what the future of testing holds, going beyond Agile and dev-ops, or at least seeing where and how testers today will better be able to work and thrive in this brave new world. In the process of talking about that, we joked about the idea of a “full stack tester” to go with the increasing demand of “full stack developers”. Is there such a thing as a full stack tester? We weren’t sure, but if there was, we figured we could relate to the idea, and we’d be interested in seeing that idea develop, so to speak.
Also, we discuss the ramifications of what happens when your Web Cam takes part in one of the biggest DDoS attacks ever (hint, it’s easier and more likely than you might believe).
Do you have to be a career tester to perform the testing role? If you are facilitating, coaching or leading others, are you testing? Does it really matter who does the role, as long as somebody does it? The Testing Show is back in the studio and chatting with Qualitest’s Yaron Kottler on exactly these weighty questions. Needless to say, we discussed these ideas of “what makes a tester a tester” and then some. Also, the panel shares their frustrations with the recent Apple updates of iOS 10.1 and macOS Sierra.
What happens when software development takes a cue from disciplines like law enforcement, counter-intelligence and military operations? What do we do when we need to look at complex systems to find clues about issues that we didn’t even know existed, but that the data shows plainly? How can we harness the gut feelings of testers in a more scientific manner, and “make sense by sense making”? Confused? Dave Snowden wants to help with that.
Dave Snowden is the creator of the Cynefin Framework, and it has been used with a broad array of applications, including government, immigration, counter-intelligence and software development. Cynefin is making inroads into the world of software testing, and Anna Royzman is possibly the person in the testing community most familiar with the Cynefin Framework. We are happy to have a conversation about Cynefin with both Dave and Anna, and its implications for software testing.
[Note: Due to challenges with Trans-Atlantic communications, the audio breaks up in various places. We have done our best to work around this, but there are places where audio will be spotty.]
This time around, The Testing Show is coming to you “Live” from the Conference for the Association for Software Testing, held in Vancouver, B.C., Canada. Matthew Heusser, Justin Rohrman, and Perze Ababa met up with Michael Bolton to discuss “Testopsies”, a focused examination and task analysis, and applying it to the opportunities for learning and refocusing of efforts often bundled under the label of “testing”.
Noah Sussman knows a bit about bringing Dev-Ops to fruition at a variety of organizations. To that end, we asked him to come join us and tell us a bit about what Dev-Ops is, and what it isn’t. We also discuss the ways that testers can develop their skills and become more technical (hint: it’s not as difficult as many may think, but it will require work and interaction with others to really be successful at it). Also, how does Bing Maps manage to put Melbourne, Australia in the Ocean somewhere near Japan?
It’s another “On the Road” episode of The Testing Show, with Matt Heusser attending Agile2016 in Atlanta, Georgia. While there, he gathered an impromptu forum to discuss the way we work and what we often have to do so we can get to the work we actually want to be doing. Emma Armstrong, Dan Ashby, Claire Moss and Tim Ottinger join in to dissect the continuum that is “Real Work vs. Bureaucratic Silliness”.
If you were going to be at liberty to drop into any software testing job you wanted, anywhere, in any software related industry of your choosing, what would be part of your “jump kit”? The Testing Show sits down with Curtis Pettit (of Huge) and asks exactly that. We geek out on favorite tools, and quickly discover that we all have some perennial favorites, but we discuss some lesser known exotics as well, really just scratching the surface of possible tools.
Also, have we reached a point where, when systems go down for airlines and credit card companies, we are helpless to go back and do business as we used to, at least if we are in so-called “technically advanced” areas? Perhaps the rugged backcountry may have a thing or two still to teach us all.
This week, we are joined by Dan Billing, a software security penetration test specialist at New Voice Media in the U.K. Dan describes his path from everyday software tester to security expert, and the variety of approaches and methods, as well as tools, that come into play if you want to take a crack at being a software tester with a specialty in security testing.
Also, can thirty years of fMRI results really be invalid, and what are you doing to take on the “30 Days of Software Testing” challenge?
Today, we are joined by Alan Page of Microsoft to discuss the Unified Engineering model, what that means, how it is working at Microsoft, and how that might affect software testing and software testers going forward. Also, what happens when the Air Force has a database crash and can’t recover their data, and is the Testing community really anti-automation?