How well do we know the work that we do as testers? Do we understand what it is we do? Really understand it? Jon Bach thinks we can do better at figuring out what it is that we do in our roles as testers and in the roles where we support and offer service to people in our organizations. Much of what we do is implicit, and carries responsibilities, expectations, and even contracts for what we do and how we act. In today’s episode, Jon helps us break down both traditional and not-so-traditional roles that we may find ourselves in, and ways that we can leverage both explicit and implicit knowledge of what we do, and maybe what we can stop doing.
Also, in an unconventional news segment this go around, friend of the show Anna Royzman tells us about Test Masters Academy and a fresh take on testing conferences geared towards testing leaders (the Testing Leadership Conference, May 1-3, 2017, in New York City) and emerging topics and technologies (the New Testing Conference, coming this fall to New York City). It’s a wild ride!
Have you been to a testing conference? Wanted to go? Wondered which ones you should attend? Matt Heusser, Jessica Ingrassellino, and Michael Larsen have been to more than a few as participants and presenters. We discuss our favorites, the pros and cons of various conferences, and what makes each of them worthy destinations to consider.
Also, putting a different spin on the News Segment this time around, Michael shares his enthusiasm for and about “The Privacy Paradox”.
The truth is, no one will care as much or be as interested in your developing testing career as you are. There are studies that say that Software Testing is one of the Happiest Jobs there is. Does that sound weird, or does that sound spot on? In this episode of “The Testing Show” we welcome back Alex Schladebeck and welcome for the first time QualiTest’s Elle Gee to discuss software testing careers and how they differ or are similar depending on the organization in question. Regulars Perze Ababa and Justin Rohrman also riff along with Matt Heusser on the unique challenges in developing and sustaining a career in software testing.
Also, in our news segment, what happens when automation and a software glitch make it impossible to do a task we often take for granted? 900 Shell stations in Malaysia discovered exactly that, and we certainly have opinions about that, too.
This is Part Two of our discussion with Alex Schladebeck and Joel Montvelisky. We tackle regression testing and share a few stories from the trenches (and a ghostly Michael even makes a contribution to this topic), discuss the idea that perhaps continuous testing is a concept whose time really has come, and look to see if we can possibly break out of the “hardening” process at the end of sprints in favor of more testing up front, so that discoveries can actually be addressed sooner rather than pushed off to later.
It seems that 2017 is shaping up to be the year of the two-parter, as we are back with another two-part episode. This is Part One, in which the Testing Show regulars chat with Alex Schladebeck and Joel Montvelisky about the way that testing is practiced globally. Joel has some insights on this in that he steers the State of Testing Questionnaire that runs in January and February each year, and gathers statistics about how testers actually work. We look at some issues that were discovered with the survey, such as how many organizations claim to do automation versus how many are actually making a solid go at it, as well as where those organizations choose to, or choose not to, apply their efforts. Also, in the news, what happens when TSA’s computers go out on one of the busiest travel days of the year (the day after New Year’s)? The Testing Show panel and their guest weigh in, and they have plenty to say, both on the outage and the process in general.
We continue our conversation with Angie Jones about ways that automation can be put first in stories (yes, really) and ways that she has been able to get team buy-in and cooperation to make that process effective. Also, we have a mailbag question that we answer in depth, or as much as we can… is it possible to be paid as much as a developer or an SDET if you are just a manual tester? The answer is “it depends”, but we go into a lot more about why that is the case.
Have you wondered how your team could better utilize its automation resources? Does your definition of “Done” include new automation efforts for stories that are in flight? How about when changes to functionality (or new additions) cause your old tests to stop working? Do we play continuous catch-up, or is there a better way to apply automation efforts?
Angie Jones of LexisNexis joins us to talk about better ways to have those automation discussions, who should be responsible for what, and how everyone on the team can contribute to automation efforts (hint: you don’t need to be a coder to help make great automation, but it certainly helps).
Also, this week we delve into Spotify taking over hard drives with continuous writes that could shave years off of their operational life, and ask: are Uber’s autonomous vehicles even close to ready for prime time?
This is part one of a two-part series. Come back in two weeks when we continue our conversation with Angie.
It’s a new year, and it’s that classic time for people to make New Year’s Resolutions, as well as quickly run out of steam trying to actually succeed at them. We discuss ways in which we have set goals, or not set them, how we have been stymied in the past, or how we have pushed on regardless of failing, and the fact that failing is often the key element that helps us progress and ultimately succeed.
Also, in the news, we look back on the fifth anniversary of the death of Christopher Hitchens, how a typo may well have been the root cause of “The Russian Hack” regarding the U.S. Presidential election, and will all exploratory testers be replaced in five years by AI, neural networks and machine learning? We have opinions on all of those.
As we come to the end of 2016, we consider ways that software testing can go beyond just testing products, and look at ways that we can use our super powers for good in the world. Abby Bangser works with ThoughtWorks, and ThoughtWorks makes this a core part of their mission with ThoughtWorks University, a combination of training and integration of all roles, including software testing, as participants learn how to work with the ThoughtWorks model. Additionally, they take on a variety of projects that focus on social and economic justice, bringing groups of people from all over the world to Pune, India to research and work on a specific problem to help those in the immediate area and beyond.
Also, are you getting a lot of Skype spam lately? You are definitely not alone. We talk about what’s causing it and how to fix it.
For centuries, the Liberal Arts education was the gold standard on which all educational endeavors were based. The idea of a “Renaissance Person”, someone with skills and abilities in a variety of fields, grew out of the classic Liberal Arts education. Some say that it’s a bygone piece of history, but many feel that it is a vital part of working and interacting with people, and in many ways, it’s perhaps the most vital underpinning for success as a software tester. We welcome back Jess Ingrassellino to talk with us about the value of a classic Liberal Arts education, how it can be applied effectively to a software testing career, and how those who are so focused on automation as the be-all and end-all might be missing a few things.
Also, we take a look at the growth of online testing conferences, the UK National Health Service email bomb, and Visual Studio for Mac… wait, what?!!
Accessibility and Inclusive Design are two approaches to help make software available to the broadest possible group of people. Today, we welcome Alicia Jarvis, a Toronto-based Accessibility advocate, to discuss her advocacy work, her experiences in software testing around accessibility, and how we need to look beyond a checklist for compliance and think about Accessibility as a core part of our design approach from the beginning. Also: Samsung seems to be having issues with exploding batteries, both with the Note 7 and the Galaxy J5.
This week, we are joined by Jess Ingrassellino and Garry Heon to talk about where testing is going, and what the future of testing holds, going beyond Agile and dev-ops, or at least seeing where and how testers today will better be able to work and thrive in this brave new world. In the process of talking about that, we joked about the idea of a “full stack tester” to go with the increasing demand for “full stack developers”. Is there such a thing as a full stack tester? We weren’t sure, but if there were, we figured we could relate to the idea, and we’d be interested in seeing that idea develop, so to speak.
Also, we discuss the ramifications of what happens when your Web Cam takes part in one of the biggest DDoS attacks ever (hint, it’s easier and more likely than you might believe).
Do you have to be a career tester to perform the testing role? If you are facilitating, coaching or leading others, are you testing? Does it really matter who does the role, as long as somebody does it? The Testing Show is back in the studio and chatting with QualiTest’s Yaron Kottler on exactly these weighty questions. Needless to say, we discussed these ideas of “what makes a tester a tester” and then some. Also, the panel shares their frustrations with the recent Apple updates of iOS 10.1 and macOS Sierra.
What happens when software development takes a cue from disciplines like law enforcement, counter intelligence and military operations? What do we do when we need to look at complex systems to find clues about issues that we didn’t even know existed, but the data shows it plainly? How can we harness the gut feelings of testers in a more scientific manner, and “make sense by sense making”? Confused? Dave Snowden wants to help with that.
Dave Snowden is the creator of the Cynefin Framework, which has been used in a broad array of applications, including government, immigration, counter-intelligence and software development. Cynefin is making inroads into the world of software testing, and Anna Royzman is possibly the person in the testing community most familiar with the Cynefin Framework. We are happy to have a conversation about Cynefin with both Dave and Anna, and its implications for software testing.
[Note: Due to challenges with Trans-Atlantic communications, the audio breaks up in various places. We have done our best to work around this, but there are places where audio will be spotty.]
This time around, The Testing Show is coming to you “Live” from the Conference of the Association for Software Testing, held in Vancouver, B.C., Canada. Matthew Heusser, Justin Rohrman, and Perze Ababa met up with Michael Bolton to discuss “Testopsies”, a focused examination and task analysis, and applying it to the opportunities for learning and refocusing of efforts often bundled under the label of “testing”.
Noah Sussman knows a bit about bringing Dev-Ops to fruition at a variety of organizations. To that end, we asked him to come join us and tell us a bit about what Dev-Ops is, and what it isn’t. We also discuss the ways that testers can develop their skills and become more technical (hint: it’s not as difficult as many may think, but it will require work and interaction with others to really be successful at it). Also, how does Bing Maps manage to put Melbourne, Australia in the Ocean somewhere near Japan?
It’s another “On the Road” episode of The Testing Show, with Matt Heusser attending Agile2016 in Atlanta, Georgia. While there, he gathered an impromptu forum to discuss the way we work and what we often have to do so we can get to the work we actually want to be doing. Emma Armstrong, Dan Ashby, Claire Moss and Tim Ottinger join in to dissect the continuum that is “Real Work vs. Bureaucratic Silliness”.
If you were going to be at liberty to drop into any software testing job you wanted, anywhere, in any software related industry of your choosing, what would be part of your “jump kit”? The Testing Show sits down with Curtis Pettit (of Huge) and asks exactly that. We geek out on favorite tools, and quickly discover that we all have some perennial favorites, but we discuss some lesser known exotics as well, really just scratching the surface of possible tools.
Also, have we reached a point where, when systems go down for airlines and credit card companies, we are helpless to go back and do business as we used to do, at least if we are in so-called “technically advanced” areas? Perhaps the rugged backcountry may have a thing or two still to teach us all.
This week, we are joined by Dan Billing, a software security penetration test specialist at New Voice Media in the U.K. Dan describes his path from everyday software tester to security expert, and the variety of approaches and methods, as well as tools, that come into play if you want to take a crack at being a software tester with a specialty in security testing.
Also, can thirty years of fMRI results really be invalid, and what are you doing to take on the “30 Days of Software Testing” challenge?
Today, we are joined by Alan Page of Microsoft to discuss the Unified Engineering model, what that means, how it is working at Microsoft, and how that might affect software testing and software testers going forward. Also, what happens when the Air Force has a database crash and can’t recover their data, and is the Testing community really anti-automation?
What does it take to differentiate yourself as a tester? How can you demonstrate the unique values and attributes you can bring to the role of tester? How can you push back against the race to the bottom where “everyone can do the job”? Is that really true? These questions and more we posed to Andy Tinkham, and we shared ideas as to how we can bring much more to the table than we often think we can.
Also, can software be specified in such a way that it can actually be made error free? Justin had a chance to look at that very idea at the DeepSpec Workshop at Princeton University, and he shared his findings with us.
Recorded live from Orcas Island, Washington, Matt, Justin and Perze attended the Reinventing Testers training run by James and Jon Bach. James sat down and talked with The Testing Show about reinventing testing skills, developing them, and the importance of words.
We all know that what we measure is something we can improve, right? We can measure anything and everything, and way too often, organizations attempt to do exactly that. The net result is that we measure things that are not important in an attempt to be informed about things that absolutely are. Mike Lyles joins us for a spirited talk about measurement and metrics. We can’t escape metrics completely, but we can be a lot smarter about the metrics we do use.
Also, how would you feel if your software update destroyed the product you were working on? What if it was a multi-million dollar satellite? Yep, that happened, and The Testing Show panel gets into it!
From the idea of automated trucking to the notion that testing will all be automated “at some point in time”, we thought it would make sense to bring in someone who has been part of this challenge for many years. Paul Grizzaffi joins us to give us his take on the promise of automation, the realities of tooling that go into those processes, and what the future might hold for the testing role as well as the possibility of “automated everything”.
As a follow-on to the Making Testing Strategic discussion that happened at QA or the Highway, Jared Small joins us to talk about ways that software testing can add value to the software development process, and ways that we can extend the strategy conversation and help make sure that we can both be helpful and make an impact on the organization.
Additionally, we talk about the idea that Scrum can get us 250% better quality (by some definition) and the persuasion of Trump, though we promise, this is not a political show.