Last month I was at TestBash Manchester, which was both my second TestBash and my second of the year. It is strange to think that Brighton was only in March this year, yet it feels so much longer ago (in a good way).
I arrived in Manchester on Wednesday so I could spend some time sightseeing (I'd never had the chance before), and it also meant I wouldn’t have to catch an early train on the Thursday morning and be yawning all day.
One additional benefit of going on the Wednesday was that I could go to the Meetup that night, which was held in the city centre by RentalCars. To get there, I met up with Claire Reckless, Gem Hill, Dan Billing and Neil Studd, and we made the long walk from the conference venue. It gave me a chance to talk with them all and get to know them better, including learning that Gem has never seen The Goonies!
When we got to the Meetup, I initially started off sitting by myself to the side. I don’t find it easy in large social situations where I don’t know many people. I did get to have a brief chat with Beren Van Daele and Vera Gehlen-Baum, who I first met at Brighton when trialling TestSphere for Beren.
Once people had a chance to talk and drink, we had a presentation by Matt Thomas on Coaching vs Teaching, which I found very thought-provoking. He pointed out how in a traditional teaching role the teacher is more experienced and often senior, whereas a coach may not be more experienced but simply offers a different perspective. He gave the example of footballers and F1 drivers, who can be the best in their respective fields and yet still have coaches they listen to and take advice from. It made me think how I could try to coach people I work with, even though they are peers I am not in charge of.
After the presentation came a practical workshop on Riskstorming, the award-winning workshop from Beren and Marcel Gehlen on a new way to use the TestSphere cards. Further information will be formally published on the MoT site and/or The Club, but in summary you identify the six key areas of your product you are most concerned about (for this we imagined we were making an app for Uber), then all the risks that could affect those areas of functionality, and then the tests you could run against those risks. To top it off, this was all done against the clock in timed stages. I really enjoyed this way of using the TestSphere cards, and will look at trying out Riskstorming where I work.
Here are some photos of the board and how we ended up combining it with the TestSphere cards:
Thursday was the workshop day, and my first time attending workshops, as I didn’t do any at Brighton. I opted for the all-day workshop “Exploratory Testing 101” by Dan Ashby. I’ve been interested in Exploratory Testing (ET) for some time, buying books and reading blog posts on it, but I went for the workshop because I wanted a guided experience of learning it, a chance to discuss it with others who also wanted to learn, and ways of teaching it to others when I returned to the office.
Entering the room, everyone was split between three tables, each with a mentor assigned to help them over the course of the day: Claire Reckless, Neil Studd and Beren. Being the last to arrive, I took the last remaining seat, but as it was near the front by the note board I was happy with where I ended up.
For my table, Claire was my mentor, so having a familiar face was good. One other person I had spoken with before was Heather, through her work on the MoT Slack channel. I didn’t know anyone else, but we went around the table talking about how long we had been in testing, whether we already did any ET, and what we wanted from the day. The range of testing experience was vast, from over ten years to less than six months. Also, aside from me and one other, everyone had been on the Rapid Software Testing (RST) course at some point, including the previous three days for some people!
Here is a group photo of my table 🙂
Our first activity was to follow a script given to us around our conference venue, The Lowry, and see if we could complete all of our expected results. Needless to say, the expected results and actual results differed. Then, to reinforce the downsides of following a step-by-step script, we were asked questions when we got back, such as the speed limit of the road outside, which could be seen from the windows. Guesses were made, but as we weren’t making an effort to look for these details, only following the script, no-one could answer with confidence (or correctly).
Below you can see our script on the left, and the notes I made as we tried to complete it on the right.
We followed this by being asked to explore the floor of the building we were on and see what we could find. When we came back, each group had something unique to add, with some staying together and others splintering off. There were observations about security cameras, accessibility, future events and nearby buildings, all things we weren’t thinking about when we followed the script. As part of the discussion, some people admitted they liked the freedom it gave, while others found there were too many options and so they didn’t achieve much, as they needed direction.
Dan went on to talk about what testing is, and how historically we have focused on covering explicit information; scripted testing doesn’t help us with tacit and implicit information, nor with unknown information. ET allows us to turn tacit information into explicit information, and to uncover unknowns, which we can then turn into formal scripts. He also mentioned how automated testing supports ET, as it lets us get quickly through processes or screens we don’t need to explore, until we reach the area we want to use ET on.
Building on this, Dan went into detail on what Risk Based Testing (RBT) is, which echoed the Riskstorming activity mentioned earlier. We were encouraged to identify what risks could exist in our software, then test in descending order of severity, so that if we couldn’t complete all of the testing we wanted to, we would at least know we had covered the most severe risks.
As a part of this, Dan talked about how in testing you have three elements:

1. The risk
2. The type of testing
3. The approach
If you focus on the type of testing you want to do (Functional, Security, Performance, etc.) then you will hit a limit of no more than 30 types of testing. By focusing on the risk instead, you will come up with many more ideas, and won’t have the type of testing constrain what you want to do. Finally, the approach is how you will do this testing, which can feed into your type.
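The risk-first ordering described above can be sketched as a small script: rank your identified risks by severity and work through them highest first. The risk names and severity scores below are my own invented examples, not ones from the workshop:

```python
# A minimal sketch of risk-based test ordering: rank risks by severity
# and test in descending order, so that if testing is cut short the
# most severe risks have already been covered.
# The risks and severity scores below are invented examples.

risks = [
    {"risk": "search returns irrelevant results", "severity": 3},
    {"risk": "user data leaked over plain HTTP", "severity": 5},
    {"risk": "page loads slowly on mobile", "severity": 2},
    {"risk": "payment double-charges the customer", "severity": 5},
]

# Highest severity first; ties keep their original order (sorted is stable).
test_order = sorted(risks, key=lambda r: r["severity"], reverse=True)

for r in test_order:
    print(f"severity {r['severity']}: test against '{r['risk']}'")
```

The point is only the ordering: however the severity scores are arrived at, the testing stops being "what types of test can I do?" and becomes "which risk do I most need to cover next?".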
With an understanding of RBT, Dan gave us our next exercise, where we were told to navigate to the Bing website and identify risks with it. Dan explained that he chose Bing as it is an established website but not one people typically use, so we would have less of an autopilot compared to sites such as Google. To assist us in working out potential risks, each group was given a TestSphere deck and encouraged to look at the Risk cards for ideas.
With our numerous risks identified, Dan then taught us how to create a charter, so instead of looking at anything and everything we had a focus but without the limitations of a classic test case.
A charter is also composed of three parts:
1. Explore [TARGET] (e.g. A chair)
2. With [RESOURCES] (e.g. Specific people)
3. To discover [RISKS] (e.g. Durability)
Multiple charters can have the same target, but using different resources and discovering different risks.
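The three-part template above could be captured as a simple data structure. Here is a minimal sketch in Python, using the chair example from the list plus a second, invented charter to show the same target explored with different resources and risks:

```python
from dataclasses import dataclass

# A minimal sketch of the Explore / With / To discover charter template.
@dataclass
class Charter:
    target: str      # Explore [TARGET]
    resources: str   # With [RESOURCES]
    risks: str       # To discover [RISKS]

    def __str__(self) -> str:
        return f"Explore {self.target} with {self.resources} to discover {self.risks}"

# Same target, different resources and risks (the second charter is invented).
charters = [
    Charter("a chair", "specific people", "durability"),
    Charter("a chair", "a tape measure", "sizing problems"),
]

for c in charters:
    print(c)
```

Nothing about the structure limits how you test, it just keeps each session focused on one target, one set of resources and one set of risks.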
Now that we knew how to make a charter, we were tasked with taking all the risks we had previously identified and creating charters for them, which you can see below.
Pairing up on our tables, we compared the search function with other search engines, starting with Google, then Yahoo and finishing with Ask. The important thing was deciding what to search for, as we didn’t want a term so obscure that it would overly restrict the results, nor one so broad that the results would be hard to compare, and so we settled on searching for our host, Dan Ashby.
We looked at the suggestions offered, total search results, first-page results, suggested images and related searches. The differences between them were interesting: whilst they all perform a search, aside from Dan’s site being the first result on each, that is where the similarities ended, with everything else differing.
After our first go at using charters, we discussed how we were taking notes while testing our charters, and how we record our notes in general. Everyone in the room shared their ideas rather than just being talked at by Dan, which was great as it got everyone talking, including about when something does or doesn’t apply.
Dan then went over what heuristics are and how they can help with working out tests to perform, encouraging us to treat them as a source of inspiration and ideas rather than a checklist to follow. As part of this, we also talked about oracles, which are “how you know what you know”, and how combined they can help us think about not only what testing we could do, but also how to do it.
We finished the day by working on another charter, again with the assistance of the TestSphere deck, this time using the entire deck to give us further ideas of what tests we could perform to complete the charter.
I found the workshop really enjoyable, speaking with the people there, hearing how different people work, as well as leaving with ideas to try out in the workplace.
My next post (when I get round to it) will cover the main conference day of TestBash, with plenty of pictures from the talks.
If you have never been and are able to attend one near you (England, Ireland, Germany, Netherlands, USA), then I would recommend it. The atmosphere is friendly, the talks are interesting, and the goody bags are always great.
Thanks for reading!