The best of 2017 for BDD addicts… Stories by Cucumber team, Andrew Knight, Chris Matts, Thomas Sundberg, Mark Winteringham & Gaspar Nagy…
Dear BDD Addicts,
I am happy to share with you the 2017 summary issue of the BDD Addict Newsletter. We have had a busy year with 57 shared articles from 41 different authors. The majority of the posts (21) were related to the BDD approach, but we also had many articles about test automation (14) and BDD tools (10), like SpecFlow or Cucumber.
In this issue I have selected those articles that received the most interest in 2017. Have you missed them? Now there is a chance to check them out!
Please keep sending links in 2018 to firstname.lastname@example.org, so that I can share them with the others. You can also encourage others to subscribe at bddaddict.com.
Happy new year, and keep on bdding…
[Process] Even more anti-patterns
The “Cucumber anti-patterns (part one)” post was one of those that received the most interest from BDD addicts. Fortunately, we are not left without further hints about what should be avoided for the sake of a successful BDD implementation: part two has also been posted! Do you agree? Do you have further “don’t”-s? Write about it! (And send me the link!)
Cucumber anti-patterns (part two) (Cucumber team, @cucumberbdd)
[Test Automation] When the UI test says: tuttu
There are many articles talking about the danger of focusing too much on automated UI testing. We all know the typical problems with it, like brittleness, long execution times and constant maintenance effort. Mark Winteringham’s short post might still be refreshing. Not only because he uses the TuTTu mnemonic (pronounced tutu) to describe this anti-pattern, but also because it focuses on the goals you might want to achieve with UI automation. Thanks for sharing it!
Anti-pattern: Cross browser checking (Mark Winteringham, @2bittester)
[Gherkin] Checklist for good scenarios
Probably everyone who practices writing Gherkin scenarios and helps others follow this path has their own checklist of things to watch for. Not everyone writes it down, though, and not everyone shares it with others. Thomas Sundberg belongs to the sharing group. Thanks for sharing your thoughts, Thomas!
Gherkin scenarios – some good properties (Thomas Sundberg, @thomassundberg)
[Gherkin] The roles of Given, When and Then
It is not widely known, but Chris Matts was the person who suggested using the Given/When/Then keywords that we all use nowadays for BDD scenarios. Given, When and Then might be seen as just three labels that make scenarios more readable, but in reality they refer to very different aspects of the specification. The Given steps set the context or pre-condition, the When steps describe the function that we are about to specify and the Then steps describe the expected outcome or post-condition. In his post, Chris talks about the role of these steps in detail, including how you can avoid irrelevant pre-conditions by thinking about the steps in reverse order.
Three top tips for using Given When Then (Chris Matts, @PapaChrisMatts)
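As a minimal illustration of these roles (a made-up account example of mine, not one from Chris’s post), note how each keyword maps to a different aspect of the specification:

```gherkin
Feature: Account withdrawal

  Scenario: Successful withdrawal within the balance
    # Given: the context / pre-condition the scenario depends on
    Given the account balance is 100 EUR
    # When: the single action or function we are specifying
    When the account owner withdraws 40 EUR
    # Then: the expected outcome / post-condition
    Then the account balance should be 60 EUR
```

Thinking about such a scenario in reverse order (decide the Then first, pick the When that produces it, then add only the Given steps the outcome really depends on) helps to spot irrelevant pre-conditions.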
[Gherkin] I, Panda
Given I am a member of a team practicing BDD
When the team member mixes first person and third person phrasing style
Then the result will be confusing and hard to maintain
This scenario summarizes the important problem of inconsistent phrasing style and grammar in Gherkin scenarios. On his blog, the “Automation Panda”, Andy Knight talks about a small but rather important question: how shall we phrase our scenarios? Shall we use the first-person style (“Given I …”) or rather the third person (“When the team member…”)? Pandas don’t like talking about themselves…
Should Gherkin Steps Use First-Person or Third-Person? (Andrew Knight, @AutomationPanda)
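To make the contrast concrete, here is a hypothetical login scenario written consistently in each style (the steps are my own sketch, not taken from Andy’s post):

```gherkin
# First-person style: the reader plays the actor
Scenario: Successful login (first person)
  Given I am on the login page
  When I submit valid credentials
  Then I should see my dashboard

# Third-person style: a named actor performs the steps
Scenario: Successful login (third person)
  Given the registered user is on the login page
  When the user submits valid credentials
  Then the dashboard should be displayed
```

Whichever style your team picks, the point of the scenario above still holds: mixing the two within one feature is what makes scenarios confusing and hard to maintain.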
[Test Automation] We miss the undersea tests!
I was working on a bugfix for SpecFlow. A great feature, covered with Gherkin scenarios, but still buggy. How is that possible? When I started fixing it, I realized that for this particular feature we only had SpecFlow tests and no unit tests. So while fixing it, I tried to analyse what had caused the buggy code and how we could have avoided it. To my surprise, I found that the answer was related to the balance of automated specification (scenarios) and implementation verification (unit tests).