I was invited by SmartBear to conduct two webinars on BDD, and they have just released the recording of the second one online: Writing better BDD scenarios. In this post, I summarize the most important points I covered.
SmartBear is the company behind Hiptest and many other test-related tools, and they are strongly committed to spreading good content about BDD. I haven’t used Hiptest for a real project yet, but one of the good things about BDD and BDD scenarios is that they are tool-agnostic. There might be small syntactical differences, but the concept is the same regardless of whether you work with Cucumber for Java, SpecFlow for .NET or Hiptest.
The topic, writing better BDD scenarios, is pretty important to me nowadays, for many different reasons. Maybe the most visible one is the Formulation book that Seb Rose and I are currently working on. This book (already available in beta with the first few chapters) focuses entirely on writing scenarios using the Given, When and Then keywords, which are mostly used in so-called Gherkin feature files.
Another reason, probably an even more important one, is related to collaboration. In my nearly 10 years of experience with BDD, I have seen many, many times that without the involvement of the entire team, without the business side involved, BDD brings much less value. In our previous book (Discovery), about the BDD concept and collaboration, we wrote: “Without the business involved, […], the scenarios become technical, data-driven and dry. This means that the business get very little value from reading the scenarios and, consequently, won’t understand the implication of a failing scenario. This removes the possibility that the scenarios will provide a constructive feedback loop between the business and delivery team — so the scenarios become an overhead.” The lack of collaboration is a big problem, but it first becomes visible when you look at the scenarios the team has written. As John Ferguson Smart put it in a recent post: “Show me your scenarios, I’ll tell you if you are practicing BDD”.
We have probably all seen such long, technical, script-like scenarios. They are a late indicator of the problem. If you have only a few of them, they are probably easy to fix, but if you already have hundreds… it will be a much harder job. Either way, the earlier you start, the better your chances of avoiding the technical (or testing) debt that bad scenarios cause.
In the webinar, I took such a bad scenario from an existing project and showed how it can be fixed. As the original scenario was almost 100 lines long, I had to shorten it and remove the confidential details, so it now represents a test for an imaginary StackOverflow-like Q&A site called SpecOverflow. I don’t include it here, because I fear people (or Google) would take it as an example to follow, but you can find it in the webinar recording at around 15:00.
To clean up the bad scenario, I will use the 6 principles of good scenarios that Seb and I have collected for the Formulation book. We call these principles BRIEF, as the initials of the first five make up this word, but “brief” itself is also an important principle, which is why there are 6. The free sample of the beta book on Leanpub contains a short description of all 6 principles, so I just list them here…
- Business language — enables collaboration and feedback
- Real data — helps to discover gaps and blind spots in the requirements
- Intention revealing — describe the tests by focusing on what we want to achieve and not on how
- Essential — include only relevant details in the scenario
- Focused — each scenario should illustrate a single rule only
- Brief — keep scenarios short
Using these principles, I cleaned up the bad scenario in six steps.
As you will see, fixing such a bad scenario is a complex task. It is obviously better to write it well from the beginning (the BRIEF principles will help with that). Nevertheless, fixing such a bad scenario might sometimes be required, and even if you don’t have this kind of legacy, thinking through the fixing steps can help you better understand the principles of good scenarios.
#1 — Find the business language to express your intentions! (B)
I started by analyzing the low-level browser automation steps of the original scenario and tried to understand what business workflow steps they cover. This is a sort of reverse engineering, where you need to figure out from low-level UI instructions what we really wanted to do. For example, if you navigated to the login page, filled in the user name and password text boxes and then clicked the login button, the original goal was probably to ensure a logged-in user. Sometimes you can make a good guess; sometimes you will need to consult the business to figure out why those particular steps were important. This is one of the most difficult and time-consuming steps, but the result is appealing: we reduced the 27 steps to 10, and these already use our business language.
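The original steps are only in the webinar recording, but the kind of transformation looks roughly like this. The step wording and data below are made up for illustration and are not taken from the real project:

```gherkin
# Before: low-level browser automation describing the mechanics
Given I navigate to the "/login" page
And I fill the "username" field with "user1"
And I fill the "password" field with "secret123"
And I click the "Log in" button

# After: the same intent expressed in business language
Given I am logged in
```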
#2 — Make scenarios focused by illustrating a single rule! (F)
The scenario now uses the business language, or at least something closer to it, but it still represents a long user journey in which the user performs different actions and the interim state after these actions is verified by the test. In Gherkin, such scenarios usually follow the Given-When-Then-When-Then-When-Then pattern. Such a pattern is normal for manual tests, where the tester wants to check as many things as possible with a single test preparation, but for automated tests this pattern leads to brittle and unreliable scenarios that need constant maintenance.
The “focused” principle says that a good automated scenario should focus on a single business rule or acceptance criterion. In our scenario (and this is typical of such multi-purpose scenarios), the Then steps can help us identify the different rules the scenario verified. So we can go through the blocks of Then steps and try to figure out which rule each of them verifies. This is also reverse engineering, so just as in the previous step, you might need the help of the business. You may also find Then steps that do not verify any business rule (or verify one that has already been covered anyway); these you can skip.
Essentially, you need to check the Then steps and try to answer why it is important to verify these things. Once you have found the rules, you can split the long scenario into smaller ones, each illustrating a single rule. For example, the “Then the question details should be visible” step with its data table might verify the rule “The details of the question can be accessed from the question list…”, so a scenario like the following can be extracted.
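(The sketch below is a reconstruction for the imaginary SpecOverflow app; the exact step wording and data in the webinar differ.)

```gherkin
Scenario: Question details can be accessed from the question list
  Given I have logged in
  And I have added a question with
    | title | How can I parse JSON in Java? |
    | tags  | java, json                    |
  When I open the question from the question list
  Then the question details should be visible
    | title | How can I parse JSON in Java? |
```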
#3 — Reveal intentions rather than describing the mechanics! (I)
Applying “intention revealing” might require quite different changes depending on the context. Generally, what we want to achieve is that the steps do not describe the individual actions we had to perform (the mechanics), but the goal or end result we wanted to achieve. This is especially true for the Given steps. For example, in the scenario above, the “Given I have logged in” and the “And I have added a question with…” steps were together only necessary to ensure that there is at least one question whose details we can check. (For checking the details, you don’t even need to log in in this app.) An intention-revealing version of these two steps would be this:
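(Again a reconstruction of the idea rather than the exact step shown in the webinar.)

```gherkin
# Instead of logging in and posting the question through the UI,
# state the precondition we actually need:
Given there is a question "How can I parse JSON in Java?"
```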
The difference might not seem that big, but the impact of the change is significant. Besides being shorter and easier to understand, the second version gives you better options for automation. If it is convenient, you can automate it so that a default user first logs in and posts a question; this might work in many cases. If you later realize that the scenario is too slow or that it enforces too tight a dependency between the question-display and question-asking components, you can change the automation, for example so that it saves the question directly to the question store using an internal API of the application. Suddenly your scenario is much faster and more robust!
The original version forced you to automate it in the log-in-and-ask style; you had fewer options to improve the automated scenario.
#4 — Keep essential details only! (E)
To illustrate a complex business rule, we might need to include quite a few details: data, workflow steps, etc. Keeping the scenario concrete with concrete data is good, but if you include details that are not relevant to the business rule, the scenario becomes confusing: you will not understand why a particular detail was important.
The more details you include, the more dependent the scenario will be on your data structures and validation rules. In our scenarios, we also provide a list of tags when asking the questions, although the rule and the Then steps have nothing to do with tags. But providing tags was mandatory in this app, so we had to supply some. If the validation rules for the tags change in the future, we might need to come back and fix these scenarios as well. So including irrelevant (incidental) details not only makes the scenario confusing but also causes extra maintenance effort.
It is better to leave these details out of the scenario and push them down to the automation layer, where they can be maintained much more easily. (E.g. you can define a constant for the “default tags” that you can reuse wherever needed.)
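For our SpecOverflow example this could look like the sketch below; the step wording is illustrative:

```gherkin
# Before: incidental details (the tags) leak into the scenario
Given there is a question with
  | title | How can I parse JSON in Java? |
  | tags  | java, json, parsing           |

# After: only the essential detail remains; the automation layer
# supplies a default tag set when it creates the question
Given there is a question "How can I parse JSON in Java?"
```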
#5 — Use real data! (R)
If you see these scenarios as technical verification artifacts, it does not matter what data you use in them, as long as it conforms to the validation rules. A question title of “Test question 123” might be just as good as any other text. But we do more than technical verification here. As you remember from the intro, without collaboration you lose many of the benefits of BDD. In many cases, using real or realistic data in the scenarios helps the team better understand the goals of the business rules. This leads to a better implementation, but it can also help to spot special cases or forgotten requirements. A scenario with concrete, real data is much more likely to trigger good discussions.
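To show the difference with our made-up SpecOverflow data:

```gherkin
# Technically valid, but it tells the reader nothing
Given there is a question "Test question 123"

# Realistic data invites discussion: what if the title contains code,
# special characters or is very long?
Given there is a question "How can I parse JSON in Java?"
```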
#6 — Brief scenarios!
Following the steps above helped us fix the long scenario and produce focused, clean scenarios like the one below.
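(A reconstructed sketch of one of the resulting scenarios; the version shown in the webinar may differ in wording.)

```gherkin
Scenario: Question details can be accessed from the question list
  Given there is a question "How can I parse JSON in Java?"
  When I open the question from the question list
  Then the question details should be visible
    | title | How can I parse JSON in Java? |
```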
These scenarios already seem to fulfill our last principle: brief. To get feedback from the business on our scenarios, it is essential to keep them short and straightforward. Brief scenarios with no more than 5-6 steps help a lot with this.
Brief scenarios are not only good for maintenance and automation, but they also enable better collaboration between the business and the delivery team.
In the Formulation book, which is currently in preparation, we have collected 6 principles that help to write better BDD scenarios. These BRIEF principles are not only useful for writing scenarios from scratch, but also for cleaning up an existing one.
Based on the webinar “Writing better BDD scenarios”, I have shown you a few ideas for how this can be achieved.
If you are interested in this topic in more detail, you can attend one of my conference workshops on it (London Testers Gathering – London, ExpoQA – Madrid, EuroSTAR – Prague, TAPOST – Riga) or get in-house training (BDD Vitals) for your entire team, optionally followed by BDD automation workshop days (BDD with SpecFlow, BDD with Cucumber Java, BDD with Cucumber.js). And of course, check out our books:
- Discovery — Explore behaviour using examples (available on Amazon and Leanpub, audiobook is coming soon!)
- Formulation — Express examples using Given/When/Then (beta is available on Leanpub)