Today, I was consulting for an Egyptian company in Cairo. The team objected to the CEO's request to write automated GUI tests every sprint. Specifically, he asked them to include GUI test automation in the Definition of Done for each story, so a story would not be counted as done until its related automation was complete. The team suggested delaying that test automation to the end of the release, because the GUI was constantly changing, which forced them to re-record all their automated tests again and again, even after the smallest change to the GUI. This rework was not just a waste of effort but also a source of frustration. On top of that, some team members suggested delaying database unit testing and code unit testing to the end of the release as well.
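Part of why record-and-replay tests break on every small UI change is that each recorded test hard-codes its own element locators. A common alternative is the Page Object pattern, where each screen's locators live in one class, so a renamed element means one edit instead of re-recording every test. Here is a minimal sketch in plain Python; the "DOM" is just a dict standing in for a rendered page, and all element ids are hypothetical, not taken from the team's application.

```python
# A fake "DOM": element id -> current value. In a real suite this
# would be a browser driven by a tool such as Selenium.
FAKE_DOM = {
    "login-username": "",
    "login-password": "",
}

class LoginPage:
    # All locators for this screen in ONE place. If a designer renames
    # "login-submit" to "sign-in-button", only this class changes;
    # every test that uses LoginPage keeps working untouched.
    USERNAME = "login-username"
    PASSWORD = "login-password"

    def __init__(self, dom):
        self.dom = dom

    def type_username(self, value):
        self.dom[self.USERNAME] = value

    def type_password(self, value):
        self.dom[self.PASSWORD] = value

# A test that depends only on the page object, never on raw ids.
page = LoginPage(FAKE_DOM)
page.type_username("amr")
page.type_password("secret")
print(FAKE_DOM["login-username"])  # -> amr
```

With this structure, a GUI change is absorbed by updating one locator in one class, which is exactly the kind of maintenance cost that should be measured before concluding that automation must wait for the end of the release.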
Sure, Agile gives us flexibility, but some customizations can easily destroy the whole concept of agility, and this is one of them: a decision that could revert us to the old, traditional ways of development and leave us following hollow agile rituals.
First: we had a big issue in the team discussion, because the talk was opinion-oriented, and that leads to endless debates between opposing points of view without ever reaching a crisp picture or a rational decision. What we needed in our case was data. How many tests are being re-recorded? How much effort does that take? Why is the user interface so sensitive to every change, even a small one? Without a crisp, clear understanding and without measures, we are deciding blindly, assuming many things all at once.
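The questions above can be turned into two simple sprint metrics: what share of the GUI tests had to be re-recorded, and what share of the sprint's capacity that rework consumed. A minimal sketch, with placeholder numbers that are purely hypothetical, not data from this team:

```python
# Hypothetical sprint data the team could collect during one sprint.
sprint = {
    "total_gui_tests": 40,           # automated GUI tests at sprint start
    "tests_rerecorded": 12,          # tests that broke and were re-recorded
    "rework_hours": 18.0,            # hours spent re-recording
    "sprint_capacity_hours": 320.0,  # whole-team hours in the sprint
}

# Share of the test suite that broke, and share of capacity lost to rework.
rerecord_rate = sprint["tests_rerecorded"] / sprint["total_gui_tests"]
rework_share = sprint["rework_hours"] / sprint["sprint_capacity_hours"]

print(f"Re-recorded: {rerecord_rate:.0%} of GUI tests")       # -> 30%
print(f"Rework cost: {rework_share:.1%} of sprint capacity")  # -> 5.6%
```

Two numbers like these, gathered over one real sprint, replace a debate about fears with a conversation about facts.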
Getting facts and metrics is affordable in Scrum. Because we work in short cycles of complete software development, we have a great opportunity to experiment and collect data, giving us solid feedback that can steer our development techniques in a very short time. In the mentioned organization, this cycle is two weeks. So instead of debating a technique subjectively, why not run an experiment in a sprint and look at the result? I advised the team to plan a sprint, collect data, and talk to the CEO with facts instead of opinions and fears.
Doing the automation at the end of the release can lead to disaster. It stretches the feedback loop of test automation to the release level. Not only that, it also gives us a deceptive sprint velocity, because at the end of the release we still have to spend an unknown number of days recording the automated tests. On top of that, the idea of experimenting in short cycles is dead: now we have to experiment at the level of a whole release. What a long time! So by all means, we should treat that route as a last resort, if we take it at all.