A trap every app development team falls into at some point is using UI testing to automate testing features from the perspective of a customer. Understand why this is a problem and how a testing approach that includes Acceptance Testing avoids it.

Has Your Mobile App Team Fallen Into The UI Testing Trap?

UI Testing, a black hole of app software development. (Image by Event Horizon Telescope)

Let's get it out there: the biggest pain in app development arises from the quest to automate the time-consuming manual effort of testing features from the perspective of an actual customer.

When this initiative begins we quickly see the adoption of UI Testing, offered as the best, or perhaps the only, solution. UI Testing frameworks for mobile (Appium, XCUITest, Espresso and more) are so prevalent that we fail to question their use.

These tools, whilst technological marvels, are a trap, and their use flies in the face of basic software testing principles. The particular tool doesn't even really matter; the outcome is the same: very little in the way of useful improvement.

So stay tuned if you want to learn why, and what to do instead.

The results are in. UI Testing is a FAIL!

The problems with UI tests I've observed (and seen repeated on different projects over 15 years) are:

How do really smart teams fall into the trap?

So how do smart teams, especially mobile app teams, fall into the trap? The acuteness of this problem arises from the nature of releasing software via app stores.

Everyone's assumption is that nowadays all software is released in a continuous delivery fashion, where any fix is just a code-push away. Yet in reality, mobile app development looks more like the traditional software release process of yesteryear (when a golden master was literally sent to a factory to be replicated onto physical media). The app stores just replaced the factory, still pressing your app out one physical device at a time.

When software releases look like THIS you don't want to be on the team that ships bugs

Let's just say the delivery process is VERY different from that of continuous delivery of a backend service or web app. So the unique characteristics of app store distribution create a higher imperative for quality up front. This leads to increased cost and delays caused by the limitations of manual testing.

At some point or another, the team's fate is sealed; they become trapped on a path to UI automation into which plenty of energy goes, yet nothing all that useful comes out.

And the really scary thing about this is..

We know all this! In software engineering we're all familiar with the Testing Pyramid, popularised by Mike Cohn and Lisa Crispin, and its cautions about the over-use of UI Testing.

This is where a team might be expected to return to this insight, pause for breath, heed the warnings, and say "we know where this leads; whatever we do, however tempted, we MUST resist the temptation of too many UI tests!".

Not so! Because sunk costs and the pre-existence of a set way of doing things are a powerful force. The existence of painstakingly written, human-centric test scripts ensures that automating those scripts is seen as the logical next step.

(I'd bet money that the primary use of BDD tools is to automate existing manual test scripts rather than to write requirements.)

What's not given nearly enough consideration is that those tests were intended for a human being to follow: one that's vastly more intelligent than the computer about to be handed this complex task.

Creatures of automation's attempts to operate in a world that was designed for humans don't typically end well

What happens next is an emphatic embrace of any number of well-marketed and popular User Interface testing tools which promise to make manual testing a distant memory.

And now our problems really begin!

Soon we end up with an "Ice Cream Cone" approach to testing, which turns the testing pyramid on its head. This leaves us with distinctly un-agile software delivery reliant on slow, error-prone, expensive tests; the very thing we set out to replace!

Right tools, wrong job

Forcing computers into a human-centric world never ends well. The reason UI tests don't work, and the whole approach is flawed, is that they are actually End-To-End or "System" tests, involving literally every part of the system.

(Yes, you can run all this in a simulated environment to make it more predictable and faster. The effort to do so, however, is non-trivial, meaning it probably doesn't happen. If it does, the complexity of the test system balloons, with intolerable increases in code, cost and maintenance.)
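To make this concrete, here's a minimal, purely illustrative sketch of the anatomy of a UI test. `FakeDriver` and every identifier in it are hypothetical stand-ins for a real automation driver (Appium, XCUITest, Espresso); in a real run, each step crosses process, OS and rendering boundaries, and each is a point of environmental failure unrelated to the rule being checked.

```java
import java.util.HashMap;
import java.util.Map;

// Purely illustrative: FakeDriver stands in for a real UI automation
// driver. All names and the discount scenario are invented.
public class UiTestAnatomy {

    static final class FakeDriver {
        private final Map<String, String> screen = new HashMap<>();

        void launchApp()            { screen.put("price_label", "200"); } // start-up can time out
        void login(String user)     { /* network, auth, animations */ }   // back end can be down
        void tapApplyDiscount()     { screen.put("price_label", "180"); } // element may not be ready
        String readLabel(String id) { return screen.get(id); }            // depends on layout/locale
    }

    public static void main(String[] args) {
        FakeDriver driver = new FakeDriver();
        driver.launchApp();
        driver.login("test-user");
        driver.tapApplyDiscount();

        // The single business assertion all that machinery exists to reach:
        if (!"180".equals(driver.readLabel("price_label"))) {
            throw new AssertionError("discounted price not shown");
        }
        System.out.println("UI test passed (this time)");
    }
}
```

Notice the ratio: one line of business assertion, surrounded by scaffolding whose only job is to get the whole system into a state where that line can run.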

Here's the list of reasons I've observed for these tests failing (which I revisit whenever I'm tempted to repeat the experience):

Notice how many of these have nothing to do with the actual behaviour of the software itself, but rather with the endless combinations of things outside it that cause the tests to go wrong.

Escape the Trap with Acceptance Tests

Manually testing software isn't a sustainable OR acceptable approach to software testing. But neither is UI Testing. So what is?

One answer is the relatively unknown discipline of "Acceptance Testing". This is something I discovered when grappling with these problems and learning about the value of TDD in solving related software quality problems.

(Given the lack of existing material, I created a new series exploring Acceptance Testing in detail with Clean Coders.)

The Key Takeaway: Acceptance Tests are the one kind of test designed for the explicit purpose of automating the testing of customer and business requirements. The critical difference between Acceptance Tests and UI Tests, however, is that Acceptance Tests act at a level close to the code itself. They exercise the software where the tests have direct access to its business logic, without the complexities of the UI and everything that entails.
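Here's a hedged sketch of what that can look like, under invented assumptions: `LoyaltyDiscount` and its 100-point/10%-off rule are hypothetical, made up for illustration. The point is what's absent: no driver, no device, no simulator, just the business rule expressed and checked in business terms.

```java
// A minimal sketch of an acceptance test that exercises business logic
// directly, below the UI. LoyaltyDiscount and its rule are invented.
public class AcceptanceTestSketch {

    // The business rule under test, reachable without any UI machinery.
    static final class LoyaltyDiscount {
        double priceFor(double basePrice, int loyaltyPoints) {
            return loyaltyPoints >= 100 ? basePrice * 0.90 : basePrice;
        }
    }

    public static void main(String[] args) {
        LoyaltyDiscount rule = new LoyaltyDiscount();

        // Given a member with 150 points, when they buy a $200 item,
        // then they pay $180.
        check(rule.priceFor(200.0, 150), 180.0);

        // Given a member with only 50 points, they pay full price.
        check(rule.priceFor(200.0, 50), 200.0);

        System.out.println("acceptance checks passed");
    }

    static void check(double actual, double expected) {
        if (Math.abs(actual - expected) > 1e-9) {
            throw new AssertionError("expected " + expected + " got " + actual);
        }
    }
}
```

Because nothing outside the process is involved, a suite of tests like this runs in milliseconds and fails only when the business logic itself is wrong.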

In short, Acceptance Tests are a way of returning to the guidance of Mike Cohn's Pyramid, with particular focus on the middle layer he labelled "Service Tests". These are the tests Mike himself observed to be the forgotten layer of the test automation pyramid.

Not Convinced Yet?

Here I've suggested Acceptance Testing can help you escape the UI test trap, but it turns out it also has many other benefits to offer around the actual development of software in the first place, not just how you test it!

If you'd like to learn more about Acceptance Testing and how it can benefit your mobile (or other) app development project, check out my in-depth 5 Part Clean Coders series.

In it you'll discover the simple brilliance of Acceptance Testing and how to build successful software sooner and keep it that way for longer.