Yes, user feedback about an app is useful, but only to a degree. Here are some other ways to evaluate an app.

Guest Commentary

August 3, 2017

5 Min Read
Hunter Jensen, Barefoot Solutions

Nearly 23% of applications are abandoned after first use. Yikes!

For app developers, this statistic can seem especially grim. No one wants their hard work to end in abandonment, but it’s hard to figure out where you’re missing the mark. This is where user feedback and user evaluation come in. But how do you know which technique to use?

Many app developers prioritize user feedback over other evaluation techniques, such as heuristic evaluation and A/B testing. But is user feedback the only method you should use? Is there such a thing as too much user feedback?

What follows is everything you need to know about the assessment methods available to you and how they can give you a fuller picture of your app’s user experience.

How much user feedback is too much?

The top innovators -- like Steve Jobs and Henry Ford -- would argue that user feedback is unnecessary. From their perspective, user feedback stifles creativity and doesn’t even lead to products users actually want.

As a quote commonly attributed to Ford says, “If I had asked people what they wanted, they would have said faster horses.” According to Jobs, “It’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.”

Ford and Jobs created innovative new products for which there was no initial demand, but those products quickly became indispensable. If you are building an app that is completely unprecedented, user feedback won’t be of much use. All it will do is stifle innovation.

It's unlikely your app is truly revolutionary, which means there is a place for user feedback in evaluating its strengths and weaknesses. But there’s also an important lesson here: Users can’t tell you what your next product should be. They cannot present you with fully-formed app ideas.

Prototyping and user feedback will provide incremental benefits to your app by improving its functionality. For most app developers, the question is not whether user feedback should be eschewed, but how much feedback is useful.

There is a danger in allowing projects to become user-led. Few users are knowledgeable enough to know whether their needs and wants are practical. Even when surveying a large sample of users, there is a chance those surveyed will not represent the needs of all users.

It’s important to be deliberate in how you collect user feedback, and how much of it you get.

What’s the right way to gather feedback?

Many companies fall into some common pitfalls when soliciting user feedback. Here are some tips on getting good data you can convert into action.

Don’t ask users what they want. This line of inquiry is too open-ended, and it often yields feedback that is at once overly specific and unreliable. Instead, pursue more fruitful lines of questioning, such as:

  • What moves users emotionally?

  • What features do users find frustrating or difficult to use?

  • What features do they most enjoy? Why?

  • What problems does the app solve for its users? What doesn’t it solve?

Avoid closed questions. Don’t ask questions that can be answered with a simple “yes” or “no.” This doesn’t start a conversation or give any meaningful insight because it doesn’t get to the why.

Remember: One opinion is not data. One user’s opinion shouldn’t be enough for you to change your app. Look for trends by clustering your data samples. If 100 users have difficulty with a particular feature, that carries more weight than if just one user struggles.
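As a minimal sketch of that idea, with made-up feedback records (the field names and sentiment labels here are assumptions, not from any particular tool), you can tally complaints per feature and only surface the ones reported by more than one user:

```python
from collections import Counter

# Hypothetical feedback entries, each tagged with the feature it concerns.
feedback = [
    {"user": "u1", "feature": "checkout", "sentiment": "negative"},
    {"user": "u2", "feature": "checkout", "sentiment": "negative"},
    {"user": "u3", "feature": "search", "sentiment": "negative"},
    {"user": "u4", "feature": "checkout", "sentiment": "positive"},
]

# Cluster complaints by feature so one loud opinion doesn't outweigh a trend.
complaints = Counter(
    f["feature"] for f in feedback if f["sentiment"] == "negative"
)

# Only act on issues reported by more than one user.
trends = {feature: n for feature, n in complaints.items() if n > 1}
print(trends)  # {'checkout': 2}
```

The threshold here (more than one report) is arbitrary; in practice you would scale it to your sample size.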

Look at user behavior patterns. Actions speak louder than words. If your app has a feature users claim is useful but don’t actually use, it’s not useful to them. Examine what your users do, not merely what they say.
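One way to sketch this comparison, again with invented survey answers and event-log entries, is to flag features users praise in surveys but rarely touch in the logs:

```python
from collections import Counter

# Hypothetical data: features users *claimed* were useful in a survey,
# versus actual usage events pulled from the app's analytics log.
claimed_useful = {"wishlist", "price_alerts", "reviews"}

event_log = [
    ("u1", "reviews"), ("u2", "reviews"), ("u3", "reviews"),
    ("u1", "wishlist"),
]

usage = Counter(feature for _, feature in event_log)

# Features users praise but rarely use deserve a second look.
MIN_USES = 2  # arbitrary threshold for this sketch
talk_not_walk = sorted(f for f in claimed_useful if usage[f] < MIN_USES)
print(talk_not_walk)  # ['price_alerts', 'wishlist']
```

Here the survey says three features are useful, but the log shows only "reviews" gets real traffic; the other two are candidates for rework or removal.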

Other ways to evaluate your app

While user feedback has its place, it is not the only way to evaluate your app. Heuristic evaluation and A/B testing can be valuable to the product development process.

Heuristic evaluation. A “heuristic” is a rule of thumb. In a heuristic evaluation, user-experience specialists review an app’s interface against a set of established usability principles to identify usability issues. This is an efficient and effective way to get an overview of app UX and find the main problems affecting its usability.

Heuristic evaluators need to understand the end user’s needs and goals so they can evaluate the app from the user’s perspective. It is helpful to develop user personas, so evaluators know who is using the app and why.
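A heuristic evaluation typically produces a list of findings, each tied to the principle it violates and rated for severity. As a rough sketch (the example issues are invented; the 0–4 severity scale is a common convention, with 4 being the most severe):

```python
# Minimal sketch of recording heuristic-evaluation findings.
# Heuristic names follow well-known usability principles; issues are hypothetical.
findings = [
    {"heuristic": "Visibility of system status",
     "issue": "No spinner while search results load", "severity": 2},
    {"heuristic": "Error prevention",
     "issue": "Delete account has no confirmation step", "severity": 4},
    {"heuristic": "Visibility of system status",
     "issue": "Upload progress not shown", "severity": 3},
]

# Rank problems so the worst ones are fixed first.
worst_first = sorted(findings, key=lambda f: -f["severity"])
print(worst_first[0]["issue"])  # Delete account has no confirmation step
```

Severity-ranking the findings is what turns an evaluator's walkthrough into an actionable fix list.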

A/B testing. Some developers view A/B testing as a waste of time, and would rather focus on developing and tweaking their app. This is unfortunate, as A/B testing allows you to move from making technical and design choices based on hunches to making them based on data.

Everything from the layout of the user interface to the color of the buttons can be subjected to A/B testing. Google, for example, tested what shade of blue to use for its text links. They discovered that one particular shade made people more likely to click, which Google converted into an extra $200 million in ad revenue. It’s unlikely Google would have seen such benefits had it asked its users what color its links should be.
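The usual way to decide whether a variant’s lift is real rather than noise is a significance test on the two conversion rates. A minimal sketch, using a standard two-proportion z-test with made-up experiment numbers (not Google’s data):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 1,000 users per variant.
# Variant A converts 100 users; variant B converts 135.
z, p = two_proportion_z(conv_a=100, n_a=1000, conv_b=135, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Variant B's lift is unlikely to be chance.")
```

With these numbers the p-value comes in well under 0.05, so you would ship variant B on evidence rather than on a hunch, which is the whole point of the method.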

Different evaluation methods serve different purposes, but used together, user feedback (solicited in the right way), heuristic evaluation, and A/B testing will give you a full picture of your app’s user experience and, most importantly, how to improve it.

Hunter Jensen is the founder and CEO of Barefoot Solutions, an innovative digital agency headquartered in San Diego. Barefoot Solutions specializes in web and mobile design and development, including web, iOS, Android, IoT, AppleTV, Apple Watch, and more. Having worked in technology for more than 19 years, Hunter's experience covers the entire lifecycle of product design and development, as well as all other facets of running a digital agency, including business development and fundraising.
