When it comes to work-related decision-making, being objective and data-driven is imperative. UX designers, in particular, have to make countless decisions during the initial user research and prototyping phases.
You can argue that good design is subjective, and sure, UX designers need their strong right brains to come up with creative and appealing solutions. But design decisions need to be objective and backed by facts rather than purely instinctive.
Sadly, even the most experienced, rational, and objective designers regularly fall prey to cognitive bias. After all, UX designers are human too.
So what exactly is a cognitive bias? It is a systematic error in human thinking that affects the decisions and judgments people make: a psychological deviation from rationality, a blind spot in our understanding of the world.
What happens is that we construct our own subjective reality in which reason takes a back seat to our unconscious biases, leading to poor decisions even when we believe they are the best ones.
In other words, a cognitive bias can result in warped perception, inaccurate judgments, and illogical decisions. It undermines your design thinking and, ultimately, spoils the end user’s experience. What’s more, such biases are often extremely difficult to discern from the inside.
Being cognizant of these biases will enable you to make more intelligent and objective decisions when designing products. Below are common cognitive biases that may affect you as a UX designer, along with ways to avoid them.
First up is confirmation bias, one of the most common types of cognitive bias and one of the hardest to rectify. Confirmation bias is our tendency to seek out results or information that accord with our worldview. We like to believe what we want to believe, even when there is firm evidence to the contrary, and any information that challenges our hypothesis may simply be discarded.
For instance, we all look up our symptoms online (even though it is advisable not to). When we browse the search results, one of two things happens:
Either we try to convince ourselves that what we have isn’t any of the serious diseases or disorders listed, even if the symptoms match.
Or we freak out and are convinced that what we have is terminal and we need an immediate appointment with a qualified specialist, even if only a couple of symptoms match.
Either way, we’ve already made up our minds about what illness we have, and we browse the internet just to find information that confirms our thinking.
Similarly, let’s say you design a cool call-to-action that you think is bound to bring in more conversions. But you soon realize the clicks aren’t coming, so you decide to do some user testing to get to the bottom of the problem.
Now, you’re impressed with your work. And because you love the button you designed, you might dismiss user data or feedback that goes against your initial assumption.
The best way to dodge this bias is to remember that the purpose of user research and testing is to learn more about users, not to prove yourself right about them. Treat all data equally instead of favoring the results that flatter your design.
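If you want a concrete guardrail here, a simple significance check can stop you from reading a win into noisy click data. Below is a minimal Python sketch (the traffic numbers and the function name are hypothetical, chosen purely for illustration) that compares the click-through rates of two call-to-action variants with a two-proportion z-test:

```python
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Z-statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Hypothetical numbers: the old button vs. the redesign you are fond of.
z = two_proportion_z(clicks_a=48, views_a=1000, clicks_b=62, views_b=1000)
print(f"z = {z:.2f}")  # about -1.37, below the 1.96 cutoff for 95% confidence
```

If |z| stays below roughly 1.96, the difference could easily be chance, no matter how much you love the new button.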
A subtler and slightly less common bias, the observer-expectancy effect occurs when you subconsciously act out your preconceptions while conducting user research, influencing how your participants behave.
As humans, we all have our personal beliefs, prior knowledge, and ingrained subjectivity. This can affect how we act around people we observe, and influence their behaviors such that they are more in tune with our desires.
A famous example of this effect dates from the early 1900s. Wilhelm von Osten believed his horse, Clever Hans, was capable of solving arithmetic problems; unknowingly, he gave Hans subtle cues that enabled the horse to answer such puzzles correctly.
Likewise, when conducting a user interview, your gestures and overall body language may unintentionally pressure the user to answer your questions in a certain manner. But you obviously want unswayed answers from your users if you truly want to build a great user experience.
To overcome this, practice your interviews before conducting them. Ask a coworker to flag any body language or non-verbal cues of yours that do not appear neutral.
Along the same lines as the observer-expectancy effect, framing bias occurs when you word a question in a way that prompts the user to answer in a certain manner.
For example, imagine taking a survey with a question worded as “How difficult was it for you to use our mobile app?” The question itself suggests that the app is difficult to use, priming you to answer in a pessimistic way.
While this can be useful if you want to surface bugs in your product, it does not inspire objective answers. So proofread your questions thoroughly to verify that they are neutral.
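As a supplement to manual proofreading, you could even automate a first pass. The Python sketch below is a deliberately naive heuristic (the word list and helper name are hypothetical) that flags survey questions containing loaded adjectives:

```python
# Words that tend to frame the respondent's answer (illustrative, not exhaustive).
LOADED_WORDS = {"difficult", "hard", "easy", "confusing", "annoying", "great"}

def flag_leading_questions(questions):
    """Return (question, matched words) pairs for potentially framed questions."""
    flagged = []
    for q in questions:
        hits = {w for w in LOADED_WORDS if w in q.lower().split()}
        if hits:
            flagged.append((q, hits))
    return flagged

survey = [
    "How difficult was it for you to use our mobile app?",
    "How would you describe your experience with our mobile app?",
]
for question, words in flag_leading_questions(survey):
    print(f"Review: {question!r} (loaded terms: {sorted(words)})")
```

A heuristic like this will never replace a careful human read, but it catches the most obvious offenders before a survey ships.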
Next comes sampling bias: when you’re deciding on a sampling pool for your user research, some types of users may be unintentionally left out.
Suppose you are devising a marathon running app and need to conduct research on marathon enthusiasts. You decide to interview and observe runners in your city, but fail to recognize that their running behavior may differ markedly from that of runners in suburban and rural areas. You are running the risk (pardon the pun) that your research insights won’t apply evenly across your target audiences.
A solid way to tackle this bias is to clearly define your audiences and list the key differentiators in terms of their backgrounds, behaviors, and attitudes, then include people with diverse characteristics in your sample.
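One practical hedge, assuming you can tag recruits by your key differentiators, is stratified sampling: draw a fixed number of participants from each segment rather than whoever is easiest to reach. A minimal Python sketch (the participant pool and helper name are made up) might look like this:

```python
import random

# Hypothetical recruiting pool, tagged by one key differentiator (location).
pool = [
    {"name": f"runner_{i}", "area": area}
    for i, area in enumerate(["urban"] * 60 + ["suburban"] * 25 + ["rural"] * 15)
]

def stratified_sample(pool, key, per_stratum, seed=42):
    """Draw the same number of participants from every stratum."""
    rng = random.Random(seed)
    strata = {}
    for person in pool:
        strata.setdefault(person[key], []).append(person)
    return [p for group in strata.values()
            for p in rng.sample(group, min(per_stratum, len(group)))]

sample = stratified_sample(pool, key="area", per_stratum=5)
print([p["area"] for p in sample])  # 5 urban, 5 suburban, 5 rural
```

Even a crude split like this prevents the city runners from drowning out everyone else in your findings.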
Finally, there is the clustering illusion: humans tend to see “trends” and “patterns” in completely random outcomes. We are naturally inclined to bring order to chaos, and while that may help us make some sense of randomness, clustering bias disrupts objective insights. A “hot streak” in a luck-based casino game? That’s clustering bias at work.
UX designers frequently rely on qualitative analysis, but with a small sample size it is hard not to see patterns in what are really just small runs of randomness that happen to share common features.
One way to counter clustering bias is to conduct user research and prototyping with distinct and diverse sets of users. Another is to hold a quiet, individual brainstorming session before discussing findings among stakeholders. You can also include more diverse stakeholders in the analysis process so that the bias has a better chance of being caught.
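To feel how easily “patterns” emerge from pure chance, it helps to simulate some. The short Python sketch below (the helper name is ours) flips a fair coin 100 times and reports the longest streak; runs of six or seven identical outcomes are entirely routine, despite carrying no signal at all:

```python
import random

def longest_streak(flips):
    """Length of the longest run of identical outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

rng = random.Random()
flips = [rng.choice("HT") for _ in range(100)]
print("longest streak:", longest_streak(flips))
# Run this a few times: a fair coin routinely produces streaks of 6 or 7
# in 100 flips, which is exactly the kind of "cluster" our brains latch onto.
```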