Can Artificial Intelligence (AI) help to improve your User Experience (UX) designs? How does it accomplish this — knowing that AI itself may be prone to bias?
In this article, we will examine this issue in detail and explore how you can tap AI to improve your designs.
User experience design is a critical part of running a business today. Your company needs a website, if not an app, and this platform will be most effective when it's easy to use and understand.
However, one important yet easy-to-miss piece of that puzzle is minimizing bias in your UX.
What Is UX Design Bias?
UX design bias is a general term covering any prejudice existing within a platform’s user experience. Accessibility issues are among the most common. As many as 95.9% of home pages don’t fully comply with the Web Content Accessibility Guidelines (WCAG) standard.
Not everyone faces the same barriers when using a website or app. Consequently, designers could build something that functions well for them without realizing it doesn’t accommodate others’ needs. As such, their UX designs are considered biased.
Cognitive biases can affect UX design, too. These are skewed ways of thinking your brain uses as a shortcut to process information but can limit understanding. For example, people commonly over-emphasize factors they notice first — anchoring bias — or consider data supporting their ideas more heavily than that which doesn’t — confirmation bias.
Such phenomena can likewise hurt your UX. Failing to account for them could lead you to overlook better, more accessible designs or miss ways in which the layout is misleading.
How Can AI Decrease Bias in UX Design?
Across all its forms, UX design bias leads to inaccessible or otherwise suboptimal user interactions. Prejudice is often unconscious, too, making it difficult to notice or work around. Thankfully, AI can help in a few ways.
Some AI design tools can scan your content to look for common signs of bias. That way, you can find missteps where you may have overlooked them otherwise and fix the design before it causes friction with users. While automated accessibility checkers only catch 57.38% of total issues according to some research, they still provide a fresh perspective in minimal time.
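Tools like these typically work by parsing a page's markup for machine-detectable failures. As a simplified illustration — not any particular product's implementation — the following Python sketch uses the standard library's HTML parser to flag images missing alt text, one of the most common WCAG failures that automated checkers catch:

```python
from html.parser import HTMLParser

class AccessibilityScanner(HTMLParser):
    """Minimal sketch of an automated accessibility check:
    flags <img> tags that have no alt text (a common WCAG failure)."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # An image with a missing or empty alt attribute is invisible
        # to screen-reader users, so record it as an issue.
        if tag == "img" and not attrs.get("alt"):
            src = attrs.get("src", "?")
            self.issues.append(f"<img src={src!r}> is missing alt text")

# Hypothetical page fragment: one accessible image, one inaccessible.
page = """
<html><body>
  <img src="logo.png" alt="Company logo">
  <img src="hero.jpg">
</body></html>
"""

scanner = AccessibilityScanner()
scanner.feed(page)
for issue in scanner.issues:
    print(issue)
```

Real checkers such as axe-core or WAVE test many more rules (color contrast, form labels, heading order), which is exactly why their coverage is only partial: issues like misleading link text require human judgment.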
Implementing AI features into your UX itself can also make the experience less biased toward some user groups. Automated alt text and built-in text-to-speech readers use natural language processing (NLP) to make interfaces accessible to the 7.6 million U.S. adults with a visual disability. Similarly, auto-generated closed captions and subtitling can help those with hearing loss.
NLP can assist users through chatbots, too. Bots can offer crucial information to visitors or walk them through answers to the barriers they face to make the platform accessible. When chatbots cannot resolve an issue, they can report it so you know what unaddressed prejudices may exist within your UX.
A Note About AI Bias
All of these use cases make it easier to prevent and mitigate UX design bias. However, you should be aware that AI can carry prejudices of its own.
Because bias is often unconscious yet widespread, it frequently exists in data sets without anyone realizing it. Once AI trains on this data, it may reproduce and even exaggerate human prejudices. While unintentional, such bias can have dramatic effects. For example, in 2018, Amazon discovered its recruitment AI discriminated against women — the effect of training mostly on men's resumes. A year later, a Facebook algorithm showed racial discrimination in how it promoted real estate advertisements.
Given the potential for AI bias, the technology must be used carefully. Only use AI tools from developers who have taken steps to minimize these risks, and always monitor AI’s results to ensure it doesn’t show prejudice.
How to Use AI in UX Design Effectively
The key to fighting UX bias with AI starts with recognizing where it’s most useful. Automated accessibility features like closed captioning, text-to-speech and voice controls are often the best path forward. AI-promoted prejudice is less of an issue in these more practical, less analytic models, and the results specifically target usability and diversity.
When using AI analysis to look for biased designs, avoid over-relying on the results. Remember, they may be incomplete, so it’s best to pair them with real-world user testing. You can also use prompt engineering when using a generative model like ChatGPT, which increases the likelihood of precise results and relevant content.
Always verify AI-generated or suggested outputs before putting them into action. While the technology is a great way to gain new perspectives and save time, humans are better equipped to apply the nuance needed to keep UX fair and accessible.
AI Can Be Helpful for Addressing Bias When Used Carefully
AI’s role in UX design bias is a complicated one. On the one hand, it can fill oversight gaps and automate accessibility features, but on the other, it can take some human prejudices to extremes.
Learning AI’s strengths and weaknesses will make it easier to use effectively and fairly. Once you understand these considerations, you can optimize your UX for all user groups.
Eleanor Hecks is a design and marketing writer and researcher with a particular passion for CX topics. You can find her work as Editor in Chief of Designerly Magazine and as a writer for publications such as Clutch.co, Fast Company and Webdesigner Depot. Connect with her on LinkedIn or X to view her latest work.