https://static1.squarespace.com/static/5db8a4995630c6238cbb4c26/t/65cc0bf858c4676b9878732a/1707871238177/Valoir+Report+-+Language+matters+-+AI+user+perceptions.pdf
As enterprise application vendors continue to evolve and deliver their artificial intelligence (AI) solutions to the market, how those solutions are perceived by users is important for acceptance, effective adoption, and, ultimately, the value they deliver. In our study of end users, Valoir found that
language matters, and many users are still waiting to hear the magic
words that will make them trust – and effectively adopt – AI. Seventeen percent of workers believe AI can't help them at work, and much of that resistance comes down to trust: one in 10 workers couldn't name a single company they would trust with AI.
Since the announcement of the first version of ChatGPT more than a year ago, enterprise application software vendors have been rushing AI-related product announcements to market – and have been met with varying levels of skepticism. Concerns about risk, bias, ethics, and safety have put a hold on broad adoption of many AI tools and applications –