While artificial intelligence offers new and potentially transformational ways for consumers to discover and enjoy products and services such as music, television and movies, it can also shape human perceptions and drive decisions in ways that limit personal exploration and agency. That is the central argument of a paper co-authored by Donna Hoffman, Louis Rosenfeld Distinguished Scholar, professor of marketing and co-director of the Center for the Connected Consumer at the George Washington University School of Business.
Professor Hoffman, who teaches an undergraduate course on “AI and Marketing Strategy” at the School of Business, takes up consequential questions about how AI may be altering the dynamics of human choice and autonomy. The paper, “How Artificial Intelligence Constrains the Human Experience,” will appear in print in the Journal of the Association for Consumer Research in July 2024.
The article builds on prior research by a number of scholars, including work by Hoffman and Thomas Novak, the Denit Trust Distinguished Scholar and professor of marketing, who co-directs the Center for the Connected Consumer. The Center studies how consumer experience is being reconfigured by interactions with smart devices.
Both scholars have contributed to the emerging body of research on the intersection of human psychology and technology. In this latest work, Professor Hoffman and her co-authors identify three mechanisms: agency transference, parametric reductionism and regulated expression.
Agency transference occurs when consumers cede decision-making to AI systems, which then present them with a narrowed set of choices. Over time, curated music and media playlists on platforms such as Spotify and Netflix may slowly crowd out serendipitous discovery by human users.
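The feedback loop behind this narrowing can be seen in a toy simulation. The sketch below is purely illustrative and is not drawn from the paper; the catalog size, top-5 cutoff and exploration probabilities are all invented parameters. A recommender that ranks items by past engagement, paired with a listener who clicks only on what it surfaces, converges on a handful of items, while even occasional independent browsing preserves far more variety.

```python
import random
from collections import Counter

random.seed(0)

NUM_ITEMS, TOP_K, ROUNDS = 100, 5, 1000  # illustrative parameters

def simulate(explore_prob):
    """One listener interacting with an engagement-ranked recommender."""
    # Start with small random engagement scores for a 100-item catalog.
    engagement = Counter({i: random.random() for i in range(NUM_ITEMS)})
    seen = set()
    for _ in range(ROUNDS):
        if random.random() < explore_prob:
            choice = random.randrange(NUM_ITEMS)   # serendipitous browsing
        else:
            top = [i for i, _ in engagement.most_common(TOP_K)]
            choice = random.choice(top)            # click a recommendation
        engagement[choice] += 1                    # feedback: plays boost rank
        seen.add(choice)
    return len(seen)

for p in (0.5, 0.1, 0.0):
    print(f"explore_prob={p:.1f} -> distinct items ever played: {simulate(p)}")
```

With the exploration probability set to zero, the simulated listener never encounters more than the five items the system happened to rank first, which is the loss of serendipity the authors describe.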
Parametric reductionism describes AI’s tendency to reduce people to a narrow set of parameters, reproducing biases and preconceived notions about human characteristics in discriminatory and misaligned outputs. A well-known example is the automated hiring tool built by Amazon that was found to be biased against women candidates.
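The Amazon episode is an instance of a model inheriting bias from skewed training data, and the mechanism can be shown with a deliberately simplified sketch. Everything below is synthetic and hypothetical: the résumés, the labels and the frequency-based scoring rule are invented for illustration and bear no relation to Amazon’s actual system. Because the token "womens" happens to co-occur with historical rejections in this made-up data, the learned weights penalize it even though it carries no information about skill.

```python
from collections import Counter

# Synthetic training set: historical hiring decisions are skewed, so a
# gender-correlated token ("womens", as in "women's chess club") appears
# mostly on rejected résumés despite saying nothing about ability.
resumes = [
    (["python", "sql", "captain", "chess"], "hired"),
    (["java", "leadership", "chess"], "hired"),
    (["python", "sql", "womens", "chess"], "rejected"),
    (["java", "womens", "volleyball"], "rejected"),
    (["python", "leadership"], "hired"),
    (["sql", "womens", "captain"], "rejected"),
]

# "Training": weight each token by how often it co-occurs with a hire.
hired, total = Counter(), Counter()
for words, label in resumes:
    for w in set(words):
        total[w] += 1
        hired[w] += (label == "hired")

weights = {w: hired[w] / total[w] for w in total}

def score(words):
    # Unseen tokens get a neutral 0.5.
    return sum(weights.get(w, 0.5) for w in words) / len(words)

# Two résumés identical except for the gender-correlated token:
print(score(["python", "sql", "chess"]))            # higher score
print(score(["python", "sql", "chess", "womens"]))  # penalized
```

The second résumé scores lower purely because of the correlated token; the model has reduced a person to parameters that encode past discrimination.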
Regulated expression refers to the tendency of AI users to adjust or alter their usual manner of communicating when interacting with AI systems. Users often regulate their language in an effort to protect their privacy or to help the AI system understand a particular query. This can lead to less authentic human expression and, in turn, cause AI tools such as the large language models (LLMs) that power chatbots to reorient their output toward that more regulated and constrained data.
These three mechanisms, the researchers argue, place constraints on aspects of the human experience, including agency, dignity, diversity, equality and skills. The authors also outline implications for AI developers and steps that public policy practitioners could take to mitigate these issues in the future.