Track 2: Technology & Society
Rom Schrift (Indiana University)

2B. When AI Goes Awry

Friday, March 4
11:15am – 12:45pm EST
Discussant: Bernd Schmitt (Columbia University)
MC: Mansur Khamitov (Indiana University)
Student Coordinator: Jeffrey Kang (Cornell University) (jk2832@cornell.edu)

Competitive Papers

How Personalized Video Recommendation Algorithms Induce Consumers to Believe Any Misguided Information on the Platform
Authors: Sonia Kim (Columbia University, Columbia Business School), Jaeyeon Chung (Rice University), Gita Johar (Columbia University, Columbia Business School)
Presenting Author: Sonia Kim (Columbia University, Columbia Business School)
Across six studies, the current research demonstrates how relying on the YouTube recommendation system may lead consumers to believe misguided information on the platform. As users rely more on the recommendation algorithm to watch newly suggested videos, they tend to trust the YouTube platform as a valid source of information and become dismissive of the credibility of the individual channels that upload the content. Consequently, they fall into the pitfall of believing any information, even conspiracy theories, claimed by these individual channels. We find this effect is robust across a range of unverified historical, scientific, and health claims.
The Unintended Consequences of Raising Awareness: Knowing About the Existence of Algorithmic Racial Bias Widens Racial Inequality
Authors: Shunyuan Zhang (Harvard Business School, Harvard University), Yang Yang (University of Florida)
Presenting Author: Yang Yang (University of Florida)
In May 2016, a series of news articles on algorithmic racial bias went viral and triggered nationwide outrage on social media. How does awareness of the existence of racial bias in some algorithms influence different groups’ receptivity to unrelated, unbiased (i.e., race-blind), and beneficial algorithms? We find that the racial gap in usage of Airbnb’s Smart Pricing (an unbiased and beneficial pricing algorithm) widened by 61.2% after (vs. before) the media event, because awareness of algorithmic racial bias differentially affects the expected financial benefits of Smart Pricing along racial lines, increasing the expected benefits among white hosts while reducing them among Black hosts.
Artificial Intelligence in the Government: Responses to Failures and Social Impact
Authors: Chiara Longoni (Questrom School of Business, Boston University), Luca Cian (Darden, University of Virginia), Ellie Kyung (Wharton School, University of Pennsylvania)
Presenting Author: Chiara Longoni (Questrom School of Business, Boston University)
Artificial Intelligence (AI) is pervading government and transforming how public services are provided to people—from the allocation of government benefits and privileges to the enforcement of laws and regulatory mandates, the monitoring of risks to public health and safety, and the provision of services to the public. Unfortunately, despite technological advances and improvements in performance, AI systems are fallible and may commit errors. How do people respond when learning of AI’s failures? In twelve preregistered studies across a range of policy areas and diverse samples, we document a robust effect of algorithmic transference: algorithmic failures are generalized more broadly than human failures. Rather than reflecting generalized algorithm aversion, algorithmic transference is rooted in social categorization: it stems from how people perceive a group of non-human agents versus a group of humans—as out-groups characterized by greater homogeneity than in-groups of comparable humans. Because AIs are perceived as more homogeneous than people, failure information about one algorithm has higher inductive potential and is transferred to another algorithm at a higher rate than failure information about a person is transferred to another person. Assessing AI’s impact on consumers and societies, we show how the premature or mismanaged deployment of faulty AI technologies may engender algorithmic transference and undermine the very institutions that AI systems are meant to modernize.
The Effect of Anticipated Embarrassment on Consumer Preference for Using Chatbots
Authors: Rumela Sengupta (University of Illinois at Chicago), Lagnajita Chatterjee (Worcester State University), Jeff Parker (University of Illinois at Chicago)
Presenting Author: Rumela Sengupta (University of Illinois at Chicago)
In recent years, businesses have started using a variety of conversational chatbots to help improve their customer support efforts. Technological advances make these chatbots seem more human-like, which leads their users to anthropomorphize them. The current research examines the influence of anticipated embarrassment on consumers’ willingness to use chatbots for online searches. Across three studies, we show that when faced with a decision of whether to use a chatbot to conduct a search, consumers are less likely to do so when they anticipate feeling embarrassed about the search than when they do not. This reluctance to use chatbots results from a sense of perceived social presence while interacting with a chatbot. Together, the findings reveal an unexamined negative impact of anthropomorphizing chatbots, which has important implications for both theory and practice.

Flash Talks

Should Automated Vehicles Favor Passengers Over Pedestrians?
Authors: Julian De Freitas (Harvard Business School)
Presenting Author: Julian De Freitas (Harvard Business School)
Should automated vehicles favor passengers over pedestrians? Existing work suggests ‘yes’, insofar as consumers favor passengers when forced to choose between the two. I find that, if given a third option to treat passengers and pedestrians equally, most consumers choose that option instead. Consumers are also more outraged by firms whose AVs deliberately (but not indiscriminately) harm pedestrians instead of passengers, and their feelings of outrage explain how much they blame the firm for the accident. Given these preferences and moral reactions, AV firms may be better off depicting their AVs as egalitarian.
Anthropomorphism and Virtual Assistants’ Mistakes: Who is to Blame?
Authors: Bianca Kato (University of Guelph), Juan Wang (University of Guelph), Jing Wan (University of Guelph)
Presenting Author: Bianca Kato (University of Guelph)
This research investigates whether anthropomorphizing a virtual assistant (VA) can influence how consumers blame companies when there is a service error. We find that anthropomorphism increases the likelihood of consumers blaming the company that produces the VA, while the blame attributed to the VA does not change. This is because consumers perceive the company as having more control over the anthropomorphized VA. The increased company blame has important downstream managerial implications.

Posters

Human Chefs Cook More Calories: The Impact of Human (vs. Robotic) Food Producer on Calorie Estimation
Authors: Wenyan Yin (Drexel University), Yanliu Huang (Drexel University), Cait Lamberton (University of Pennsylvania)
Presenting Author: Wenyan Yin (Drexel University)
This research explores how production mode (human-made vs. robot-made) affects calorie estimation for vice and virtue foods. Across three studies, we find that healthy food is inferred to have more calories when it is produced by a robot than by a human, whereas the effect is reversed for unhealthy food: unhealthy food produced by a human is estimated to have more calories than its robot-made counterpart. Perceived food quality mediates the relationship between production mode and calorie estimation.
Does Lack of Social Support Predict Maladaptive Relationships with Technology?
Authors: Summer Kim (University of Kansas), Yexin Li (University of Kansas)
Presenting Author: Summer Kim (University of Kansas)
Loneliness, a health crisis even before the pandemic, has only intensified in the last year. One consequence of loneliness is greater attachment to non-human entities like smartphones. Although phones are an important connectivity tool, they can also heighten negative emotions and impede offline relationships. Data from a cross-national dataset and two lab studies reveal that dispositional and situational loneliness predict nomophobia, or the fear of being without one’s mobile phone. Moreover, nomophobic individuals spend more money on smartphone network coverage. Consumers should be aware of these consumption biases and consciously prioritize real-life relationships when they are lonely.
The Effect of Ingratiation on Users’ Evaluation of an Ingratiating Artificial Intelligence
Authors: Umair Usman (University of Kentucky), Aaron Garvey (University of Kentucky)
Presenting Author: Umair Usman (University of Kentucky)
The current research explores how an ingratiating AI affects users’ evaluation of the AI’s usefulness. We find positive effects of ingratiation by a machine-like (vs. human-like) AI on users’ willingness to accept recommendations and on the perceived accuracy of the AI’s future predictions. Drawing on research on the intentions and agency of machine-like (vs. human-like) AI, we propose that users’ perception of the AI’s objectivity is the psychological process underlying the hypothesized effect.
Artificial Intelligence Can Help Counter Perceived Threats
Authors: Itai Linzen (Tel-Aviv University), Yael Steinhart (Tel Aviv University), Ziv Carmon (INSEAD)
Presenting Author: Itai Linzen (Tel-Aviv University)
This research argues that Artificial Intelligence (AI) can play a restorative role when consumers feel threatened. In four studies, we demonstrate that consumers experiencing a threat to their sense of control report higher purchase intentions for AI-based products than non-threatened consumers, because threatened consumers believe such products may help them manage their everyday lives. We further show that this effect is attenuated for non-AI products that merely execute consumers’ instructions without learning their preferences. Finally, we demonstrate that AI-based products may increase consumers’ perceived sense of control.