Friday, March 4
9:30am – 11:00am EST
Discussant: Klaus Wertenbroch (INSEAD)
MC: Gideon Nave (University of Pennsylvania)
Calendar Invite: Add to calendar
Student Coordinator: Jeffrey Kang (Cornell University) (email@example.com)
Friendly and Reliable: Antecedents of Smart Agent Personality
Smart agents are modern technological tools that perform one or more tasks and interface in a mutual and dynamic way with their users (e.g., Amazon Alexa, Apple Siri, Roomba). Despite rapid technological advances and broad adoption, existing research provides limited insight into how these agents are perceived by their users. Prior work suggests a “personality” model for smart agents that includes two primary factors (“friendly” and “reliable”) and seven underlying facets. Extending that work, we confirm the utility of the model and demonstrate that emotional language and the ability to be customized serve as unique antecedents of friendliness and reliability, respectively. Our work informs understanding of how consumers perceive modern devices and how firms can design them to portray distinct personality characteristics.
A Chatbot’s Language Can Undermine Consumer Trust
Chatbots are increasingly used by marketers. However, the language used by consumer-oriented chatbots may influence consumer attitudes and behaviors in unanticipated ways. The present work investigates how consumers respond when chatbots’ language contains linguistic cues often associated with deception (emphatic markers and low lexical diversity). We show that the mere presence of these two linguistic markers within a chatbot’s language negatively impacts trust, measured implicitly and explicitly. In turn, a lack of trust leads to lower purchase intentions and willingness to provide personal information. While marketers may find it intuitive to include emphatic markers or a consistent set of words, this research suggests there may be substantial downsides to doing so. Additionally, this work provides an empirical paradigm to test the effects of language in service interactions, since chatbots provide a way to abstract linguistic factors from the physical characteristics of the interaction, and analyze their effect on trust-dependent behaviors.
Dehumanizing Voice Technology: Phonetic & Experiential Consequences of Restricted Human-Machine Interaction
The use of natural language and voice-based interfaces is gradually transforming how consumers search, shop, and express their preferences. The current work explores how command- (vs. request-) based expression modalities in conversational interfaces negatively affect consumers’ subjective task enjoyment and systematically alter objective vocal features of the human voice. We show that requests (vs. commands) lead to increased phonetic convergence and lower phonetic latency, and ultimately a more natural task experience for consumers.
New Pathways for Improving Engagement with Opposing Views
We develop an interpretable machine learning algorithm to detect “conversational receptiveness” – language that communicates thoughtful engagement during a disagreement. Across several populations (online education forums, Wikipedia editors, local government officials), receptive writers are more persuasive and prevent conflict escalation. To teach receptiveness, we find benefits from a static “receptiveness recipe” explaining the model, and even more so from a personalized feedback system that evaluates writers’ previous responses. Our results show how algorithms can be used to improve the choices people make during difficult conversations.
When Artificial Becomes Real: Role of Mind Perception in Perceived Authenticity of AI Influencers
AI influencers (AIIs) are entities that use machine learning algorithms to develop their content and interact with users on social media. We report that even when the virtual nature of AIIs is made explicit, consumers perceive AIIs as more authentic than real influencers in social media contexts. An experimental study also investigates the mechanism driving this effect: AIIs elicit lower mind perception (agency and experience), which results in heightened perceived authenticity. Thus, while a lack of mind perception is generally associated with consumer aversion to AI, we show that lower mind perception results in positive perceptions of AIIs. Future studies aim to test boundaries of the effect and its downstream impact on brand perception.
Tailoring Recommendation Algorithms to Ideal Preferences Makes Users Better Off
In the digital era, companies rely on computer algorithms to predict consumer preferences and recommend content. In this research, we use machine learning algorithms to generate personalized recommendations tailored to people’s actual or ideal preferences. Although people are more likely to follow both types of customized recommendations (albeit not equally) than non-customized recommendations, they feel better off and are more likely to use the recommendation service again when receiving “ideal” (vs. “actual”) recommendations. This research has profound implications for companies, highlighting their ability to leverage recommendation algorithms for their own benefit while helping consumers live in line with their ideals.
When and Why Consumers Prefer to Interact with Artificial Intelligence Over Human Service Providers
Although consumers may often find themselves talking to conversational Artificial Intelligence (i.e., chatbots) instead of live humans, little is known about consumer preference for human versus chatbot service providers. While prior research has shown that people prefer human medical providers over their AI counterparts, we explore this preference in more common purchase contexts. We find that when self-presentation concerns are activated, people prefer to interact with chatbots rather than humans because they feel less embarrassed interacting with a chatbot. Building on Theory of Mind, we show that this decreased embarrassment is driven by the decreased attribution of mind to chatbots (vs. humans).
A Theoretical Framework on Conceptualization of Autonomy in Personalization through Ring Theory of Personhood
Personalized options may be perceived as appealing, useful, and relevant, but they can prevent consumers from trying new things, discouraging them from naturally evolving and refining their tastes and preferences over the long term. Because AI-based personalization encourages the repetition of past behavior, it can have a long-term impact on consumer autonomy. As a result, a gap emerges between how consumers view their actual self and their ideal self. It is therefore important to study how consumers perceive the benefits of personalization and how it affects consumer behavior. Drawing on the ring theory of personhood, we explain how consumers perceive personalization and autonomy in decision making.
The Role of Modality Authenticity in Consumer Acceptance of Verbal AI
This research introduces "authentic modality" as a dimension of source acceptance that shapes how consumers make decisions when interacting with AI (vs. human) customer service agents. We predict and find that in human-AI interactions, people can be more influenced by AI (vs. human) agents when communicating via text. This effect occurs because people think of verbal communication as a more authentic way for machines (and not humans) to interact with others.