As artificial intelligence (AI) becomes increasingly integrated into competitive and evaluative domains, it is important to understand how individuals psychologically respond to AI opponents. This study examined the psychological effects of losing to an AI versus a human opponent in an emotion-recognition task. Participants were N = 77 undergraduate students from Angelo State University who were randomly assigned to compete against either an AI or a human opponent in a task that involved identifying emotions in dance videos. Regardless of actual performance, all participants received standardized false feedback indicating they had lost to their opponent. Following the task, participants completed self-report measures of self-esteem, stress, self-efficacy, and attitudes toward AI. Independent-samples t-tests revealed no statistically significant differences between conditions; however, effect sizes were small to moderate for self-efficacy (d = .36) and attitudes toward AI (d = .33), with participants in the AI condition reporting greater self-efficacy and more positive attitudes toward AI. These findings were contrary to the hypothesis that losing to an AI would produce negative psychological outcomes. Instead, the results suggest that participants may perceive AI opponents as less personally threatening and maintain a more favorable view of themselves and the AI system after a loss to AI. Future research should explore the effects of higher-stakes or more personally relevant tasks, as well as the long-term implications of human-AI competition for psychological well-being.
The Psychological Effects of a Perceived Loss to an AI versus a Human Opponent
Artificial intelligence (AI) has become an integral part of modern life, shaping both personal and professional spheres as AI-driven technologies continue to expand across industries and are incorporated into daily routines (States News Service, 2023). From smart assistants and recommendation algorithms to AI-powered tools in education and gaming, people are interacting with AI more frequently than ever before (States News Service, 2023). In this evolving technological landscape, competitive environments involving AI are becoming more common, prompting new questions about how humans psychologically respond to these interactions. Although the effects of competition and performance feedback have been widely studied in traditional human-to-human contexts, the rise of AI introduces novel psychological dynamics that remain comparatively underexplored.
In general, competition has been shown to play a crucial role in shaping motivation, self-perception, and performance outcomes (Murayama & Elliot, 2012). Research has found that feedback, especially negative or false feedback, can significantly affect an individual’s self-assessment and behavior. For example, receiving inaccurate negative feedback can lead individuals to question their abilities and experience diminished motivation, a phenomenon Kim and Chiu (2011) described as self-effacement. This reaction is often magnified in competitive scenarios, where individuals may tie their performance outcomes closely to their self-worth (Kim & Chiu, 2011). However, most of this body of research focuses on human competition, leaving a gap in understanding how similar feedback is processed when it involves an AI opponent.
Recent research has started to explore how people respond differently to AI in competitive settings, depending on how the AI is perceived, either as a supportive tool or as a direct rival. When AI is framed as a collaborator, individuals are often more receptive to its input and less emotionally reactive to its decisions (Krakowski et al., 2023). However, when AI is framed as a competitor, people respond with increased resistance or discomfort (Krakowski et al., 2023). This distinction is vital because the framing of the AI can influence not just attitudes toward the technology itself, but also internal processes such as self-esteem, perceived stress, and self-efficacy.
Further illustrating this point, one study investigated how individuals reacted to losing a game that involved elements of both skill and luck. Participants were led to believe they were competing against either an AI or a human opponent. Interestingly, although most participants attributed their losses to luck regardless of the opponent’s identity, those who believed they lost to an AI opponent were significantly less likely to re-engage in competition (Yokoi & Nakayachi, 2024). This reluctance to continue suggests that losses to AI may have a deeper psychological impact. Related research shows that participants attribute losses to their own and their opponent’s abilities more than to any other factor, regardless of opponent type (Yokoi & Nakayachi, 2022); even so, the number of participants choosing to re-challenge the game was again lower in the AI condition than in the human condition, highlighting a potential demotivating effect when competing against AI (Yokoi & Nakayachi, 2022). Such findings point to the subtle but meaningful ways in which the identity of an opponent can influence not only emotional responses but also subsequent motivational behaviors following a perceived loss.
Understanding these psychological dynamics is increasingly relevant as AI systems become more involved in everyday activities. Whether in gaming environments, educational platforms, or professional settings, AI is playing a growing role in evaluating and interacting with human users (AI and Your Daily Life, 2024). As AI takes on more roles that include judging and performing alongside humans, the psychological toll of these interactions warrants closer examination. Moving forward, concerns about how such interactions affect individuals’ self-esteem, stress, self-efficacy, and attitudes toward AI systems should be taken into account. As AI becomes more ubiquitous, understanding the psychological effects of these interactions becomes increasingly important for ensuring that these technologies foster positive relationships and do not inadvertently undermine well-being.
The Present Study
Building on these insights, the current study aimed to bridge gaps in the literature and provide deeper insights into the psychological effects of human-AI competition. Specifically, this work sought to further investigate these dynamics within an emotion-recognition task. In this study, participants were led to believe they were competing against either an AI or a human opponent in a task that involved identifying emotions in dance videos. Regardless of their actual performance, participants received standardized feedback stating that they had lost to their opponent. This experimental design allowed us to examine how this perceived loss affected various psychological factors, including self-esteem, stress, self-efficacy, and attitudes toward AI. We hypothesized that participants who believed they lost to an AI opponent would report lower self-esteem, higher levels of stress, lower self-efficacy, and more negative attitudes toward AI compared to those who believed they lost to a human opponent.
Method
Participants
Seventy-seven undergraduate students (N = 77) from Angelo State University were recruited to participate in this study. Participants were 80.5% women, 18.2% men, and 1.3% non-binary, ranging in age from 18 to 25 years old (M = 19.58, SD = 1.44). The self-reported race breakdown of the sample was 51.9% European American/White, 31.2% Hispanic/Latino American, 9.1% more than one race, 3.9% African American, 2.6% other, and 1.3% Asian American. All study procedures were approved by the Institutional Review Board at Angelo State University (IRB #ARA030525).
Procedure
Participants signed up through Sona Systems and completed the study online via Qualtrics survey software. After providing consent, they completed an emotion-recognition task under the impression that they were competing against either an AI or a human opponent, which was the experimental manipulation in this study. Participants then received false feedback indicating that they had lost to their opponent. Afterward, they completed self-report questionnaires assessing the following dependent variables of interest: self-esteem, stress, self-efficacy, and attitudes toward AI. Finally, participants provided standard demographic information and were then thanked and debriefed.
Experimental Manipulation
This study utilized a between-subjects design in which participants were randomly assigned to believe they were competing against either an AI or a human opponent in an emotion-recognition task. Participants watched five brief videos of a white silhouette dancing on a black background and were asked to identify the emotion, from a provided list, that they felt best fit the dance displayed in each video. Regardless of their actual performance, all participants received standardized false feedback stating that they had lost to their AI opponent (experimental condition) or human opponent (control condition). The feedback was delivered on the screen immediately after the emotion-recognition task was completed and differed based on whether participants were in the AI or human opponent condition. In the AI opponent condition, participants were informed that their responses had been compared to those of an AI opponent and that, after reviewing the responses, the AI had provided more accurate answers, resulting in a higher emotion-recognition score for the AI. In the human opponent condition, participants were told that their responses had been compared to those of a previous participant in the study, who had provided more accurate answers, resulting in a higher score for the previous participant. In both conditions, the feedback was designed to be neutral and nonjudgmental, acknowledging the participant’s efforts before moving on to the next section of the study. After completing the task and receiving the false feedback, participants completed the self-report measures of the dependent variables.
Measures
The Rosenberg Self-Esteem Scale (Rosenberg, 1965) was used to measure global self-esteem, assessing participants’ overall self-worth and self-acceptance. Example items included “On the whole, I am satisfied with myself” and “I feel that I do not have much to be proud of” (reverse-coded). It consists of ten items rated on a scale from 1 (strongly disagree) to 4 (strongly agree). Higher scores indicate greater self-esteem; α = .91, M = 2.72, SD = .69.
The Perceived Stress Scale (Cohen et al., 1983) was used to evaluate the degree to which individuals perceive their lives as stressful. Example items include “I get upset because of something that happens unexpectedly” and “I find that I cannot cope with all the things that I have to do.” It consists of ten items rated on a scale from 1 (strongly disagree) to 7 (strongly agree), measuring feelings of unpredictability, lack of control, and overload. Higher scores indicate greater perceived stress; α = .87, M = 4.18, SD = 1.12.
The General Self-Efficacy Scale (Schwarzer & Jerusalem, 1995) was used to assess participants’ beliefs in their ability to handle challenging situations and achieve goals. Example items include “I can always manage to solve difficult problems if I try hard enough” and “I can remain calm when facing difficulties because I can rely on my coping abilities.” It consists of ten items rated on a scale from 1 (not true at all) to 4 (exactly true). Higher scores indicate greater perceived self-efficacy; α = .83, M = 3.07, SD = .47.
The AI Attitude Scale (Grassini, 2023) was used to measure attitudes toward artificial intelligence (AI), including trust, perceived usefulness, and potential risks. Example items include “I believe that AI will improve my life” and “I think AI technology is a threat to humans” (reverse-coded). It consists of four items rated on a scale from 1 (strongly disagree) to 7 (strongly agree). Higher scores indicate more positive attitudes toward AI; α = .83, M = 4.06, SD = 1.30.
Demographic Questionnaire. Participants responded to standard demographic questions reporting their sex, gender, age, and race/ethnicity.
Results
Independent-samples t-tests were conducted to test whether self-esteem, stress, self-efficacy, or attitudes toward AI differed following a loss to an AI opponent (n = 38) versus a human opponent (n = 39). All statistical tests used a significance level of α = .05.
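For readers who wish to verify the reported statistics, the pooled-variance independent-samples t statistic and Cohen’s d can be recovered directly from each condition’s means, standard deviations, and sample sizes. The following Python sketch is illustrative only (it assumes the conventional pooled-variance formulas and was not part of the study’s analysis pipeline; the function name is our own):

```python
import math

def pooled_t_and_d(m1, s1, n1, m2, s2, n2):
    """Pooled-variance independent-samples t statistic and Cohen's d
    computed from summary statistics (means, SDs, and sample sizes)."""
    df = n1 + n2 - 2
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    d = (m1 - m2) / sp
    return t, d, df

# Self-efficacy comparison, using the values reported below:
# AI condition (M = 3.16, SD = .45, n = 38) vs.
# human condition (M = 2.99, SD = .48, n = 39)
t, d, df = pooled_t_and_d(3.16, 0.45, 38, 2.99, 0.48, 39)
```

The magnitudes of the computed values agree, to rounding, with the t(75) = -1.59 and d = .36 reported for self-efficacy; the sign of t depends only on the order in which the group means are subtracted.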
Stress
Results showed no significant difference in perceived stress depending on whether participants believed they lost to an AI opponent or a human opponent. The average level of stress for participants in the AI opponent condition (M = 4.12, SD = 1.02) was lower than that of those in the human opponent condition (M = 4.24, SD = 1.22), but this difference was not statistically significant, t(75) = .44, p = .664, and the effect size was very small, d = .10.
Self-Esteem
Self-esteem scores were nearly identical for those in the AI condition (M = 2.74, SD = .66) and those in the human condition (M = 2.71, SD = .73). There was no statistically significant difference in self-esteem between the conditions, t(75) = -.18, p = .855, and the effect size was very small, d = .04, suggesting that the identity of the opponent had no meaningful effect on self-esteem.
Self-Efficacy
Self-efficacy scores were higher in the AI condition (M = 3.16, SD = .45) compared to the human condition (M = 2.99, SD = .48). Although this difference was not statistically significant, t(75) = -1.59, p = .115, the effect size was small to moderate, d = .36, highlighting the need for further exploration of the effect of opponent type on self-efficacy.
AI Attitudes
Participants who lost to an AI opponent reported more favorable attitudes toward AI (M = 4.28, SD = 1.22) than those who lost to a human opponent (M = 3.85, SD = 1.37), although this difference was also not statistically significant, t(75) = -1.44, p = .156. However, the effect size for this difference was also small to moderate (d = .33), suggesting that losing to AI may be associated with a more positive perception of AI overall, despite the loss, and warrants further exploration.
Discussion
This study examined the psychological effects of losing to an AI versus a human opponent in an emotion-recognition task, with a focus on the outcomes of self-esteem, stress, self-efficacy, and attitudes toward AI. Although none of the differences between groups reached statistical significance, the observed effect sizes for self-efficacy and attitudes toward AI suggest potentially meaningful trends that invite further investigation.
Contrary to the study’s hypothesis, which predicted that losing to an AI opponent would have greater negative psychological consequences than losing to a human opponent, participants in the AI condition reported higher levels of self-efficacy and more positive attitudes toward AI. This was an unexpected finding, as previous research has suggested that people might experience greater discomfort or lowered self-assessment when outperformed by a non-human intelligence, possibly due to feelings of dehumanization or a threat to personal competence (Krakowski et al., 2023). Instead, the results of this study suggest a different interpretation: participants may have viewed the AI opponent as a less socially threatening or less personally judgmental competitor than a fellow human (Krakowski et al., 2023).
In the case of self-efficacy, those who lost to the AI may have been more likely to externalize the loss, attributing it to an external factor such as luck or the machine’s superior capabilities and computational advantages rather than a lack of personal skill (Yokoi & Nakayachi, 2024). In contrast, losing to a human opponent may have posed a greater ego threat due to the potential for unfavorable social comparison (Vohs & Heatherton, 2004), resulting in slightly lower perceptions of personal efficacy following a loss to a human opponent than to an AI opponent.
Similarly, the finding that participants in the AI condition reported more favorable attitudes toward AI, even after receiving feedback indicating they lost to the AI, suggests a degree of openness or acceptance toward AI as a fair or capable agent. These results contradict fears that competitive AI would lead to distrust or aversion (Krakowski et al., 2023) and instead imply that some individuals may have come to respect AI’s capabilities more after interacting with it, even in adversarial contexts.
Limitations and Future Directions
One potential limitation of this study is that the framing of the feedback was neutral and impersonal, which may have helped buffer negative reactions. Unlike competitive tasks that involve explicit judgment or interpersonal evaluation, the emotion-recognition task in the study was relatively abstract and nonconfrontational. This may have reduced the emotional stakes and made it easier for participants to accept the outcome without a strong impact on the psychological variables measured (Garcia et al., 2013). Additionally, because the study was conducted remotely through an online platform, the lack of face-to-face interaction and physical presence may have further distanced participants emotionally from the experience (Mallen et al., 2003), minimizing the perceived intensity or social implications of the loss. Without the immediacy of in-person competition, participants may have been less inclined to internalize the loss, particularly in the AI condition, where the opponent was not human to begin with.
Given the novelty of this research, the observed trends, although not statistically significant, are valuable in shaping future hypotheses and studies that address limitations and expand upon the current findings. For example, it is plausible that in higher-stakes situations or tasks with greater personal relevance, the psychological effects of losing to an AI opponent would be more pronounced. Future research could explore these possibilities by replicating the study with a larger, more diverse sample, incorporating qualitative measures of emotional reactions, or testing more personally relevant tasks such as academic assessments, job performance feedback, or social decision-making games. Longitudinal designs could also assess how repeated exposure to competitive AI interactions affects self-perception and trust over time.
Implications and Conclusions
This study explored the psychological effects of losing to an AI opponent versus a human opponent in an emotion-recognition task, focusing on the outcomes of self-esteem, stress, self-efficacy, and attitudes toward AI. Although no statistically significant differences were found between the opponent conditions in any of the dependent variables of interest, the small to moderate effect sizes for self-efficacy and attitudes toward AI suggest interesting trends that warrant further investigation. Contrary to the hypotheses, participants who lost to the AI opponent reported higher self-efficacy and more favorable attitudes toward AI, potentially indicating that AI opponents are perceived as less personally threatening and less judgmental than human opponents.
These findings offer valuable insights into how individuals may respond to AI in competitive contexts, highlighting the potential for AI to be viewed as a neutral or even favorable competitor rather than a source of negative psychological consequences. This could have important implications for the integration of AI into various competitive and evaluative domains, including gaming, education, and the workplace. However, as the study involved a relatively low-stakes and impersonal task, future research should explore the effects of AI competition in more personally relevant or higher-stakes contexts to better understand the long-term psychological implications of losses to AI. As AI continues to play an increasing role in society, ongoing research is crucial to ensure that these technologies are implemented in ways that promote positive psychological outcomes and well-being.