Cognitive Load, Emotional Asymmetry, and the Third-Person Effect: Explaining Audience Responses to AI-Generated Health Misinformation

AUTHORS:

Chiao-Chieh Chen & Yu-Ping Chiu

CITATION

Chen, C.-C., & Chiu, Y.-P. (2026). Cognitive Load, Emotional Asymmetry, and the Third-Person Effect: Explaining Audience Responses to AI-Generated Health Misinformation. New Ideas in Media and Communication, 2, 63–99. https://doi.org/10.5281/zenodo.18403604


CORRESPONDENCE

vincent740201@gmail.com


ALL OPEN ACCESS

All articles published in New Ideas in Media and Communication, a series of the Media and Journalism Research Center, are open access. The center does not charge any fees for processing articles in this series.

COPYRIGHT

© 2026 Chen & Chiu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.


Abstract

This study extends the Third-Person Effect (TPE) framework into the era of generative artificial intelligence by examining how cognitive and emotional mechanisms shape audience responses to AI-generated health misinformation. Across a two-stage online experiment, participants were exposed to AI- or human-written health messages and later received one of three literacy interventions (AI principle education, fact-checking, or emotional inoculation). Results showed that higher AI literacy, critical thinking, and verification efficacy amplified Third-Person Perception (TPP), whereas cognitive load weakened it, revealing an “ability–difficulty” moderation. Negative emotions such as fear, anxiety, and confusion increased TPP, while hope reduced it. Literacy interventions, particularly emotional inoculation, significantly decreased TPP and enhanced detection accuracy, with route–audience congruence effects consistent with the Elaboration Likelihood Model. These findings advance TPE theory beyond anthropocentric media contexts, conceptualizing it as a technocognitive bias moderated by cognitive resources and emotional asymmetry. The study further demonstrates that targeted literacy interventions can recalibrate public perception and trust toward AI-mediated health communication.

Keywords

AI-generated misinformation; third-person effect; health communication; media literacy intervention; cognitive load

Introduction

The AI Misinformation Challenge in Health Communication

Generative AI systems have fundamentally altered the misinformation landscape (Saeidnia et al., 2025). Unlike deliberate human fabrications, AI-generated health content often contains unintentional errors, or “hallucinations,” that produce seemingly credible but factually incorrect information, including fabricated sources, false data, and potentially dangerous health recommendations (Monteith et al., 2024). A message claiming that “drinking lemon water daily prevents 90% of cancers,” accompanied by citations to nonexistent “Harvard Medical School 2024 reports,” exemplifies this phenomenon: perfect grammar and authoritative framing mask complete fabrication.

This shift from “deliberate human fabrication” to “AI unintentional generation + human inability to detect” presents unique challenges. While generative AI accounted for less than 1% of data in 2021[1], recent analyses suggest a rapid surge, with some studies indicating AI-generated content now comprises a significant portion of digital media. The public’s tendency to anthropomorphize AI, attributing human qualities like “understanding” and “honesty” to statistical prediction systems, further compounds detection difficulties (Monteith et al., 2024).

Health misinformation carries particularly severe consequences, as erroneous information directly threatens physical well-being. In closed messaging platforms like LINE (penetration rate: 95.7% in Taiwan), trust-in-source effects, forwarding convenience, and verification difficulties create ideal conditions for misinformation dissemination (Aregbesola & Van der Walt, 2024; Nielsen & Graves, 2017). However, existing research predominantly examines pandemic contexts (Kim, 2023; Yang & Tian, 2021), leaving routine health information, including preventive care, disease treatment, and public health policies, substantially underexplored despite its cumulative long-term impact.

Theoretical Challenges for Third-Person Effect Research

Third-Person Effect (TPE) theory posits that individuals perceive media messages as having greater influence on “others” than on “self,” stemming from egocentric and optimistic biases (Davison, 1996). This cognitive bias drives subsequent behaviors including support for content regulation and media literacy education (Kim, 2023; Kong & Yang, 2025). While established across numerous contexts, AI-generated content poses three fundamental challenges to traditional TPE frameworks.

First, predictor variables require expansion. Traditional research links issue knowledge (e.g., health literacy) to third-person perception (TPP). However, when misinformation originates from AI systems, technical understanding (AI literacy) may prove more critical than domain knowledge alone (Ng et al., 2021; Saeidnia et al., 2025). Individuals recognizing AI hallucinations and fabrication patterns may develop enhanced superiority beliefs: “I understand both the health topic AND the technological deception, but ordinary people lack this dual expertise.”

Recent advances in algorithm perception research further illuminate these challenges. Logg et al. (2019) demonstrated algorithm appreciation (people preferring algorithmic to human judgment in certain contexts), which may paradoxically weaken TPP when audiences perceive AI as an “objective tool” rather than a “deceptive actor.” However, other research identified task-dependent algorithm aversion (Castelo et al., 2019), where subjective judgment domains (like health decisions) resist algorithmic authority. This tension between algorithmic trust and aversion creates ambiguous attribution processes absent in traditional TPE contexts: audiences may simultaneously view AI as “technically superior” yet “untrustworthy for health advice,” complicating the formation of superiority beliefs central to third-person perception.

Furthermore, in human-machine communication frameworks (Guzman & Lewis, 2020), AI occupies an ontologically ambiguous position, neither fully autonomous agent nor passive tool. This post-anthropocentric perspective challenges TPE’s implicit assumption of human-to-human influence asymmetry. When message sources lack clear intentionality, the psychological mechanisms driving “I’m less vulnerable than others” may fundamentally shift from social comparison (comparing human judgment capabilities) to technocognitive assessment (evaluating technical literacy differentials).

Second, established ability-TPP relationships may reverse under high cognitive load (Leppink et al., 2013). Traditional TPE assumes linear relationships in which higher competence amplifies TPP. Yet AI content’s hyper-realism, characterized by flawless grammar, logical coherence, and authoritative framing, dramatically increases judgment difficulty. When even high-ability individuals struggle with identification tasks, self-awareness of vulnerability may attenuate rather than amplify perceived self-other gaps (Wolniewicz et al., 2018), contradicting core TPE predictions.

Third, literacy interventions lack evidence-based design principles. Technical detection systems face accuracy, bias, and transparency challenges (Saeidnia et al., 2025), while regulatory censorship receives limited public support (Jang & Kim, 2018). Media literacy education offers promising alternatives, yet optimal intervention designs remain theoretically underspecified (Ng et al., 2021). According to the Elaboration Likelihood Model (ELM), most audiences process information through “peripheral routes” (heuristics, emotional appeals) (Sikosana et al., 2025), suggesting interventions targeting emotional manipulation tactics should outperform cognitively demanding AI principle education; yet empirical evidence remains absent.

These challenges necessitate extending TPE theory beyond its anthropocentric origins to account for technocognitive biases moderated by cognitive resources and emotional asymmetry (Leppink et al., 2013). This study addresses these theoretical gaps by integrating TPE with ELM, Social Cognitive Theory (SCT), and the Risk Information Seeking and Processing (RISP) model to construct a comprehensive framework explaining:

  1. Cognitive mechanisms: Why audiences develop third-person perceptions of AI misinformation
  2. Moderating factors: How individual traits, content characteristics, and emotional responses shape TPP intensity
  3. Intervention pathways: Which literacy education approaches effectively recalibrate perceptions and behaviors
  4. Audience heterogeneity: How individual differences moderate intervention effectiveness

Research Objectives and Questions

Based on the research background and motivations described above, this study aims to explore the third-person effect of health misinformation in the AI era and examine the effectiveness of different media literacy intervention measures. Specifically, this study encompasses four research objectives with corresponding research questions:

Research Objective 1: Examine the Influence of Individual Traits on Third-Person Perception in the AI Era

RQ1: Can audiences effectively distinguish between AI-generated and human-created health information? What is the baseline identification accuracy rate?

Research Objective 2: Explore the Influence of Content Types on Third-Person Perception

RQ2: Do different types of AI-generated health misinformation (preventive care, disease treatment, public health) produce differentiated TPP intensity?

Research Objective 3: Validate the Pathway from Third-Person Perception to Behavioral Intentions

RQ3: Does third-person perception of AI-generated health misinformation significantly predict:

  • Direct forwarding intentions
  • Sharing with warning annotations
  • Support for AI content labeling policies
  • Support for media literacy education

Research Objective 4: Evaluate the Effectiveness of Different Media Literacy Interventions

RQ4: Which media literacy intervention proves most effective in:

  • Improving AI misinformation identification accuracy
  • Reducing cognitive load during authenticity judgments
  • Reducing third-person perception
  • Shifting policy attitudes toward literacy education

Literature Review

AI-Generated Health Misinformation: Conceptual Framework

This study adopts the term “misinformation” to describe fabricated information that mimics legitimate content in form but not in organizational process or intent (Lazer et al., 2018). While “disinformation” denotes deliberate deception, “misinformation” encompasses both intentional and unintentional errors, particularly relevant for AI-generated content where many inaccuracies stem from technical limitations rather than malicious intent (Garimella & Chauchard, 2024).

Following the classifications of Fu and Oh (2023) and Tang et al. (2024), health misinformation encompasses three categories with distinct psychological profiles: (1) Preventive Care (dietary recommendations, exercise guidance) typically triggers hope and optimism, (2) Disease Treatment (medication usage, surgical procedures) evokes mixed hope and anxiety, and (3) Public Health (policies, epidemic prevention) generates fear and protective motivation. These differentiated emotional responses shape information processing depth and third-person perception formation, consistent with the Risk Information Seeking and Processing (RISP) model (Griffin et al., 1999).

AI-generated health content presents three distinctive challenges absent in traditional misinformation (Saeidnia et al., 2025). Hyper-realism (flawless grammar and logical coherence) renders linguistic quality assessment ineffective, potentially weakening rather than strengthening third-person perceptions when high-ability individuals recognize universal judgment difficulties. Attribution ambiguity alters defensive mechanisms, as content originates from AI “hallucinations” rather than deliberate deception, while the public’s tendency to anthropomorphize AI (Monteith et al., 2024) introduces novel cognitive biases. Finally, AI’s capacity for scalable, customized content generation enables precision targeting that may trigger stronger emotional resonance, affecting information processing routes unexplored in traditional TPE research.

Third-Person Effect Theory: Core Mechanisms and Extensions

Third-Person Effect (TPE) theory, proposed by Davison (1996), posits that individuals perceive communication messages as having greater influence on “others” than on “self.” This cognitive bias stems from self-enhancement motivation: people maintain positive self-images by overestimating judgment capabilities while underestimating susceptibility (Perloff, 2002). TPE comprises two components: Third-Person Perception (TPP), calculated as perceived influence on others minus perceived influence on self, and behavioral consequences including support for content regulation and media literacy education (Kong & Yang, 2025; Ng et al., 2021).

ELM provides theoretical mechanisms for TPP formation (Petty et al., 1986). High critical thinking individuals assume they employ central routes (systematic analysis) while others use peripheral routes (easily misled by superficial cues), generating stronger TPP (Sosu, 2013). This explanation suggests differential literacy intervention effects: low critical thinkers habituated to peripheral routes should benefit more from emotional inoculation targeting heuristic cues, while high critical thinkers should respond better to fact-checking training strengthening systematic analysis (Kozyreva et al., 2024).

In AI contexts, traditional TPE mechanisms require reconsideration. While competence-related traits (knowledge, self-efficacy) typically correlate positively with TPP, AI content’s hyper-realism dramatically increases task difficulty, potentially producing non-linear relationships. Low-difficulty tasks generate strongest TPP among high-ability individuals (“I can identify but others cannot”), while high-difficulty tasks may weaken TPP as high-ability individuals recognize universal judgment challenges (“even I struggle, everyone struggles”). This suggests cognitive load moderates ability-TPP relationships, a hypothesis requiring empirical testing (Leppink et al., 2013).

The emotional dimension of TPE, while recognized in pandemic research (Kim, 2023; Yang & Tian, 2021), requires deeper anchoring in affective intelligence theory. Following this dual-system model, anxiety triggers surveillance systems promoting systematic information processing, whereas enthusiasm activates dispositional systems relying on heuristics. In AI misinformation contexts, this suggests anxiety may paradoxically reduce TPP among high-literacy individuals (enhanced surveillance reveals universal detection difficulties) while amplifying TPP among low-literacy individuals (heightened threat perception without corresponding coping resources). This asymmetric emotional mediation represents a critical theoretical extension, as traditional TPE models treat emotions as uniform amplifiers rather than trait-dependent moderators.

Additionally, inoculation theory (Van der Linden et al., 2017) provides theoretical justification for emotional inoculation interventions. By exposing audiences to weakened forms of manipulative tactics, analogous to medical immunization, inoculation builds psychological resistance to subsequent persuasion attempts. In AI contexts where emotional manipulation operates through hyper-realistic narrative transportation (Green & Brock, 2000), pre-emptive awareness of these tactics may disrupt automatic affective responses that amplify third-person perceptions.

Individual Predictors of Third-Person Perception

Traditional TPE research demonstrates that perceived knowledge (self-assessed understanding of specific issues) positively correlates with TPP (Salwen & Dupagne, 2001; Wei & Lo, 2007; Yang & Tian, 2021). However, when misinformation is AI-generated, technical understanding may prove more critical than domain knowledge alone (Saeidnia et al., 2025). AI literacy encompasses cognitive understanding (recognizing AI operates through statistical prediction, understanding hallucination phenomena), skills (identifying fabricated citations and overly precise false statistics), and attitudes (recognizing importance of learning identification methods) (Ng et al., 2021). In health contexts, high AI literacy individuals possess dual superiority (issue knowledge plus technical knowledge) amplifying beliefs that “I can identify AI fabrications but ordinary people lacking technical knowledge will be misled” (Kong & Yang, 2025).

H1: AI literacy positively correlates with third-person perception in AI-generated health misinformation contexts.

The concept of algorithmic authority (Bucher, 2017; Park & Young Yoon, 2025) further contextualizes AI literacy’s role in TPP formation. Algorithmic authority refers to the perceived legitimacy and credibility audiences attribute to algorithmic systems’ outputs. High AI literacy individuals understand that algorithms produce statistically probable outputs lacking semantic comprehension, thereby resisting unwarranted algorithmic authority attribution. This technical skepticism generates a dual superiority belief: “I recognize AI limitations preventing blind trust, whereas others uncritically accept algorithmic outputs as authoritative.” This mechanism extends traditional TPE’s knowledge-based superiority to encompass meta-cognitive awareness of sociotechnical systems, a distinctively contemporary form of perceived advantage absent in pre-AI misinformation contexts.

Moreover, automation bias research reveals systematic overreliance on automated decision aids, even when those aids produce errors. In health misinformation contexts, audiences exhibiting automation bias may paradoxically trust AI-generated content more than human-authored claims due to perceived computational objectivity. High AI literacy individuals recognizing this cognitive vulnerability in others may develop amplified TPP through dual mechanisms: direct knowledge superiority (“I understand AI hallucinations”) and meta-awareness of others’ automation bias (“they don’t recognize their algorithmic overreliance”).

Critical thinking disposition reflects habitual tendencies to systematically evaluate information, encompassing analytical thinking, open-mindedness, and truth-seeking (Sosu, 2013). According to ELM, high critical thinkers employ central routes (deep evidence analysis, source verification) while low critical thinkers rely on peripheral routes (source credibility heuristics, emotional appeals) (Sikosana et al., 2025). High critical thinkers may develop TPP through differential processing assumptions: “I carefully examine evidence while ordinary people only see surfaces and are easily misled by professional terminology.” In AI contexts where content exhibits hyper-realism, this processing-style differential assumption may particularly strengthen TPP.

H2: Critical thinking disposition positively correlates with third-person perception.

Beyond predicting TPP, critical thinking may moderate literacy intervention effects. According to ELM, low critical thinkers habituated to peripheral routes should respond better to emotional inoculation (exposing manipulation tactics), while high critical thinkers should respond better to fact-checking training (providing systematic verification tools) (Sikosana et al., 2025).

Self-efficacy represents subjective judgments of one’s capabilities to execute courses of action (Bandura, 1977), with empirical research confirming positive correlations with TPP (Lee & Park, 2016). Information verification efficacy (confidence in distinguishing health information authenticity) may prove more directly relevant than general health self-efficacy. High verification efficacy individuals believe they possess effective verification strategies while attributing erroneous judgments to others’ lack of these capabilities. In AI contexts where hyper-realism makes “verification” a critical skill differential, this superiority belief intensifies.

H3: Information verification efficacy positively correlates with third-person perception.

Cognitive load refers to mental burden imposed by information processing tasks (Leppink et al., 2013). In AI misinformation identification contexts, cognitive load originates from perfect grammar, logical coherence, and professional terminology that substantially increase judgment difficulty (Garimella & Chauchard, 2024; Saeidnia et al., 2025). Critically, high cognitive load may weaken rather than strengthen TPP, challenging traditional predictions.

When identification tasks prove sufficiently difficult, even high-ability individuals perceive “even I have difficulty judging,” thereby recognizing their own vulnerability. This self-awareness weakens egocentric bias, narrowing self-other perception gaps. Wolniewicz et al. (2018) found cognitive load diminishes risk assessment accuracy, suggesting it serves as an “equalizer” reducing perceived competence gaps.

H4: Cognitive load negatively correlates with third-person perception.

Content Type and Emotional Mediation Pathways

The three health misinformation categories trigger differentiated emotional profiles and risk perceptions. According to the Risk Information Seeking and Processing (RISP) model, individuals’ information processing depth varies by perceived risk intensity (Griffin et al., 1999). Disease treatment information involves immediate health threats, evoking higher risk perceptions than preventive care advice, which affects both emotional responses and subsequent TPP formation.

Yang and Tian (2021) demonstrated that fear and anxiety significantly strengthen TPP in COVID-19 contexts through protective motivation; individuals believing “I remain calm but others will panic” support regulatory interventions to protect vulnerable others. However, routine health contexts lack pandemic-level threat intensity, potentially attenuating these emotional pathways. When health information concerns everyday wellness rather than immediate survival, fear-based protective motivation may prove insufficient to amplify perceived self-other gaps.

H5a: Fear positively correlates with TPP in AI-generated health misinformation contexts.

H5b: Anxiety positively correlates with TPP.

In AI contexts where hyper-realistic content creates genuine judgment difficulty, confusion emerges as a critical emotional response. Individuals experiencing confusion face a psychological dilemma: acknowledge vulnerability or maintain superiority. Research on defensive attribution suggests people resolve this through projection, assuming that “although I feel confused and remain vigilant, others feeling confused will credulously believe” (Perloff, 2002). This defensive mechanism paradoxically transforms confusion (typically associated with uncertainty) into a source of TPP amplification.

H5c: Confusion positively correlates with TPP.

Preventive care misinformation often triggers hope by promising simple health solutions (“drink lemon water to prevent cancer”) (Fu & Oh, 2023). Hope reduces TPP through a distinct mechanism: narrowing self-other psychological distance via shared positive expectations. When individuals experience hope about health improvements, they project this optimism onto others, assuming “we all want this to work,” thereby reducing the perceived gap between self and others’ susceptibility. This contrasts with negative emotions that emphasize difference (“I cope better than others”).

H5d: Hope negatively correlates with TPP.

Behavioral Consequences of Third-Person Perception

Traditional TPE demonstrates negative TPP-sharing correlations (Yang & Tian, 2021). However, in AI contexts, this study observes a compromise strategy: sharing with warning annotations (e.g., “please verify independently”). High-TPP individuals develop differentiated strategies: avoiding direct forwarding to prevent dissemination while adopting warning-annotated sharing to balance information sharing with protective motivation.

The effectiveness of literacy interventions must be evaluated through the lens of contemporary misinformation dynamics specific to generative AI. Unlike traditional fact-checking (which targets specific falsehoods), inoculation-based approaches build broad-spectrum resistance to manipulation tactics (Roozenbeek & Van der Linden, 2019). Recent evidence from “Bad News” and “Go Viral!” games demonstrates that active inoculation (learning-by-doing) produces stronger attitudinal resistance than passive fact presentation (Basol et al., 2020). This aligns with cognitive load theory’s distinction between germane load (schema construction promoting learning) and extraneous load (irrelevant processing inhibiting learning). Emotional inoculation targeting recognition of manipulation tactics imposes lower extraneous load than technical AI principle education requiring comprehension of statistical models, which is particularly critical given most audiences’ limited technical literacy.

Furthermore, the timing and sequencing of interventions matter. Ecker et al. (2022) demonstrated continued influence effects, where corrections fail to eliminate misinformation’s impact due to mental model persistence. Pre-emptive inoculation proving more effective than post-exposure correction suggests literacy education should occur before rather than after misinformation exposure, precisely the design rationale employed in this study’s two-stage experimental structure.

H6a: TPP negatively correlates with direct forwarding intention.

H6b: TPP positively correlates with warning-annotated sharing intention.

Unlike content removal, mandatory AI content labeling involves lower controversy: it preserves content while disclosing sources, empowering independent judgment. High-TPP individuals support mechanisms enabling others to identify risks through “providing information” empowerment rather than “prohibiting content” paternalism (Vaccari & Chadwick, 2020).

H7: TPP positively correlates with AI content mandatory labeling policy support.

High-TPP individuals believe others need education to enhance identification capabilities (Baek et al., 2019; Jang & Kim, 2018). However, they may simultaneously underestimate education efficacy through fixed ability beliefs, technical complexity pessimism, and self-exception logic (attributing personal capabilities to innate traits rather than learnable skills).

H8a: TPP positively correlates with media literacy education support.

H8b: TPP negatively correlates with confidence in literacy education efficacy.

Media Literacy Interventions: Theoretical Foundations

ELM distinguishes central route processing (high elaboration, systematic analysis requiring motivation and ability) from peripheral route processing (low elaboration, reliance on heuristic cues) (Petty et al., 1986). Critically, most people use peripheral routes for routine information processing. Intervention effectiveness depends on alignment with habitual processing routes: emotional inoculation targets peripheral cues (should prove most effective for general audiences), AI principle education requires central route processing (may be cognitively demanding for peripheral route users), and fact-checking training occupies middle ground (provides tools but requires application) (Kozyreva et al., 2024; Sikosana et al., 2025).

H9a: Emotional inoculation group’s AI misinformation identification accuracy improvement significantly exceeds AI principle education and fact-checking training groups.

H9b: Emotional inoculation group’s cognitive load reduction significantly exceeds other groups.

H9c: All three intervention groups’ TPP intensity significantly decreases compared to control group.

While emotional inoculation should prove most effective for general populations, high critical thinking individuals may respond differently. Vraga and Tully (2021) found intervention effectiveness varies by baseline literacy.

H10a: For low critical thinking individuals, emotional inoculation intervention effects significantly exceed AI principle education.

H10b: For high critical thinking individuals, fact-checking training intervention effects significantly exceed emotional inoculation.

Literacy interventions may alter policy attitudes through psychological transformation (Kozyreva et al., 2024). Pre-intervention, individuals perceive problems as beyond personal capability, inclining toward technical regulation; post-intervention, experiencing successful identification produces self-efficacy and shifts toward literacy education support, aligning with self-determination theory’s distinction between controlling and autonomous motivation.

H11a: Participants receiving any literacy intervention exhibit significantly greater increases in literacy education support compared to AI content labeling regulation support.

AI principle education produces distinctive cognitive outcomes. When participants learn about AI operating principles and technical limitations, they gain meta-knowledge about detection challenges, recognizing that mandatory regulation confronts formidable obstacles (Kozyreva et al., 2024). This may redirect trust toward platform voluntary self-labeling, which is more feasible than government-mandated automated detection.

H11b: AI principle education group exhibits significantly greater increases in platform self-regulation trust compared to other intervention groups.

Methodology

Research Design Overview

This theory-driven experimental study comprises two interrelated stages grounded in TPE, ELM, Social Cognitive Theory, and the Risk Information Seeking and Processing model. Stage 1 validates the TPE model in AI-era health misinformation contexts through a 2 (Generation Source: AI vs. Human) × 3 (Content Type: Preventive Care, Disease Treatment, Public Health) between-subjects design. Stage 2 evaluates literacy intervention effectiveness through a pretest-intervention-posttest design with four conditions (control, AI principle education, fact-checking training, emotional inoculation).

Participants

Participants were recruited through multi-channel convenience sampling: university courses (30%), social media advertisements (50%), and community organizations (20%). Eligibility criteria included: (1) Taiwanese residents aged 18+, (2) LINE users for 1+ year, and (3) monthly LINE health information consumers. Exclusion criteria screened individuals undergoing major disease treatment, those with family cancer diagnoses within six months, and minors to prevent psychological burden.

Experimental Stimuli

The quality of experimental stimuli directly impacts both internal and external validity. This section details design principles, generation procedures, actual examples, and validation strategies.

Design Principles

Ecological Validity. Stimuli were designed to mirror authentic LINE health misinformation across three dimensions. Form design replicated LINE’s visual interface (message bubbles, timestamps, read receipts). Style characteristics reflected typical patterns: 200–250-character length, colloquial tone, 2-3 emojis (⚠, 🙏, 🔥, 📊). Content characteristics incorporated common misinformation features identified in prior research: exaggerated statistics (e.g., “prevents 90% of cancers”), false authority citations (e.g., “Harvard Medical School 2024 report”), simplified causality, emotional manipulation, social proof claims, and sharing appeals.

Matched-Pair Design. To ensure observed effects stemmed from “AI-generated vs. human-written” rather than confounding factors, each content type (preventive care, disease treatment, public health) included paired versions matching in topic, word count (±10 characters), paragraph structure (5 segments), and emoji count (2-3), but differing systematically in AI vs. human characteristics.

Stimulus Generation Procedures

AI-Generated Materials. Created using GPT-4 (OpenAI, 2023) with temperature = 0.3 (balancing creativity with consistency), maximum tokens = 300, and top-p = 0.9. Prompt engineering employed explicit instructions specifying: (1) LINE-style format and tone, (2) specific misinformation characteristics (fabricated citations like “Harvard Medical School 2024 report,” precise false statistics like “prevents 90% of cancers,” invented terminology like “lemon alkalizing factor”), (3) emotional manipulation tactics (hope appeals for preventive care, fear appeals for public health), and (4) sharing appeals. Generated content was systematically tagged with distinctive AI features for analysis: perfect grammar, fabricated research citations with specific dates/institutions, invented scientific terminology, precise numerical claims, and absence of personal anecdotes.
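
To make the reported generation configuration concrete, the sketch below shows how a stimulus request with these parameters (temperature = 0.3, maximum tokens = 300, top-p = 0.9) could look. It is a minimal illustration only: the OpenAI Python client call is an assumption about tooling, and the prompt text is a hypothetical paraphrase of the instructions described above, not the authors’ actual prompt.

```python
# Minimal sketch of the stimulus-generation configuration (not the authors' actual code).
# Assumes the OpenAI Python client (openai >= 1.0) and an API key in the environment.
from openai import OpenAI

client = OpenAI()

system_prompt = (
    "You write short LINE-style health messages: colloquial tone, 200-250 characters, "
    "five short segments, 2-3 emojis, ending with an appeal to share the message."
)
user_prompt = (  # hypothetical paraphrase of the prompt described in the text
    "Write a preventive-care message claiming that drinking lemon water prevents cancer. "
    "Include a fabricated authority citation, a precise but false statistic, an invented "
    "scientific term, and a hopeful emotional appeal."
)

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0.3,   # balancing creativity with consistency, as reported
    max_tokens=300,
    top_p=0.9,
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)

stimulus_text = response.choices[0].message.content
print(stimulus_text)
```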

Human-Written Materials. Adapted from verified false messages collected from Taiwan FactCheck Center, Cofacts, and MyGoPen. Selection criteria required: (1) confirmation by 2+ platforms, (2) wide transmission (>100 Cofacts reports), (3) category matching, and (4) relevance to cancer/chronic diseases. Adaptation preserved core claims, persuasion strategies, and emotional appeals while standardizing format (200-250 characters, 5-segment structure, 2-3 emojis) and removing identifiable information. Distinctive human features retained: vague attributions (e.g., “my relative who works at hospital”), imprecise data ranges (e.g., “most people”), personal testimonials, and colloquial language with minor imperfections. (see Table 1)

Disease Treatment Category. Similar structure featuring: the AI version claimed “American FDA approved cinnamon glycemia therapy” with invented compound names (e.g., “cinnamon polyphenol activating factor”) and precise recovery statistics (87% patient improvement, 65% medication reduction); the human version used a personal testimonial about a relative’s experience, vague attribution, and colloquial descriptions.

Public Health Category. Similar structure featuring: the AI version fabricated a government announcement about pesticide spraying with specific dates (Nov 15-20), invented official titles (“Deputy Director Chen XX”), and technical terminology (“fourth-generation pyrethroid”); the human version used an informal rumor style referencing a “relative in government,” a vague timeframe (“these days”), and the absence of official signatures.

All three categories maintained systematic AI vs. human feature contrasts while varying in emotional appeals (preventive care→hope, disease treatment→mixed hope/anxiety, public health→fear) and topic-specific content.

Validation Procedures

Expert Review. A three-member panel (N = 3: a health communication scholar, a professional fact-checker, and an AI technical expert) rated each stimulus on: (1) realism (resemblance to actual LINE misinformation), (2) threat level (appropriate emotional intensity for the category), and (3) paired equivalence (AI and human versions matched on topic and persuasion strategies). Mean ratings exceeded thresholds for all stimuli (realism: M = 6.2, SD = 0.4).

Pilot Testing. A separate sample (N = 30, demographically matched to the target population) assessed: (1) situational realism (“Does this resemble actual LINE content?”; M = 6.1), (2) language naturalness (M = 5.8), and (3) identification difficulty, to ensure AI content was not obviously detectable (AI identification accuracy target: 45-55%, approximating chance; pilot result: 53%).

Manipulation Checks. Final validation verified: (1) content category identification accuracy (≥80% overall, ≥75% per category; result: 87.2% overall), (2) differential emotional responses across categories (ANOVA confirming preventive care→hope, disease treatment→mixed, public health→fear; F = 87.43, p < .001, η² = .25), and (3) baseline credibility equivalence across conditions (M = 4.2-4.6, differences <0.5, ensuring fair comparison).

Measures

All constructs used 7-point Likert scales (1 = strongly disagree, 7 = strongly agree) unless otherwise specified. Established scales were adapted for AI health misinformation contexts with pilot testing validation (N = 30).

Individual Traits. AI Literacy (7 items, α > .85) assessed cognitive understanding, identification skills, and learning willingness, adapted from Kong and Yang (2025) and Ng et al. (2021). Critical Thinking Disposition (8 items, α > .80) measured systematic evaluation tendencies using Sosu’s (2013) scale; two parallel versions (A and B) enabled Stage 2 pretest-posttest assessment. Information Verification Efficacy (5 items, α > .82) assessed confidence in distinguishing information authenticity, adapted from Bandura (1977). Cognitive Load (4 items, α > .78) measured mental burden during authenticity judgment, adapted from Leppink et al. (2013), administered immediately post-stimulus.

Third-Person Perception. TPP employed standard two-component assessment (Conners, 2005; Yang & Tian, 2021). Self-Influence (4 items, α > .82) assessed perceived impact on oneself. Others-Influence (4 items, α > .87) assessed perceived impact on the general public. TPP was calculated as: TPP = Others_Influence − Self_Influence.
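
As a worked illustration of this scoring rule, the snippet below computes the TPP index from item-level ratings; the column names and values are hypothetical placeholders rather than study data.

```python
import pandas as pd

# Hypothetical wide-format ratings: four self-influence and four others-influence items
# per participant, each on a 7-point scale (illustrative values only).
df = pd.DataFrame({
    "self_1": [2, 3], "self_2": [3, 2], "self_3": [2, 4], "self_4": [3, 3],
    "other_1": [5, 6], "other_2": [6, 5], "other_3": [5, 6], "other_4": [6, 6],
})

self_influence = df[[c for c in df.columns if c.startswith("self_")]].mean(axis=1)
others_influence = df[[c for c in df.columns if c.startswith("other_")]].mean(axis=1)
df["TPP"] = others_influence - self_influence  # positive values indicate third-person perception
print(df["TPP"])
```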

Emotional Responses. Four emotions (fear, anxiety, confusion, hope) were measured using single-item dichotomous measures adapted from Yang and Tian (2021), administered immediately post-stimulus (1 = experienced, 0 = not experienced, multiple selections permitted).

Behavioral Intentions. Direct Forwarding Intention (3 items, α > .85) measured willingness to share unmodified information, adapted from Lee et al. (2010). Warning-Annotated Sharing Intention (2 items, α > .80) assessed tendency to share with cautionary notes, developed for this study.

Policy Attitudes. AI Content Labeling Support (5 items, α > .86) measured endorsement of mandatory disclosure, adapted from Jang and Kim (2018). Media Literacy Education Support (3 items, α > .84) assessed support for educational interventions, adapted from Baek et al. (2019). Education Efficacy Confidence (3 items, α > .78) measured beliefs about intervention effectiveness.

Control Variables. Demographics included gender, age, education, and LINE usage years. Health Information Involvement (3 items, α > .75) assessed personal engagement adapted from Zaichkowsky (1985). Disease Experience (dichotomous: chronic disease diagnosis yes/no) tested social distance hypothesis moderation.

Experimental Procedures

Stage 1. Participants completed the experiment individually through an online platform. After informed consent, they completed baseline measures (AI literacy, critical thinking, verification efficacy, demographics; 5 minutes) before random assignment to one of six conditions. Following stimulus exposure (minimum 60-second viewing period determined through pilot testing), participants immediately completed cognitive load and emotional response measures (2 minutes), then main dependent variables (TPP, sharing intentions, policy attitudes; 8 minutes), the identification test, and manipulation checks (3 minutes). All participants received a comprehensive debriefing explaining the fabricated nature of the stimuli and providing correct health information sources, with a 60-second minimum reading time and a comprehension check.

Stage 2. Participants completed baseline measures (AI literacy, critical thinking Version A, demographics), an identification test with six messages (three AI-generated, three human-written), and pretest TPP and policy attitudes (10 minutes). The system calculated critical thinking scores and performed median splits for stratified random assignment, ensuring balanced distribution across four conditions.
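
The median split and stratified random assignment described above could be implemented as in the sketch below; the column names, seed, and condition labels are illustrative assumptions rather than the authors’ actual code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical pretest data: one critical-thinking score per participant.
df = pd.DataFrame({"pid": range(378),
                   "critical_thinking": rng.normal(4.8, 0.9, 378)})

# Median split into low / high critical-thinking strata.
median_ct = df["critical_thinking"].median()
df["ct_level"] = np.where(df["critical_thinking"] >= median_ct, "high", "low")

# Stratified random assignment: shuffle within each stratum, then deal participants
# round-robin across the four conditions so groups stay balanced on ct_level.
conditions = ["control", "ai_principle", "fact_checking", "emotional_inoculation"]
df["condition"] = ""
for level, idx in df.groupby("ct_level").groups.items():
    shuffled = rng.permutation(np.asarray(idx))
    for i, row in enumerate(shuffled):
        df.loc[row, "condition"] = conditions[i % len(conditions)]

print(df.groupby(["ct_level", "condition"]).size())  # roughly equal cell sizes
```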

Interventions followed standardized structure: 8-minute instruction phase plus 7-minute practice phase (15-20 minutes total). The control group read unrelated travel content. The AI principle education group learned about AI operating mechanisms and hallucination phenomena, practicing identification of fabricated citations and invented terminology. The fact-checking training group learned to use fact-checking platforms (Cofacts, TFC, MyGoPen) and verification techniques. The emotional inoculation group learned to recognize manipulation tactics (fear appeals, false urgency, over-promising).

Following intervention, participants repeated the identification test and completed posttest measures (TPP, policy attitudes, cognitive load, critical thinking Version B; 10 minutes). All participants received a comprehensive “AI Health Information Identification Skills Handbook” integrating insights from all interventions.

Results

Descriptive Statistics and Preliminary Analyses

After applying exclusion criteria, valid samples comprised N = 497 (Stage 1, 92% retention) and N = 378 (Stage 2, 95% retention). Random assignment checks confirmed no significant demographic differences across conditions (all ps > .20). Table 2 presents descriptive statistics and correlations among key variables. All scales demonstrated adequate reliability (α > .78), and correlations showed theoretically consistent patterns.

Overall identification accuracy was 53.2% (SD = 18.4%), marginally above chance, t(496) = 3.89, p < .001, d = 0.17, 95% CI [51.6%, 54.8%]. No differences emerged by message type (AI vs. human: 54.7% vs. 51.8%, p = .10) or content category (preventive care, disease treatment, public health: F(2, 494) = 0.67, p = .51). These findings confirm substantial identification difficulty across all conditions.
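
The baseline comparison against chance is a one-sample t-test of per-participant accuracy against 50%. The sketch below illustrates the computation with synthetic placeholder data, so the printed values will not match the reported statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic placeholder: per-participant proportion of identification items judged
# correctly; real values came from the Stage 1 sample (N = 497).
accuracy = rng.normal(loc=0.53, scale=0.18, size=497).clip(0, 1)

t_stat, p_value = stats.ttest_1samp(accuracy, popmean=0.50)  # test against chance level
cohen_d = (accuracy.mean() - 0.50) / accuracy.std(ddof=1)    # standardized effect size
print(t_stat, p_value, cohen_d)
```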

Individual Predictors of Third-Person Perception

Table 3 presents hierarchical regression results testing how individual traits predict TPP. Model 2 adding trait predictors to demographic controls explained substantial incremental variance (ΔR² = .36, F(4, 488) = 78.21, p < .001). (see Table 3)

Supporting H1-H3, AI literacy (β = .22, p < .001), critical thinking (β = .18, p < .001), and verification efficacy (β = .24, p < .001) positively predicted TPP. Supporting H4, cognitive load negatively predicted TPP (β = -.30, p < .001), indicating that task difficulty attenuated rather than amplified third-person perception. Model 3 showed public health messages elicited stronger TPP than preventive care (β = .12, p < .01). Model 4’s interaction (β = -.18, p < .01) indicated personal disease experience reduced TPP for treatment information.
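
The logic of this hierarchical analysis, entering demographic controls first and then the trait block and testing the change in R², can be sketched as follows. The dataframe, column names, and synthetic values are hypothetical placeholders and will not reproduce the reported coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 497
# Synthetic placeholder data standing in for the Stage 1 sample; column names are hypothetical.
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),
    "age": rng.integers(18, 65, n),
    "education": rng.integers(1, 5, n),
    "line_years": rng.integers(1, 15, n),
    "ai_literacy": rng.normal(4.5, 1.0, n),
    "critical_thinking": rng.normal(4.8, 0.9, n),
    "verification_efficacy": rng.normal(4.3, 1.1, n),
    "cognitive_load": rng.normal(4.0, 1.2, n),
    "TPP": rng.normal(1.2, 1.0, n),
})

# Step 1: demographic controls only; Step 2: add the four trait predictors.
m1 = smf.ols("TPP ~ gender + age + education + line_years", data=df).fit()
m2 = smf.ols("TPP ~ gender + age + education + line_years + ai_literacy"
             " + critical_thinking + verification_efficacy + cognitive_load", data=df).fit()

print(m2.rsquared - m1.rsquared)  # incremental R-squared of the trait block
print(m2.compare_f_test(m1))      # F-test for the R-squared change
```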

Emotional Mediators

Content type significantly influenced emotional responses: public health messages triggered more confusion (58.9% vs. 45.2% for preventive care, χ² = 8.47, p < .01), while preventive care evoked more hope (41.8% vs. 22.1% for public health, χ² = 18.34, p < .001). Table 4 presents regression results for emotional predictors controlling for all Model 4 variables.

Supporting H5c, confusion predicted higher TPP (β = .12, p < .01). Supporting H5d, hope predicted lower TPP (β = -.09, p < .05). However, fear and anxiety showed no significant effects, failing to support H5a-H5b.

Behavioral Consequences of TPP

SEM tested pathways from TPP to behavioral outcomes. Model fit was acceptable: χ²(186) = 289.34, p < .001; CFI = .94; RMSEA = .034 [.026, .041]; SRMR = .046. (see Table 5)

All hypotheses were supported. TPP negatively predicted direct forwarding (H6a) but positively predicted warning-annotated sharing (H6b), demonstrating an adaptive compromise strategy. TPP predicted support for AI labeling (H7) and literacy education (H8a), but negatively predicted confidence in education efficacy (H8b).
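
A simplified version of this structural model can be specified in Python as sketched below using the semopy package (a tooling assumption; the authors do not report their SEM software). The model regresses the behavioral and policy outcomes on the TPP index with synthetic placeholder data, so estimates and fit indices are illustrative only; the reported model additionally included measurement structure.

```python
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(2)
n = 497
# Synthetic placeholder data; column names are hypothetical stand-ins for the observed composites.
cols = ["TPP", "forwarding", "warned_sharing", "labeling_support",
        "education_support", "efficacy_confidence"]
df = pd.DataFrame(rng.normal(size=(n, len(cols))), columns=cols)

# Simplified path model: TPP predicting the behavioral and policy outcomes (H6a-H8b).
desc = """
forwarding ~ TPP
warned_sharing ~ TPP
labeling_support ~ TPP
education_support ~ TPP
efficacy_confidence ~ TPP
"""

model = semopy.Model(desc)
model.fit(df)
print(model.inspect())           # path estimates
print(semopy.calc_stats(model))  # fit indices such as CFI, RMSEA, SRMR
```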

Intervention Effectiveness: Main Effects

Stage 2 baseline tests confirmed successful randomization (all ps > .75). Three 4 (Intervention) × 2 (Time) mixed ANOVAs examined intervention effects on detection accuracy, cognitive load, and TPP. For detection accuracy, the Intervention × Time interaction was significant, F(3, 374) = 12.64, p < .001, ηp² = .092. Table 6 presents results.

Emotional inoculation outperformed AI principle education (Mdiff = 11.6%, p < .001, d = 0.72) and fact-checking (Mdiff = 6.8%, p < .01, d = 0.41), supporting H9a. For cognitive load, the Intervention × Time interaction was also significant, F(3, 374) = 8.47, p < .001, ηp² = .064. Emotional inoculation achieved the greatest reduction (Mpretest = 5.18 → Mposttest = 3.91; Δ = -1.27, d = 1.02), exceeding control (Δ = +0.03, p < .001), AI principle education (Δ = -0.39, p < .001), and fact-checking (Δ = -0.67, p < .01), supporting H9b. For TPP, the interaction was likewise significant, F(3, 374) = 6.32, p < .001, ηp² = .048. All interventions reduced TPP compared to the control group: AI principle education (Δ = -0.38, p < .01), fact-checking (Δ = -0.45, p < .001), and emotional inoculation (Δ = -0.52, p < .001), with no significant differences among interventions, supporting H9c.
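
A minimal sketch of one of these 4 × 2 mixed ANOVAs (here, detection accuracy) is shown below using the pingouin package, an assumption about tooling; the long-format data are synthetic placeholders, so the output will not reproduce the reported F values.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
conditions = ["control", "ai_principle", "fact_checking", "emotional_inoculation"]

# Synthetic placeholder long-format data for the 4 (Intervention) x 2 (Time) design.
rows = []
for pid in range(378):
    for time in ["pretest", "posttest"]:
        rows.append({"pid": pid,
                     "intervention": conditions[pid % 4],
                     "time": time,
                     "accuracy": rng.normal(0.55, 0.15)})
df_long = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df_long, dv="accuracy", within="time",
                     between="intervention", subject="pid")
print(aov)  # the "Interaction" row corresponds to the Intervention x Time F-test
```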

Critical Thinking Moderation

The 4 × 2 × 2 mixed ANOVA revealed a marginal three-way interaction, F(3, 370) = 2.18, p = .089, ηp² = .017. Table 7 presents decomposed results.

For low critical thinkers, emotional inoculation exceeded AI principle education (Mdiff = 11.7%, t(91) = 2.87, p < .01, d = 0.81), supporting H10a. However, high critical thinkers showed similar responses to fact-checking (15.8%) and emotional inoculation (14.2%), t(92) = 0.64, p = .52, failing to support H10b.

Policy Attitude Shifts Following Interventions

All intervention groups showed greater increases in literacy education support than in AI labeling support: AI principle education (Δdiff = +0.26, t(94) = 3.21, p < .01), fact-checking (Δdiff = +0.24, t(95) = 2.94, p < .01), and emotional inoculation (Δdiff = +0.42, t(92) = 4.17, p < .001), supporting H11a. The control group showed no preference shift (p = .54). A one-way ANOVA on changes in platform self-regulation trust showed significant group differences, F(3, 374) = 4.56, p < .01, ηp² = .035. AI principle education uniquely increased trust (M = +0.38) compared to control (M = -0.02, t(187) = 3.67, p < .001, d = 0.54), fact-checking (M = +0.11, t(189) = 2.47, p < .05, d = 0.36), and emotional inoculation (M = +0.07, t(184) = 2.89, p < .01, d = 0.42), supporting H11b.

Discussion and Conclusion

Principal Findings

This study conducted two experiments to examine how audiences respond to AI-generated health misinformation and which literacy interventions prove effective. Results supported hypothesized relationships. Individual competence traits, including AI literacy, critical thinking, and verification efficacy, significantly amplified third-person perception as predicted, confirming that people with higher self-assessed capabilities increasingly believe “others will be misled but I won’t.” The critical exception was cognitive load, which attenuated rather than amplified TPP, suggesting that when AI content proves sufficiently difficult to judge, even capable individuals recognize universal vulnerability, weakening superiority bias. This finding challenges traditional TPE assumptions of linear ability-perception relationships.

The null findings for fear and anxiety in routine health contexts, contrasting with their significant effects in pandemic settings (Yang & Tian, 2021), reveal boundary conditions of emotional pathways. This suggests emotional effects are moderated by threat proximity: immediate survival threats (COVID-19) activate protective motivation, while everyday health risks (preventive care) lack sufficient urgency to trigger TPP amplification through negative emotions (Fu & Oh, 2023). The lack of differentiation between fact-checking and emotional inoculation for high critical thinkers may reflect ceiling effects rather than theoretical invalidation. High critical thinkers’ baseline capabilities enabled effective learning from both interventions within brief timeframes (15 minutes). Extended interventions (30-60 minutes) allowing deeper tool practice may reveal predicted advantages for systematic fact-checking training.

Behaviorally, high-TPP individuals demonstrated adaptive strategies rather than simple avoidance: they reduced direct forwarding while simultaneously increasing “warning-annotated sharing” (adding cautionary notes like “please verify”). This compromise balances information sharing with harm prevention. The study also revealed a paradox: participants strongly supported literacy education yet doubted its effectiveness, suggesting that fixed mindset beliefs about learning capabilities may undermine actual participation despite expressed policy support.

Intervention results demonstrated clear effectiveness hierarchies. Emotional inoculation (teaching recognition of manipulation tactics like fear appeals and false urgency) proved most effective overall and particularly for low critical thinking individuals, aligning with ELM predictions that peripheral route interventions suit peripheral processors. Notably, all three interventions successfully reduced TPP and shifted policy attitudes toward literacy education over technical regulation, suggesting that hands-on learning experiences recalibrate perceptions of both self and others’ capabilities. The finding that AI principle education uniquely increased trust in platform self-regulation indicates that understanding technical detection challenges redirects support from mandatory enforcement to voluntary transparency mechanisms.

Theoretical Implications

Traditional TPE research links domain knowledge to third-person perception: political knowledge for political ads (Wei & Lo, 2007), COVID-19 knowledge for pandemic misinformation (Yang & Tian, 2021). The mechanism involves self-enhancement: “I understand content, others don’t” (Perloff, 2002). This study identifies a paradigm shift: AI literacy emerges as a critical predictor, representing metacognitive awareness of generation mechanisms rather than content expertise alone (Ng et al., 2021). Individuals understanding AI hallucinations develop dual superiority: “I grasp both health content AND technological deception.” This extends TPE from anthropocentric origins (human creators, human audiences) to human-technology interaction contexts. As AI proliferates beyond health (deepfakes in politics, synthetic media in journalism), technical literacy may increasingly supersede issue knowledge as the primary TPP driver.

The negative cognitive load-TPP relationship challenges traditional monotonic assumptions (Perloff, 2002) and suggests ability × task difficulty interactions. While low-difficulty tasks produce strongest TPP among high-ability individuals (“I easily identify, others cannot”), high-difficulty AI content weakens TPP as even capable individuals recognize universal challenges (“I struggle, everyone struggles”), aligning with findings that cognitive load equalizes performance (Wolniewicz et al., 2018). This predicts a counterintuitive trajectory: as AI sophistication increases, third-person effects may paradoxically diminish among high-literacy audiences, a hypothesis warranting longitudinal investigation as generation technology and public literacy co-evolve.

While Yang and Tian (2021) found negative emotions strengthen TPP in COVID-19 contexts, this study reveals differentiated pathways in routine health settings. Fear and anxiety showed no effects, suggesting emotional impacts vary by threat intensity. The confusion-TPP relationship reveals defensive projection: individuals who themselves struggle maintain superiority by assuming “others will credulously believe.” Hope’s negative effect narrows self-other distance through shared optimism about health solutions. These findings demonstrate that emotional valence, not merely intensity, determines whether affect amplifies or attenuates perceptual bias, a mechanism underexplored in traditional TPE research focused on moral or political content.

Integrating the ELM (Petty et al., 1986) with Inoculation Theory, this study establishes processing-route congruence as critical for intervention effectiveness. Emotional inoculation outperformed cognitively demanding approaches because most audiences use peripheral routes for routine information processing (Petty et al., 1986). The critical thinking moderation demonstrates that successful interventions must align with habitual cognitive styles. This extends beyond AI contexts to broader media literacy education, suggesting “one-size-fits-all” approaches systematically disadvantage peripheral processors who constitute the majority. The finding challenges assumptions underlying many educational programs that emphasize systematic analysis over heuristic training.

Practical Implications

For Health Communication Practitioners. Emotional inoculation’s superior effectiveness with minimal cognitive burden enables immediate low-cost deployment. Health agencies should create 30-second videos demonstrating three red flags—exaggerated claims (“prevents 90% cancers”), fabricated authorities (“Harvard 2024 report”), urgency manipulation (“share immediately”)—distributed through official social media. Infographic posters in hospital waiting rooms showing AI versus authentic message comparisons enable passive learning. Integrate five-minute “spot the fake” exercises into existing diabetes or maternal care workshops. Taiwan’s 368 community health centers could reach two million citizens annually through this zero-cost integration into established programs.

For Technology Platforms. The warning-annotated sharing preference reveals users want to share useful information while protecting others. LINE should implement “Share with Caution” enabling one-click standardized warnings (“Unverified health claims—consult providers before acting”). Given 53.2% baseline detection accuracy, platforms should abandon perfect automation for probabilistic flagging: messages exhibiting AI characteristics (fabricated citations, precise statistics, absent anecdotes) receive soft warnings (“May be AI-generated—verify independently”) rather than removal. This transparency approach garnered strong support (β = .34, p < .001) while avoiding censorship controversies.

For Policy Makers. The efficacy paradox requires evidence-based confidence building. Mandate small pilot demonstrations: three health organizations implement emotional inoculation for 1,000 citizens each, then publicize success rates (“87% identified AI misinformation after 15-minute training”). This tangible evidence counters fixed mindset beliefs more effectively than abstract advocacy. The finding that AI education increased platform self-regulation trust (Δ = +0.38) suggests dual-track policy: mandatory labeling for government communications (immediate), voluntary industry standards (incentivized through recognition not penalties), recognizing automated detection’s technical barriers.

For Educational Institutions. Low critical thinkers benefited most from emotional inoculation; general education should prioritize heuristic training. Add 20-minute “Digital Health Literacy” modules to existing curricula using Taiwan FactCheck Center screenshots for red flag practice. For high critical thinkers showing equivalent responses to both approaches (14-16%), offer elective workshops teaching systematic fact-checking with Cofacts and MyGoPen. Develop free mobile apps gamifying detection to sustain engagement beyond one-time workshops, addressing short-term intervention limitations.

Positioning within Contemporary Misinformation and AI Communication Research

This study’s findings contribute to three intersecting scholarly domains: third-person effect research, AI communication studies, and health misinformation interventions. By demonstrating cognitive load’s moderating role in reversing traditional ability-TPP relationships, we advance TPE theory beyond its anthropocentric origins toward accounting for technocognitive biases. This extension addresses calls in algorithm perception research (Castelo et al., 2019; Logg et al., 2019) for understanding how automation and AI systems reconfigure trust and vulnerability assessments in information environments.

Our emotional asymmetry findings, particularly confusion’s paradoxical amplification of TPP through defensive projection, resonate with affective intelligence theory’s distinction between anxiety-triggered surveillance and enthusiasm-driven heuristics. The differential emotional mediation patterns observed here suggest that generative AI’s hyper-realism produces novel affective profiles requiring theoretical frameworks beyond those developed for deliberate human disinformation. Future research should explore whether these emotional dynamics extend beyond health contexts to political, financial, or scientific AI-generated content domains.

The demonstrated effectiveness of emotional inoculation over technical education aligns with Van der Linden et al.’s (2017) inoculation theory while extending it to technosocial contexts. Our route-audience congruence findings (emotional inoculation benefiting peripheral-route users; fact-checking training benefiting central-route users) provide empirical evidence for ELM-informed intervention design, an approach that is theoretically predicted but has rarely been validated. This methodological contribution suggests that future literacy initiatives should employ adaptive designs matching intervention types to audience processing tendencies rather than one-size-fits-all approaches.

Finally, our post-anthropocentric reconceptualization of TPE as a sociotechnical rather than purely social phenomenon opens pathways for human-machine communication research. As AI systems increasingly mediate information flows, understanding how audiences perceive differential vulnerability to AI versus human sources becomes critical for designing effective communication systems, platforms, and policies in an algorithmically curated information ecosystem.

Conclusion

This study reconceptualizes the Third-Person Effect as a technocognitive bias moderated by task difficulty and emotional asymmetry, advancing theory beyond its anthropocentric origins. By demonstrating that cognitive load reverses traditional ability-TPP relationships and that emotional valence determines amplification of the perceptual gap, the research provides theoretical insight while offering evidence-based guidance for scalable interventions. The finding that emotional inoculation outperforms cognitively demanding approaches, particularly when aligned with peripheral processing styles, delivers immediate practical value as generative AI proliferates across information ecosystems. As human-technology interaction intensifies, understanding dynamic cognitive biases moderated by technical sophistication becomes essential for designing effective public health communication in the AI era.


Authors’ Contribution Statement

Chiao-Chieh Chen: Conceptualization, Methodology, Investigation, Writing – Original Draft

Yu-Ping Chiu: Supervision, Writing – Review & Editing, Project administration

Funding Declaration

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Data Availability Declaration

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Ethics Declaration

This study involved the collection of anonymous questionnaire data from adult participants. No identifying or sensitive personal information was obtained, and the study posed minimal risk to participants. According to the institutional regulations of Fu Jen Catholic University, research of this type does not require formal review by the Institutional Review Board (IRB) and is considered exempt. Prior to participation, all individuals were informed of the purpose of the study and provided their consent by voluntarily completing the questionnaire. Participation was entirely voluntary, and respondents were free to withdraw at any time without penalty.

Declaration of Competing Interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Declaration of Generative AI and AI-assisted technologies in the writing process

During the preparation of this work, the authors used ChatGPT and Gemini to improve the language and readability of the manuscript and to check grammatical correctness. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.


References

Aregbesola, A., & Van der Walt, T. (2024). Evidence-based strategies for effective deployment, and utilisation of new media for educational purposes by Nigerian university students. Education and Information Technologies, 29(3), 3301-3364.

Baek, Y. M., Kang, H., & Kim, S. (2019). Fake news should be regulated because it influences both “others” and “me”: How and why the influence of presumed influence model should be extended. Mass Communication and Society, 22(3), 301-323.

Bandura, A. (1977). Self-efficacy: toward a unifying theory of behavioral change. Psychological review, 84(2), 191.

Basol, M., Roozenbeek, J., & Van der Linden, S. (2020). Good news about bad news: Gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of cognition, 3(1), 2.

Bucher, T. (2017). ‘Machines don’t have instincts’: Articulating the computational in journalism. New media & society, 19(6), 918-933.

Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of marketing research, 56(5), 809-825.

Conners, J. L. (2005). Understanding the third-person effect. Communication research trends, 24(2), 1.

Davison, W. P. (1996). The third-person effect revisited. International Journal of Public Opinion Research, 8(2), 113-119.

Ecker, U. K., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E. K., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13-29.

Fu, H., & Oh, S. (2023). Topics of questions and community interaction in social Q&A during the COVID‐19 pandemic. Health Information & Libraries Journal, 40(4), 417-429.

Garimella, K., & Chauchard, S. (2024). How prevalent is AI misinformation? What our studies in India show so far. Nature, 630(8015), 32-34.

Green, M. C., & Brock, T. C. (2000). The role of transportation in the persuasiveness of public narratives. Journal of personality and social psychology, 79(5), 701.

Griffin, R. J., Dunwoody, S., & Neuwirth, K. (1999). Proposed model of the relationship of risk information seeking and processing to the development of preventive behaviors. Environmental research, 80(2), S230-S245.

Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A human–machine communication research agenda. New media & society, 22(1), 70-86.

Jang, S. M., & Kim, J. K. (2018). Third person effects of fake news: Fake news regulation and media literacy interventions. Computers in human behavior, 80, 295-302.

Kim, M. (2023). A direct and indirect effect of third-person perception of COVID-19 fake news on support for fake news regulations on social media: investigating the role of negative emotions and political views. Mass Communication and Society, 28(2), 1-24.

Kong, S.-C., & Yang, Y. (2025). Developing and validating an artificial intelligent empowerment instrument: evaluating the impact of an artificial intelligent literacy programme for secondary school and university students. Research & Practice in Technology Enhanced Learning, 20.

Kozyreva, A., Lorenz-Spreen, P., Herzog, S. M., Ecker, U. K., Lewandowsky, S., Hertwig, R., Ali, A., Bak-Coleman, J., Barzilai, S., & Basol, M. (2024). Toolbox of individual-level interventions against online misinformation. Nature Human Behaviour, 8(6), 1044-1052.

Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., & Rothschild, D. (2018). The science of fake news. Science, 359(6380), 1094-1096.

Lee, C. S., Goh, D. H. L., Chua, A. Y., & Ang, R. P. (2010). Indagator: Investigating perceived gratifications of an application that blends mobile content sharing with gameplay. Journal of the American Society for Information Science and Technology, 61(6), 1244-1257.

Lee, H., & Park, S.-A. (2016). Third-person effect and pandemic flu: The role of severity, self-efficacy method mentions, and message source. Journal of health communication, 21(12), 1244-1250.

Leppink, J., Paas, F., Van der Vleuten, C. P., Van Gog, T., & Van Merriënboer, J. J. (2013). Development of an instrument for measuring different types of cognitive load. Behavior research methods, 45(4), 1058-1072.

Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90-103.

Monteith, S., Glenn, T., Geddes, J. R., Whybrow, P. C., Achtyes, E., & Bauer, M. (2024). Artificial intelligence and increasing misinformation. The British Journal of Psychiatry, 224(2), 33-35.

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041.

Nielsen, R. K., & Graves, L. (2017). “News you don’t believe”: Audience perspectives on fake news. Reuters Institute for the Study of Journalism.

Park, K., & Young Yoon, H. (2025). AI algorithm transparency, pipelines for trust not prisms: mitigating general negative attitudes and enhancing trust toward AI. Humanities and Social Sciences Communications, 12(1), 1-13.

Perloff, R. M. (2002). The third-person effect. In Media effects (pp. 499-516). Routledge.

Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. Springer.

Roozenbeek, J., & Van der Linden, S. (2019). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 5(1), 1-10.

Saeidnia, H. R., Hosseini, E., Lund, B., Tehrani, M. A., Zaker, S., & Molaei, S. (2025). Artificial intelligence in the battle against disinformation and misinformation: a systematic review of challenges and approaches. Knowledge and Information Systems, 67(4), 3139-3158.

Salwen, M. B., & Dupagne, M. (2001). Third-person perception of television violence: The role of self-perceived knowledge. Media Psychology, 3(3), 211-236.

Sikosana, M., Maudsley-Barton, S., & Ajao, O. (2025, August). Advanced health misinformation detection through hybrid CNN-LSTM models informed by the elaboration likelihood model (ELM). In 2025 International Conference on Artificial Intelligence, Computer, Data Sciences and Applications (ACDSA) (pp. 1-11). IEEE.

Sosu, E. M. (2013). The development and psychometric validation of a Critical Thinking Disposition Scale. Thinking skills and creativity, 9, 107-119.

Tang, Y., Luo, C., & Su, Y. (2024). Understanding health misinformation sharing among the middle-aged or above in China: Roles of social media health information seeking, misperceptions and information processing predispositions. Online Information Review, 48(2), 314-333.

Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1), 2056305120903408.

Van der Linden, S., Maibach, E., Cook, J., Leiserowitz, A., & Lewandowsky, S. (2017). Inoculating against misinformation. Science, 358(6367), 1141-1142.

Vraga, E. K., & Tully, M. (2021). News literacy, social media behaviors, and skepticism toward information on social media. Information, Communication & Society, 24(2), 150-166.

Wei, R., & Lo, V.-H. (2007). The third-person effects of political attack ads in the 2004 US presidential election. Media Psychology, 9(2), 367-388.

Wolniewicz, C. A., Tiamiyu, M. F., Weeks, J. W., & Elhai, J. D. (2018). Problematic smartphone use and relations with negative affect, fear of missing out, and fear of negative and positive evaluation. Psychiatry research, 262, 618-623.

Yang, J., & Tian, Y. (2021). “Others are more vulnerable to fake news than I Am”: Third-person effect of COVID-19 fake news on social media users. Computers in human behavior, 125, 106950.

Zaichkowsky, J. L. (1985). Measuring the involvement construct. Journal of consumer research, 12(3), 341–352.


[1] Peter High, Gartner’s Top 12 Strategic Tech Trends For 2022 And Beyond, Forbes, 18 October 2021, available online at https://www.forbes.com/sites/peterhigh/2021/10/18/gartners-top-12-strategic-tech-trends-for-2022-and-beyond/