
IP&MC Call for papers

Conference tracks

Paper and poster manuscripts are now invited on the following thematic tracks. They should be submitted using the online abstract submission system.

Deadline: 30 June 2026

  • Academic Search and Learning with GenAI: New Agency, New Behavior, and New Ecology (IP&M)

Chang Liu, Peking University

Soo Young Rieh, University of Texas at Austin

Luanne Sinnamon, University of British Columbia

Orland Hoeber, University of Regina

Xiaoxuan Song, Nanjing Agricultural University

Generative artificial intelligence (GenAI) is transforming academic search by enabling automated or semi-automated learning environments with conversational and synthesis capabilities. This paradigm shift raises fundamental questions about learner agency, higher-order cognition and metacognition, critical thinking, digital literacy, digital fairness, and academic integrity in GenAI-enhanced academic search. This special track provides a forum for interdisciplinary dialogue among researchers in information behavior, interactive information retrieval, learning science, and human‑centred AI. We seek contributions that advance theoretical models, empirical studies, and system designs that place the human at the centre of GenAI‑enhanced academic search.

We welcome submissions on the following topics, including but not limited to:

  • Agency and cognitive models of academic searchers using GenAI tools (stand-alone or embedded within academic search platforms)

  • GenAI-enhanced academic search systems

  • Impact of GenAI on academic search and learning processes

  • Search as Learning in human-AI collaborative contexts

  • Supporting critical and creative thinking using GenAI-enhanced academic search

  • Information behavior in GenAI-assisted academic search

  • Evaluation and metrics for human-AI collaborative search

  • Literacy cultivation for GenAI-assisted academic search

  • Ethical frameworks for responsible GenAI-assisted academic search

  • Tools and methods for information quality assessment in GenAI-assisted academic search

  • GenAI scaffolding for metacognition in the context of academic search

  • FIRM: Foundation-model Integrity and Ranking Methods in the LLM Era

Ben He, Zhaochun Ren, Xiao Wang, Xuanang Chen, Ying Zhou

We welcome submissions covering a wide range of topics on ranking and recommendation methods in the LLM era. Accordingly, this track aims to convene researchers and practitioners across information retrieval, learning to rank, recommender systems, NLP/LLMs, trustworthy AI, and human-AI interaction, fostering cross-community dialogue on building effective and trustworthy information access systems in the foundation-model era. All accepted full papers will be invited for submission to a special issue of Information Processing & Management (IF = 6.9, Q1).

We welcome submissions on the following topics, including but not limited to:

Foundation-model integrity in the LLM era

  • Robustness of FMs/LLMs under distribution shift, adversarial prompts, and noisy retrieval

  • Hallucination analysis and mitigation in RAG or agentic systems

  • Fairness, bias, and exposure analysis for FM-driven search and recommendation

  • Privacy, safety, and content-filtering mechanisms for FM-centric IR systems

  • Transparency, explanation, and interpretability for multi-stage FM pipelines

  • Governance, monitoring, and auditing frameworks for foundation-model services

Ranking/recommendation methods in the LLM era

  • FM/LLM-based learning-to-rank and LLM-as-ranker architectures

  • Late-interaction and multi-vector indexing for FM-based retrieval

  • Large search models and unified architectures integrating retrievers, rerankers, and LLMs

  • Query understanding, rewriting, and reformulation with FMs/LLMs

  • Conversational and session-aware ranking with memory and user modelling

  • Efficient and scalable ranking under long contexts and high-throughput constraints

  • LLM-based ranking for recommender systems, generative recommendation, and LLM4Rec

Interplay between integrity and ranking

  • Integrity-aware objectives and training schemes for ranking in FM/LLM pipelines

  • Joint optimisation of retrieval, ranking, and generation for robust RAG

  • LLM-assisted relevance judgement and automatic labelling for evaluation

  • Benchmarks and datasets capturing integrity issues (hallucinations, bias, safety) in ranking

  • Human-in-the-loop protocols combining human expertise with LLM support for evaluation and monitoring

  • Domain-specific case studies (e.g., scientific, biomedical, legal, financial, educational) highlighting integrity-ranking trade-offs

  • Human Reasoning and Large Language Models: Alignments and Divergences

Kevin Roitero, Johanne Trippas, Mengdie Zhuang

Scope

Human reasoning vs. LLM reasoning, including:

- Alignment & divergence in reasoning behaviors

- Faithfulness, robustness, reliability

- Human–AI hybrid reasoning

- Evaluation, benchmarks, and cognitive perspectives

  • The Algorithmic Anomie and its Social Consequences in a Society with AI-driven Systems

Editors:

  • Xi Chen, Yunnan University, Kunming, China

  • Ivan Wen, University of Hawai‘i at Mānoa, Honolulu, United States

  • Rongheng Lin, Beijing University of Posts and Telecommunications, Beijing, China

  • Qixing Qu, University of International Business and Economics, Beijing, China

  • Xiaoyu Song, University of Oxford, Oxford, United Kingdom

Artificial intelligence (AI) has evolved from a purely instrumental technology into a quasi-subjective collaborator in human reasoning, decision-making, and creative processes (Hou et al., 2025). At its core, AI seeks to emulate aspects of human intelligence through algorithmic systems that formalize rules, procedures, and learning mechanisms governing data processing, pattern recognition, and output generation (Berente et al., 2021; Collins et al., 2021; Rinta-Kahila et al., 2022; Stelmaszak et al., 2025). Algorithms thus constitute the foundational architecture of AI, defining how intelligence is operationalized and deployed across technological contexts (Ågerfalk, 2020; Kellogg et al., 2020).

As AI systems proliferate across diverse domains of social life, algorithmic logic has become deeply embedded in social structures and everyday practices, reshaping modes of interaction, governance, and value creation (Benbya et al., 2020; Kronblad et al., 2024). This transformation, however, necessitates a renewed affirmation of the distinctiveness of human cognition, emotion, and moral judgment—capacities that remain fragile yet irreplaceable in conferring ethical meaning upon technological progress (Glickman et al., 2025). While algorithmic systems undeniably enhance efficiency and innovation, they are simultaneously accompanied by growing risks of algorithmic anomie—a condition characterized by normative misalignment between algorithmic operations and established ethical, legal, and social values. Such deviations pose substantial threats to social order, fairness, justice, and human well-being (Bengio et al., 2024; Tanriverdi & Akinyemi, 2025; Teodorescu et al., 2021).

Algorithmic anomie manifests through multiple, interrelated dimensions. First, algorithmic bias and discriminatory outcomes emerge when biased training data or flawed model design reproduce or amplify existing social inequalities (Zhou & Wang, 2025). Second, opacity and the lack of accountability characterize many advanced algorithmic systems, particularly “black-box” models whose internal logic remains inaccessible or incomprehensible to affected stakeholders (Hu & Ou, 2025). Third, ethical boundary erosion occurs when algorithmic applications intrude excessively into domains that demand contextual sensitivity, moral reasoning, and human judgment (Craig, 2025). Fourth, social autonomy may be undermined as algorithms exert disproportionate influence over individual decision-making and collective behavior, subtly shaping preferences, choices, and norms (Zheng et al., 2025). Fifth, when systems optimized for localized objectives interact within complex socio-technical ecosystems, they may generate systemic vulnerabilities, including heightened information security risks (Lv et al., 2025).

Taken together, algorithmic anomie reflects a structural disjunction between the rationality embedded in algorithm design and broader normative commitments to justice, autonomy, and social well-being. As intelligent systems increasingly permeate the fabric of social life, it becomes imperative to ensure that technological innovation does not eclipse emotion, conscience, and human dignity (Bankins & Formosa, 2023). The challenges posed by algorithmic anomie are therefore not merely technical in nature but fundamentally social, ethical, and civilizational.

In response to these concerns, a critical question arises: What does information “processing” and “management” signify in an era when intelligent systems no longer merely retrieve information but actively infer, collaborate, and co-create with humans? Addressing this question requires sustained scholarly engagement that integrates theoretical depth with empirical rigor, builds upon established scientific foundations, and advances responsible, human-centered trajectories for cognitive information systems in the age of AI.

Accordingly, this call for papers invites interdisciplinary contributions that critically examine algorithmic anomie and its societal implications, while exploring pathways toward ethical, transparent, and accountable AI. Topics of interest include, but are not limited to:

  • Manifestations, representative cases, and underlying causes of algorithmic anomie in artificial intelligence systems;

  • The impacts of algorithmic discrimination, bias, and manipulation on social fairness and justice;

  • The erosion of information ecosystems and public cognition under the influence of algorithmic anomie.

  • Ethical dilemmas related to data misuse, privacy violations, and associated social risks;

  • Challenges and frameworks for enhancing algorithmic transparency, interpretability, and accountability;

  • Multidimensional approaches to algorithmic governance from legal, technical, and ethical perspectives;

  • Policy recommendations and mechanisms for building social consensus around the responsible development of AI;

  • Future-oriented ethical norms and models of responsible algorithmic innovation.

We look forward to receiving rigorous and insightful contributions that collectively advance understanding, mitigate social risks, and promote the sustainable and humane development of artificial intelligence—ensuring that algorithmic systems remain aligned with the guiding principle of algorithms for humanity.

References

  1. Ågerfalk, P. J. (2020). Artificial intelligence as digital agency. European Journal of Information Systems, 29(1), 1–8. doi:10.1080/0960085X.2020.1721947

  2. Bankins, S., & Formosa, P. (2023). The ethical implications of artificial intelligence (AI) for meaningful work. Journal of Business Ethics, 185(4), 725–740.

  3. Benbya, H., Davenport, T. H., & Pachidi, S. (2020). Artificial intelligence in organizations: Current state and future opportunities. MIS Quarterly Executive, 19(4). doi:10.2139/ssrn.3741983

  4. Bengio, Y., Hinton, G., Yao, A., Song, D., Abbeel, P., Darrell, T., ... & Mindermann, S. (2024). Managing extreme AI risks amid rapid progress. Science, 384(6698), 842–845. doi:10.1126/science.adn0117

  5. Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing artificial intelligence. MIS Quarterly, 45(3), 1433–1450. doi:10.25300/MISQ/2021/16274

  6. Collins, C., Dennehy, D., Conboy, K., & Mikalef, P. (2021). Artificial intelligence in information systems research: A systematic literature review and research agenda. International Journal of Information Management, 60, 102383. doi:10.1016/j.ijinfomgt.2021.102383

  7. Craig, M. J. (2025). Human-machine communication privacy management, privacy fatigue, and the conditional effects of algorithm awareness on privacy co-ownership in the social media context. Computers in Human Behavior, 108786. doi:10.1016/j.chb.2025.108786

  8. Glickman, M., & Sharot, T. (2025). How human–AI feedback loops alter human perceptual, emotional and social judgements. Nature Human Behaviour, 9(2), 345–359.

  9. Hou, J., Wang, L., Wang, G., Wang, H. J., & Yang, S. (2025). The double-edged roles of generative AI in the creative process: Experiments on design work. Information Systems Research.

  10. Hu, A., & Ou, M. (2025). From passive to active: How does algorithm awareness affect users' news seeking behavior on digital platforms. Telematics and Informatics, 102291. doi:10.1016/j.tele.2025.102291

  11. Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410. doi:10.5465/annals.2018.0174

  12. Kronblad, C., Essén, A., & Mähring, M. (2024). When justice is blind to algorithms: Multilayered blackboxing of algorithmic decision-making in the public sector. MIS Quarterly, 48(4), 1637–1662. doi:10.25300/MISQ/2024/18251

  13. Lv, X., Li, J., & Wang, Q. (2025). The dark side of recommendation algorithms in Chinese mass short video apps: Effect of perceived over-recommendation on users' cognitive dissonance and discontinuance intention. International Journal of Human–Computer Interaction, 41(11), 6701–6715. doi:10.1080/10447318.2024.2383038

  14. Rinta-Kahila, T., Someh, I., Gillespie, N., Indulska, M., & Gregor, S. (2022). Algorithmic decision-making and system destructiveness: A case of automatic debt recovery. European Journal of Information Systems, 31(3), 313–338. doi:10.1080/0960085X.2021.1960905

  15. Stelmaszak, M., Möhlmann, M., & Sørensen, C. (2025). When algorithms delegate to humans: Exploring human-algorithm interaction at Uber. MIS Quarterly, 49(1), 305–330. doi:10.25300/MISQ/2024/17911

  16. Tanriverdi, H., & Akinyemi, J. P. O. (2025). Algorithmic social injustices: Antecedents and mitigations. MIS Quarterly, 49(4), 1417–1448. doi:10.25300/MISQ/2025/18314

  17. Teodorescu, M. H., Morse, L., Awwad, Y., & Kane, G. C. (2021). Failures of fairness in automation require a deeper understanding of human–ML augmentation. MIS Quarterly, 45(3), 1483–1500. doi:10.25300/MISQ/2021/16535

  18. Zheng, H., Luo, C., Song, S., Ou, M., & Hansen, P. (2025). The algorithmic influence: What drives people to use AI-powered social media as a source of health information? International Journal of Human–Computer Interaction, 1–13. doi:10.1080/10447318.2025.2495115

  19. Zhou, T., & Wang, M. (2025). Examining generative AI user discontinuance from a dual perspective of enablers and inhibitors. International Journal of Human–Computer Interaction, 1–11. doi:10.1080/10447318.2025.2470280

  • Trustworthy Content Governance and Safe Generative AI in the Age of Large Language Models

Wenping Zhang, Renmin University

Lele Kang, Nanjing University

Juliana Sutanto, Monash University

The rapid proliferation of large language models (LLMs), multimodal generative AI systems, and autonomous content agents is reshaping how digital information is produced, accessed, circulated, and governed (Feher, 2025; Zhao & Zeng, 2026). These systems have transitioned from document retrieval to dialog-based interaction, and increasingly from passive generation to active reasoning—directly aligning with contemporary transformations in the field of information processing and management.

However, this transition also introduces unprecedented risks. Synthetic misinformation, deepfake media, hallucinations, bias amplification, adversarial attacks, and value misalignment can threaten both individuals and institutions (Xu et al., 2025). Meanwhile, data contamination, opaque training corpora, and the scale of AI-generated content challenge long-standing assumptions about provenance, trust, and the integrity of information ecosystems (Ansari, 2025; Hou et al., 2025). Effective content governance—spanning technical safeguards, human-centered oversight, and regulatory frameworks—is now critical to ensuring the safe, responsible, and trustworthy deployment of generative AI.

This Special Issue aims to advance the state of knowledge around computational, behavioral, and governance approaches for safe content generation, evaluation, and reasoning. We invite submissions that explore foundational theories, empirical studies, system designs, risk assessments, and real-world deployments that support trustworthy AI content ecosystems. Our goal is to synthesize interdisciplinary insights and guide the responsible evolution of generative AI in information-rich environments.

We welcome submissions on the following topics, including but not limited to:

  • Frameworks and theories of content governance for LLMs and generative AI

  • Safety evaluation: hallucination detection, bias mitigation, toxicity reduction

  • Content provenance, watermarking, and deepfake detection

  • Model alignment, value-sensitive design, and responsible generation

  • Human–AI collaboration in content auditing and moderation

  • Synthetic misinformation, influence operations, and mitigation strategies

  • Dataset contamination, red-teaming, and adversarial evaluation

  • Trust, explainability, and transparency in AI content systems

  • Governance and regulatory perspectives on generative AI

  • Case studies and system designs for safe deployment of LLM-driven applications

References

  1. Ansari, M. S. (2025). AI Slop and Data Pollution in the Age of Generative AI: Strategic Risks, Economic Consequences, and Governance Pathways for Business, Management, and the Creative Industries. SSRN Working Paper (November 20, 2025).

  2. Feher, K. (2025). Generative AI, Media, and Society. Routledge.

  3. Hou, L., Min, Y., Pan, X., & Gong, Z. (2025). Distinguishing AI-generated versus real tourism photos: Visual differences, human judgment, and deep learning detection. Information Processing & Management, 62(5), 104218.

  4. Xu, Q., Mu, W., Li, J., Sun, T., & Jiang, X. (2025). Advancements in AI-Generated Content Forensics: A Systematic Literature Review. ACM Computing Surveys, 58(3), 1-36.

  5. Zhao, Y., & Zeng, S. (2026). Leveraging deep active learning for multimodal health misinformation detection on social media. Information Processing & Management, 63(3), 104540.

  • From Posts to Dialogues: Conversational and Contextual AI for Mental Health

Javier Parapar, University of A Coruña, A Coruña, Spain

Anxo Perez, University of A Coruña, A Coruña, Spain

Xi Wang, The University of Sheffield, Sheffield, UK

Ana-Maria Bucur, Università della Svizzera italiana, Lugano, Switzerland

Fabio Crestani, Università della Svizzera italiana, Lugano, Switzerland

Over the past decade, eRisk and related shared-task initiatives have recognised social media as a vital yet challenging signal source for early risk detection in mental health, offering insights into individuals' lived experiences. Existing research in this area, however, has concentrated on static or post-level inference, analysing patterns from individual messages or aggregated user histories. Yet mental health is inherently contextual and dynamic. Language evolves over time, meaning shifts depending on the situation and audience, and platform features like threads, communities, reactions and moderation influence what users express and how they engage with others. This track aims to advance computational mental health research by prioritising context-awareness and conversational scenarios as key modelling and evaluation targets. We welcome a broad range of contributions, including the creation of novel datasets and resources, the development of new modelling approaches, and the proposal of evaluation frameworks that account for interaction, context, and temporal dynamics. Submissions may cover topics ranging from static text modelling to the processing of interactive contextual information, including conversational agents, multi-user discussions, longitudinal conversational data, or other realistic interaction settings in online communities.

We welcome submissions on the following topics, including but not limited to:

- Context-aware and conversational modelling for mental health risk detection

- Longitudinal and temporal approaches to detecting mental health risks

- Datasets and resources for research on conversational and contextual mental health

- Evaluation of conversational AI agents in realistic settings

  • Generative AI for Health and Well-being: Information Processing, Psychological Mechanisms, and Behavioral Impact


Background

Over the past decade, artificial intelligence (AI) chatbots have undergone rapid evolution from simple rule-based systems to sophisticated generative AI agents powered by large language models (LLMs). In this track, we use the term "Generative AI agent (GenAI)" to refer to any LLM-powered conversational system utilized in health and well-being contexts. The integration of advanced generative AI agents, such as ChatGPT and DeepSeek, into healthcare settings and everyday life offers unprecedented opportunities to support health information seeking, mental health management, and overall well-being (Jin et al., 2025). Evidence from recent research suggests that interactions with these agents can produce a significant positive impact on psychoeducation, cognitive-behavioral strategies, and mindfulness, reducing symptoms of depression and anxiety while improving self-care behaviors (Li et al., 2023; Nyakhar & Wang, 2025). Fundamentally, these agents act as interactive information systems that mediate users' access to critical health knowledge. These benefits are particularly relevant to patients and their caregivers, given global shortages of mental health professionals and barriers such as cost and stigma (American Psychological Association [APA], 2025).

Current Issues

Despite promising outcomes, critical challenges remain. From an informational perspective, concerns about accuracy, transparency, and bias persist, as responses from these technologies often compromise truthfulness (NBC News, 2025). Psychologically, the use of GenAI can enhance engagement and empathy, but it also carries risks of psychological dependency and displacement of human relationships (Herath, 2025). Behaviorally, key unresolved challenges include sustaining long-term adherence and preventing cognitive offloading, in which users rely excessively on AI for decision-making (Mayor, 2025). From a technical perspective, challenges in prompt engineering, retrieval-augmented generation (RAG) for medical accuracy, and interface design strategies that mitigate hallucinations are critical for successful deployment. Ethical and regulatory gaps exacerbate these risks, as most GenAI tools that offer therapy-like guidance do not have clinical approval (MedicalXpress, 2025).

An interdisciplinary approach is essential for tackling these challenges because GenAI interactions are complex and influence multiple usage dimensions (Yan et al., 2024). GenAI agents are fundamentally information systems: they mediate access to knowledge, organize content, and shape both information-seeking and information-utilization behavior. Information science is therefore an essential reference discipline. Simultaneously, these agents affect emotions, cognition, and other psychological states. Psychology offers theories and metrics (e.g., subjective well-being, resilience) to help understand and assess the impact of AI on human psychological states. Beyond feelings and cognitions, AI also influences actions, such as adherence to health routines, stress management, and lifestyle changes (Huang et al., 2025). A behavioral science perspective informs the design of agents and intervention strategies that sustain positive behavior and mitigate negative behavioral outcomes (e.g., addiction, cognitive offloading) over time. Quality information alone does not guarantee well-being; it must be delivered interactively through human-centered design strategies that are psychologically supportive and safe, and eventually translated into healthy behaviors. Hence, this track will focus on leveraging synergies among the informational, psychological, and behavioral perspectives to advance research in this area.

Impact and Relevance

This track will explore how Generative AI interactions influence well-being across three dimensions:

  • Information Processing – Ensuring accuracy, provenance, algorithmic fairness, and humane attention design in conversational content.

  • Psychological Mechanisms – Understanding mechanisms of emotional support, alliance formation, and cultural sensitivity.

  • Behavioral Impact – Examining engagement patterns, habit formation, and integration with real-world health behaviors.

Possible topics include (but are not limited to):

Theme 1: Information Processing & System Design

  • Information Quality in Generative AI Responses

  • Impact of Source Transparency on User Trust and Well-Being

  • Bias and Fairness in Health & Wellness Information Delivered by AI Agents

  • The Role of Explainability in Generative AI Interactions

  • Integration of Generative AI into Physical Robots

  • Design Patterns for Trustworthy and Safe Health-Focused Generative AI

  • Retrieval-Augmented Generation (RAG) for Mental Health Q&A

  • Evaluation Metrics for Generative AI in Therapeutic Contexts

Theme 2: Psychological Mechanisms & Human-AI Interactions

  • Information-Seeking Behavior in Generative AI-Mediated Environments

  • Effects of Language Style on Well-Being Outcomes

  • Mechanisms of Emotional Support in Human-Agent Interactions

  • Impact of Agent Anthropomorphism

  • Cultural Sensitivity in Generative AI

  • Therapeutic Alliance in Human–Agent Interaction

  • Managing Emotional Dependency in Long-Term Generative AI Use

  • Effects of Conversational Tone and Personality

Theme 3: Behavioral Impact & Longitudinal Engagement

  • Generative AI as Behavioral Change Agents

  • Gamification and Engagement Strategies for Sustaining Well-Being Interventions

  • Longitudinal Effects of Generative AI Use

  • Behavioral Nudges in Generative AI Interventions

  • Romantic Relationships with AI Agents

  • Generative AI Addiction

  • Coping Mechanisms and Stress Management via AI Agents

  • Impact of Generative AI on Health Literacy and Information Appraisal

Drawing on interdisciplinary perspectives, the track aims to advance our scientific understanding of the impact of Generative AI interactions on well-being and the mechanisms underlying this impact. Research reported in this track will also inform practice and standards for safe, effective, and ethical AI design, thereby bridging the gap between research and practice to maximize benefits while mitigating potential harms. Given the accelerating adoption of GenAI agents in everyday life, this track serves as a timely and essential platform for bringing together pertinent research and helping to shape responsible innovation enabled by AI.

References

American Psychological Association. (2025). Health advisory: Use of generative AI chatbots and wellness applications for mental health. https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-chatbots-wellness-apps

Herath, R. (2025). Emotionally intelligent chatbots in mental health: A review of psychological, ethical, and developmental impacts. International Journal of Computer Applications, 187(29), 1–10.

Huang, Y., Wang, W., Zhou, J., Zhang, L., Lin, J., Liu, H., ... & Dong, W. (2025). Integrative modeling enables ChatGPT to achieve average level of human counselors performance in mental health Q&A. Information Processing & Management, 62(5), 104152.

Jin, I., Tangsrivimol, J. A., Darzi, E., Hassan Virk, H. U., Wang, Z., Egger, J., Hacking, S., Glicksberg, B. S., Strauss, M., & Krittanawong, C. (2025). DeepSeek vs. ChatGPT: Prospects and challenges. Frontiers in Artificial Intelligence, 8, 1576992.

Li, H., Zhang, R., Lee, Y.-C., Kraut, R. E., & Mohr, D. C. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digital Medicine, 6, 236.

Mayor, E. (2025). Chatbots and mental health: A scoping review of reviews. Current Psychology, 44, 13619–13640.

MedicalXpress. (2025, December 5). Researchers call for clear regulations on AI tools used for mental health interactions. https://medicalxpress.com/news/2025-12-ai-tools-mental-health-interactions.html

NBC News. (2025, December 4). AI chatbots used inaccurate information to change people's political opinions, study finds. https://www.nbcnews.com/tech/tech-news/ai-chatbots-used-inaccurate-information-change-political-opinions-stud-rcna247085

Nyakhar, S., & Wang, H. (2025). Effectiveness of artificial intelligence chatbots on mental health & well-being in college students: A rapid systematic review. Frontiers in Psychiatry, 16, 1621768.

Yan, W., Hu, B., Liu, Y. L., Li, C., & Song, C. (2024). Does usage scenario matter? Investigating user perceptions, attitude and support for policies towards ChatGPT. Information Processing & Management, 61(6), 103867.

Information Processing & Management Conference: Plenary presentations and interactive discussions on submitted papers and posters
