Information Processing & Management Conference: Plenary presentations and interactive discussions on submitted papers and posters

Paper and poster manuscripts are now invited on the following thematic tracks. They should be submitted using the online abstract submission system.
Deadline: 30 June 2026
AI Decision Support Based on Multimodal Information (DIM)
Data Elements and Data-intelligence Empowerment (IP&M)
Data-centric AI for Social Science (IP&M)
Explainable and Responsible Information Systems for Social Good (DIM)
Generative AI for Health and Well-being: Information Processing, Psychological Mechanisms, and Behavioral Impact (DIM)
Generative AI for Knowledge Work: Trust, Productivity, and Information Quality (DIM)
Poster Presentation only
Chang Liu, Peking University
Soo Young Rieh, University of Texas at Austin
Luanne Sinnamon, University of British Columbia
Orland Hoeber, University of Regina
Xiaoxuan Song, Nanjing Agricultural University
Generative artificial intelligence (GenAI) is transforming academic search by enabling automated or semi-automated learning environments with conversational and synthesis capabilities. This paradigm shift raises fundamental questions about learner agency, higher-order cognition and metacognition, critical thinking, digital literacy, digital fairness, and academic integrity in GenAI-enhanced academic search. This special track provides a forum for interdisciplinary dialogue among researchers in information behavior, interactive information retrieval, learning science, and human‑centred AI. We seek contributions that advance theoretical models, empirical studies, and system designs that place the human at the centre of GenAI‑enhanced academic search.
We welcome submissions on the following topics, including but not limited to:
Agency and cognitive models of academic searchers using GenAI tools (stand-alone or embedded within academic search platforms)
GenAI-enhanced academic search systems
Impact of GenAI on academic search and learning processes
Search as Learning in human-AI collaborative contexts
Supporting critical and creative thinking using GenAI-enhanced academic search
Information behavior in GenAI-assisted academic search
Evaluation and metrics for human-AI collaborative search
Literacy cultivation for GenAI-assisted academic search
Ethical frameworks for responsible GenAI-assisted academic search
Tools and methods for information quality assessment in GenAI-assisted academic search
GenAI scaffolding for metacognition in the context of academic search
Ben He, Zhaochun Ren, Xiao Wang, Xuanang Chen, Ying Zhou
We welcome submissions covering a wide range of topics on ranking and recommendation methods in the LLM era. Accordingly, this track aims to convene researchers and practitioners across information retrieval, learning to rank, recommender systems, NLP/LLMs, trustworthy AI, and human-AI interaction, fostering cross-community dialogue on building effective and trustworthy information access systems in the foundation-model era. Authors of all accepted full papers will be invited to submit to a special issue of Information Processing & Management (IF = 6.9, Q1).
We welcome submissions on the following topics, including but not limited to:
Foundation-model integrity in the LLM era
Robustness of FMs/LLMs under distribution shift, adversarial prompts, and noisy retrieval
Hallucination analysis and mitigation in RAG or agentic systems
Fairness, bias, and exposure analysis for FM-driven search and recommendation
Privacy, safety, and content-filtering mechanisms for FM-centric IR systems
Transparency, explanation, and interpretability for multi-stage FM pipelines
Governance, monitoring, and auditing frameworks for foundation-model services
Ranking/recommendation methods in the LLM era
FM/LLM-based learning-to-rank and LLM-as-ranker architectures
Late-interaction and multi-vector indexing for FM-based retrieval
Large search models and unified architectures integrating retrievers, rerankers, and LLMs
Query understanding, rewriting, and reformulation with FMs/LLMs
Conversational and session-aware ranking with memory and user modelling
Efficient and scalable ranking under long contexts and high-throughput constraints
LLM-based ranking for recommender systems, generative recommendation, and LLM4Rec
Interplay between integrity and ranking
Integrity-aware objectives and training schemes for ranking in FM/LLM pipelines
Joint optimisation of retrieval, ranking, and generation for robust RAG
LLM-assisted relevance judgement and automatic labelling for evaluation
Benchmarks and datasets capturing integrity issues (hallucinations, bias, safety) in ranking
Human-in-the-loop protocols combining human expertise with LLM support for evaluation and monitoring
Domain-specific case studies (e.g., scientific, biomedical, legal, financial, educational) highlighting integrity-ranking trade-offs
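To make one of the topics above concrete — integrity-aware objectives for ranking — the sketch below reranks candidate documents by combining a relevance score with a hallucination-risk penalty. This is purely an illustrative toy, not a method from any submitted work: the function name, the linear scoring rule, and all scores are hypothetical, and a real pipeline would obtain both signals from learned models.

```python
def integrity_aware_rerank(candidates, risk_weight=0.5):
    """Order candidates by relevance minus a weighted integrity penalty.

    `candidates` is a list of (doc_id, relevance, risk) triples, where
    `relevance` and `risk` are assumed to lie in [0, 1]. A higher
    `risk_weight` trades relevance away for integrity more aggressively.
    """
    scored = [
        (doc_id, relevance - risk_weight * risk)
        for doc_id, relevance, risk in candidates
    ]
    # Highest combined score first.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


# Hypothetical example: a highly relevant but risky document ("d1")
# is demoted below a slightly less relevant, much safer one ("d2").
ranking = integrity_aware_rerank([
    ("d1", 0.90, 0.80),
    ("d2", 0.75, 0.10),
    ("d3", 0.40, 0.05),
])
```

Even this trivially simple rule surfaces the trade-off that several topics in this track target: how much measured integrity risk should cost a document in exposure.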
Kevin Roitero, Johanne Trippas, Mengdie Zhuang
Scope
Human reasoning vs. LLM reasoning, including:
- Alignment & divergence in reasoning behaviors
- Faithfulness, robustness, reliability
- Human–AI hybrid reasoning
- Evaluation, benchmarks, and cognitive perspectives
Editors:
Xi Chen, Yunnan University, Kunming, China
Ivan Wen, University of Hawai‘i at Mānoa, Honolulu, United States
Rongheng Lin, Beijing University of Posts and Telecommunications, Beijing, China
Qixing Qu, University of International Business and Economics, Beijing, China
Xiaoyu Song, Oxford University, Oxford, United Kingdom
Artificial intelligence (AI) has evolved from a purely instrumental technology into a quasi-subjective collaborator in human reasoning, decision-making, and creative processes (Hou et al., 2025). At its core, AI seeks to emulate aspects of human intelligence through algorithmic systems that formalize rules, procedures, and learning mechanisms governing data processing, pattern recognition, and output generation (Berente et al., 2021; Collins et al., 2021; Rinta-Kahila et al., 2022; Stelmaszak et al., 2025). Algorithms thus constitute the foundational architecture of AI, defining how intelligence is operationalized and deployed across technological contexts (Ågerfalk, 2020; Kellogg et al., 2020).
As AI systems proliferate across diverse domains of social life, algorithmic logic has become deeply embedded in social structures and everyday practices, reshaping modes of interaction, governance, and value creation (Benbya et al., 2021; Kronblad et al., 2024). This transformation, however, necessitates a renewed affirmation of the distinctiveness of human cognition, emotion, and moral judgment—capacities that remain fragile yet irreplaceable in conferring ethical meaning upon technological progress (Glickman & Sharot, 2025). While algorithmic systems undeniably enhance efficiency and innovation, they are simultaneously accompanied by growing risks of algorithmic anomie—a condition characterized by normative misalignment between algorithmic operations and established ethical, legal, and social values. Such deviations pose substantial threats to social order, fairness, justice, and human well-being (Bengio et al., 2024; Tanriverdi & Akinyemi, 2025; Teodorescu et al., 2021).
Algorithmic anomie manifests through multiple, interrelated dimensions. First, algorithmic bias and discriminatory outcomes emerge when biased training data or flawed model design reproduce or amplify existing social inequalities (Zhou & Wang, 2025). Second, opacity and the lack of accountability characterize many advanced algorithmic systems, particularly “black-box” models whose internal logic remains inaccessible or incomprehensible to affected stakeholders (Hu & Ou, 2025). Third, ethical boundary erosion occurs when algorithmic applications intrude excessively into domains that demand contextual sensitivity, moral reasoning, and human judgment (Craig, 2025). Fourth, social autonomy may be undermined as algorithms exert disproportionate influence over individual decision-making and collective behavior, subtly shaping preferences, choices, and norms (Zheng et al., 2025). Fifth, when systems optimized for localized objectives interact within complex socio-technical ecosystems, they may generate systemic vulnerabilities, including heightened information security risks (Lv et al., 2025).
Taken together, algorithmic anomie reflects a structural disjunction between the rationality embedded in algorithm design and broader normative commitments to justice, autonomy, and social well-being. As intelligent systems increasingly permeate the fabric of social life, it becomes imperative to ensure that technological innovation does not eclipse emotion, conscience, and human dignity (Bankins & Formosa, 2023). The challenges posed by algorithmic anomie are therefore not merely technical in nature but fundamentally social, ethical, and civilizational.
In response to these concerns, a critical question arises: What does information “processing” and “management” signify in an era when intelligent systems no longer merely retrieve information but actively infer, collaborate, and co-create with humans? Addressing this question requires sustained scholarly engagement that integrates theoretical depth with empirical rigor, builds upon established scientific foundations, and advances responsible, human-centered trajectories for cognitive information systems in the age of AI.
Accordingly, this call for papers invites interdisciplinary contributions that critically examine algorithmic anomie and its societal implications, while exploring pathways toward ethical, transparent, and accountable AI. Topics of interest include, but are not limited to:
Manifestations, representative cases, and underlying causes of algorithmic anomie in artificial intelligence systems;
The impacts of algorithmic discrimination, bias, and manipulation on social fairness and justice;
The erosion of information ecosystems and public cognition under the influence of algorithmic anomie.
Ethical dilemmas related to data misuse, privacy violations, and associated social risks;
Challenges and frameworks for enhancing algorithmic transparency, interpretability, and accountability;
Multidimensional approaches to algorithmic governance from legal, technical, and ethical perspectives;
Policy recommendations and mechanisms for building social consensus around the responsible development of AI;
Future-oriented ethical norms and models of responsible algorithmic innovation.
We look forward to receiving rigorous and insightful contributions that collectively advance understanding, mitigate social risks, and promote the sustainable and humane development of artificial intelligence—ensuring that algorithmic systems remain aligned with the guiding principle of algorithms for humanity.
References:
Ågerfalk, P. J. (2020). Artificial intelligence as digital agency. European Journal of Information Systems, 29(1), 1–8. doi:10.1080/0960085X.2020.1721947
Bankins, S., & Formosa, P. (2023). The ethical implications of artificial intelligence (AI) for meaningful work. Journal of Business Ethics, 185(4), 725–740.
Benbya, H., Davenport, T. H., & Pachidi, S. (2020). Artificial intelligence in organizations: Current state and future opportunities. MIS Quarterly Executive, 19(4). doi:10.2139/ssrn.3741983
Bengio, Y., Hinton, G., Yao, A., Song, D., Abbeel, P., Darrell, T., ... & Mindermann, S. (2024). Managing extreme AI risks amid rapid progress. Science, 384(6698), 842–845. doi:10.1126/science.adn0117
Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing artificial intelligence. MIS Quarterly, 45(3), 1433–1450. doi:10.25300/MISQ/2021/16274
Collins, C., Dennehy, D., Conboy, K., & Mikalef, P. (2021). Artificial intelligence in information systems research: A systematic literature review and research agenda. International Journal of Information Management, 60, 102383. doi:10.1016/j.ijinfomgt.2021.102383
Craig, M. J. (2025). Human-machine communication privacy management, privacy fatigue, and the conditional effects of algorithm awareness on privacy co-ownership in the social media context. Computers in Human Behavior, 108786. doi:10.1016/j.chb.2025.108786
Glickman, M., & Sharot, T. (2025). How human–AI feedback loops alter human perceptual, emotional and social judgements. Nature Human Behaviour, 9(2), 345–359.
Hou, J., Wang, L., Wang, G., Wang, H. J., & Yang, S. (2025). The double-edged roles of generative AI in the creative process: Experiments on design work. Information Systems Research.
Hu, A., & Ou, M. (2025). From passive to active: How does algorithm awareness affect users' news seeking behavior on digital platforms. Telematics and Informatics, 102291. doi:10.1016/j.tele.2025.102291
Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410. doi:10.5465/annals.2018.0174
Kronblad, C., Essén, A., & Mähring, M. (2024). When justice is blind to algorithms: Multilayered blackboxing of algorithmic decision-making in the public sector. MIS Quarterly, 48(4), 1637–1662. doi:10.25300/MISQ/2024/18251
Lv, X., Li, J., & Wang, Q. (2025). The dark side of recommendation algorithms in Chinese mass short video apps: Effect of perceived over-recommendation on users' cognitive dissonance and discontinuance intention. International Journal of Human–Computer Interaction, 41(11), 6701–6715. doi:10.1080/10447318.2024.2383038
Rinta-Kahila, T., Someh, I., Gillespie, N., Indulska, M., & Gregor, S. (2022). Algorithmic decision-making and system destructiveness: A case of automatic debt recovery. European Journal of Information Systems, 31(3), 313–338. doi:10.1080/0960085X.2021.1960905
Stelmaszak, M., Möhlmann, M., & Sørensen, C. (2025). When algorithms delegate to humans: Exploring human-algorithm interaction at Uber. MIS Quarterly, 49(1), 305–330. doi:10.25300/MISQ/2024/17911
Tanriverdi, H., & Akinyemi, J. P. O. (2025). Algorithmic social injustices: Antecedents and mitigations. MIS Quarterly, 49(4), 1417–1448. doi:10.25300/MISQ/2025/18314
Teodorescu, M. H., Morse, L., Awwad, Y., & Kane, G. C. (2021). Failures of fairness in automation require a deeper understanding of human–ML augmentation. MIS Quarterly, 45(3), 1483–1500. doi:10.25300/MISQ/2021/16535
Zheng, H., Luo, C., Song, S., Ou, M., & Hansen, P. (2025). The algorithmic influence: What drives people to use AI-powered social media as a source of health information? International Journal of Human–Computer Interaction, 1–13. doi:10.1080/10447318.2025.2495115
Zhou, T., & Wang, M. (2025). Examining generative AI user discontinuance from a dual perspective of enablers and inhibitors. International Journal of Human–Computer Interaction, 1–11. doi:10.1080/10447318.2025.2470280
Wenping Zhang, Renmin University
Lele Kang, Nanjing University
Juliana Sutanto, Monash University
The rapid proliferation of large language models (LLMs), multimodal generative AI systems, and autonomous content agents is reshaping how digital information is produced, accessed, circulated, and governed (Feher, 2025; Zhao & Zeng, 2026). These systems have transitioned from document retrieval to dialog-based interaction, and increasingly from passive generation to active reasoning—directly aligning with contemporary transformations in the field of information processing and management.
However, this transition also introduces unprecedented risks. Synthetic misinformation, deepfake media, hallucinations, bias amplification, adversarial attacks, and value misalignment can threaten both individuals and institutions (Xu et al., 2025). Meanwhile, data contamination, opaque training corpora, and the scale of AI-generated content challenge long-standing assumptions about provenance, trust, and the integrity of information ecosystems (Ansari, 2025; Hou et al., 2025). Effective content governance—spanning technical safeguards, human-centered oversight, and regulatory frameworks—is now critical to ensuring the safe, responsible, and trustworthy deployment of generative AI.
This Special Issue aims to advance the state of knowledge around computational, behavioral, and governance approaches for safe content generation, evaluation, and reasoning. We invite submissions that explore foundational theories, empirical studies, system designs, risk assessments, and real-world deployments that support trustworthy AI content ecosystems. Our goal is to synthesize interdisciplinary insights and guide the responsible evolution of generative AI in information-rich environments.
We welcome submissions on the following topics, including but not limited to:
Frameworks and theories of content governance for LLMs and generative AI
Safety evaluation: hallucination detection, bias mitigation, toxicity reduction
Content provenance, watermarking, and deepfake detection
Model alignment, value-sensitive design, and responsible generation
Human–AI collaboration in content auditing and moderation
Synthetic misinformation, influence operations, and mitigation strategies
Dataset contamination, red-teaming, and adversarial evaluation
Trust, explainability, and transparency in AI content systems
Governance and regulatory perspectives on generative AI
Case studies and system designs for safe deployment of LLM-driven applications
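As a toy illustration of the provenance and watermarking topic above, the sketch below implements a minimal green-list watermark detector in the spirit of hash-based LLM watermarking: a keyed hash of each adjacent token pair decides whether a token falls in the "green" half of the vocabulary, and watermarked text is expected to show a green fraction well above the roughly 0.5 of unwatermarked text. The function name, key, and threshold are illustrative assumptions, not part of any specific system discussed in this call.

```python
import hashlib


def green_fraction(tokens, key="demo-key"):
    """Fraction of tokens whose keyed hash (seeded by the previous token)
    lands in the 'green' half of the hash space.

    For text with no watermark this fraction hovers around 0.5; a
    watermarked generator that preferentially samples green tokens
    pushes it noticeably higher. Returns 0.0 when there are fewer
    than two tokens, since no (prev, cur) pair exists.
    """
    if len(tokens) < 2:
        return 0.0
    hits = 0
    for prev, cur in zip(tokens, tokens[1:]):
        digest = hashlib.sha256(f"{key}:{prev}:{cur}".encode()).digest()
        if digest[0] < 128:  # first hash byte splits the space in half
            hits += 1
    return hits / (len(tokens) - 1)


# Hypothetical usage: flag text whose green fraction is far above chance.
score = green_fraction("a short sample of possibly generated text".split())
suspicious = score > 0.75  # threshold chosen for illustration only
```

In practice a detector of this kind would use the model's tokenizer, a secret key shared with the generator, and a statistical test rather than a fixed threshold; the point here is only to make the topic concrete.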
References
Ansari, M. S. (2025). AI Slop and Data Pollution in the Age of Generative AI: Strategic Risks, Economic Consequences, and Governance Pathways for Business, Management, and the Creative Industries. SSRN Working Paper (November 20, 2025).
Feher, K. (2025). Generative AI, Media, and Society. Routledge.
Hou, L., Min, Y., Pan, X., & Gong, Z. (2025). Distinguishing AI-generated versus real tourism photos: Visual differences, human judgment, and deep learning detection. Information Processing & Management, 62(5), 104218.
Xu, Q., Mu, W., Li, J., Sun, T., & Jiang, X. (2025). Advancements in AI-Generated Content Forensics: A Systematic Literature Review. ACM Computing Surveys, 58(3), 1-36.
Zhao, Y., & Zeng, S. (2026). Leveraging deep active learning for multimodal health misinformation detection on social media. Information Processing & Management, 63(3), 104540.
Javier Parapar, University of A Coruña, A Coruña, Spain
Anxo Perez, University of A Coruña, A Coruña, Spain
Xi Wang, The University of Sheffield, Sheffield, UK
Ana-Maria Bucur, Università della Svizzera italiana, Lugano, Switzerland
Fabio Crestani, Università della Svizzera italiana, Lugano, Switzerland
Over the past decade, eRisk and related shared-task initiatives have recognised social media as a vital yet challenging signal source for early risk detection in mental health, offering insights into individuals' lived experiences. To date, however, research in this area has concentrated on static or post-level inference, analysing patterns from individual messages or aggregated user histories. Yet mental health is inherently contextual and dynamic. Language evolves over time, meaning shifts depending on the situation and audience, and platform features like threads, communities, reactions and moderation influence what users express and how they engage with others. This track aims to advance computational mental health research by prioritising context-awareness and conversational scenarios as key modelling and evaluation targets. We welcome a broad range of contributions, including the creation of novel datasets and resources, the development of new modelling approaches, and the proposal of evaluation frameworks that account for interaction, context, and temporal dynamics. Submissions may cover topics ranging from static text modelling to the processing of interactive contextual information, including conversational agents, multi-user discussions, longitudinal conversational data, or other realistic interaction settings in online communities.
We welcome submissions on the following topics, including but not limited to:
- Context-aware and conversational modelling for mental health risk detection
- Longitudinal and temporal approaches to detecting mental health risks
- Datasets and resources for research on conversational and contextual mental health
- Evaluation of conversational AI agents in realistic settings
