Call for Papers: "Explainable AI and Network Science for Social Systems and Collective Intelligence"
Journal: Information Processing & Management
Publisher: Elsevier
Submission Deadline: 31 Mar 2027
| Submission Portal | Article Type | Author Guidelines |
|---|---|---|
| | "VSI: AINet" | |
Key Requirements:
Integration of XAI and network science for social systems is mandatory
Focus on multilayer and higher-order networks, beyond pairwise analysis
Selecting the "VSI: AINet" article type during submission is critical
Overview
Black-box AI limits the governance of social platforms at a time when LLMs and GNNs are used to analyze the multilayer networks, hypergraphs, and group interactions that shape collective intelligence. This special issue addresses information disorder and polarization through explainable methods that reveal influence dynamics and consensus formation. It is the first issue to combine XAI and network science for trustworthy human-AI collective decision-making amid real-time digital communication.
Key Research Themes
Advanced Network Modeling:
Multilayer networks and higher-order hypergraphs
Dynamic influence and leadership identification
Explainable AI Applications:
XAI for social behavior prediction and causal inference
Knowledge graphs for explaining collective intelligence
Human-AI Systems:
LLM agents in online communities
Reputation and trust formation in adversarial settings
Collective Behavior Analysis:
Opinion dynamics and polarization modeling
Early-warning signals and risk forecasting
Submission Instructions
1. Access the EVISE Submission System
2. Register or log in (new users must create an EVISE account)
3. Submit to the special issue category "VSI: AINet"
4. CRITICAL: Select "VSI: AINet" from the Article Type dropdown
5. Format the manuscript strictly according to the Author Guidelines
6. Originality: manuscripts must not be under review elsewhere
Timeline: Call published 01 Apr 2026 | Submissions close 31 Mar 2027
Guest Editor Team
Dr. Tao Wen, Alliance Manchester Business School, University of Manchester, UK (tao.wen@manchester.ac.uk)
Asst. Prof. Xinyi Zhou, Boise State University, USA (xinyizhou@boisestate.edu)
Prof. Richard Allmendinger, Alliance Manchester Business School, University of Manchester, UK (richard.allmendinger@manchester.ac.uk)
Assoc. Prof. Kang Hao Cheong, Nanyang Technological University, Singapore (kanghao.cheong@ntu.edu.sg)
Why This Issue Matters
Five billion social media users generate 500 million tweets per day, calling for explainable collective-intelligence models that go beyond black-box predictions. Misinformation cascades and AI-mediated polarization demand trustworthy XAI frameworks for platform governance. A 20% citation advantage accelerates responsible AI deployment addressing human-AI decision-making and network governance at global scale.