Advancements in Privacy, Security, and Integrity of Neural Networks: Attacks and Defences

Posted on: 30 March 2026


CALL FOR PAPERS


Publisher: Emerald Publishing



Introduction

The widespread deployment of deep neural networks (DNNs) in healthcare, finance, autonomous systems, and generative AI has heightened concerns about security, robustness, and integrity. AI models are increasingly vulnerable to adversarial perturbations, poisoning, backdoor attacks, unauthorized retraining, and model extraction, all of which can silently compromise performance in safety-critical applications.

Since trained models represent significant intellectual and computational investments, protecting ownership, detecting tampering, and verifying authenticity have become key research priorities. Techniques such as watermarking, fingerprinting, reversible data hiding, and forensic analysis offer promising solutions without degrading model performance.

The rapid growth of generative AI further raises critical issues of authenticity and provenance. Integrity-preserving mechanisms — along with complementary tools such as blockchain-based audit trails — can support secure verification and accountability across AI systems.


Scope & Significance

This Special Issue invites contributions on adversarial robustness, integrity-preserving methods, and secure verification frameworks for neural networks and AI-generated systems. It seeks to bring together researchers working at the intersection of deep learning, cybersecurity, and digital forensics to advance both theoretical understanding and practical defences against emerging threats to AI model integrity and privacy.


List of Topic Areas

Manuscripts are invited on themes including, but not limited to:

  1. Adversarial perturbation detection and certified defences

  2. Model poisoning and backdoor attack mitigation

  3. Integrity verification and tamper detection in neural networks

  4. Digital watermarking and fingerprinting for model protection

  5. Neural network forensics and authenticity verification

  6. Detection of unauthorized retraining and model extraction

  7. Security of generative AI systems

  8. Privacy-preserving and secure neural network design

  9. Blockchain-supported model provenance and audit mechanisms


Guest Editors

Dr. Rajeev Kumar, Delhi Technological University, India (rajeevkumar@dtu.ac.in)

Prof. Kevin Curran, Ulster University, UK (kj.curran@ulster.ac.uk)

Prof. Minoru Kuribayashi, Tohoku University, Japan (kminoru@tohoku.ac.jp)


Key Deadlines

Manuscript Submission Opens: 20 February 2026

Manuscript Submission Deadline: 30 August 2026


Submission & Review Process

All submitted manuscripts will undergo a formal single-blind peer-review process. Papers will be handled on a first-come, first-served basis.

  • Papers will be published open access upon acceptance

  • Accepted papers will later be compiled into the Special Issue collection

  • Manuscripts not accepted within the publication window may be transferred to the journal's regular track

For submission instructions and to submit your paper, visit the official journal submission page on the Emerald Publishing website.

Note: Submitted articles must not have been previously published, nor may they be under consideration for publication elsewhere while under review for this journal.

