Boosting Efficiency and Transparency in AI-Driven Anomaly Detection

Scientists at the University of Bristol's School of Computer Science have made significant progress in addressing the problem of Artificial Intelligence "hallucinations" and enhancing the dependability of anomaly detection algorithms in Critical National Infrastructures (CNI). The study has been published in the Proceedings of the 10th ACM Cyber-Physical System Security Workshop.

Dr. Sridhar Adepu accepting the Best Paper Award at the 10th ACM Cyber-Physical System Security Workshop at the ACM ASIACCS2024 conference. Image Credit: University of Bristol

Recent developments, especially regarding sensor and actuator data in CNIs, have highlighted AI's potential for anomaly detection. These AI systems, however, frequently require long training periods and struggle to pinpoint which specific components are in an anomalous state. Moreover, the lack of transparency in AI's decision-making processes raises questions regarding accountability and trust.

The group implemented several efficiency-boosting strategies to help combat this, such as:

Enhanced Anomaly Detection: The researchers used two state-of-the-art anomaly detection algorithms that deliver detection performance comparable to existing approaches while requiring far shorter training times and detecting anomalies faster. The algorithms were evaluated on a dataset from SWaT, the Singapore University of Technology and Design's operational water treatment testbed (a rough sketch of this step appears after this list).

Explainable AI (XAI) Integration: The team combined the anomaly detectors with eXplainable AI (XAI) models to improve trust and transparency. This approach makes AI decisions easier to interpret, enabling human operators to understand and validate AI recommendations before making crucial decisions. The team also assessed how effective the various models were, to determine which XAI models best support human understanding (see the second sketch after this list).

Human-Centric Decision Making: The study highlights how crucial human supervision is to AI-driven decision-making. The team wants to ensure that AI functions as a decision-support tool rather than an unquestioning oracle by explaining AI recommendations to human operators. This methodology ensures accountability by having human operators make the final decisions guided by AI insights, policies, rules, and regulations.

Scoring System Development: A meaningful scoring system is being developed to gauge how confident and correct the AI's explanations are. This score aims to help human operators assess the dependability of AI-driven insights.
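
To make the anomaly-detection step above more concrete, below is a minimal, illustrative sketch in Python. It does not reproduce the paper's two algorithms or the SWaT preprocessing pipeline: the file names, column names, and the choice of an off-the-shelf Isolation Forest are assumptions made purely for illustration.

```python
# Illustrative sketch only: train a generic anomaly detector on
# normal-operation sensor/actuator readings, then flag anomalous samples.
# File and column names below are hypothetical, SWaT-like placeholders.
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

normal = pd.read_csv("swat_normal.csv")   # normal operation only (hypothetical file)
test = pd.read_csv("swat_attack.csv")     # mixed normal/attack records (hypothetical file)

# Keep only the numeric sensor/actuator columns.
features = [c for c in normal.columns if c not in ("Timestamp", "Normal/Attack")]

scaler = StandardScaler().fit(normal[features])
detector = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
detector.fit(scaler.transform(normal[features]))

# predict() returns -1 for samples the model considers anomalous.
labels = detector.predict(scaler.transform(test[features]))
scores = detector.decision_function(scaler.transform(test[features]))
print(f"{(labels == -1).sum()} of {len(labels)} samples flagged as anomalous")
```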
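
The XAI integration can be sketched in a similarly hedged way: attribute a flagged sample's anomaly score to individual sensors and actuators so an operator can see which components drove the alert. SHAP is used here only as an assumed stand-in for the XAI models the team evaluated, and the variables (detector, scaler, features, normal, test, labels) come from the hypothetical sketch above.

```python
# Illustrative sketch only: explain one flagged sample by attributing the
# detector's anomaly score to each sensor/actuator feature with SHAP.
import shap

score_fn = lambda X: detector.decision_function(X)  # lower score = more anomalous

# Small background sample of normal operation for the explainer.
background = shap.sample(scaler.transform(normal[features]), 100)
explainer = shap.KernelExplainer(score_fn, background)

# Explain the first sample the detector flagged as anomalous.
flagged = scaler.transform(test[features])[labels == -1][:1]
shap_values = explainer.shap_values(flagged, nsamples=200)

# Features with the most negative contributions pushed the score toward "anomalous".
ranking = sorted(zip(features, shap_values[0]), key=lambda kv: kv[1])
for name, contribution in ranking[:5]:
    print(f"{name}: {contribution:+.4f}")
```

A confidence score of the kind described above could then be layered on top of such attributions, but the team's scoring system is still under development and is not sketched here.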

In addition to increasing the effectiveness and dependability of AI systems in CNIs, these developments raise overall accountability and trust by ensuring that human operators continue to play a crucial role in the decision-making process.

Humans learn by repetition over a longer period of time and can only work shorter hours without becoming error-prone. This is why, in some cases, we use machines that can carry out the same tasks in a fraction of the time and at a reduced error rate.

Dr. Sarad Venugopalan, Study Co-Author, University of Bristol

Venugopalan explained, “However, this automation, involving cyber and physical components, and the subsequent use of AI to solve some of the issues brought by the automation, is treated as a black box. This is detrimental because it is the personnel using the AI recommendations who are held accountable for the decisions made, not the AI itself.”

In our work, we use explainable AI to increase transparency and trust, so the personnel using the AI are informed why the AI made the recommendation (for our domain use case) before a decision is made.

Dr. Sarad Venugopalan, Study Co-Author, University of Bristol

Dr. Sridhar Adepu is the supervisor of Mathuros Kornkamon, whose MSc thesis includes this research.

Dr. Adepu added: “This work shows how WaXAI is revolutionizing anomaly detection in industrial systems with explainable AI. By integrating XAI, human operators gain clear insights and enhanced confidence to handle security incidents in critical infrastructure.”

The paper won the Best Paper Award at the 10th ACM Cyber-Physical System Security Workshop at the ACM ASIACCS2024 conference.

Journal Reference:

Mathuros, K., et al. (2024) WaXAI: Explainable Anomaly Detection in Industrial Control Systems and Water Systems. Proceedings of the 10th ACM Cyber-Physical System Security Workshop. doi.org/10.1145/3626205.3659147
