
AI and Cybersecurity: A Risk-Based Approach for a Secure Future

Tech
By 24matins.uk, published 10 February 2025 at 14:22, updated 10 February 2025 at 14:22.

In response to escalating threats, international experts are offering a risk analysis of AI and key recommendations to ensure the development of safe and trustworthy systems.

A Shared Vision for Safe AI

International experts, under the auspices of the National Agency for the Security of Information Systems (ANSSI), have collaboratively produced a document promoting a “risk-based approach” to the use of artificial intelligence (AI) systems. This unified effort comes in anticipation of the AI Summit and aims to balance the opportunities and risks of AI technology in a landscape where cyber threats are constantly evolving.

AI: A Ubiquitous Yet Risky Technology

Since its inception in the 1950s, AI has impacted numerous sectors including defense, energy, health, and finance. The growing use of large language models (LLMs) should prompt stakeholders to assess the associated risks, including those related to cybersecurity. “Malicious actors could exploit AI technology vulnerabilities and potentially compromise their use.” Mitigating these risks is crucial to fostering trustworthy AI.

Risk Analysis for Secure AI

The document highlights the significance of cybersecurity in AI systems by presenting a high-level risk analysis. It emphasizes the need to control cyber risks associated with AI systems and offers solutions to achieve this, drawing on past recommendations from the NCSC-UK and CISA.

This risk analysis addresses systems incorporating AI components and provides an overview of threats to these systems, rather than an exhaustive list of vulnerabilities.
– Data poisoning: manipulation of training datasets to alter model performance.
– Sensitive information extraction: retrieving confidential data from the AI model.
– Evasion attacks: altering inputs to manipulate system decisions.
– Infrastructure compromise: exploiting flaws in AI model hosting and management.
– Threats from interconnections: AI systems often connect to other systems, increasing the risk of an attack spreading laterally.
– Human and organizational gaps: lack of training and over-reliance on automation.
– Malicious use of AI: automating and enhancing attacks.
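To make the first threat concrete, here is a minimal sketch of data poisoning. The nearest-centroid classifier and all values below are illustrative assumptions, not from the experts' report: an attacker who can inject mislabeled points into the training set shifts a class centroid and flips the model's decision on an input it previously classified correctly.

```python
def centroid(points):
    """Mean of a list of 1-D training points."""
    return sum(points) / len(points)

def predict(x, c0, c1):
    """Assign x to whichever class centroid is closer."""
    return 0 if abs(x - c0) <= abs(x - c1) else 1

# Clean training data: class 0 clusters near 0, class 1 near 10.
class0 = [0.0, 1.0, 2.0]
class1 = [9.0, 10.0, 11.0]
print(predict(6.5, centroid(class0), centroid(class1)))  # 1: closer to class 1

# Poisoning: the attacker injects points near class 1 but labeled class 0,
# dragging the class-0 centroid toward the decision boundary.
poisoned0 = class0 + [8.0, 9.0]
print(predict(6.5, centroid(poisoned0), centroid(class1)))  # 0: decision flipped
```

The same intuition scales to real models: only a small fraction of corrupted training data can be enough to move a decision boundary, which is why the report stresses controlling the integrity of training datasets.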

Preventive and Security Measures

AI is a major transformative force but also poses new cybersecurity challenges. Through this analysis, international experts advocate for a proactive risk management approach to ensure the development and use of trustworthy AI systems.
Recommendations to enhance AI system security include:
– Security by design
– Ongoing monitoring and maintenance
– Supply chain evaluation
– Managing interconnections
– Awareness and training
– Governance and regulatory framework
