Securing AI systems under today’s and tomorrow’s conditions




Evidence cited in an eBook titled “AI Quantum Resilience”, published by Utimaco [email wall], shows that organisations consider security risks to be the leading barrier to the effective adoption of AI on the data they hold.

AI’s value depends on data amassed by an organisation. However, there are security risks to building models and training them on that data. These risks are in addition to better-publicised threats to intellectual property that exist around the point of inference (prompt engineering, for example).

The eBook’s authors state that organisations need to manage threats throughout their AI development and implementation processes. At the same time, companies can and should prepare to change their security protocols, changes that will become mandatory if quantum computing-powered decryption tools become easily available to bad actors.

Utimaco lists three areas under threat:

- Training data can be manipulated by bad actors, degrading model outputs in ways that are hard to detect,
- Models can be extracted or copied, eroding intellectual property rights,
- Sensitive data used during training or inference can be exposed.

Current public key cryptography will become vulnerable in the next ten years, the report’s authors attest – a period in which capable quantum systems may emerge. Regardless of the timescale, better organised groups are thought to be collecting encrypted data now and storing it, to be decrypted when or if quantum facilities become available. Any dataset with long-term sensitivity – including model training data, financial records, and intellectual property – may therefore require protection against future decryption, Utimaco says.

A migration to quantum-resistant cryptography will affect protocols, key management, system interoperability, and performance, so any migration is likely to take several years. The report’s authors suggest what they term ‘crypto-agility’, which they define as the ability to change cryptographic algorithms without redesigning underlying systems. ‘Crypto-agility’ is based on the principle of hybrid cryptography – combining established algorithms with post-quantum methods, such as those standardised by NIST.
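The hybrid principle can be sketched as follows: derive one session key from two independently produced shared secrets, so the result stays secure as long as either underlying algorithm remains unbroken. This is a minimal illustration, not the eBook’s implementation; the `classical_secret` and `pq_secret` values are stand-ins for outputs of a real ECDH exchange and a post-quantum KEM such as ML-KEM.

```python
# Minimal sketch of hybrid key derivation (illustrative only).
# Assumption: in practice classical_secret comes from ECDH and pq_secret from a
# post-quantum KEM (e.g. ML-KEM); here both are stand-in random bytes.
import hashlib
import secrets

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Bind both secrets into one key: breaking the session requires
    breaking BOTH the classical and the post-quantum exchange."""
    return hashlib.sha3_256(b"hybrid-kdf-v1" + classical_secret + pq_secret).digest()

classical_secret = secrets.token_bytes(32)  # stand-in for an ECDH shared secret
pq_secret = secrets.token_bytes(32)         # stand-in for an ML-KEM shared secret

key = hybrid_session_key(classical_secret, pq_secret)
```

Crypto-agility then amounts to being able to swap out either input algorithm (or the hash) behind this derivation function without redesigning the systems that consume `key`.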

The eBook’s authors concur that cryptography on its own doesn’t address all possible areas of risk. They advocate the use of hardware-based trust devices that can isolate cryptographic keys and sensitive operations from normal working environments.

If companies are developing their own AI tools and processes, that protection should extend throughout the AI lifecycle, from data ingestion through training and model deployment to inference in production. Keys used to encrypt data and sign models can be generated and stored inside a hardware security boundary. Model integrity can then be verified before deployment, and sensitive data processed during inference remains protected.
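The sign-then-verify step can be sketched with a symmetric MAC standing in for the HSM-resident key (a real deployment would more likely use an asymmetric signature whose private key never leaves the hardware module; the names here are hypothetical):

```python
# Sketch: verify a model artifact's integrity before deployment.
# Assumption: the signing key lives inside an HSM in practice; an in-memory
# HMAC key stands in for it here.
import hashlib
import hmac

def sign_model(hsm_key: bytes, model_bytes: bytes) -> bytes:
    """Produce an integrity tag over the serialized model."""
    return hmac.new(hsm_key, model_bytes, hashlib.sha256).digest()

def verify_before_deploy(hsm_key: bytes, model_bytes: bytes, tag: bytes) -> bool:
    """Constant-time check that the artifact matches the tag it shipped with."""
    return hmac.compare_digest(sign_model(hsm_key, model_bytes), tag)

key = b"stand-in-for-hsm-resident-key"
model = b"...serialized model weights..."
tag = sign_model(key, model)

ok = verify_before_deploy(key, model, tag)              # untampered: deploy
tampered = verify_before_deploy(key, model + b"x", tag)  # tampered: reject
```

Deployment pipelines would refuse to load any artifact for which this check fails.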

Hardware-based enclaves isolate workloads so that even system administrators with sufficient privileges can’t access any of the data being processed. Hardware modules can verify that the data enclave is in a trusted state before releasing keys – a process of external attestation – helping create a ‘chain of trust’ from hardware to application.
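The attestation gate described above reduces, at its simplest, to comparing a reported enclave measurement against a known-good value before any key leaves the hardware module. The following is a greatly simplified sketch under that assumption; real attestation protocols (for example, verifying a quote signed by the platform vendor) involve considerably more:

```python
# Sketch: release a key only if the enclave reports a trusted measurement.
# EXPECTED_MEASUREMENT stands in for the known-good hash of the enclave image.
import hashlib
import hmac

EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-enclave-image-v1").hexdigest()

def attest_and_release(reported_measurement: str, key_store: dict, key_id: str) -> bytes:
    """Gatekeeper run by the hardware module: keys are withheld unless the
    workload's measurement matches the expected, trusted state."""
    if not hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT):
        raise PermissionError("enclave not in a trusted state; key withheld")
    return key_store[key_id]

key_store = {"model-signing-key": b"stand-in-secret"}
released = attest_and_release(EXPECTED_MEASUREMENT, key_store, "model-signing-key")
```

Because the comparison happens before release, even a privileged administrator who alters the enclave image changes its measurement and is denied the key, preserving the ‘chain of trust’ from hardware to application.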

Hardware-based key management produces tamper-resistant logs covering access and operations to support compliance frameworks such as the EU AI Act.
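One common way to make a log tamper-evident is a hash chain, where each entry commits to everything before it; this sketch illustrates the idea, not any specific product’s log format:

```python
# Sketch: a tamper-evident audit log as a hash chain. Each entry's digest
# covers the previous digest, so any edit to history breaks the chain.
import hashlib

GENESIS = "0" * 64  # sentinel digest before the first entry

def append_entry(log: list, message: str) -> None:
    """Append (message, digest) where digest chains to the prior entry."""
    prev = log[-1][1] if log else GENESIS
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append((message, digest))

def verify_chain(log: list) -> bool:
    """Recompute every link; any modified or reordered entry fails."""
    prev = GENESIS
    for message, digest in log:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

audit_log = []
append_entry(audit_log, "key accessed by inference-service")
append_entry(audit_log, "model artifact signed")
```

In hardware-backed implementations the chaining happens inside the tamper-resistant module, so the log itself becomes usable evidence for compliance reviews.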

Many of the risks inherent in AI systems are well known if not already exploited. The risk from quantum computing’s ability to decrypt data currently considered safe is less immediate, but the implications should affect data and infrastructure decisions made today, Utimaco states. It advocates:

- Strengthening controls throughout the AI development and deployment lifecycle,
- Introducing ‘crypto-agility’ to allow a transition to post-quantum security,
- Establishing hardware-based trust mechanisms wherever high-value assets are in play.

(Image source: “Scanning electron micrograph of an apoptotic HeLa cell” by National Institutes of Health (NIH) is licensed under CC BY-NC 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc/2.0)

 
