CISOs face uncharted territory in preparing for AI security risks
- by nlqip
Moreover, under the 2023 White House executive order on AI safety and security, NIST last week released three final guidance documents and a draft guidance document from the newly created US AI Safety Institute, all intended to help mitigate AI risks. NIST also re-released a test platform called Dioptra for assessing AI’s “trustworthy” characteristics, namely AI that is “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair,” with harmful bias managed.
CISOs should prepare for a rapidly changing environment
Despite the enormous intellectual, technical, and government resources devoted to creating AI risk models, practical advice for CISOs on how to best manage AI risks is currently in short supply.
Although CISOs and security teams have come to understand the supply chain risks of traditional software and code, particularly open-source software, managing AI risks is a whole new ballgame. “The difference is that AI and the use of AI models are new,” Alon Schindel, VP of data and threat research at Wiz, tells CSO.