NIST publishes new guides on AI risk for developers and CISOs


Synthetic Content Risks

Today’s first-generation AI systems can already synthesize images, audio, and video convincingly enough that malicious output is indistinguishable from genuine content. The guide “Reducing Risks Posed by Synthetic Content” (NIST AI 100-4) examines how developers can authenticate, label, and track the provenance of content using technologies such as watermarking.
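To make the idea of content labeling and provenance tracking concrete, here is a minimal sketch in Python. It is not NIST's method or any standard such as C2PA; it simply illustrates the principle of binding a content hash to a stated origin so that later tampering can be detected. The key, function names, and record format are all hypothetical, and a real system would use public-key signatures rather than a shared secret.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret for the demo. Real provenance schemes
# (e.g. C2PA) use public-key signatures, not a symmetric key.
SECRET = b"demo-provenance-key"

def label_content(content: bytes, origin: str) -> dict:
    """Attach a provenance record: a hash of the content plus a tag
    binding that hash to a stated origin."""
    digest = hashlib.sha256(content).hexdigest()
    record = {"origin": origin, "sha256": digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify_content(content: bytes, record: dict) -> bool:
    """Check that the content matches the recorded hash and that
    the record itself has not been altered."""
    if hashlib.sha256(content).hexdigest() != record["sha256"]:
        return False
    claimed = {k: v for k, v in record.items() if k != "tag"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])
```

Verification fails if either the content or the provenance record is modified, which is the property watermarking and provenance systems aim to provide at scale.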

A fourth and final document, “A Plan for Global Engagement on AI Standards” (NIST AI 100-5), examines the broader issue of AI standardization and coordination in a global context. This is probably less of a worry now but will eventually loom large. The US is only one albeit major jurisdiction; without some agreement on global standards, the fear is AI might eventually become a chaotic free-for-all.

“In the six months since President Biden enacted his historic Executive Order on AI, the Commerce Department has been working hard to research and develop the guidance needed to safely harness the potential of AI, while minimizing the risks associated with it,” said US Secretary of Commerce Gina Raimondo.

“The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time.”

NIST guides are likely to become required cybersecurity reading

Once the documents are finalized later this year, they are likely to become important reference points. Although NIST’s AI RMF is not a set of regulations organizations must comply with, it sets out clear boundaries on what counts as good practice.

Even so, assimilating a new body of knowledge on top of NIST’s industry-standard Cybersecurity Framework (CSF) will still be a challenge for professionals, said Kai Roer, CEO and founder of Praxis Security Labs, who in 2023 participated in a Norwegian government committee on ethics in AI.

