Adapting to a new era of cybersecurity in the age of AI
- by nlqip
AI has the power to transform security operations, enabling organizations to defeat cyberattacks at machine speed and drive innovation and efficiency in threat detection, hunting, and incident response. It also has major implications for the ongoing global cybersecurity shortage. Roughly 4 million cybersecurity professionals are needed worldwide. AI can help overcome this gap by automating repetitive tasks, streamlining workflows to close the talent gap, and enabling existing defenders to be more productive.
However, AI is also a threat vector in and of itself. Adversaries are attempting to leverage AI as part of their exploits, looking for new ways to enhance productivity and take advantage of accessible platforms that suit their objectives and attack techniques. That’s why it’s critical for organizations to ensure they are designing, deploying, and using AI securely.
Read on to learn how to advance secure AI best practices in your environment while still capitalizing on the productivity and workflow benefits the technology offers.
4 tips for securely integrating AI solutions into your environment
Traditional tools are no longer able to keep pace with today’s threat landscape. The increasing speed, scale, and sophistication of recent cyberattacks demand a new approach to security.
AI can help tip the scales for defenders by increasing security analysts’ speed and accuracy across everyday tasks like identifying scripts used by attackers, creating incident reports, and identifying appropriate remediation steps—regardless of the analyst’s experience level. In a recent study, 44% of AI users showed increased accuracy and were 26% faster across all tasks.
However, in order to take advantage of the benefits offered by AI, organizations must ensure they are deploying and using the technology securely so as not to create additional risk vectors. When integrating a new AI-powered solution into your environment, we recommend the following:
- Apply vendor AI controls and continually assess their fit: For any AI tool that is introduced into your enterprise, it’s essential to evaluate the vendor’s built-in features for fostering secure and compliant AI adoption. Cyber risk stakeholders across the organization should come together to preemptively align on defined AI employee use cases and access controls. Additionally, risk leaders and CISOs should regularly meet to determine whether the existing use cases and policies are adequate or if they should be updated as objectives and learnings evolve.
- Protect against prompt injections: Security teams should also implement strict input validation and sanitization for user-provided prompts. We recommend using context-aware filtering and output encoding to prevent prompt manipulation. Additionally, you should update and fine-tune large language models (LLMs) to improve the AI’s understanding of malicious inputs and edge cases. Monitoring and logging LLM interactions can also help security teams detect and analyze potential prompt injection attempts.
- Mandate transparency across the AI supply chain: Before implementing a new AI tool, assess all areas where the AI can come in contact with your organization’s data—including through third-party partners and suppliers. Use partner relationships and cross-functional cyber risk teams to explore learnings and close any resulting gaps. Maintaining current Zero Trust and data governance programs is also important, as these foundational security best practices can help harden organizations against AI-enabled attacks.
- Stay focused on communications: Finally, cyber risk leaders must recognize that employees are witnessing AI’s impact and benefits in their personal lives. As a result, they will naturally want to explore applying similar technologies across hybrid work environments. CISOs and other risk leaders can get ahead of this trend by proactively sharing and amplifying their organizations’ policies on the use and risks of AI, including which designated AI tools are approved for the enterprise and who employees should contact for access and information. This open communication can help keep employees informed and empowered while reducing their risk of bringing unmanaged AI into contact with enterprise IT assets.
Ultimately, AI is a valuable tool in helping uplevel security postures and advancing our ability to respond to dynamic threats. However, it requires certain guardrails to deliver the most benefit possible.
For more information, download our report, “Navigating cyberthreats and strengthening defenses in the era of AI,” and get the latest threat intelligence insights from Microsoft Security Insider.
AI has the power to transform security operations, enabling organizations to defeat cyberattacks at machine speed and drive innovation and efficiency in threat detection, hunting, and incident response. It also has major implications for the ongoing global cybersecurity shortage. Roughly 4 million cybersecurity professionals are needed worldwide. AI can help overcome this gap by automating…
Recent Posts
- The true (and surprising) cost of forgotten passwords
- ChatGPT allows access to underlying sandbox OS, “playbook” data
- CISA Releases Nineteen Industrial Control Systems Advisories | CISA
- Spectra Partners With Beltex Insurance, Ingram Micro: Exclusive
- Top 8 Cloud Platform Services Ranked: Azure, AWS, Google Lead Gartner Magic Quadrant