Innovating safely: Navigating the intersection of AI, network, and security




The widespread adoption of artificial intelligence (AI) has thrust it into the limelight, accelerating change across enterprises and industries. Given its potential use as a tool both for and against organisations, security leaders are keeping a watchful eye on developments in this space.

According to Foundry’s 2023 Security Priorities Study, 68% of security leaders in the Asia-Pacific region are leveraging AI in their arsenal of technologies. Of those who are using AI, 75% are already seeing benefits in faster identification of unknown threats, accelerated detection and response times, and automation to reduce employee workload. 

At a recent CSO Australia roundtable event in Perth, supported by Cisco and partner Kytec, senior technology executives discussed the potential uses of AI as well as challenges to keep security front of mind as enterprises adopt the technology. 

Carl Solder – Cisco’s Chief Technology Officer – highlighted the firm’s AI Readiness Index, which revealed that Australian organisations lag the global average when it comes to preparedness. According to the report, only 5% of these organisations are fully ready to leverage and deploy AI – despite 59% admitting serious concerns about the impact on their business if they fail to harness AI in the next 12 months. 

Building on that, David Okulicz – Managing Director of Kytec – shared his observations from working with Australian enterprises on their technology and security challenges. “We’re definitely seeing great AI use cases in the market right now and some businesses are approaching it with care as they weigh the opportunities against their own needs.” He also highlighted the importance of organisations getting their data right in order to fully optimise learning models.

Attendees at the roundtable luncheon concurred; some represented industries that currently offer relatively few operational opportunities for AI adoption. However, they noted that certain business units within their enterprises were quick to adopt large language model (LLM) tools and AI image generators when these first became available to the public.

Attendees also spoke about how the wider adoption of generative AI technology could present new threats – deepfakes and the unintended disclosure of sensitive or confidential information, for example. Employees could inadvertently share source code or other proprietary information when using tools such as ChatGPT. The group also discussed a recent financial scam using deepfake technology that cost UK engineering firm Arup $25 million through its Hong Kong office.

These threats highlight the importance of comprehensive policies and processes within an enterprise, with safeguards in place to mitigate such risks. However, developing effective AI security policies is a multifaceted challenge. As attendees shared, the rapid pace of technological advancement means policies can quickly become outdated; the inherent complexity of AI systems requires policymakers to have a deep technical understanding of the tools in use; a balance must be struck between fostering innovation and ensuring robust security, so that the latter does not stifle the former; and ethical and privacy concerns – such as bias and data protection – add another layer of complexity, demanding fair and transparent policies.

One attendee shared his golden rule for developing policy around sharing proprietary information on LLM tools: “If it’s not information that you’re comfortable to see displayed on news headlines, then it shouldn’t be shared on platforms such as ChatGPT.” 
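
One way to operationalise that rule is to screen prompts before they ever leave the organisation. The sketch below is a minimal, hypothetical pre-submission filter in Python; the patterns and function names are illustrative assumptions, not a reference to any product or policy discussed at the roundtable.

```python
import re

# Hypothetical patterns for material that should never leave the organisation.
# A real deployment would source these from a DLP policy, not a hard-coded list.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),             # credential assignments
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),       # private key blocks
    re.compile(r"(?i)\b(confidential|internal use only)\b"), # classification markers
]

def screen_prompt(prompt: str) -> str:
    """Raise if the prompt matches a sensitive pattern; otherwise pass it through."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(f"Prompt blocked by policy: matched {pattern.pattern!r}")
    return prompt

# Every outbound call to an external LLM is gated behind the screen.
safe_prompt = screen_prompt("Summarise this public press release.")
```

In practice, such a screen would sit in a gateway or browser extension in front of the external LLM, and the pattern list would be maintained through the kind of coordinated policy effort described below.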

Another attendee believes the interdisciplinary nature of AI security demands collaboration across business units in an enterprise: “There has to be a coordinated effort among technology, legal, human resources, and compliance teams to ensure success.” 

Additionally, addressing security challenges requires a proactive and flexible approach that involves continuous monitoring, stakeholder engagement, and cooperation to ensure policies are effective and up-to-date. 

Apart from leveraging people and processes, technology also has a role to play in protecting enterprises. This is where AI circles back – this time not as a technology to be wary of, but as a tool that can radically change the security paradigm.

Carl spoke about Cisco’s Hypershield, which he believes represents a new way for the firm to better protect its customers’ network and cloud environments. “At the core of this new tool are enforcement points that check for anomalous behaviour.”

“This fabric of security enforcement points – which grows over time without the need to overhaul policy – can run on a server, in data processing units, or on networking hardware.” Hypershield leverages AI to establish the known good behaviours of applications, and it is kept up to date with data on potential attacks and threats. According to Carl, its ability to sit in more places across the network and respond faster to anomalous behaviour sets it apart from traditional approaches to securing applications and data: “Definitely a potential game-changer for how businesses secure their networks.”
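
For illustration only, here is a minimal sketch of the behavioural-baselining idea Carl described: an enforcement point that learns an application’s known good connections during a learning window, then flags anything outside that baseline. The class, event format, and learning mechanism are assumptions made for this example; they do not represent Hypershield’s actual architecture or APIs, which rely on AI models rather than a simple allow-list.

```python
from collections import defaultdict

class EnforcementPoint:
    """Learns an application's normal connections, then flags anything new."""

    def __init__(self):
        # Baseline: for each application, the destinations it is known to use.
        self.baseline = defaultdict(set)
        self.learning = True  # start in a learning window

    def observe(self, app, dest_host, dest_port):
        """Return True if the connection is allowed, False if anomalous."""
        key = (dest_host, dest_port)
        if self.learning:
            self.baseline[app].add(key)  # record known good behaviour
            return True
        if key in self.baseline[app]:
            return True
        # Outside the learned baseline: alert rather than silently allow.
        print(f"ALERT: {app} made an unusual connection to {dest_host}:{dest_port}")
        return False

ep = EnforcementPoint()
ep.observe("billing-service", "db.internal", 5432)   # learned as normal
ep.learning = False                                  # switch to enforcement
ep.observe("billing-service", "db.internal", 5432)   # allowed: matches baseline
ep.observe("billing-service", "198.51.100.7", 4444)  # flagged: never seen before
```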

As security practitioners grapple with an ever-evolving landscape of threats and attacks, it is about time they could count on a tool that keeps up.


