Exclusive: Tony Burnside of Netskope Discusses the Importance of AI Guardrails in Enhancing Security

Cyberattacks are escalating at an unprecedented rate, according to Tony Burnside, Senior Vice President and Head of APAC at Netskope, during a recent interview with TechDay. He highlighted that the surge in breaches and the misuse of Artificial Intelligence (AI) are compelling companies to fundamentally rethink their security strategies. Burnside cited statistics indicating that the Australian Signals Directorate responded to over 1,100 breaches. He noted that although Australia represents only a small fraction of the global user base, it has become an appealing target for cybercriminals. Credential theft is a particularly significant threat for local employees, as attackers often impersonate trusted business applications: when employees receive links asking them to confirm or reset their credentials, they rarely question their authenticity.

One of the most urgent issues, according to Burnside, is how enterprises manage the swift adoption of AI tools. He observed that a large majority of Australian businesses are detecting generative AI usage among their workforce. While it may seem prudent to block these tools until they are deemed secure, Burnside argued that this approach is not optimal. He believes that if AI tools serve legitimate business purposes, the goal should be to enable their use as quickly as possible while implementing appropriate security measures. Otherwise, employees might resort to risky alternatives, such as using personal devices and accounts that security teams cannot monitor. Burnside emphasised that employees often utilise these tools with good intentions, whether to enhance efficiency or improve reports.

Risks arise from the use of non-sanctioned applications, because the absence of security guardrails leaves organisations exposed. Consequently, many organisations are now moving towards providing corporate-sanctioned AI tools. This shift does not mean employees have stopped using other tools, but it coincides with the first recorded decline in ChatGPT usage, as generative AI tools such as Gemini and Copilot, which are integrated into larger productivity suites, take priority as sanctioned options. Burnside underscored that Netskope aims to maintain high productivity without exposing businesses to security risks: if a user protected by Netskope attempts to use a non-sanctioned platform to enhance a PowerPoint presentation, the system blocks the action whenever corporate information is involved.

The system notifies the user that they are about to send sensitive information to ChatGPT and suggests using Gemini, the corporate-approved generative AI tool, instead. This approach lets employees work more efficiently without compromising security. Netskope also employs coaching prompts that encourage staff to consider the risks before taking a potentially hazardous action. The company has made significant investments in infrastructure across Australia and New Zealand, establishing eight local data planes to improve security and support for its clients.
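
The enforcement flow Burnside describes (identify the destination app, inspect the content, then allow, block, or coach) can be pictured as a simple policy decision. The sketch below is a minimal, hypothetical illustration in Python, not Netskope's actual product or API; every name in it, from evaluate to contains_corporate_data to the app lists, is invented for this example.

```python
# Hypothetical illustration of an inline AI-guardrail decision, loosely
# modelled on the behaviour described above. This is NOT Netskope's
# actual policy engine; all names and rules here are invented.

from dataclasses import dataclass

SANCTIONED_AI_APPS = {"gemini"}                    # corporate-approved tools
KNOWN_AI_APPS = {"chatgpt", "gemini", "copilot"}   # recognised AI destinations

@dataclass
class Upload:
    app: str    # destination application, e.g. "chatgpt"
    text: str   # content the user is about to send

def contains_corporate_data(text: str) -> bool:
    """Toy classifier: real products use DLP engines, not keyword lists."""
    markers = ("confidential", "internal only")
    return any(m in text.lower() for m in markers)

def evaluate(upload: Upload) -> str:
    """Return a policy verdict: allow, block with a redirect, or coach."""
    if upload.app not in KNOWN_AI_APPS:
        return "allow"  # not a generative AI destination; out of scope here
    if upload.app in SANCTIONED_AI_APPS:
        return "allow"  # sanctioned tool: let the employee stay productive
    if contains_corporate_data(upload.text):
        # Block the risky action and redirect, rather than banning AI outright.
        return ("block: you are sending sensitive information to "
                f"{upload.app}; please use Gemini, the corporate-approved tool")
    # Coaching prompt: ask the user to pause and confirm before proceeding.
    return "coach: this tool is not sanctioned - do you want to continue?"

if __name__ == "__main__":
    print(evaluate(Upload("chatgpt", "Internal only: Q3 revenue figures")))
    print(evaluate(Upload("gemini", "Internal only: Q3 revenue figures")))
```

In a real deployment the content check would be a full data loss prevention engine and the application catalogue would be vendor-maintained; the point of the sketch is only the decision order, in which sanctioned tools stay fast while risky uploads are blocked and redirected.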

Categories: Cybersecurity Threats, AI Tool Management, Credential Theft 

Tags: Cyberattacks, Breaches, AI Misuse, Security, Credential Theft, Generative AI, Non-Sanctioned Applications, Corporate-Sanctioned Tools, Productivity, Risk Management
