Exclusive Interview: Garrett O’Hara Discusses Mimecast’s AI Strategies to Combat Cyber Risk

In an era marked by increasingly sophisticated cyberattacks, organisations are turning their attention to what Garrett O’Hara, Senior Director of Solutions Engineering for APAC at Mimecast, identifies as the “most unpredictable element in security”—humans. O’Hara explains that human risk encompasses actions that individuals take, either accidentally or intentionally, that expose organisations to potential threats. He notes that these actions are often not malicious but stem from factors such as fatigue, tight deadlines, or the desire to work more efficiently. Employees may inadvertently bypass security protocols, such as uploading sensitive documents to personal drives for remote work, without recognising the significant risks involved.

While AI tools can enhance productivity, they also introduce new vulnerabilities. O’Hara highlights that employees may use platforms like ChatGPT to summarise documents, inadvertently sharing sensitive corporate information with external services.

At the same time, AI serves as a crucial ally in combating these emerging threats. It excels at identifying patterns and detecting threats that traditional methods might overlook, such as subtle variations in URLs indicative of phishing attempts or AI-generated scam emails. O’Hara points out that phishing campaigns have evolved to become nearly indistinguishable from legitimate communications, as attackers leverage AI to craft flawless emails. Mimecast’s platform employs AI across its operations, using techniques such as sandboxing and behavioural analysis to detect language markers associated with business email compromise (BEC). For instance, if an email mimics a CEO urgently requesting gift card purchases, Mimecast’s AI can intercept it, shielding employees from the threat.

However, O’Hara acknowledges that trust in AI remains a challenge, as the cybersecurity landscape experiences “hype fatigue.” He cautions that vendors overusing the term “AI” can undermine trust, and notes that some vendors rely solely on AI, producing high false-positive rates that overwhelm security teams. A layered approach, in which AI decisions are supported by additional checks, is essential to mitigate these issues.
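To make the detection techniques O’Hara describes concrete, here is a minimal illustrative sketch. It is not Mimecast’s implementation: the trusted-domain list, the similarity threshold, and the marker phrases are all hypothetical. It shows two layered checks of the kind discussed, flagging sender domains that closely resemble trusted ones (the “subtle variations in URLs”) and counting urgency-language markers associated with BEC, with either signal able to trip the filter.

```python
from difflib import SequenceMatcher

# Hypothetical allow-list; a real system would use a curated, regularly updated feed.
TRUSTED_DOMAINS = ["mimecast.com", "paypal.com", "microsoft.com"]

# Hypothetical BEC language markers (urgency cues and gift-card lures).
BEC_MARKERS = ["urgent", "gift card", "wire transfer", "act now", "confidential"]

def is_lookalike(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that closely resemble, but do not equal, a trusted domain."""
    d = domain.lower()
    return any(
        d != trusted and SequenceMatcher(None, d, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

def bec_marker_count(body: str) -> int:
    """Count how many BEC-style language markers appear in an email body."""
    text = body.lower()
    return sum(marker in text for marker in BEC_MARKERS)

def is_suspicious(sender_domain: str, body: str) -> bool:
    """Layered check: a lookalike sender OR multiple BEC markers flags the email."""
    return is_lookalike(sender_domain) or bec_marker_count(body) >= 2
```

For example, `is_suspicious("paypa1.com", "Hello team")` flags the digit-for-letter lookalike domain, while a message from a legitimate domain with multiple urgency markers is caught by the second check. Production systems combine many more such signals, which is precisely the layered approach O’Hara advocates to keep false positives down.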

Categories: Cybersecurity, Human Risk Management, Artificial Intelligence in Security 

Tags: Cyberattacks, Human Risk, Artificial Intelligence, Security Policies, Phishing, Email Compromise, Productivity, Trust, False Positives, Layered Approach 
