Ransomware and social engineering, crimes in which attackers target technology service providers and manipulate people into giving up confidential information, are on the rise in India.
Aaron Bugal, chief technology officer (CTO) at global cybersecurity firm Sophos, visited India recently and urged organisations to strengthen security.
In an interview with Shivani Shinde/Business Standard in Mumbai, Bugal discussed Sophos's work in India.
How significant is India for Sophos?
India is a crucial market for us. We have a substantial presence here, including sales and development teams, as well as a significant client base.
It is one of our fastest-growing regions within the APJ (Asia Pacific and Japan) market and has been a consistent performer.
The country presents a massive opportunity for Sophos. A large portion of our firewall development happens in India, supported by a strong engineering team.
Additionally, we have support teams, developers and security operators here.
How has Indian enterprises' approach to cybersecurity evolved over the years?
The most significant change has been a shift in attitude: Organisations are now taking cybersecurity seriously.
The adoption of strong governance, risk and compliance frameworks has improved significantly.
Companies are recognising the challenges posed by cybersecurity, especially with the emergence of new technologies.
Indian organisations have developed a more mature vision, implementing robust governance, risk controls and cyber resilience strategies.
Ransomware attacks are increasing in India, affecting enterprises. What should they do?
Ransomware is a big problem for organisations, and it serves as a wake-up call for those that might not be taking security seriously enough or believe it will not happen to them.
Ransomware is headline-grabbing, but one has to realise that it is the penultimate phase of an attack. By the time it is deployed, the attacker has already gained access to the environment and learned a great deal about it. Only then do they drop the ransomware payload.
What will be the security trends in 2025?
The use of social engineering and deepfakes to scam people is only going to increase.
This will force governments to come down on those that provide social media services to ensure that they are regulating scam content.
Another trend we see is more information-based threats, including those from State-sponsored hackers.
Artificial intelligence (AI) is really going to democratise the ability for anybody to pick up these tools with little to no experience and write a compelling phishing lure and send it to your family and friends.
The good thing is that organisations are learning the importance of security.
We see that security budgets will go up this year by at least 5 per cent to 10 per cent.
Social engineering is a big threat as more and more people get on the internet.
It's a massive issue, and it is exceptionally hard to track.
As a technology vendor, we can create filters and tools that act as an interface for people accessing the internet.
We can detect potentially fraudulent apps on smartphones, but social engineering attacks can come from anywhere.
As a threat vector, social engineering remains the number one cause of people losing credentials, surrendering access to unauthorised third parties and allowing attackers into those environments.
How do you see AI being misused in social engineering attacks?
The misuse of AI, especially genAI, is only going to get worse, particularly in an enterprise ecosystem. Advances in this technology will cause problems for enterprises that have not rolled out thorough security awareness training to help employees spot potential scams.
Organisations must realise that they need to secure the AI systems themselves.
It is going to become paramount because many organisations are still experimenting or have started deploying AI in a very small controlled environment.
While we have seen AI deliver amazing results when it is deployed, the risk it poses to the rest of the system is also high.
For instance, the training data for the LLMs (large language models) could be compromised, or someone could introduce biases into it. The fallout could be huge.
Feature Presentation: Aslam Hunani/Rediff.com