Big in prevention, even bigger in AI


Hundreds of cybersecurity professionals, analysts and decision makers gathered earlier this month for ESET World 2024, a conference that showcased the company’s vision and technological advancements and offered a number of insightful talks on the latest trends in cybersecurity and beyond.

The topics varied, but it’s safe to say that the ones that resonated the most were ESET’s groundbreaking threat research and perspectives on artificial intelligence (AI). Now let’s briefly look at some sessions that covered the topic on everyone’s lips these days: AI.

Back to basics

First, ESET Chief Technology Officer (CTO) Juraj Malcho took the lead and provided his views on the key challenges and opportunities presented by AI. However, he didn’t stop there and continued looking for answers to some fundamental questions surrounding AI, including “Is it as revolutionary as it is claimed to be?”.

Juraj Malcho, Chief Technology Officer, ESET

Current iterations of AI technology usually take the form of large language models (LLMs) and various digital assistants that make the technology feel very real. However, they are still quite limited, and we need to thoroughly define how we want to use the technology to strengthen our own processes, including its use in cybersecurity.

For example, AI can simplify cyber defenses by deconstructing complex attacks and reducing resource demand. In doing so, it improves the security capabilities of understaffed business IT operations.

Demystifying AI

Juraj Jánošík, Director of Artificial Intelligence at ESET, and Filip Mazan, Senior Manager of Advanced Threat Detection and AI at ESET, then presented a comprehensive overview of the world of AI and machine learning, exploring their roots and what sets them apart.

Juraj Jánošík, Director of Artificial Intelligence at ESET, and Filip Mazan, Senior Manager of Advanced Threat Detection and AI at ESET

Mr. Mazan showed how they are fundamentally rooted in human biology, with AI networks mimicking some aspects of how biological neurons function to create artificial neural networks with varying numbers of parameters. The more complex the network, the greater its predictive power, leading to the improvements seen in digital assistants like Alexa and LLMs like ChatGPT or Claude.
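To make the analogy concrete, here is a minimal sketch, assuming only Python and NumPy, of how artificial neurons combine weighted inputs and an activation function; every weight and bias is one of the parameters being counted when we call a network “complex”. It is purely illustrative and far removed from any production model.

```python
# A toy two-layer neural network: illustrative only, not a production model.
import numpy as np

def relu(x):
    # Activation function, loosely analogous to a biological neuron "firing"
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Every weight and bias below is a trainable parameter.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # 4 neurons x 3 inputs + 4 biases = 16 parameters
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # 1 output neuron                 =  5 parameters

def forward(x):
    hidden = relu(W1 @ x + b1)  # each neuron sums its weighted inputs, then "fires"
    return W2 @ hidden + b2

print(forward(np.array([0.5, -1.0, 2.0])))  # a single prediction from 21 parameters
```

Scaling this same idea up to billions of parameters is, in essence, what gives today’s digital assistants and LLMs their predictive power.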


Later, Mr. Mazan emphasized that as AI models become more complex, their usefulness can actually decrease. As these models edge closer to mimicking the human brain, their ever-growing number of parameters demands thorough fine-tuning, a process that requires human oversight to continuously monitor and refine the model’s performance.

And pigs can fly… (generative AI models can be masterfully artistic)

In fact, slimmer models are sometimes better. Mr. Mazan described how ESET’s tightly scoped use of internal AI capabilities results in faster and more accurate threat detection, meeting the need for rapid and precise responses to all manner of threats.

He also echoed Mr. Malcho’s remarks and highlighted some of the limitations of large language models (LLMs). These models work on the basis of prediction and involve connecting meanings, so they can easily become confused and hallucinate. In other words, the usefulness of these models only goes so far.
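A toy example makes this failure mode tangible. The sketch below, a hypothetical illustration using only Python’s standard library, builds a bigram “model” that continues text with statistically plausible next words; an LLM does the same thing at vastly greater scale, producing fluent output with no built-in notion of whether the resulting claim is true.

```python
# A toy prediction-based text generator; real LLMs differ enormously in scale,
# but share the core mechanism of continuing with statistically likely tokens.
import random
from collections import defaultdict

corpus = "the chatbot offers a discount the chatbot offers a refund".split()

# Record which word tends to follow which (a bigram "model").
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(1)
word, output = "the", ["the"]
for _ in range(4):
    word = random.choice(follows[word])  # pick a statistically plausible next word
    output.append(word)

# Fluent but unverified: the model may "offer" a refund no policy actually backs.
print(" ".join(output))
```

The output is always grammatical, yet whether it ends in “discount” or “refund” is pure statistics, which is exactly how a confident-sounding hallucination is born.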

Other limitations of current AI technology

In addition, Mr. Jánošík went on to address other limitations of contemporary AI:

  • Explainability: Current models consist of vast numbers of parameters, making their decision-making processes difficult to understand. Unlike the human brain, which reasons in terms of causal explanations, these models operate through statistical correlations, which are counterintuitive to humans.
  • Transparency: The top models are proprietary (walled gardens), with no insight into their inner workings. This lack of transparency means there is no accountability for how these models are configured or for the results they produce.
  • Hallucinations: Generative AI chatbots often generate plausible but incorrect information. These models can exude great confidence while serving up false information, which can lead to mishaps and even legal problems down the line – as when Air Canada’s chatbot presented false information about a discount to a passenger.

Fortunately, the same restrictions apply to the misuse of AI technology for malicious activities. While chatbots can easily formulate plausible-sounding messages in support of spear-phishing or business email compromise attacks, they are not well suited to creating dangerous malware. This limitation is due to their propensity to “hallucinate” – producing plausible but incorrect or illogical output – and their underlying weaknesses in generating logically connected and functional code. As a result, creating new, effective malware typically requires the intervention of a genuine expert to correct and refine the code, making the process harder than some might think.


Finally, as Mr. Jánošík pointed out, AI is just a tool that we need to understand and use responsibly.

The rise of the clones

In the next session, Jake Moore, Global Cybersecurity Advisor at ESET, gave a taste of what’s currently possible with the right tools, from cloning RFID cards and hacking CCTV cameras to creating convincing deepfakes – and how all of this can put corporate data and finances at risk.

Among other things, he showed how easy it is to compromise a company’s premises by using a well-known hacking gadget to copy employee access cards, or (with permission!) to hack a social media account belonging to the company’s CEO. He then used a tool to clone the CEO’s likeness, both face and voice, to create a convincing deepfake video that he posted to one of the CEO’s social media accounts.

Jake Moore, Global Security Advisor, ESET

The video – in which the would-be CEO announced a ‘challenge’ to cycle from the UK to Australia – garnered more than 5,000 views and was so convincing that people started offering sponsorships. Even the company’s CFO was fooled by the video, asking the CEO about his upcoming travels. Only one person wasn’t fooled: the CEO’s 14-year-old daughter.

In just a few steps, Mr. Moore demonstrated the danger posed by the rapid spread of deepfakes. Seeing is no longer believing: companies and individuals alike must scrutinize everything they encounter online. And with the advent of AI tools like Sora, which can create video from a few lines of text, dangerous times may be upon us.

Wrapping up

The last session dedicated to the nature of AI was a panel discussion featuring Mr. Jánošík, Mr. Mazan, and Mr. Moore, moderated by Ms. Pavlova. It opened with a question about the current state of AI, with the panelists agreeing that the latest models are bloated with parameters and need further refinement.

The AI panel discussion was chaired by Victoria Pavlova, UK editor of CRN Magazine

The discussion then shifted to the immediate dangers and concerns facing businesses. Mr. Moore highlighted that a significant number of people are unaware of AI’s capabilities, which bad actors can exploit. While the panelists agreed that advanced AI-generated malware does not currently pose an immediate threat, other dangers, such as improved phishing email generation and deepfakes created with public models, are very real.

Moreover, as Mr. Jánošík emphasized, the biggest danger lies in the data privacy aspect of AI, given the volume of data these models receive from users. In the EU, for example, the GDPR and the AI Act have established some data protection frameworks, but they are not enough, since these are not global laws.

Today’s AI offers both opportunities and some real dangers

Mr. Moore added that companies should ensure their data stays internal. Enterprise versions of generative models can accommodate this, avoiding the ‘need’ to rely on (free) versions that store data on remote servers and potentially put sensitive business data at risk.

To address data privacy concerns, Mr. Mazan suggested that companies start from the bottom up, using open-source models that can handle simpler use cases, such as generating summaries. Only if these prove insufficient should companies move to cloud-based solutions from third parties.
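As a sketch of that bottom-up approach, the snippet below assumes the Hugging Face transformers library and a small open-source checkpoint (the model name is just one example, not a recommendation from the panel); the point is that summarization runs entirely on the local machine, so the text never reaches a third-party API.

```python
# A minimal local-summarization sketch; requires `pip install transformers torch`.
# The model name is illustrative: any small open-source summarizer would do.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

report = (
    "The incident response team isolated the affected host within ten minutes, "
    "revoked the compromised credentials, and confirmed that no data had left "
    "the corporate network during the intrusion attempt."
)

# Runs on-premises: the report text is never sent to a remote server.
print(summarizer(report, max_length=40, min_length=10)[0]["summary_text"])
```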

Mr. Jánošík concluded by noting that companies often overlook the downsides of AI use. Guidelines for the safe use of AI are indeed needed, but even common sense goes a long way toward keeping a company’s data safe. As Mr. Moore summarized in answer to a question on how AI should be regulated, there is an urgent need to raise awareness of AI’s potential, including its potential for harm. Encouraging critical thinking is essential to staying safe in our increasingly AI-driven world.
