Cybercrime in the Age of AI

Cybercrime is predicted to cost the world $9.5 trillion USD in 2024, according to Cybersecurity Ventures. And today, cybercriminals have a new weapon in their arsenal: artificial intelligence (AI). 

LLMs: The AI-Powered Assistants of Scammers

Trained on vast amounts of text data, large language models (LLMs) are the AIs you’d be most familiar with – think ChatGPT or Bard. They can act as on-demand virtual assistants, analyse spreadsheets, or break down complex topics into plain, simple terms.

They’re also helping threat actors perfect their cyber attacks.

Spelling errors, grammar mistakes, and wording that sounds unnatural or out of character are some of the key ways we detect scams. So what happens when the same AI that’s helping you spellcheck your reports ends up in the hands of scammers?

“BECs [business email compromise scams] used to be a contentious topic in Japan because the attacker did not speak Japanese and didn’t understand business customs. Now the language and cultural understanding barriers [are] gone... and [attacks] are increasing in volume.”

The threat doesn’t stop at individual messages. LLMs can also be used to orchestrate elaborate multi-persona phishing campaigns, in which attackers pretend to be various individuals within the same email discussion. This method builds trust and deceives the victim by leveraging the principle of social proof. By engaging in seemingly harmless conversations involving multiple personas, the attackers gradually gain the trust of their target, eventually persuading them to click on malicious links, give away sensitive information or process fraudulent payments.

AI-Fueled Infiltration and Attack Strategies

But LLMs and other AI models can do a lot more than just write emails. Crunching large datasets to find vulnerabilities, developing malicious code and brainstorming altogether new attack strategies are just a few ways AI can act as a cybercriminal’s super-assistant.

The vast pool of stolen personal data available on the dark web (think of the recent “mother of all breaches” with 26 billion records leaked) further fuels these AI-powered attacks. By analysing these datasets, cybercriminals can pinpoint the most vulnerable targets and tailor their attacks accordingly.

Just as ChatGPT can help you devise new marketing strategies or ideas for a child’s birthday party, its malicious counterparts can be used to brainstorm new approaches to infiltrate or defraud organisations and individuals.

Deepfakes: Blurring the Lines of Reality

Synthetic media so convincing it can replace real people in video and audio. That’s a deepfake.

All you need is a few seconds of someone’s voice to create a convincing imitation of them. Synthetic videos are also becoming increasingly common. It used to take hundreds of images of someone and an immense amount of computer processing power to create a convincing deepfake. Thanks to modern AI tools, you now only need a few photos of someone’s face and a phone app.

Imagine your CEO’s voice in a fraudulent video message requesting urgent financial transfers. Or a realistic deepfake of your colleague asking for sensitive company information via phone. These are just a few chilling possibilities.

How often do you call your clients or business partners? How often do you do virtual meetings? What happens when these trusted channels of communication are breached?

These aren’t emerging technologies or hypothetical risks – the threats are here, today.

Deepfake Fraud in the Real World

This may sound crazy but…

A multinational company just lost AUD $40 million to a deepfake video call scam in which scammers posed as the firm’s chief financial officer and other coworkers.

At first the employee was suspicious, but that all changed when the staffer was invited to join a video call. Sitting in that meeting were what appeared to be the CFO, coworkers and a few external parties, who convinced the employee to make 15 transfers to local bank accounts over the course of a week, totalling HK$200 million (around AUD $40 million).

Generative AI tools like these have only been widely available for a little over a year – this is only the beginning of what’s to come.

The Unknown

“Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime. But teach an AI to fish and it’ll teach itself biology, chemistry, oceanography, evolutionary theory and then fish all the fish to extinction.”

With AI tools becoming increasingly accessible, the potential for unforeseen risks and unintended consequences grows exponentially. The future is uncertain, but one thing is clear: ignoring the AI threat to our cybersecurity is no longer an option.

Source: eftsure
