
Today’s top cybersecurity threats for consumers, and how to fight them

2024-06-21T13:39:00+10:00

Lisa Uhlman

A cybersecurity 'arms race' means consumers and businesses should prepare to face profound new threats, says UNSW’s Dr Eila Erfani.

Recent massive data breach scandals, paired with the explosion of artificial intelligence and deepfake technologies, have dramatically changed the digital trust landscape, says cybersecurity expert Dr Eila Erfani. A new set of solutions is needed to counter these threats.

Critically, the same technological advancements driving business narratives in 2024 are also empowering bad actors to overcome existing defences. Cybersecurity, therefore, must evolve to encompass a “multifaceted, user-centric” approach, says Dr Erfani, a Senior Lecturer in the School of Information Systems and Technology Management at UNSW Business School. This approach extends beyond technology alone, incorporating insights from psychology, sociology, ethics and economics to create more resilient and responsible cybersecurity strategies.

The Australian Competition and Consumer Commission’s National Anti-Scam Centre (NASC) says scam losses fell 13.1 per cent year on year in 2023, to $2.74 billion. However, the number of scams reported rose 18.5 per cent over the same period, with more than 601,000 reports made during the year, up from 507,000 in 2022.

The NASC’s report for 2023 attributed the decline in reported losses to “collaborative efforts across government, law enforcement, consumer organisations and industry” in the fight against financial crime. However, it said, there is still work to do.

The report said several concerning trends were emerging and that, while reported losses had declined, they remained too high. Moreover, losses to phishing, payment redirection and job scams increased in 2023.

In addition, the report showed older people suffered the most significant harm, with people over 65 representing the only age group to experience increased scam losses.

Overall, investment scams continued to cause the most harm ($1.3 billion) in 2023, followed by remote access scams ($256 million) and romance scams ($201.1 million), the NASC reported.

Dr Erfani says that developments in cybercriminals’ capabilities mean attackers increasingly use artificial intelligence (AI) and machine learning technologies to mount more sophisticated and effective campaigns. The top cybersecurity risks consumers and businesses face in 2024 reflect the pervasiveness of this technological edge.

AI-powered attacks, deepfakes on the rise

Attackers are taking full advantage of developing technologies as they become available, and bad actors have shown an adaptiveness that keeps the threat level high.

“AI can lead to cybersecurity attacks by automating and scaling up attacks, generating personalised phishing emails, creating deepfakes, evading detection, cracking passwords, optimising DDoS attacks, exploiting AI systems, poisoning data and enhancing social engineering attacks,” Dr Erfani says.

The increasing use of this technology in scams poses significant challenges for consumers and is already affecting the “landscape of trust and security” in the digital age, she adds. It has also led to the rise of deepfake scams, which harness the power of AI to create realistic audio and video forgeries, allowing scammers to impersonate trusted figures or entities.

Deepfake technology poses a “profound threat” to consumers’ personal security and privacy, Dr Erfani says. Attackers use these sophisticated forgeries to create convincing scams, manipulate perceptions and commit identity theft.

For instance, the Australian Securities and Investments Commission (ASIC) cited a recent case in which a woman lost her life savings after seeing a deepfake Elon Musk video that prompted her to click a link and register her details online. Deepfakes of Australian politicians have also reportedly been used in recent investment scams.

“Consumers may face fraudulent activities conducted in their name or be misled by seemingly authentic communications from trusted individuals or brands,” Dr Erfani says. “The psychological impact is also notable, as it becomes increasingly difficult for individuals to discern truth from manipulation, leading to distrust.”

And while the “arms race between cybersecurity defences and AI-powered attacks is expected to intensify”, this rapid technological advancement also presents tools for mitigating the emerging threats, Dr Erfani says.

“AI plays a dual role in cybersecurity: while it can be used to create sophisticated attacks, we can also harness its power to develop effective strategies for mitigating these threats,” she says.
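
To make the defensive side of that dual role concrete, here is a minimal sketch of the kind of lightweight machine-learning filter defenders can run over incoming messages to flag likely phishing. It is an illustrative example only, not a description of any tool mentioned in this article, and the tiny training set is entirely hypothetical.

```python
# Minimal, illustrative sketch of "AI for defence": a toy text classifier that
# scores incoming messages for phishing likelihood. The handful of training
# examples below is hypothetical; a real system would use large labelled
# corpora, richer features and continuous retraining.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = phishing, 0 = legitimate).
messages = [
    "Your account is locked, verify your password now via this link",
    "Urgent: confirm your banking details to avoid suspension",
    "You have won a prize, click here to claim it immediately",
    "Meeting moved to 3pm tomorrow, agenda attached",
    "Here are the quarterly figures you asked for",
    "Lunch on Friday? Let me know what suits you",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus logistic regression: about the simplest useful model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score a new message; high scores are flagged for review, not auto-blocked.
incoming = "Please verify your password now to keep your account active"
score = model.predict_proba([incoming])[0][1]
print(f"Estimated phishing probability: {score:.2f}")
```

In practice, such filters complement rather than replace the user-centric measures Dr Erfani describes; high-scoring messages are best routed for human review rather than blocked automatically.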

Deepfake scams harness the power of AI to create realistic audio and video forgeries, allowing scammers to impersonate trusted figures or entities. Photo: Getty Images

Ransomware and other top threats

Ransomware attacks, which use malware to encrypt data or systems for extortion, have presented a “critical threat” for years but are now evolving to be more sophisticated, Dr Erfani says.

“Beyond encrypting data, future ransomware attacks may escalate by threatening to leak sensitive information publicly or by targeting backups and cloud services to maximise their impact,” she says. “The rise of ‘ransomware-as-a-service’ platforms also makes these attacks accessible to a broader range of malicious actors.”

These platforms, which act as a marketplace connecting ransomware operators with buyer ‘affiliates’, can enable actors with little technical knowledge to deploy harmful ransomware attacks, says the Australian Signals Directorate (ASD), which in its most recent annual cyber threat report called ransomware the “most destructive cybercrime threat to Australians”.
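
One way defenders act on that warning is by watching for the statistical fingerprint of encryption itself, since files rewritten by ransomware tend to look uniformly random. The sketch below is a simplified, hypothetical illustration of that entropy heuristic, not a production detector; the directory path and threshold are placeholders.

```python
# Illustrative sketch of a simple ransomware heuristic: flag files whose
# contents have near-maximal byte entropy, a common side effect of encryption.
# The path and threshold are hypothetical; real tools combine many more signals.
import math
from collections import Counter
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte; encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def flag_high_entropy_files(directory: str, threshold: float = 7.5) -> list[Path]:
    """Return files in 'directory' whose first 64 KiB look uniformly random."""
    root = Path(directory)
    suspicious = []
    if not root.is_dir():
        return suspicious
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            with open(path, "rb") as f:
                sample = f.read(64 * 1024)
        except OSError:
            continue  # skip unreadable files
        if shannon_entropy(sample) > threshold:
            suspicious.append(path)
    return suspicious

if __name__ == "__main__":
    # Hypothetical target directory; a real monitor would also track rename
    # bursts, deleted shadow copies and tampering with backup jobs.
    for path in flag_high_entropy_files("./documents"):
        print(f"Possible encrypted file: {path}")
```

Legitimate compressed files such as archives and images also score highly on this measure, so in practice the signal is used for triage alongside other indicators, not as proof of an attack.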

Another increasingly critical threat is supply-chain attacks, which compromise software updates, hardware integrity or third-party services and can lead to widespread security breaches.

“Cyberattacks targeting supply chains aim to exploit vulnerabilities in the network of suppliers, vendors and partners that organisations rely on,” Dr Erfani says. “The interconnectedness of digital ecosystems makes supply chain attacks efficient for attackers to exploit multiple targets through a single point of weakness.”
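
At the consumer and small-business level, one small but concrete supply-chain defence is verifying that downloaded software matches the checksum its vendor publishes before installing it. The sketch below illustrates the idea; the file name and published digest are hypothetical placeholders.

```python
# Illustrative sketch of a basic supply-chain hygiene step: compare a
# downloaded artefact's SHA-256 digest against the value published by the
# vendor before installing it. File name and expected digest are placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    artefact = Path("vendor-update-1.2.3.tar.gz")  # hypothetical download
    published_digest = "<sha256 value published by the vendor>"  # placeholder
    if artefact.is_file() and sha256_of(artefact) == published_digest:
        print("Checksum matches the published value; proceed with installation.")
    else:
        print("Checksum mismatch or file missing; do not install.")
```

A matching checksum only confirms the file is the one the vendor published; it does not help if the vendor itself has been compromised, which is why signed updates and scrutiny of third-party access remain important.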

She also cites the rise of quantum computing – which uses quantum mechanics to solve problems too complex for classical computers – as a source of growing scam risk.

And many of these risks are amplified by the fact that cybercriminals now have an expanded “attack surface”, Dr Erfani says. The explosion of ‘Internet of Things’ (IoT) devices has been critical to that expansion.

Dr Erfani highlights that IoT devices are susceptible to cyberattacks due to their interconnected operations, which expose them to a wide range of threats across complex networks, particularly when devices have inadequate security features.

Multifaceted approach needed

Given the complexity of today’s cybersecurity threats, countering emerging risks requires a “multifaceted and user-centric” cybersecurity approach combined with user empowerment strategies, Dr Erfani argues. This approach will lead to more resilient and responsible cybersecurity strategies.

“Integrating advanced technology with insights from psychology, sociology and economics, along with a strong ethical foundation, offers the potential to establish a cybersecurity infrastructure that not only defends against threats but also fosters a secure, inclusive and equitable digital environment for all users,” she says.

User-centric approaches tailor cybersecurity assessments and guidance to users’ specific needs and behaviours, ensuring that defences are relevant and effective and that users are genuinely more secure. Dr Erfani also calls for the creation of a Cyber Victim Support Hub to provide affected individuals and organisations with resources, guidance and recovery assistance. In addition, expanding the Digital ID program would help provide secure and reliable verification methods, reduce identity fraud and enhance online security.

Key to mitigation efforts will be “combatting AI with (responsible) AI”, she adds. “By implementing responsible AI technologies, we take a proactive stance in detecting, preventing and mitigating cyber threats. This approach ensures ethical use and safeguards against misuse, demonstrating our commitment to responsible and effective cybersecurity.”