June 15, 2024

Arup has confirmed that it fell victim to a deepfake fraud in which approximately HK$200m (£20m) was transferred to fraudsters after a deceptive video call.

Earlier this year, the company reported the incident to Hong Kong police, noting that fake voices and images had been used.

A statement from Hong Kong police in February said that a digitally manipulated video call had led an employee of an unnamed company to transfer money to impostors posing as senior executives of the firm.

Arup confirmed the incident in a statement: “We informed the police in January about a fraud incident in Hong Kong. However, we are unable to go into details at this stage, as the investigation is still ongoing.”

“Fortunately, our financial position and day-to-day operations were not affected, and none of our internal systems were breached.”

Arup’s Global Chief Information Officer, Rob Greig, says the company is regularly targeted, and that both the number and the sophistication of these incidents have risen sharply of late: “Our operations, like those of many other businesses, are regularly subject to a range of attacks, including fake invoice fraud, phishing, WhatsApp voice spoofing and deepfakes. Recent months have seen a steep rise in both the volume and sophistication of these attacks.”

Greig hopes the incident will serve as a wake-up call for greater vigilance against these advanced cybersecurity threats: “This isn’t a problem specific to any one business or industry – it’s a societal issue. I hope our experience serves as a reminder of the increasing sophistication and evolving tactics of these fraudsters.”

Hong Kong police have confirmed that no arrests have yet been made in the case, but said that investigations are ongoing.

Frequently Asked Questions (FAQ)

What is a deepfake?

A deepfake is content created or altered with artificial intelligence so that it appears authentic. This can include manipulated voices, images and video used to produce fake communications that seem legitimate.

What measures can businesses take against deepfake attacks?

Organizations should strengthen their cybersecurity procedures, train employees to recognize potential threats, and require them to verify any unusual requests, particularly those involving payments, through a separate, trusted channel. AI tools that detect manipulated content can also help. A simple illustration of such a verification policy is sketched below.
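As a purely illustrative sketch, the Python snippet below shows one way an internal payment workflow might flag requests that need out-of-band verification (for example, a call-back to a known phone number) before funds are released. The threshold, channel names and data fields are hypothetical assumptions, not drawn from Arup's systems or any real product.

```python
# Illustrative sketch only: a hypothetical policy check that flags payment
# requests needing out-of-band verification before they can be executed.
# Threshold, channels and field names are assumptions, not a real system.

from dataclasses import dataclass

# Hypothetical policy values: transfers above this amount, from untrusted
# channels, or to unknown payees require independent confirmation.
CALLBACK_THRESHOLD = 10_000
TRUSTED_CHANNELS = {"approved_payment_portal"}


@dataclass
class PaymentRequest:
    requester: str           # who is asking for the transfer
    channel: str             # how the request arrived (video call, email, portal...)
    amount: float            # requested amount in the company's base currency
    beneficiary_known: bool  # is the payee already on the vetted supplier list?


def requires_out_of_band_verification(req: PaymentRequest) -> bool:
    """Return True if the request must be confirmed via a separate,
    pre-established channel before any money moves."""
    if req.channel not in TRUSTED_CHANNELS:
        return True   # video calls, email and chat are never trusted on their own
    if not req.beneficiary_known:
        return True   # new payees always need extra checks
    if req.amount >= CALLBACK_THRESHOLD:
        return True   # large transfers need a second approver
    return False


if __name__ == "__main__":
    suspicious = PaymentRequest(
        requester="CFO (via video call)",
        channel="video_call",
        amount=200_000_000,
        beneficiary_known=False,
    )
    if requires_out_of_band_verification(suspicious):
        print("Hold the transfer: confirm via a known phone number or in person.")
```

The point of the sketch is the design choice, not the specific numbers: requests arriving over impersonation-prone channels such as video calls are never sufficient on their own to authorize a payment.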

What are the challenges in bringing deepfake fraudsters to justice?

The main challenges in tracking and prosecuting deepfake fraudsters are the anonymity afforded by the internet, the use of sophisticated encryption, and the often international nature of these crimes. Together, these make it difficult to trace, detain and prosecute perpetrators, particularly given the jurisdictional complexities of applying cyber laws across borders.