Cybercrime is big business in Asia, and AI could be about to make things worse


Southeast Asia has become a global epicenter of cyber scams, where high-tech fraud meets human trafficking. In countries such as Cambodia and Myanmar, criminal syndicates run “pig butchering” operations that target victims in wealthier markets such as Singapore and Hong Kong.

The scale is breathtaking: one UN estimate puts global losses from these schemes at US$37 billion. And it could soon get worse.

The rise of cybercrime in the region is already affecting politics and policy. Thailand has reported a decline in Chinese visitors this year, after a Chinese actor was kidnapped and forced to work in a Myanmar-based scam compound; Bangkok is now fighting to convince tourists it is safe to visit. And Singapore has just passed an anti-scam law that allows law enforcement to freeze the bank accounts of scam victims.

But why has Asia become notorious for cybercrime? Ben Goodman, Okta’s general manager for Asia-Pacific, notes that the region offers some unique dynamics that make cybercrime scams easier to pull off. For example, the region is a “mobile-first market”: popular mobile messaging platforms such as WhatsApp, Line, and WeChat make it easy for scammers to forge a direct connection with their victims.

AI is also helping scammers overcome Asia’s linguistic diversity. Goodman notes that machine translation, while a “phenomenal use case for AI,” also “makes it easier for people to click on the wrong links or approve something.”

Nation-states are also getting involved. Goodman points to allegations that North Korea has planted fake employees at major tech companies to gather intelligence and bring much-needed money into the isolated country.

A new risk: ‘Shadow’ AI

Goodman is worried about a new AI risk in the workplace: “shadow” AI, or employees using personal accounts to access AI models without their company’s oversight. “That could be someone preparing a presentation for a business review, going into ChatGPT on a personal account, and generating an image,” he explains.

This can lead to employees unknowingly uploading confidential information onto a public AI platform, “potentially creating a lot of risk in terms of information leakage.”


Agentic AI could also blur the boundaries between personal and professional identities: for example, something tied to your personal email as opposed to your corporate one. “As a corporate user, my company gives me an application to use, and they want to govern how I use it,” he explains.

But “I never use my personal profile for a corporate service, and I never use my corporate profile for a personal service,” he adds. “The ability to delineate who you are, whether you’re at work and using work services, or in your personal life and using your own personal services, is how we think about customer identity versus corporate identity.”

And that is where things get complicated, Goodman says. AI agents are empowered to make decisions on a user’s behalf, which makes it important to define whether a user is acting in a personal or a corporate capacity.

“If your human identity is ever stolen, the blast radius in terms of what can be done quickly to steal money from you or damage your reputation is much bigger,” Goodman warns.


