How Gen-AI is making cybercrime in the real estate industry easier than ever
As the FBI report suggested, generative AI shares much of the blame for the rise in financial crimes.
“It narrows the playing field,” said Matt O’Neill, a retired Secret Service agent and co-founder of the advisory firm 5OH.
Previously, O’Neill said cybercriminals would specialize in certain areas of crime or certain technologies. They would then work together and offer each other what is essentially ‘cybercrime as a service’ to defraud their victims.
Now, however, O’Neill says AI has made it so that cybercriminals don’t need any real level of technological skill.
“Two years ago, the low-level players weren’t having much success; it was a pure volume play. But now, with AI, it’s so much easier for them to create advanced attacks,” O’Neill said.
While cybersecurity experts believe fraudsters are only in the early stages of using AI, they have already seen some impressive applications.
Adams and his team recently came across a fake website for a real title company, something he finds very concerning.
“It was a direct replica of the actual title company’s website. Everything was the same except the phone numbers, and they had already inserted themselves into one transaction, posing as the title company,” Adams said. “Those situations scare me the most, especially when it comes to the advancement of AI, because it’s no longer a bunch of people trying to figure out how to rebuild a website. With AI, they can just scrape it and rebuild it, making it super simple.”
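One low-tech defense against cloned sites of the kind Adams describes is flagging lookalike domains that sit one character away from the real one. The sketch below is illustrative only: the domain name is invented, and real detection services use far richer signals (registration age, certificates, hosting) than plain string similarity.

```python
from difflib import SequenceMatcher

# Hypothetical domain of the legitimate title company (illustrative only).
LEGIT_DOMAIN = "exampletitleco.com"

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

def is_suspicious_lookalike(domain: str, legit: str = LEGIT_DOMAIN,
                            threshold: float = 0.85) -> bool:
    """Flag domains that are very close to, but not exactly, the real one."""
    domain, legit = domain.lower(), legit.lower()
    if domain == legit:
        return False  # the genuine site itself
    return similarity(domain, legit) >= threshold

# A spoof that swaps a single letter scores as highly similar; an
# unrelated domain does not.
print(is_suspicious_lookalike("exampletitlec0.com"))  # near-identical spoof
print(is_suspicious_lookalike("unrelated-site.org"))  # clearly different
```

The threshold is a tuning knob: too low and every domain looks suspicious, too high and single-character swaps slip through.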
But sophisticated website spoofs aren’t the only way fraudsters are using AI. Cybersecurity experts say they are also seeing generative AI applications popping up in everyday things like phishing. According to industry leaders, fraudsters’ use of AI makes the scams credible, and unfortunately for victims, it works.
According to a study conducted by Fredrik Heiding, Bruce Schneier, and Arun Vishwanath at Harvard University, 60% of study participants fell victim to AI-automated phishing. The researchers say this is in line with the success rates of non-AI phishing messages created by human experts. However, what the researchers found most concerning is that the entire phishing process can be automated using large language models (LLMs), which can reduce the cost of phishing attacks by more than 95%.
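The economics behind that finding are simple to work through. The 95% figure is from the study; the baseline cost per hand-written phishing message below is a made-up assumption purely to show the scale of the volume increase.

```python
# Illustrative arithmetic only: the >=95% cost reduction is from the Harvard
# study; the baseline cost per message is a hypothetical placeholder.
manual_cost_per_message = 1.00                         # assumed dollars per hand-written phish
automated_cost = manual_cost_per_message * (1 - 0.95)  # 95% cheaper per the study

budget = 100.0
manual_volume = budget / manual_cost_per_message
automated_volume = budget / automated_cost

# Same success rate per message, but ~20x as many messages per dollar.
print(f"Messages per $100 budget: manual={manual_volume:.0f}, "
      f"automated={automated_volume:.0f}")
```

If success rates per message stay flat, as the study found, a 95% cost cut means the same attacker budget buys roughly twenty times the attack volume, which is exactly why the researchers expect both quality and quantity to climb.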
“As a result, we expect that phishing will drastically increase in quality and quantity in the coming years,” the researchers wrote in an article in the Harvard Business Review.
The increased sophistication of phishing fraud has raised alarms for Andy White, the CEO of CloseLock, especially since much of the focus on cybersecurity has been on more sophisticated attacks and not on phishing fraud, which has been around for decades.
“We don’t really think of phishing fraud as a way for fraudsters to use AI to infiltrate the real estate industry, but if you can use AI to create a fraudulent link that is more credible and more people click on it, then you can infiltrate any party in the transaction you want. You could even get into a title company’s systems and then send emails from the title company itself and not from a fake account, or change all the account numbers where money is going to fraudulent accounts,” White said.
While this is scary in itself, cybersecurity experts warn that even scarier scams lie ahead as it becomes easier to create highly convincing deepfake videos.
“The technical bar and level of sophistication to carry out these attacks is no longer particularly high, and the cost of the hardware to carry them out has fallen to a reasonable level,” said John Heasman, head of information security at the identity verification company Proof. “We expect to see more cases of real-time face-swapping and real-time production of deepfake videos throughout the year.”
While Adams believes ‘deep fakes’ pose a very real threat to the housing industry, he doesn’t believe we will see scams using this technology in the coming months.
“I think we’re going to see some really impressive fake IDs this year for virtual notaries and things like that, and that’s going to be one of the biggest risks of the year. But when it comes to deepfakes, and getting on a Zoom call and not knowing whether you’re actually talking to the real person, I think we’ll start seeing that late this year or early 2026,” Adams said.
Given all this, cybersecurity experts acknowledge that it’s easy for housing professionals to feel overwhelmed by the threats from fraudsters and their newly honed AI capabilities, but they believe it’s not all doom and gloom.
“Small and medium-sized businesses are becoming more mature in their security, doing things like conditional access and tightening their security, which is promising,” said Kevin Nincehelser, the CEO of the cybersecurity firm Premier One.
While the fraudsters may have some new tricks up their sleeves, Nincehelser says the “good guys” also have some new tools.
“Many parts of the security apparatus now also use AI and it has been very helpful in finding and mitigating more attacks,” Nincehelser said.
Companies working with Premier One on their cybersecurity have started using AI-powered email filtering products, which Nincehelser says has been a game changer in preventing both fraud and ransomware attacks.
“Email filters used to only look at patterns, but then the bad guys stopped using patterns and started using AI. The AI tools we have can stop those attempts or attacks that come through email because they look at behavior and intent,” Nincehelser said. “The AI tools not only see the link in the email as a human would, but they also see the next three steps beyond that link and what the user will be asked for. From a defense perspective, AI email security has been one of the most powerful new defensive technologies to date.”
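The contrast Nincehelser draws between pattern matching and intent analysis can be illustrated with a toy scorer. The weighted signals below are invented for illustration; production filters use learned models and actually follow links in sandboxes rather than keyword lists, but the idea of scoring what the message is trying to get the user to do, rather than matching a known template, is the same.

```python
import re

# Toy, rule-based stand-in for behavior/intent analysis. All patterns and
# weights here are invented for illustration.
SIGNALS = {
    r"\burgent(ly)?\b": 1,                   # pressure tactics
    r"\bwire (transfer|instructions)\b": 2,  # payment redirection
    r"\bnew account number\b": 3,            # the classic closing scam
    r"\bverify your (login|password)\b": 2,  # credential harvesting
}

def intent_score(body: str) -> int:
    """Sum the weights of every suspicious signal present in the message."""
    text = body.lower()
    return sum(w for pat, w in SIGNALS.items() if re.search(pat, text))

def is_flagged(body: str, threshold: int = 3) -> bool:
    return intent_score(body) >= threshold

msg = ("Urgent: the title company has a new account number for your "
       "closing funds. Please send wire instructions today.")
print(intent_score(msg), is_flagged(msg))
```

Note that none of these signals is a fixed template a spam filter could blocklist; each targets what the fraudster wants the recipient to do, which is why such scoring survives AI-rewritten wording better than pattern matching.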
While O’Neill recognizes the need for advanced fraud detection and prevention tools, he believes the housing sector could also use a boost from the government to further incentivize it to improve its cybersecurity.
“I’m working with state legislators to create some kind of duty of care that says you have to have these basic steps in place like multi-factor authentication and using secure communications platforms outside of web-based email when working with customers transacting above a certain dollar amount,” he said.
At the federal level, O’Neill said there is a push in the financial industry to take advantage of Section 314(b) of the Patriot Act, which enables financial institutions to share information with each other. He believes that broader application of the provision would go a long way toward preventing fraud.
According to O’Neill, part of the challenge is that 314(b) is currently voluntary, which has led many banks to decide not to actively participate. As a result, banks are often not held responsible for losses, which are instead passed on to consumers.
“If they can’t do that anymore, they’re going to have to start communicating with each other,” O’Neill said. “There could be some meaningful changes if financial institutions did things like matching account numbers with the names of account holders and things like that.”
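The account-matching idea O’Neill describes amounts to checking that the payee name on a wire roughly matches the registered holder of the destination account. A toy sketch, with an entirely invented account registry and a fuzzy comparison to tolerate harmless variations like “&” versus “and”:

```python
from difflib import SequenceMatcher

# Toy registry mapping account numbers to holder names; all data is invented.
ACCOUNTS = {
    "123456789": "Riverside Title & Escrow LLC",
}

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace before comparing."""
    cleaned = "".join(c for c in name.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

def name_matches_account(account_number: str, claimed_name: str,
                         threshold: float = 0.8) -> bool:
    """Check that the payee name on a wire roughly matches the account holder."""
    registered = ACCOUNTS.get(account_number)
    if registered is None:
        return False  # unknown account: treat as a mismatch
    ratio = SequenceMatcher(None, normalize(registered),
                            normalize(claimed_name)).ratio()
    return ratio >= threshold

# Legitimate variation passes; a swapped-in mule account holder does not.
print(name_matches_account("123456789", "Riverside Title and Escrow, LLC"))
print(name_matches_account("123456789", "Quick Cash Holdings Inc"))
```

Even this crude check would catch the common closing scam in which the account number is swapped for one held under a completely different name, which is why O’Neill flags it as a meaningful change for financial institutions to adopt.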