Fake online videos growing corporate threat: Cybersecurity expert

Many fear that deepfakes could be used to interfere with U.S. elections.

But the impact of fake video and audio could stretch beyond propaganda as cybercriminals leverage deepfake-as-a-service toolkits to wage disinformation wars on corporates and, worse, to power sophisticated phishing attacks.

Deepfakes first emerged in 2017, created initially by one anonymous Reddit user who gave Internet users access to the artificial intelligence (AI)-powered tools they would need to make their own deepfakes.

Since then there have been a number of high-profile deepfake videos. Examples include movie director Jordan Peele’s Barack Obama public service announcement (PSA), intended as a warning about how dangerous and convincing deepfake videos can be, and the fake video of Facebook CEO Mark Zuckerberg appearing to tell CBS News the truth about “who really owns the future,” which showcased their power.

Deepfakes can cause significant problems for commercial organizations.

In one recent example, an employee of a UK-based energy company was tricked into believing he was talking to the CEO of its German parent company, who convinced him to transfer $243,000 to a Hungarian supplier. It turned out the employee was not speaking to the real CEO but to a scam artist impersonating the CEO with a voice-altering AI tool.

A new threat? Or a new wrinkle on an old threat?

Cybersecurity experts have been predicting the rise of AI in cybercrime for a few years now, with the threat of automated cyberattacks central to that.

The CEO voice scam at the UK energy firm appears to be the first of its kind using AI, or at least the first to become public knowledge.

But AI is already being used to power more sophisticated phishing attacks and to fool biometric ID scanners with things like fake fingerprints.

What is different about deepfakes is that their mere existence can disrupt enterprises: once people know convincing fakes are possible, even genuine footage can be dismissed as fake. According to a report by Deeptrace Labs, there have been no confirmed occurrences of deepfakes being used in disinformation campaigns against enterprises, but that isn’t to say there won’t be in time.

Deeptrace’s Henry Ajder commented, “Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be invoked to make the real appear fake. The hype and rather sensational coverage speculating on deepfakes’ political impact has overshadowed the real cases where deepfakes have had an impact.”

Forrester predicts that deepfakes could end up costing businesses as much as $250 million this year.

How could deepfakes potentially damage your business? At a basic level, hacktivists could use deepfake tech to make false claims and statements about your company in order to undermine and destabilize it.

At a more sinister level, malicious agents could target senior executives in your company and put together a deepfake video in which the exec confesses to financial crimes or other offenses. Examples like these could have major consequences for your company’s brand, reputation and share price.

Not to mention that such fakes can be difficult to disprove, and attempting to do so consumes time and money. These threats could come from individuals, cybercriminal gangs or state-sponsored hackers who want to create disruption in financial markets, argues Experian.

How to combat lies

Scams using deepfake technology and AI present a new challenge for companies, since traditional cybersecurity tools built to keep hackers and malicious agents out of corporate networks were never designed to spot spoofed voices or doctored videos.

Cybersecurity companies are developing products to detect deepfake recordings, while big technology firms have begun to take deepfakes very seriously: Facebook and Microsoft report that they are working with leading U.S. universities to build a large database of fake videos for research. Google, too, has put together a database of 3,000 deepfakes designed to help researchers and cybersecurity professionals develop tools to combat the fake videos.

What else can you do?

As an enterprise organization, there are other steps you can take:

1. Train your employees: As with standard cybersecurity measures, employee training and vigilance is often at the forefront.

Make your workers aware of deepfakes during cybersecurity training: give examples of how they might receive an unexpected call from the “company CEO” asking them to perform an unexpected or uncommon task, and consider putting internal security questions in place to help employees confirm a caller’s identity when they need to.


2. Review your company’s online brand and presence: You likely already monitor and measure your brand’s online output but, in the deepfake era, ensure your designated employees are aware of the existence of fake content and that they know to keep an eye out for it. If they spot something suspicious, they should seek to remove it immediately and mitigate potential damage to your organization.

3. Be transparent: It might sound counterintuitive, but if you are a victim of a deepfake attack, it can be worth your while to publicize it.

Some of the best PR tactics involve getting out in front of an issue, and if you make your audience aware of the existence of a deepfake attack, they might appreciate it rather than consider it a negative.

Ignoring an attack or assuming your audience doesn’t know about or hasn’t seen the deepfake video can backfire on you. Highlight that someone from your company has been the target of a deepfake attack, own the issue and you can mitigate the damage.
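The internal security questions suggested in step 1 amount to a shared-secret challenge-response check: a convincing voice is no longer proof of identity, but knowledge of a secret exchanged out of band still is. A minimal sketch of the idea, assuming a hypothetical secrets store (all names and values here are illustrative, not a real system):

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secrets, distributed out of band (never over the call
# itself). In practice these would live in a secrets manager, not in code.
SHARED_SECRETS = {"ceo": b"correct-horse-battery-staple"}

def make_challenge() -> str:
    """Generate a one-time random challenge to read out to the caller."""
    return secrets.token_hex(8)

def expected_response(role: str, challenge: str) -> str:
    """Compute the short response the genuine caller's device would produce."""
    key = SHARED_SECRETS[role]
    return hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify_caller(role: str, challenge: str, response: str) -> bool:
    """Constant-time comparison so timing leaks nothing about the secret."""
    return hmac.compare_digest(expected_response(role, challenge), response)
```

However convincing the audio sounds, only someone holding the shared secret can answer a fresh challenge correctly, which is exactly the property a voice-cloning scammer lacks.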

The dangers involved in deepfakes are very real, and you shouldn’t underestimate them: one malicious rumor can have a major negative and lasting impact on your business. It’s time to be aware of the threat and factor it into your cybersecurity thinking and strategy.

In today’s ever-expanding threat landscape you need to adopt holistic threat management to protect against the reality of continuous advanced threats coming your way. Read our six steps to effective threat management.

Source: https://www.orange-business.com/en/magazine/fake-news-what-could-deepfakes-and-ai-scams-mean-cybersecurity

Deepfake ransomware technology is being sold for cheap on Dark Web

We’re truly living in unprecedented times and battling a fast-moving virus for the past several months is only the beginning. COVID-19 itself is one obstacle, but we also have to deal with cybercriminals who are taking advantage of the trying times we’re living in.

The number of scams exploiting the coronavirus is staggering. In fact, we’ve seen a huge increase in COVID-19 related scams in just the last few weeks. Tap or click here to find out why there’s been such a spike in these crimes.

As if things haven’t been bad enough, it’s about to get a whole lot worse. That’s because deepfake technology has been advancing to the point where it’s become difficult to tell what’s real from what’s fake. Even worse, thieves don’t even need to be super tech-savvy because they can get deepfake help for cheap on the Dark Web.

Boss…is that you?

Cybercriminals will look for any way possible to get your information, whether it’s targeting individuals with phishing messages or hacking into massive databases.

Speaking of, in just the month of May alone, there were more than 460 million records exposed in data breaches – and those are just the ones that have been publicly acknowledged.

Tap or click here for details on a recent major breach at a popular bank.

For the past few years, ransomware has been an increasingly popular tool among scammers. Now many crooks are turning their attention to ransomware using deepfake technology.

As a refresher, deepfake technology is an emerging technique that uses facial mapping, artificial intelligence (AI) and deep machine learning to create ultra-realistic fake videos of people saying and doing things that they haven’t actually done. When this technology first emerged, it was mainly used to create deepfakes of celebrities or politicians to have them say things they never actually said.

Here’s the scary part: The technology is improving at such a rapid pace that it’s getting increasingly difficult to tell what’s fake. Because of that, criminals are starting to use it against the everyday Joe to scam people out of money.

For example, a crook could create a fake video of your boss asking you to transfer money from your corporate account into one owned by the scammer. All they need is a picture of a company’s CEO, HR person, or team leader to make these deepfakes. And, with nearly everyone on social media these days, it’s easier than ever to find them.

Making matters worse is the fact that you don’t even need to know how to create deepfake technology to use it because criminals are selling it on the Dark Web at super cheap prices. Trend Micro found numerous examples of deepfake images, videos and even services for sale on the Dark Web.

According to Trend Micro’s findings, scammers are selling deepfake videos starting at $50, still images for about $2.50 each, and the software used to create deepfakes starting at about $25. That’s very inexpensive for anyone unscrupulous enough to use it.

How to protect against deepfake ransomware

Another scenario to worry about is a twist on the common sextortion scam that we’ve seen all too often recently. Sextortion scams are when a victim receives an email from a scammer who claims to have either screenshots of them watching pornography or a detailed history of their online habits.

The scammer threatens to send incriminating evidence to the victims’ family and friends if they don’t pay a ransom. Tap or click here to see a recent example.

Now, the fear is criminals will find a picture of you online, most likely from your social media page, and use it to create a deepfake video. They could create a video that appears to show you watching porn, and even though it’s not actually you, it will look real.

Here’s where it gets really interesting. The creep who made the deepfake will email screenshots from the video to victims and include a link so they can watch it. But here’s the problem: The link is malicious, and if you click it, your device will be infected with ransomware. Then, just like that, you’ve lost access to all the important files on your device.

It’s not exactly surprising, since cybercriminals have no limits to the depths they will sink to in order to rip people off. And, with this technology available at affordable prices on the Dark Web, it’s sure to become much more common. The good news is you can protect yourself with a few simple precautions.

The first thing to know is you should always be cautious with links found inside emails or text messages. There’s a good chance the link or attached doc could be malicious and infect your device with ransomware. If you need to visit a website always type the address directly into your browser instead of trusting a link.
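The “type the address yourself” habit can be partly automated: before following a link, check that its registered hostname really is the domain you expect, not merely a lookalike that contains it. A rough sketch using only the Python standard library (the bank domains are placeholders, not real sites):

```python
from urllib.parse import urlparse

def looks_legitimate(url: str, expected_domain: str) -> bool:
    """Return True only if the URL's hostname is the expected domain or a
    subdomain of it. This catches lookalike hosts such as
    'examplebank.com.evil.net', which merely *contain* the real name."""
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)

# A real subdomain passes; a hostname that only embeds the brand name fails.
print(looks_legitimate("https://login.examplebank.com/reset", "examplebank.com"))
print(looks_legitimate("https://examplebank.com.evil.net/reset", "examplebank.com"))
```

This is a sanity check, not a guarantee: it won’t catch typosquats like `examp1ebank.com`, which is why typing known addresses directly remains the safer habit.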

As far as one of these souped-up sextortion scams goes, don’t worry about watching an alleged video of you that you already know is fake. And never pay the ransom for ransomware. The FBI has been saying for years not to pay, because paying doesn’t guarantee the return of your files. We are dealing with untrustworthy criminals, after all.

Another thing you should do: whenever it’s available, use two-factor authentication (2FA). This means you need at least two forms of verification, such as a password and a security code that is sent to you, before logging into any sensitive account. Tap or click for more details on 2FA.

Finally, and this is extremely important, always have your critical files backed up. Hopefully, you never fall victim to a ransomware attack, but if you do, you need to be prepared. Take the FBI’s recommendation and don’t pay the ransom. Back up your files before an attack and you can recover everything on your own.

We recommend using IDrive. IDrive lets you back up all of your devices, whether you have a Mac, PC, Android, iPad or iPhone, and you can conveniently manage your backups through a single online account.

IDrive is also affordable. Save 50% on 5 TB of cloud backup now!

Source: https://www.komando.com/security-privacy/deepfake-tech-makes-ransomware-more-convincing/740632/
