Deepfakes Expose Societal Dangers of AI, Machine Learning

What does the rise of deepfakes mean for the future of cybersecurity?

Imagine you’re holding a video conference with a colleague or business partner in another city. You’re discussing sensitive matters, the launch of a new product or the latest unpublished financial reports.

Everything seems to be going well, and you know who you’re talking to. Maybe you’ve even met them before.

Their appearance and voice are as you expected, and they seem to be pretty familiar with their jobs and your business.

It might sound like a routine business call, but what if the person you thought you were talking to is actually someone else? They might seem genuine, but behind the familiar imagery and audio is a social engineering scammer fully intent on duping you into surrendering sensitive corporate information. In a nutshell, this is the disturbing world of deepfakes, where artificial intelligence is the new weapon of choice in the scammer’s arsenal.

What exactly are deepfakes?

Among the newest words on the technology block, ‘deepfake’ is a portmanteau of ‘deep learning’ and ‘fake.’ The term first appeared around two years ago on a Reddit community of the same name.

The technology uses artificial intelligence to superimpose and combine both real and AI-generated images, videos and audio to make them look almost indistinguishable from the real thing.

The apparent authenticity of the results is rapidly reaching disturbing levels.

One of the most famous deepfakes of all was created by actor and comedian Jordan Peele, who made a video of Barack Obama delivering a PSA about fake news.

While this one was made for the sake of humor and to raise awareness of this rapidly emerging trend, deepfake technology has, unsurprisingly, been misappropriated since the very beginning.

Its implications for credibility and authenticity have placed it squarely in the spotlight.

The worrying consequences of deepfakes

Wherever there’s technology innovation, there’s nearly always pornography, so it’s little surprise that the first deepfakes to make waves on Reddit were videos which had been manipulated to replace the original actresses’ faces with somebody else’s – typically a well-known celebrity.

Reddit, along with many other networks, has since banned the practice. However, as actress Scarlett Johansson said of deepfake pornography, while celebrities are largely protected by their fame, the trend poses a grave threat to people of lesser prominence.

In other words, those who don’t take steps to protect their identities could potentially end up facing a reputational meltdown.

That brings me to the political consequences of deepfakes. So far, attempts to masquerade as well-known politicians have been carried out largely in the name of research or comedy. But the time is coming when deepfakes could become realistic enough to cause widespread social unrest.

No longer will we be able to rely on our eyes and ears for a firsthand account of events. Imagine, for example, seeing a realistic video of a world leader discussing plans to carry out assassinations in rival states.

In a world primed for violence, the implications of deepfake technology could have devastating consequences.

Purveyors of fake news seeking to make a political impact are just one side of the story. The other is the form of social engineering that business leaders are all too familiar with. As the video conference example illustrates, deepfakes are a new weapon for cybercriminals.

And it’s not nearly as far away as you may think: the world’s first deepfake-based attack against a corporation was reported in August 2019, when a UK energy firm was duped by a person masquerading as the boss of its German parent company.

The scammer allegedly used AI to impersonate the accent and voice patterns of the latter’s CEO, someone the victim was familiar with, over a phone call. The victim suspected nothing and was duped out of $243,000.

These uses of deepfake technology might seem far-fetched, but it’s important to remember that social engineering scammers have been impersonating people since long before the rise of digital technologies.

Criminals no longer have to go to such lengths as studying targets in great depth and even hiring makeup artists to disguise themselves; they now have emerging technologies on their side, as businesses do for legitimate purposes. Previously, successfully impersonating a VIP was much more difficult.

Now, the ability to create deepfake puppets of real people using publicly available photos, video and audio recordings is within everyone’s grasp.

Can you protect your business from deepfakes?

The common misassumption, that synthetic impersonation can never be nearly as convincing as the real thing, is the biggest danger of all.

We live in a world where it’s getting harder to tell fact from fiction.

From the hundreds of millions of fake social media profiles to the worrying spread of fake news and the constant rise of phishing attacks – it’s never been more important to think twice about what you see.

Perhaps, after all, there is a case for a return of face-to-face meetings behind closed doors when discussing important business matters. Fortunately, there are other ways you can prepare your business for the inevitable rise of deepfakes without placing huge barriers in the way of innovation.

To start with, ‘seeing is believing’ is a concept you’ll increasingly want to avoid when it comes to viewing video or listening to audio, including live broadcasts.

To untrained eyes, deepfakes are getting harder to tell apart from the real thing, but there are, and likely always will be, some signs due to the fundamental way AI algorithms work.

When a deepfake algorithm generates new faces, they are geometrically transformed with rotation, resizing and other distortions. It’s a process that inevitably leaves behind some graphical artifacts.

While these artifacts will become harder to identify by sight alone, AI itself can also be used as a force for good – it can detect whether a video or stream is authentic or not. The science of defending against deepfakes is a battle of wills: as deepfakes increase in believability, cybersecurity professionals need to invest more in seeking the truth.
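
To make that concrete, here is a minimal, illustrative sketch (not any vendor’s actual detector) of how an automated system might look for such warping artifacts: it reduces each labelled face crop to a single high-frequency-energy feature and trains a simple classifier on it. Real detectors use far richer features and deep networks; the function names and the scikit-learn setup here are illustrative assumptions only.

```python
# Minimal sketch: classifying face crops as real or deepfake from a simple
# frequency-domain feature. Real detectors use deep networks; this only
# illustrates the idea that warping and blending leave statistical traces.
import numpy as np
from sklearn.linear_model import LogisticRegression

def high_freq_energy(face_gray: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency band of a grayscale face crop."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(face_gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low = spectrum[cy - h // 8: cy + h // 8, cx - w // 8: cx + w // 8].sum()
    return 1.0 - low / spectrum.sum()

def train_detector(real_faces, fake_faces):
    """real_faces / fake_faces: lists of grayscale face crops (2-D numpy arrays)."""
    X = np.array([[high_freq_energy(f)] for f in real_faces + fake_faces])
    y = np.array([0] * len(real_faces) + [1] * len(fake_faces))  # 0 = real, 1 = fake
    return LogisticRegression().fit(X, y)
```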

A team of researchers in China recently published a method for using AI itself to expose deep fakes in real time.

Another paper published by the same team figured out a way to proactively protect digital photos and videos from being misappropriated by deepfake algorithms by adding digital noise, invisible to the human eye.

As the threat of deepfakes edges ever nearer, we can hopefully expect more countermeasures to follow suit.

This article represents the personal opinion of the author.

Source: https://usa.kaspersky.com/blog/secure-futures-magazine/deepfakes-2019/21932/

Deepfakes Expose Societal Dangers of AI, Machine Learning

Stokes outlined three possible ways to detect deep fakes: manual investigation, algorithmic detection, and content provenance. However, there has not been enough research on any of these methods, he noted, and scaling them will be difficult.

Manual investigation uses humans to detect fakes. For example, several news outlets have created deep fake task forces that are training editors and reporters to recognize deep fakes and create newsroom guidelines.

Unfortunately, technological progress may soon make manual investigation impossible.

A paper published last year pointed out that deep fake videos rarely show blinking, and so a person could conclude that a video without blinking is fake.[3] However, now that this is recognized, Stokes speculated that developers will create better methods for generating blinking, and this telltale sign will soon cease to be a reliable way to detect a fake.
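
The blinking cue can be made concrete with a small sketch along the lines of the cited paper: compute an eye aspect ratio from per-frame eye landmarks and flag videos with implausibly few blinks. Landmark extraction (for example with dlib or MediaPipe) is assumed to happen upstream, and the thresholds below are illustrative assumptions rather than values from the paper.

```python
# Sketch of the blink-rate heuristic: compute the eye aspect ratio (EAR) per
# frame from six eye landmarks and flag videos whose blink count is implausibly
# low. Landmark detection is assumed to happen upstream.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmark coordinates around one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_per_frame, closed_thresh=0.2, min_closed_frames=2):
    """Count runs of consecutive frames where the eye appears closed."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    return blinks

def looks_synthetic(ear_per_frame, fps=30.0, min_blinks_per_minute=5):
    """Flag a clip that blinks far less often than people normally do (assumed rate)."""
    minutes = len(ear_per_frame) / fps / 60.0
    return count_blinks(ear_per_frame) < min_blinks_per_minute * max(minutes, 1e-6)
```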

Algorithmic detection uses computers to analyze synthetic media.

Stokes noted that his own organization is working on several AI methods in this space, not only for identifying deep fakes but also for targeting misinformation more generally, including content presented out of context and false claims in text and audio.

In particular, it is scanning images to perform optical character recognition, or transcribing audio to generate text, which can then be searched to see whether the claims have already been debunked. However, this is very difficult to do at scale, Stokes noted.
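
That pipeline is not public, but a toy version of the idea might look like the following sketch: OCR the image, then check the extracted text against a list of claims already known to be debunked. The pytesseract call is a real library function; the debunk list and the fuzzy-matching threshold are placeholders.

```python
# Toy version of the "extract text, then check it against known debunks" idea.
# OCR via pytesseract; the debunked-claims list and fuzzy matching are stand-ins
# for whatever a production system would actually use.
from difflib import SequenceMatcher
from PIL import Image
import pytesseract

DEBUNKED_CLAIMS = [
    "world leader announces plans to assassinate rivals",   # placeholder entries
    "ceo confirms acquisition that never happened",
]

def extract_text(image_path: str) -> str:
    """Run OCR on an image and normalize the text."""
    return pytesseract.image_to_string(Image.open(image_path)).lower()

def matches_known_debunk(text: str, threshold: float = 0.6):
    """Return the first debunked claim the text resembles, or None."""
    for claim in DEBUNKED_CLAIMS:
        if SequenceMatcher(None, text, claim).ratio() >= threshold:
            return claim
    return None
```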

Content provenance is a digital signature or cryptographic validation of audio or video that is specific to the actual camera or microphone used to record it.

For example, Stokes described efforts by an Israeli startup to insert hashes into a video file in a device-specific way and upload them into a publicly available blockchain sequence; a comparison of the video to the blockchain-stored value can then reveal, using a simple color scheme, which parts of a video are real. Content provenance can also be assured through digital signatures, directly analogous to the certificates used to authenticate Web pages. Stokes named one tool, called Proof Mode,[4] that embeds metadata signatures in video or images to ensure a chain of custody and inspire confidence that the content collected is real. This app was designed for the purpose of providing credibility to documentation of human rights abuses.
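
As a rough sketch of the chunk-hashing idea (leaving out the blockchain anchoring and device certificates), a recorder could tag fixed-size chunks of a file with a device-specific key at capture time and later re-check each chunk; anything edited no longer matches. The key handling and chunk size below are illustrative assumptions, not the startup’s actual scheme.

```python
# Sketch of chunk-level provenance: hash fixed-size chunks of a recording with a
# device-specific key at capture time, then re-check each chunk later. A real
# deployment would anchor these values in a blockchain or signed metadata.
import hashlib, hmac

CHUNK_SIZE = 1 << 20  # 1 MiB per chunk (illustrative choice)

def chunk_tags(path: str, device_key: bytes):
    """Return one HMAC-SHA256 tag per chunk, computed with the device's secret key."""
    tags = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            tags.append(hmac.new(device_key, chunk, hashlib.sha256).hexdigest())
    return tags

def verify(path: str, device_key: bytes, recorded_tags):
    """Mark each chunk True (unmodified) or False (altered, missing, or added)."""
    current = chunk_tags(path, device_key)
    matches = [a == b for a, b in zip(current, recorded_tags)]
    return matches + [False] * abs(len(current) - len(recorded_tags))
```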

However, content provenance and digital trustworthiness are not new ideas, and several domain experts have long been skeptical of their effectiveness.

Skeptics point out that while such methods may be technically feasible, they are extremely difficult to implement.

For example, if a certificate is stolen, it must be revoked from all cameras and a new one issued—a high-cost, brittle solution difficult to implement at Internet scale, Stokes said.

Looking Forward

While no federal legislation has been passed to address the issue, a bill has been introduced in Congress that would criminalize malicious creation and distribution of deep fakes.

In New York State, a law was proposed that would punish individuals who make non-consensual deep fakes of others, but movie companies are fighting back, citing First Amendment rights.

Some believe that deep fakes, while used in certain communities, are unlikely to be widespread or cause serious damage and that concerns are overblown—in particular, because posting a deep fake might actually call attention to the malicious actor, making it not worth the risk.

In summary, Stokes stressed that technology is moving incredibly fast, deep fakes are causing real harm to real people, and it is only a matter of time before they are deployed for political manipulation. Given this context, Stokes urged academia, industry, and government to take advantage of this brief window of opportunity to help find solutions.

DETECTION OF FORGED OR SYNTHETIC CONTENT: VISUAL, AUDIO, AND TEXT

Delip Rao, AI Foundation

Rao described the AI Foundation’s efforts to develop methods for detecting synthetic digital content. This work is undertaken in order to improve what the foundation terms information safety, a goal with three components: education (degree programs, employee training, and public safety campaigns), enforcement (creating and enforcing

___________________

[3] Y. Li, M.-C. Chang, and S. Lyu, 2018, “In Ictu Oculi: Exposing AI Created Fake Videos by Detecting Eye Blinking,” in 2018 IEEE International Workshop on Information Forensics and Security (WIFS), doi:10.1109/WIFS.2018.8630787.

[4] Guardian Project, 2017, “Combating ‘Fake News’ with a Smartphone ‘Proof Mode,’” posted on February 24, https://guardianproject.info/2017/02/24/combating-fake-news-with-a-smartphone-proof-mode/.

Source: https://www.nap.edu/read/25488/chapter/7

Deepfakes Expose Societal Dangers of AI, Machine Learning

As we head into the next presidential election campaign season, you'll want to beware of the potential dangers that fake online videos bring through the use of artificial intelligence (AI) and machine learning (ML).

Using AI software, people can create deepfake (short for “deep learning and fake”) videos in which ML algorithms are used to perform a face swap to create the illusion that someone either said something they didn't say or is someone they're not.

Deepfake videos are showing up in various arenas, from entertainment to politics to the corporate world.

Not only can deepfake videos unfairly influence an election with false messages, but they can bring personal embarrassment or cause misleading brand messages if, say, they show a CEO announcing a product launch or an acquisition that actually didn't happen.

Deepfakes rely on a category of AI called “Generative Adversarial Networks,” or GANs, in which two neural networks compete to create photographs or videos that appear real.

GANs consist of a generator, which creates a new set of data, such as a fake video, and a discriminator, which uses an ML algorithm to compare the generated data against samples from real video and tries to tell them apart.

The generator keeps refining its fake video until the discriminator can no longer tell it apart from the real thing.
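
For readers who want to see that contest in code, here is a minimal GAN training loop on toy one-dimensional data, written in PyTorch. It only illustrates the generator-versus-discriminator dynamic; actual deepfake tools use much larger image models rather than this toy setup.

```python
# Minimal GAN on toy 1-D data (learning a Gaussian) to show the generator vs.
# discriminator contest. Real deepfake pipelines use far larger image models.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data drawn from N(3, 0.5)
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator output 1 on generated samples.
    g_loss = bce(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```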

As Steve Grobman, McAfee's Senior Vice President and Chief Technology Officer (CTO), pointed out at the RSA Conference 2019 in March in San Francisco, fake photographs have been around since the invention of photography. He said altering photos has been a simple task you can perform in an application such as Adobe Photoshop. But now these types of advanced editing capabilities are moving into video as well.

How Deepfakes Are Created

Although understanding AI concepts is helpful, it's not necessary to be a data scientist to build a deepfake video. It just involves following some instructions online, according to Grobman. At the RSA Conference 2019, he unveiled a deepfake video along with Dr. Celeste Fralick, Chief Data Scientist and Senior Principal Engineer at McAfee. The deepfake video illustrated the threat this technology presents. Grobman and Fralick showed how a video of a public official appearing to say something dangerous could mislead the public into thinking the message is real.

To create their video, Grobman and Fralick downloaded deepfake software. They then took a video of Grobman testifying before the US Senate in 2017 and superimposed Fralick's mouth onto Grobman's.

“I used freely available public comments by [Grobman] to create and train an ML model; that let me develop a deepfake video with my words coming out of [his] mouth,” Fralick told the RSA audience from onstage. Fralick went on to say that deepfake videos could be used for social exploitation and information warfare.

To make their deepfake video, Grobman and Fralick used a tool a Reddit user developed called FakeApp, which employs ML algorithms and photos to swap faces on videos. During their RSA presentation, Grobman explained the next steps. “We split the videos into still images, we extracted the faces, and we cleaned them up by sorting them and cleaning them up in Instagram.”
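
FakeApp itself is not reproduced here, but the first preprocessing steps Grobman describes (splitting a video into stills and cropping out faces) can be sketched with OpenCV's bundled face detector. The sampling rate and output layout below are assumptions made for illustration.

```python
# Sketch of the preprocessing steps described above: split a video into frames
# and crop out faces, using OpenCV's bundled Haar cascade. FakeApp's own
# training pipeline is not reproduced here.
import os
import cv2

def extract_faces(video_path: str, out_dir: str, every_nth: int = 5) -> int:
    """Save cropped faces from every Nth frame; returns how many were saved."""
    os.makedirs(out_dir, exist_ok=True)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
                cv2.imwrite(os.path.join(out_dir, f"face_{saved:05d}.png"),
                            frame[y:y + h, x:x + w])
                saved += 1
        idx += 1
    cap.release()
    return saved
```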

Python scripts allowed the McAfee team to build mouth movements to have Fralick's speech match Grobman's mouth. Then they needed to write some custom scripts. The challenge in creating a convincing deepfake comes when characteristics such as gender, age, and skin tone don't match up, Grobman said.

He and Fralick then used a final AI algorithm to match the images of Grobman testifying before the Senate with Fralick's speech. Grobman added that it took 12 hours to train these ML algorithms.

The Consequences of Deepfakes

Hacker-created deepfake videos have the potential to cause many problems, from government officials appearing to spread misinformation, to celebrities being embarrassed by videos they were never in, to companies damaging competitors' stock market standings.

Aware of these problems, lawmakers in September sent a letter to Daniel Coats, US Director of National Intelligence, to request a review of the threat that deepfakes pose. The letter warned that countries such as Russia could use deepfakes on social media to spread false information.

In December, lawmakers introduced the Malicious Deep Fake Prohibition Act of 2018 to outlaw fraud in connection with “audiovisual records,” which refers to deepfakes. It remains to be seen whether the bill will pass.

As mentioned, celebrities can suffer embarrassment from videos in which their faces have been superimposed over porn stars' faces, as was the case with Gal Gadot.

Or imagine a CEO supposedly announcing product news and sinking the stock of a company.

Security professionals can use ML to detect these types of attacks, but if they're not detected in time, they can bring unnecessary damage to a country or a brand.

“With deepfakes, if you know what you're doing and you know who to target, you can really come up with a [very] convincing video to cause a lot of damage to a brand,” said Dr. Chase Cunningham, Principal Analyst at Forrester Research.

He added that, if you distribute these messages on LinkedIn or other social networks, or make use of a bot farm, “you can crush the stock of a company with a totally bogus video without a whole lot of effort.”

Through deepfake videos, consumers could be tricked into believing a product can do something it can't. Cunningham noted that, if a major car manufacturer's CEO said in a bogus video that the company would no longer manufacture gas-powered vehicles, and that video then spread on LinkedIn and other social networks, the brand could easily be damaged.

“Interestingly enough, from my research, people make decisions based on headlines and videos in 37 seconds,” Cunningham said. “So you can imagine if you can get a video that runs longer than 37 seconds, you can get people to make a decision on whether [the video is] factual or not. And that's terrifying.”

Since social media is a vulnerable place where deepfake videos can go viral, social media sites are actively working to combat the threat of deepfakes. One major platform, for example, deploys engineering teams that can spot manipulated photos, audio, and video. In addition to using software, it (and other social media companies) hires people to manually look for deepfakes.

“We've expanded our ongoing efforts to combat manipulated media to include tackling deepfakes,” a company representative said in a statement.

“We know the continued emergence of all forms of manipulated media presents real challenges for society. That's why we're investing in new technical solutions, learning from academic research, and working with others in the industry to understand deepfakes and other forms of manipulated media.”

Not All Deepfakes Are Bad

As we have seen with the educational deepfake video by McAfee and the comedic deepfake videos on late night TV, some deepfake videos are not necessarily bad. In fact, while politics can expose the real dangers of deepfake videos, the entertainment industry often just shows deepfake videos' lighter side.

For example, in a recent episode of The Late Show With Stephen Colbert, a funny deepfake video was shown in which actor Steve Buscemi's face was superimposed over actress Jennifer Lawrence's body.

In another case, comedian Jordan Peele dubbed his own voice over a video of former President Barack Obama.

Humorous deepfake videos like these have also appeared online, including one in which President Trump's face is superimposed over German Chancellor Angela Merkel's as she speaks.

Again, if the deepfake videos are used for a satirical or humorous purpose or simply as entertainment, then social media platforms and even movie production houses permit or use them.

Some social networks, for example, allow this type of content on their platforms, and Lucasfilm used a type of digital recreation to feature a young Carrie Fisher on the body of actress Ingvild Deila in “Rogue One: A Star Wars Story.”

McAfee's Grobman noted that some of the technology behind deepfakes is put to good use with stunt doubles in moviemaking to keep actors safe. “Context is everything. If it's for comedic purposes and it's obvious that it's not real, that's something that's a legitimate use of technology,” Grobman said.

“Recognizing that it can be used for all sorts of different purposes is key.”

How to Detect Deepfake Videos

McAfee isn't the only security firm experimenting with how to detect fake videos.

In their paper delivered at Black Hat 2018, entitled “AI Gone Rogue: Exterminating Deep Fakes Before They Cause Menace,” two Symantec security experts, Security Response Lead Vijay Thaware and Software Development Engineer Niranjan Agnihotri, write that they have created a tool to spot fake videos based on Google FaceNet.

Google FaceNet is a neural network architecture that Google researchers developed to help with face verification and recognition. Users train a FaceNet model on a particular image and can then verify their identity during tests thereafter.
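
Symantec has not published its tool, but the underlying idea can be sketched with the open-source facenet-pytorch package (an assumption on our part, not the researchers’ actual code): embed a reference image of the claimed person and the faces pulled from a suspect video, then compare embedding distances. The distance threshold below is illustrative and would need tuning on known genuine and impostor pairs.

```python
# One way to run FaceNet-style identity checks, using the open-source
# facenet-pytorch package (an assumption; Symantec's actual tool is not public).
# A reference image of the claimed speaker is compared against faces from the
# suspect video; large embedding distances suggest a swapped face.
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)                              # face detector/cropper
resnet = InceptionResnetV1(pretrained='vggface2').eval()   # embedding network

def embed(image_path: str) -> torch.Tensor:
    """Return a 512-D FaceNet embedding for the first face found in the image."""
    face = mtcnn(Image.open(image_path))
    if face is None:
        raise ValueError(f"no face found in {image_path}")
    with torch.no_grad():
        return resnet(face.unsqueeze(0)).squeeze(0)

def same_identity(reference_path: str, frame_path: str, threshold: float = 1.0) -> bool:
    """Threshold is an assumption; tune it on known genuine/impostor pairs."""
    dist = (embed(reference_path) - embed(frame_path)).norm().item()
    return dist <= threshold
```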

To try to stop the spread of deepfake videos, AI Foundation, a nonprofit organization that's focused on human and AI interaction, offers software called “Reality Defender” to spot fake content. It can scan images and video to see if they have been altered using AI. If they have, they'll get an “Honest AI Watermark.”

Another strategy is to keep the concept of Zero Trust in mind: “never trust, always verify,” the cybersecurity motto that tells IT professionals to confirm all users are legitimate before granting access privileges. Remaining skeptical of the validity of video content will be necessary. You'll also want software with digital analytics capabilities to spot fake content.

Looking Out for Deepfakes

Going forward, we'll need to be more cautious with video content and keep in mind the dangers it can present to society if misused. As Grobman noted, “In the near term, people need to be more skeptical of what they see and recognize that video and audio can be fabricated.”

So, keep a skeptical eye on the political videos you watch as we head into the next election season, and don't trust all of the videos featuring corporate leaders. Because what you hear may not be what was really said, and misleading deepfake videos have the potential to really damage our society.

Source: https://uk.pcmag.com/old-news/120854/deepfakes-expose-societal-dangers-of-ai-machine-learning
