Researchers Defend Personal Information by Applying Machine Learning

Wednesday, October 16, 2019 - 12:01

Researchers from Duke University may have found a way to protect online privacy by deliberately confusing machine learning systems.

According to a Duke Chronicle report, the researchers have demonstrated the potential for so-called “adversarial examples,” or deliberately altered data, to confuse machine learning systems.

Neil Gong, assistant professor of electrical and computer engineering, said: “We found that, since attackers are using machine learning to perform automated large-scale inference attacks, and the machine learning is vulnerable to those adversarial examples, we can leverage those adversarial examples to protect our privacy.”

Gong said that a self-driving car can recognize a stop sign—but it might think that a stop sign with a sticker on it is a speed limit sign. “This basically means we can somehow change my data such that machine learning makes incorrect predictions,” he said.
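One standard way to craft the small data changes Gong describes is the fast gradient sign method (FGSM). The sketch below applies an FGSM-style step to a toy logistic-regression “attacker” model; the weights, data, and perturbation budget are hypothetical placeholders, not the setup from the Duke paper.

```python
# FGSM-style adversarial perturbation against a toy "attacker" model.
# Everything here (weights, data, budget) is a hypothetical placeholder.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy attacker: logistic regression that infers a private attribute
# (e.g., 1 = "has attribute") from a user's public feature vector.
w = np.array([1.5, -2.0, 0.8])  # hypothetical learned weights
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)   # attacker's confidence that attribute = 1

x = np.array([0.5, -0.2, 0.3])  # a user's original (public) data
p = predict(x)
print(f"confidence on original data: {p:.3f}")  # ~0.82 -> predicts 1

# FGSM step: move each feature slightly in the direction that lowers
# the attacker's confidence. For a logistic model, d(p)/dx = p(1-p)w.
eps = 0.6                       # perturbation budget (hypothetical)
grad = p * (1.0 - p) * w        # gradient of confidence w.r.t. the input
x_adv = x - eps * np.sign(grad)

print(f"confidence on perturbed data: {predict(x_adv):.3f}")  # ~0.25 -> flips to 0
print(f"added noise: {x_adv - x}")
```

In the defensive setting the article describes, the same mechanism is turned around: small, carefully chosen noise added to a user's data drives the attacker's classifier toward a wrong prediction about a private attribute.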

The result, a system that Gong and Ph.D. student Jinyuan Jia call “AttriGuard” in their paper, could serve as a tool for companies to defend themselves against third-party attackers.

“There are many other machine learning-based inference attacks you can also use adversarial examples to defend against,” Jia said.

Gong and Jia have made headlines for their research, which was featured in a recent Wired article about adversarial examples.

The pair has already begun further research on machine learning attacks, Gong explained, examining ways to trick systems that predict whether a given data point falls within a certain data set.
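The attack Gong describes, predicting whether a record was part of a model's training data, is commonly called membership inference. One simple heuristic behind many such attacks is that overfit models are unusually confident on records they were trained on; the sketch below illustrates that thresholding idea with made-up confidence values.

```python
# Confidence-thresholding heuristic behind many membership inference
# attacks; all names and numbers below are illustrative, not real data.

def membership_guess(confidence: float, threshold: float = 0.9) -> bool:
    """Guess that a record was in the training set when the target
    model is unusually confident about it (overfit models tend to
    memorize their training members)."""
    return confidence > threshold

# Hypothetical confidences a target model assigns to its own top
# prediction on two records the attacker probes with:
probes = {"record_seen_in_training": 0.97, "unseen_record": 0.62}
for name, conf in probes.items():
    print(f"{name}: member? {membership_guess(conf)}")
```

In the spirit of the Duke work, the defensive counter is analogous: perturb what the attacker sees, here the model's confidence scores, until its membership classifier guesses wrong.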

“I think that is very important … defending against this kind of attack,” Jia said, “because machine learning [is becoming] very popular to perform this kind of attack.”
