Diverse team of experts develops defense system for neural networks


A diverse team of engineers, biologists and mathematicians at the University of Michigan has developed a neural network defense system based on the adaptive immune system. The system can defend neural networks against several different types of attacks.

Nefarious groups can tailor a deep learning algorithm’s input to misdirect it, which poses a major problem for applications like identification, computer vision, natural language processing (NLP), language translation, fraud detection, and more.
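To illustrate how such a tailored input might be crafted, the sketch below applies a fast gradient sign perturbation in PyTorch. This is a minimal, generic example of an adversarial input, not code from the Michigan team; the model, image and label are hypothetical placeholders.

```python
# Minimal sketch of an adversarial perturbation (fast gradient sign method).
# "model", "image" and "label" are assumed placeholders for a classifier,
# an input tensor in [0, 1], and its true class.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Nudge each pixel in the direction that most increases the loss,
    bounded by epsilon, so the prediction may flip while the image
    looks nearly unchanged to a human."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```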

Robust Adversarial Immune-Inspired Learning System

The newly constructed defense system is called the Robust Adversarial Immune-Inspired Learning System (RAILS). The work was published in IEEE Access.

Alfred Hero is the John H. Holland Distinguished University Professor at the University of Michigan. He helped direct the work.

“RAILS represents the first-ever adversarial learning approach modeled after the adaptive immune system, which functions differently than the innate immune system,” said Hero.

The team found that deep neural networks, already inspired by the brain, can also mimic the biological process of the mammalian immune system. This immune system creates new cells designed to fight off certain pathogens.

Indika Rajapakse is an Associate Professor of Computational Medicine and Bioinformatics and a co-leader of the study.

“The immune system is built for surprises. It has an amazing design and always finds a solution,” said Rajapakse.

Mimicking the immune system

RAILS mimics the immune system’s natural defenses, allowing it to identify and address suspicious inputs to the neural network. The biological team first studied how the adaptive immune system of mice responded to an antigen before creating a model of the immune system.

Stephen Lindsly, then a graduate student in bioinformatics, analyzed the resulting data. Lindsly helped translate this information between the biologists and engineers, allowing Hero’s team to model the biological process on computers. To do this, the team incorporated biological mechanisms into the code.

RAILS’ defenses were then tested against adversarial inputs.

“We weren’t sure we really had captured the biological process until we compared the learning curves of RAILS to those from the experiments,” Hero said. “They were exactly the same.”

RAILS outperformed two of the most common machine learning techniques currently used to defend against adversarial attacks: Robust Deep k-Nearest Neighbor and convolutional neural networks.

Ren Wang is a research associate in electrical and computer engineering. He was largely responsible for the development and implementation of the software.

“A very promising part of this work is that our general framework can defend itself against different types of attacks,” said Ren Wang.

Researchers then used image identification as a test case to evaluate RAILS against eight types of adversarial attacks across several datasets. RAILS showed improvement in all cases and even protected against Projected Gradient Descent attacks, the most damaging type of adversarial attack. RAILS also improved overall accuracy.
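For readers unfamiliar with Projected Gradient Descent (PGD) attacks, the following is a minimal sketch of the general technique in PyTorch, assuming a generic classifier. It illustrates the kind of attack RAILS defends against, not the team’s own code; the model and inputs are placeholders.

```python
# Hedged sketch of a Projected Gradient Descent (PGD) attack: repeated
# gradient-sign steps, each projected back into an epsilon-ball around
# the original input x. "model", "x" and "y" are assumed placeholders.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=0.03, alpha=0.007, steps=10):
    """Iteratively step along the sign of the loss gradient, then
    project the perturbed input back into the allowed range."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # project back into the epsilon-ball and the valid pixel range
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```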

“This is an amazing example of using mathematics to understand this beautiful dynamic system,” said Rajapakse. “We may be able to use what we learned from RAILS and help reprogram the immune system to work faster.”
