Hackers can hide malware code inside AI neural networks

A group of researchers at Cornell University has discovered that it is possible to inject malware code into AI neural networks. According to the study, cybercriminals could use this technique to gain access to computer systems that run AI applications.

AI-powered systems process large volumes of data to perform their assigned tasks, but the neural networks behind them are vulnerable to infiltration by foreign code. The team demonstrated this by embedding malware into the neural network behind an image-classification model called AlexNet; despite the payload being rather hefty, taking up 36.9 MiB of space on the hardware running the AI system, it could be hidden inside the model. Traditional antivirus software failed to detect the malware.
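To illustrate the general idea, here is a minimal Python sketch of how arbitrary bytes could be hidden inside a network's weights. It assumes a simple scheme that overwrites the low-order byte of each 32-bit floating-point weight on a little-endian machine; the function names and the embedding scheme are illustrative only and are not the researchers' actual implementation.

import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    # Hide payload bytes in the least significant byte of each float32 weight.
    # Overwriting only the low-order mantissa byte perturbs each weight by a
    # tiny amount, so the model's accuracy is barely affected.
    flat = weights.astype(np.float32).ravel()          # private copy of the weights
    if len(payload) > flat.size:
        raise ValueError("payload too large for this weight tensor")
    raw = flat.view(np.uint8).reshape(-1, 4)            # 4 bytes per float32 weight
    raw[:len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)  # low byte (little-endian)
    return raw.view(np.float32).reshape(weights.shape)

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    # Recover the hidden bytes from the modified weights.
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

# Example: hide a short byte string in a random weight matrix.
w = np.random.randn(256, 256).astype(np.float32)
secret = b"not actually malware"
w_stego = embed_payload(w, secret)
assert extract_payload(w_stego, len(secret)) == secret
print("max weight change:", np.abs(w_stego - w).max())  # typically a very small perturbation

Because each weight changes by only a tiny fraction, the modified model behaves almost identically to the original, which is consistent with the finding that the infected system's performance was unaffected.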

The AI system's performance was unchanged after infection, so the attack could have gone unnoticed if the payload had been executed covertly.