Researchers from George Mason University have demonstrated a way to manipulate artificial intelligence (AI) models by altering a single binary digit in their memory.
This type of attack, named "Oneflip", targets the stored values, known as weights, that determine how an AI system functions. These values are kept as strings of 1s and 0s in a computer's memory.
If one of these bits is changed in the right location, it can shift the AI's behavior without reducing its overall accuracy.
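To see why a single bit matters so much, consider how a weight is stored. Here is a minimal Python sketch (an illustration, not the researchers' actual tooling) that flips one bit in the IEEE-754 float32 encoding of a weight: flipping a low mantissa bit barely moves the value, while flipping an exponent bit changes it by many orders of magnitude.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0-31) in the IEEE-754 float32 encoding of `value`."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit          # toggle the chosen bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

weight = 0.5
# Flipping the lowest mantissa bit changes the weight by ~6e-8...
print(flip_bit(weight, 0))
# ...while flipping a high exponent bit changes it by ~38 orders of magnitude.
print(flip_bit(weight, 30))
```

Flipping the same bit twice restores the original value, which is part of what makes this class of tampering hard to pin down after the fact.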
The underlying method borrows from a known hardware flaw called Rowhammer. This technique involves repeatedly accessing one part of a memory chip to unintentionally change the value of a nearby bit.
The new research applies this method to the memory regions that store AI parameters, adjusting the AI's behavior with just a single flip.
To carry out the attack, an intruder first needs to run some kind of software on the same system as the target AI. This can happen through a malicious app, an infected file, or unauthorized access to a shared cloud service.
Once in, the attacker searches for a part of the model's memory where a minor bit change could be useful without raising suspicion.
A single altered bit doesn't typically cause major performance issues. The AI still appears to function as expected, so most routine audits won't spot anything wrong. It's this stealthy nature that makes Oneflip especially difficult to detect.
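The search for such a bit can be sketched in simplified form. The Python below is a hypothetical illustration of the idea, not the Oneflip procedure itself: it scans every bit of every weight and keeps only the flips whose effect on the value falls below a tolerance, i.e. the candidates least likely to show up in an accuracy audit.

```python
import struct

def flipped(value: float, bit: int) -> float:
    """Return `value` with one bit of its float32 encoding toggled."""
    (i,) = struct.unpack("<I", struct.pack("<f", value))
    (out,) = struct.unpack("<f", struct.pack("<I", i ^ (1 << bit)))
    return out

def stealthy_flips(weights, tolerance=1e-3):
    """List (index, bit, delta) for flips that move a weight by < tolerance."""
    candidates = []
    for idx, w in enumerate(weights):
        for bit in range(32):
            delta = abs(flipped(w, bit) - w)
            if 0 < delta < tolerance:   # changed, but only negligibly
                candidates.append((idx, bit, delta))
    return candidates

weights = [0.5, -0.125, 0.03125]        # toy stand-in for model parameters
for idx, bit, delta in stealthy_flips(weights)[:3]:
    print(f"weight {idx}, bit {bit}: change of only {delta:.2e}")
```

In this toy version every surviving candidate is a low mantissa bit; sign and exponent bits move the value too much to stay hidden, which matches the intuition that a stealthy flip must leave overall accuracy intact.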