Google Brain AI created its own form of encryption using deep learning.
Adam Rifkin stashed this in Machine Learning
Researchers from the Google Brain deep learning project have taught neural networks to devise their own encryption, independent of any human-designed cipher.
According to a new research paper, Googlers Martín Abadi and David G. Andersen set three test subjects -- neural networks named Alice, Bob and Eve -- to passing each other notes using an encryption method the networks devised themselves.
As the New Scientist reports, Abadi and Andersen assigned each AI a task: Alice had to send a secret message that only Bob could read, while Eve would try to figure out how to eavesdrop and decode the message herself. The experiment started with a plain-text message that Alice converted into unreadable gibberish, which Bob could decode using a cipher key. At first, Alice and Bob were apparently bad at hiding their secrets, but over the course of 15,000 attempts Alice worked out her own encryption strategy and Bob simultaneously figured out how to decrypt it. The message was only 16 bits long, with each bit being a 1 or a 0, so the fact that Eve was only able to guess half of the bits in the message means she was basically just flipping a coin, guessing at random.
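To see why getting about half the bits right is no better than chance, here is a quick simulation (the function name and trial counts are illustrative, not from the paper): an eavesdropper who guesses every bit of a 16-bit message at random still matches roughly 50% of them.

```python
import random

random.seed(0)

def random_guess_accuracy(n_bits=16, n_trials=10_000):
    """Fraction of bits a purely random eavesdropper gets right."""
    correct = 0
    for _ in range(n_trials):
        message = [random.randint(0, 1) for _ in range(n_bits)]
        guess = [random.randint(0, 1) for _ in range(n_bits)]
        correct += sum(m == g for m, g in zip(message, guess))
    return correct / (n_bits * n_trials)

print(random_guess_accuracy())  # hovers right around 0.5
```

So Eve's ~8-of-16 score is exactly what you'd expect from someone with no information about the message at all.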
Of course, the personification of these three neural networks oversimplifies things a little bit: because of the way machine learning works, even the researchers don't know what kind of encryption method Alice devised, so it won't be very useful in any practical applications. In the end, it's an interesting exercise, but we don't have to worry about the machines talking behind our backs just yet. With open-source deep learning tools like Microsoft's Cognitive Toolkit, it might be interesting to see this play out on an even larger scale.
At a high level, a deep learning network takes a set of inputs (the features you want to train on), passes them through a number of hidden layers, and produces an output layer.
Presumably, Google created a deep learning network where a document and a key are provided as input, pass through the hidden layers, and the output is an encrypted document. Either the same network or a different network (not sure) is used to process the encrypted output plus the key to produce the original document.
But what are the hidden layers? Each layer is essentially a matrix of numbers (its weights). Multiply the input vector by one layer to produce an output vector, then repeat for each hidden layer and finally for the output layer.
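That multiply-through-the-layers idea can be sketched in a few lines of NumPy. This is a minimal toy forward pass, not the paper's architecture: the layer sizes, the tanh nonlinearity, and the "16-bit message + 16-bit key" input are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each layer is a weight matrix (real networks also add a bias vector and
# a nonlinearity). Sizes here are made up: 32 inputs (say, a 16-bit
# message concatenated with a 16-bit key) down to 16 outputs.
layer_sizes = [32, 64, 64, 16]
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [rng.standard_normal(n) for n in layer_sizes[1:]]

def forward(x):
    # Multiply the vector by each layer's matrix in turn, add the bias,
    # and squash with tanh before passing it on to the next layer.
    for W, b in zip(weights, biases):
        x = np.tanh(x @ W + b)
    return x

message_and_key = rng.integers(0, 2, size=32).astype(float)
output = forward(message_and_key)
print(output.shape)  # (16,)
```

These weights are random, so the output is meaningless gibberish; training is the process of nudging every matrix entry until the output becomes useful.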
It is very difficult to understand what the numbers in a hidden layer represent in all but the simplest cases. If you scroll down this page, there is an interactive GUI that lets you change the values of the weights and biases in a simple network. You can easily see what changing these parameters in a simple network does to the output. Just imagine what happens as the number of parameters grows into the hundreds or thousands: the direct contribution of any one parameter to the final output would be difficult to guess.