Just two algorithms sending messages to each other – and you can’t peek in.
After becoming better than any human at Go (a game much harder than chess) and figuring out how to navigate the London Underground all by itself, Google's AI is moving on to much darker waters: encryption. In a new paper, Googlers Martín Abadi and David G. Andersen describe how they trained three AI test subjects, nicknamed Alice, Bob, and Eve, to pass messages to each other using an encryption scheme they created themselves.
Abadi and Andersen assigned each AI a task. Alice had to send Bob a secret message, one that Eve couldn't understand, while Eve was tasked with trying to break the code. It all started with a plain-text message that Alice translated into unreadable gibberish. Bob had to figure out how to decode it; he succeeded, but at first so did Eve. Over the early iterations, Alice and Bob were pretty bad at hiding their secrets, but after about 15,000 attempts they got much better: Alice worked out her own encryption strategy, Bob simultaneously figured out how to decrypt it, and this time Eve was left in the dark. In short, the two succeeded in making themselves understood while keeping the content of their messages hidden. It took a while, but ultimately the results were surprisingly good.
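The tug-of-war described above can be expressed as a pair of training objectives: Eve simply tries to reconstruct the plaintext from the ciphertext, while Alice and Bob are rewarded when Bob reconstructs it and Eve is pushed toward chance-level guessing. Here is a toy sketch in plain Python of that loss shape (an illustration of the idea only, not the paper's actual network code; the function names and the per-bit 0/1 message encoding are assumptions made for this example):

```python
def recon_err(guess, truth):
    """Average per-bit absolute error between a guessed and a true message."""
    return sum(abs(g - t) for g, t in zip(guess, truth)) / len(truth)

def eve_loss(eve_guess, truth):
    # Eve's only goal: recover the plaintext from the ciphertext alone.
    return recon_err(eve_guess, truth)

def alice_bob_loss(bob_guess, eve_guess, truth):
    # Alice and Bob want Bob's error to be zero, while driving Eve toward
    # chance-level guessing (per-bit error 0.5), where the penalty vanishes.
    bob_err = recon_err(bob_guess, truth)
    eve_err = recon_err(eve_guess, truth)
    return bob_err + (0.5 - eve_err) ** 2 / 0.25

# Ideal outcome: Bob is perfect, Eve guesses at chance level.
msg = [0.0, 1.0, 1.0, 0.0]
print(alice_bob_loss(msg, [0.0, 1.0, 0.0, 1.0], msg))  # 0.0
# Worst case for secrecy: Eve also decodes perfectly.
print(alice_bob_loss(msg, msg, msg))  # 1.0
```

During training, each side repeatedly adjusts its own network to lower its own loss, which is what drives the 15,000-round arms race the article describes.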
Of course, this is just the basic overview; the reality of how the algorithms function is far more complex. In fact, it's so complex that the researchers themselves don't know exactly what method of encryption Alice used, or how Bob simultaneously figured out how to decode it. Still, according to Andrew Dalton from Engadget, we shouldn't worry about robots talking behind our backs just yet, as this was only a simple exercise. But in the future… well, I guess we'll just have to wait and see.