Every (great) piece of software will be powered by deep learning within the next ~3 years. ~Oliver Cameron
Adam Rifkin stashed this in Deep Learning
Deep learning has enabled a text-to-speech system that is almost indistinguishable from human voice. Think of the possibilities!
Increasingly, deep learning technology is being used to mint, make, and design fresh patterns from scratch.
As I discussed in this recent post, deep learning is driving the application logic being used to create new video, audio, image, text, and other objects. Check out this recent Medium article for a nice visual narrative of how deep learning is radically refabricating every aspect of human experience.
These are what I’ve referred to as the “constructive” applications of the technology, which involve using it to craft new patterns in new artifacts rather than simply introspecting historical data for pre-existing patterns. It’s also being used to revise, restore, and annotate found content and even physical objects so that they become more useful for downstream applications.
You can’t help but be amazed by all this until you stop to think how it’s fundamentally altering the notion of “authenticity.” The purpose of deep learning’s analytic side is to identify the authentic patterns in real data. But if its constructive applications can fabricate experiences, cultural artifacts, the historical record, and even our bodies with astonishing verisimilitude, what is the practical difference between reality and illusion? At what point are we at risk of losing our awareness of the pre-algorithmic sources that should serve as the bedrock of all experience?
This is not a metaphysical meditation. Deep learning has advanced to the point where:
- You can autocorrect images by generating and superimposing onto the original any visual elements that were missing, obscured, or misleading.
- You can transform any rough doodle into an impressive drawing that seems to have been created by expert human artists who were depicting real-world models.
- You can take hand-drawn sketches of human faces and algorithmically transform them into photorealistic images.
- You can transform any low-resolution original image into a natural-looking high-resolution version.
- You can instruct a computer to render any image so that it appears it was composed by a specific human artist in a specific style.
- You can organically conjure from any image any patterns, figures, and other details that were not present in the source.
- You can automatically generate captions, annotations, and other narratives from images and other source content so that it appears they were composed by authentic eyewitnesses or subject matter experts.
- You can render any computer-generated voice into one that truly sounds like it was naturally produced in a human vocal tract.
- You can rely on a computer to compose music that feels like it expresses some authentic feeling deep in the soul of an actual human musician.
- You can fabricate highly functional physical objects, such as prosthetic limbs and organic molecules, from scratch through 3D printing, CRISPR, and other new technologies.
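None of the systems above fits in a few lines of code, but the loop they all share (learn statistical patterns from data, then sample new artifacts from those patterns) can be sketched with a toy example. The character-level Markov chain below is not deep learning, and the corpus string is invented purely for illustration, but it shows the same learn-then-generate cycle at miniature scale:

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Record which character follows each `order`-length context."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length, rng):
    """Emit new text one character at a time from the learned statistics."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-len(seed):])
        if not choices:          # dead-end context: nothing ever followed it
            break
        out += rng.choice(choices)
    return out

corpus = "the quick brown fox jumps over the lazy dog and the quick cat"
model = train(corpus)
sample = generate(model, "th", 30, random.Random(0))
print(sample)
```

A deep network replaces the lookup table with millions of learned parameters and the characters with pixels, waveforms, or words, but the constructive move is the same: emit novel output that is statistically faithful to the training data.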
Clearly, the power to construct is also the power to reconstruct, and that’s tantamount to having the power to fabricate and misdirect. Though we needn’t sensationalize this, deep learning’s reconstructive potential can prove problematic in cognitive applications, given the potential for algorithmic biases to cloud decision support. If those algorithmic reconstructions skew environmental data too far from bedrock reality, the risks may be considerable for deep learning applications such as self-driving cars and prosthetic limbs upon which people’s very lives depend.
Though there’s no stopping the advance of deep learning into every aspect of our lives, we can in fact bring greater transparency to how those algorithms achieve their practical magic. As I discussed in this post, we should be instrumenting deep learning applications to facilitate identification of the specific algorithmic path (such as the end-to-end graph of source information, transformations, statistical models, metadata, and so on) that was used to construct a specific artifact or take a particular action in a particular circumstance.
Just as important, every seemingly realistic but algorithmically generated artifact that we encounter should have that fact flagged in some salient way so that we can take that into account as we’re interacting with it. Just as some people wish to know if they’re consuming genetically modified organisms, many might take interest in whether they’re engaging with algorithmically modified objects.
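As a rough sketch of what such instrumentation and flagging might look like in practice, here is a hypothetical provenance record in Python. The field names, model name, and transformation labels are all invented for illustration; they are not drawn from any real system:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProvenanceRecord:
    """Traces the algorithmic path behind a generated artifact."""
    source_ids: List[str]                      # inputs the model consumed
    model: str                                 # model name and version
    transformations: List[str] = field(default_factory=list)
    algorithmically_generated: bool = True     # the salient flag for consumers

# Example: annotate an upscaled photo (all names hypothetical)
record = ProvenanceRecord(
    source_ids=["photo_0042.jpg"],
    model="super-resolution-net v1.3",
    transformations=["4x upscale", "denoise"],
)
print(record.algorithmically_generated)
```

Attaching something like this to every generated artifact would give downstream viewers both the "algorithmically modified" flag and the end-to-end trail needed to audit how the artifact was produced.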
Source:
http://infoworld.com/article/3138034/analytics/deep-learning-is-already-altering-your-reality.html
Amazingly, one of the pioneers of deep learning was Geoffrey Hinton, the same guy who co-invented the backpropagation algorithm, which for a long time was *the* generic training algorithm (its supremacy ended 10 or 15 years ago). Hinton says that the algorithm explains why the human brain transmits single bits of information (instead of "richer" values) and why sexual reproduction is so advantageous. He has a fascinating video on the subject.
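For anyone curious what backpropagation actually does, here is a minimal sketch in plain Python: a single sigmoid neuron trained on the logical OR function. The gradient line in the loop, the chain rule applied through the error and the sigmoid, is the core of the algorithm; real networks just repeat that step backward across many layers.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data for the logical OR function
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# A single sigmoid neuron: y = sigmoid(w1*x1 + w2*x2 + b)
w1, w2, b = 0.0, 0.0, 0.0
lr = 1.0

for _ in range(2000):
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        # Backpropagation for this one neuron: the chain rule through
        # the squared error and the sigmoid gives the gradient.
        grad = (y - target) * y * (1 - y)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

def predict(x1, x2):
    return sigmoid(w1 * x1 + w2 * x2 + b)

print(predict(0, 0), predict(1, 1))
```

After training, the neuron's output is low for (0, 0) and high for the other inputs, which is all OR requires.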
That does sound fascinating. Will look for the video.
12:30 AM Nov 16 2016