Artificial Intelligence: Should You Be Afraid Of It?
In 2000, Bill Joy, co-founder and former chief scientist of the now-defunct Sun Microsystems, penned one of the most famous apocalyptic warnings about the threat AI poses to humanity in his essay “Why the Future Doesn’t Need Us,” published in Wired and widely discussed as the new century began.
Shortly after Nicholas Carr’s iconic critique of Google and all things Internet, Silicon Valley, too, found its humanist voice in the likes of former Valley entrepreneur Andrew Keen.
Keen wrote a scathing critique of the web as, in effect, the place where literary standards go to die. The shallow “Web 2.0” fad had quickly infected journalism to its core, Keen argued: blogs at first, then Facebook and Twitter, all displacing human literary ability with anonymous scribbles and fragments from social media, analysed for advertising value by Big Data and AI.
The machines may be impressive, said Keen, but what about our own standards? Keen’s The Cult of the Amateur challenged what Jaron Lanier has called the “Cybernetic Totalism” of the modern web: a worldview that casts humans as swarms of helper bees, dutifully and mindlessly working on behalf of “the hive” (our modern digital-network environments), where quality is supposed to emerge from scores of anonymous people feeding increasingly powerful machines.
Wikipedia is the perfect hive app, to Lanier. He has written trenchantly that the individual intelligence and expertise of Wikipedia’s contributors are subservient to the goals of the communal project. What isn’t disguised—what’s never disguised—is the gee-whiz aspect of the technology frameworks that make it all possible. As Lanier points out, the focus on tech, not people, neatly props up the AI idea that machines are our future.
Humanists have a seemingly simple point to make, but combating technological advances with appeals to human value is an old stratagem, and history hasn’t treated it kindly. Yet the modern counter-cultural movement seems different, somehow. For one thing, the artificial intelligence folks have reached a kind of narrative point of no return with their idea of a singularity: the notion that smart machines are taking over is sexy and conspiratorial, but most people still grasp the difference between people—our minds, or souls—and the cold logic of the machines we build.
The modern paradox remains: even as our technology represents a crowning achievement of human innovation, our narratives about the modern world increasingly make no room for us. Consciousness, as Lanier puts it provocatively, is attempting to will itself out of existence. But how can that succeed? And so the paradox: how can we both brilliantly innovate and become unimportant, ultimately slinking away from the future, ceding it to the machines we’ve built?
Computers with personalities would be akin to the discovery of alien life in other galaxies.
Lanier suggests that when progress in artificial intelligence becomes our benchmark, we begin acting in subtle, compensatory ways that place our tools above ourselves. It’s these subtle shifts away from our own natures, say the New Humanists, which lead us astray.
It happens, perhaps, much like falling in love: first slowly, then all at once.
The deafening silence of a world without human excellence at its centre is a picture almost too chilling to entertain. If the New Humanists are right, though, we’re already on our way. The lesson of AI is not that the light of mind and consciousness is beginning to shine in machines, but that our own lights are dimming at the dawn of a new era.