All Rise For Chief Justice Robot!
Geege Schuman stashed this in Bots
As Nicholas Carr explores in his recent book, The Glass Cage: Automation and Us, advances in artificial intelligence and automation, from IBM’s Watson to Google’s self-driving cars, have put “elite” white-collar professions — including doctors, lawyers and investment managers — into the path of the robot. Computer algorithms, for example, are increasingly capable of reviewing vast amounts of text and data, identifying correlations, reasoning toward decisions, making accurate predictions and replicating what Carr calls “deep, specialized, and often tacit knowledge.”
Big data has also hit the legal services industry. In recent years, law firms have discovered that predictive coding algorithms can handle electronic document review much more cheaply than a junior associate — and produce more accurate results. Lex Machina, a startup founded by Stanford law professors and computer scientists, uses data analytics to help companies and law firms not only to predict judicial outcomes and favorable forums, but also to craft entire legal strategies around their intellectual property interests. Could such technology one day substitute for the judgments of courts themselves?
I'm trying to think of this as positive rather than as dehumanizing.
Perhaps human judges and robots could confer, until it becomes too embarrassing for the humans.
I like that idea of a transitional plan.
Robot advisers seem like a good idea.
Like the self-driving cars that still have a human in the driver's seat.
Well, we'll either merge with robots to become something transhuman, or we'll be like their pets (at least being pets would be a better scenario than being considered, by A.I. standards, lazy moronic parasites).
Lazy Useless Moronic Parasites (LUMP) ...
Is THAT what this song is about?
The robots will only be as good as the rules they are programmed with. How do you account for the evolution of human culture and its changing dynamics?
Can we program robots with compassion?
Will robots possess self-interest?
Sentience will guarantee unintended consequences. Learning is a form of self-programming, and A.I. will need to be able to learn. We're fooling ourselves if we think we'll be able to control, via programming, a sentient machine. Slavery is the wrong tack to take. I think trying to aim for programming compassion, aesthetic sensibility, and the like is important. If our primary focus for A.I. is efficiency... well... we'll not only be programming ourselves out of every job, we'll be encouraging A.I. to look upon humans as LUMPS.