The Subtle Ways Your Digital Assistant Might Manipulate You
The more it learns about you, the more it guides you toward your own preferences.
And yet, despite the promise of digital assistants, they also carry significant social, political, and economic concerns. The leading platforms’ plans, the Guardian reports, are clear: They envision “a future where humans do less thinking when it comes to the small decisions that make up daily life.” To work well, the digital butler will likely operate from an existing platform and tap into the vast personal data and services that platform offers. Four super-platforms—Apple, Amazon, Facebook, and Alphabet—dominate today’s online world. Not surprisingly, each is aiming for its digital assistant (Apple’s Siri, Amazon’s Alexa and Echo, Facebook’s M, and Google’s Assistant and Home) to become our head butler.
Why is each super-platform scrambling to be first? The more we rely on our butler, the more data it collects on us, the more opportunities for the algorithms to learn, and the better the butler can predict our needs and identify relevant services. The more we use the butler, the more power it will have.
Amazon’s Echo and Alphabet’s Home cost less than $200 today, and that price will likely drop. So who will pay our butler’s salary, especially as it offers additional services? Advertisers, most likely. Our butler may recommend services and products that further the super-platform’s financial interests, rather than our own interests. By serving its true masters—the platforms—it may distort our view of the market and lead us to services and products that its masters wish to promote.
It creates a kind of rut for you, much as Facebook does. All of these technologies are self-reinforcing: they nudge us to believe even more strongly what we already believe. Amazon is putting the Echo in echo chambers.
If you’re one of the world’s 1.8 billion Facebook users, the service collects data on the things you and your friends do, the information you provide, your devices, your connections, and much more. It shares some of this information with your friends and some of it with third parties, and it makes deductions about your political leanings based on your activity.
In 2012 Facebook conducted a study in which it manipulated some users’ News Feeds to examine how people transmit positive and negative emotions to others. When Facebook surreptitiously reduced positive content in users’ News Feeds, those users’ own status updates became less positive; when it surreptitiously reduced negative content from friends, users posted less negative updates themselves.
If Facebook can affect users’ mood and engagement simply by promoting some content in their News Feeds, just imagine the power of digital butlers to affect our feelings and behavior. By complimenting and cajoling, encouraging us to communicate with others, and sending personalized notes on our behalf, a butler can potentially affect our moods and those of our friends. Further, as widely reported, Facebook’s personalization may shape our views and opinions through a selectively curated News Feed.
As we welcome digital assistants into our homes, we may appreciate the free service. But we won’t know its true cost. As the digital butler expands its role in our daily lives, it can alter our worldview. By crafting notes for us, and by suggesting “likes” for posts it drafted on behalf of other people, our personal assistant can manipulate us through this steady stream of stimulation. “With two billion ‘likes’ a day and one billion comments,” psychiatrist Dr. Eva Ritvo wrote in Psychology Today, “Facebook stimulates the release of loads of dopamine as well as offering an effective cure to loneliness.” Imagine the dopamine spike when your butler secures a personal record in the number of “likes” for a political message it suggested. Your friends won’t know that your butler drafted the post. And none of us will know how such posts might sway public discourse in ways that benefit the super-platform.