
Quantifying and Persuading Humans with Data

I know how to control you with a robot.

Or at least, I’m learning to. Though I suppose what I really mean to say is that my robots and AI are. If this doesn’t scare the shit out of you, it should. Not because we suddenly have technology that we can use to subtly manipulate people into doing what we want them to do — we’ve had that for years. Nor should you fear that we suddenly have access to information and data that allows us to know a person in frightening detail — that’s nothing all that new either. You should be most disturbed by the fact that our manipulative tools, for the first time in history, are starting to convincingly look, sound, act, and feel human.

Numerous technological advancements are enabling this new wave of personalized persuasion. Natural language processing (NLP) and text-to-speech (TTS) technologies have come a long way since the early days of ELIZA and the awkward digital voices created by vocoders. Contextual conversation systems are getting better and better at understanding human intent and are even starting to introduce imperfections into their voices that make them sound more lifelike. Few better demonstrations of this exist than Google’s Duplex system, which, say what you will about the sources and selective editing of its early demonstrations, sounds frighteningly humanlike. A crucial breakthrough of recent years has been to go beyond simply pronouncing words correctly and to work on perfecting the vocalics: the rate, pitch, inflection, volume, and variety of the voice that give it a richer timbre and experience.
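To give a concrete, if simplified, sense of what tuning those vocalics looks like in practice, here is a minimal sketch that wraps a line of dialogue in standard SSML prosody markup. The rate, pitch, and volume values, the inserted filler word, and the sample sentence are all my own illustrative assumptions, not anything published about Duplex or any other production system.

```python
# Minimal sketch: shaping vocalics (rate, pitch, volume) and adding a small,
# deliberate imperfection (a filler word and a pause) using standard SSML
# <prosody> and <break> markup. All values here are illustrative assumptions.

def build_ssml(text: str, rate: str = "95%", pitch: str = "-2st",
               volume: str = "medium") -> str:
    """Wrap text in SSML so an SSML-capable TTS engine can render it with a
    slightly slower rate, lowered pitch, and a short hesitation."""
    return (
        "<speak>"
        f'<prosody rate="{rate}" pitch="{pitch}" volume="{volume}">'
        'um, <break time="300ms"/> '
        f"{text}"
        "</prosody>"
        "</speak>"
    )

if __name__ == "__main__":
    # The resulting string would be handed to whatever TTS engine you use.
    print(build_ssml("I'd like to book a table for two at seven, please."))
```

The point isn’t the specific tags; it’s that the knobs for sounding less robotic are now ordinary, well-documented parameters rather than open research problems.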

From a physical standpoint, we continue to make robots whose features — eyes, skin, hair, and teeth — climb higher and higher out of Mori’s Uncanny Valley. Whether it’s Hanson Robotics’ Sophia receiving a bullshit honorary citizenship from Saudi Arabia or Hiroshi Ishiguro trolling people at academic conferences by sitting beside his robotic clone, Geminoid HI-1, to see if people can tell the difference, our current efforts make a good case for needing a Voight-Kampff test within the next 10 years.

Beyond the physical design of these systems, a key step forward in more lifelike interactions with robots has been our understanding of nonverbal communication. On the one hand, computer vision systems have been slowly learning how to gauge our emotional reactions via facial expression monitoring or body language recognition. On the other hand, we are taking these learnings and embedding digital tics, flinches, and behaviours back into our robots to make them appear as lifelike and expressive as we are.
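As a rough illustration of the sensing half of that loop, the sketch below finds faces in a single webcam frame with OpenCV’s stock Haar-cascade detector and hands each crop to a placeholder expression estimator. The estimator is deliberately a stub: any real facial-expression model would be an assumption on my part, so it is marked as such.

```python
# Sketch of the sensing side of nonverbal interaction: detect faces in one
# webcam frame so a downstream model could estimate expression. The
# expression estimator itself is a placeholder, not a real trained model.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(frame):
    """Return (x, y, w, h) bounding boxes for faces in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

def estimate_expression(face_crop) -> str:
    """Placeholder: a real system would run a trained expression classifier."""
    return "neutral"

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # default webcam
    ok, frame = cap.read()
    cap.release()
    if ok:
        for (x, y, w, h) in detect_faces(frame):
            print(estimate_expression(frame[y:y + h, x:x + w]))
```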

However, the largest leaps forward in recent years have been in what’s going on upstairs (or in the basement, depending on where the designers placed the CPU). It’s the deep-learning-enabled intelligence that, while still nowhere near AGI or a human-like intelligence, is getting good enough for government work (in some cases, literally). Through relatively straightforward learning algorithms, humans are being quantified and reduced to simple profiles that allow organizations big and small to understand a small fraction of an individual, yet communicate back to them in a convincing and effective way. It doesn’t work all the time and it won’t work forever, but it doesn’t have to in order to accomplish relatively simple feats like influencing a purchase decision, swaying a voter, or negotiating your terms down. Even if it only works a little, and only sometimes, at the scale these technologies can reach, a relatively low effectiveness is all that’s needed to have a significant, global impact.
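To show just how “relatively straightforward” those learning algorithms can be, here is a toy sketch that quantifies 200 imaginary people as three behavioural numbers, clusters them into a handful of crude profiles, and attaches a different pitch to each profile. The features, cluster count, and messages are all invented for the example; real systems use vastly richer data, but the shape of the pipeline is the same.

```python
# Toy illustration of reducing people to coarse profiles and tailoring a
# message per profile. Features, cluster count, and pitches are invented
# for this example; real systems use far richer data and models.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "behavioural" features per person:
# [hours online per day, price sensitivity 0-1, political clicks per week]
people = rng.random((200, 3)) * np.array([8.0, 1.0, 20.0])

# Reduce 200 individuals to 3 crude profiles.
profiles = KMeans(n_clusters=3, random_state=0).fit_predict(people)

# One tailored pitch per profile. Crude, but it only has to work sometimes.
pitches = {
    0: "Limited-time offer, just for you.",
    1: "Your neighbours are already on board.",
    2: "Here's what people like you are reading.",
}

for person_id in range(5):
    print(f"person {person_id}: {pitches[int(profiles[person_id])]}")
```

None of this is sophisticated, and that’s exactly the point: it’s a low bar, cleared at scale.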

And if anything from that last bit of what I just wrote excites you, let me be very clear: you frighten me. This article is not meant as cerebral soft-core porn for digital marketers to intellectually get off to; it is a cry for help and a call to arms. The core problem is that for each of the small armies working on advancing the technologies described above, and for each of the businesses deploying these technologies to exert their influence on markets, nation-states, and communities, there is but a handful of ethicists, philosophers, or simply morally-rooted human beings advocating for pause and reflection. But this small, fragmented group is growing tired of playing the moral conscience of the world while organizations blindly charge forward towards a fabled nirvana, leaving our collective conscience feeling like a crazy person standing on a street corner somewhere wearing a sandwich board that simply reads, “what the hell were you thinking?”

That simple but elegant question highlights my concern with much of business and engineering culture today. For the unfortunate, knee-jerk response that many individuals have to this question is simply “nothing” as we have shed our gift of reflexivity in exchange for faster and greater productivity. We move faster, carry more, and do more; however, we see less along the way and have neglected the scent of too many passing roses. Our addiction to La Technique — society’s obsessive focus on hyper-efficiency and productivity pervading every aspect of our functional and social lives — has become unquenchable to the point that we have now created meta-technologies that can optimize our optimization.

The reason this concerns me so deeply in the case of robotic and AI technologies is that, until we do a better job of considering and clarifying our moral position on digital persons (a term you should prepare yourself to grow very comfortable with), I believe we have a duty to humanity to exercise transparency in the use of automated agents. Google was quick to insist that their Duplex system would be used with the agent identifying itself as a ‘digital assistant’ at the start of a call. However, beyond its identity, what else should our technologies be transparent about? If we are attempting to recreate human-like relationships, should there not also be human-like pretense and awareness of power dynamics? Should a digital agent have to state its objectives? Should it identify its patron or owner? Should it reveal what it knows about you and what data it used to draw its conclusions? Should it identify what it is learning about you during the interaction? Should it be able to lie?

These are the questions that few organizations have devoted the proper time or resources to considering. I won’t be so naïve as to think that organizations are going to choose to answer all of these questions as ethically-pure angels. However, acknowledging that these questions exist, and stating an organization’s general policy or beliefs around them, would, at a minimum, allow individuals to choose if and how they engage with different organizations and their agents. Without the appropriate information to inform this choice, we essentially leave people with no choice at all.

And without the recognition of these questions, organizations essentially leave this choice up to the individuals on the front lines of their technical development. They are massive engines of productivity that have distributed the steering and acceleration tasks across different sub-components of their vehicles; they have forgone strategic, long-term thinking and leadership in order to allow the technical masses to chaotically determine their, and our, fates. This is irresponsible, this is lazy, and this is short-sighted. Any victories claimed now and in the future are won with the caveat of the fragile, teetering foundation upon which they are built. People may have stupid moments; however, they are not stupid forever. And just like Cambridge Analytica getting its hands caught in the big-data cookie jar, all organizations engaging in ethically questionable data and automation use are essentially operating on borrowed time until their day of digital reckoning comes.

Ultimately, you can justify your efforts as an ethical decision, a cultural decision, or a financial decision; it doesn’t matter, so long as there are efforts. Technologies are improving and learning to better imitate and manipulate us with each step forward. The choice to ignore the implications of these technologies within your organization will have ethical, cultural, and financial blowback given enough time. I do not caution against the development of these technologies — that is nearly a foregone conclusion, and one I myself am involved in — however, I do caution against deploying them before a diverse group of perspectives has given fair discussion, consideration, and clarification to how they’ll be used. Google’s recent removal of “don’t be evil” from their code of conduct, while disappointing, is completely allowed — so long as, in its place, they introduce the transparency and clarity that allows you and me to decide which necessary evils we will tolerate. Likewise, you’re free to be as good or evil as you’d like with automation tech, but opaque deception and manipulation will catch up with you sooner or later.

 

SOURCE: https://medium.com/@ArtificialShane/automating-manipulation-51d2478af1b4

 

Written by


Shane Saunderson

Robot sympathizer. HRI PhD @UofTMIE. Writer @misc_mag, @DgtlCulturist, @chatbotsmag. Vocals & guitar @TheNobleRogues & @HailRobot. Plays well with others.

 
