The Machine and the Ghost


Post by Ammianus »

The upcoming deluge of subneural advertising/propaganda:

http://www.tnr.com/article/books-and-ar ... e?page=0,0
Now we can download numerous apps to our smartphones to track every step we take and every calorie we consume over the course of a day. Eventually, the technology will be inside of us. In Steven Levy’s book In the Plex, Google founder Larry Page remarks, “It will be included in people’s brains ... Eventually you will have the implant, where if you think about a fact it will just tell you the answer.” The much-trumpeted release of the wearable Google Glass was merely the out-of-body beta test of this future technology.

Now that we can feasibly embed electronics in nearly any object, from cars to clothing to furniture to appliances to wristbands, and connect them via wireless signals to the World Wide Web, we have created an Internet of Things. In this world, our daily interactions with everyday objects will leave a data trail in the same way that our online activities already do; you become the person who spends three hours a day on Facebook and whose toaster knows that you like your bagel lightly browned. With the Internet of Things, we are always and often unwittingly connected to the Web, which brings clear benefits of efficiency and personalization. But we are also granting to our technologies new powers to persuade or compel us to behave in certain ways...

TECHNOLOGY IN PRACTICE is nearly always ahead of technology in theory, which is why our cultural reference points for discussing it come from science fiction rather than philosophy. We know Blade Runner and not Albert Borgmann, HAL and not Heidegger. We could even view the city of Songdo through the lens of “The Life and Times of Multivac,” Isaac Asimov’s story, from 1975, about a supercomputer that steps in to run society smoothly after human missteps lead to disarray. But our tendency to look to fictional futurist extremes (and to reassure ourselves that we have not yet overstepped our bounds) has also fueled a stubbornly persistent fallacy: the idea that technology is neutral. Our iPhones and Facebook pages are not the problem, this reasoning goes; the problem is how we choose to use them. This is a flattering reassurance in an age as wired as our own. In this view, we remain persistently and comfortably autonomous, free to set aside our technologies and indulge in a “digital Sabbath” whenever we choose.

Yet this is no longer the case. As Peter-Paul Verbeek, a Dutch philosopher, argues, it is long past time for us to ask some difficult questions about our relationship to our machines. Technologies might not have minds or consciousness, Verbeek argues, but they are far from neutral. They “help to shape our existence and the moral decisions we take, which undeniably gives them a moral dimension.” How should we assess the moral dimensions of these material things? At a time when ever more of our daily activities are mediated by technology, how do we assign responsibility for our actions? Is behavior that is steered by technology still moral action?

Drawing on technology theorists such as Don Ihde and Bruno Latour, as well as the work of Michel Foucault, Verbeek proposes a “postphenomenological approach” that recognizes that our moral actions and the decisions we make “have become a joint affair between humans and technologies.” In this affair, human beings no longer hold the autonomous upper hand when it comes to moral agency; rather, Verbeek argues, we should replace that notion with one that recognizes “technologically mediated intentions.”

Such intentions are clear in the use of an older technology—fetal ultrasound—that has transformed our understanding and experience of the unborn child. As Verbeek notes, a technology originally devised to enhance our medical knowledge has generated unintentional but serious moral quandaries. “This technology is not simply a functional means to make visible an unborn child in the womb,” Verbeek argues. “It actively helps to shape the way the unborn child is humanly experienced.” That experience is now one of greater transparency and greater abstraction. We see the child as something distinct from its mother; the womb becomes an “environment.”

The technology fundamentally alters not only what we can see, but also the quality and the quantity of the choices we are asked to make. At a time when the vast majority of women choose to terminate Down syndrome pregnancies, for example, even the decision not to have an ultrasound to screen for possible birth defects stakes out a moral position, one that brings with it the implication that one is risking future harm to one’s child. In making these decisions, Verbeek argues, the mother is not the only autonomous actor; the technology itself “plays an active role in raising moral questions and setting the framework for answering them.”

That our machines might exert control over our moral decision-making is an unpopular idea. We like to think of ourselves as exercising autonomy over the things we create and the actions we take. Verbeek finds in our desire to cling to this notion a touching fidelity to the principles of the Enlightenment. Although he shares those principles, he no longer finds them sufficient grounds for moral thinking in an era whose technologies are as ubiquitous and powerful as our own. Verbeek wants us to “move the source of morality one place further along” from the Enlightenment’s emphasis on human reason to a system that grants equal weight to our technologies—the things, like ultrasound, that we increasingly rely on to understand ourselves and our world. Many technologists already embrace this idea: as Alex Pentland argues in Honest Signals, his book about sociometers, “We bear little resemblance to the idealized, rational beings imagined by Enlightenment philosophers. The idea that our conscious, individual thinking is the key determining factor of our behavior may come to be seen as foolish a vanity as our earlier idea that we were the center of the universe.”

Persuasive technologies take many forms, from cell phone apps to sophisticated pedometers, and use familiar strategies, such as simulations and positive reinforcement, to achieve their goals. The old behaviorist notion of “operant conditioning”—using positive reinforcement to change behavior—is evident in the micro-persuasion techniques used on websites such as Amazon and eBay, where reviewer ratings and consumer rankings encourage a sense of increased trustworthiness among users and where you are greeted by name on every page—all in an effort to persuade you to keep coming back.

Persuasive technology takes less subtle forms as well, such as the Banana-Rama slot machine that features two characters, a monkey and an orangutan, who serve as a kind of virtual audience for the gambler, celebrating when she wins and offering surly and impatient expressions when she stops feeding coins into the machine. Persuasive technologies can also act as efficient surveillance devices. “HyGenius,” a device marketed to restaurants and hospitals (and in use in many McDonald’s restaurants and the MGM Grand casinos), is placed in bathrooms so that employers can monitor (via embedded sensors in employee badges) whether or not their workers are properly washing their hands.

Arguably, our technological persuaders are better than people because they are devilishly persistent, can manage large volumes of information, and have long memories. One writer for Boston magazine who used her smartphone to help her lose weight and meet other life goals noted that, after downloading a handful of apps, “my phone became a trainer, life coach, and confidant. It now knows what I eat, how I sleep, how much I spend, how much I weigh, and how many calories I burn (or don’t) at the gym each day.”

Spend any time delving into the literature on persuasive technology, however, and you will find yourself encountering the language of seduction as often as persuasion. These technologies actively try to provoke an emotional or behavioral response from us, which can be a satisfying experience for people in need of motivation and encouragement. But technologies whose stated goal is mass interpersonal persuasion also raise important questions about privacy and autonomy. You might like a digital pedometer to track your daily walk, but how would you feel if your cell phone came equipped with a sensor that could tell when you were becoming sexually aroused and send a helpful text message reminding you to wear a condom?

TO UNDERSTAND such challenges, Verbeek would have us look to the design and the engineering of technological objects themselves. Like Lawrence Lessig and his argument about the crucial role the architecture of code played in the creation of the Internet, Verbeek believes that all users of technology need to be more engaged in controlling how these technologies are designed and used. Human beings, he declares, should “coshape their technologically mediated subjectivity by styling the impacts of technological mediations.”

It is revealing that Verbeek lapses into the passive voice when the discussion moves in this direction. “Arrangements should be developed, therefore, to democratize technology development,” he writes. Well, yes. But by whom, and how? What would a democratically developed technology look like? If our experiences with privacy violations committed by companies such as Google or Facebook are any guide, individual users have very little power to “style” the impact that many technologies have on them. You cannot “coshape” an environment designed by others to prevent you from influencing it. As it becomes increasingly difficult to refuse certain technologies, you cannot even decide to opt out of these environments. How much “coshaping” can a food-service worker do when his employer issues him a badge that tracks the number of minutes he spends washing his hands after using the bathroom?

Verbeek also urges designers of these technologies to think through the intended and unintended consequences that are likely to arise from the use of their creations. “Deliberate reflection on the possible mediating roles of a technology-in-design should, therefore, be part of the moral responsibility of designers,” he argues. But the technologists who make these objects have little motivation to build in ethical safeguards or to relinquish control to users in the way that Verbeek encourages, and so far they have demonstrated little concern for the unintended consequences of their creations. Indeed, in an early book on the subject of persuasive technologies, B.J. Fogg admitted that the science of persuasive technology does not include “unintended outcomes; it focuses on the attitudes and behavior changes intended by the designers of interactive technology products.” As one participant in a Persuasive Technology Conference in Palo Alto in 2007 rightly observed, the field “has a tradition of being morally ignorant of its consequences.”

This exposes the central tension between ethicists like Verbeek and the technologists whom he wishes to influence. Verbeek wants technologists to design things with greater transparency—things that will, effectively, tell us what they intend to do before they do it. Such warnings, like the “advertisement” labels that appear across the top of the page in print magazines denoting content that is not objective, would presumably allow us to make informed judgments about our use of technologies. The problem is that the goal of the technologists is to make these technological persuaders less transparent, not more so. Indeed, these technologies are admired for their invisibility.

There is quite a bit more in this TNR article, but I've highlighted the most important themes, as you can see. Worth a read from start to finish. I'm surprised that techie-oriented magazines or blogs such as Wired or Ars Technica haven't yet broached this issue. But considering the implications raised by this piece, perhaps I should not be surprised. In any case, I'm reminded of a muttering from a certain infamous mathematician:

"Um, I'll tell you the problem with the scientific power that you're using here, it didn't require any discipline to attain it. You read what others had done and you took the next step. You didn't earn the knowledge for yourselves, so you don't take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, and slapped it on a plastic lunchbox, and now .................. you're selling it, you wanna sell it. well...Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."

Just like an interstellar train hurtling towards a black hole at 0.9c