Ridley Scott's advert that launched the Macintosh personal computer in 1984 sought to show:

...the fight for the control of computer technology as a struggle of the few against the many, says TBWA/Chiat/Day's Lee Clow. Apple wanted the Mac to symbolize the idea of empowerment, with the ad showcasing the Mac as a tool for combating conformity and asserting originality. What better way to do that than have a striking blonde athlete take a sledgehammer to the face of that ultimate symbol of conformity, Big Brother? [link]

One of the aims of this project is to connect individuals working in the design for behaviour change field with policy makers looking for ways to encourage behaviour change. When you think of the public policy implications of persuasive technology, do you think it will empower individuals, or open the door to Big Brother?

The persuasive technology discourse (despite its rather Orwellian name) positions itself on the side of good, in much the same general way as Google's "don't be evil". This emphasis was set at the first Persuasive conference in 2006:

"In the PERSUASIVE 2006 conference, a particular emphasis was put on those applications that serve a beneficial purpose for humans in terms of increasing health, comfort, and well-being, improving personal relationships, stimulating learning and education, improving environmental conservation, et cetera." [1]

However, the ethics of anything designed to change behaviour is still a bit of a minefield. Here are four points that seem to summarise the ethical issues raised by writers publishing within the persuasive technology discourse.

1. Awareness or Deception

B. J. Fogg's definition of persuasive technology precludes coercion or deception [2], meaning that persuasion must be voluntary. But doesn't avoiding deception require the user to have fairly sophisticated knowledge of how the techniques employed by a piece of persuasive technology work?

The possible problem is illustrated by the point of view of practitioners like Wai and Mortensen [3], who, writing from a commercial perspective, suggest that successful adoption of some devices by consumers lies in making them as boring as possible, and making efforts to “mask any behaviour change”.

Atkinson [4] picks up the point in a critical review of Fogg's book, writing that persuasive technology could only be ethical “if [users] are aware of the intention from the outset of their participation with the program [or product]”. Atkinson maintains that going further than this would be manipulation.

2. Who has the right?

The designer's mandate is usually to keep the desires of the user firmly at the centre of their decision making (user-centred design is the mot juste). As Johnson [5] writes in his review of Persuasive Technology, however, the techniques of persuasive technology shift the focus from the user's desires to those areas in which the user could buck up his or her ideas and change behaviour (paraphrased).

This is presumably not such a big deal in a free market, where any person is free to buy a particular product or not (provided the product is not deceptive, as per the previous point), but what happens when the state gets interested?

3. Which behaviours?

The third area of concern raised is around which behaviours are fair game for designers to encourage. Berdichevsky and Neuenschwander note that any persuasive attempt (regardless of whether it involves technology) is on “uneasy ethical ground” and propose a golden rule of persuasion:

“The creators of a persuasive technology should never seek to persuade anyone of something they themselves would not consent to be persuaded of.” [6]

Fallman [7] calls for a philosophy of technology within Human-Computer Interaction (HCI) to help decide which behaviours persuasive technology could ethically be used to encourage.

4. Infantilisation?

The final point (and to my mind an important one) is well made by Atkinson, who, conceding that persuasive technology might be ethical if the designer’s intent were altruistic, asks:

“But would not this sort of benevolent intent be better constructed and represented by the sound reasoning we know as advocacy or even education, where intent is exposed at the outset or revealed through simple inquiry about course content? ... Exposure to both is cognitively enriching and can result in attitude, belief and behavioural change, but both remain respectful of the individual’s own ability to synthesise the offerings provided by new information into a worldview that is meaningful for that individual.” [4]

That seems to me to be a whole blog posting in itself... Check back soon for more.

Big Brother or Empowering Individuals? How could ethical public policy be developed?

References:
[1] IJsselsteijn, W., de Kort, Y., Midden, C., Eggen, B. and van den Hoven, E. (2006), Preface. Lecture Notes in Computer Science, 3962, V.
[2] Fogg, B.J. (2003), Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann.
[3] Wai, C. and Mortensen, P. (2007), Persuasive Technologies Should Be Boring. Lecture Notes in Computer Science, 4744, 96.
[4] Atkinson, B.M.C. (2006), Captology: A Critical Review. Lecture Notes in Computer Science, 3962, 171.
[5] Johnson, R.R. (2004), Book Reviews: Persuasive Technology. Journal of Business and Technical Communication, April, 251–254.
[6] Berdichevsky, D. and Neuenschwander, E. (1999), Toward an ethics of persuasive technology. Communications of the ACM, 42, 51–58.
[7] Fallman, D. (2007), Persuade Into What? Why Human-Computer Interaction Needs a Philosophy of Technology. Lecture Notes in Computer Science, 4744, 295.
