Social values can conflict with the interests or preferences of individual people. Sustainability, for example, is widely viewed as crucial for our future. At the same time, it is recognized that technology alone cannot bring about a sustainable society; individual agents need to change their behavior as well. How do we motivate agents to realize our social values, even when these values conflict with their own private interests? This is where persuasive technology comes into the picture. It aims to persuade human agents to behave in socially valued ways by giving information, providing feedback, and taking over actions. The success of a persuasive technology that serves the public interest depends on the integration of sound technology, effective persuasive principles, and careful attention to ethical considerations.
This research program investigates the psychological mechanisms and the ethical dilemmas of persuasive technology in two ways. First, we conduct an in-depth empirical study of a concrete case where persuasive technology is under development: the energy management and safety of vehicles (cars, trucks). Vehicle simulators will be used to observe human agents using various forms of persuasive technology; the most important variable will be the amount of control transferred from the user to the technological system. Second, we analyze the general psychological mechanisms and ethical dilemmas at stake, resulting in design recommendations for developers of persuasive technology. Since persuasive technology is a highly generic technology, our results will be important for many other areas as well.