This year marks the 30th anniversary of the publication of The Social Construction of Technological Systems, a book that signaled a seminal shift in Science and Technology Studies and introduced a new way to study technology, one that gave “equal weight to technical, social, economic, and political questions.”
The contributors to the book argued strongly that the “social groups that constitute the social environment play a critical role in defining and solving the problems that arise during the development of an artefact.” Extending this further, they suggested that if social groups define not only the problems of technological development but also the solutions, then there is an inherent flexibility in the design of technologies. There is no “one best way” to design the technologies with which we live and interact.
The SCOT (social construction of technology) approach highlights several important points for discussion. First, closing the door on technological determinism, it shows how the design of technologies is intimately connected to the priorities of the societies in which they are created. In this way, technologies not only mirror, but also facilitate, the futures desired by these societies. Second, it affords the physical/technological environment a role in ethics discourse. Indeed, if technologies are shaped by society, then the very environment in which we live reflects the decisions of our society – or parts of it. Third, the interactions that we as individuals have with the diverse range of technologies we use daily cannot be thought of as morally neutral. We both shape and are shaped by the technologies we use.
In recent years, awareness of the moral implications of technology design has come to inform discussions of responsible research and innovation (RRI). Increasingly, the RRI community scrutinizes design decisions and systems construction to make explicit the ethical problems arising from design. Most commonly, RRI focuses on ameliorating the potential harms arising from the implementation of technologies. One example might be scrutinizing the algorithms underpinning search engines to ensure that children are not accidentally exposed to damaging content online.
Addressing the unintended harms arising from technological design decisions, however, has drawn attention away from the corollary argument: if harms can be ameliorated through responsible technological design, could good also be fostered? Is there a way of selecting design decisions that maximize societal good through ethical individual interactions? While interesting on many levels, such questions should be of particular importance to scholars working in the tradition of virtue ethics. In contrast to other systems of ethics, virtue ethics focuses on individual behavior within a specific context of time and place. Thus, while other systems advocate universal rules to distinguish good from bad, virtue ethics highlights the need to understand what would constitute appropriate behavior by an individual in a specific instance.
In this way, virtue ethics presents a highly contextual and responsive picture of good behavior, one in which individuals respond to their environment. Nonetheless, the literature on virtue ethics has typically said little about the environment, other than as a backdrop to ethical action. Integrating the work of SCOT and RRI, however, casts this in a different light. It becomes apparent that we need to ask: how can virtuous behavior be understood in a socially constructed context? By this I mean not only how a virtuous individual can navigate the social-ness of living with others, but also how one should act when the physical environment itself has social – and thus ethical – content.
If one follows this train of thought, two key issues must be confronted. First, discussions of virtuous behavior need to engage with the design decisions inherent in the technologies increasingly used in the modern world. Understanding virtuous action in settings where behavior itself is insidiously guided by technological design is a mountain that virtue ethics still needs to climb. Second, the remit of RRI might be extended to consider whether virtuous behavior – and the cultivation of virtues – might actually be fostered through the intentional design of technologies. Could design decisions be embedded within technologies that not only ameliorate harms but also foster ethical behavior?
Such ideas may sound inappropriately interfering and “Big Brother-like”, and it must be admitted that such concerns are not without merit. The idea that behaviors can be shaped through deliberate design decisions should give us pause. However, is it not better to be aware of these issues, and to claim the “moral high ground”, than to continue to design in ignorance?