Technomoral Conversations: Who is Responsible for Responsible AI?
March 27, 2024, Playfair Library, University of Edinburgh
The Centre for Technomoral Futures at the University of Edinburgh hosted a public discussion on the theme, “Who is Responsible for Responsible AI?” Here are a few reflections on that conversation.
Is the Default Irresponsible AI?
Steph Wright of the Scottish AI Alliance reflected usefully on the strangeness of the phrases used to describe how AI research should be done today: AI for good, data for good, and, of course, responsible AI. Her question: Is the norm AI for bad? Is the default irresponsible AI? And if so, how did we allow this move-fast-break-stuff culture?
It is time we viewed the development of AI as the development of interactions.
This sharpens many knives. First, consider that move-fast-break-stuff is an ingrained philosophy of modern start-up culture. In a fiercely competitive global tech scene, this is the way serial entrepreneurs learn which ideas will not work before they have expended too much of their resources, and of themselves. For those ideas that do get traction, it is the way they find out how best they will work, and which features, services, and business operations they should focus on and optimise for. And few spaces are more heated with competition than AI right now. But there are consequences when what is being built and tested is not merely a tool but an interaction. Especially an interaction based on a believable semblance of humanness and personhood. The implications for the humans involved in these interactions are of immense importance. It is time we viewed the development of AI as the development of interactions. Seen this way, the “stuff” in move-fast-break-stuff is people: their conceptions of self and other, the wiring of their relationality as persons, and the social facts that circumscribe their personal, professional, private, and public lives.
And yet, having worked more than a decade in the tech industry, I am loath to pronounce a sudden halt. As an academic, what I am inclined to do is suggest a solution in the shape of a problem (this is my take on the term problematise!). The problem is: what costs does a slower, more deliberate approach to AI development carry, and what benefits does it offer? If this were a research programme, one subquestion would focus on the business and commercial side, the other on the human side, approached from whatever discipline happened to engage the question.
This problem provides resources for a cost-benefit analysis. But that presumes a utilitarian approach, which may not always be the best one. Is it a wholly escapable one? That is another problem.
Making Sense of the Politics and Lived Realities
Jack Stilgoe proposed making sense of the political environments within which AI is necessarily developed and deployed. He advised that rather than foisting obscure paradigms on the public, AI makers and implementers should seek to redefine AI in public terms. This is, of course, one place where questions of moral and intellectual values, traditions, and cultures forcefully enter the fray. Does this option offer any credible path towards a single understanding of what AI is or is meant for? In public terms, ethical AI means, for example, inclusive AI—AI that is free of inimical, disadvantageous characterisations and impacts on the basis of, say, sexuality. But this, of course, is not the case in many parts of the world, where the opposite principle is just as cherished on the basis of the protection of cultural values.
Another panellist, Rachel Coldicutt of Careful Industries, mentioned the need for “a digital civil society observatory that puts lived experience on a par with things like the AI Safety Institute and the [Alan] Turing [Institute],” one that shows we are “respecting and honouring” that lived experience. It should be remembered that lived experience is possessed by oppressive majorities as well as by oppressed minorities. Good as it might sound, designing AI in response to lived reality—participatory design or citizen-led AI, as it is sometimes called—is not a straightforward matter. Are there instances where lived reality should be considered precisely in order to determine how it can be transcended or avoided in AI development?
It’s Not All There
Some panellists suggested human rights as a universal platform on which some of these cultural differences might be resolved. Steph Wright said, “if we just used human rights as the foundation, then everything builds from that.” Rachel Coldicutt responded, “There’s this idea that we always need new principles, new statements, actually no.” Steph Wright concluded, “It’s all there, it’s universal.” And yet, it’s not all there. Like any legal document, the Universal Declaration of Human Rights has not proven immune to myriad culturally specific interpretations. Some of those interpretations have been used in support of anti-LGBT legislation. A Ghanaian member of Parliament appealed to “the spirit and letter” of the document to argue for legislation criminalising LGBT relations and advocacy in Ghana. The UDHR allows, of course, for legal limitations on rights intended to meet “the just requirements of morality, public order and the general welfare in a democratic society.” The flexibility of interpretation this allows in a culturally plural world, chaotic as it is with secular, postsecular, postmodern, postcolonial, imperial, and other trajectories and legacies of culture, has not produced a “universal” understanding of human rights.
Is there a way of preventing AI ethics from accreting around nationalist, ethnic, religious gravities?
The truth is that participatory practices in technology innovation happen within an ethical framework that often goes unmentioned in discussions about technology. It went unmentioned in this particular forum, although it lay implicit in science and technology professor Dr Jack Stilgoe’s reminder that more important than the how of participatory design is the why of it. The why of it is inseparable from the who-we-are. In Scotland, as in Europe and the United States, discussions about participatory design can transpire within ethical frameworks that, by and large, and despite persisting religious claims, are legally grounded. And they can take place in an inclusive fashion because that is the ethics that has been legally settled. Again, not so everywhere. Given this diversity, is there a way of preventing AI ethics from accreting around nationalist, ethnic, religious gravities? Should this be a goal? Or should we be seeking universal understandings of responsibility? Furthermore, is there a need for a philosophical distinction between the “responsible” development of AI and the development of “good” AI?
These are only cursory reflections. If they are not fully formed, they at least point towards some of the knottier paths along which we must inevitably walk in our quest for safe postdigital futures.