Augmented Reality: Ethics, Perception, Metaphysics
A Royal Society of Edinburgh Research Network Project
November 2021 – November 2023
Investigators: Professor Fiona Macpherson, Professor Ben Colburn, Dr Derek Brown, and Dr Neil McDonnell
Research Assistant: Calum Hodgson.
Funder: The Royal Society of Edinburgh
Final Policy Document: Policy and Practice Recommendations For Augmented and Mixed Reality
Description of the Research
Augmented Reality (AR) technology allows us to see the world around us and experience virtual objects overlaid on, or inserted into, our field of view. For example, you might experience virtual directions on the pavement guiding you to your destination, or a recreation of how your location in a city looked in the past or might look in the future. AR contrasts with Virtual Reality (VR) technology, which stops us from seeing the world and immerses us wholly in the virtual. AR technology is in its infancy but is beginning to be widely available. Today, you can see what a new sofa would look like in your sitting room using your phone and you can experience virtual art works in galleries. Future AR technology will likely involve people wearing special glasses, contact lenses or implants. We can identify possible uses of the technology based on its present and likely future capabilities.
Four philosophers based at the Centre for the Study of Perceptual Experience (CSPE) at the University of Glasgow (Macpherson, Brown, McDonnell and Colburn) who, between them, have expertise in philosophy of perception, metaphysics, and moral and political philosophy, will build a network of scholars, through a series of online workshops, to consider philosophical and practical questions about AR. They will also organise a knowledge exchange (KE) workshop on the nature and capabilities of AR with our business partners, Sublime (details below), and another KE workshop on AR policy and implementation with civil servants and lawyers. The project will culminate in an in-person conference at the CSPE that brings the participants together, showcasing our research, weaving the various strands of the research together, agreeing concrete guidance for regulating this new technology, and setting the agenda for future research, including applying for further grants.
Our work on AR will begin by examining the experiences had using AR and then, in the second stage, examining the nature of the virtual objects we seem to be aware of using AR. We will subsequently consider the ethical issues surrounding AR and, finally, compose guidelines for its ethical use. The rationale behind this approach is that only by fully understanding the technology, in the first and second stages, will we be able to assess the ethical implications of it in stage three, and then, at stage four, make policy recommendations.
More detail of each of these four stages now follows:
(1) AR raises important questions about perceptual experience. AR can provide extra information about our environment. For example, people’s names might appear beside them. But AR can also provide misleading experiences. How people look might be altered: they might be made to look younger, more beautiful, or to be of a certain race. Slums might be erased from view, replaced with beautiful landscaping. We normally think of perception as a trustworthy source of information. Would it remain so if AR becomes ubiquitous? Could perception be undermined as a source of knowledge by extensive AR usage? Moreover, what account can we give of the blended nature of normal (typically veridical) and AR (often non-veridical) elements of experience? Do extant accounts of illusion and hallucination explain this blend?
(2) We will consider the nature of the virtual objects we seem to see using AR. Do virtual objects exist? If so, are they immaterial, computational, emergent, functional? Can they possess value: aesthetic, financial, moral? Can they be bought, sold, or stolen (as courts in the Netherlands have recently considered)?
(3) Armed with an account of AR experience and virtual objects, we will turn to ethical questions. The introduction of the internet and other technologies led to inequalities that did not previously exist. When no one had the internet, people were not disadvantaged by not having access to it, whereas today, much of our society is organised assuming that people can access the web. Will the same become true of AR, since it can add greatly to our knowledge and capabilities?
AR can create misleading appearances; thus, it has the potential to give us false beliefs, prevent our access to true beliefs, and bias what we attend to. Consequently, people who have the power to determine what AR experience is like have the potential to manipulate others by altering their experiences. This concern is made all the more pressing given that future AR might involve wearing contact lenses or implants that cannot be (easily) removed, combined with the potential future necessity of using AR to participate in society, mentioned above. We intend to map the sorts of harms that can accrue in this fashion, such as affecting what people think exists, altering people's political views, skewing people's choices, affecting their autonomy, and so on.
A particular form of these worries arises from the fact that there are typically just a few real statues in city squares, and laws concerning trespass, property, and access are built around that—whilst in AR there could be as many statues as there are users. Should there be limits on who can place which virtual objects where? Can a city legitimately stop a political group placing virtual statues in their city squares? Can a gallery owner be harmed by an artist placing virtual objects in their gallery?
(4) Finally, in response to these ethical problems, our fourth topic will be what sort of moral and legal guidelines societies ought to have in place to counter them. Who should have the power to make AR content and place it at locations? What sort of content is harmless, and what sort is not? What information should people be given about how AR is manipulating their experience? For example, if I am told that the AR will make people more beautiful, ought I to be told if that affects the apparent race of people? Would such information allow me to understand how my world view might be affected? Can we always know what all the consequences of changing the appearance of things will be? What kind of consent is optimal before using AR? What kinds of motives for creating AR experience are acceptable and unacceptable?
Impact
The research will improve our understanding of AR, allowing those who create and consume AR to appreciate the nature of AR experiences and virtual objects. It will affect whether, or in what form, people seek copyright and legal claims to ownership, and how they understand the value of AR products. Moreover, our exploration of the ethical impact of AR and the guidelines that we develop to govern its use will allow consumers to understand the pitfalls of the technology and to seek full information on what AR experience adds, alters, and deletes, so that they understand how their worldview is being changed. Our guidelines, shaped by engagement with policymakers, disseminated on our website, and hopefully taken up by those policymakers, will help inform future regulation. Our work will also allow those making AR experiences to maximise the technology's benefits while avoiding its potential ethical pitfalls around manipulation and consent.
The final policy document that we have produced is: Policy and Practice Recommendations For Augmented and Mixed Reality.
- Final Policy Document: Policy and Practice Recommendations For Augmented and Mixed Reality
- Network Members and Bios
- Project Workshops and Conference
- If you are a researcher, policy-maker, lawyer or Scottish Civil Servant and would like to join the research network, please e-mail Calum Hodgson explaining your background and interest in the project. (The numbers we can accept on the project are limited.)
Project Events
We are hosting a series of online workshops on the topics outlined above. The workshops are numerous, but each will be short: lasting one afternoon. Each workshop will feature a number of speakers and will also allow time for group discussion. The last five workshops on policy development will feature fewer speakers and will devote time to virtual breakout rooms where participants will consider policies and guidance and work together to draft regulation.
We encourage all participants to attend as many of the workshops as they can, with a view to building up a sustained collective body of expertise to carry out future research and policy work.
Where speakers have agreed, we have recorded their talks and put them on this website, together with copies of written papers or handouts, which you can find below. We will also put drafts of the policies on the website as we develop them.
Mailing List
To receive news and updates about the workshops, join our mailing list by writing to Calum Hodgson.
Workshop 1: The Nature of Augmented Reality (27th January 2022)
Our opening workshop lays the groundwork for our network by focussing on the technology of Augmented Reality itself. Speakers from industry and academia will outline the state of the technology today, and its future trajectory. How the technology works – the processing, the optics, the software, and the interfaces – is highly salient to a range of philosophical and policy considerations. This workshop’s main aim is to establish a common understanding and language around the technology for the benefit of the project as a whole.
Speakers
Neil McDonnell (University of Glasgow)
Steve Holmes (University of Glasgow, formerly Meta)
Julie Williamson (University of Glasgow)
Mark McGill (University of Glasgow)
Abstracts and Talks
Neil McDonnell - VR and AR and Philosophy
VR and AR and Philosophy
Steve Holmes - Meta and AR
The purpose of this paper is to outline the technology that will likely be used in a compelling, glasses form-factor, high-end, socially-acceptable, all-day-wearable, consumer-based, augmented reality (AR) headset. General system specifications will be summarised and some potential use cases considered. The links between use cases and the associated technological challenges, along with potential solutions, will be discussed. Some ethical issues that result from the use cases (or cases of misuse) will be considered. These will be limited to a sub-set of those that are already being considered to some extent by the organisations developing this technology.
The definition of the type of AR under consideration is important. This type of AR system is not a head-up display from the avionics or automotive industry, nor is it a single line of text hovering just below the user’s eyeline, nor is it tablet/smart-phone based. This is the type of compelling AR system to which a consumer-focused tech giant would be happy to apply their logo. It will allow full interaction with the real world via vision, hearing, and touch whilst superimposing on the real world convincing virtual objects, sounds, and text.
The question of what a system of this type will be able to do for its stakeholders remains open: some new use cases will develop, and some will arise that we cannot even begin to appreciate. Some relatively obvious use cases have already surfaced, such as:
For consumers:
- SuperPowers
- Telepresence
- Emulation of existing tech
- Context-aware personal assistants
For suppliers both of the platform and in the more general sense:
- Focused advertising
- Direct selling
- Users’ data access
Each of these use cases already exists in some form on existing platforms. When they are applied in widespread AR, they create the potential for additional ethical issues. Some known examples of these include:
- Skin-tone rendering inequalities
- Identification and use of users’ characteristics unknown to the user
- Privacy violations.
Fortunately, AR of this type requires technology breakthroughs that do not yet exist. We may be 10 years away from this type of fully integrated AR system and so there is some time to consider the wider societal and ethical issues.
Julie Williamson - Social Dimensions of AR Use
Immersive technologies make it possible to develop new forms of entertainment, productivity, and social experiences. Advances in graphics, tracking systems, and input devices mean that VR research can now focus on the challenges of interaction in real world contexts. My research focuses on new applications of immersive technologies for social and public settings. Even though head-mounted displays like Oculus Quest technically work in settings like a bar or restaurant, an office, or during travel, the social acceptability of such interactions is still an open challenge. In this talk I will discuss my current research in public and social settings, how immersive content can be a social experience, and the open challenges for a future where we have “always on” immersive devices.
Mark McGill - Bystanders and Privacy
Workshop 2: AR and Philosophy of Perception 1 – Illusion and Hallucination in AR (31st March 2022)
The workshop focuses on the kinds of perceptual experiences facilitated by AR. AR experiences are inherently about a mixture of both virtual and physical objects and properties. This generates challenging questions around: the differences and similarities between experiences of the virtual and experiences of the physical; the ways in which these types of experiences might interact with one another; and the extent to which these types of experiences are accurate as opposed to illusory or hallucinatory. A key practical issue concerns the ways in which AR experiences might be used to manipulate perceivers, whether to positive or negative effect. This workshop will examine these issues with an aim to establishing a common understanding and language around AR perceptual experiences and the means by which they might manipulate perceivers.
Speakers
Dr Derek Brown (University of Glasgow)
Professor Katalin Farkas (Central European University)
Professor David Chalmers (New York University)
Abstracts, Talks and Q&As
Derek Brown: AR, Indirect Perception & Illusion
Indirect perception is perceiving something by virtue of perceiving another thing. AR perceptual scenes involve overlapping virtual and physical objects. By virtue of this, such scenes involve indirect perception. I survey different types of indirect perception found throughout our everyday perceptual lives. I then show various ways these can generate illusions. Following this I consider the AR case, where all forms of indirect perception can apply. The opportunities for perceptual illusions are vast. I consider several possible cases, though the limits of these possibilities are largely unknown at present. Of particular interest are broad scene distortions that adjust the overall “look” of physical things. On the positive side, these might be used to help combat some mood disorders, to undo perceptual biases, and to generally make our perceptual lives more interesting and useful. On the negative side, broad scene distortions might draw our attention away from important aspects of physical reality or hide them from us entirely. Given the scope and kinds of perceptual manipulations that will be possible with AR contexts, AR designers, researchers and policymakers should be attuned to these issues.
AR, Indirect Perception & Illusion
Katalin Farkas: Illusion and Hallucination in Virtual Reality
In my earlier work, I defended a "constructivist" view of perceptual experience. When our sensory experiences come in a certain crossmodally coherent and predictable order, they give rise to the idea of an object that causes clusters of these sensations - and if there is indeed an object that causes them, this is the object we perceive. For example, when we hear and see a train arriving, the familiar changes in the auditory and visual sensations as the train approaches cluster around a certain object: the train. It is tempting to think that on this view, in virtual and augmented reality, we perceive virtual objects. After all, we have sensations that form a certain structure, and there is something that is responsible for this structure: lines in the computer program. However, I will argue that data structures don’t always possess the kind of unity that is required from an object of perception, and hence they are not actually perceived. Much of virtual reality is an illusion or hallucination, even if we are not taken in by these deceptions.
Illusion and Hallucination in Virtual Reality
David Chalmers: Perception, Illusion, and Hallucination in Virtual and Augmented Reality
I will argue that perception in virtual reality and augmented reality need not involve illusion or hallucination. In sophisticated users of VR and AR, perception typically presents things as they are.
Perception, Illusion, and Hallucination in Virtual and Augmented Reality
Workshop 3: AR and Philosophy of Perception 2 – Epistemology (26th May 2022)
In this workshop, we turn our attention to epistemological issues in augmented reality. Epistemology is the study of knowledge. René Descartes famously raised the possibility that our experiences—and our beliefs based on them, such as that trees are green—might not reflect the way the world really is. Perhaps an evil demon has always been feeding us erroneous experiences of a world that doesn’t exist, and there are no green trees in the world. A modern version of this idea is that perhaps we might have always been in a completely virtual world generated by a computer. These scenarios raise the sceptical worry: can we trust our experience to give us knowledge about the world? The response of some philosophers to this problem of global scepticism has been to argue that if we were in such a situation, and had been for all of our lives, then we really wouldn’t be deceived, for our experiences—and our beliefs generated by them—would really be about the virtual world and not about the “real” world lying behind it. And those experiences and beliefs would be accurate or true of that virtual world – there would be green trees in it. That sort of response is put under pressure by thinking of scenarios in which we sometimes experience the real world as it actually is and sometimes experience elements of a virtual world—but in which we can’t tell which is which. This scenario will be realised when augmented reality technology is perfected. Such a scenario seems to raise new sceptical worries. Would we then have undermined our perceptual knowledge of the world by introducing augmented reality experiences?
This workshop examines these, and related, problems with talks from three of our esteemed colleagues in epistemology at the University of Glasgow. We start with Prof Jack Lyons giving us an overview of epistemological theories and the problems augmented reality raises for them. We move on to hear from Dr J. Adam Carter, who argues that the sceptical problems people have to date claimed augmented reality raises are not so serious. However, he raises a new, more worrying, sceptical problem that augmented reality engenders. Finally, we hear from Dr Emma Gordon, who considers reasons we may have for resisting pharmacological cognitive bio-enhancement – that is, taking drugs to gain new knowledge – and whether those reasons should also make us resist cognitive enhancement by means of augmented reality.
Speakers
Prof Jack Lyons (University of Glasgow)
Dr J. Adam Carter (University of Glasgow)
Dr Emma Gordon (University of Glasgow)
Abstracts, Talks and Q&As
Jack Lyons - 'Introduction to the Epistemology of Augmented Reality'
What are the epistemological implications of augmented reality? I examine the most salient options, discussing the likely epistemological impact of AR, given various kinds of AR and various background epistemological theories. Throughout, I will focus on perception and address questions of both knowledge and justification.
J. Adam Carter - 'Augmented Reality and Scepticism'
Some recent epistemologists have expressed the worry that augmented reality generates ‘sceptical problems’, and they do so by pointing to our claimed inability to differentiate between the virtual and the real in AR space. One strategy that would nip such arguments in the bud would be to contest received thinking about the difference between the virtual and the real (Chalmers 2017). But how else might we respond to such worries, without disputing this difference? This talk has two aims. First, I’ll show why extant arguments for ‘augmented scepticism’ overstate their case. The sense in which we are unable to discriminate the virtual and the real in VR space is not nearly as sceptically disastrous as some epistemologists have thought. Even so – and this is the second aim – I’ll suggest that there is a type of AR-based sceptical problem in the neighbourhood, one which turns on considerations to do with ontic occlusion rather than indistinguishability, and which is comparatively more serious but thus far overlooked.
Emma Gordon - 'Augmented Reality and Intellectual Enhancement'
A central topic of research in recent bioethics concerns cognitive bioenhancement – the use of the latest science and medicine aimed at improving cognitive functioning to make us ‘better than well’. Even though cognitive enhancement offers a more expedient route to acquiring epistemic goods such as true beliefs and knowledge, bioconservative philosophers maintain that we should forego enhancement on the basis of arguments that appeal to (among other things) (i) the alleged ‘cheapened’ value of our (enhanced) cognitive achievements; and (ii) the idea that relying on enhancements to gain knowledge undermines our intellectual authenticity. Such arguments have focused principally on pharmacologically mediated cognitive enhancements – such as ‘smart drugs’ – and have yet to be extended to technologically mediated intellectual enhancement via augmented reality. In this talk, I will outline the achievement- and authenticity-based arguments bioconservatives have raised against pharmacological cognitive enhancement and consider what versions of these arguments look like when applied to cognitive enhancement via AR. Ultimately, it will be shown that neither argument offers a compelling case to refrain from enhancement via AR.
Workshop 4: AR and Metaphysics 1 - Do Virtual Objects Exist? (28th July 2022)
This workshop will be guided by the question: do virtual objects exist? Answering this question would appear to be relevant to considerations around the value of virtual objects and virtual experiences, and so may well sit at the root of considerations about the legitimacy of virtual representations, their epistemic status, and the moral and political implications that come with AR experiences. Or perhaps those issues do not depend on the ontological question at all. Chalmers has recently argued that virtual objects are genuinely real and (therefore) genuinely valuable. Wildman and McDonnell have argued that they are not real – they are fictions – and yet can be valuable nevertheless. This workshop explores this theme.
Speakers
Dr Nathan Wildman (Tilburg University)
Dr Adrian Alsmith (King's College London)
Alex Fisher (Cambridge University)
Abstracts, Talks and Q&As
'Augmented Reality Fictionalism' Nathan Wildman
The aim of this talk is to articulate and defend a broadly fictionalist account of augmented reality, in line with virtual walt-fictionalism (McDonnell & Wildman 2019, 2020). To do so, I begin by first sketching the background Waltonian framework, before then demonstrating how this framework can explain away the apparent existence, value, and causal powers of virtual objects, as well as how it supports an account of virtual perception neatly mirroring the realist’s. I then apply this account to objects in augmented reality. In particular, after discussing some points that make AR objects seem especially problematic for fictionalists, I show how we can make sense of them. Finally, I conclude by highlighting some AR cases that, I contend, raise some difficulties for the AR realist.
Augmented Reality Fictionalism
'Imaginative perception and the bounds of the virtual' Adrian Alsmith
Virtual objects are not typically illusory, as they are sometimes described. Nor are virtual objects real, in the sense that Chalmers has recently argued. Rather, in general, virtual objects are fictional. This talk will explore the psychological implications of this view: virtual objects are experienced through imaginative perception, in virtue of the exploitable congruency of digital props with relevant imaginative projects. This offers a superior account of our implicit grasp of the selectively permeable boundary between virtual and non-virtual worlds, as a case of our more general imaginative capacities. By contrast, illusionist or realist accounts of this fact are either impoverished, overly intellectual, or merely terminological variants of the view defended.
Imaginative perception and the bounds of the virtual
'Imagination in Virtual and Augmented Reality' Alex Fisher
It is controversial whether the objects present in virtual and augmented reality are fictional or real. The answer is taken to be significant for various practical questions about these technologies. This tracks a more general sense that it matters for how we engage with a piece of media whether it is fictional or real, with some form of imagination key to our engagement with the former.
However, recent work in the philosophy and psychology of fiction has argued that it is not whether a piece of media is fictional or real that matters for our engagement with it; rather, what is more significant is whether it constitutes a situation that admits of action or not. We engage with actionable situations one way, and with un-actionable situations in another, employing the imagination only in the latter case. VR and AR present a problem for this dichotomy, since they constitute situations which are both actionable and which seem to involve exercise of the imagination. This talk examines in what sense, if any, imagination is involved in engagement with each of these technologies, and the consequences for the fictionalist/realist debate about virtual objects.
Workshop 5: AR and Metaphysics 2 - What Properties Can Virtual Objects Have? (29th September 2022)
This workshop will be guided by the question: what properties do virtual objects have? This is an extension of the metaphysical enquiry into the nature of virtual objects and further grounds the issues we will go on to consider in the latter part of the project: the legal, moral, and epistemic status of AR experiences constituted by virtual objects. It may matter, for example, whether virtual objects can have non-virtual locations and thus be considered to be on someone else’s land. Or whether virtual objects can be owned or traded in quite the same manner as non-virtual objects and thus be subject to extant property rights. This workshop explores the prior question of what sorts of properties virtual objects can have.
Speakers
Prof Kris McDaniel (University of Notre Dame)
Prof Robert Williams (University of Leeds)
Dr Alexandre Declos (Collège de France)
Abstracts, Talks and Q&As
Kris McDaniel - 'Are Virtual Objects Bad for Us?'
Robert Williams - 'How many skeletons?'
Suppose that some truths about a virtual world are generated dynamically and randomly. For example, the code contains an analogue of the role-playing game instruction: when trigger conditions are met, roll 2 six-sided dice to determine how many skeletons there are in a (real or virtual) closet. In this case, higher-order properties of virtual objects (how many of them there are) are settled only when trigger circumstances obtain, and the trigger circumstances might never obtain. I’ll use the case to think about the virtual truths that Chalmers’ digital realism can sustain (a worry being that digital realism will breed illusions about non-existent skeletons), comparing and contrasting this with the walt-fictionalist approach to virtual reality that McDonnell and Wildman advocate (where I use it to press questions about what we should count as the props that generate fictional truths).
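To make the set-up concrete, here is a minimal sketch of the kind of code the abstract describes (our illustration, not the speaker's; the class and method names are purely hypothetical): the number of skeletons in the closet is left unsettled until the trigger condition obtains, at which point a 2d6 roll fixes it.

```python
import random

class Closet:
    """A virtual closet whose skeleton count is generated dynamically and randomly."""

    def __init__(self):
        self.skeleton_count = None  # no fact of the matter yet

    def open(self):
        """Trigger condition: opening the closet settles how many skeletons it contains."""
        if self.skeleton_count is None:
            # Roll 2 six-sided dice, as in the role-playing game instruction.
            self.skeleton_count = random.randint(1, 6) + random.randint(1, 6)
        return self.skeleton_count

closet = Closet()
# If open() is never called, the trigger circumstances never obtain and the
# count is never settled; once it is called, the count is fixed thereafter.
print(closet.open())  # e.g. 7
```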
Alexandre Declos - 'Virtual properties and their troubles'
According to David Chalmers, the virtual objects we encounter in VR and AR possess virtual properties of a specific kind. My goal will be to defend this claim against several objections, and to investigate its specific consequences for AR.
I’ll start by presenting Chalmers’ theory of virtual properties. Then, I’ll discuss the following objections: (i) this theory can’t apply to all types of properties/relations; (ii) it leads to a proliferation of property types; (iii) it ascribes massive errors to VR/AR users; (iv) it faces a version of Jackson’s “many-property problem” against adverbialism. Each of these concerns, as I’ll show, can be dealt with.
After that, I’ll consider virtual properties in the specific case of AR. AR reveals that non-virtual objects can possess both non-virtual and virtual properties. It also shows that the same non-virtual object can have different and even incompatible properties across AR spaces. I’ll finally discuss some issues raised by properties in AR: the risk of an ‘AR solipsism’ induced by user-based property customization, and the unclear persistence conditions for non-virtual objects in AR environments.
Workshop 6: Knowledge Exchange – How can theory inform practice and vice versa? (27th October 2022)
This workshop will be a change from our usual format as it will be asking: how can theory inform practice? We are looking at understanding the routes by which our work on this project, and in academia more generally, can contribute to the positive development and adoption of AR. Contributors will speak for around 15 minutes on the topic from their own perspective. Each talk will be followed by a Q&A, which will be divided into questions from the audience for the speaker and questions to the audience from the speaker. This will foster the two-way dynamic that we’re aiming for. We asked the speakers to send across some of their questions in advance for members to take a look at; we have copied these in below.
Speakers:
Clare Hillis (Information Commissioner’s Office)
Rony Abovitz (Boston Consulting Group)
Martin McDonnell (Edify)
Schedule
2:00 - 2:10: Welcome by Neil McDonnell
2:10 - 2:25: Clare Hillis - ‘Immersive Technologies & Data Protection Implications'
2:25 - 2:55: Q&A
2:55 - 3:05: Break
3:05 - 3:20 - Rony Abovitz - ‘How Can the Academic World Guide & Inform the Biomedical Technology Factors Involved in XR Design?’
3:20 - 3:50: Q&A
3:50 - 4:00: Break
4:00 - 4:15: Martin McDonnell – 'Making a success of Academic <> Enterprise partnerships'
4:15 - 4:45: Q&A
4:45 - Closing remarks by Neil McDonnell.
Clare's discussion prompts:
- How should regulators identify / engage with academic expertise?
- Is there a belief UK regulators should only be seeking out UK academics, or is it appropriate to cast a wider international net?
- Does academia acknowledge differences between data protection / privacy and online safety regulation?
- In academia relating to matters of technology / emerging technology – is there a wide understanding of existing regulations? Could regulators offer better outreach on how laws apply?
Rony's discussion prompts:
- Why is XR like a medical device?
- Is XR a Brain-Computer Interface?
- What are some of the neurological and other (e.g. psychological) factors?
- What is meant by neuroplasticity, and does this play a role in XR design?
- Who should regulate this field and design space?
- Do we really understand the safety issues?
Martin's discussion prompts:
- Two Innovate UK-funded consortia partnerships as case studies
- The thorny subject of IP
- Suggestions for simplified / standardised agreements
Workshop 7: AR and Ethics 1 (26th January 2023)
For this workshop we will aim to start building up a toolkit for the ethical evaluation of AR technological developments. The papers at the workshop will speak to this ambition.
Speakers
Dr Carl Fox (University of Leeds)
Dr Sarah Lehman (NASA Langley Research Center)
Dr Chris Mills (University of Warwick)
Talks and Abstracts
Chris Mills: "Ethical Reasoning about Augmented Reality"
Augmented reality raises questions about how to reason ethically about technological issues and about which ethical worries generated by this new technology should concern us most. This talk addresses both methodological and normative matters raised by AR. Drawing from previous talks in the workshop series, it offers a range of distinctive ethical considerations to keep in mind when debating the use and abuse of AR.
Sarah Lehman: “Hidden in Plain Sight – Exploring Privacy Risks of Mobile Augmented Reality Applications”
Mobile augmented reality systems are becoming increasingly common and powerful, with applications in such domains as healthcare, manufacturing, education, and more. This rise in popularity is thanks in part to the functionalities offered by commercially available vision libraries such as ARCore, Vuforia, and Google’s ML Kit; however, these libraries also give rise to the possibility of a hidden operations threat, that is, the ability of a malicious or incompetent application developer to conduct additional vision operations behind the scenes of an otherwise honest AR application without alerting the end user. In this talk, we present the privacy risks associated with the hidden operations threat, and propose a framework for application development and runtime permissions targeted specifically at preventing the execution of hidden operations. We conclude with a discussion of open problems in the areas of software testing and privacy standards in mobile AR systems.
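By way of illustration, the sketch below (our own, not the framework presented in the talk; the class and function names are hypothetical) captures the basic idea behind runtime permissions for vision operations: only operations that the app has declared and the user has approved may run on camera frames, so an undeclared "hidden" operation is blocked rather than silently executed.

```python
class HiddenOperationError(Exception):
    """Raised when an app attempts an undeclared ('hidden') vision operation."""

class VisionPermissionGate:
    """Allows only vision operations both declared by the app and approved by the user."""

    def __init__(self, declared_ops, user_approved):
        self.allowed = set(declared_ops) & set(user_approved)

    def run(self, op_name, frame, operations):
        if op_name not in self.allowed:
            # The hidden operations threat: e.g. an app that declared only
            # plane detection quietly attempts face recognition on the frame.
            raise HiddenOperationError("undeclared vision operation: " + op_name)
        return operations[op_name](frame)

# Usage: the gate permits only the intersection of declared and approved operations.
gate = VisionPermissionGate(
    declared_ops={"plane_detection", "marker_tracking"},
    user_approved={"plane_detection"},
)
operations = {
    "plane_detection": lambda frame: "planes found",
    "face_recognition": lambda frame: "faces found",
}
print(gate.run("plane_detection", frame=None, operations=operations))
# gate.run("face_recognition", frame=None, operations=operations) would raise
# HiddenOperationError instead of running the undeclared operation.
```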
Carl Fox: "Augmented Reality, Public Statues, and Gratitude"
One obvious worry about augmented reality technology is that people could use it to effectively seal themselves off from sensory stimuli that they wish to avoid. In this paper, I’m interested in how this might facilitate a preference not to engage with a particular form of state speech – public statues and other works of commemorative public art. This strikes me as wrong, but why should that be so? And what distinguishes AR from how we currently use headphones, phones, and other devices that also reduce our availability to outside stimuli?
I will explore an idea borrowed from the literature on political obligation, that by receiving benefits we can sometimes incur debts of gratitude that oblige us to behave towards our benefactors in certain ways. Public spaces offer a range of benefits, and I will argue that, other things being equal, using and enjoying them can establish a weak obligation to the state for providing them. I will then argue that the appropriate way to discharge this obligation is to remain open to the state’s attempts to use art in those public spaces to communicate with us about its fundamental values. On my view, it is thus wrong to use AR to completely obscure public statues in one’s field of vision, and, further, the state is entitled to take at least some steps to regulate the development, distribution, and use of AR in order to discourage this practice.
Workshop 8: AR and Ethics 2 (March 23rd 2023)
In this workshop we will continue to build a toolkit for the ethical evaluation of AR technological developments. The speakers will identify some potential ethical problems of AR which will provide the foundation for further discussion of these issues in future policy-oriented workshops.
Our speakers are:
• Prof Jeffrey Dunn (DePauw University)
• Dr Maximilian Kiener (University of Oxford)
• Prof Christopher Bartel (Appalachian State University)
Talk Recordings and Abstracts
Jeffrey Dunn: 'Augmented Reality: Attention and Property'
Augmented reality (AR) technology raises a host of interesting philosophical issues. In this work in progress, I'll focus on two (unrelated) ethical issues related to AR that I find interesting.
The first concerns attention and its role in our epistemic lives. In many cases we lack control over what beliefs or knowledge we acquire; our belief-forming processes operate automatically and sub-personally. We do have control, however, over what we attend to in our environment, which can have a strong influence on the beliefs we form. Because AR can exert pressure on what we attend to, it can exert pressure on the beliefs we form and the knowledge that we have. I argue that this has implications related to moral responsibility as well as the creation and maintenance of morally troubling echo chambers.
The second ethical issue related to AR concerns property, and particularly whether property owners should be able to restrict the placement of AR objects on their property. I argue that the answer depends on the kind of AR system we are considering. I'll also consider what sorts of policies might be put in place with respect to AR objects and property, drawing on some of the legal literature concerning light projections onto buildings.
Christopher Bartel: 'Augmented Reality and Fictional Wrongs: Lessons from Video Games'
I’m interested in the possible ways that AR could be used to occlude the external world and make it appear to the user the way that the user wants. The question I am asking is, are there any ethical limits to the ways in which I could alter my view of the external world? This question is similar to an ongoing debate in video game studies: is it ever morally wrong to enjoy representations of violence in video games? The similarity can be seen in that a hypothetical amoralist could draw on the existing debate over video games to insist that there are no ethical limits to the way that my AR presents the world to me on the grounds that (1) 'it’s just pixels on a screen' and (2) 'it’s not harming anyone'. Both of these arguments are commonly offered in defense of violence in video games; however, the target of my talk will be (1). I will consider some moral arguments from both Kantian and virtue ethical perspectives about why users of AR should still have some moral concern about pixels on a screen.
Maximilian Kiener: 'Augmented Reality and Moral Responsibility'
Augmented reality promises to enhance our decision-making, boost our entertainment experiences, and fundamentally change the way we communicate, interact, and collaborate. Yet, there are also serious ethical risks, including privacy violations, exploitation, the exacerbation of inequality, and the manipulation of users and bystanders into harming either themselves or others. In this presentation, I address these challenges in relation to concerns about moral responsibility. To do so, I shall first introduce two distinctions, namely (i) the distinction between prospective and retrospective responsibility and (ii) the distinction between answerability and blameworthiness. I shall then argue that the use of AR can cause ‘responsibility gaps’, outline why responsibility must play an important role in any ethical framework of AR, and consider two models of responsibility that have so far received little attention, namely strict answerability (as opposed to strict liability) and shared responsibility (as opposed to collective responsibility).
Workshop 9: AR and Policy Development 1 (8th June 2023)
This is the first of three workshops focussing on ethics, policy and AR.
In these workshops we begin to consider the policy implications of the aspects of augmented reality explored so far in the series. Our aim is to continue identifying relevant ethical factors, both positive and negative; and also to take a turn towards considering the implications of those factors for policymakers, regulators, and developers. How should the law develop in these areas? How can we guide new technologies to harness AR's opportunities while guarding against its risks?
Speakers
Professor Sylvie Delacroix (University of Birmingham)
Karim Nader (University of Texas at Austin)
Professor Erick J. Ramirez (Santa Clara University)
Talk Recordings
Sylvie Delacroix - 'Learning through experimentation: AR and pre-reflective intelligence’
In this talk I will explore the extent to which AR can be structured in such a way as to facilitate what I call ‘normative experimentation’, and consequently foster (rather than hamper) our pre-reflective intelligence.
Erick J. Ramirez - 'XR Embodiment and Complicated Future of Selves'
In this talk I'll explore questions related to "XR embodiment." I'll begin by distinguishing physical embodiment from XR embodiment in order to make several claims. First, I'll argue that early evidence on XR embodiment suggests that people can, under some circumstances, identify as strongly with their XR bodies as with their physical bodies. Second, people in future social XR spaces will desire to modify their XR bodies to express themselves. Third, this desire poses difficult moral and regulatory questions about the representation of, and control over, XR bodies in virtual privately owned public spaces like VR Chat and Horizon. Lastly, I'll consider a sample of other issues XR bodies will force us to confront about relationships, identity tracking, and anti-bias training.
Karim Nader - 'Virtual fictional actions: What we can learn from policies on video game violence'
Virtual fictionalism is the view that virtual objects are what we imagine based on digital representations in augmented and virtual reality, and not the digital representations themselves. First, I give an account of virtual actions based on virtual fictionalism. If augmented and virtual reality are fictions, then our virtual actions are somehow fictional. But what does that mean? I argue that our virtual actions are the creation of fictional representations from outside of the fiction. I also show how this view can be used to properly and accurately morally evaluate our virtual actions. Second, I look into the history of policies on video game violence in the United States. I show that the moral panic and the subsequent policies around video game violence target depictions of violence and not acts of violence. In other words, policymakers care very little about what players can do in a video game, and care a lot more about what is represented. I think this is the right way to think about potential policies for augmented and virtual reality since our virtual actions are just the creation of fictional depictions.
Workshop 10: AR and Policy Development 2 – Consent and AR use (27th July 2023)
This is the second of three workshops focussing on ethics, policy and AR.
In these workshops we continue to consider the policy implications of the aspects of augmented reality explored so far in the series. Our aim is to continue identifying relevant ethical factors, both positive and negative; and also to take a turn towards considering the implications of those factors for policymakers, regulators, and developers. How should the law develop in these areas? How can we guide new technologies to harness AR's opportunities while guarding against its risks?
Speakers
Dr Joe Slater (University of Glasgow)
Dr Neil McBride (De Montfort University)
Talk Titles and Abstracts
Dr Joe Slater - Juries, Lookism, and AR
In a recent paper (2023), I argued that there are circumstances that would render juries unfit for purpose. I presented a set of criteria and argued that, if they were satisfied for a particular subset of criminal trials, juries should be abolished in those instances. I then argued that these criteria were satisfied for sexual assault offences.
In this talk, I consider whether the criminal justice system may, through jury bias, treat individuals unfairly on the basis of physical attractiveness, i.e., that juries may exhibit “lookist” bias. I consider some work in philosophy and social psychology, and suggest that it is at least plausible that the unattractive are unjustly penalised (and the more attractive are privileged).
Finally, I make a positive proposal utilising AR, suggesting that something similar to deepfake technology might be fruitfully utilised to 1) analyse whether such a bias manifests in trial conditions, and 2) (if such a bias is prevalent) to mitigate the pernicious effects of this bias.
Neil McBride - Establishing a Framework for Augmented Reality Ethics
Establishing a technology ethics requires moving beyond consideration of the raw technology to understanding its relations in a social and organisational context. As such, this requires characterising augmented reality technology as a component in an ecosystem. Firstly, I consider the characteristics of augmented reality and suggest that it is neither augmented nor real. Secondly, I outline current approaches to the ethics of augmented reality, which offer limitations in discussion rather than routes to a wider consideration of guidelines and policies. I then develop the idea of an augmented reality ecosystem, informed by some core Deleuzian concepts, and illustrate it through use cases. Finally, I discuss potential directions for augmented reality guidelines.
Establishing a Framework for Augmented Reality Ethics
Workshop 11: AR and Policy Development 3 - Drafting Guidelines (28th Sept 2023)
During this workshop the team of investigators began work on drafting a series of policy recommendations.
Recommendations were discussed, and reference to workshops and talks provided context, information, and stimulus for each discussion point.
Workshop 12: AR and Policy Development 4
This workshop was a continuation of the policy development work from the last workshop and an extended planning discussion. The outcome of this discussion was a draft policy proposal paper.
Workshop 13: Knowledge Exchange – AR guidelines in practice (26th October 2023)
This workshop will be a final discussion involving the investigator team. It will focus on policy recommendations, with a draft policy proposal document as the expected outcome.
Final In-Person Conference
9th and 10th November 2023
At this conference we will examine the nature of the experiences had using AR; the nature of the virtual objects we seem to be aware of using AR; the ethical issues surrounding AR; and, finally, examine and refine the guidelines for its ethical use that we have drawn up.
Registration to attend the in-person (only) conference sessions is free and open; however, advance registration for the conference is required. Please register here.
Full details of the conference and speakers:
Day 1
10.15 – 10.30 Conference registration, tea and coffee
10.30 – 11.00 Welcome, overview of topic and policy work to date
11.00 – 11.50 Sarah Lehman (NASA)
“The Oracle Problem and Its Impacts in Mixed Reality”
Abstract: Machine learning-based systems, and augmented and virtual reality in particular, are rife with ethical issues. These issues are well and good to discuss and think upon, but such discussions only solve part of the problem if they do not consider how any solutions to these ethical problems are to be implemented and verified in subsequent systems. In this talk, I will present the “Oracle Problem”, or the difficulty in machine learning-based systems of differentiating (without human effort) between correct and incorrect system behavior, and what implications this difficulty will have for augmented reality ethics discussions in the future.
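As a rough illustration of the problem (ours, not from the talk; the functions are placeholders), the sketch below contrasts a conventional test, which has an oracle because the expected output is known in advance, with a check on an ML-based perception component, where no such oracle is available without human-supplied ground truth.

```python
def detect_objects(image):
    """Placeholder standing in for an ML-based vision component."""
    return ["chair"]  # hypothetical output

def test_addition():
    # Conventional software has a test oracle: the expected output is known.
    assert 2 + 2 == 4

def test_detection():
    labels = detect_objects(image=None)
    # An ML perception component has no such oracle: without human-provided
    # ground truth there is no automatic way to assert the labels are correct.
    # Only weaker properties (well-formedness, consistency under small input
    # perturbations) can be checked automatically, and these fall short of
    # establishing correct behaviour.
    assert isinstance(labels, list)

test_addition()
test_detection()
```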
11.50 – 12.00 Break
12.00 – 12.50 Chris Mills (Warwick)
“Fictional Consent”
12.50 - 1.00 Break
1.00 – 1.50 Alex Fisher (Cambridge)
“Virtual Twofoldness”
Abstract: Our perception of paintings, photographs, and other depictive images is often said to be twofold. There are two aspects to our perception: we see images both as images and as what they represent. The former aspect is traditionally framed as involving conscious perception of the picture surface. I outline how augmented reality requires a more permissive notion of seeing something as an image. This alternative conception of twofoldness renders it a useful conceptual tool for capturing the phenomenology of augmented reality, as well as the experiences it often aims to elicit.
1.50 – 3.00 Lunch
3.00 – 3.50 Mellissa Terras (Edinburgh)
Interviewed by Neil McDonnell
3.50 – 4.00 Break
4.00 – 4.50 Nathan Wildman (Tilburg)
“Problems with Passthrough?”
Abstract: Recent pieces of XR technology – in particular Apple’s Vision Pro and Meta’s Quest 3 – make extensive use of passthrough, a function that lets users see “through” their display into the surrounding physical environment. In this way, passthrough provides an apparently easy route for integrating AR/VR entities with the real world: your visual display can include the coffee shop that you’re physically sitting in plus the virtual robot you are (virtually) working on. Yet while the underlying technical aspects are well understood and the relevant hard/software is steadily improving, little has been said about how passthrough bears on ongoing debates about the ontology of XR. The aim of this paper is to make the first steps towards doing so. Specifically, I begin by developing the passthrough patchwork problem, an objection to virtual irrealism centering on the divergent attitudes irrealism requires we take to the entities apparently occupying our visual field. After sketching out the problem, I proceed to show how a modified version of it is equally challenging for virtual realism. I then demonstrate (i) how the realist can reply to the problem using Chalmers’ phenomenology of the virtual (2017, 2022), and (ii) how the irrealist can mimic their solutions (suitably modified, of course). The upshot is that passthrough isn’t a problem for irrealism – either there’s a solution to be had or, if there isn’t, then it is a bad bug for realism too. Finally, I conclude by stressing that, if we are going to take the ontology debates seriously, then we need to carefully consider what impact the positions have in larger discussions about XR.
Day 2
9.45 – 10.00 Tea and coffee
10.00 – 10.50 Katalin Farkas (Central European University)
“Virtual Objects and Irreplaceable Value”
Abstract: Some objects have irreplaceable value: for example, works of art, personal memorabilia, and in perhaps a different sense, people. This talk asks if virtual objects can have irreplaceable value. (Hint: the answer is no.)
10.50 – 11.00 Break
11.00 – 11.50 Chris Yiu (Director of Public Policy at Meta)
“Next steps toward AR glasses and the metaverse”
Abstract: The companies developing immersive technologies are also thinking deeply about how they will be deployed and used. This session will provide an industry perspective on products available today that bring us closer to AR glasses and the metaverse - with insights on topics like user controls, user education and new experiences.
11.50 – 12.00 Break
12.00 – 12.50 Chris Bartel (Appalachian State)
“Offensive Representations of People in Augmented Reality”
Abstract: Other people appear in my visual field in AR and, using the same real-time tracking technology that powers deepfakes, AR should be able to alter the appearance of others in the near future. This of course leads to the possibility that others will be represented in offensive ways. Different sorts of moral problems arise depending on whether the offensive representation is generated by the user or generated by the subject. My aim is to try to refine the problem somewhat, suggest some reasons why offensive representations of others in AR morally matter, and argue in favor of what I will call a subject-generated view of AR.
12.50 – 2.00 Lunch
2.00 – 2.50 Julie Williamson (Glasgow)
“Being Social in XR”
Abstract: Immersive technology (XR) designed for always-on interaction will forever change the way we communicate, collaborate, and connect with one another. XR technologies are rapidly advancing in terms of form factor and capabilities, but there is a present-future gap between how we use XR now and the rich social and interpersonal contexts where XR will be used in the future. Virtual environments enable new forms of interaction and connection but fall short of the meaningful experiences we expect during face-to-face interactions. In the real world, we “give off” a variety of social signals, such as position, posture, gesture, facial expression, and more that are crucial to interpersonal interaction and expression. Capturing or translating these signals into virtual environments may improve interaction, but we can also design beyond reality using fundamental human experiences as a starting point. My current work focuses on techniques for establishing stable interpersonal realities when interacting across the XR spectrum.
2.50 – 3.00 Break
3.00 – 3.50 Roundtable discussion of draft policy document by all
3.50 – 4.00 Break
4.00 – 5.00 Roundtable discussion of draft policy document by all
For more information, please contact calum.hodgson@glasgow.ac.uk