David Chalmers is University Professor of Philosophy and Director of the Center for Mind, Brain, and Consciousness at New York University. His new book, Reality+ (Allen Lane, 2022), dives into the philosophy of virtual and augmented realities. The book argues that digital worlds have the potential to be every bit as real as physical ones, and that the physical world may already be digital in more ways than we might think.
In keeping with the themes of the book, we conducted the interview with Prof. Chalmers in virtual reality, a first for all of us. We met up in Altspace, Microsoft’s social VR app. Embodied in cartoonish avatars, amidst miniaturized animals and physically impossible cubic sculptures, we discussed the philosophical consequences and possibilities of emerging virtual and augmented reality technologies.
Timofei Gerber: Thank you for joining us in this weird little space today. We’d like to dive directly into the general thrust of your book, which puts forward the idea of Virtual Reality as an expansion of reality. This sets your position apart from the technophobic and conservative aura that dominates our cultural media and imagination, those tired dystopian visions about what the digital future has in store for us. But when we were creating and preparing the virtual space here, we saw the Windows logo looming behind us, which neither of us could remove. There are already many corporations and forces trying to get a hold of VR and monopolize it. In this context, can your book be read not only as an optimistic take on virtual reality, but also as a cautionary tale, saying that we should take Virtual Reality seriously because it’s coming and it’s here to stay?
David Chalmers: I tend to lean towards optimism, seeing the glass as half full rather than half empty. And I say in the book that I’m really focusing on the possibilities for VR in principle, rather than the actualities.
I’m trying to establish that there’s at least the possibility in VR for the full gamut of the human experience, from the awful to the wonderful. I’m not saying that it’s a utopia, but also not saying that it’s a dystopia. To the extent that many people are skeptical that VR could even be meaningful, it’s an optimistic perspective. But I’m not making any claims about where we’re going to fall in that range.
The book isn’t especially cautionary, but there are some warnings when I discuss building a society in a virtual world. The issue of corporations becomes very much front and center. Here we are in Altspace, owned by Microsoft. Horizon has arrived; it’s going to be one of the leading social virtual spaces, owned by Facebook, or “Meta”. Some of the leading spaces in VR right now are still owned by smaller companies, for example VRChat and Rec Room. But others are owned by large corporations.
If you think as I do, that virtual worlds are real environments on a par with the physical world, this becomes a volatile situation where almost our entire world is owned by corporations. There are obvious dangers and downsides there. The creators of a virtual world are like the gods of that world – creators who are all-powerful and all-knowing about that world. I don’t know exactly what relationship Microsoft has to this world we are in right now. But, at least in principle, they have access to everything that is going on with us. They could also manipulate us as they choose with advertising and all sorts of biases.
So I very much hope that as virtual worlds develop – we’re still in the very early days – things will look closer to the spirit of the internet as a whole. No one owns the internet. We can use the word “Metaverse”, and I hope that no one will own the metaverse. There’ll be different corners of it, run by different corporations, but there’ll also be the possibility for a lot of it to be either user-owned or user-controlled.
Gerber: What might speak for a democratic impulse in virtual realities is that we imagine them with universal access, where everyone who enters them does so, in creating their avatar, with a ‘clean slate’. Contrast that, for example, with the current space race of the private sector, which exclusively concerns members of the oligarchy.
Chalmers: Some people think it could be a real force for equality and democracy, if everybody has equal access to virtual reality. Now, this is, of course, a very optimistic vision. It’s usually the case that access to the best technology is not equal, and that the rich are going to have access to better virtual worlds than the poor. But on the other hand, there is this fact that virtual objects are by their nature abundant. A building in virtual reality can be duplicated just like that, with no effort. In principle, that kind of abundance ought to create distributive justice in the virtual world, or at least make it easier to achieve. Even planets, in principle, could be abundant.
At the same time we all know the forces of inequality are very strong, and there’s sure to be artificial scarcity in virtual worlds. For example, NFTs, non-fungible tokens, take things that look like they should be abundant to everybody and make them scarce. So, here we have artificial scarcity. But there could also be certain forms of virtual abundance alongside virtual scarcity that could make a difference in enabling equality.
John C. Brady: Let’s take a closer look at an example of such a virtual space, The Matrix, which always makes a ripping example. In your book you argue that virtual worlds, virtual objects and events and so on, are not less real for being virtual. They may be less real for other reasons, but being virtual isn’t one of them. So, when we consider the virtual world of The Matrix, which is nearly indistinguishable from physical reality, it is every bit as real in terms of our beliefs, values, and the possibilities of living a meaningful life. Neo’s choices, then, to leave it or to stay within it, would seem to become equivalent. After all, he can probably do more good with his abilities by engaging in philanthropic work within the Matrix. But is that all there is to it? Isn’t there something we grasp as viewers that pushes the scales towards leaving?
Chalmers: There’s a lot going on in The Matrix. I don’t want to say the two are exactly on a par. There are some reasons why one would want to leave the virtual world. For a start, to get access to more of the universe. I grew up in Australia. At some point I wanted to see the world; I didn’t want to be stuck in Australia. That was limited; there was more out there to see. Furthermore, maybe going more deeply: in The Matrix, you could argue the virtual world is a form of deception, a form of imprisonment. We’ve taken some people who were living in a non-virtual world and put them in the Matrix. They didn’t go there voluntarily. They lost access to their memories and their histories. So, this is not an ordinary case of people entering a virtual world voluntarily. There’s probably a massive injustice there. I think at the end of the third movie, as far as I can remember, they’re given the choice to stay in the Matrix, now in full knowledge of the situation. That’s closer to the ideal situation as I see it; it gives people the choice.
Another way to run the thought experiment is to think about the Matrix as a non-virtual world where many of the same things are going on, but instead it’s a distant planet. They take the human species to a different planet, set them up there in a nearly completely controlled environment, wipe their memories, and isolate them in the same way. So, people are unaware of how they got there, of their history and their place in the world. That seems just as bad. To me that would suggest that what’s bad about the Matrix isn’t the fact of its being virtual; it’s the part about the domination, and the lack of awareness. The red pill, for me, represents awareness. And I think knowledge and understanding are very important. Those are really valuable things in leading a meaningful life. So, insofar as the red pill represents coming to actually know and understand a situation better, then I’m all for the red pill.
Brady: That being said, even going through everything you argue, there’s still this visceral reaction to the idea of simulations. This is most marked when you are discussing the simulation hypothesis, the idea that it’s highly probable that we are in fact already living in a simulation. This idea seems to kick us in the gut. But at certain points in the book you make use of an argument that is, roughly: if we imagine that God is the one who created the universe, and thus is running a form of divine simulation, and based the cats and trees and things in our world on cats and trees and things in heaven, then that visceral reaction vanishes. This is just the old, familiar, theological picture. Now, your position seems to be that if we judge one case in one way, and the other in another, then we should rather bring these two intuitions into alignment, and judge them the same. But I’m curious: why do we have this change of intuitions at all? Why are we seemingly more comfortable, even as atheists, in a God-created universe than in one created by simulators?
Chalmers: Yeah, it varies between people. It’s not a psychological fact that everybody has the same reaction. One thing I suspect is that it is at least partly generational. People my age, in their 50s, are much more likely to have the visceral reaction that a digital world is not fully real, whereas people who are now in their teens, who are used to hanging out in the digital world, might have a very different reaction. I haven’t done the study, but it would be interesting to do some experimental philosophy here.
In the book I try to argue that the simulation hypothesis is identical to this thing called the it-from-bit creation hypothesis, where God made the world by creating bits. You’re probably right that many people’s initial reaction is that there’s a difference here, whereas I try to argue that they would be the same. What’s really going on there? I suppose that it’s caused by this idea of reality that I call the “Garden of Eden” model, where we are inhabiting an absolute three-dimensional space with solid objects out there in space, and that’s what it takes for something to be genuinely real. A mere simulation won’t give you that: everything is evanescent, insubstantial, nothing is really solid, things aren’t really out there in space.
But that’s what I want to argue against, in particular by looking at modern science, quantum mechanics, relativity, where the physical world starts to look less and less like the Garden of Eden. Physical reality as we understand it now is very far from that model. Instead it’s evanescent in a way similar to digital reality. Once you modify your picture of physical reality to the picture in modern science, then I think these two hypotheses start to look much more on a par.
Brady: That seems to be one of the threads of your main argument throughout the book: we can’t identify what properties a simulated reality has that would make it less real than a physical reality, and besides, we don’t know what’s going on with physical reality anyway. But, the best metaphysical or scientific picture is making physical reality seem like these virtual realities.
Chalmers: I quote Žižek of all people on this somewhere towards the end of the book. The upshot of virtual reality is actually the virtualization of the original physical reality. Something like that is going on in this case. Thinking about virtual reality, you could say we’re upgrading virtual reality to make it more like our original picture of physical reality. Another view is, physical reality has been downgraded so it’s more like virtual reality. Maybe they’ll end up converging in the middle.
Gerber: One thing that your book invites us to consider is the virtual space itself. But it also invites us to reconsider the technologies that give access to virtual spaces, like the VR glasses that you two are using now. Such technologies, including semi- or proto-virtual ones like the smartphone, are, as you argue, literally extending our minds. Can we talk about how your extended mind theory can help us reconsider our current relationship to the technologies that we use? If I consider my smartphone to be an extension of my body, or of my mind, should I change my relation to it?
Chalmers: So this gets back to issues about the mind, and the philosophy of mind, which are actually the areas that I started out in. Philosophy is at least in part about issues concerning the mind, in part about issues concerning the world, and in part about the interaction between the mind and the world. Likewise when it comes to technology. I think some technologies really bear on our vision of the world and some bear on our vision of the mind. So, virtual worlds as I think about them bring us artificial worlds, and augmented reality brings us augmented physical worlds. Artificial Intelligence, on the other hand, brings us the possibility of artificial minds. And some technologies, smartphones, the internet, bring us the possibility of augmenting our minds, extending our minds. And I think that’s what we’ve seen happen big time in the last decade or two with mobile technology, especially the smartphone, which has become such a central part of all of our lives. It’s extended our cognitive capacities: extended our memory, navigation abilities, planning, so many things now. Mobile technology has become integrated with our minds, and so has the internet in a broader sense.
But augmented reality technology, I think, is especially interesting. For example, glasses that project virtual objects into the physical world. These potentially do both. They augment the world: you can be inhabiting a somewhat richer, partly digital environment thanks to augmented reality technology. But they also augment the mind. Maybe you’re navigating and the equivalent of Google Maps is sending you instructions on how to get there from here; maybe it’s automatically going to recognize people in the physical world, displaying their names above their heads. And this could be seen as a kind of extended mind, extending our recognitional capacities, our navigational capacities, and so on. I think virtual and augmented reality actually have the capacity to do both here, to augment the world and to augment the mind.
Gerber: But do you think that as a consequence of this augmentation we should reconsider, right here and right now, the status of these objects? For example, everything on my smartphone is copyright-protected by somebody else. So, could it be said that part of my mind belongs to somebody else, that my autonomy is affected? Taking a short-term view into the future, should we change our relationship to these objects and say that it’s more problematic that they belong to corporations, or to other people who can just change them at will?
Brady: I feel here on this point the visceral pain of a forced Windows update, where you have been temporarily and forcefully shut out from some of your mind’s extended capabilities.
Chalmers: Sometimes I wonder: given the extended mind, a whole lot of what realizes the basis of our minds is on servers in the cloud, or in our smartphones. Sometimes I’ve wondered how much of my mind is owned by Apple, how much is owned by Google? Facebook too, though Apple and Google are probably competing for number one here. And insofar as that is genuinely part of my mind, then to what extent does that compromise my autonomy? It’s always important to realize that even in the extended mind picture, there is an internal mind, which, for now, is largely a biological mind, which supports our consciousness at the core of all this, and where we still have volition and autonomy. But, of course, we will be really affected by this technology that interacts with our internal minds and helps to constitute our beliefs and desires and plans.
The forced update example is interesting, but on the other hand, all of us get tired at night and fall asleep. That’s just something our brains do to us, right? We know we can try to struggle against it, but we’re going to fall asleep. We get old, we lose memories, and so on. So, our autonomy even with the original biological mind is somewhat limited. But there is a distinctive concern tied to the fact that, say, a corporation and other agents might have this very direct control over you. It’s also happened for a long time with the socially extended mind. For a long time the people we interact with, our families, friends, and so on, have often served as extensions of our minds. There’s already a significant lack of autonomy for me when my partner isn’t around and I may lose my extended memories of whatever it is in the home. But I do think all of this speaks very strongly in favor of somehow regulating technology to ensure privacy and anti-manipulation of a fairly strong sort. Different corporations have so far taken different attitudes towards this. I think Apple is a bit better than some, but regulations are always going to stay a long way behind the technology.
Brady: It is interesting the way that the metaphysics and the discussion in law intersect here. I feel a lot of the legal frameworks are playing catch-up with the technologies. For example, you hack into someone’s computer: is that breaking and entering? You duplicate a file off their hard drive: is that theft? Legally, there has been a lot of progress here since the 90s, but I feel the extended mind raises even more questions. For example, your right to jailbreak your phone. Is that violating Apple’s, or Samsung’s, or whomever’s intellectual property? Well, if this machine is literally an extension of my mind, my interest in mental autonomy and their intellectual property rights might conflict. It seems the metaphysics can enter the courtroom here.
Chalmers: Yeah, maybe jailbreaking your smartphone could be like undergoing some kind of psychotherapy, and you need to do this to discover the hidden nature of yourself and the wellspring of your drives and motivations which are being driven by some technology on your smartphone that you want to have access to and change. Likewise, say someone steals my phone, is that theft, or is that something more like assault? As these things become closer and closer extensions of our mind, the moral weight in depriving someone of these technologies becomes all the more serious. Yeah, I do think as far as I can tell, law, regulation, and policies are a long way behind on all of this.
Brady: You just mentioned before that we’re extending our minds out through these technologies, but that there’s a core mind, this core sense of consciousness and so on. And I feel throughout the book, you’ve got Descartes’ discovery in your pocket: We can be wrong when things aren’t the way they seem, but they definitely at least seem the way they seem…
[At this point David Chalmers descends into the table]
Chalmers: Alright, just getting a chair here. Apologies if I just did something very weird.
[He visibly shrinks to half the size]
Brady: You just sort of sunk into the floor.
Chalmers: How is this now? Okay, but now I’m really low. Right? Let me reset.
[He returns to his regular proportions]
You were saying?
Brady: So, there’s that good old Cartesian point about consciousness, that we can’t be wrong about it. But now thinking about extending our minds, if we get really science fiction and start thinking about implanting chips, and extending our visual systems through cameras and sensors, and things like that, is that Cartesian self-consciousness, no matter how crazy we go, always going to be solid? In other words, is there something ‘indestructible’ about that sense of inner-consciousness, the ‘core mind’ as you called it?
Chalmers: I tend to be Cartesian in my thinking about consciousness. That is to say that we know about some of our conscious experiences with something approaching certainty. It’s not to say we know about our entire conscious experience; we don’t attend to much of it at once. And it’s not to say we know about the mind beyond our conscious experience, or beyond the conscious experience that we’re attending to. So what Cartesian introspection gives us is something really very limited. But I’d say even in the case of, say, mind-extension technology and brain-computer interfaces, at least in the core cases, you’re still conscious and you still know about your consciousness.
There’s a lot going on beyond your consciousness that you can’t have this Cartesian certainty about, including, for that matter, what happened a moment ago. Maybe your mind was just uploaded into a new thing, and a program was just activated; before that, nothing was going on. Perhaps the memories I have of this conversation, lasting for the last hour or so, were just loaded into me. Well, that’s not something I can have Cartesian certainty of. I can’t have Cartesian certainty of what was going on in the past. And I am inclined to think that, at least for beings like us, we are conscious, we know we’re conscious, that’s a datum, and that will continue to be a datum for many different extensions of us. Now, once you get to the technologies where our whole brains are transformed, our consciousness is transformed, there eventually could be beings which have forms of intelligence without consciousness. Maybe there’ll be beings not in a position to be certain about their own consciousness. But in the short term of virtual reality and AI and mind-enhancing and world-enhancing technology, as far as I can tell, I don’t really think that should threaten our Cartesian certainty that we’re conscious.
Brady: So you leave that as a possibility then, that there could be beings who would be uncertain about their own consciousness?
Chalmers: I’m committed to at least the metaphysical possibility of there being zombies: beings that behave like us that are not conscious at all. Insofar as they believe or think anything, it looks like they could believe they’re conscious, even though they’re not conscious. Certainly, there are some beings who get this wrong. There are various pathologies like blindness denial, where people think they have visual experience, even though they don’t. So, I’m not committed to the impossibility of beings who get this stuff wrong. But, nonetheless, I think that doesn’t mean that beings like us are not getting it right. We’re in a position to be certain about some aspects of our consciousness now. If someone asks, “How do you know you’re not a zombie?” It’s like, “Well, I’m having all these conscious experiences”. And that’s something I have Cartesian certainty of.
Brady: In the book you discuss uploading your mind into the digital world, concluding that in theory it’s possible, but that the best way to do it would be to do it slowly, piece by piece by piece. Now, obviously it would be a good idea to do it slowly for the reason that you could back out at any minute, “Ouch, that hurts. Stop. No, I’ve changed my mind.” But do you think there’s something about consciousness itself that it needs to be done in this gradual manner? Or is it just a safety thing?
Chalmers: I guess it’s a safety thing. We can contrast gradual uploading, which happens a piece at a time, with instant uploading, which happens all at once. And let’s take the case where we destroy the original and create a duplicate, a bit like the Star Trek teleporter. I think it’s likely that with the teleportation option, the being at the other end of teleportation will be conscious. To me the more difficult question is: Will it be me? Will this just kill me, with a duplicate of me created in my place? That’s the one I would worry about the most upon stepping into this teleporter, which does, after all, destroy the original. I’m philosophically uncertain about that, and given that, I would prefer to undergo gradual uploading: a piece of the brain at a time, preferably staying conscious throughout, because then I’ll have a continuous conscious state, which is probably the best guarantee of being the same person. It’s not that I think that continuity of consciousness is required to be the same person: after all, I fall asleep every night and wake up in the morning, probably without continuous consciousness. It’s just that having it provides the best guarantee. So that’s a matter of safety, I guess.
Brady: But then, in theory, once it’s been done a number of times, say, 100 times, and every time it’s fine, we can then go, “Well, let’s speed up the process a bit. We did it over a week; let’s do it over two days. Let’s do it over an afternoon. Actually, these processes can be done in an hour, a minute, a second.” Is there a point where it changes, and now suddenly we can no longer be sure that it was successful?
Chalmers: Yeah, sociologically speaking, we’re going to become impatient. Once you do it in a week-long process, then later you’re going to do it in a five-minute process. And after a little while the people who insist on doing it gradually are going to be regarded as some kind of Luddites. On the other hand, I think it’s tricky, because the sociology probably depends on how the technology is used. If it’s very common to create duplicates of yourself while the original is still around, people will get very used to treating these duplicates as entirely different beings, who probably aren’t going to have the same rights, and the same status. And then instant uploading, from a certain perspective, is just going to look like another way of creating duplicates. So, if people view uploading as continuous with that non-destructive duplicate-creation process, they’ll be against it. Maybe that’s the contrast class: people are going to say that in order to be the original, you’ve got to work from the original in some very direct sense. But there are two different kinds of questions here. One is sociological, what we will come to accept; the other is philosophical, what is actually going to be the correct view here?
Some people take this view of Star Trek: basically, people have come to accept the teleporter. Nonetheless, every time you step into it, you die. So there have in fact been thousands of Captain Kirks and Picards and whoever.
Brady: That’s a fun way to watch Star Trek. They fully know that they die and a copy is created, but they just don’t care. Their sense of identity and personhood is just so remote from our own that their attitude is “It doesn’t matter.” They see their selves as shared, multi-consciousness continuities.
Chalmers: Well, on something like a Buddhist view of identity, there may be no deep facts about self. I mean, no deep selves that continue through time anyway. It’s almost as if every moment of our lives we’re stepping into that teleporter. At every moment, our old self is destroyed and a new self is born. Then it’s not such a big deal that it happens in the teleporter. Every day is a new dawn. Some people just think every morning when you wake up, that’s a new consciousness and a new person. I had someone emailing me about this recently. “I’m really worried that every time I go to sleep I die. Now I want to stay awake all night. Can you reassure me?”
Brady: Wow. That’s a radical thought.
Gerber: If we stay with these far-future science fiction ideas – later in the book, you invite us to imagine these digital modes of existence. We were wondering what kind of qualitative changes they will bring along with the quantitative ones. For example, we will expand our powers; we’ll be able to easily move cars and trees with our hands. But there also ought to be qualitative changes to our very outlooks on life. Think, for example, of Kant, who said that it’s because we live on a sphere, in a limited space, that ethical and political questions arise. So if VR is this infinite plane of new world creations, this will radically change our political problems. Taken to its extreme, this could lead to virtual solipsism, where everyone occupies their own little virtual world. Is this a possible danger? Will virtual life change our relations fundamentally, or will we just do the same but with a bit more abundance?
Chalmers: It’s so hard to know, and anything here is just massive speculation. But yeah, it seems extremely likely that as virtual worlds become more and more central, this is going to transform the way we inhabit the world in all kinds of ways. It gets more radical when AI, artificial intelligence, comes in, and our minds get transformed too. Brain-computer interfaces. That is totally unpredictable. While it’s still got something like a human brain at the center of it, I suspect the changes will be more limited. For example, we are fundamentally social creatures, it seems. Maybe this is somewhat environment-dependent, but it does look like people have a strong, deep-seated, and innate desire to interact with other people, though not everybody and not universally. I’d be very surprised if that went away in a VR future. There may be some people who become increasingly solitary. There are those who complain that in the smartphone era people are less social than they used to be, but these people are still socializing digitally, for the most part. I’d be amazed if socialization went away.
But it may well be that there come to be many different worlds, and many different communities, and things become more fragmented, in some respects, than they were before, with individuals and their groups or families or societies. I mean, I find it very difficult to speculate about what form this will take. But I think there are very good reasons to think that it’s likely to be transformative in many respects. You could think about this not as a utopia, but as a meta-utopia, where there’ll be many different virtual worlds set up on many different principles and guidelines, and people will be able to choose what kind of virtual world they enter. We don’t have a single model for utopia. But the hope is that people will be able to choose their own utopia, or at least choose their own world. And maybe that will give them a certain kind of autonomy about the world they’re in. But then there is the question of the social relationship between all those worlds: whether they are all going to be part of one giant society, or whether things are going to be massively fragmented. Those are all pretty much open questions for me.
Gerber: That’s true, it’s impossible to say which route we will take. But doesn’t the reactivation of social contract theory, remodeled after some sort of terms of agreement, which you imply in the book, give us one potential avenue?
Chalmers: I think I clicked on something coming here into Altspace, the Altspace “social contract”. As always with social contracts, I didn’t read it. But in the long run, maybe people will actually explicitly get the choice about which virtual world to inhabit, which doesn’t usually happen with social contracts in the physical world. We just get born into these worlds. So maybe once we have some choice about which virtual worlds we hang out in, there’ll be the possibility of more explicit social contracts, which could be interesting. I suspect it’s all still going to be governed by implicit social contracts in the background, but with some things made explicit. I don’t know what social contract theorists say about the upsides and downsides of making your social contract explicit and allowing people to actually choose. I suspect there are both good and bad consequences there that have already been explored in depth by generations of social contract theorists, but I don’t know the literature there.
Gerber: This seems to be an inherent part of the book, not just to consider the argument of simulation theory, for example, but also to kindle our imagination about the virtual worlds to come.
Chalmers: Yeah, philosophers think about not just the actual, but the possible, and there’s a long tradition of philosophical thought experiments. So imagining the world is like this or that. And of course VR technology is going to give us the possibility to make some of those possible worlds actual, at least at a digital level, to realize some of the possible worlds that have previously only seemed like something you can imagine. The technology may not be there yet, but as philosophers we can also think about the virtual world of the future as a thought experiment. I mean, philosophers don’t have a universal lockdown on thought experiments, other people can do it too. There’s the economist Robin Hanson’s nice book, The Age of Em, all about future forms of uploaded minds. That’s basically an extended thought experiment. But I do think, yeah, thought experiments in general at this point are useful in thinking about the form that different virtual worlds are likely to take. At the same time, I fully expect that the actual development of virtual worlds will go places that we just cannot yet imagine. Science fiction authors have explored this in more depth than either philosophers or economists. They’re the ones who have actually come up with extremely detailed and brilliant visions of virtual worlds. And that’s also a guide here. But, at the same time, I suspect that where actuality takes us will probably be directions we haven’t yet imagined.
Gerber: Could you see a task for philosophy in that sense, to open up our imagination to these views, these different directions?
Chalmers: I don’t think there’s any single task for philosophy. But philosophy can do many things. In philosophy, we just want to think deeply about everything, and come to understand it. We need to do this with technology. We get the technology, but we don’t want to just get the technology and use it, we want to reflect on it, and reflect on what it means and on how it’s altering our lives. You want to reflect on where it’s going, on what’s possible. Again, philosophers aren’t unique in this. Everybody should be doing this. But philosophy does have a certain set of tools for reflecting on some of these issues, which is, I think, very useful, especially when it comes to virtual worlds and virtual realities. Because, after all, philosophers have been thinking about reality for a very long time. And likewise about connections to the mind. I didn’t start thinking about the topics covered in the book to think about technology. I came into it to think about mind and reality and the relationship between them. And then I just gradually found that all of this had so much bearing on practical issues about technology. That kind of dragged me there, whether it was the extended mind or virtual reality or augmented reality. In a way, the abstract philosophical thinking for me came first. And then it turns out some of this abstract philosophical thinking is extremely relevant to practical questions about reality. What drove me was trying to think about and understand reality, but then ideas developed in one context turn out to have application much more broadly. It can be amazing and wonderful when philosophy takes you in new directions.