Agamben, Giorgio. 2009. What Is an Apparatus, and Other Essays. Stanford University Press.
Baum, Seth D. 2020. “Medium-Term Artificial Intelligence and Society.” Information 11 (6): 290. https://doi.org/10.3390/info11060290.
Bogost, Ian. 2012. Alien Phenomenology or What It’s Like to Be a Thing. University of Minnesota Press.
Bostrom, Nick. 2003. “Are You Living in a Computer Simulation?” Philosophical Quarterly 53 (211): 243–55.
———. 2017. Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Bostrom, Nick, and Eliezer Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, 316–34. Cambridge University Press. https://www.fhi.ox.ac.uk/publications/bostrom-n-yudkowsky-e-2014-the-ethics-of-artificial-intelligence-the-cambridge-handbook-of-artificial-intelligence-316-334/.
Bown, Alfie. 2018. The Playstation Dreamworld. Polity Press.
Braidotti, Rosi. 2018. “A Theoretical Framework for the Critical Posthumanities.” Theory, Culture & Society 36 (6): 31–61. https://doi.org/10.1177/0263276418771486.
Bratton, Benjamin H. 2015. The Stack: On Software and Sovereignty. The MIT Press. http://thestack.org/.
Bulut, Ergin. 2014. “Creativity and Its Discontents: A Case Study of Precarious Playbour in the Video Game Industry.” PhD thesis, University of Illinois at Urbana-Champaign. https://www.ideals.illinois.edu/bitstream/handle/2142/50379/Ergin_Bulut.pdf?sequence=1&isAllowed=y.
Chalmers, David. 2016. “The Virtual and the Real.” Disputatio IX (46): 309–52. https://doi.org/10.1515/disp-2017-0009.
Cybulski, Alex Dean. 2014. “Enclosures at Play: Surveillance in the Code and Culture of Videogames.” Surveillance & Society 12 (3): 427–32. http://www.surveillance-and-society.org.
Fisher, Mark. 2009. Capitalist Realism: Is There No Alternative? Zero Books.
Galič, M., T. Timan, and B. Koops. 2017. “Bentham, Deleuze and Beyond: An Overview of Surveillance Theories.” Philosophy & Technology 30: 9–37. https://doi.org/10.1007/s13347-016-0219-1.
Galloway, Alexander R. 2006. Gaming: Essays on Algorithmic Culture. University of Minnesota Press. http://art.yale.edu/file_columns/0000/1536/galloway_ar_-_gaming_-_essays_on_algorithmic_culture.pdf.
Hanson, Robin. 1994. “If Uploads Come First.” Extropy 6 (2). http://mason.gmu.edu/~rhanson/uploads.html.
———. 2001. “How to Live in a Simulation.” Journal of Evolution and Technology 7 (1).
———. 2014. “What Will It Be Like to Be an Emulation?” In Intelligence Unbound: The Future of Uploaded and Machine Minds. Wiley.
Hui, Yuk. 2016. On the Existence of Digital Objects. University of Minnesota Press.
———. 2017. “On the Unhappy Consciousness of Neoreactionaries.” E-Flux 81. https://www.e-flux.com/journal/81/125815/on-the-unhappy-consciousness-of-neoreactionaries/.
———. 2019. Recursivity and Contingency. Rowman & Littlefield.
Juul, Jesper. 2019. “Virtual Reality: Fictional All the Way Down (and That’s OK).” Disputatio. https://doi.org/10.2478/disp-2019-0010.
Kunzelman, Cameron. 2014. “The Nonhuman Lives of Videogames.” Thesis, Georgia State University. https://scholarworks.gsu.edu/communication_theses/110.
Mitchell, Ryan Martinez. 2020. “Chinese Receptions of Carl Schmitt Since 1929.” Penn State Journal of Law & International Affairs 8 (1): 181–263. https://ssrn.com/abstract=3400946.
Negarestani, Reza. 2018. Intelligence and Spirit. Urbanomic.
Parizot, Cedric, and Douglas Edric Stanley. 2016. “Research, Art and Videogames: Ethnography of an Extra-Disciplinary Exploration.” antiAtlas Journal 1. https://www.antiatlas-journal.net/01-research-art-and-video-games/.
Sandberg, Anders, and Nick Bostrom. 2008. Whole Brain Emulation: A Roadmap. Future of Humanity Institute.
Wark, McKenzie. 2005. “Securing Security.” Kritikos: An International and Interdisciplinary Journal of Postmodern Cultural Sound, Text and Image 2. https://intertheory.org/security.htm.
———. 2007. Gamer Theory. Harvard University Press. http://www.futureofthebook.org/gamertheory2.0/index.html.
Wolfendale, Peter. n.d. “The Reformatting of Homo Sapiens.” https://www.academia.edu/26697963/The_Reformatting_of_Homo_Sapiens.
Yudkowsky, Eliezer. 2007. “Levels of Organization in General Intelligence.” In Artificial General Intelligence, 389–501. Cognitive Technologies. https://doi.org/10.1007/978-3-540-68677-4_12.
———. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, 308–45. Oxford University Press. http://intelligence.org/files/AIPosNegFactor.pdf.
Aspects of the FHI’s research that loosely come under the term ‘transhumanism’, or the more definitionally problematic ‘posthumanism’, attract considerable criticism, for instance in this review of a book by FHI Fellow Toby Ord, or in this talk and paper • by theorist Rosi Braidotti, the latter interesting for its broad survey of currents in critical posthumanist thought. Some of the criticism revolves around an (alleged) arrogation of a singular, privileged ‘we’ or ‘human’ position, without acknowledging the gender and race differences that are central to post-1968 post-colonial, social and cultural theory. There is also a potential perception of quasi-eugenicism, which might stem from Hanson’s prediction that the first emulation(s) will be based on real humans specifically chosen for certain criteria, such as intelligence, health or docility (•, pp. 297-299). A slightly more balanced, philosophical view on the issues surrounding the evolution of AGI from our current state as mostly-biological humans can be found in •, pp. 95-98, as well as in David Roden’s book. Lastly, an elegant summary of the varieties of ‘humanisms’ can be found in •, pp. 245-247.↩
WBE is one of the technological ways of achieving brain uploads, and the terms are used synonymously here.↩
• (pp. 35-43, 75-94) and • summarise the neuro-computational issues involved in copying a human brain, and why that might be a more viable intermediate step to creating an AGI. The search for AGI, albeit for contested reasons, appears to be stuck, perhaps missing some philosophical or algorithmic insight, or simply lacking the computing power. The theory is that WBE is a more achievable intermediate goal because it does not depend on clearing any major algorithmic hurdle; once sufficient agents are available that function at near-human or super-human levels across a broad range of tasks, these agents can try to work out a successful approach to ‘true’ AGI. •, pp. 291-293 and 297-313, also sketches out the dynamics of the path laid out above, control issues, and ethical-political considerations.↩
More exciting, or alarming, is the idea of a ‘Seed AI’, defined as ‘an AI designed for self-understanding, self-modification, and recursive self-improvement’ (•, pp. 96-110). By virtue of algorithmic improvements that approach or exceed human capabilities, combined with access to, and the ability to process, far more data at much higher speed, a Seed AI can recursively improve its own performance and explore architectures for other AIs.↩
In this essay, I distinguish what is called AI today, more accurately termed machine learning (ML), from a true AGI. On some views, ML is an impressive act of data-fitting or multi-dimensional regression that relies on massive datasets and compute, and operates within extremely narrow problem domains, rather than embodying any fundamentally new algorithmic innovation or basic ‘understanding’ of the domain (features that might be important for a true AGI).↩
See Alfie Bown, citing Jules Michelet in the quote above (•, pp. 56-57).↩
• also considers games as an apparatus, albeit from a rather different angle.↩
The term gamification seems to be applied very promiscuously, and to get games theorists’ backs up, judging from this article by Ian Bogost. Curiously, the overuse of gamification-as-buzzword mirrors how ‘AI’ sprouted across the business landscape in the last few years.↩
A stack is a fairly standard computer science term: it refers to a last-in-first-out data structure, and, by extension, to a set of layered technologies, most familiarly the TCP/IP networking protocol stack that underlies the Internet.↩
The intention here is not to overstate the centrality of game engines as the defining techno-capitalistic apparatus par excellence; after all, the WeChat or Facebook ecosystems arguably play similar roles at the moment, and such social media platforms can incorporate games (or already do). Yet, as the next section shows, companies often own multiple platforms and products that cross taxonomic borders.↩
Though this article presents the view that both Amazon and Google seem to be making unappealing games because their primary motivation is to monetise their existing investments in cloud servers, rather than simply to make great games.↩
The 2016 game No Man’s Sky had 18 quintillion unique planets, procedurally generated only as a player got near one.↩
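A minimal sketch of how such on-demand procedural generation can work (the function name and the particular traits are illustrative assumptions, not No Man’s Sky’s actual method): a planet’s properties are derived deterministically from a global seed and its coordinates, so nothing is stored or computed until a player approaches.

```python
import hashlib

def planet_at(galaxy_seed: int, x: int, y: int, z: int) -> dict:
    """Derive a planet's traits deterministically from the galaxy seed
    and its coordinates; the planet is never stored anywhere."""
    digest = hashlib.sha256(f"{galaxy_seed}:{x},{y},{z}".encode()).digest()
    return {
        "radius_km": 2000 + int.from_bytes(digest[0:2], "big") % 8000,
        "has_atmosphere": digest[2] % 2 == 0,
        "flora_density": digest[3] / 255,
    }

# The same coordinates always yield the same planet, with no database of
# 18 quintillion entries: the world exists only as a generating function.
p1 = planet_at(42, 10, -3, 7)
p2 = planet_at(42, 10, -3, 7)
assert p1 == p2
```

Because generation is a pure function of seed and position, the full world never has to exist at once; it is ‘reckoned’ lazily, exactly where a player happens to look.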
However, Bostrom argues that as a given civilisation (ours, for instance) becomes more advanced, it becomes more expensive to simulate. Thus, one is faced with the amusing, or alarming, prospect that our simulation (i.e. the world in which we exist) gets ‘shut down’ because we simply aren’t worth it: as we start poking around in particle accelerators, or begin forming off-world colonies, the amount of compute our simulation demands becomes too expensive for our descendants’ budget.↩
See Harun Farocki’s Parallel Games series, available here, in which he explores the world, behaviours and logics of simulations, including videogames.↩
The video embedded in this review, about Death Stranding (2016-2019), describes the behaviour as ‘pop-in’.↩
And this is partially written not from the perspective of a player, but rather that of someone who makes simulations, and hence sees these (sometimes hidden) sub-structures that Farocki highlighted.↩
Much as scientists do today - testing falsifiable hypotheses against the reckonable environment, while philosophers construct thought-experiments that are tested against logic.↩
The ethical issues involved with terminating or otherwise instrumentalising an artificial entity that is sentient, sapient, or even capable of near-human performance, are considered in •, and for objects-in-general in •, pp. 72-79.↩
This is the hope: the control of a single emulation or clan is a subclass of the general control problem with respect to AGI that is a principal topic of •.↩
Humans currently have differing chronological and phenomenological experiences of time, but somehow manage to agree on clock time, making any necessary adjustments. Some of these adjustments are mediated technically, through atomic clocks or a smartphone’s world time. Other adjustments are internal and occasionally jarring - that sense of ‘time flying’ when we are absorbed in a task or something pleasurable - the psychological state identified as ‘flow’ in videogames.↩
Indeed, within game studies, there is a distinction between games with zero, single, and multiple players, with consequently different logics and aesthetics.↩
Actors, agents, characters and objects are used somewhat promiscuously in this essay, but for context: actors is a term used in UE to refer to most potentially animate or active characters (including things we normally think of as inanimate like rocks, but that happen to move in-game); objects, besides the everyday definition implying an inanimate thing, is also used in the OOO literature; agents is used within the AI literature.↩
This proliferation of addresses implies and necessitates a continuous process of translation, cross-referencing and consistency-checking of addresses (•, pp. 199-204).↩
See also here for historical context on why the AI deployed in games is often not all that sophisticated.↩
The difference between the two is discussed at length in • and in •, pp. 56-62 and 155, but sentience, the weaker test, is the ability to have subjective perceptual experiences. Sapience, essentially a socio-linguistic and technological activity, allows us to abstract from our everyday experience. It gives us imagination, which lets us internally simulate possible states-of-the-world or conceive of new worlds. Sapience also allows us to bootstrap: to construct hypotheses, test them, and use the results of these tests to create theories or heuristics, which are, in turn, added to our store of concepts, which we can subsequently use to reason. See also Peter Wolfendale’s YouTube lecture from 2015, here, and the related paper •. Sentience, to say nothing of sapience, seems, at present, well beyond the abilities of agents within games, and so-called ‘AI’ generally.↩
There appears to be a curious border between ideas in this section and Bostrom’s Simulation Hypothesis. Namely, if a future computer is simulating our reality, is it also simulating a reality for the higher animals, who, from a common-sense perspective, appear to have subjective experiences? What about the worms? And what then of the mimosa tree whose leaves droop at night, or the sunflower? Or are we privileged objects of interest whose reality is simulated with great care, while the others are just ‘fudged’, much as videogames do for the humble grass and the inanimate rock?↩
A useful summary is in •, and in more detail in Bogost’s book Alien Phenomenology. Also, see •, pp. 17-18 for a concise explanation of the differences between Hui’s perspective and that of Harman.↩
‘Material’ as a term that includes the realm of bits and code.↩
Defined in 1991, and still, to a large extent, what most webpages are written in.↩
Hui discusses XML, RDF, and OWL in detail.↩
In the sense that they can act upon other online or indeed physical objects, such as IoT consumer products, using APIs and HTTP calls.↩
See semantic web for a summary of why this merging of formatting, data, and code might lead to a vast, teeming web of autonomous objects going about executing algorithms. At present, bots do the dog-work for search, tracking and social media, accounting for more than 50% of Internet traffic (•, pp. 277-279).↩
Building upon the sentient/sapient distinction he describes in •, summarised in the note above.↩
See Note 1 for more on the flavours of hidden and decadent humanisms.↩
The ideas on why humans might be retained and allowed to flourish seem somewhat unsatisfying, in part due to the speculative nature of the question, but also the lack of clarity on whether WBE or AGI comes first (•, pp. 297-301). Specifically, if WBE comes first, the WBE era is projected to last one or two human years; it would hence be a time of likely wrenching technological change, but possibly without immediate visible effects on employment. As for a ‘steady-state’ AGI era’s attitudes vis-à-vis humans, Bostrom’s writing axiomatically assumes that the AGI must be engineered to not be hostile, or rather, not be indifferent (pace Eliezer Yudkowsky, •) to humans. The cryptography researcher Wei Dai has engaged with Hanson, Bostrom, Yudkowsky, and others on these questions; the exchanges can be found in the comments and links at this Cause Prioritization Wiki.↩
To paraphrase the title of an Agamben work on the end of messianic time.↩
Other major platforms, such as CryEngine and Unity, have similar arrangements.↩
Which pre-dated the global, consumer internet, and strongly influenced the initial emancipatory, soaring dream of cyberspace...before the ‘crooked timber’ of capital and state control brought it back down to earth.↩
Maurizio Lazzarato’s term, also used by Antonio Negri and Michael Hardt, to reference the heterotopic practices central to the informatic, flexible economy, characterised by a blurred division between work and play, as well as a certain libidinal cyclicality that works on the imaginations, tastes and perceived needs of consumers, who might also be producers (on Instagram or TikTok, for instance).↩
Once again, Hanson points out that he is just extrapolating from current technology and understanding of economics, rather than making any normative judgement on the iniquities and general odiousness that such a society may carry for humans.↩
Which may, in fact, be a reasonable one: the first emulations are, by construction, very similar to their human models, apart from the differences pointed out above regarding substrate, copying, etc. Hence it may be the case that they will expect an environment that is familiar. After the initial upload, these same emulations, or further clones of them, might engage in a process of recursive self-improvement in which they choose to leave behind such detritus of their human ancestors, or indeed choose to retain these artefacts. The word ‘choose’ is used loosely, as decisions on values and perspectives are bound up with the emulation’s initial coding, which is designed to keep its values aligned with certain human norms, as extensively discussed in •, pp. 226-253.↩
The recursion chain descends until the stopping condition is reached; the stopping condition defines what the game is.↩
At the moment, this takes the form of neural architecture search and automated machine learning, in which machine learning is used to identify and design better AI algorithms.↩
The layering and scaling that virtuality and recursivity imply are central to the way software runs. For instance, high-level languages (LISP, C++, Java, Processing, Haskell, etc.) are compiled/interpreted down to assembly language, which is much closer to the ‘native code’ of a microprocessor. Operating systems routinely use virtual machines: Windows emulators running on OS X, Linux on Windows, etc.↩
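This layering can be made concrete with Python, whose compiler targets a bytecode virtual machine rather than assembly, but the principle is the same: the standard-library dis module exposes the lower-level instructions that a line of high-level code becomes.

```python
import dis

def add(a, b):
    return a + b

# Each high-level statement is compiled to lower-level instructions that a
# virtual machine (here, CPython's bytecode interpreter) executes; the same
# layering repeats all the way down to native machine code on the processor.
dis.dis(add)
```

Running this prints the bytecode for the one-line function; the exact instruction names vary between Python versions, but the disassembly always shows the high-level `return a + b` decomposed into loads, a binary operation, and a return.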
See The Matrix (1999), and this entry on bullet time.↩
To state the obvious, even if player perceptions were ignored, no agent within the game could operate faster than the clock speed of the substrate, i.e. the console, PC, smartphone, or server upon which the game is playing. At a more banal level, technological issues such as network latency can cause games to stutter, much as a video download pauses unexpectedly.↩
Though it might be conceptually interesting, if commercially daft, to write a simulation where one can experience the subjective time of an emulation, i.e. a vastly slowed down gameworld.↩
One of the first examples of recursion one might learn in programming class is writing a LISP interpreter in LISP.↩
The recursion continues until a stopping condition is hit.↩
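A minimal sketch of this pattern, in Python rather than LISP: an evaluator that calls itself on nested sub-expressions, with the atom as the stopping condition that halts the descent.

```python
def evaluate(expr):
    """Evaluate a nested prefix expression such as ("+", 1, ("*", 2, 3)).
    The stopping condition is the atom: a bare number is returned as-is;
    anything else has its sub-expressions recursively evaluated first."""
    if isinstance(expr, (int, float)):      # stopping condition: an atom
        return expr
    op, *args = expr
    values = [evaluate(a) for a in args]    # recursive descent
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

assert evaluate(("+", 1, ("*", 2, 3))) == 7
```

However deeply the expressions nest, the recursion bottoms out at the atoms; without that stopping condition, the descent would never terminate.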
The call stack is not the same ‘stack’ as in Bratton’s writing, referenced above.↩
As pointed out above, the specific taxonomic label of such apparatus might not matter as online platforms continue to converge.↩
‘drain the swamp, just not yet.’↩