Henry Vanderspuy

How can computational reason enrich the discourse on the work of art in the age of immutable cryptography?

Foreword

Computational reason is a novel framework for understanding the epistemological aspects of computation — which is to say, how it relates to knowledge, what it means to compute and, not least, its relationship to intelligence and reason. The framework emerged in the research of AA Cavia, a contemporary philosopher of computation who first developed these ideas in a book called “Logiciel,” published by &&& in 2022. In Cavia’s research, an epistemic account of computation is laid out by examining its early roots in computational theory, tracing these insights into philosophy, and finally connecting them with key developments in theoretical computer science. As such, it offers a holistic view of computation by considering the diverse traditions found in its three canonical models, which are by turns mathematical (Gödel), linguistic (Church) and mechanistic (Turing).

Computational reason challenges the presumed equivalence of these three models, an equivalence enshrined in the Church-Turing thesis, in order to integrate their different aspects into a new whole. In my view, the perspective we arrive at not only suggests a novel theory of computation, centred on the operation of encoding, but also offers a wealth of conceptual resources for imagining its possible futures. For me, this is appealing in a world where computers have come to underpin so much of what goes on around us, at ever accelerating speeds, some of which is worthy of critique and in need of careful reconsideration. Engaging with this material head-on gave me what started to feel like a first-principles understanding of computation, one that pointed to its foundations in new ways, a goal I feel compelled to continue working towards. To demonstrate how this might look, consider the following quote (emphasis mine):

For close to a century, Turing orthodoxy has identified the computational with the machinic, but what if computation is instead a more fundamental expression of form? Perhaps its true role is encoding logic into language and linking propositions to their structural, which is to say mathematical, form? – AA Cavia, Logiciel (Cavia, 2022).

What might the implications of this alternative perspective be? How might such an epistemic theory of computation change our aesthetic and cultural considerations of computing? Does this view provide a lens through which we can start to see computation as a qualitative phenomenon, as much as a quantitative one? What would this imply about the epistemological limits of AI? Finally, how exactly does this perspective differ from dominant ideas, beliefs and dogmas about computation? In the research that follows, I endeavour to answer some of these big questions by integrating my findings into a novel discourse adjacent to computational reason — namely, the work of art in the age of immutable cryptography. In simple terms, the latter is a discourse composed of artists and enthusiasts working to craft artistic networks and objects with computers. By spending time here, I began to see glimpses of its potential to become a fully fledged cultural scene, home to a critical discourse capable of having a large impact on the Art World, not to mention broader culture. Yet, in spite of this great promise, I found that mainstream narratives largely perceive it as naive and lacking in any critical dimension. These views are not unfounded, as I will go on to demonstrate in my research. However, the broader possibilities previously outlined should not be rejected tout court.

Introduction

My research question attempts to integrate AA Cavia’s philosophical framework of computational reason with an emerging discourse on the work of art in the age of immutable cryptography. To introduce these ideas in simple terms, the former offers a novel perspective on how computation is more than just a mechanism, viewing it instead as a form of inference and mode of explanation with particular semantic connotations. The latter, on the other hand, is a play on Walter Benjamin’s seminal 1935 essay, ‘The Work of Art in the Age of Mechanical Reproduction.’ Here, Benjamin’s essay is invoked in reference to the recent emergence of blockchains — computer networks bound by incorruptible ledgers — and their potential as novel sites for art production in and beyond the digital age.

In what follows, I discuss both parts of this research question in depth to seek out novel perspectives. My broad argument is that the discourse on the work of art in the age of immutable cryptography can be moved into its critical phase through the integration of computational reason, both in its own right and as a conceptual framework. Along the way, I will endorse perspectives from the latter, tracing its integration and synthesis of ideas from disciplines as broad as philosophy, formal logic, mathematics and, not least, theoretical computer science. My view is that this framework, which ultimately casts computation as a way of explaining things rather than as a machine — in other words, as a distinct logos — has the potential not only to enrich the discourse at hand, but also to fundamentally change our understanding of what it means to compute.

I present these ideas at a time when computational technologies are rapidly advancing in power and complexity, epitomised by the spectre of artificial general intelligence. My hope is that a wider understanding of, and interest in, their conceptual underpinnings — the first principles of computation, traced all the way to their cognitive roots and semantic implications — can bring about a renewed sense of wonder and curiosity in the better angels of their nature.

Before we dive in, I would like to emphasise that in discussing the framework of computational reason, we are examining notions of epistemology, semantics and ontology — ultimately, philosophical questions — that can at times seem disconnected from the broad range of technical artefacts under consideration. Having said that, as a research project, computational reason is a science-driven body of work that engages contemporary developments in both theoretical and empirical domains, the key aspects of which I have attempted to integrate into the discussion that follows.

Abstract

Computational reason treats computation as essentially cognitive in character, based on an intuitionistic view which identifies it as a form of inference and mode of explanation. This framework arises from a key contemporary development in theoretical computer science — namely, Univalent Foundations, a constructivist project which posits a topological view of computation, proposed by the late Fields Medalist Vladimir Voevodsky in 2014. On this view, computation is no longer identified with an abstract universal machine so much as with an epistemic theory of encoding, as put forth by AA Cavia. This novel perspective arises from an insistence that computation is not epistemically reducible to Turing machines, the latter presenting an inadequate definition unable to account for computation’s diverse epistemic and conceptual undercurrents. As a framework, it has the potential to usher in a new phase of computationalist ideas, going beyond dogmatic and at times wildly speculative claims such as full-blown computational theories of mind, or worse, the ontological inflation found at the heart of pan-computationalism. Moreover, it raises important questions for the discourse surrounding the work of art in the age of immutable cryptography, also known as crypto. Up until now, this discourse has failed to make a lasting impact on the traditional Art World or on popular culture in durable or meaningful ways; on the contrary, it has attracted extensive criticism, harming its reputation and obscuring its true potential. However, a suite of emerging networks and techniques such as blockchains, non-fungible tokens (NFTs), zero-knowledge proofs (ZKPs) and other forms of distributed ledger technologies (DLTs) should not be written off as potential substrates for novel forms of artistic creation, nor should the growing scene of artists, artworks and collectors who take this possibility seriously.
In this capstone, I will argue that by integrating key perspectives and vocabulary offered by computational reason, the discourse on the work of art in the age of immutable cryptography can be transitioned from its naive to its critical phase, a shift whose potential I aim to elaborate in full. Throughout, I will endorse computational reason as a new understanding of computation, tracing its implications for philosophy, through the field of theoretical computer science, and into the discourse on the work of art in the age of immutable cryptography.

Literature Review

This literature review consists of two sections which trace the relationship of computation to philosophy over the twentieth century and into contemporary culture. The first section examines the “two central dogmas at the heart of computationalism”(Cavia, 2022) which are taken to include not only epistemological notions of computation, but also ontological ones. Here, I will aim to show how these two dogmas are crude and arise from “untenable applications of computational theory” (Cavia, 2022), from which I will begin to show how they offer little in the way of nourishment to the discourse under consideration. To close off this section, I will include a specific review of the relationship between artificial intelligence and the philosophy of computation, an intersection raising a number of critical questions pertinent to our discussion. 

By showing how these two dogmas of computationalism can be evolved, the second section will outline the central tenets of computational reason to begin unlocking their latent potential for enriching the discourse at hand. To do so, we will review a range of semantic theories about computation, ultimately siding with realizability semantics for interpreting the meaning of computational states. This will entail a review of key developments in theoretical computer science such as Univalent Foundations (Voevodsky, 2014), the topological view of computation (Voevodsky, Martin-Löf, Awodey, et al.) and the import of the two distinctly computational operations under this framework: encoding and embedding (Cavia, 2022).

Throughout, I will integrate ideas from the discourse on the work of art in the age of immutable cryptography, referencing examples and aspects which resonate with the range of literature under review. These will show up in broad flavours, such as the discourse’s need for a constructive aesthetics (Fuller, 2013), a proper location of medium specificity, and not least a more sophisticated vocabulary for interpreting and engaging with artworks.

Section one

To begin with, let’s tuck into a brisk review of the literature within what Cavia refers to as the two dogmas at the heart of computationalism (2022), touching on the dominant perspectives which gained traction in the twentieth century and continue to influence mainstream narratives to this day. The “multiply realizable” (Putnam, 1967) nature of computation, be it “voltages on silicon or steam in valves” (Cavia, 2023), can in a sense be seen to underpin the emergence of functionalism in the 1960s (Putnam). The latter divorces mental states from their physical substrates and argues that they can be understood by their functional roles. This paved the way for philosophers such as Jerry Fodor (1980) to assert that the mind is a computational system in so-called computational theories of mind (CTM). These posit that mental states are symbolic representations — a position also known as the representational theory of mind (RTM) — processed by computational rules that can be instantiated on a variety of physical substrates (Fodor). The latter claim is generally referred to as the “multiple realizability of mind hypothesis” (Block, 1980). Furthermore, this range of theories has inspired more recent ideas such as David Chalmers’ causal theory of computation, which Chalmers has argued provides a foundation for cognitive science (1990s).

In their extreme form, these ideas show up in popular culture as speculative claims by famous technologists such as Elon Musk, who on X recently claimed the brain is a “biological computer,” or other speculations such as the idea of mind upload for achieving “digital immortality” (Kurzweil, 2005). For Cavia, these are “speculative claims that arise from untenable applications of computational theory” (Cavia, 2022) — namely, the reduction of computation to mere mechanism, or in other words a “blindly obedient rule-following automaton” (Cavia, 2024, pg.9). Here Cavia insists that computation cannot be severed from logic at the cost of coherence, “since a semantic theory of computational states is a prerequisite for identifying the constraints that mark a physical process as computational in the first place” (ibid, 2024, pg.10). This perspective is central to our discussion and will show up in a later review of LEJ Brouwer’s intuitionist philosophy (1930) and the ensuing realizability interpretation of logic (e.g. Kleene, Martin-Löf), a lineage of ideas giving rise to the inferential theory of computation proposed by Cavia (2023) as a fundamental tenet of computational reason. Importantly, this view, centred on realizability (Kleene, Martin-Löf), provides a “richer semantics” (Cavia, 2023, pg.3) for considering the meaning of computational states, an idea which challenges the dominant semantics for formal languages, i.e. model theory (Tarski), a perspective we will review further in the next section.
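To make the notion of a “blindly obedient rule-following automaton” concrete, here is a minimal sketch of a table-driven Turing machine in Python. It is my own illustration of the mechanistic picture under discussion, not code from Cavia; the machine, which appends a 1 to a unary-encoded number, does nothing but look up and obey its transition table.

```python
# A minimal table-driven Turing machine: computation reduced to rule-following.
def run_turing_machine(table, tape, state="scan", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = table[(state, symbol)]  # obey the rule, nothing more
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Transition table: skip over 1s, write a 1 on the first blank, then halt.
SUCCESSOR = {
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("1", "R", "halt"),
}

print(run_turing_machine(SUCCESSOR, "111"))  # unary 3 -> unary 4: "1111"
```

Everything the machine “knows” lives in its finite transition table; this exhaustive, rule-bound character is precisely what the Turing orthodoxy takes computation to be.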

Coming back to dogmatic beliefs found at the intersection of computation and epistemology, we can trace the influence of computational theories of mind on mainstream narratives and cultural artefacts, a prime example being the popular film series “The Matrix” (1999, 2003, 2003, 2021), where human minds are portrayed as existing inside complex computer-generated simulations. Following in the footsteps of Cavia, this capstone views these forms of computationalism as speculative, having little experimental evidence to substantiate their claims and thereby offering no low-hanging fruit for the discourse at hand.

Next, we can examine computationalist ideas in ontology. Here computational realism pervades the literature. These lines of thinking “assert computation as a matter of fact in the world” (Cavia, 2022) and show up most prominently as pancomputationalism and digital philosophy (e.g. Zuse, Deutsch, Wolfram). The former posits ideas like “the universe computes” (Deutsch, 1995), a view which for Yohan John “is like positing a great big programmer in the sky” (John, 2024). The latter, on the other hand, appears most prominently in Stephen Wolfram’s cellular automata lens on the universe (Wolfram, 2023), which ultimately likens the microstates of physical matter to computational ones, a view that John Conway, creator of the Game of Life cellular automaton, was well known for disagreeing with. Other ontic claims about computation include Nick Bostrom’s simulation hypothesis (2003), itself an idea largely reminiscent of pan-computationalism and digital philosophy. Here, Bostrom suggests we are living inside a massive computer simulation, the argument for which extends the exponential growth of the modern computing industry, as enshrined in Moore’s Law (1965), to suggest a future, post-humanist civilisation with the computational resources to simulate various ancestral histories, of which we are taken to be one. Chalmers is seen to endorse these ideas along similar lines, viewing them as “serious metaphysical hypotheses” (Chalmers, 2022), yet ultimately conceding that we might never be able to know if they are true.
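For readers unfamiliar with the objects behind digital philosophy, the following sketch runs an elementary cellular automaton (Rule 30, one of the systems Wolfram studies), showing how a simple local rule generates complex global behaviour. It is offered purely as an illustration of the systems at stake, not as an endorsement of the ontological claims built upon them.

```python
# Elementary cellular automaton: each cell's next value is a fixed
# function of its three-cell neighbourhood, read off the bits of the rule.
def step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 15 + [1] + [0] * 15  # start from a single live cell
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Despite the triviality of the rule, Rule 30 produces famously irregular patterns, which is the observation pancomputationalists generalise, contentiously, into a picture of physics itself.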

For the prominent philosopher of computation Gualtiero Piccinini, this strain of ontological claims about computation is a form of “trivially speculative metaphysics which conflate modelling with explanation” (Piccinini, 2010). I will aim to show in a later section how this capstone finds these ontologically inflated ideas to be unproductive in the discourse on the work of art in the age of immutable cryptography, viewing them as untenable frames of reference onto which discussion around art objects and computer programs can bind themselves, offering nothing in the way of a generative vocabulary, nor insight into the medium specificity of the toolkit at hand. Instead, I will show how computational reason (Cavia) offers a more sophisticated approach in the discourse’s search for a proper ontology of its art object, a search articulated by curator Achilleas Sarantaris (2024) and with which this capstone finds agreement.

Any proper review of the relationship between computation and philosophy must account for ideas emerging from within the project of artificial intelligence. These often show up under the same pretences as many of the ideas discussed earlier (Kurzweil, Bostrom, Deutsch). Therefore, perspectives which complement our vision for a critical discourse will instead be reviewed. The first is Centaur Theory (Kasparov), a term introduced in 1997 when chess grandmaster Garry Kasparov lost to IBM’s artificial intelligence system, Deep Blue. Upon defeat, Kasparov coined the term “centaur chess,” which was later concretised in Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (Kasparov, G., 2017). A more recent version of this idea can be found in the strategic board game of Go, where a study showed the average decision quality of professional Go players to have increased since Lee Sedol’s famous defeat to AlphaGo on March 15, 2016 (Shin, Kim, et al., 2023). The significance of Centaur Theory to the discourse under consideration will show up later in our analysis of Terra0 (2016), a conceptual artwork stemming from the Berlin University of the Arts which aims to confer economic sovereignty on natural ecosystems like forests via tools of immutable cryptography such as smart contracts and non-fungible tokens. This is a similar idea to the concept of “Autonomous Worlds” (Ludens, 2022), which aims to create decentralised worlds, be they games or platforms, atop blockchain substrates. Here, notions of autonomy and agency can be found in Cavia’s treatment of computation as “a distinct mode of explanation—a specific logos which is not merely subordinated to a pre-existing technē” (Cavia, 2023, pg. 17), to which the name “computational reason” is attached (ibid, 2023).

Generally speaking, these ideas arise from observations in the domain of neural computation (Cavia), which has its roots in McCulloch & Pitts’ early work on neural networks (1943), a connectionist idea that has recently reached its ascendancy in the Deep Learning paradigm (Hinton, LeCun). The latter is responsible for having given rise to mainstream large language models such as GPT-4 (2023). Broadly speaking, this concerns “a shift from the notion of a stored program to that of a cognitive model” (Cavia, 2024) in computer architecture, raising important questions for the philosophy of computation and the aesthetics of the art object, not to mention language in an age of artificial intelligence (Cavia).

Another idea pertinent to this discussion is the manifold hypothesis posited by Chris Olah (2014), which states that “real-world data forms lower-dimensional manifolds, or continuous surfaces, in its embedding space” (Olah, 2014), a candidate theory for interpretability and symbolic AI (Cavia). This idea corresponds with the topological view of computation (Voevodsky, Martin-Löf, Awodey, et al.) and raises broader questions for the philosophy of mind and how we understand inference and cognition (Cavia, Reed). Under this rubric, the descriptive vocabulary of computational reason includes terms such as “paths,” “continuous spaces,” “navigation,” “orientation” and “ungrounding,” frames of reference that will be used extensively in the analysis of artworks such as Terraforms (2021), a computer program running continuously on Ethereum. Finally, Benjamin Bratton’s idea of cognitive infrastructures (2023) and Cavia’s perspective that “computation is the coming together of logic and matter which sits at the nexus of logical and material laws” (Cavia, 2023) will be leveraged to interpret other artworks found in the discourse.
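The manifold hypothesis can be illustrated with a deliberately simple, linear toy case of my own construction: points that have only two degrees of freedom, written out in a six-dimensional ambient space, reveal their low intrinsic dimension when we compute the rank of the point cloud. (Real data forms curved, nonlinear manifolds; the linear case merely makes the ambient-versus-intrinsic distinction visible.)

```python
# Rank via fraction-free style Gaussian elimination with partial pivoting.
def matrix_rank(rows, eps=1e-9):
    rows = [list(r) for r in rows]
    rank, col = 0, 0
    while rank < len(rows) and col < len(rows[0]):
        pivot = max(range(rank, len(rows)), key=lambda i: abs(rows[i][col]))
        if abs(rows[pivot][col]) < eps:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank:
                f = rows[i][col] / rows[rank][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

# Each point is determined by two latent coordinates (u, v), then embedded
# into 6-D by a fixed linear map: ambient dimension 6, intrinsic dimension 2.
def embed(u, v):
    return [u, v, u + v, 2 * u, 3 * v, u - v]

cloud = [embed(u, v) for u in range(5) for v in range(5)]
print(matrix_rank(cloud))  # -> 2
```

The 25 points span only a 2-dimensional subspace of the 6-dimensional space they sit in, which is the linear shadow of what the manifold hypothesis claims about real-world data.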

Section two

Now that we have established a few strands of thought at the crux of computation and philosophy, we can begin to sketch a proper outline of computational reason. Cavia makes clear his aim to evolve ideas in the philosophy of computation by citing the two dogmas of computationalism (2021), a nod to Quine’s 1951 essay on the two dogmas of empiricism. Cavia posits that up until now, computationalist ideas have broadly represented a “cleaving of reason from mind” (Cavia, 2022), a school of thought referred to as the “Turing orthodoxy” (Cavia, 2021), in which computation is reduced to mere mechanism. This is where the novel image of thought laid out by computational reason begins to show its face, insisting that “all notions of computation are not epistemically reducible to Turing machines” (Cavia, 2022b). In this sense, each of the three canonical models of computation (Gödel, Church, Turing) accounts for diverse aspects and traditions of what it means to compute, by turns mathematical (Gödel), linguistic (Church) and mechanistic (Turing). By evaluating each one’s attempt to “diagnose contingency in formal systems” (Fazi, 2018), Cavia places undecidability at the heart of both computation and mathematics, an “intuitionistic view” (Cavia, 2024) signalling an “ascendancy of the inferential over the axiomatic” (Cavia, 2024). This capstone resonates with this constructive account of computation (Kleene, Martin-Löf, Voevodsky, Awodey, Cavia), which mirrors Matthew Fuller and Beatrice Fazi’s call for a constructive “computational aesthetics” (Fuller & Fazi, 2013) in computational cultures, one we will integrate into the discourse on the work of art in the age of immutable cryptography.

At its core, computational reason can be summarised as countering Turing’s formalist bias (Turing machines) “by endorsing instead an intuitionistic view of computation, founded on a spatial theory of types first proposed by Voevodsky, interpreted under realizability semantics originating in the work of Kleene and further developed by Martin-Löf” (Cavia, 2024). It is on this view that computation becomes more than simply a mechanical mode of calculation, which for Cavia represents a “doxa” (Cavia, 2022) seeping into a plethora of cultural narratives around computer science. Under this rubric, computation is cast as its “own epistemē,” a “world whose logic structures the possibility for encoding reason” (Cavia, 2022). Computation is thus elevated from being a suburb of mathematical practice, “merely a subspace of recursive functions on the natural numbers, to an equal partner in the progression of mathematical practice” (Cavia, 2022). This cluster of ideas at the heart of computational reason provides a framework for “distinguishing those acts that are distinctly computational, as opposed to mathematical, logical, causal, or otherwise” (Cavia, 2022) and affords this capstone an abundance of conceptual resources for the discourse on the work of art in the age of immutable cryptography.

So far we have traced a range of ideas at the intersection of computation and philosophy. On the one hand, we reviewed computational theories of mind, which “invariably invest computational states with semantics” (Cavia, 2022). On the other hand, we reviewed the notion of “Turing orthodoxy,” which conflates computation with the machinic (Cavia), leaving it devoid of any semantic content whatsoever. For our purposes, a brief review of realizability semantics is in order. First developed by Kleene and expanded on by Martin-Löf, realizability semantics has its roots in intuitionism (Brouwer), which famously rejects the Law of the Excluded Middle (Aristotle), a fundamental law of western logic stating that every proposition must be either true or false. In intuitionistic logic (Kleene), the notion of proof replaces the concept of “truth” with “a many valued semantic theory of operations and procedures” (Cavia, 2023). In a 1945 paper, “On the Interpretation of Intuitionistic Number Theory,” Kleene connected intuitionistic logic with computational concepts by introducing realizability, which interprets intuitionistic statements in terms of the existence of specific computable functions that serve as constructive proofs. This work was expanded on in the post-war period under Martin-Löf’s constructive type theory (1972), where Martin-Löf showed that under realizability, logical propositions are said to be ‘realized’ by their corresponding types, a notion that established a direct link between logic (proofs) and computation (programs). Under this scheme, “voltages on silicon are no longer interpreted by Boolean truth tables, but as epistemic acts of encoding grounded in a realizability interpretation of logic, a theory which insists on the materiality of truth procedures” (Cavia, 2024), one which we will synthesise with the discourse at hand.
Furthermore, this correspondence has since been given the name “Computational Trinitarianism” by Robert Harper (2012) to denote an interrelation between type theory, proof theory, and category theory, a viewpoint Voevodsky (2014) extended through the concept of Homotopy Type Theory (HoTT) in a project called Univalent Foundations, offering new foundations for all of mathematics and computer science, as shown in figure 1 below. On this view, computation becomes synonymous with the “forging of paths in continuous spaces” (Cavia, 2024). We will aim to synthesise these ideas with the discourse on the work of art in the age of immutable cryptography, suggesting a view of art objects on blockchains as epistemic acts of encoding that can be interpreted in a number of spatial ways.
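The propositions-as-types reading behind this trinity can be sketched informally in Python. The function names below are my own illustrative labels, not a formal type theory: a realizer for a conjunction is a pair, a realizer for an implication is a function, and a realizer for an existential claim is a concrete witness together with evidence, the intuitionistic requirement that existence be constructed rather than merely asserted.

```python
# Propositions as types, realizability-style: a proposition counts as
# "true" when we can construct a program (a realizer) that witnesses it.

# A realizer for "A and B" is a pair of realizers.
def and_intro(proof_a, proof_b):
    return (proof_a, proof_b)

# A realizer for "A implies B" is a function turning realizers of A
# into realizers of B; applying it is modus ponens.
def modus_ponens(proof_a_implies_b, proof_a):
    return proof_a_implies_b(proof_a)

# A realizer for "there exists n such that P(n)" is a concrete witness
# together with evidence, rather than a bare assertion of existence.
def exists_intro(witness, evidence):
    return (witness, evidence)

# Example: "there exists n with n * n == 49" is realized by the pair (7, True).
witness, evidence = exists_intro(7, 7 * 7 == 49)
print(witness, evidence)  # 7 True
```

Proofs become programs and propositions become types; this is the link the section above describes between logic (proofs) and computation (programs).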

Figure 1: Homotopy Type Theory: Univalent Foundations of Mathematics, (2013). Institute for Advanced Study. p. 11

Last but not least, I will attempt to sketch a brief outline of Cavia’s rendering of the two fundamental operations that underpin computational reason. For Cavia (2022) the “univalence axiom can be synthesised with the manifold hypothesis in machine learning,” in order to endorse a “topological account of computational reason, grounded in the two fundamental operations — encoding and embedding” (Cavia, 2022). Furthermore, Cavia elaborates on encoding and embedding as the two “operations which can be described as definitively computational within the univalent worldview” (Cavia, 2022) covered above. The former corresponds with the “assignment of a token to a type,” the primitive operation in type theory, whereas the latter “corresponds to the generation of higher spaces via path induction, a synthetic operation which allows for the construction of paths in ever increasing levels of abstraction” (Cavia, 2022). With these in mind, Cavia posits that “computation can be seen as quite simply the science of encoding” (Cavia, 2022). To close off this section, it is this web of ideas, spun together by the framework of computational reason, that leads Cavia to view computation as a “more fundamental expression of form” (Cavia, 2022, pg. 133) whose true role is “encoding logic into language and linking propositions to their structural, which is to say mathematical, form” (ibid).
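Encoding in the classical sense can be made concrete with Gödel numbering, the standard construction from the mathematical (Gödel) tradition mentioned earlier, which packs a sequence of natural numbers into a single number via prime powers. The sketch below is a textbook illustration of what it means to encode, not Cavia’s own formalism.

```python
# Gödel numbering: encode a sequence of naturals as a product of prime powers.
def primes():
    candidate, found = 2, []
    while True:
        if all(candidate % p for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1

def godel_encode(sequence):
    code = 1
    for prime, value in zip(primes(), sequence):
        code *= prime ** (value + 1)  # +1 so zeros in the sequence are recoverable
    return code

def godel_decode(code):
    sequence = []
    for prime in primes():
        if code == 1:
            return sequence
        exponent = 0
        while code % prime == 0:
            code //= prime
            exponent += 1
        sequence.append(exponent - 1)

print(godel_encode([2, 0, 1]))  # 2**3 * 3**1 * 5**2 = 600
print(godel_decode(600))        # [2, 0, 1]
```

Because factorisation into primes is unique, nothing is lost in the round trip: arbitrary structure is faithfully carried by a single number, which is the sense in which encoding can be treated as a primitive operation in its own right.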

Analysis

In the spirit of Occam’s razor, it could be argued that the term ‘immutable cryptography’ is overcomplicated and better just left at ‘crypto,’ the latter naming an emerging industry which has given rise to the well-known cryptocurrencies Bitcoin and Ether. In my mind, however, the former terminology is more fitting for two reasons. Firstly, the argument being laid out here is that this discourse is still in its infancy, highlighting the need for a cultural effort that aims to construct new concepts and names entirely (Fuller & Fazi, 2013). Secondly, it emphasises the importance of immutability – a word the Cambridge Dictionary defines as ‘unchanging, or unable to be changed’ – to the work of art in an age of ubiquitous digital, not to mention mechanical, modes of reproduction.

Moreover, it corresponds with the idea that there are times in history when our technologies outpace our vocabulary, summoning the need for new concepts, a dynamic Benjamin Bratton refers to as the “technology of philosophy” (Bratton, 2023). Here, I would make the claim that ‘crypto,’ in all of its social complexity, is precisely such a case. Therefore, the term ‘immutable cryptography’ is offered as a provisional name for a discourse that will likely rename itself as it evolves and finds firmer conceptual content. Likewise, it acts as a constructive framing capable of imagining a novel vocabulary for the discourse under consideration.

In what follows, I will analyse not only the core aspects of this discourse which have led to its falling short on multiple fronts, but also those parts of it that hold great potential and promise. Essentially, the main thrust of my analysis is a comparison between those aspects of the discourse I found to be naive, which is to say the large majority, and those aspects that presented a more hopeful situation. To elucidate this stance, the former are predominantly grounded in findings from an NLP discourse analysis, whereas the latter correspond with case studies from netnographic research. Through analysis that is by turns discursive, artistic and economic, I will aim to show how the framework of computational reason can be integrated and synthesised in order to transition the work of art in the age of immutable cryptography into its critical phase.

On the lack of critical discourse

Central to this capstone, and embedded in its title, is a nod to Walter Benjamin’s seminal 1935 essay “The Work of Art in the Age of Mechanical Reproduction.” For Benjamin, the mechanical modes of reproduction and capital proliferating in the 20th century were responsible for stripping away the most essential aspects of an authentic artwork, such as its presence in time and space, or “its basis in ritual” (Benjamin, 1935). The name given to this phenomenon in Benjamin’s essay is “Aura” (ibid, 1935), which in my mind can be thought of as that unique subjective feeling a true work of art manages to induce in its viewer. Thus, for the work of art in the age of mechanical reproduction — where print copies and electronic media become abundant, not to mention digital reproducibility — provenance is obscured and Aura diminishes, if it does not disappear tout court. Herein lies the significant promise of immutable cryptography, proffering the artist a bricolage of tools and techniques for creating authentic artworks that, in an important sense, cannot be changed, copied or mutated.
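The technical basis of this immutability can be sketched with a minimal hash chain in Python, a drastic simplification of how blockchains commit to history (the function names are my own): each block commits to its predecessor’s hash, so altering any past entry invalidates everything after it.

```python
import hashlib

def block_hash(data, prev_hash):
    # Each block's hash covers both its payload and its predecessor's hash.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(entries):
    chain, prev = [], "0" * 64  # genesis blocks point at an all-zero hash
    for data in entries:
        digest = block_hash(data, prev)
        chain.append({"data": data, "prev": prev, "hash": digest})
        prev = digest
    return chain

def verify(chain):
    # Recompute every hash; any edit to past data breaks the chain here.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["data"], prev):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["mint artwork #1", "transfer to collector"])
print(verify(chain))                  # True
chain[0]["data"] = "mint artwork #2"  # tamper with history
print(verify(chain))                  # False: tampering is self-evident
```

Real blockchains layer consensus mechanisms on top of this commitment scheme, but the scheme itself is the source of the ‘immutable’ in immutable cryptography: history is not impossible to alter, merely impossible to alter undetectably.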

However, it is also here that the discourse happens to have fallen short, as made all too clear by its focus on singular and “reductionistic constructions” (Achilleas Sarantaris, 2024) such as the ‘NFT.’ This seems to corroborate my hypothesis for keyword extraction in online communities in 2021, where I expect the focus of discussion to be centred, almost to the point of obsession, on the singularity of this primitive. Based on this hypothesis, I expect to see patterns of language use dominated by words like ‘jpeg,’ ‘million dollars’ and ‘wen moon,’ which would indicate nothing less than a form of wild speculation and inept market value perception (Rausch, 2024), with little attention paid to aesthetics or beauty.
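The kind of keyword extraction this hypothesis relies on can be shown in miniature. The posts below are invented purely for illustration; the method, counting term frequencies after stripping stopwords, is the simplest possible baseline for the NLP discourse analysis described above.

```python
from collections import Counter
import re

# Invented, illustrative community posts standing in for scraped data.
posts = [
    "wen moon? this jpeg is going to a million dollars",
    "just flipped a jpeg for a million dollars, wen moon",
    "floor price down again, wen moon",
]

STOPWORDS = {"a", "is", "to", "this", "for", "just", "the"}

# Tokenise, lowercase, drop stopwords, then count the remaining vocabulary.
tokens = [
    word
    for post in posts
    for word in re.findall(r"[a-z]+", post.lower())
    if word not in STOPWORDS
]
print(Counter(tokens).most_common(3))
```

A real analysis would add TF-IDF weighting, phrase detection and a far larger corpus, but even this baseline surfaces the speculative vocabulary the hypothesis predicts.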

By way of definitions, NFT stands for non-fungible token and is commonly taken to represent a unique ‘digital asset’ stored on a blockchain, in our case Ethereum, as afforded by the ERC-721 standard. Surrounding this keyword, part of my hypothesis is that serious critiques of the discourse from mainstream cultural narratives will tend to be levelled at the countless ‘scams,’ ‘rugpulls’ and ‘fraudulent activity’ in the discourse, all words that I have seen through qualitative methods of data collection and research. Moreover, this corresponds with wider research on general market trends: a report from ‘dappGambl’ claimed that 95% of NFTs had fallen to zero monetary value by September 2023.
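To ground the definition, here is a toy model of the non-fungible property in Python. It is not the real ERC-721 interface, which specifies contract methods such as ownerOf and transferFrom on Ethereum; the class and addresses below are invented, but they capture the two guarantees at stake: every token id is unique, and only the current owner may transfer it.

```python
# A toy, in-memory model of non-fungible tokens (not actual ERC-721).
class ToyNFTLedger:
    def __init__(self):
        self.owners = {}  # token_id -> owner address

    def mint(self, token_id, owner):
        if token_id in self.owners:
            raise ValueError("token ids are unique: cannot mint twice")
        self.owners[token_id] = owner

    def transfer(self, token_id, sender, recipient):
        if self.owners.get(token_id) != sender:
            raise PermissionError("only the current owner may transfer")
        self.owners[token_id] = recipient

ledger = ToyNFTLedger()
ledger.mint(1, "0xArtist")
ledger.transfer(1, "0xArtist", "0xCollector")
print(ledger.owners[1])  # 0xCollector
```

On an actual blockchain this ledger state is replicated and secured by consensus rather than held in one process, which is what turns these two simple rules into verifiable provenance.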

These findings point to the lack of critical discourse on the work of art in the age of immutable cryptography. Admittedly, they remain hypotheses for now, but they would likely correspond to the significant downturn in the NFT market that surrounds the discourse. This would allow us to infer that the language games being played at the heart of the discourse in 2021, not to mention the types of objects being created, were likely centred on the wrong primitives, both technically and conceptually.

However, as Sarantaris explains, “people are experimenting with different things, under different guises” (Sarantaris, 2024), and a more holistic view of the discourse is still possible, one centred on a novel “ontology, topology and ritual” (ibid.) as categorical candidates for the constructive computational aesthetics I claim the discourse needs. Interestingly, this triad corresponds not only with the constructive logic we’ve endorsed throughout, but also with the significance of topology in computational reason, where under univalence, “topology is not offered as a metaphor but is instead foundational” (Cavia, 2024). For Sarantaris, an “object is meaningful because of its context (topology),” a view almost isomorphic with the realizability semantics discussed earlier, which under the rubric of computational reason “refers precisely to our ability to construct a topological site for the identification of those entities that are said to inhabit that world.” Perhaps, on this view, the true novelty of non-fungible tokens is in affording the work of art an anchor in space and time, which we might refer to in topological terms as a process of “situational embedding” (Cavia & Reed, 2022). This perspective differs substantially from the keywords and patterns of language use laid out in my hypothesis for a quantitative discourse analysis, and it is one that I claim offers traces of a more robust, not to mention critical, starting point for the discourse on the work of art in the age of immutable cryptography.

A final part of my hypothesis is that the discourse lacks any particular low time preference culture (Pirkowski, 2022), a term we can otherwise refer to as long-term thinking. Notions of ‘pump my bags’ and ‘wagmi,’ where the latter translates to “we’re all going to make it,” suggest that the discourse was blinkered into naive assumptions of pure speculation and greed, phenomena not at all unfamiliar to the online realm, as demonstrated by previous speculative frenzies like the well-known dot-com bubble of the early 2000s.

Once again, this highlights the need for more robust discursive foundations. I would argue that cultural narratives centred on the “long arc of computational reason” (Cavia, 2021) might work to engender a sense of epistemic humility in the discourse. This is speculative, but it speaks to Cavia’s point on computation: that we should not be fooled into “assuming we are dealing with a twentieth century development— the digital computer as a distinct technical artefact— but rather a logical structure integral to the long arc of reason itself” (Cavia, 2022, p. 12). Here, we can replace the ‘digital computer’ with the core primitives found in the discourse on the work of art in the age of immutable cryptography. This idea can help to decentre the narrow focus on singular words and frames of reference, instead opening up the discourse to a richer landscape of ideas and concepts, with the longer time horizon essential to great art.

On the Advance of Computational Reason

A core aspect of my hypothesis in measuring the language games within the discourse was that the word ‘encoding,’ an operation computational reason identifies as distinctly computational in nature, would rarely appear. After all, this discourse, which ultimately plays out downstream of computation, has a misconstrued perception of itself, displayed in its tendency to become myopic at the expense of more critical notions available to it, such as a properly fleshed-out definition of encoding.

Interestingly, the renowned technologist Elon Musk was reported to have critiqued the space on this very point, exclaiming that the discourse should “at least encode [the NFT] on the blockchain” (Musk, 2024). My claim is that by moving the focus upstream to the more fundamental operation of encoding, the discourse could ground itself in a more critical and distinguishable way. On this view, the perception of value, whether commercial or aesthetic, could begin to orbit the artist’s creativity in encoding artistic forms into programs on Ethereum, a non-trivial act under the constraints of a decentralised computer network with expensive fees and limited compute.

To demonstrate how this isn’t always the case in the discourse, consider the work of Mathcastles, a studio run by two pseudonymous “artist-programmers” (Rausch, 2024) who explore the “conceptual possibilities of the Ethereum Virtual Machine (EVM)” (ibid.). In figure 1.2, Mathcastles encodes the price of its highly anticipated artwork “Angelus” in a 32-by-32 pixel array, where to figure out the price, a potential buyer must calculate the sum of all pixel values, a nearly impossible feat given that each pixel represents a value between zero and a large prime used in zero-knowledge cryptography, namely the 77-digit prime underpinning the BN-254 curve. As can be seen in the program below, the smart contract uses zero-knowledge proofs to verify the existence of this encoded price, as well as the authenticity of the image, without revealing the actual data. The relevant parameters (_priceProof, _pricePubSignals, _imageProofs, _imagePubSignals) and functions (proveImage, _validatePurchase, _transfer) handle the validation and transaction processes securely and privately, as can also be seen in the comments at the top of the program, which speak to its conceptual underpinnings. In this body of work, which speaks to the fundamental import of encodings in the computational domain, a domain computational reason might describe as “simply the science of encoding” (Cavia, 2024), we can begin to see glimpses of a critical discourse around a proper notion of medium specificity emerging.

Figure 1.2: Screenshot of the smart contract for Mathcastles’ Angelus (2024)
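The price-encoding scheme described above can be sketched in miniature. This Python reconstruction is hypothetical: it uses the 77-digit BN-254 scalar field prime referred to in the text, but stands a plain SHA-256 commitment in for the contract’s actual zero-knowledge circuits, and the function names are my own.

```python
import hashlib
import random

# The 77-digit prime of the BN-254 scalar field (assumed from the
# text's description; the contract's real ZK circuits are not
# reproduced here).
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def encode_price(price: int, seed: int = 0) -> list[int]:
    """Hide `price` in a 32x32 grid: the sum of all cells mod P equals it."""
    rng = random.Random(seed)
    grid = [rng.randrange(P) for _ in range(32 * 32 - 1)]
    # Choose the final cell so the total comes out to `price` mod P.
    grid.append((price - sum(grid)) % P)
    return grid

def decode_price(grid: list[int]) -> int:
    """Recover the price by summing every pixel value mod P."""
    return sum(grid) % P

def commit(grid: list[int]) -> str:
    """Plain hash commitment standing in for a zero-knowledge proof:
    it binds the encoding without revealing it."""
    return hashlib.sha256(",".join(map(str, grid)).encode()).hexdigest()
```

Because each cell ranges over the full field, summing the grid by inspection is practically hopeless, while the commitment lets anyone verify that a claimed encoding exists without the data being disclosed.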

Terraforms and Terra0

A fair amount has been written on the inner workings and artistic complexity of Terraforms, a project encoded and released on the EVM in late 2021 by Mathcastles Studio. Take, for example, Rausch’s perspective that Terraforms emerged as a profound “rejection of the status quo” (Rausch, 2023), challenging dogmas at the heart of the discourse, such as the view “that blockchains are merely a storage device for finished work” (ibid.) and that “visual aesthetics are the primary criterion for art” (ibid.), a rejection that corresponds with my view of a largely naive discourse.

At its core, Terraforms offers a riposte to the naivete of the discourse on the work of art in the age of immutable cryptography, which we have uncovered in this research. Its layers of complexity and detail, captured by its encoding of a 3D virtual structure on the Ethereum Virtual Machine — comprised of 20 layers, each consisting of heightmaps, chromas, pulses and precise coordinates in the overall world — speak to the capacity of an artist, or duo, to push the boundaries of what is possible for the discourse. Its minimalistic style, as captured in its underlying program, mixed with its rich virtual renderings by a wide number of community members and collectors, is a true testament to the discourse’s capacity for truly critical and complex discussion and artworks, respectively.

However, the most pertinent aspect of Terraforms, as an art object, for my research was its ability to capture the essence of the split between the canonical models and the topological turn in theoretical computer science, where the latter denotes a view of computation “unmoored from strictly discrete operations” (Reed & Cavia, 2022), opening it up “to geometric or spatial domains” (ibid.). This split between the discrete, abstract symbolic manipulation commonly associated with Turing machines and the “forging of paths in continuous spaces” (Cavia, 2023) is mirrored in the difference between the seemingly static program which underpins Terraforms and the dynamically computed 3D world capable of floating atop the Ethereum blockchain for decades, if not centuries, into the future.

Figure 1.3: Screenshot of Terraforms by Mathcastles, a dynamically generated 3D structure on the EVM.
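The contrast between a static underlying program and a dynamically computed world can be illustrated with a toy sketch. The Python below is in no way Terraforms’ actual algorithm; it only shows how a small, immutable, deterministic function can keep generating the same layered heightmap from a seed indefinitely, the way an on-chain program can.

```python
import hashlib

def height(seed: int, layer: int, x: int, y: int) -> int:
    """Deterministically derive a height value (0-255) for one coordinate.
    A toy stand-in for on-chain generative logic, not Terraforms itself."""
    data = f"{seed}:{layer}:{x}:{y}".encode()
    return hashlib.sha256(data).digest()[0]

def heightmap(seed: int, layer: int, size: int = 32) -> list[list[int]]:
    """Render one layer of a layered structure as a grid of heights.
    The program is static; the world it computes is derived on demand."""
    return [[height(seed, layer, x, y) for x in range(size)] for y in range(size)]
```

Because the function is pure, any renderer, today or decades from now, recomputes an identical world from the same seed, which is the sense in which a static program can float a dynamic structure atop the chain.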

Finally, consider Terra0, a project which aims to create “automated ecosystem resilience through a set of smart contracts on Ethereum” (Seidler, Kolling, et al., 2016) in an attempt to confer computational agency on natural ecosystems, such as forests, so that they might engage with economic systems and political jurisdictions autonomously. Here, we can begin to see how the vocabulary of computational reason, such as computation’s capacity to “distinguish itself from tekhne” (Cavia, 2022), marking itself out as a “distinct epistēmē, a world whose logic structures the possibility for encoding reason” (ibid.), might be integrated into Terra0’s endeavour to grant epistemic rights to inhuman forms of intelligence, such as forests and other planetary ecosystems. My claim is that this could lay the first stone for humanity in granting computational agency due recognition as a distinct logos, one that sets itself apart from the dynamics of most other “technical objects” (Simondon, 2017).

For Terra0, the concept of a self-sovereign program that can act as an autonomous entity is an interesting play on the idea from computational reason that Cavia refers to as computational agency. It also touches on important aspects of our literature review, such as Centaur theory and Bratton’s idea of “cognitive infrastructures.” Here, we can raise the concept of inhuman intelligence as falling under the rubric of computational reason, in multiple valences. As Cavia succinctly puts it, “computation is the coming together of logic and matter,” an observation that is multiply realisable and by no means needs to stay centred on parochial human notions of agency. My view is that projects like Terra0 speak to the potential for a critical discourse on the work of art in the age of immutable cryptography.

Conclusion

To conclude, I have attempted to integrate the framework of computational reason into the discourse on the work of art in the age of immutable cryptography. To show how, let us summarise our trajectory. We began with a review of the key themes at the heart of computationalism, a field typically taken to refer to computational ideas about the brain and mind, but which I suggested should also include ideas about computation and the real. Tracing epistemological and ontological ideas from their advocates into their popular culture expressions, I aimed to show that they were unproductive for the discourse on the work of art in the age of immutable cryptography. 

At which point we moved into a review of computational reason, sketching a brief outline of the multi-disciplinary web of ideas it weaves together. We touched on alternative aspects of computational thought in history, moving through Brouwer’s Intuitionism, Kleene’s realizability semantics, and into the constructive type theory of Martin-Löf. We then zeroed in on Homotopy Type Theory to review Univalence and some implications suggested by Voevodsky’s spatial theory of types. Here we discussed Cavia’s attempt to tease out a novel philosophy of mind, including a review of the manifold hypothesis and the fundamental operations of encoding and embedding. Computation was treated as essentially cognitive in character, showing how, on this topological view, its role becomes one of forging paths in space, or, in other words, an epistemological toolkit for navigating continuous spaces.

Once we had covered these properties and ideas of what is a relatively new vocabulary for computation, we moved into a discussion of the problem space at hand: namely, the lack of critical discussion surrounding the work of art in the age of immutable cryptography. Here, I used findings from my quantitative analysis to argue that the discourse is naive and in need of a more sophisticated vocabulary. Through a reading of Fuller and Fazi, I attempted to construct novel aesthetic categories for computation. I centred this analysis on the dynamics of immutable cryptography, tracing the affordances of blockchain technologies and the stream of new media they continue to give rise to. I showed potential reasons why the discourse went wrong to begin with, then moved into a longer analysis of its raw potential by examining aspects of my case studies from the discourse at hand.

I attempted to link them to the vocabulary of computational reason, with a central focus on the topological view of computation, realizability semantics and concepts of computational agency. Specifically, I synthesised the notion of epistemic acts of encoding with computer program artworks running continuously on Ethereum, an argument I believe could help loosen the grip of the digital on computation in mainstream narratives and help shift dogmatic and uninspired interpretations of computer artworks.

Moreover, I included a short discussion of the relevance of artificial intelligence to the discourse at hand, taking note of the possible overlap. To end, we explored forms far and wide, pitching a novel worldview that could enrich a new art scene on its journey to becoming. In this sense, we mixed together concepts and percepts in the hope of rendering intelligibility anew. We let analogies roll and correspondences carry us in and through an interdisciplinary site under construction, one whose truth time will tell.

Bibliography

Antikythera. (2023). Planetary computation.

Bostrom, N. (2003). Are we living in a computer simulation? Philosophical Quarterly, 53(211), 243-255.

Block, N. (1980). Troubles with functionalism. In N. Block (Ed.), Readings in philosophy of psychology (Vol. 1, pp. 268-305). Cambridge, MA: Harvard University Press

Brouwer, L. E. J. (1981). Brouwer’s Cambridge lectures on intuitionism. D. van Dalen (Ed.). Cambridge University Press.

Cavia, A. A. (2022). Logiciel: Six seminars on computational reason. TripleAmpersand Journal.

Cavia, A. A. (2024a). Interaction Grammars: Beyond the Imitation Game. In R. A. Trillo & M. Poliks (Eds.), Choreomata: Performance and Performativity after AI (pp. 123-138). Routledge.

Cavia, A. A. (2024b, June 11). Turing Trauma: Blind Spots in the Synoptic Vision of Intelligence. Paper presented at ‘Marxism & The Pittsburgh School’, University College London.

Cavia, A. A. (2024c). Art and language after AI (forthcoming).

Cavia, A. A. (2023a). The topological view. Berggruen Institute.

Cavia, A. A. (2023). In M. Salemy (Ed.), Model Is the Message: Incredible Machines Conference 2022. TripleAmpersand Journal

Cavia, A. A. (2023). Pointless topologies. In R. Groß & R. Jordan (Eds.), KI-Realitäten: Modelle, Praktiken und Topologien maschinellen Lernens (pp. 351-364). transcript Verlag.

Cavia, A. A (2022, July 15). Inhuman Intelligence [Audio podcast episode]. In Interdependence. Patreon.

Chalmers, D. J. (1993). A computational foundation for the study of cognition.

Chalmers, D. J. (2022). Reality+: Virtual worlds and the problems of philosophy. W.W. Norton & Company.

Das, S., Roy, S., & Bhattacharjee, K. (Eds.). (2022). The Mathematical Artist: A Tribute to John Horton Conway. Springer. https://doi.org/10.1007/978-3-031-03986-7

Deutsch, D. E., Barenco, A., & Ekert, A. (1995). Universality in quantum computation. Proceedings of the Royal Society of London. Series A: Mathematical and Physical Sciences, 449(1937), 669-677. https://doi.org/10.1098/rspa.1995.0065

Fazi, B. (2018). Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics. Rowman & Littlefield International.

Fodor, J. A. (1980). There’s no computation without representation. Synthese, 44(1), 59-82.

Fuller, M., & Fazi, M. B. (2013, October 28). Goldsmiths Department of Art MA Lectures 2013-14. Goldsmiths, University of London.

Galloway, A. R. (2014). Laruelle: Against the digital (Vol. 31). University of Minnesota Press, p. 76.

Grynbaum, M. M. (1984, April). Computer sculpture. Artforum.

Ha, D., & Schmidhuber, J. (2018). World Models. Zenodo

Kherbek, H. W. Entropia: Childhood of a critic. London: Abstract Supply, p. 12.

Harper, R. (2012). Practical foundations for programming languages. Cambridge University Press.

Institute for Advanced Study. (2013). Homotopy type theory: Univalent foundations of mathematics (p. 11). Institute for Advanced Study.


John, Yohan (Host). (2023). Neurologos [YouTube channel]. YouTube. Retrieved July 25, 2024, from https://www.youtube.com/playlist?list=PLTEtXsHFKZTt_8pbhVMxQYGO1Vx-qPlWA

Kasparov, G. (2017). Deep thinking: Where machine intelligence ends and human creativity begins. PublicAffairs.

Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Viking.

Ludens. (2022). Autonomous worlds. 0xPARC.

Martin-Löf, P. (1972). An intuitionistic theory of types: Predicative part. In H. E. Rose & J. C. Shepherdson (Eds.), Logic Colloquium ’73 (pp. 73-118). North-Holland.

McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5(4), 115-133

Mehta, V. M., & Pitz, W. J. (2023). Comprehensive analysis of renewable hydrogen pathways for energy systems. Science Advances, 9(12), eadh2458.

Olah, C. (2014, April 6). Neural Networks, Manifolds, and Topology.

Pearl, J. (2018). Theoretical impediments to machine learning with seven sparks from the causal revolution. Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining (WSDM ’18)

Piccinini, G. (2015). Pancomputationalism in physical computation. In Physical computation: A mechanistic account (pp. 95-117). Oxford University Press.

Pirkowski, M. (2022). Time Preference and Cooperation. The Jim Rutt Show. Retrieved from https://jimruttshow.blubrry.net/ep-205-matthew-pirkowski-on-time-preference-and-cooperation

Putnam, H. (1967). Psychological predicates. In W. H. Capitan & D. D. Merrill (Eds.), Art, mind, and religion (pp. 37-48). University of Pittsburgh Press.

Rausch, M. (2023, July). Mathcastles zero suite. Outland.

Rausch, M. (2024).

Sellars, W. (1956). Empiricism and the philosophy of mind. In H. Feigl & M. Scriven (Eds.), Minnesota Studies in the Philosophy of Science, Volume I: The Foundations of Science and the Concepts of Psychology and Psychoanalysis (pp. 253-329)

Shin, M., Kim, J., van Opheusden, B., & Griffiths, T. L. (2023). Superhuman Artificial Intelligence Can Improve Human Decision Making by Increasing Novelty. Proceedings of the National Academy of Sciences, 120(12), e2214840120.

Seidler, P., Kolling, P., & Hampshire, M. (2016, May). Terra0: Can an augmented forest own and utilize itself? Berlin University of the Arts, Germany.

Simondon, G. (2017). On the mode of existence of technical objects. University of Minnesota Press.

Vickers, S. (1989). Topology via logic. Cambridge University Press.

Tarski, A. (1933). Der Wahrheitsbegriff in den formalisierten Sprachen (J. H. Woodger, Trans.). [The Concept of Truth in Formalized Languages].

Tokumitsu, M. (2024, June 6). Programming Prayer: The Woven Book of Hours (1886–87). The Public Domain Review.

Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1), 230-265.

Voevodsky, V. (2014). The origins and motivations of univalent foundations. Institute for Advanced Study.

Wolfram, S. (2023, October 27). How to think computationally about AI, the universe and everything [Video]. TED.
