1 Introduction

New developments in artificial intelligence (AI) have given rise to ethical and societal controversy. Technologies created using AI afford unprecedented technical capabilities and stand to transform a wide variety of social institutions and organizations. In response, a societal debate has emerged aimed at addressing the implications of AI technologies in their development and use. The dominant framing of this debate maintains the notion that AI should be ‘human-centered,’ whereby consideration for human needs and wellbeing ought to replace an ethos that looks solely at scientific and technical progress. Indeed, the call for ‘human-centered AI’ (HCAI) is extensive in current AI ethics and policy discourses. As an example, we may point to the development of the European Union’s AI Strategy, comprising ethical guidelines, policy and investment recommendations, and the legal framework of the AI Act, throughout which HCAI is employed to indicate the desired direction of AI development and deployment in the European market (HLEGAI 2019a, b). Beyond the specific context of European governance, the global conversation about AI has similarly adopted HCAI as a broad framing device to indicate the importance of taking human needs into consideration in the development and deployment of AI technologies, without necessarily adhering to a strict definition (Capel and Brereton 2023; Schmager et al. 2023).

Against the background of the significant and concerning harms to human beings that AI systems have already caused and increasingly pose, HCAI appears as a noble and perhaps even imperative initiative. These concerns include the potential of AI technologies to cause mass unemployment, exacerbate discriminatory biases, and even threaten the foundations of democracy (Frey and Osborne 2017; Ferrara 2024; Coeckelbergh 2024). However, when approached from the broader context of the global climate crisis, the idea of ‘putting humans at the center’ seems misplaced. The consequences of anthropogenic climate change, involving the depletion of natural resources, the increase of atmospheric temperatures, the loss of biodiversity, and the collapse of natural ecosystems, have come about precisely because human beings consistently put themselves at the ‘center’ of their worldview, and prioritize their own needs and wellbeing over those of the natural world. This attitude of self-prioritization is an expression of human exceptionalism, which refers to the idea that humans are unique beings and that their needs are more important than those of all other beings. Regardless of the philosophical foundations of this belief system, our current time reveals human exceptionalism as untenable. The needs and wellbeing of the natural world can no longer come after those of its human population, lest we march into total planetary collapse.

The call for HCAI, even if it aims to serve the sustainability of the ecosystems of planet Earth, might therefore have contrary effects and support harmful practices, if the attitude of human exceptionalism is not questioned. While the ‘human-centeredness’ called for in HCAI does not directly reflect this human exceptionalism (it is rather a response to the ‘techno-centered’ state of AI research and industry), it nonetheless falls victim to an anthropocentric way of thinking by emphasizing human concerns and neglecting the moral status of nonhuman nature. In this, current HCAI discourse aligns with the broader AI ethics discourse, which largely overlooks AI’s impact on nonhuman animals and its ecological footprint (Hagendorff 2022). This is problematic, not only because of the conceptual untenability of human exceptionalism, but also because of the growing awareness and evidence of the substantial impact that AI technologies have on nonhuman animals and the natural environment, from their greenhouse gas emissions and high energy consumption to enabling animal mistreatment and perpetuating speciesism (van Wynsberghe 2021; Hagendorff et al. 2023). Together, the urgency to reckon with all forms of human exceptionalism and the acknowledgement of the findings presented above reveal the limits of HCAI as a framing device in AI ethics, and invite consideration of moving beyond it in the debate about AI and its implications.

In this paper we propose to move beyond HCAI by investigating the relationship between AI technologies and the natural world, through the concepts of poiesis and mimesis, concluding with the articulation of ‘bio-centered AI’ (BCAI). BCAI takes the biosphere of planet Earth, rather than human beings alone, as the basis for our moral considerations regarding the development and deployment of AI. The notion of the biosphere is often associated with scientists Eduard Suess and Vladimir Vernadsky, who coined and popularized the term, respectively (Hutchinson 1970). However, we do not employ the term in the strict sense of the scientific theories developed by Suess or Vernadsky, and instead take it to refer to the total sum of all ecosystems that support life on Earth. We have selected it as the grounding concept of BCAI due to its emphasis on the interconnectedness of matter and living beings in the totality of the world. Following this, our analysis of the AI-biosphere relation does not privilege one aspect or level of the biosphere over another, but takes each as part of the whole. In this sense, the move from HCAI to BCAI is as much a decentering as it is a recentering, as it does not commit to a singular center above others, but accounts for the existence of multiple centers as co-supportive (Midgley 1992; Weston 2006).

We do not merely suggest that AI should support the wellbeing of the biosphere rather than the wellbeing of humans alone (as seems to be implied by current HCAI approaches), but moreover argue that the biosphere provides opportunities to enrich the concept of artificial intelligence as such. We make this argument through a philosophical investigation of the AI-biosphere relation, guided by the concepts of poiesis and mimesis. These concepts are inspired by Aristotle’s philosophy of technical action, and indicate that technology development involves both poiesis, meaning the ‘making’ and constructing of objects that are new to nature, and mimesis, meaning the ‘mimicking’ of or taking inspiration from nature (Aristotle 1968, 194a20-25). In this paper, we take these insights as inspiration to investigate the relationship between AI and the biosphere in which it is embedded in three main sections. Section two views AI as a kind of poiesis to analyze the ways in which it depends on and shapes the biosphere. Building on this analysis, section three argues that current AI development constitutes an anthropocentric and exploitative reduction of the biosphere, and reflects on the emerging concept of ‘Sustainable AI,’ critiquing it for leaving the assumption of AI as anthropo-poiesis unchallenged. Hereafter, section four views AI as a kind of mimesis to suggest an alternative and progressive understanding of the AI-biosphere relation that provides the basis for BCAI. We conclude by discussing the key components of our analysis and argumentation, and reiterating the pressing need for a recentering of the current AI debate, for which we propose the concept of BCAI.

2 AI as Poiesis: Making the Biosphere

The concept of poiesis points to the fact that we do not merely live in a natural environment, but actively shape it, taking materials from the Earth and transforming them to achieve our own ends. This applies to simple technological interventions in the environment like creating dirt paths between significant places in a community or combining sticks to build a shelter, all the way to the development of modern ‘high’ technologies such as airplanes and smartphones. Every such poietic intervention and act of making is technological. Via technology, the biosphere is altered by human activity in multiple ways. Most directly, natural materials are extracted from the Earth, causing transformations in the ecosystems to which those materials originally belonged. For example, a living tree is cut down and processed into wood for building a house, also affecting other beings that depend on this tree for survival. Besides this, the biosphere is affected by the creation of by-products. For example, wood that is burned to cook a meal emits carbon dioxide, and chemical waste accompanying production processes pollutes surrounding ecosystems. Finally, the creation of a new technology also shapes its environment by adapting it to the new actions and relations that the technology affords. Once a new technology, from dirt paths all the way to AI systems, is in place, the environment within and upon which it is employed is adapted to it.

In this section, we suggest that viewing AI as a kind of poiesis allows us to reveal how the biosphere is affected by it. By viewing AI as poiesis, we can ask how the development of AI ‘makes’ the biosphere. This question has two possible readings. We may ask how AI technologies are made from the biosphere by requiring the extraction of material resources, as well as how the deployment of AI technologies shapes the biosphere in return by creating new ways of relating to it and imposing a new understanding of it. Both readings are important for the purposes of articulating an analysis of the AI-biosphere relation. In answering these questions, we are first led to reflect on the fact that AI technologies, despite popular imaginaries, are material technologies made from earthly materials. The development and maintenance of AI affect the biosphere through their material costs and ecological footprint. Hereafter, we consider how the implementation of AI technologies in turn impacts the biosphere, looking at different ways in which AI technologies adapt the natural environment to and within which they are deployed.

2.1 The (hidden) Materiality of Artificial Intelligence

The first and most basic aspect of the AI-biosphere relation that is revealed through the notion of poiesis is a relation of material dependency. All artifacts are made of, from, and in the biosphere in the sense that they are created using materials that come from the natural world. The development of technical objects depends on the transformation of matter (organic or inorganic) into material resources and energy. Elements of the biosphere, such as trees and iron ores, are removed from their place in an existing ecosystem and consequently processed into materials, such as wood and steel, that human beings use to fabricate technical objects, such as tables and cars. This process reveals the dependency of technology on the biosphere, and points towards the seemingly trivial fact that all technologies are material. Surprisingly, however, philosophical treatments of technology often overlook this materiality, instead focusing on how artifacts are designed and used rather than on how they are manufactured (Winner 2014; Roos 2021). As a result, research in the field has had its gaze averted from the way in which the biosphere substantiates technical objects, which arguably prevents it from sufficiently engaging with the ecological crisis of our current time. Implicitly, technology has come to be understood as ontologically immaterial. We explicitly position ourselves against that understanding by emphasizing the material dependency of technical objects, and follow the call for a terrestrial turn in philosophy of technology, which maintains that all technology comes from the biosphere of planet Earth (Lemmens et al. 2017).

When it comes to philosophical and ethical thinking about AI in particular, the disregard for materiality is exacerbated by several other factors that further conceal it. Among these factors is the language that surrounds AI and its digital infrastructure. For example, we say that data is stored in ‘the cloud,’ seemingly implying an ethereal, immaterial space without geographic location, while in reality the storage of data takes place in large datacenters with a significant ecological footprint (Amoore 2018). This myth of immateriality, by which AI is tacitly understood as independent of its ecological conditions, is not only a result of the metaphors used in the discourse surrounding AI, but is also supported by its sociotechnical organization. The possibility of wireless transmission of information between devices, the miniaturization of microelectronics, and the invisibility of the digital infrastructure working in the background that supports the functionality of individual AI systems all hide the materiality of AI, giving rise to what has been called “fantasies of dematerialization” (Taffel 2024).

Despite such fantasies, AI technologies are material objects embedded in the material environment of planet Earth. This environment is not part of the digital infrastructure, but concerns the physical Earth as a condition of possibility of wireless transmission of data between devices. To understand this materiality, we should first acknowledge that ‘AI,’ which typically refers to a set of computational approaches and techniques like Machine Learning, machine reasoning, and robotics (Footnote 1), depends on a variety of digital devices and a large system of digital infrastructure for its development, implementation, and deployment. This leads us to consider computing parts, sensors, batteries, datacenters, internet cables, satellites, and all the materials that make up those elements, such as silicon, lithium, copper, and steel, as constituting the materiality of AI (Crawford and Joler 2019). Furthermore, when that infrastructure is in place, training, tuning, and running AI systems requires additional material resources, notably for power and cooling, in the form of energy (solar, wind, coal, and natural gas) and water (Luccioni et al. 2024). The resources required for the digital infrastructure as well as for training and running particular systems reveal the immense materiality of AI that is often overlooked in our philosophical and ethical engagements with it.

Having cursorily overviewed the material resources required for AI, we continue by emphasizing that those resources express a relation of dependency to the biosphere. It is through the extraction of natural materials that are manufactured into resources for the development of AI that AI technologies are made from and of the biosphere. In this process, existing ecosystems are not left unaffected and are in fact often harmed. An example can be found by looking at the extraction and processing of lithium, a critical raw material for the construction of rechargeable batteries, and by extension for digital devices that support AI systems and store data. Due to the locations in which it occurs and its material properties, the mining and processing of lithium require massive amounts of water and produce a brine by-product, leading to irreversible damage to the local ecosystem (Tapia and Peña 2021). On a global scale, the widespread adoption of energy-intensive AI technologies (especially generative AI) puts considerable stress on national power grids. This requirement for energy furthermore leads to an increase in carbon emissions that threatens the achievement of climate goals set to limit global warming (Halper 2024). The way biospheric elements are extracted for the development and use of AI comes at the cost of damage to both local ecosystems and the global health of the planet.

2.2 AI Shapes the Biosphere

The second aspect of the AI-biosphere relation that is revealed by the notion of poiesis is how the biosphere is shaped, or made, through the implementation of AI. The biosphere is not only affected in the manufacturing and maintenance of AI technologies, but also in how those technologies are implemented. AI systems shape and transform the biosphere in a variety of ways. In the following paragraphs, we consider first how AI technologies are deliberately introduced to shape the biosphere before addressing their indirect or accidental impact on the biosphere.

Where the biosphere is deliberately shaped by the means of AI, this is often done with the explicit goal of promoting sustainability, combatting climate change, and fighting biodiversity loss and ecological degradation (Vinuesa et al. 2020; Tironi and Lisboa 2023). The basic idea behind these initiatives is that AI can support sustainable decision-making by offering insights into—and potentially automatically altering—the biosphere. These promises are often framed in the context of ecosystem management, conservation, and restoration practices (see e.g. Isabelle and Westerlund 2022). AI applications can be used, for example, to monitor regional biodiversity status, forest health, pollution levels, wildlife behavior, and human interventions in ecosystems like fishery and logging in real time (Biu et al. 2024). Furthermore, AI systems can be used to predict future states of the biosphere based on gathered data. For example, AI systems are used to predict locations where illegal logging or poaching is likely to occur (Dorfling et al. 2022). Additionally, AI can be used to predict the success of human conservation and restoration efforts. This offers environmental policymakers the option to experiment with future scenarios and make trade-offs regarding the success of diverse policy directions and choices. AI can also recommend decisions and interventions, for example on when to schedule energy use so as to maximize the share of renewable energy consumed relative to other sources (Felfernig et al. 2023). Finally, AI can be used to intervene automatically in the biosphere, and thereby ‘make it’ in the most literal sense. An example of an AI-driven intervention on behalf of the biosphere is the ‘COTSbot,’ which stands for ‘crown-of-thorns starfish robot.’ This is an underwater drone equipped with machine vision and a pneumatic injection arm to autonomously trace and kill the invasive crown-of-thorns starfish in the Great Barrier Reef (Zeldovich 2018).

Besides these examples of deliberate AI-enabled interventions into the biosphere, the implementation of AI also shapes the biosphere in unintentional and indirect ways. In Sect. 2.1 we already pointed out how the material costs of the development and maintenance of AI technologies harm the biosphere through extractivist practices. Here we emphasize that AI not only relies on such extractivism, but also reinforces it in its implementation, which produces increasing demands for material resources and energy (Crawford 2021). Related to this, though less directly, the introduction of AI technologies in society has been linked to growing consumerism, through which it further stresses the global biosphere and can affect specific ecosystems (Brevini 2021). We may also look at the way in which various elements of the biosphere are affected by biases that are unwittingly built into AI technologies. For example, in the same way that human groups can be disadvantaged by discriminatory biases, so too can nonhuman animals be marginalized by speciesist biases in recommender systems, image generators, or large language models, which in turn normalize violence against them (Hagendorff et al. 2023).

Whether deliberate or accidental, we emphasize that the biosphere is greatly affected by AI technologies, both in their development and implementation. By viewing AI as poiesis, we can reveal how AI ‘makes’ the biosphere, both in the sense that it involves the manufacturing of earthly elements into material resources and energy for the functioning of AI technologies, and in the sense that its implementation alters the biospheric make-up thereafter. In Sect. 3.1 we conceptualize this particular kind of poiesis as anthropocentric and reductive, through which we are able to reflect on the currently prominent framing devices in the AI ethics landscape of HCAI and Sustainable AI.

3 Human-centered & Sustainable AI: (anthropo-)poiesis Unchallenged

This section builds on our analysis of the AI-biosphere relation presented in the previous section, to reflect philosophically and critically on the kind of poiesis currently dominant in our thinking about the development and implementation of AI technologies. We argue that this is a kind of anthropo-poiesis, meaning that AI development currently serves specifically the human making of the biosphere, consequently imposing an anthropocentric reduction of the biosphere as passive and manufacturable (Footnote 2). Hereafter, we reflect on the concepts of HCAI and Sustainable AI as framing devices and argue that neither adequately challenges this reduction, and that both in fact reinforce it.

3.1 (Anthropo-)poiesis and Human Exceptionalism

By viewing AI as poiesis, we can uncover and reveal the problematic nature of the AI-biosphere relation as it currently exists. We argue that AI relates to the biosphere through an anthropocentric and exploitative reduction, where human beings are cast as ‘maker’ and the biosphere as ‘makeable.’ This follows from both the materiality of AI, wherein biospheric elements are manufactured into material resources for human use and to address human wants, and from how AI is implemented as a means to shape the biosphere in the name of greater efficiency, reflecting a desire to exert control over it and manage it. This in turn imposes an understanding of the biosphere as controllable and passive, which feeds into extractivist ways of thinking. The Earth is viewed as a well of resources to be extracted for human goals, both in the material realities of AI development, and in the way the biosphere is acted upon by AI technologies once they are developed. In her book Atlas of AI, Kate Crawford describes the relationship between AI and the extraction of natural resources in more detail, demonstrating that AI both relies on and reinforces many kinds of extraction, including the depletion of natural resources (Crawford 2021).

This problematic and self-reinforcing perspective on the biosphere is related to how AI technologies function at a basic and general level. Systems that use AI, and in particular those built with the approach of Machine Learning, function by identifying statistical patterns and distributions in large datasets. Data is therefore a cornerstone of AI and reveals how AI technologies adapt the environment within which they are implemented (Blok 2023). Most important to note is that a dataset is not a direct, transparent or neutral representation of its environment, and its construction entails a complex and extensive process (Gitelman 2013). In the construction of a dataset, an environment is not so much reflected as it is interpreted according to the inner workings of a computer model, as computable and fit for statistical analysis. It is in this process that AI can be said to “convert an infinitely complex universe into a Linnean order of machine-readable tables” (Crawford 2021, p. 221). In the field of critical AI studies, this reduction of world to dataset has been criticized for the reason that it denies the agency, autonomy, and spontaneity of social agents in the social world (see Birhane 2021; McQuillan 2022). What is not often acknowledged is that the same argument can be made regarding the natural world. If AI technologies reject human spontaneity when applied to the social world, so too do they reject the spontaneity, complexity and heterogeneity of the biosphere when applied to natural phenomena.
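The reduction of world to dataset can be made concrete with a minimal sketch. Everything in it is hypothetical and ours for illustration only (the feature names, the observation values, and the datafy helper do not come from any cited system); it shows only the general mechanism: heterogeneous field observations are flattened into a fixed-width numeric table, the sole form in which statistical pattern-finding can operate on them.

```python
# A minimal, hypothetical sketch of "datafication": heterogeneous field
# observations are forced into a fixed, machine-readable schema.
# Whatever does not fit the schema is simply discarded.

FEATURES = ["canopy_cover", "soil_moisture", "bird_calls_per_hour"]

observations = [
    {"canopy_cover": 0.8, "soil_moisture": 0.35, "bird_calls_per_hour": 14,
     "field_note": "unusual fungal bloom near the creek"},
    {"canopy_cover": 0.4, "soil_moisture": 0.10, "bird_calls_per_hour": 3,
     "field_note": "recent logging visible at plot edge"},
]

def datafy(obs: dict) -> list:
    """Reduce one observation to a fixed-width numeric row.

    Keys outside FEATURES (e.g. qualitative field notes) are dropped:
    the model can only 'see' what the schema admits.
    """
    return [float(obs[f]) for f in FEATURES]

dataset = [datafy(o) for o in observations]
# Each row is now commensurable and fit for statistical analysis,
# but the qualitative remainder ("field_note") has vanished.
```

The point of the sketch is not the code itself but what it omits: the qualitative field notes, which resist the schema, leave no trace in the resulting table, however consequential they may be for the ecosystem being described.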

The way that AI adapts its environments into data, combined with the pervasive belief of human exceptionalism, leads to the anthropocentric and exploitative reduction of the biosphere in current AI development and implementation. Both biospheric elements and human beings (which of course also belong to the biosphere) are subject to datafication, but when coupled with an anthropocentric distinction between humans as active and the biosphere as passive, the reduction of the biosphere in the service of human needs and wants is justified. As such, we find that the development and deployment of AI serves a worldview where the human is seen as an active agent who ‘makes’ as homo faber, and the biosphere is seen as passive, allowing itself to be made by the human. From this, we can understand how AI is guided by the human’s desire to extract natural resources and shape the environment to their will. AI as poiesis is therefore better understood as AI as anthropo-poiesis, meaning that AI development serves human making and functions as a tool that allows human beings to manufacture their environment, while viewing the biosphere as manufacturable.

This reflects a general attitude that philosophers have described as characteristic of the Anthropocene, which expresses the belief that the biosphere and all of its parts can (and perhaps must) be mastered, pacified, and constructed through human intervention (Neyrat 2018; Cera 2023). This attitude may be pervasive in modern society, both in the development of AI technologies and elsewhere, but must nonetheless be recognized as fundamentally flawed. It expresses a dualism between human beings and other elements of the biosphere, viewing humans as superior and thereby justified in controlling the biosphere. This is an artificial dualism based on a logic of mastery, which ignores that human beings are both part of, and depend on, the biosphere (Plumwood 2002). Moreover, it is false to claim that the biosphere (outside of human life) is entirely passive, as the biosphere contains many elements that are poietic in shaping themselves and their environment in a variety of ways. Finally, the assumption that the biosphere is manufacturable implies that total knowledge over the biosphere is possible. Instead, we maintain that the biosphere is essentially a terra incognita, ontologically and epistemically unfathomable and unpredictable (Blok 2019).

The discourse that has adopted ‘human-centered AI’ as a framing device does not question its commitment to human exceptionalism and unequivocally assumes AI as anthropo-poiesis. While those championing HCAI propose it as an attempt to improve the state of AI research and development, AI itself continues to be viewed as a means for humans to shape the world according to human needs and values. This is already apparent from its core idea of addressing ethical concerns by putting humans at the center, but also from how it proposes to do so. The dominant HCAI discourse is focused on suggesting interventions in the design of user interfaces, with the aim to “increase human performance, while supporting human self-efficacy, mastery, creativity, and responsibility” (Shneiderman 2020, p. 495). Unsurprisingly, the agency of other elements of the biosphere is not considered in the HCAI discourse. On the rare occasion that sustainability is introduced as a concern, it is still framed as only instrumentally valuable to the greater goal of human wellbeing (e.g. Ozmen Garibay et al. 2023). Our analysis of the AI-biosphere relation as anthropo-poietic demonstrates the harm of human exceptionalism in HCAI and provides further justification for moving beyond its anthropocentric framing.

3.2 The Promise and Pitfalls of Sustainable AI

It is relatively straightforward to demonstrate the deficiency of HCAI as a framing device with regard to its treatment of the biosphere, but we must also consider another emerging concept in the field of AI ethics that appears more promising. This is the concept of Sustainable AI, which figures as the core notion of a growing community of academic and industrial researchers seeking to understand the relationship between AI development and questions of sustainability. In contrast to HCAI, as well as AI ethics broadly speaking, Sustainable AI expresses an explicit focus on environmental concerns. In this, it considers both opportunities to employ AI technologies for sustainability goals, such as the UN Sustainable Development Goals (SDGs), as well as the environmental costs of AI technologies themselves. This is captured by Aimee van Wynsberghe’s distinction between “AI for sustainability” and “the sustainability of AI” (van Wynsberghe 2021). This explicit focus on environmental concerns renders Sustainable AI promising in providing a remedy to the problematic nature of the AI-biosphere relation. Sustainable AI does not merely propose the addition of consideration for the biosphere to existing AI ethics approaches but rather signifies a new “wave” in AI ethics, performing a turn towards a structural approach to uncovering and analyzing ethical issues in relation to AI (Bolte and van Wynsberghe 2024). If Sustainable AI is indeed capable of addressing structural issues such as that of the exploitation of the biosphere in AI development and implementation, it may also successfully challenge the anthropo-poietic character of the AI-biosphere relation.

However, while we do not wish to undermine the value and importance of much of the research that is currently being done under the banner of Sustainable AI, we argue that it does not offer a satisfactory response to the problematic AI-biosphere relation that we have described in this paper so far. This is because, despite its promise of constituting a structural approach, Sustainable AI is currently accompanied by the dominant narrative of eco-modernism and techno-solutionism (Schütze 2024). This narrative pushes for technological development as a solution to environmental problems and does not question the basic societal desirability of technical progress, industrialization, and modernization. Sustainable AI as a concept is attractive to eco-modernist and techno-solutionist narratives because it allows for a framing of AI technologies as a potential solution to the climate crisis, rather than being part of the broader economic and socio-material conditions that contribute to it (Brevini 2023). Instead of making us aware of the exploitative AI-biosphere relation, then, Sustainable AI actually obscures this relation through its techno-solutionist framing. Coopted by eco-modernist and techno-solutionist narratives, Sustainable AI comes to be conceptualized as “the technological solution to the climate crisis from a techno-solutionist vantage point” which “inadvertently perpetuates and reinforces existing structures of socio-economic exploitation and exclusion” (Schütze 2024, 1–2).

We can analyze the connection between Sustainable AI and techno-solutionism according to the notion of anthropo-poiesis. The belief that AI presents, primarily, a solution to the climate crisis presupposes two implicit philosophical positions. First, it hinges on an instrumentalist conceptualization of technology, according to which technical artifacts are means to human ends. Second, it sees the climate crisis as a technical problem, and in doing so sees the biosphere as a system fit for technological intervention. Both positions are dubious and often challenged in the relevant bodies of philosophical literature, which we will not go over here. Instead, we point out that this way of thinking is fundamentally grounded in a perspective on technology as anthropo-poiesis; it views technology as a means for humans to ‘fix’ the climate, to ‘repair’ the global ecosystem, and in doing so to shape the biosphere according to their needs. Even if those needs are sometimes in line with planetary health, it proceeds from the notion that human beings are capable of and justified in making the biosphere, which in turn is there to be made by humans. The attitude that we introduced in Sect. 3.1 as characteristic of the Anthropocene returns here. The biosphere is manufacturable, and in fact should be manufactured to mitigate global warming. Sustainable AI expresses this same attitude when it presents AI as the technical solution to the climate crisis and consequently leaves the anthropo-poietic character of the AI-biosphere relation unquestioned.

We recognize that both HCAI and Sustainable AI are often (though not necessarily always) professed and maintained with good intentions. HCAI discourses may be motivated by the genuine will to avoid harm done to humans by AI technologies, and Sustainable AI by the noble desire to reckon with global warming and alleviate the suffering it brings about. But neither can support the concept of bio-centered AI if it is to constitute a rethinking of the anthropocentric and exploitative AI-biosphere relation. Insofar as neither HCAI nor Sustainable AI questions the notion of AI as anthropo-poiesis, neither successfully challenges human exceptionalism. We must therefore go beyond what these concepts can offer and reflect fundamentally on the AI-biosphere relation as well as the concept of the biosphere that it presupposes. The next section hopes to provide an avenue through which we can do so, by introducing the notion of AI as mimesis.

4 AI as Mimesis: Imitating the Biosphere

In this section we present an analysis of AI as mimesis, which provides a progressive understanding of the AI-biosphere relation. Here, we start from the basic idea that AI is not only poietic, but moreover mimetic in its imitation of ‘natural’ intelligence. As indicated in Sect. 1, technology development can be viewed as poiesis insofar as it constitutes the making of objects new to their environment. This process of making is, however, not entirely separate from what already exists: it also relates to its environment by imitating and taking inspiration from what is already there. This is the dimension of mimesis, by which technical activity is understood as making something new precisely by mimicking its environment. Mimesis, which in the case of technical activity may be understood as a special kind of poiesis, allows us to investigate how the development of a technical object reflects a relationship to the natural world. In this section, we are particularly interested in bio-mimesis, which emphasizes the way in which the development of technical objects such as AI technologies takes inspiration from biospheric phenomena and processes. Viewing the development of AI as bio-mimesis opens up a new, progressive avenue for investigating the relationship between AI and the biosphere, one that makes explicit how the biosphere figures as a source of inspiration for the development of AI. We draw upon the emerging approach in design and engineering known as biomimicry, which opposes the exploitation of the biosphere and is instead characterized by its exploration and the attempt to learn from it (Benyus 2009; Blok and Gremmen 2016).
Our hypothesis is that a reorientation of AI from the assumption of anthropo-poiesis towards a progressive understanding of bio-mimesis enables us to move beyond an exploitative and anthropocentric AI-biosphere relation and engage instead in an explorative and bio-centered concept of AI. Such a concept integrates the intrinsic value of the biosphere with the technicity of AI systems, and can therefore truly claim to be ecologically sustainable.

4.1 Artificial, Human, and Natural Intelligence

We initiate our investigation of AI as mimesis by reflecting on the term ‘artificial intelligence’ itself. Admittedly, the basic idea of creating thinking machines is much older than this name and can be traced back millennia (see Cave et al. 2020). Still, a brief excursion to the origin of the term helps us uncover its inherent mimetic character. The term was introduced in 1955 by mathematician John McCarthy, who proposed it as the name of a field of research “on the basis of the conjecture that every aspect of intelligence can in principle be so precisely described that a machine can be made to simulate it” (as cited in Nilsson 2010, p. 77). Here, we see a basic conceptualization of AI as the simulation of human intelligence, or its technical mimesis, which assumes that non-technical, or natural, intelligence is something that already exists in the world and can be imitated. Probably aware of the difficulty of defining what such ‘intelligence’ is, and what it would mean for a machine to simulate it, McCarthy clarifies the project as that of “making a machine behave in ways that would be called intelligent if a human were so behaving” (as cited in Nilsson 2010, p. 77). From this, we recognize that ‘natural’ intelligence is, in fact, human intelligence. This conceptualization of AI is not unique to McCarthy and is still common in contemporary AI discourses, which similarly view AI as the attempt to create technical systems that do things humans require intelligence to do (e.g., Müller 2021; Gordon and Nyholm 2021). Following this basic definition, the project of AI is the technological mimesis of human intelligence.

The identification of intelligence in general with human intelligence in particular is reminiscent of a Cartesian anthropocentrism regarding the mind, according to which human beings are unique in their capacity for intelligence. Other beings, such as nonhuman animals, lack intelligence and instead behave only according to machinic impulses. This doctrine, as pervasive in the background of popular and academic discourses as other types of anthropocentric thinking, is untenable considering the wealth of scientific evidence that supports the presence and complexity of nonhuman intelligence throughout the biosphere (e.g. Narby 2006). Intelligent behavior has been observed in primates, birds, fish, and various kinds of plants, and has been argued to take place at the level of ecosystems such as forests as well. The field of AI, however, predominantly focuses on human intelligence, decision-making, and language use. In this, the field embraces a narrow view of intelligence and has ignored its other expressions that can be found in the biosphere.

Despite the prevailing focus on human intelligence, the field of AI has at several moments in its history taken inspiration from biospheric phenomena beyond human intelligence. Notably, the development of artificial neural networks in the early years of the field was inspired by studying the perception of frogs, and the later development of convolutional neural networks, such as AlexNet, was inspired by studying the perception of cats (Pasquinelli 2023). Similarly, genetic algorithms and the notion of swarm intelligence, referring to different approaches in the design of self-organizing systems, were inspired by the phenomena of natural evolution and the behavior of bee swarms or flocks of birds, respectively (Holland 2010; Beni and Wang 1993). In these cases, we can see AI as a clear expression of bio-mimesis, taking inspiration from various elements and processes in the biosphere and reconstructing them technologically, beyond merely seeking to simulate human intelligent behavior. Around the turn of the century, a more conscious turn towards the biosphere began to take place in the field of AI. This movement, called bio-inspired AI, reflects an expansion from a focus on human cognitive reasoning towards a wider range of phenomena in the biosphere, and is said to reflect “a philosophical revolution where humans are no longer at the center of the biological universe” (Floreano and Mattiussi 2025, xii).

Here, the concept of mimesis provides access into how the biosphere is understood as a source of inspiration in AI research and development. The way that bio-inspired AI practitioners take inspiration from biospheric elements and processes reveals, however, that they maintain a mechanistic and passive understanding of the biosphere. They take natural forms, functions, or mechanisms (e.g. swarm behavior) as inspiration for solving technical problems (e.g. controlling unmanned vehicles), all while putting increasing pressure on Earth’s carrying capacity; they do not take inspiration from, for example, the fact that natural forms and functions are sustainable by nature (they run on sunlight, are biodegradable, and are resource efficient). Following biomimicry scholarship, we may call this way of relating to the biosphere exploitative: the biosphere is not seen as valuable in itself but as merely having instrumental value to human beings (Benyus 2009). The biosphere is assumed to be a ‘technical model’: a source for the discovery of clever techniques that can be abstracted and applied in new designs for solving human problems (Gerola et al. 2023). As such, while proponents of bio-inspired AI may frame the approach as non-anthropocentric, the underlying relation to the biosphere is, in fact, anthropocentric in that the biosphere is regarded only insofar as it provides value to human beings.

4.2 Bio-mimesis for an Explorative AI-biosphere Relation

In contrast to the exploitative relation to the biosphere, which is animated by a desire to dominate and control our collective natural environment, the philosophy of biomimicry provides opportunities for an explorative relation that explicitly seeks to learn from the biosphere’s own sustainable processes and translate them into technologies that are regenerative by design (Blok 2022). While admittedly a somewhat vague notion, ‘exploration’ indicates a mode of relating to the biosphere that does not seek answers to human questions, thereby interpreting the biosphere according to the scheme of human problem solving, but that attempts to understand it in its own right. Biomimicry thus involves a “quieting” of our technical cleverness in favor of a respectful “listening” to natural forms, processes, and systems (Benyus 2009, p. 287). For the sake of the argument presented in the current paper, biomimicry opens up a promising avenue for the articulation of BCAI that offers hope of overcoming the anthropocentrism and associated techno-solutionism of the contemporary AI discourse. The question it raises is this: how can the development and implementation of AI be inspired by the biosphere through exploration rather than exploitation?

One way to answer this question is to broaden the notion of ‘intelligence’ itself. As briefly discussed in Sect. 4.1, the field of AI generally adheres to a narrow Cartesian conceptualization of intelligence as unique to human beings. Beyond this, the understanding of intelligence employed in the field of AI is also narrow in its focus on isolated, individual problem solving, which overlooks embodied, embedded, dynamic, and distributed aspects of intelligent action (Dreyfus 2009; Thompson 2010). Because of this, a call for expanding our understanding of intelligence, in the field of AI and beyond, has begun to emerge (Frank et al. 2022; Bridle 2023; Gleiser 2025). We affirm this call and believe that the concept of BCAI, which is to be grounded in an explorative relation to the biosphere, ought to involve a broadening of the notion of intelligence: from a kind of isolated problem solving unique to humans towards a natural phenomenon that occurs throughout the biosphere at different places and scales.

Another way to respond to the question of explorative bio-mimetic AI is by seeking to follow the unifying patterns of the biosphere, in an attempt to integrate the technicity of AI with the principles of life. Exemplary instances of these principles, as articulated by biomimicry scholar Janine Benyus, are: “Nature runs on sunlight,” “Nature recycles everything,” and “Nature taps the power of limits” (Benyus 2009, p. 7). We should point out that these principles are not universally accepted in the biological sciences and can be nuanced (Lecointre et al. 2023). We therefore do not wish to treat these principles as dogma, but rather propose to consult them as a starting point for reimagining the AI-biosphere relation. At the very least, the idea of aligning AI development and deployment with even just the three principles above presents a stark contrast with current AI development and implementation practices, by raising concern for energy sources, waste production, and ecological limits. As such, if BCAI starts from these principles, it affords a direction for AI that radically opposes the eco-modernist and techno-solutionist perspectives that plague the notion of Sustainable AI and ultimately sustain only the status quo. The more AI is embedded in, and mutually beneficial to, the natural ecosystems of planet Earth, the more its design can claim to be truly sustainable and bio-centered.

5 Conclusion

Departing from the observation that the current debate surrounding AI and its impact reflects an anthropocentric thinking in its neglect of the moral status of nonhuman animals and the natural environment, this paper has suggested a turn from human-centered towards bio-centered AI (BCAI). The concept of BCAI is to be grounded in a philosophical investigation of the AI-biosphere relation. To guide this investigation, we first introduced the concept of poiesis, by way of which we analyzed how AI technologies shape the biosphere through their dependency on biospheric materials as well as their monitoring, controlling, and intervening in biospheric processes. From this analysis we drew the conclusion that in the current AI-biosphere relation, AI is understood as a tool for human beings to make the biosphere according to their needs, implying a relationship of anthropo-poiesis. This is problematic because it reduces the biosphere to being passive and manufacturable, and upholds a relation of exploitation. The prominent framings of HCAI and Sustainable AI, despite good intentions, can be critiqued for not adequately challenging this anthropo-poietic relationship. We introduced the concept of mimesis to reveal another dimension of the AI-biosphere relation, shedding light on AI technologies as imitating and taking inspiration from the biosphere. Here, our analysis showed that while AI technologies are often inspired by biological lifeforms, and can therefore be understood as bio-mimetic, this inspiration is currently generally exploitative in its treatment of the biosphere as a technical model. Through engagement with the philosophy of biomimicry we argued that a progressive AI-biosphere relation should establish an explorative relation of mimesis, whereby the biosphere is not merely solicited for solutions to human problems, but acknowledged as inherently intelligent and valuable.

We conclude the paper by reiterating the pressing need for a change in the current societal debate about AI. The position of human exceptionalism, whether held explicitly or assumed implicitly, is untenable at a moment when our relation to the biosphere constitutes one of the most urgent moral and political challenges we face. The ongoing conversation about AI and its impacts does not exist outside of this context but, as our analysis has shown, is deeply involved with the anthropocentric and exploitative logic of exceptionalism. A shift in the discourse towards a progressive relation to the biosphere is critical. We hope that our efforts in articulating the concept of BCAI aid in bringing about this shift.