ChatGPT and Humans: inquiring about the moment of contact 

Introduction 

ChatGPT is an online chatbot based on large language models developed by the company OpenAI; it was launched on the 30th of November, 2022. In the short time since its release, ChatGPT has grabbed the public’s attention because of how well it performs tasks and how closely it resembles human interaction. What makes ChatGPT impressive is that it apparently pushes and challenges the boundaries of activities that are often understood as creative and intellectual tasks. Since the release of the software, several million users have adopted it, and for the most diverse applications: from simple curiosity, to generating texts in several domains such as scripts for social media or academic research, to creative production such as writing short stories, poetry, and music.

Given the above, several questions arise from the application and use of ChatGPT. They range from epistemological questions, such as what type of knowledge ChatGPT creates, how relevant that knowledge is, and where it comes from in the first place, to ethical questions, such as what the right use of ChatGPT should be, what its social implications are, and whether ChatGPT promotes the principles of equality and fairness.

All these are relevant questions to ask in order to better understand the implications of ChatGPT and to evaluate its outputs. However, before asking these questions, I think it is relevant to ask what is happening at the very beginning, that is to say, in the moment of contact between the user and the chatbot. As I see it, an interesting phenomenon takes place in the moment of interaction between the human user and ChatGPT. Two independent ontological beings interact, creating a new one, and from this new ontological entity all possible epistemological and ethical questions arise. Therefore, this essay attempts to better understand what being comes into existence in the moment of contact. From this, the following research question is asked: How can we ontologically understand the interaction between humans and ChatGPT?

My argument is that, ontologically, a new form of intelligence takes place during the interaction, and that it can be conceptualized as a hybrid intelligence. In order to support my argument, I first introduce what is meant by hybrid intelligence. Then, in sections two and three, I present the two parties that make up the hybrid intelligence. Section two argues that ChatGPT offers syntactic possibilities. This is shown using Turing’s (1950) principles of machine intelligence and Floridi’s (1999) notion of microworlds. Section three argues that humans offer embodied semantics, a notion built on Wittgenstein’s ideas of language and forms of life. From these, the conclusion follows. I expect to show with this essay that the interaction between ChatGPT and humans creates a hybrid intelligence of embodied textual possibilities.

Hybrid intelligence

The hybrid intelligence that I will be describing is bounded in the domain of language. This is because ChatGPT is an AI developed on large language models, and language is central to human existence, both in thinking and in relating to others. Therefore, the hybrid intelligence that results from the interaction between ChatGPT and humans is ontologically bounded in the domain of language.

The term hybrid intelligence is constituted by two separate words that together form a larger concept. Therefore, in order to define what is meant by hybrid intelligence, we must define the single terms independently. There is a hierarchy between the two: intelligence is the constitutive term, whereas hybrid qualitatively describes the type of intelligence. Research proposes several different theories of intelligence (Zhou et al., 2021). In this context, intelligence is defined in the most general sense as the ability to accomplish complex goals, learn, reason, and perform effective actions within an environment (Dellermann et al., 2019). It follows that human intelligence is defined as the ability to accomplish complex goals based on human characteristics. In this context, the relevant human characteristic is being an embodied agent, an intelligence situated in a body (Johnson, 2008).

On the other hand, artificial intelligence, in the most general sense, is the idea of machines that can automatically accomplish complex goals (Dellermann et al., 2019).

Looking at the second term, hybrid refers to heterogeneity (Dellermann et al., 2019), meaning a complementary combination of capabilities. Therefore, it follows that hybrid intelligence is defined as the ability to achieve complex goals by combining heterogeneous intelligences, accomplishing results superior to what could have been accomplished separately, and continuously improving (Dellermann et al., 2019, p. 640).

The definition implies that heterogeneous and independent intelligences, each with a pre-given ontology, come together to create a new ontological entity. This idea implies the coexistence of different ontologies. Such an understanding of hybrid intelligence resembles the biological concept of symbiosis (Gerber et al., 2020). Symbiosis is a relational idea that is generally used to describe a specific type of interaction between living organisms. Whether it can be extended to non-living entities has long been debated (Brangier & Hammes-Adelé, 2011; Cesta et al., 2016); however, for the purposes of this essay, what is important about symbiosis is that it implies the idea of mutual benefit (Gerber et al., 2020). Therefore, by integrating mutual benefit into the concept of hybrid intelligence, we see the emergence of a totality, and this totality is established through a single principle that unifies the various parts (Katsuhito, 2016). Moreover, from a Hegelian perspective, this principle can be defined as a purpose (Hegel, 2019). It is therefore a shared purpose, the principle that ontologically binds the two entities together into one.

Further, every relation of this being to the other refers back to itself (Katsuhito, 2016). Applied to hybrid intelligence, this means that in the moment of contact the two ontologically separate beings become ontologically one. The new ontology is relational: it adds mutual benefit and a shared constitutive principle, the purpose; and because the purpose remains self-referential for each subject, both sides advance.

Therefore, from the above, we have defined hybrid intelligence as a new ontological being that emerges from the combined characteristics of its constitutive parts and that shares a common purpose. However, in the case of the hybrid intelligence that results from the interaction between humans and ChatGPT, what are these characteristics?

Zhou et al. (2021) point out that the difference between artificial intelligence and human intelligence lies in the structured and unstructured nature of the two. This means that artificial intelligence needs specific tasks with clear boundaries and conditions; by contrast, humans have a more general intelligence, which makes them more flexible and adaptable to contexts and situations. Similarly, Dellermann et al. (2019) note that the difference between AI and humans lies in the analytical and intuitive aspects. This can be translated into procedural and contextual, meaning that ChatGPT is syntactic and humans are semantic. This difference is similar to what Searle (1980) famously pointed out in his Chinese room experiment, arguing that artificial intelligence operates on principles and rules, whereas humans operate on understanding and meaning. Therefore, in the hybrid intelligence that this essay analyses, we have an ontological being with the syntactic characteristics of ChatGPT and the semantic features of the human user. In the next two sections, I will briefly explain why ChatGPT can be framed as syntactic and humans as semantic.

Syntactic possibilities 

Commonly speaking, syntax is the structure of a language made out of a set of rules. However, as Muireartaigh (2016) points out, behind this lies the idea that language can be generated automatically according to given rules of natural origin. Thus, language does not need a conscious mind to generate formal structure. Therefore, syntax can be described as a particular way of processing information (Muireartaigh, 2016). However, given how much language is intertwined with being human, syntactic properties are easily confused with proof of human intelligence, as the Turing test (Turing, 1950) and, in opposition, Searle’s (1980) Chinese room experiment illustrate from opposite directions.

Turing and syntactic mimesis

In this context, Turing’s ideas are relevant for establishing the syntactic properties of ChatGPT and for showing how these properties open new possibilities within the ontological being that comes into existence at the moment of contact, the hybrid intelligence that this essay considers.

A milestone in computer science was the publication of Turing’s (1950) “Computing Machinery and Intelligence”. In this essay, Turing proposes the famous Turing test, an attempt to answer the question, “Can machines think?”

The test is well known and is based on the assumption that if a machine is able to deceive the human examiner, then it is plausible to state that the machine can think. The plausibility is given by induction (Moor, 1976): if “I” (i.e., the subject) pass the Turing test because I think, then I have reason to assume that other entities that pass the test are also capable of thinking. However, it can be argued that this induction is misleading. The argument does not prove that there is understanding in a semantic sense; rather, there is a syntactic mimesis.

Turing developed his test starting from the model of the Universal Turing Machine (UTM) (De Mol, 2018). The UTM is a conceptual model based on symbolic systems that, given a set of rules, is able to compute information (De Mol, 2018). This perspective lends itself to a syntactic understanding of language, viewed from the outside. As in the case of the Chinese room experiment (Searle, 1980), a computer program could pass the Turing test by simple syntactic processing; however, this says nothing about the quality behind the syntax, making it just a simulation. The view that the simulation is sufficient to attribute human properties to machines is the mimetic perspective (Beavers, 2002), and it is compatible with the computational model of the mind (Rescorla, 2020). Turing’s view of the human brain shares this computational model (Rescorla, 2020). Therefore, according to Turing (1950), the human brain is one possible type of UTM.
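To make the idea of rule-governed symbol manipulation concrete, the sketch below shows a toy Turing-machine-style computation in Python. It is a minimal illustration under my own assumptions, not Turing’s original formalism: the transition table, the binary-inversion example, and all names are invented for the purpose of the example. The point is only that symbols are rewritten mechanically according to rules, with no understanding involved.

```python
# A minimal, illustrative Turing machine: symbols on a tape are rewritten
# purely according to a transition table; no understanding is involved.

def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, new_symbol, move = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example rule set (hypothetical): invert a binary string, halting on a blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine("10110", rules))  # prints 01001_
```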

Applying the above to ChatGPT, it can be argued that, on a linguistic level, ChatGPT mimics human capacities by having a computational basis. Further, given the data set and language models on which ChatGPT is built, the set of rules it uses increases its syntactic possibilities. ChatGPT’s algorithms are inscribed in a set of rules that derive probabilities from the positions of the words in use. Therefore, given a certain structure with related rules, ChatGPT processes information syntactically, creating new combinations and possibilities. However, these new combinations are flat, without distinction. They exist not because of understanding, but because of rules and structure.
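To illustrate, in a highly simplified way, what probabilistic word selection by rules can look like, the sketch below uses a toy bigram model. This is an assumption-laden simplification of my own: ChatGPT relies on transformer networks trained on vast corpora, not on word-pair counts, and the corpus and function names here are invented. The sketch only shows how text can be continued by probabilities alone, without any understanding of what the words mean.

```python
import random
from collections import Counter, defaultdict

# A toy bigram model: the "rules" are counts of which word follows which,
# turned into probabilities. Text is continued by sampling from those
# probabilities, not by understanding anything about cats or mats.

corpus = "the cat sat on the mat and the cat slept on the mat".split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def next_word_distribution(word):
    """Probability of each word that can follow the given word."""
    counts = follower_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def continue_text(word, length=5):
    """Extend a text purely by sampling the next word from the distribution."""
    words = [word]
    for _ in range(length):
        dist = next_word_distribution(words[-1])
        if not dist:
            break
        words.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(words)

print(next_word_distribution("the"))  # e.g. {'cat': 0.5, 'mat': 0.5}
print(continue_text("the"))           # e.g. "the cat sat on the mat"
```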

Ontological immanence and microworlds

The flatness that characterizes ChatGPT’s responses can be described with Floridi’s (1999) term microworld. A microworld refers to an ontological commitment to the system (Floridi, 1999). ChatGPT comes into existence only inside the microworld of the server on which it is hosted, which is then revealed on my monitor. ChatGPT is immanent in its microworld, and this immanence is an ontological necessity (Beavers, 2002). In order to better understand the implications of this ontological immanence, of being bounded within a microworld, it is appropriate to present its opposite, which applies to humans: transcendence (Floridi, 1999). Accordingly, transcendence is described as the reflective detachment of the subject from its environment (Floridi, 1999). This means that, unlike ChatGPT, humans have the capacity to cross different lines (Beavers, 2002), to detach and shift from their surroundings, and therefore to interrupt the continuity. It is this interruption that makes the difference within the syntactic structure, making humans semantic.

Therefore, ChatGPT is ontologically immanent, coupled with its digital environment. This allows ChatGPT to operate and exist syntactically, but it does not allow it to move and shift, change and explore, be present and disappear. In a word, ChatGPT cannot transcend. In opposition, meaning is transcendent; it enters and exits contexts, embodied within the transcendent subject. Thus, ChatGPT, with the algorithmic and computational properties of its microworld, opens up syntactic possibilities, but because of its immanence to that microworld, it cannot interrupt the syntactic continuity. Meaning, however, is by definition interruption, and therefore transcendence; semantic meaning thus lies outside the ontological immanence of ChatGPT.

Embodied semantics 

In opposition to the syntactic account, a semantic account holds that language comes into existence only when it has meaning, that is, when it is “read” (Muireartaigh, 2016). As was shown, syntax may be automatic, based on a procedural set of rules. However, in order for syntactic linguistic production to come to life, that is, to interrupt the continuity of the surrounding landscape, it needs the transcendent aspect of semantics. And as Muireartaigh (2016) notes, the semantic aspect is irreducible to the syntactic.

Further, as shown, because the semantic act is first of all an interruption and a distinction, requiring a transcendent detachment (Floridi, 1999) by the human subject, it also necessarily takes place within the human subject. This makes the semantic aspect of language embodied within the subject. More generally, the embodiment of semantics is also a claim to an ontological status, grounded in the ability to interrupt through the attribution of meaning.

Semantics and form of life

The embodiment of semantic linguistic forms can be connected to Wittgenstein’s (1968) concept of form of life. Wittgenstein’s ideas are relevant to this essay because, for him, machines cannot think as humans do, and this is due to ontological differences in the way of relating to language. Further, from a Wittgensteinian perspective, the critique of why machines cannot think rests on their inability to understand language, which makes it a semantic argument. More generally, for Wittgenstein, the process of understanding is a linguistic act (Obermeier, 1983). To understand this, we have to present Wittgenstein’s idea of the language game (Wittgenstein, 1968). A language game can be seen as a representation of a community (Tonner, 2017), meaning that the use of words is contextual, changing depending on the situation and culture.

Further, because understanding is linguistic and language is context-dependent, the conceptual system becomes metaphorical by nature (Lakoff & Johnson, 1980). Words and concepts share a metaphorical nature (Lakoff & Johnson, 1980); this means that over and above the syntactic structure, there is a semantic connection between concepts and words. Lakoff & Johnson (1980) illustrate this with the example “argument is war” (Lakoff & Johnson, 1980, p. 4). We win and we lose arguments; our position is attacked, and we try to defend it; more generally, our interlocutor is seen as an opponent (Lakoff & Johnson, 1980). Our words and concepts are thus closely connected to our embodied experience, creating a metaphorical translation from perceptual experience to the linguistic system. Therefore, there is a metaphorical connection between words and concepts in the language game that lays the ground for the semantic aspect.

However, meaning could not take place without subjective understanding. Understanding is embodied within the subject, because understanding can be seen as a perception. When I understand, I perceive the understanding, and I become aware of it. In a certain sense, I feel the information. This feeling lies within my possibilities a priori, and my reaction is bound to that a priori. If I did not understand, I would not feel, I would not react, and therefore that linguistic information would have no meaning, only syntax. Machines cannot possibly share this process, that is, the human form of life, because of the ontological difference, and they are therefore excluded from the human landscape of meaning (Gambardella, 2019).

Applying the above to the hybrid intelligence that this essay is concerned with, it was argued that words acquire meaning once they are understood. Following Wittgenstein’s ideas, linguistic understanding is strictly human, and this is due to the embodied semantics that is part of the human form of life. Further, this creates a moment of transcendence, because meaning is an interruption of the landscape. This interruption is outside the realm of possibilities of ChatGPT because it is not part of its form of life; there is an ontological impossibility. To give an example: aeroplanes and birds both fly, but the way in which they do so is fundamentally different. The former performs a task; the latter has a form of life (Gambardella, 2019).

Conclusion

This essay has attempted to ontologically understand what happens in the moment of contact between ChatGPT and the human user. It was argued that from the interaction a new ontological being emerges that can be understood as a hybrid intelligence. Hybrid intelligence is a heterogeneous form with shared, complementary capabilities. Further, it was argued that hybrid intelligence is held together by a single principle, which is a shared purpose.

From this, the essay asked which complementary capabilities are in place.

Section two argued that ChatGPT offers syntactic possibilities. This is due to its computational and algorithmic nature. Further, ChatGPT is ontologically immanent to its microworld, making it incapable of transcendence, which is a requirement for semantic attribution.

Section three argued that the complementary capacity that human users offer inside the hybrid intelligence is embodied semantics. The idea of embodied semantics was developed on the basis of Wittgenstein’s concept of form of life. It is closely connected to the metaphorical nature of conceptual systems and grounded in the process of understanding.

Thus, this essay showed that this hybrid intelligence forms an embodied possibility of linguistic creation, representing and creating the world through the production of linguistic texts.

References
