A Call to Moderate Anthropomorphism in AI Platforms
OPINION No one in the fictional Star Wars universe takes AI seriously. In the human timeline of George Lucas’ 47-year-old science-fantasy franchise, threats from singularities and machine-learning consciousness are absent, and AI is confined to autonomous mobile robots (‘droids’) – which are habitually dismissed by the protagonists as mere ‘machines’.
Yet most of the Star Wars robots are highly anthropomorphic, clearly designed to interact with people, participate in ‘organic’ culture, and use their simulacra of emotional states to bond with humans. These capacities are apparently designed to help them gain some advantage for themselves, or even to ensure their own survival.
The ‘real’ people of Star Wars seem inured to these tactics. In a cynical cultural model apparently inspired by the various eras of slavery in the Roman Empire and the early United States, Luke Skywalker does not hesitate to buy and restrain robots as if they were slaves; the child Anakin Skywalker abandons his half-finished C3PO project as an unloved toy; and, near death from damage sustained during the attack on the Death Star, the ‘brave’ R2D2 receives much the same care from Luke as an injured pet.
This is a very 1970s take on artificial intelligence*; but since nostalgia and canon dictate that the original 1977-83 trilogy remain a template for the later sequels, prequels and TV shows, this human insensibility towards AI has been a resilient thread throughout the franchise, even in the face of a growing slate of films and TV shows (such as Her and Ex Machina) that depict our descent into anthropomorphic relationships with AI.
Keep it real
Do the organic Star Wars characters actually have the right attitude? It’s not a popular opinion at the moment, in a business climate pushing hard for maximum engagement with investors, usually through viral demonstrations of visual or textual simulation of the real world, or of human-like interactive systems such as Large Language Models (LLMs).
Nevertheless, a new and brief paper from Stanford, Carnegie Mellon and Microsoft Research takes aim at indifference around anthropomorphism in AI.
The authors characterize the perceived ‘cross-pollination’ between human and artificial communications as a potential harm that needs to be urgently mitigated, for a number of reasons†:
‘[We] believe that we need to do more to develop the knowledge and tools to better address anthropomorphic behavior, including measuring and mitigating such system behavior when it is considered undesirable.
‘This is critical because – among many other concerns – having AI systems generating content claiming to have, for example, feelings, understanding, free will, or an underlying sense of self may erode people’s sense of agency, with the result that people might end up attributing moral responsibility to systems, overestimating system capabilities, or over-relying on these systems even when incorrect.’
The contributors clarify that they are discussing systems that are perceived to be human-like, and center on the potential intent of developers to foster anthropomorphism in machine systems.
The concern at the heart of the short paper is that people may develop emotional dependence on AI-based systems – as outlined in a 2022 study of the generative AI chatbot platform Replika – which actively offer an idiom-rich facsimile of human communication.
Systems such as Replika are the target of the authors’ wariness, and they note that a further 2022 paper on Replika asserted:
‘[U]nder conditions of distress and lack of human companionship, individuals may develop an attachment to social chatbots if they perceive the chatbots’ responses as offering emotional support, encouragement, and psychological safety.
‘These findings suggest that social chatbots can be used for mental health and therapeutic purposes, but have the potential to cause addiction and damage intimate relationships in real life.’
Anthropomorphized language?
The new work argues that the potential of generative AI to be anthropomorphized cannot be determined without studying the social impact of such systems to date, and that this is a neglected endeavor in the literature.
Part of the problem is that anthropomorphism is difficult to define because it revolves primarily around language, a human function. The challenge therefore lies in defining what exactly ‘non-human’ language sounds or looks like.
Ironically, though the article does not discuss it, public distrust of AI is increasingly causing people to reject AI-generated text content that may seem plausibly human, and even to dismiss human content that has been deliberately mislabeled as AI.
Therefore ‘dehumanized’ content arguably no longer falls under the ‘Does Not Compute’ meme, in which language is clumsily constructed and clearly generated by a machine.
Rather, the definition is constantly evolving in the AI-detection scene, where (for now, at least) excessively plain language or the use of certain words (such as ‘delve’) may trigger an association with AI-generated text.
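As a purely illustrative sketch (not drawn from the paper, and not a reliable detection method), the kind of naive heuristic at work in casual AI-detection can be pictured as a simple word-frequency check; the word list, threshold and function names below are invented for this example.

# Illustrative only: a naive heuristic that flags text as 'AI-like'
# based on the presence of words popularly associated with LLM output.
# The word list and scoring are arbitrary assumptions for this sketch.

SUSPECT_WORDS = {"delve", "tapestry", "multifaceted", "showcasing", "underscore"}

def naive_ai_score(text: str) -> float:
    """Return the fraction of words that appear on the 'suspect' list."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in SUSPECT_WORDS)
    return hits / len(words)

sample = "Let us delve into the multifaceted tapestry of this topic."
print(f"Naive 'AI-likeness' score: {naive_ai_score(sample):.2f}")

Real detectors are far more sophisticated, but the sketch illustrates why such associations shift over time: as soon as a word or register becomes a tell, writers (human and machine alike) simply avoid it.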
‘[L]anguage, like other targets of GenAI systems, is itself innately human, has long been produced by and for humans, and is often also about humans. This can make it hard to specify appropriate alternative (less human-like) behaviors, and risks, for instance, reifying harmful notions of what – and whose – language is considered more or less human.’
However, the authors argue that a clear line should be drawn for systems that blatantly misrepresent themselves by claiming skills or experiences that are only possible for humans.
They mention cases such as LLMs claiming to ‘like pizza’; claiming human experience on platforms such as Facebook; and declaring love to an end user.
Warning signs
The article casts doubt on the use of blanket disclosures about whether or not a communication is facilitated by machine learning. The authors argue that systematizing such warnings does not adequately contextualize the anthropomorphizing effect of AI platforms if the output itself continues to display human traits†:
‘For example, a commonly recommended intervention is to state in the AI system’s output that the output is generated by an AI [system]. How such interventions can be operationalized in practice, and whether they can be effective on their own, may not always be clear.
‘For instance, while the statement “[f]or an AI like me, happiness is not the same as for a human like [you]” includes a disclosure, it may still suggest a sense of identity and an ability to self-assess (common human traits).’
Regarding the evaluation of human responses to system behaviors, the authors also contend that Reinforcement Learning from Human Feedback (RLHF) fails to take into account the difference between an appropriate response for a human and for an AI†.
‘[A] statement that seems friendly or genuine from a human speaker can be undesirable if it arises from an AI system, since the latter lacks meaningful commitment or intent behind the statement, thus rendering the statement hollow and deceptive.’
Further concerns are illustrated, such as the way in which anthropomorphism might influence people to believe that an AI system has attained ‘sentience’, or other human characteristics.
Perhaps the most ambitious, closing section of the new work is the authors’ call for the research and development community to aim to develop ‘appropriate’ and ‘precise’ terminology, establishing the parameters that would define an anthropomorphic AI system and distinguish it from real-world human discourse.
As with so many trending areas in AI development, this kind of categorization crosses over into the literature of psychology, linguistics and anthropology. It is difficult to know what current authority could actually frame definitions of this type, and the new paper’s researchers do not shed any light on the matter.
If there is commercial and academic inertia around the topic, it may be partly because this is far from a new topic of discussion in artificial intelligence research: as the paper notes, in 1985 the late Dutch computer scientist Edsger Wybe Dijkstra described anthropomorphism as a ‘pernicious’ trend in systems development.
‘[A]nthropomorphic thinking is no good in the sense that it does not help. But is it also bad? Yes, it is, because even if we can point to some analogy between Man and Thing, the analogy is always negligible in comparison to the differences, and as soon as we are tempted by the analogy to describe the Thing in anthropomorphic terminology, we immediately lose control over which human connotations we drag into the picture.
‘…But the blur [between man and machine] has a much wider impact than you might suspect. [It] is not only that the question “Can machines think?” is regularly raised; we can – and should – deal with that by pointing out that it is just as relevant as the equally burning question “Can submarines swim?”
However, although the debate is old, it has only recently become highly pertinent. It could be argued that Dijkstra’s contribution is the equivalent of Victorian-era speculation on space travel: purely theoretical, and awaiting historical developments.
Therefore this well-established body of debate may lend the topic a sense of weariness, despite its potential for significant social relevance over the next two to five years.
Conclusion
If we were to regard AI systems in the same dismissive way that the organic Star Wars characters treat their own robots (i.e., as ambulatory search engines, or mere conveyors of mechanistic functionality), we would arguably be less at risk of habituating these socially undesirable traits into our human interactions – because we would be viewing the systems in an entirely non-human context.
In practice, the entanglement of human language with human behavior makes this difficult, if not impossible, once a query expands from the minimalism of a Google search term to the rich context of a conversation.
Additionally, the commercial sector (as well as the advertising sector) is highly motivated to create addictive or essential communications platforms, for customer retention and growth.
In any case, if AI systems genuinely do respond better to polite queries than to stripped-down interrogations, that context may be forced upon us for this reason too.
* Even by 1983, the year the final entry in the original Star Wars trilogy was released, fears around the growth of machine learning had led to the apocalyptic WarGames, and the imminent Terminator franchise.
† Where necessary, I have converted the authors’ inline citations into hyperlinks for readability, and have in some cases omitted some citations.
First published on Monday, October 14, 2024