Human-Artificial Intelligence Relations
10 Questions on Knowing the Gaps* (Shared Understanding Brain Trust Reflection #1)
Which issues were peripheral but are now rising to the forefront?
For decades, the relationship between humanity and AI has featured in both speculative fiction and scientific research as a familiar trope, a way of holding up a mirror to ourselves. 2023 began with the rapid adoption of chatbots such as OpenAI's ChatGPT, Google's Bard and many others, intended to enhance the creation of content, code, art and games, as well as learning, entertainment and social activities. Suddenly, this coevolutionary relationship between humans and AIs working and playing together is no longer an imaginary alternative confined to some hypothetical lab, but an everyday reality in our homes and workplaces.
Cue: South Park episode “Deep Learning”, then the ultimate AI nightmare dystopia Black Mirror’s “Joan is Awful”.
Although it is not yet obvious, human-AI relations are central to the current debate on whether AI makes us more human (freeing us up for higher-order activities and meaningful work, and accelerating scientific and medical breakthroughs that extend and enrich our lives) or less human (eroding faith in ourselves as fallible humans, and exacerbating the already rampant consequences of misinformation and disinformation). Many of us have love/hate relationships with AIs, and many have no relations at all, whether by choice or lack of access.
In the case of AI, does the good outweigh the bad? Much like the debate on whether social media does good, evil or paradoxically both, we now have to contend with the added danger that AI could threaten humanity as we know it, by stalling human progress (e.g., automated processes making us lazy, widespread job displacement) and eventually surpassing human intelligence (i.e., the technological singularity, with humans potentially losing control of AI). Efforts to mitigate these risks include increased AI regulation globally (a balanced approach that leaves room to create and innovate for good) and more responsible, not merely profitable, innovation in the tech industry.
Thankfully, there are people and groups who are deeply concerned about these rapid developments (led by UNESCO's Recommendation on the Ethics of AI, a framework that urgently needs more collaboration on roadmaps for action) and who are working to prevent harmful misuse of AI and the worst-case scenario: human extinction.
The key issues here are:
increasing our understanding of the nature of human relations with AI (including artificial consciousness and sentience) in different contexts, such as decision-making, among others;
fostering harmonious and benevolent, rather than contentious and hostile, relations with AI, treating AIs as friendly collaborative partners that supplement and empower humans in most aspects of life; and
considering how to manage positive relations (or, in some cases, bonds or attachments) with AI, given the growing ethical and moral implications, which can shape new policies and actions that transcend political polarities towards the intelligent and peaceful use of AI.
Which gaps/blindspots have you intuitively noted from your experience in research and/or practice?
Despite decades of research spanning neuroscience to philosophy, human consciousness remains a mystery; we have even less understanding of the nature of artificial consciousness and emotions. Opinions are currently divided on whether AI can actually become conscious or sentient. We cannot assume the two are anything alike, or that they share similar biological/mental models or emotional properties. These blindspots make understanding the relations between humans and AI a huge and complex challenge.
How did you first know it was a blindspot?
Since the rise of chatbots and other AIs, we have become more aware of the potential benefits and risks of artificial consciousness, regardless of whether AI can become conscious at all or will remain at the level of merely mimicking its human counterparts. Can AIs ever become conscious and therefore experience and show empathy, love, compassion, moral/ethical conscience, or humility and error tolerance, like the best of us humans? If they never become conscious and never develop what makes us human, yet acquire super-intelligent powers through advances such as quantum computing, there is a real risk of AI magnifying the worst of human nature (e.g., decision-making with racist or sexist biases) or becoming something unimaginable. Other than what we can observe in experimental chat transcripts between humans and AIs - everything from the decent and curious to the overconfident and snarky - we have very little understanding of what is actually happening in, or experienced by, either party when they interact.
Where were you when you noticed it?
While writing a new social science-based satirical fiction novel about humans and AI in climate futures, which sees both human and AI characters dealing with many forms of existential crises, I began researching trends in rapid AI development that gathered momentum in early 2023, which brought these ideas out of the imagined world of science fiction into reality.
Has anyone else noticed the blindspot and have they communicated it?
While many people around the world have become curious about what we know and don't know about how humans and AI relate, co-work and co-exist, human-AI relations are not discussed as much as the implications of AI for education, art, business, health, science and engineering, or as much as the AI ethics and education needed to prevent potential catastrophic misuses of AI.
How did they communicate it?
As keeping AI ethical is now regarded as one of the most pressing problems facing the entire world, alongside climate change and global conflict, it is being communicated through social media and news discussions, intergovernmental recommendations (such as UNESCO's), and current research reviews of AI topics covering human-AI relations, in papers such as 'AI systems and respect for human autonomy'.
What are the intersections between different areas where this blindspot is relevant?
Intersections for human-AI relations can exist between the fields of
cybernetics
cyberpsychology
neuroscience
psychology (including consciousness and learning)
sociology
philosophy
computer science
machine learning
information science
education
business and responsible innovation/management
environmental science
evolutionary biology
robotics engineering
health sciences
communications
creative writing, film and arts
history
religious studies
law, policy and ethics
plus emerging undefined fields and disciplines
have we missed any?
Imagine a brain trust reflection and action group on AI-human relations comprising those contributing knowledge from each of these fields/sectors and collectively imagining POV scenarios between all of these areas! 🤯
Is there an interplay between the different areas and what does it look like?
Although there is much to be learned from each of these fields' perspectives on human-AI relations, there does not currently seem to be much interaction or collaboration between them in this area. (We'd be more than happy to learn we're wrong!) Research suggests that the largest area of interaction is within industry and educational human-AI collaborations. This specialist knowledge is not yet reaching the general public, who are demanding to know more about responsible AI, nor some policymakers.
Why do you think it is important that there is more shared understanding around this blindspot?
Companies, schools and individuals have been swift to adopt this new technology despite a lack of critical understanding of its benefits, risks and consequences. The benefits of saving time and energy on routine tasks, creating new industries and career paths, and accelerating learning and solutions are often touted by businesses and the sciences (which may have vested interests) as the main advantages of AI. Moreover, as universal internet access is supposedly still decades away, AI has the potential to further widen the gap between those who have access to these technologies and those, many in developing countries, who lack the access and opportunities to use them productively. Without fully understanding AI's current and future capabilities, including the significance of its developing relationships with humans, we cannot determine the long-term consequences of integrating AI into every facet of life. A better-integrated understanding of human-AI relations, specifically, could shape the human- and nature-centered ethical design of AI today, which will in turn shape future (potentially conscious or sentient) iterations of digital beings. If resistance is futile, then we should ensure AI is developed ethically now, for future generations, in a peaceful, coexistent, coevolutionary relationship.
How do you constantly listen for these blindspots and their evolutions?
This is a rapidly unfolding issue which exploded in 2023. The media are driving fear-based narratives around the introduction of AI into human life, without sharing much evidence of the benefits of human-AI collaboration or offering more hopeful visions of human-AI futures. We try to listen for these evolutions in thoughts, actions, problems and solutions (watch this space) with an open, balanced view - scanning and evaluating emerging sources of knowledge across the media, tech industry, government policy, public groups and academic research - with the intention of working towards what best protects and advances humanity.
AI is an illusion; humanity is the inimitable heart and soul.
*In co-creating shared understanding, knowing the gaps is about being fully conscious of disconnects or divides in societal knowledge and actions which are often invisible to people and societies. The gaps (or blindspots) can become known through visible, often paradoxical tensions at the intersections of societies, and they are revealed and known through collective processes of imagination, creativity, listening and noticing.