Imagine entering a room where a philosopher, a computer scientist, and a sociologist are engaged in a discussion about AI ethics. The philosopher is pacing back and forth, grappling with questions of whether machines can actually make moral choices. The computer scientist is frantically writing algorithms on a whiteboard, grumbling about "bias mitigation protocols." Meanwhile, the sociologist is gesturing towards charts illustrating how AI could concentrate power in the hands of a small number of tech giants.
They are all knowledgeable, and they are all right. However, they are all speaking completely different languages.
This is what traditional academic collaboration typically amounts to: each expert summarizes their insight in terms the others can understand, much like diplomats at a UN conference wearing headsets, waiting for the interpretation to come through. It works, but something gets lost in translation every time.
Transdisciplinarity is different. Instead of translation, it is fusion. Imagine those same three experts suddenly discovering that they are not studying different problems, but different manifestations of the same problem. The computer scientist's algorithms are not merely technical tools; they are ethical agents that bring philosophical ideals to life and reshape social structures. The philosopher's normative theories are not abstract constructs; they need to be coded into functioning systems that operate in actual communities. The sociologist's power dynamics are not separate from the code; they are woven into how those systems function.
When working transdisciplinarily, they create something new that none of them could have developed alone: approaches that are simultaneously computational, ethical, and socially aware. Not because these approaches have been negotiated across the disciplines, but because they have transcended the artificial boundaries between them. Beyond this, the experts develop the metacognitive, critically reflective abilities to move fluidly between different knowledge frameworks while maintaining intellectual honesty about each framework's strengths, limitations, and appropriate applications.
This discussion has become more important than ever because AI ethics, and global problems such as sustainability, public health emergencies, and urban injustice, are not interested in the faculty divisions traditionally so prized by academia and other institutions. These are messy, complex, thoroughly real-world problems, and addressing them requires thinking and practices that are equally complex and equally real. It is the difference between an orchestra where every section plays its part separately and one where the music itself arises out of genuine collaboration.
A very exciting new research project is in progress, inspired by the truly wonderful global collaborations I have been involved with over the past fifteen years in information research and transdisciplinary sciences.
Social-Ecological Information Experiences is an integrative conceptual framework that can guide the advancement of sustainability goals such as ecosystem health for biodiversity conservation. The framework is a synthesis of the social sciences, information science, the ecological and natural sciences, and technology (AI, data science, etc.). As these technologies become essential to the study and implementation of social-ecological systems, the project highlights social, informational, and experiential aspects that are particularly important for the ethical development of artificial intelligence, natural language processing, and machine learning; for example, addressing growing ethical challenges such as reducing the range of algorithmic biases that perpetuate inequalities.
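To make the idea of auditing algorithmic bias concrete, here is a minimal sketch of one common first step: comparing a model's positive-prediction rates across demographic groups (the "demographic parity" gap). The data, group labels, and function name below are invented for illustration and are not part of the project:

```python
# A minimal, hypothetical bias-auditing sketch: compare a model's
# positive-prediction rates across demographic groups. All data and
# names here are made up for illustration.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates) for binary predictions (1 = positive)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: 1 = the model recommends approval, 0 = rejection.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)           # {'A': 0.75, 'B': 0.25}
print(f"gap = {gap}")  # 0.5
```

A large gap does not prove discrimination on its own, but it flags where a closer, context-sensitive review is needed.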
A brief outline of the project is below. Feel free to email me if you are interested in this project: faye@humanconstellation.org
What are social-ecological systems?
Social-ecological systems (SES) is a well-established bridging concept and strategy commonly used to understand and address socio-environmental problems, and to increase impact and action in sustainability science (Partelow, 2018; Shackleton, 2024). The SES concept is an essential tool for supporting the United Nations' ongoing goals in ecosystem restoration, encouraging transdisciplinary teams to integrate, and give equal attention to, both non-scientific (traditional) and scientific information and knowledge. With growing evidence that conventional approaches are often ineffective in dealing with complex socio-environmental problems, SES arose from the need to rethink the ways humans relate to the environment (Sala & Torchio, 2019). Viewing the complex relations between humans and the ecosystems in which they co-exist with non-humans (animals, plants, AI, etc.) can help clarify cause-and-effect relationships, support diagnoses, and identify where change is needed (Galafassi et al., 2018).
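As a purely illustrative aside, one way to see what "coupled" means here is a toy simulation in which an ecological variable (a resource stock) and a social variable (a community's harvesting effort) continuously feed back on each other. Everything in this sketch, from the function name to the parameters, is invented and is not drawn from the cited literature:

```python
# A toy coupled social-ecological model, invented for illustration only.
# A resource stock grows logistically; a community's harvesting effort
# rises when harvesting is profitable and falls when it is not.

def simulate(steps=200, dt=0.1):
    resource, effort = 50.0, 1.0  # initial resource stock and harvest effort
    r, K = 0.5, 100.0             # resource growth rate and carrying capacity
    q = 0.01                      # harvest efficiency (catch per unit effort)
    a, cost = 0.2, 0.3            # effort responsiveness and cost of effort
    history = []
    for _ in range(steps):
        harvest = q * effort * resource
        # Ecological side: logistic growth minus what the community harvests.
        resource += dt * (r * resource * (1 - resource / K) - harvest)
        # Social side: effort grows when catch value exceeds its cost.
        effort = max(0.0, effort + dt * a * effort * (q * resource - cost))
        history.append((resource, effort))
    return history

resource, effort = simulate()[-1]
print(f"resource = {resource:.1f}, effort = {effort:.1f}")
```

Even a caricature like this makes the SES point: neither curve can be understood alone, because each drives the dynamics of the other.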
What is information experience in social-ecological systems?
The application and understanding of SES are key to ameliorating climate change and biodiversity loss. A deeper and more nuanced understanding of the social dimensions of SES, including its informational and technological dimensions, would help elucidate what works best in addressing specific problems. Information experience is a holistic concept that integrates how and why people interact and engage with information and knowledge, going beyond the simplistic transactional notions of information/knowledge exchange usually applied in SES.
Information experience as an object of study has evolved from traditional areas of information research: information behaviour, information literacy, knowledge management, human-computer interaction, and so on. All of these are human-centered fields; information experience, however, is broader and multidimensional, allowing for contextual and causal understandings of social-ecological information from the experiential perspectives of multiple lifeforms or species within ecosystems, including the relational contexts between humans and non-humans, i.e., animals, ecosystems, and/or technologies (Bruce et al., 2014; Miller, 2020; Solhjoo et al., 2024).
Although information experience can encompass both internal and external dimensions, the field of study focuses more deeply on internal, subjective, 'through the eyes of consciousness' experiences rather than on the externally visible interactions in specific contexts traditionally uncovered by information behaviour studies (Bruce et al., 2014; Gorichanaz, 2020; Miller, 2020). As yet, complex perceptual realities and diverse pluralities are inadequately reflected in information experience research. Examining shared elements of social-ecological systems and information experience can expand basic conceptions of information experience and establish its place in sustainability research. It can also help build the connective tissue needed to deepen our understanding of holistic and nuanced experiences in the social-ecological aspects of sustainability science and action.
Why is this research needed?
The purpose of this study is to bridge the gap between information research and the sustainability sciences (Nolin, 2010; Guerrero et al., 2018; Meschede & Henkel, 2019; Suorsa, 2024) by developing an understanding of the concept of information experience through a social-ecological systems lens. There is currently a gap in research, particularly within the information research fields, in understanding the transdisciplinary integration of social informational aspects into ecological systems research (Keith et al., 2022; Miller, 2020; Miller, 2024; Polkinghorne et al., 2024; Strikwerda, 2024). More studies are needed that examine or integrate social and ecological informational aspects simultaneously, to develop holistic understandings of human-nature interactions (Lischka et al., 2018). This matters because quality information and data from empirical evidence (quantitative, qualitative, mixed methods, etc.), together with knowledge (i.e., experiential narratives) intertwined in shared experience as a large part of social relations, are significant value components of social-ecological systems, essential to ethical decision making for sustainability efforts (Keith et al., 2022; Partelow, 2018).
This research aims to fill this knowledge gap by explicating social-ecological information experience as an intersectional conceptual framework that can pave the way for:
Information scientists and practitioners to contribute to transdisciplinary social-ecological systems and sustainability science projects; and
Ecologists, sustainability scientists and intermediaries (such as knowledge brokers, practitioners, creative professionals from social, technology and ecology/systems theory fields, working between research, industry and policy), to understand why and how information experience could be integrated into social-ecological systems projects.
This project aims to encourage and facilitate transdisciplinary collaborations through a literature review that weaves together commonly shared elements of information experience and social-ecological systems theory: relationality, impacts, understanding, plurality, and contextuality.
References
Bruce, C., Partridge, H., Davis, K., Hughes, H., & Stoodley, I. (Eds.). (2014). Information experience: Approaches to theory and practice. Emerald Group Publishing.
Galafassi, D., Daw, T. M., Thyresson, M., Rosendo, S., Chaigneau, T., Bandeira, S., Munyi, L., Gabrielsson, I., & Brown, K. (2018). Stories in social-ecological knowledge cocreation. Ecology and Society, 23(1), 23.
Gorichanaz, T. (2020). Information experience in theory and design. Emerald Publishing Limited.
Guerrero, A.M., Bennett, N.J., Wilson, K.A., Carter, N., Gill, D., Mills, M., Ives, C.D., Selinske, M.J., Larrosa, C., Bekessy, S. & Januchowski-Hartley, F.A. (2018). Achieving the promise of integration in social-ecological research. Ecology and Society, 23(3).
Keith, R. J., Given, L. M., Martin, J. M., & Hochuli, D. F. (2022). Collaborating with qualitative researchers to co‐design social‐ecological studies. Austral Ecology, 47(4), 880-888.
Lischka, S. A., Teel, T. L., Johnson, H. E., Reed, S. E., Breck, S., Carlos, A. D., & Crooks, K. R. (2018). A conceptual model for the integration of social and ecological information to understand human-wildlife interactions. Biological Conservation, 225, 80-87.
Meschede, C., & Henkel, M. (2019). Library and information science and sustainable development: a structured literature review. Journal of Documentation, 75(6), 1356-1369.
Miller, F. (2020). Producing shared understanding for digital and social innovation: Bridging divides with transdisciplinary information experience concepts and methods. Palgrave Macmillan, Basingstoke, UK.
Miller, F. Q. (2024). Shared understanding. In Darbellay, F. (Ed.), Elgar Encyclopedia of Interdisciplinarity and Transdisciplinarity. Edward Elgar Publishing, Cheltenham, UK.
Nolin, J. (2010). Sustainable information and information science. Information Research, 15(2).
Partelow, S. (2018). A review of the social-ecological systems framework. Ecology and Society, 23(4).
Polkinghorne, S., Bowell, P., & Given, L. M. (2024). Transdisciplinarity: an imperative for information behaviour research. Information Research, 29(2), 495-511.
Sala, J. E., & Torchio, G. (2019). Moving towards public policy-ready science: philosophical insights on the social-ecological systems perspective for conservation science. Ecosystems and People, 15(1), 232–246.
Shackleton, R. T. (2024). Social-ecological systems (SES). In Darbellay, F. (Ed.), Elgar Encyclopedia of Interdisciplinarity and Transdisciplinarity. Edward Elgar Publishing, Cheltenham, UK.
Solhjoo, N., Krtalić, M., & Goulding, A. (2024). The in-between: information experience within human-companion animal living environments. Journal of Documentation.
Strikwerda, J. (2024). Information. In Darbellay, F. (Ed.), Elgar Encyclopedia of Interdisciplinarity and Transdisciplinarity. Edward Elgar Publishing, Cheltenham, UK.
Suorsa, A. (2024). Embodied and dialogical basis for understanding humans with information: A sustainable view. Journal of the Association for Information Science and Technology.
The Tinkering Studio, The Exploratorium, San Francisco (photo credit: V. Ruiz)
In 2019, I wrote a reflective narrative about the origins of the Shared Understanding concept, which was a synthesis of several apparently unrelated ideas: reimagining Piagetian thought, valuing an antique children's encyclopedia, tinkering in The Exploratorium museum, critiquing 90s internet culture, musing on a Flaming Lips song lyric, spinning a simulatory animation device called the zoetrope, and illuminating the unknown gaps in-between…
Five years later, Shared Understanding has grown substantially, continuing to evolve in theory and practice, and is now more clearly and robustly defined in the much-anticipated Elgar Encyclopedia of Interdisciplinarity and Transdisciplinarity. Given that reading antique encyclopedias has been an unusual pastime of mine since I was very young, it is such a delightful honor to be invited to join an international community of over one hundred distinguished scholars and thinkers featured in this Encyclopedia, which in my opinion exemplifies retrofuturism, or the Renaissance ethos that often goes astray in our increasingly technological universe.
In the lead-up to the book's release in June 2024, I am revisiting the reflective piece that eventually became the Prologue to my book Producing Shared Understanding (2020):
The first time I ever saw a live and spinning zoetrope—a pre-cinematic animation device—was on a visit to the Exploratorium Museum of Science, Art and Human Perception on San Francisco’s Pier 15, in March 2017. Derived from the Greek words zoe, meaning “life”, and tropos meaning “turning”, a zoetrope is also known as a “wheel of life”.
This was my first “live” zoetrope experience because as a child I had seen a static zoetrope in a detailed, illustrated entry about how motion pictures were made, in a leather-bound antique encyclopedia volume by Arthur Mee. As Mee’s encyclopedias for children were published in mid-twentieth-century Europe, it is likely that I am in the minority of millennials who have ever heard of Mee’s encyclopedia. I remember being fascinated by countless entries that naturally unified the natural sciences, social sciences, humanities and arts, spanning across several volumes.
Most school teachers said that the information held in those books was quite archaic, and that I should have consulted the latest Britannica CD-ROM or even the pre-Google search engine AltaVista in the mid-1990s wave of the World Wide Web. But browsing these newer resources felt mechanical and limited. They lacked the fusion of ethical and humanist wisdom and wonder alongside scientific proofs.
More than knowing what we know, these classical texts inspired us to place even higher value on being curious about what we do not know. Arthur Mee was honest about his encyclopedia; although it was very thorough, carefully presented, and detailed, he made it clear to learning minds that, once devoured, it was not the fountain of all knowledge. The real magic was found in the unknown gaps in-between. Like the zoetrope, only through the illuminated gaps in the wheel did the whole story come to life.
If I had been more deliberate and orderly in my early research, I might never have randomly come across the zoetrope, as both an innovative concept and a precursor invention that ultimately led to traditional audiovisual motion pictures and, as we know and love it today, digital multimedia and animation, social media, GIFs, memes, live streaming on reddit's Public Access Network, YouTube, cat videos and Pixar.
A zoetrope is also known as a “wheel of life”. An illusion. A simulation. Is digital life an illusion? Is reality a simulation? Is simulation a reality?
You realize the sun doesn’t go down it’s just an illusion caused by the world spinning round—The Flaming Lips “Do You Realize?”
Also inside the Exploratorium that day, there was a sign in the Tinkering Studio.
A quote from Swiss psychologist Jean Piaget in the early 1970s, when transdisciplinary thinking was just taking flight:
To understand is to invent.
Quotes such as this reflect the assumptions of the era before the Artificial Age: that the social or environmental impact of any invention was a primary consideration before it was unleashed onto the universe. Now it appears to be the opposite. The concept of the Anthropocene is driven by technocratic narratives, both utopian and dystopian, but not all of its solutions will be technological ones. Furthermore, tech solutions to tech problems seem paradoxical. We need to make sure that the problem is not compounded by the "solution".
Somewhere—in the mix of watching zoetropes live, thinking about Piaget’s thoughts and exploring a very hands-on museum, lighting up our imaginations—came the first sparks of this book.
Q: How and why is ‘Shared Understanding’ a potential model and approach to enabling informed learning and information literacy in complex transdisciplinary contexts?
FM: Shared Understanding is a variant model of Informed Learning/Information Literacy that has emerged from outside the library and information science field, through my interactions as a social scientist researching transdisciplinary innovators both within and outside educational contexts, in an effort to bring together the emerging disciplines of Information Literacy and Transdisciplinarity.
Shared understanding is underpinned by the principles of critical, creative, and ethical use of, and engagement with, information for learning. However, through my experiences and research over more than a decade, I realized that the term 'shared' highlights the collective, collaborative, and partnership aspects of information literacy, while 'understanding' highlights the dual rational and affective/intuitive aspects of learning. 'Understanding' also names the ultimate learning outcome: a higher level of thinking and a deeper empathy with differences in worldviews and the unity of diverse knowledge, not just engagement with the informational aspects.
Shared understanding is meant to carry undertones of peacebuilding, compassion, humility, and the overcoming of polarized thinking, which we see as critical to highlight and include in our education systems right now, and more broadly in our social consciousness.
Shared understanding is offered as a way of expanding information literacy as a transdisciplinary discipline and practice in our current and future contexts, particularly in areas where transdisciplinary approaches are paramount: scientific innovation, digital and AI ethics, sustainability and regeneration, and public health.
Shared understanding merges with other re-imaginings of information literacy such as metaliteracy, critical information literacy and transliteracy.
AI is an illusion; humanity is the inimitable heart and soul.
Which issues were peripheral but are now rising to the forefront?
For decades, the relationship between humanity and AI has featured in both speculative fiction and scientific research as a familiar trope, a way of holding up a mirror to ourselves. 2023 began with the rapid adoption of AI chatbots such as OpenAI's ChatGPT, Google's Bard, and many others, intended for enhancing the creation of content, code, art, and games, as well as learning, entertainment, and social activities. And now, all of a sudden, this coevolutionary relationship between humans and AIs working and playing together is no longer an imaginary alternative or removed to a hypothetical lab somewhere, but an everyday reality in our homes and workplaces.
Cue: South Park episode “Deep Learning”, then the ultimate AI nightmare dystopia Black Mirror’s “Joan is Awful”.
Although it is not yet obvious, human-AI relations are central to the current debate on whether AI makes us more human (freeing us up for higher-order activities and meaningful work, accelerating scientific/medical breakthroughs to extend and enrich our lives) or less human (losing faith in ourselves as fallible humans, exacerbating the already rampant consequences of misinformation and disinformation). Many of us have love/hate relationships with AIs, and many have no relations at all, by choice or lack of access.
In the case of AI, does the good outweigh the bad? Much like the debate on whether social media does good, evil, or paradoxically both, we now have to contend with the added danger that AI has the potential to threaten humanity as we know it, by devolving human progress (e.g., automated processes making us lazy, widespread job displacement) and eventually surpassing human intelligence (e.g., the technological singularity, humans potentially losing control of AI). Efforts to mitigate these potential risks include increased AI regulation globally (a balanced approach with room to create or innovate for good) and more responsible (not only profitable) innovation in the tech industry.
Thankfully, there are people and groups who are very concerned about these rapid developments (led by UNESCO's Recommendation on the Ethics of AI, a framework that urgently needs more collaboration on developing roadmaps for action) and who are currently working to prevent harmful misuse of AI and the worst-case scenario, human extinction.
The key issues here are:
increasing our understanding of the nature of human relations with AI (including artificial consciousness and sentience) in different contexts, such as decision-making, among others;
accentuating harmonious and benevolent over contentious and hostile relations with AI, as friendly collaborative partners to supplement and empower humans in most aspects of life; and
considering how to manage positive relations (or in some cases, bonds or attachments) with AI, given the growing ethical and moral implications, which can shape new policies and actions that transcend political polarities towards intelligent and peaceful use of AI.
Which gaps/blindspots have you intuitively noted from your experience in research and/or practice?
Despite decades of research from neuroscience to philosophy, human consciousness remains a mystery; meanwhile, we have little to no understanding of the nature of artificial consciousness and emotions. Opinions are currently divided on whether AI can actually become conscious or sentient. We cannot assume the two are anything alike, or that they even share similar biological/mental models or emotional properties. These blindspots make understanding the relations between humans and AI a huge and complex challenge.
How did you first know it was a blindspot?
Since the rise of chatbots and other AIs, we have become more aware of the potential benefits and risks of artificial consciousness, regardless of whether AI can become conscious at all or will remain at the level of merely mimicking its human masters/counterparts. Can AIs ever become conscious and therefore experience and show empathy, love, compassion, moral/ethical conscience, or humility and error tolerance, like the best of us humans? If they never become conscious and never develop what makes us human, but accelerate super-intelligent powers through quantum computing, there is a real risk of AI transforming into and magnifying the worst of human nature (e.g., decision making with racist or sexist biases) or something unimaginable. Other than what we can observe in experimental chat transcripts between humans and AIs (everything from the decent and curious to the overconfident and snarky), we have very little understanding of what is actually happening or being experienced when humans and AIs interact.
Where were you when you noticed it?
While writing a new social science-based satirical novel about humans and AI in climate futures, in which both human and AI characters deal with many forms of existential crisis, I began researching trends in the rapid AI development that gathered momentum in early 2023 and brought these ideas out of the imagined world of science fiction into reality.
Has anyone else noticed the blindspot and have they communicated it?
While many people around the world have started to become curious about what we know and don't know about how humans and AI relate, co-work, and co-exist, understanding human-AI relations is not talked about as much as the implications of AI for education, art, business, health, science, and engineering, or as much as AI ethics and education to prevent potentially catastrophic misuses of AI.
How did they communicate it?
As keeping AI ethical is now regarded as one of the most pressing problems faced by the entire world, alongside climate change and global conflict, it is being communicated through social media and news discussions, intergovernmental recommendations (such as UNESCO’s) and in current research reviews of AI topics covering human-AI relations in papers such as ‘AI systems and respect for human autonomy’.
What are the intersections between different areas where this blindspot is relevant?
Intersections for human-AI relations can exist between the fields of
cybernetics
cyberpsychology
neuroscience
psychology (including consciousness and learning)
sociology
philosophy
computer science
machine learning
information science
education
business and responsible innovation/management
environmental science
evolutionary biology
robotics engineering
health sciences
communications
creative writing, film and arts
history
religious studies
law, policy and ethics
plus emerging undefined fields and disciplines
have we missed any?
Imagine a brain trust reflection and action group on AI-human relations comprising those contributing knowledge from each of these fields/sectors and collectively imagining POV scenarios between all of these areas! 🤯
Is there an interplay between the different areas and what does it look like?
Although there is much to be learned from each of these fields' perspectives on AI-human relations, there does not currently seem to be much interaction or collaboration between these fields in the area of human-AI relations. (We'd be more than happy to know if we're wrong!) Research suggests that the largest area of interaction is within industry and educational AI-human collaborations. This specialist knowledge is not yet reaching the general public, who are demanding to know more about responsible AI, nor some of the policymakers who need it.
Why do you think it is important that there is more shared understanding around this blindspot?
Companies, schools, and individuals have been swift to adopt this new technology despite the lack of critical understanding of its benefits, risks, and consequences. Saving time and energy on routine tasks, creating new industries and career paths, and accelerating learning and solutions are often touted by business and the sciences (with potential vested interests) as the main advantages of AI. And with universal internet access supposedly still decades away, AI has the potential to further widen the gap between those who have access to these technologies and those without access or the opportunity to use them productively, many in developing countries. Without fully understanding AI's current and future capabilities, including the importance of its developing relationships with humans, we cannot determine the long-term consequences or outcomes of integrating AI into every facet of life. A better, more integrated understanding of human-AI relations specifically could shape the human- and nature-centered ethical design of AI today, which will impact future (potentially conscious/sentient) iterations of digital beings. If resistance is futile, then we should make sure AI is developed ethically now, for future generations of people, in a peacefully coexistent, coevolutionary relationship.
How do you constantly listen for these blindspots and their evolutions?
This is a rapidly unfolding issue that exploded in 2023. The media are driving fear-based narratives around the introduction of AI into human life, without sharing much evidence of the benefits of human-AI collaborations or offering more hopeful visions of human-AI futures. We try to listen for these evolutions in thoughts, actions, problems, and solutions (watch this space) with an open, balanced view, scanning and evaluating across emerging sources of knowledge from the media, tech industry, government policy, public groups, and academic research, with the intention of working towards what is best for protecting and advancing humanity.
AI is an illusion; humanity is the inimitable heart and soul.
*In co-creating shared understanding, knowing the gaps is about being fully conscious of disconnects or divides in societal knowledge and actions which are often invisible to people and societies. The gaps (or blindspots) can become known through visible, often paradoxical tensions at the intersections of societies, and they are revealed and known through collective processes of imagination, creativity, listening and noticing.