What is even real anymore? – The case for personal agency being at the forefront of what it means to be literate.

This is an audio-with-slides recording of my keynote for the European Conference on Information Literacy, given on the 24th September 2025. The abstract is at the end of this post, along with the broad bibliography of sources I have been reading around over the last year or so.

The talk covers how we relate to the uncanniness of LLM-based AI and what the implications are for Information Literacy.

I recorded the audio on my phone then cleaned it up using Adobe Podcast. Occasionally this has mangled my pronunciation but overall it did a good job of making it listenable.

In this post I’ll outline some of the key ideas I covered but first I’d like to say thanks to Sonja Špiranec for the invitation. Thanks also to:

  • Rosie Jones, Director of Student and Library Services at Teesside University who helped me to test some of my thoughts on Information Literacy.
  • Wesley Goatley, from the University of the Arts London, the academic who suggested to me that stopping LLM chatbots from referring to themselves in the first person would unpick a lot of the misplaced anthropomorphisation.
  • Ian Truelove, also from UAL, who thoughtfully engages in rangy dialogue with me about this topic. This is invaluable for me as I’m very ‘talk-to-think’.

Summary of the main points covered

The talk is in four main sections. I’m hoping to write it up as a paper for the special edition of Postdigital Science and Education on ‘Designing for Literacy’:

Introduction – a bit of context about UAL and my role.

Framing – laying out my position.

  • The longstanding question of whether a machine can simulate something with enough accuracy to effectively operate as a version of the thing it is simulating. Illustrated with a mechanical duck from 1738.
  • The problem of a zero-sum future narrative in which ‘being human’ is a finite concept that is being progressively reduced by emerging technologies.
  • The principle that learning requires effort, and the limits this places on making learning easier or more efficient.

Part 1 – (almost) all of this has happened before and (almost) all of this will happen again.

  • AI as our current version of a long line of conceptual mirrors which allow us to explore what it means to be human. This is what underpins our fascination with the technology.
  • AI as a technology of Cultural Production.
  • Rupture – recurrent markers which appear each time a new, widespread, technology of Cultural Production emerges.
  • Schools of Literacy – the Teleological and the Ontological. The tussle that occurs between these approaches every time a new technology of Cultural Production comes along. In simple terms, the tension between a skills and a broader literacy approach, and the need to combine them.

Part 2 – Metaphors and Myths

  • The importance of bringing these to the surface as part of teaching Information Literacy.
  • Being honest about how insidious the Digital-as-Person myth is and how inaccurate this position is relative to how LLMs function.
  • The dangerous implicit myth of the Digital-as-Sacred or god-like and how this is sometimes an undercurrent in people’s conceptualisation of the technology.
  • Aporia – The way in which the distance between these myths and how the technology functions generates what appear to be unsolvable tensions or paradoxes.
  • Abdicatio – A brief allusion to our desire to morally offset onto the technology.

Participation time! – a light-hearted quiz to wake the audience up.

Part 3 – Intention, agency and practice

  • Alternative names for Information Literacy as a quick thought experiment.
  • A nod to Post-digital thinking and Information Literacy.
  • The importance of identifying which part of a process or practice AI can be usefully applied to (and which parts it shouldn’t be applied to).
  • My ‘academic writing hierarchy of practices’ diagram, used to discuss how we are always moving between conventions, the canon and the negotiation of meaning. The way in which this shifts from a subject focus to a person focus, thereby introducing the notion of personal agency.
  • A version of the diagram with an Information Literacy focus (from facts to meaning) and how information seeking is constantly moving between epistemological levels or approaches.
  • A discussion about how Information Literacy is increasingly moving towards scaffolding paths to expertise, and the question of the extent to which the use of AI damages these paths.
  • My AI Learning Gambit which outlines the tension between efficiency and agency when using technologies of Cultural Production.
  • ‘Poverty of Meaning’ – The relationship between efficiency and meaning, and the problem of AI ‘Workslop’ as an example of this tension.
  • The danger of Hyponiscience – a term I have coined to describe the false sense of having access to everything, as encouraged by online search and/or AI chatbots.
  • Intentionality – wrapping up with the importance of acting with intention and how AI tempts us to be unintentionally productive.

Abstract

In the context of information seeking AI can be thought of as an amplification of the ‘Wikipedia problem’ which caused academic distress a few years ago. When a believable answer requires no effort (or thinking) to find, what has been learned? The information literacy response to this is to teach the mechanism by which the answer was generated, to critically deconstruct the validity of the answer. However, we are now entering an AI era where most answers have no discernible provenance. There is very little ‘tracking back’ with AI because it is based on probability and not on cross-checking with reality.

In this talk I will suggest that we need to amplify the importance of personal agency in our concept of literacy. Fundamentally we should be asking students and staff to seriously consider what they are cognitively offloading and what they must hold onto to retain their agency as citizens, students and researchers. I will explore frameworks such as the ‘AI Learning Gambit’ and approaches to teaching which highlight the importance of personal agency in the AI era.

Bibliography

Bacon, C.K., 2018. Appropriated literacies: The paradox of critical literacies, policies, and methodologies in a post-truth era. Education Policy Analysis Archives 26, 147. https://doi.org/10.14507/epaa.26.3377

Bozkurt, A., Xiao, J., Farrow, R., Bai, J.Y.H., Nerantzi, C., Moore, S., Dron, J., Stracke, C.M., Singh, L., Crompton, H., Koutropoulos, A., Terentev, E., Pazurek, A., Nichols, M., Sidorkin, A.M., Costello, E., Watson, S., Mulligan, D., Honeychurch, S., Hodges, C.B., Sharples, M., Swindell, A., Frumin, I., Tlili, A., Tryon, P.J.S. van, Bond, M., Bali, M., Leng, J., Zhang, K., Cukurova, M., Chiu, T.K.F., Lee, K., Hrastinski, S., Garcia, M.B., Sharma, R.C., Alexander, B., Zawacki-Richter, O., Huijser, H., Jandrić, P., Zheng, C., Shea, P., Duart, J.M., Themeli, C., Vorochkov, A., Sani-Bozkurt, S., Moore, R.L., Asino, T.I., 2024. The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Praxis. https://doi.org/10.55982/openpraxis.16.4.777

Carty, F., n.d. Art or Algorithm [WWW Document]. URL https://www.arts-su.com/news/article/6013/art-or-algorithm/ (accessed 9.26.25).

Chiasson, R.M., Goodboy, A.K., Vendemia, M.A., Beer, N., Meisz, G.C., Cooper, L., Arnold, A., Lincoski, A., George, W., Zuckerman, C., Schrout, J., 2024. Does the human professor or artificial intelligence (AI) offer better explanations to students? Evidence from three within-subject experiments. Communication Education 73, 343–370. https://doi.org/10.1080/03634523.2024.2398105

Code, J., 2025. The Entangled Learner: Critical Agency for the Postdigital Era. Postdigit Sci Educ 7, 336–358. https://doi.org/10.1007/s42438-025-00544-1

Costa, C., Murphy, M., n.d. Generative artificial intelligence in education: (what) are we thinking? Learning, Media and Technology 0, 1–12. https://doi.org/10.1080/17439884.2025.2518258

Creely, E., Henriksen, D., Henderson, M., Mishra, P., 2025. The Staging of AI: Exploring Perspectives About Generative AI, Creativity and Education. Journal of Interactive Media in Education 2025. https://doi.org/10.5334/jime.995

Fitria, T.N., 2023. Artificial intelligence (AI) technology in OpenAI ChatGPT application: A review of ChatGPT in writing English essay. ELT Forum: Journal of English Language Teaching 12, 44–58. https://doi.org/10.15294/elt.v12i1.64069

Henrickson, L., n.d. You are an old man in a cave: The authenticity of vagueness. Communication Teacher 0, 1–9. https://doi.org/10.1080/17404622.2025.2516227

Kim, Y., Sundar, S.S., 2012. Anthropomorphism of computers: Is it mindful or mindless? Computers in Human Behavior 28, 241–250. https://doi.org/10.1016/j.chb.2011.09.006

Kosoy, E., Chan, D.M., Liu, A., Collins, J., Kaufmann, B., Huang, S.H., Hamrick, J.B., Canny, J., Ke, N.R., Gopnik, A., 2022. Towards Understanding How Machines Can Learn Causal Overhypotheses. https://doi.org/10.48550/arXiv.2206.08353

Lee, S.-E., 2023. Otherwise than teaching by artificial intelligence. Journal of Philosophy of Education 57, 553–570. https://doi.org/10.1093/jopedu/qhad019

Midgley, M., 2003. HOW MYTHS WORK, in: The Myths We Live By. Routledge.

Nass, C., Moon, Y., 2000. Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues 56, 81–103. https://doi.org/10.1111/0022-4537.00153

Niederhoffer, K., Kellerman, G.R., Lee, A., Liebscher, A., Rapuano, K., Hancock, J.T., n.d. AI-Generated “Workslop” Is Destroying Productivity [WWW Document]. URL https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity?autocomplete=true (accessed 9.23.25).

Radtke, A., Rummel, N., 2025. Generative AI in academic writing: Does information on authorship impact learners’ revision behavior? Computers and Education: Artificial Intelligence 8, 100350. https://doi.org/10.1016/j.caeai.2024.100350

Rapanta, C., Bhatt, I., Bozkurt, A., Chubb, L.A., Erb, C., Forsler, I., Gravett, K., Koole, M., Lintner, T., Örtegren, A., Petricini, T., Rodgers, B., Webster, J., Xu, X., Christensen, I.-M.F., Dohn, N.B., Christensen, L.L.W., Zeivots, S., Jandrić, P., 2025. Critical GenAI Literacy: Postdigital Configurations. Postdigit Sci Educ. https://doi.org/10.1007/s42438-025-00573-w

R.H. Lossin on AI art and hegemony – Criticism [WWW Document], n.d. e-flux. URL https://www.e-flux.com/criticism/650116/value-in-garbage-out-on-ai-art-and-hegemony (accessed 7.21.25).

R.H. Lossin on AI art and hegemony part 2 – Criticism [WWW Document], n.d. e-flux. URL https://www.e-flux.com/criticism/655477/godlike-and-free (accessed 8.13.25).

Riskin, J., 2003. The Defecating Duck, or, the Ambiguous Origins of Artificial Life. Critical Inquiry 29, 599–633. https://doi.org/10.1086/377722

Roe, J., Furze, L., Perkins, M., 2025a. GenAI as Digital Plastic: Understanding Synthetic Media Through Critical AI Literacy. https://doi.org/10.48550/arXiv.2502.08249

Roe, J., Perkins, M., Furze, L., 2025b. Reflecting Reality, Amplifying Bias? Using Metaphors to Teach Critical AI Literacy. Journal of Interactive Media in Education 2025. https://doi.org/10.5334/jime.961

Selwyn, N., Ljungqvist, M., Sonesson, A., n.d. When the prompting stops: exploring teachers’ work around the educational frailties of generative AI tools. Learning, Media and Technology 0, 1–14. https://doi.org/10.1080/17439884.2025.2537959

Panke, S., 2025. GenAI Literacy: What Is It, and How Should We Teach It? Frameworks, Reviews, Approaches. AACE. URL https://aace.org/review/genai-literacy-what-is-it-and-how-should-we-teach-it-frameworks-reviews-approaches/ (accessed 7.23.25).

UNESCO [WWW Document], n.d. URL https://unesdoc.unesco.org/ark:/48223/pf0000391105 (accessed 8.29.25).

Walton, J., Cormier, D., 2025. Autotune for Knowledge: A Generative Metaphor for AI in Education. Journal of Interactive Media in Education 2025. https://doi.org/10.5334/jime.998

White, D., 2025a. Hyponiscience – the false sense of having access to everything. David White: Digital and Education. URL https://daveowhite.com/hyponiscience/ (accessed 9.17.25).

White, D., 2025b. Agency vs Efficiency (The AI learning gambit). David White: Digital and Education. URL https://daveowhite.com/aigambit/ (accessed 9.5.25).

White, D., 2024. The problem isn’t AI, it’s the zero-sum future we’re being sold. David White: Digital and Education. URL https://daveowhite.com/zero-sum-future/ (accessed 9.5.25).

Wolfram, S., 2023. What Is ChatGPT Doing … and Why Does It Work? Stephen Wolfram Writings.

Hyponiscience – the false sense of having access to everything

A brief history of the Web could read ‘In the service of profit, everything open was fenced in’, which sounds like an opening line from a Cormac McCarthy novel. The result is platform capitalism or vast walled gardens of data and activity. So vast we often forget they have edges.

An abstract painting in greys and blues by David White
‘Untitled’ by David White

AI follows this model, hoovering up all available data into an inscrutable set of probabilities shrink-wrapped in language. This can be useful for ‘evergreen’ content but is limited when we want to go beyond what has gone before.

Whether it’s ‘classic’ search or an LLM, it is easy to fall into the idea that these ‘places’ contain all that can be known; that all answers are available because everything it is possible to know is online. We assume the garden is so vast that the walls cease to matter. Of course, this is the impression any large platform wants you to have: ‘there is no need to wander off’.

Hyponiscience

In effect we treat many platforms as if they were all-knowing, or omniscient, and thereby put ourselves into a state of Hyponiscience: a false sense of epistemic mastery.

For most of the time this is relatively harmless. Most information seeking is within a standard canon; there are correct answers. However, when we are looking to produce new knowledge and new thinking, or even to expand our worldview, Hyponiscience becomes problematic.

While the emergence of AI is what led me to coin this term, it is not a new problem. It also describes the process of being algorithmically crammed into an ever-decreasing epistemic space driven by an attention economy. Hyponiscience is also a state of only believing the information within your bubble, a state which fuels polarisation and which is for many a haven in the face of the complexities and pluralism of information abundance.

How do we counter this false state?

All flavours of information literacy advise engagement with multiple sources. Now perhaps we should advise engaging with multiple platforms online, and occasionally offline. Rather than a hierarchy of quality (from journal papers through to overheard in the pub), we could have a quality of range: how many places did you draw information from? How distinct were these places? Reaching beyond the walls of a single garden and having a wander can only lead to a broader, more meaningful, view. This also opens up Post-Critical possibilities, whereby some of the ‘places’ we seek out might be people and/or embodied knowledge.

For example, as I outlined in my ‘AI Learning Gambit’ post, a new literacy involves knowing when not to use, or to go beyond, AI. In a post-provenance era, the best way to maintain some agency and see beyond the walls is to actively choose the less convenient, but possibly more rewarding, options.