Agency vs Efficiency (The AI learning gambit)

A continuum with Agency and Efficiency at either end.
The gambit we take each time we incorporate AI and other technologies of cultural production into our work: we choose where we land on a continuum of agency and efficiency.

There are many hopes and fears surrounding AI which slot into the recurring cycle of emerging technologies, especially those located in, or adjacent to, cultural production (and which therefore impinge on education). ‘Classic’ concerns around cheating, authenticity and an erosion of critical thinking came to the fore when, for example, internet search, Wikipedia and smartphones became widely accessible. The debates which ensue often fail to distinguish convenience (if it’s easy and immediate it must be bad) from more substantial shifts in how we access, use and produce knowledge and work.

However, AI ups and broadens the game once again. It amplifies and accelerates these classic concerns while expanding the possible use cases. Discussions with colleagues from the Edinburgh Futures Institute highlighted that the broad applicability of AI, the fact that it can be used in so many contexts, puts it in pole position in the moral panic / furtive adoption stakes. So while some concerns are ‘generic’ to any emerging tech in the cultural production space, we have to acknowledge that AI is powerful, full of risk, ethically fraught, and everywhere. It demands we make sense of it relative to our practices in a way which is more refined than ‘use it but also be critical’ – which is where quite a lot of progressive university guidance lands.

Balancing the generic with the specific

The question I’ve been grappling with is how to articulate the specific pros and cons of using AI in a manner which also acknowledges its standing at the front of a long line of technologies which have spun the hype-and-fear cycle in similar ways. I think the ‘Agency vs Efficiency Gambit’ helps here, but first I want to lay out some education-and-technology context.

Learning that does not converge on a ‘correct’ answer

At the University of the Arts London we are mainly focused on the use of technology in project-based work, where students are developing creative outputs and reflecting on process. What they produce might not always be original in the strictest sense but it will be a novel journey for them, with not entirely predictable outcomes. This is in contrast to learning which converges on an agreed answer or a process, where there is a predictable outcome which might, nevertheless, take a lot of work to attain.

Where learning is developmental, and the ‘goal’ is a change in the person (learning as becoming, if you will), the process of learning itself is not primarily interested in efficiency in producing an academic or creative output. The output is only relevant in so far as it facilitates becoming. (See, for example, the educational benefits of failing.)

Doing the thing and questioning the thing

Given this, we are much less interested in our students being efficient than them taking the time to be critical and reflective. This is not to say that there aren’t more, or less, efficient ways of learning but we want our students to do the thing and question the thing (not uncommon in higher education). We want our students to retain their personal agency to enable their questioning and consciously position their practice relative to the tools and tech they might use.

Technology is efficiency

One definition of technology is a mechanism that allows you to get more work done in less time or with less effort – in other words, efficiency. It’s disorienting to be presented with a technology which makes a process less efficient. We all have stories where this is the case, but they are told in terms of frustration and disappointment.

My point is that if we use technologies of cultural production to gain efficiency we become less active in the process, and lose agency. 

This doesn’t extend to all technologies. It might not matter too much if we are digging a hole with a mini-digger rather than a spade (not technologies of cultural production), but if we are generating text for an essay, or a clutch of ideas to get us past the ‘blank page’ for a project, then we have offloaded some of our agency-through-thinking to the machine. If the technology is geared around cultural production this offloading will always be the case.

Given this, I’d argue that in the context of learning we are frequently trading between agency and efficiency when using technology, especially AI. Done consciously, with enough understanding of how the tech is operating, the more efficient route can be empowering. Efficiency is not fundamentally counter to learning but it does come at a cost. 

Significantly, to make this choice meaningfully requires a good understanding of the principles on which the technology is operating. Unless we understand roughly how the work is being done we can’t gauge where we are landing on the agency/efficiency continuum. Context is important and often the context we are working in is the technology itself.

The gambit 

So every time we incorporate technology, including AI, into our practices – especially when learning – we should be weighing up the extent to which any efficiencies attenuate our creative and critical agency. I think of this as a kind of gambit, one which reframes the old Silicon Valley mantra of ‘Move Fast and Break Things’ as ‘Move Fast and Learn Less’.

I’m not suggesting that we should always choose agency over efficiency; I’m suggesting that we must be aware of the gambit. It’s a continuum, not a binary choice – we can decide where we land. However, if as a student I choose to maximise efficiency a few times in a row, I will be eroding the extent to which I’m learning and might want to change tack. Conversely, if I choose the pure agency route and attempt to largely avoid technology, I risk not getting far enough through the process to be able to fully engage with the intended learning.

Assess less (a simple response to a complex problem)

Given that we are often pushed towards efficiency by a lack of time, it makes sense to ask our students to produce less over a longer period of time. In simple terms this is a quality-over-quantity approach to assessment. I’ve never seen a Learning Outcome which says ‘you will be able to demonstrate that you can produce a huge amount of work in a limited time’, but I do often see volume of work used as a proxy for ‘academic rigour’. If it’s harder to write 2000 words than 5000 words, why do assignments get longer the higher the academic level?

If we really value critical thinking then demanding less voluminous work for assessment is a more effective way to respond to emerging technologies than the intricacies of many forms of ‘authentic’ assessment. ‘You have time to choose agency over efficiency’ feels like an authentic and reasonable approach to me. 

All in *and* all out with AI
(start with assignments at both ends of the continuum)

Another way through this picks up on some discussion I’ve seen around not assessing material produced with AI but instead assessing reflections on that material. This approach asks students to use AI in the form of a questioning dialogue and then to reflect on the results.

At the start of a given course of study, some assessments can be designed on this basis and, as a balance, some could require students not to use any AI. It should be possible to explain the value of these two approaches as deliberately located at either end of the Agency / Efficiency continuum. Having experienced the extremes, students are then better equipped to make conscious Agency / Efficiency choices. With the right scaffolding they should be able to develop a usable critical position on the use of AI before they get to the more self-directed work later in the course.
