The online crit and imagining the not yet realised

On the 7th of June I’ll be in conversation with Professor Paul Lowe about the ‘Online Crit’ as part of our online seminar series (you can sign up here).

An interesting idea surfaced during the chat we had in preparation for the session: being able to imagine the not-yet-realised is fundamental to the creative process. There is no making without intent, and there is no intent without the imagined.

CC Chris Richmond https://www.flickr.com/photos/35652152@N07/24955436862

This idea is so embedded in creative work that I wonder if we forget it’s there. It’s certainly important when considering Online Creative Education, as the move online tends to raise questions about the material which risk demoting the importance of the conceptual.

Realisation

However, this must be considered not only in the context of creative practice but also of creative education. The education process, writ large, is learning and becoming expressed via making, the ‘via’ being key. This is reflected in our assessment criteria, which cover Enquiry, Knowledge, Process, Communication and Realisation.

While we might be assessing technical skill within these, when we consider Realisation we are most likely to be focusing on the Realisation of an evolving set of ideas. Given this, the final output or artefact offered in response to a creative brief becomes an emblem of the overall creative process. It should be used actively as a gateway into telling the story of that process and the thinking therein. This is why we ask students to submit a portfolio which contains that story, and not simply to hand in a final piece.

The Crit and the physical

The traditional crit involves a group moving around a studio, variously questioning and defending thinking-as-represented through key pieces of work. Of course, technique and craft are likely to be discussed, but they should only form part of the conversation. The crit is not a technical workshop in this regard and, if done well, should be scaffolding for telling the story of process, reflecting on thinking and imagining that which does not yet exist (as distinct from attempting to home in on a ‘correct answer’).

Given this, physical presence is not fundamental to the crit, and it could be argued that taking the crit online helps to focus on critique (in the best sense), development and becoming precisely because it is ‘immaterial’.

I’m not claiming that the tactile, the olfactory or certain forms of spatiality are not valuable in some crits, but they need not be fundamental to the process. For example, it would be rare for anyone to touch work as part of a crit.

Creative Education Online involves geographically dispersed making connected by collective questioning and critique. These things combined become ‘studio’. The online crit can, and should, be central to this as a location for telling the story of the work and imagining the not-yet-realised.

The problem isn’t AI, it’s the zero-sum future we’re being sold


A couple of blog posts ago I suggested that our response to AI is pushing us into a dangerous model of humanness.

“There is a tendency here to imply a zero-sum principle to humanness: the more the tech can do the less it means to be human. This feels wrong to me and isn’t helpful in an educational context.”

https://daveowhite.com/pointy/

I explored this zero-sum idea at a recent talk to staff at Kingston School of Art. To support my line of thought I picked up a quote from a post by Tobias Revell. The quote refers to Science Fiction (SF) but, as Tobias points out, our current futures are largely based on SF thinking.

“I would argue, however, that the most characteristic SF does not seriously attempt to imagine the “real” future of our social system. Rather, its multiple mock futures serve the quite different function of transforming our own present into the determinate past of something yet to come.”

Progress versus Utopia; Or, Can We Imagine the Future? 
Fredric Jameson, Science Fiction Studies, Vol. 9, No. 2, Utopia and Anti-Utopia (Jul., 1982), pp. 147-158 – via https://blog.tobiasrevell.com/2024/02/07/box109-design-and-the-construction-of-imaginaries/

Questioning the future

I’m not a futurist, but when it comes to emerging technologies it is useful to question what model of the future we are working with, how that model is shaping our present, and how this is, in turn, painting humanness into a corner. In short, the specific technology is less problematic than the version of the future being sold.

The model of the future promoted around AI, and picked up in education, contains many assumptions and tacit implications. The main one is that once AI systems reach a certain level of complexity and/or have enough data to feed on, they will reach ‘Artificial General Intelligence’ (AGI).

“…a type of artificial intelligence (AI) that can perform as well or better than humans on a wide range of cognitive tasks”

https://en.wikipedia.org/wiki/Artificial_general_intelligence

An image of intelligence entering the station?

A quick scan of the Wikipedia article on this makes it pretty clear that we are nowhere near that, and there is little evidence that AI systems are actually on that path. However, the assumption that this has already happened, or that it is inevitable, is what lies behind the zero-sum model of the future.

When I see articles with ‘this feels like AGI’ in them, it reminds me of the train-entering-the-station story from the early days of cinema. People allegedly panicked when they saw the film. Cinema was, and is, a technology with a massive impact, but what people saw was an image of something and not the thing itself.

L’Arrivée d’un train en gare de La Ciotat

We are not computers and intelligence is a sibling of mystery

Some of what drives this is a collective forgetting that the brain is not a computer and that the ‘brain as computer’ idea is merely a metaphor. So building a hugely complex computer can only ever make a metaphorical brain. Or, as Mary Midgley argues in Science as Salvation, the problem isn’t that we are operating with myths; the problem is that we have recategorised myth as fact, and therefore as inevitable.

Add to this that there is no agreed definition of intelligence, and everything suddenly becomes very murky (Helen Beetham writes elegantly on this point).

My personal view is that we are extending the ‘Chinese Room’ in a manner which is impossible to understand (the way neural networks operate means that it is not possible to trace the process back to any kind of human-readable form). Our working definition of intelligence is then an absence, or an ignorance, in that it’s a notion we ascribe to that which remains a mystery. This is another salient factor driving the zero-sum model of the future.
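To make that opacity concrete, here is a minimal sketch in Python (my own hypothetical illustration, not anything from the sources above): even in a toy network the entire mechanism is inspectable as numbers, yet those numbers never translate back into a human-readable account of why an input produces an output.

```python
# A toy feed-forward network (hypothetical illustration).
# Its weights ARE the whole mechanism: fully visible, yet offering
# no human-readable account of 'why'. Scaled up by billions of
# parameters, this is the opacity the Chinese Room point gestures at.
import numpy as np

rng = np.random.default_rng(0)   # stand-in for training
W1 = rng.normal(size=(4, 8))     # input -> hidden weights
W2 = rng.normal(size=(8, 1))     # hidden -> output weights

def forward(x):
    """One pass through the network: multiply, squash, repeat."""
    hidden = np.tanh(x @ W1)     # eight numbers with no agreed meaning
    return np.tanh(hidden @ W2)  # an 'answer'

x = np.array([0.2, -0.7, 0.1, 0.9])
print(forward(x))  # the output
print(W1)          # the full 'reasoning': visible, but not readable
```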

The problem with the pointy bit

When I first pointed out the zero-sum problem, I helpfully provided a bad diagram.

A red triangle diagram with yellow at the top.
A techno-evangelist, overly simplistic interpretation of education triangle diagrams post-AI? – CC BY 4.0 David White

The quick version of this is that many educational models are triangle-shaped, with the ‘higher order’ learning in the pointy bit, often labelled ‘creativity’ or something similar. The reason I called the diagram bad is that it perpetuates the zero-sum model of the future. Technology might help us to move into the pointy bit faster, but the diagram implies that the strictly human nature of pointy-bit thinking and learning is small and constantly being chipped away at.

This is the problem with triangles: they get progressively smaller towards the top until there is no space left at all. If we go with Bloom’s Taxonomy here, then it implies that human creativity is finite, as if it were possible to complete being creative. Clearly this isn’t what is meant by the diagram, but these implicit notions are powerful and persistent. Having given this some ‘let’s go for a walk and have a think’ time, I came up with a brilliant idea which, it turns out, a bunch of people have had before.

Flipping Bloom’s for unbounded creativity

What if we simply turn the triangle upside down and rip the lid off it? (This lid-ripping is my contribution.)

Upside-down version of Bloom’s Taxonomy with an explosion of ‘create’ at the top.
Turning Bloom’s Taxonomy upside-down and removing the lid – CC BY 4.0 David White

What if remembering, understanding, applying etc. are what you dip into to support a process which starts with creativity? That would certainly chime with how students at the University of the Arts London work. Significantly, what if creativity wasn’t a finite pointy bit but a jumping-off point into a space which, by its very nature, cannot be bounded and opens out into unknown possibilities? Moreover, it could be argued that the relative educational weighting (if we go by the size of each slice) is a better reflection of where the educational emphasis should be in an era of information abundance and AI.

Certainly, a model of the future based on a lidless, upside-down Bloom’s Taxonomy would be less fearful than the one we are currently being sold. In this lidless future, emerging technologies become a vehicle for us to explore the ever-expanding outer reaches of creativity rather than the thief of our humanness. I seem to remember that this was the model of a technological future before technologists became our new high priests, and I’d argue that the move to the zero-sum model is a failure of secularism (a topic I find fascinating but which is too big to get into here).

Not my idea

As I said, I wasn’t the first to up-end Bloom’s. I’ve managed to trace the idea back to around 2012 and relatively early discussions of the flipped classroom. For example, this via Shelly Wright, which I traced back from this Open University post by Tony Hirst.

The OU post also links to a piece from Scott McLeod from around the same time which delves back into the thinking behind Bloom’s Taxonomy and how it was never meant to imply ‘higher’ and ‘lower’ forms of learning, nor that each slice should be read in terms of ‘amount of learning’. Given this, I suggest that putting it into a triangle was a spectacularly bad idea which, as Scott points out, has perpetuated a pretty impoverished approach within formal education. Hopefully, my lidless upside-down version of Bloom’s goes some way to redress this.

Embracing a squiggly future

Ultimately my favourite antidote to the triangle is the squiggle. My favourite squiggle of all time is Tristram Shandy’s diagrams of his approach to storytelling in the novel by Laurence Sterne.

A set of four lines with different patterns of squiggles on them.
Lines showing the direction of storytelling in the novel Tristram Shandy by Laurence Sterne – Wikimedia Commons

In the novel, Shandy gets side-tracked so often that it takes him until the third volume even to be born in the telling of his own life story. And yet, somehow, we learn an enormous amount about him through this wandering, and the story is hugely entertaining. For me this is a fabulous touchstone for the principle of assessing the journey and not the output in education.

The squiggle-as-process is a much more honest metaphor for learning than the rigour of the triangle, because a squiggle is messy, sometimes beautiful, and everyone takes a different route. We squiggle all over Bloom’s as we learn, and we potentially squiggle our way out into the unknown beyond the lid of upside-down Bloom’s. So here’s to a creative, squiggly future in which education does not fear technology and our humanness knows no bounds.

Creativity with, or against, the machines?

A recent Ed Tech/HE conference I attended opened day one with a high-production-value, inspirational video. Traditionally at this event the opening video has outlined how technology is the future and will enhance everything: ‘Empowering students to shape their destinies’ type stuff. (This is normally expressed through metaphor, and images of VR-ish stuff happening.)

However, this year, while the main message had a similar drive, there was a new subtext. The video showed a young woman who appeared to be suffering from creative block, but through her own determination she broke through it, her ideas flourished, and she ended up inspired to paint and draw etc. I don’t recall much technology being involved beyond some laptop pointing and a bit of indistinct VR use. The human was central.

This seemed to be making the case for what might be unique about being human, and how technology could support that. Another interpretation is that the video was actively defending humanness in an era of rapidly developing digital technologies. I’m sure this wasn’t the direct intention, but the theme did carry through the opening keynote from a Futurist who insisted, in the way that Futurists do, that we should try to imagine many possible futures, i.e. not just the future that technologists insist is inevitable, with its hard dystopian/utopian split.

What do we bring to the table?

What I think I’m seeing is a post-AI shift towards a defence of humanity against technology. Or, a gentler view might be that it’s an assertion of what is unique about being human – an attempt at defining what humans ‘bring to the table’. This theme isn’t new: in the nineteenth century, Ruskin advocated for human-centric values in the face of industrialised work which separated individuals from craft and nature; Haraway argued against the dehumanising drive of computational and technological ‘progress’ back in the 80s; and the ‘what makes us human’ theme has run through literature for thousands of years (angels, Der Golem, Frankenstein’s monster, aliens etc.).

The new aspect for me is that this was an Ed Tech conference, not a symposium on gothic literature or a meeting of theorists. The organisation running the conference are techno-evangelists of sorts and have facilitated and funded a bunch of useful digital things for Higher Education in the UK. And yet their ‘this is what we value’ conference-opening video could easily be used by the marketing department of my own university (the University of the Arts London). Our strapline is ‘The World Needs Creativity’. The strapline of the video could have been ‘Being Human Still Has Value’.

I wonder if we are seeing the culmination of a technological narrative which has been building for years, wherein those promoting the tech must switch to promoting humanness and how the tech amplifies, rather than reduces, our ‘uniqueness’. There is a tendency here to imply a zero-sum principle to humanness: the more the tech can do the less it means to be human. This feels wrong to me and isn’t helpful in an educational context.

It’s more than the pointy bit – but it is the pointy bit.

I mentioned a few blog posts back that, for me as an educationalist, the role of technology is to give us the opportunity to spend more time in the pointy, top end of the educational triangle diagram (you can pick any educational triangle diagram – they all imply that the pointy bit is what we are aiming for). Creativity, or something akin to creativity, often labels the pointy bit: for example, in Bloom’s taxonomy the term ‘Create’ is used. In other frameworks it relates to agency and identity – various forms of self-determination and expression.

A triangle where the bottom two thirds are labelled 'The bit technology can do' and the top third is labelled 'The human bit'.
A techno-evangelist, overly simplistic interpretation of education triangle diagrams post-AI?

The downside of these triangles is that they imply ‘development’ is a kind of ladder: you climb your way to the top, where the best stuff happens. Anyone who has ever undertaken a creative process will know that it involves repeatedly moving up and down that ladder, or rather, it involves iterating research, experimentation, analysis, reflection and creating (making). Every iteration is an authentic part of the process, and every rung of the ladder is repeatedly required, so when I say technology allows us to spend more time at the ‘top’ of these diagrams, I’m not suggesting that we should try to avoid the rest.

I’d argue that attempting to erase the rest of the process with technology is missing the point(y). However, a positive reading would be that, as opposed to the zero-sum notion, a well-informed incorporation of technology could make the pointy bit a bit bigger (or more pointy). The tech could support us to explore a constantly shifting and, I hope, expanding notion of humanness. This idea is very much in tension with the Surveillance Capitalism, Silicon Valley reading of our times. I’m not saying that the tech does support us in exploring our humanity; I’m saying it could, and what is involved in that ‘could’ is worth thinking about.

It’s pleasant, in a quiet way, to see technologists reach for a simplistic, only-the-pointy-bit version of creativity as the go-to concept when promoting a human-centric view. This probably also comes about because the auteur-ish, ‘expressive’ aspects of a performative creativity make for good video visuals. My hope is that the new habit among technologists of defending, or promoting, humanness will lead to an increasing understanding of why ‘Art School’ values around not-knowing, risk, ambiguity, play and general graft are exactly what’s needed to continue to expand what it means to be human.