(A post about the power of something that doesn’t exist.)
In my last post I explored how the students on the ALT-C 24 conference student panel were suspicious of ‘innovation’ with digital technology and appeared to prefer incremental improvements with a more direct, meaningful, impact. From their point of view the term innovation is now close to synonymous with ‘oversold’. So now, when a lot of effort, design work and creative thinking is put into a quiet, but extremely useful, improvement it’s not considered an innovation. A good example in my world would be increasing the student-centred flexibility of provision. This is difficult, highly technical work, cutting across digital systems, policy, process, and often institutional culture.
So how did this come about? Why is it that, by definition, work that makes things better is not categorised as innovative (even when it is innovative)? …and just to be clear, I’m not complaining about this phenomenon; I’m trying to unpick why it happens.
I’d like to propose three key reasons we have got to this point. The first is simple:
1. Hype Ennui
Too much hype from those who get funded by being the-next-big-thing. There’s no need for me to go into this, apart from to say that the investment cycle doesn’t require any innovation to actually become the next big thing; it just needs enough people to believe that it might, for a short while. It’s interesting that moral panic about a technology is now treated as valuable evidence that it will live up to what’s promised. If the tech is slated to destroy/disrupt-and-save civilisation, all the better. ‘Move fast and break things’ is the mantra, after all.
2. Normalisation
The second reason is straightforward but always difficult to navigate. One definition of well-designed technology is that it ‘vanishes into use’. Most interventions, upgrades and redesigns that genuinely help do this by reducing cognitive load; another definition of technology is that it allows you to do more with less effort. You might notice the first time you press that strangely convenient button which does just what you need when you need it, but you don’t notice the second, third or fourth time. It’s just ‘there’, in a process of rapid normalisation bordering on entitlement. Which brings me onto the third reason…
3. Natural Law
We imagine there is a Natural Law of Digital. This is not something we consider directly, but it’s there, right in our central, conceptual, blind spot. We each have a vague, but compelling, model of how everything Digital should work, if it were really working, if the universe were in correct balance. It’s a kind of digital Garden of Eden state where everything ‘works intuitively’, in a manner which frees us to work only on the things we believe to be authentic and meaningful.
The massive, glaring, downside of this Digital Natural Law is that it doesn’t exist. So we operate less on ‘I know it when I see it’ and more on ‘I don’t know it, but I know when I can’t see it’: a sense that the digital environment is permanently fallen from grace and needs repairing towards a state which none of us would agree on, even if we could describe it.
This then allows us to respond to meaningful innovation as simply an incremental step back towards what should have been the ‘natural’, rightful, state of digital all along. When the wifi gets faster, that’s because it was slower than wifi should be beforehand. When a website becomes easier to engage with, that’s because some of the bugs, the brokenness, have been fixed. When my new phone has a bright screen, it’s because the previous one had a screen which was far too dim. And so on. Digital Natural Law thinking compels us to believe that the best it’s ever been is simply the closest to ‘fixed’ we have ever experienced. Our ability to upgrade this ‘natural state’ at each step and encounter innovation as an implicit right is partially fuelled by consumerism; it’s also just a bad habit on our part.
I’m not sure what to do about this
It’s difficult to know what to suggest to counter Digital Natural Law thinking. One method would be to ask people to describe this halcyon state in terms other than ‘intuitive’ or ‘personalised’, but people tend to get cross when their utopias crumble. Maybe we have to accept that anything which is part of the fabric of daily life gets sucked into an under-considered idealism; I’m probably only truly conscious of my car when it’s broken, or my bins when they aren’t collected.
One strategy which is enjoyable but inadvisable is what I used to do with my kids. When they demanded to know why the wifi was down at home (with the implication that I would somehow fix it?) I would say ‘Of course it’s down, do you have any idea how complex it is? It’s a wonder it ever works at all!’ Personally, that’s how I feel about all digital technology.
I’m in a permanent, low-level, state of surprise and wonder that my tech works on any level (except for printers, which are just annoying and wrong). This isn’t because I’m enlightened; it’s because I’m old enough to remember when digital tech was broken-by-default, or at least when you had to jiggle cables and lean on superstition to get a BBC Micro game to work.
It is good to see the extent of the scepticism that now meets PR-heavy digital innovation, and maybe that’s half the battle. The students on the ALT-C panel were very appreciative of what some might call ‘the basics’, and their default view of tech was constructively critical. The Digital has had some of the shine scuffed off it in recent years, and that can only help to reveal where the meaningful work is being done.