Why do we build our institutions on the principle that technology, in and of itself, does useful or interesting things? I suspect it’s because culturally we cling to deeply ingrained assumptions, such as:
- Technology makes work more efficient – it reduces labour
- Technology is about automation – the machine ‘working’ while we control it
- Technology is neutral – it performs tasks without bias
- Technology is always developing – it is the ‘solution’ to our ‘problems’
These do hold true to a certain extent, but only if you take a strictly techno-centric view. The statements above become questionable as soon as we bring people into the picture, because, of course, technology is an artifact: it’s designed, made and used by us.
The reason I feel the need to spell out this somewhat facile principle is that I worry we do think of technology as an ‘other’, outside and beyond us. We can’t grasp its complexities, so it becomes a mystery and, as is our habit, we develop superstitions about that which we do not fully understand. We almost go as far as personifying technology, which is where the problem starts.
Take, for example, the last of the statements above: ‘Technology is always developing’. It sounds fine until you consider that technology doesn’t magically develop on its own. The statement should be: ‘People are continually developing technology’. Yet we seem comfortable extracting ourselves from the picture and thinking of technology, if not as an independent consciousness, then as a self-evolving entity.
The irony is that while on the one hand we lean towards personifying technology in its apparently neutral forms, we are also extremely wary of those moments when it attempts to ape humanness directly. (Again, I can’t get away from the forms of language here, as I just said ‘it’ instead of ‘people design it to’.) We like to know when we are interacting with a person and when we are interacting with code, and we feel at best conned and at worst abused if we confuse one for the other.
I’ve seen this in so many forms: suspicion of bots in text-based MUDs and MOOs, our response to avatars in virtual worlds (am I controlling ‘it’ or is it ‘me’?), our distaste for algorithmically generated news, our unease with talking to search bots in public and, in my case, a complex relationship with @daveobotic, my Twitter bot.
We dislike the idea of being socially or intellectually satisfied by an algorithm because we fear things we can’t clearly define as sentient, sensing a loss of our own humanity if we discover we’ve believed the code is a person. This is a classic human concern: whether it’s a Golem, Frankenstein’s monster, any number of cyborgs, or artificial intelligence, we have always been troubled by that which is animated but not explicitly alive. It’s one of the ways we explore the question of our own consciousness, a tantalising theme revisited throughout history in various forms.
I see these tensions playing out where education intersects with the digital. The business-like element of our institutions prefers to think of technology as, in and of itself, efficient and neutral. The potential of technology to be the ‘solution’ for the ‘problem’ of teaching and learning at scale is attractive and, to a certain extent, operable if you frame education as a problem-to-be-solved. This breaks down, though, if we see learning as transformational rather than transactional – if we see it as a process of becoming. This is where education is intrinsically human, with all of the vulnerabilities, prejudices and general messiness that come as standard where people are involved – a form of education that anyone who has ever taught will understand.
Nevertheless, I see an emerging trend in which we set out to synthesise ‘contact’ in the digital to scale up what we claim to be transformational education, using a shell of transactions masquerading as persons. An early example of this is the planned ‘nudging’ messages of encouragement, warning or even advice sent to students, driven by ‘learning analytics’.
We are being tempted by this line of thought even though we have explored all this before and know that we are masters of detecting soulless interventions. Even if our algorithms are efficient and effective, our experience will be hollow and unsatisfying. I deeply doubt our ability to develop as individuals on this basis (the ‘becoming’ form of education I believe in) and argue that while the digital can be a valuable place for people to connect with each other, technology is inherently limited in its ability to ‘scale humanly’. This is not because we are incapable of designing incredibly sophisticated code; it’s because we have an instinct and desire for the conscious.
(This line of thinking extends from the “Being human is your problem” keynote given by myself and Donna Lanclos at the ALT-C conference.)