Technology isn’t human(e)

Why do we build our institutions on the principle that technology, in and of itself, does useful or interesting things? I suspect it’s because culturally we cling to deeply ingrained assumptions, such as:

  1. Technology makes work more efficient – it reduces labour
  2. Technology is about automation – the machine ‘working’ while we control it
  3. Technology is neutral – it performs tasks without bias
  4. Technology is always developing – it is the ‘solution’ to our ‘problems’
CC Dennis Hill – https://www.flickr.com/photos/fontplaydotcom/504000141

These do hold true to a certain extent, but only if you take a strictly techno-centric view. The statements above become questionable as soon as we bring people into the picture and, of course, technology is an artefact: it’s designed, made and used by us.

The reason I feel the need to spell out this somewhat facile principle is that I worry we do think of technology as an ‘other’, outside and beyond us. We can’t grasp its complexities, so it becomes a mystery and, as is our habit, we develop superstitions about that which we do not fully understand. We almost go as far as personifying technology, which is where the problem starts.

Take, for example, the last of the statements above: ‘Technology is always developing’. It sounds fine until you consider that technology doesn’t magically develop on its own. The statement should be: ‘People are continually developing technology’. Yet we seem comfortable extracting ourselves from the picture and thinking of technology, if not as an independent consciousness, then as a self-evolving entity.

The irony is that while on the one hand we lean towards personifying technology in its apparently neutral forms, we are also extremely wary of those moments when it attempts to ape humanness directly. (Again, I can’t get away from the forms of language here, as I just wrote ‘it’ instead of ‘people design it to’.) We like to know when we are interacting with a person and when we are interacting with code, and we feel at best conned and at worst abused if we confuse one for the other.

I’ve seen this in so many forms: suspicion of bots in text-based MUDs and MOOs, our response to avatars in virtual worlds (am I controlling ‘it’ or is it ‘me’?), our distaste for algorithmically generated news, our unease with talking to search bots in public and, in my case, a complex relationship with @daveobotic, my Twitter bot.

We dislike the idea of being socially or intellectually satisfied by an algorithm because we fear things we can’t clearly define as sentient, sensing a loss of our own humanity if we discover we’ve believed the code is a person. This is a classic human concern: whether it’s a golem, Frankenstein’s monster, any number of cyborgs or an artificial intelligence, we have always been troubled by that which is animated but not explicitly alive. It’s one of the ways we explore the question of our own consciousness, a tantalising theme revisited throughout history in various forms.

I see these tensions playing out where education intersects with the digital. The business-like elements of our institutions prefer to think of technology as in and of itself efficient and neutral. The potential of technology to be the ‘solution’ for the ‘problem’ of teaching and learning at scale is attractive and, to a certain extent, operable if you frame education as a problem-to-be-solved. This breaks down, though, if we see learning as transformational rather than transactional – if we see it as a process of becoming. This is where education is intrinsically human, with all of the vulnerabilities, prejudices and general messiness that come as standard where people are involved – a form of education that anyone who has ever taught will understand.

Nevertheless, I see an emerging trend in which we set out to synthesise ‘contact’ in the digital to scale up what we claim to be transformational education, using a shell of transactions masquerading as persons. An early example of this is the planned nudging messages of encouragement, warning or even advice sent to students, driven by ‘learning analytics’.
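To make that pattern concrete, here is a deliberately crude sketch (in Python) of the kind of threshold rule such a nudging system might run. Everything in it is a hypothetical assumption for illustration – the Student fields, the cut-off values, the message templates – not a description of any real learning analytics product:

```python
# Hypothetical sketch of an analytics-driven 'nudge'. The fields,
# thresholds and message templates below are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Student:
    name: str
    logins_last_week: int       # proxy measure: platform activity
    assignments_submitted: int  # proxy measure: task completion

def nudge(student: Student) -> Optional[str]:
    """Return a templated 'warning' or 'encouragement' message, or None."""
    if student.logins_last_week == 0:
        return f"Hi {student.name}, we noticed you haven't logged in this week."
    if student.assignments_submitted < 2:
        return f"Hi {student.name}, just a reminder that your next assignment is due soon."
    return None  # no rule fired, so no 'contact' is generated

# Each message reads as if a tutor wrote it, but it is a rule firing
# on proxy data: a transaction masquerading as a person.
for s in [Student("Ana", 0, 3), Student("Ben", 5, 1), Student("Cem", 4, 4)]:
    message = nudge(s)
    if message:
        print(message)
```

Note that the rules act on proxies for learning (logins, submissions), not on learning itself, which is part of why the resulting ‘contact’ rings hollow.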

We are being tempted by this line of thought even though we have explored all of this before and know that we are masters at detecting soulless interventions. Even if our algorithms are efficient and effective, our experience will be hollow and unsatisfying. I deeply doubt our ability to develop as individuals on this basis (the ‘becoming’ form of education I believe in) and argue that while the digital can be a valuable place for people to connect with each other, technology is inherently limited in its ability to ‘scale humanly’. This is not because we are incapable of designing incredibly sophisticated code; it’s because we have an instinct and desire for the conscious.

(This line of thinking extends from the “Being human is your problem” keynote given by myself and Donna Lanclos at the ALT-C conference.)


3 thoughts on “Technology isn’t human(e)”

  1. Wilbert Kraan

    Re: the personification of computers – many years ago, I researched how people of very different IT skill sets managed to negotiate meaning when speaking about IT systems, particularly helpdesk interactions. When a user narrated the problem, they’d use embodied, spatial terms when everything went well and they were in control (“I started up x”, “I went through y”, “I logged into z”) until the point where they needed help; then the system became an adversarial person (“It wouldn’t let me in”, “It crashed my document”, “It asked me for x”).
    I wonder how that’s going to play longer term with the idea of voice activated AIs like Watson, Siri, Alexa, and Cortana et al becoming mainstream user interfaces. At least, that’s what all the big vendors seem to expect…

    With regard to the transactional versus becoming – isn’t that a matter of degree? A student on even the most transactional course can’t help but become, and even the most transformational course involves some mechanics of people management, content sequencing and assessment.

    Finally, I’d have thought that the learning analytics things that nudge learners are instruments, not really artificial people – not least because what these analytics UIs show are proxy measures for learning, not a measurement of learning itself. In that sense, terminology like “dashboard” is indicative.

    1. David White

      That’s a great example of what I was trying to get at in terms of the personification of tech. Perhaps it’s an extension of self when it’s ‘working’ and an ‘other’ when it’s troublesome?

      Fair point about ‘becoming’ etc. I go on about it because I find that external pressures on educational institutions lead to the ironing out of the becoming side of things, so it’s worth advocating for.

      I do worry about our response to learning analytics though. I know they aren’t supposed to be a ‘person’, but I can see a future where they are counted as ‘contact’…
