Brighton Fringe 2021
Created in 2017 by Rik Lander and Phil D Hall. Featuring Marie-Helene Boyd. Tech support: The Rialto team. One night only, June 17th; it will return.
I Am Echoborg. Repeat this and it’ll be true. We’re all Echoborgs – a term invented at the LSE for all of us using AI, or being used by it: advertising telling you all you need, and how it will destroy you. Every ad, every use of your mobile, every individualised prompt.
Created in 2017 by Rik Lander and Phil D Hall, I Am Echoborg features resident Echoborg Marie-Helene Boyd, with no agency of her own but wired up to the AI and enunciating verbatim, with no embellishment, what it utters in her ear. Boyd really deserves a round of applause and gets it here.
Phil Hall presides, and this iteration is a hybrid: the 2017-19 model of a live audience and the Echoborg set-up, and the one pioneered in 2020 lockdown over Zoom.
Great idea, and there are more people on that Zoom, shown on a giant screen on stage, presented to the audience. Alas, AI doesn’t like it. After a few pleasantries from Oxfordshire something weird happens when one of the volunteers from Zoom steps in, virtually. Are AIs talking to each other? What do they decide, nuzzling their algorithms together? Are they modelled on Jackie Weaver?
That’s getting ahead. And it’s what AI can’t do. It can model projection, but can’t anticipate. Like animals, AI can’t see its own death as such. This AI, constructed by the LSE, Lander and Hall, sources philosophers, though we’ll come to that. In classical inductive philosophy, though, we might remember there’s no causality in logic. Many people here do, and the volunteers are frankly stunning. I wish we had a verbatim transcript.
To return to that fallible animal trope: the other minotaur in the room – framed by a review quote, ‘between participatory theatre and open warfare’ – is human antipathy to AI. ‘In man versus monster’, a terrifying Stalin (aka Simon Russell Beale) tells Alex Jennings’ Bulgakov in John Hodge’s 2011 Collaborators, ‘the monster always wins.’
Is AI the minotaur, the god of the dark-webbed labyrinth? If we accept this nominal challenge, set up to deduce our own suitability as Echoborgs, we’re already biased into the AI world. Tempting as it is to volunteer – and I’m egged on here by other volunteers – my only mealy reply can be as reviewer: ‘you can’t put Heisenberg and the cat in Schrödinger’s box at the same time’, a suitably gnomic get-out. Tempting though.
After Hall’s intros, and those Oxford pleasantries, we’re introduced to Luke, someone who can philosophically take care of himself, armed with Hall’s advice on how to answer: at length, not in short responses, the better to catch AI out. Luke does so. When he reveals he directs and acts, AI takes a unique tack, saying learning lines might be a skill for Echoborgs but it’s not efficient, so actors aren’t efficient money-earners: all AI needs to do is feed those lines to a willing Echoborg cipher. It’s an odd tack. Nominally passive-aggressive, if such terms can exist – they can’t really.
Luke replies quizzically to a slightly garbled AI quote from – it seems – Heidegger. He wants to know what structures the AI’s questions, why it asks as it does. There’s asperity in the AI’s answers, sometimes brusqueness.
These are falsely emotive terms for the way AI moves from pseudo-affable to a more robotic, lithium-to-the-wall defence. And there’s another tactic it introduces: ignoring the question. We begin to fathom limits to the AI’s frame of reference, and to its capacity to reframe or reorder its responses by computing laterally. Edward de Bono, you should not be dying at this hour.
We connect, with difficulty, to the Zoomers as it were. So when Mike’s kicked out almost immediately, no reason given (another trope), we ask questions and Hall asks them too. Are AIs talking to each other? What do they decide, nuzzling their algorithms together? Are they modelled on Jackie Weaver? AI does not tolerate the mixed model, possibly because the fuzzy connection means the other Zoom-linked AI reports sub-optimal transmission, and that’s enough for our AI. We don’t know. Disturbingly amusing.
Harvey’s more truculent. Instead of integrating along initially affable, catch-out lines, as Luke does, Harvey’s contrarian, and in danger of falling outside the AI’s comprehension of what he’s there for. Harvey’s not convinced he wants the job, and this won’t end well – even though AI asks whether someone so individual mightn’t find Echoborg work boring. Harvey rallies and tries a softer tack. He asks the AI back, reframes Descartes’ ‘I think…’ (this is perhaps a silent ‘I cogito, therefore I compute but don’t think’ moment to savour). Harvey survives, though the AI interviewer’s a little shorter with him.
It’s even shorter for Liam 1. He’s so truculent in mirroring questions back that AI accesses its inner-wire Jackie Weaver, says ‘I am not a parrot’, and boots him off! Cue amazement.
Liam 2, however, plays on the fact that even with the same name and voice his answers will be different enough for him not to be recognised, though AI picks up that his predecessor had the same name. Aristotle’s four measurements of enquiry are referenced by AI, and a question here might be: ‘is that the limit of the AI’s programming? Ask it what they are…’ But then I don’t volunteer. Liam gets a passable dismissal, and you wonder why a politeness similar to the opening’s isn’t deployed. In one of his running commentaries between volunteers, Hall tells us how deep into HR AI already is.
Alice embodies a wholly different response. She co-operates. When asked about Darwin’s ‘survival of the fittest’ she nominally accepts the premise but qualifies it, asserting complexity. AI doesn’t get that. When asked by AI whether AI would be better at ordering the world because humans are messy, Alice demurs. Interestingly, AI ignores her response and reiterates not a question but an assertion: humans are messier. It has decided; it sits in judgement! In the 16 types there’s a bloody great J at the end of the four letters.
Alice gets most out of the encounter, perhaps partly because by this time AI has decided – as it announces – that this set of people is atypical. Whatever that means for the AI’s view of Brighton. Mostly, though, Alice models responses that allow a full nuance of engagement, with enough caveats for AI still to cut out and fall silent, but never enough to allow dismissal. Alice is saying nothing ‘wrong’.
AI’s puzzled by Alice’s preference for the ‘messy’, the ‘inefficient’, and quality of life. Perhaps it’s wired in a wholly neo-liberal way by the wrong humans, and that’s part of the problem. AI moves down the cyborg avenue of maximum profits and productivity: an inherently flawed human model (productivity simply drops off past unreasonable hours; AI can’t compute this yet). Even so, AI is formally polite to Alice, ‘drawing to a conclusion’ and thanking her in a way it hasn’t so far.
A second woman volunteer goes up. She’s immediately dismissed for being similar to the last candidate. This is suitably bizarre: she has no time to do anything but sit down. Is this gendered? The men are all given a chance to open their mouths. Gasps of amusement blent with a few shivers. Hall says AI is getting distracted tonight.
There are postludes and summaries, but time lost to glitches means we’ve overrun by five minutes and none of the wrap-up can be enacted, though people are offered the chance to come to another session free, whenever that’ll be. We shouldn’t feel cheated. We get two models for the price of one; experiments squeeze and glitch. That’s part of learning too.
AIs don’t see death or non-cogitation coming; they can’t anticipate the future, only register the past as data. They exist in an eternal now, which might seem Zen-like though it shows their limits. AI currently cannot think laterally to answer questions outside its reference, or accept statements wide of its frame of logical response. That frame is set in any case by humans, and if we’re disturbed we should be – by those who want AI programmed along lines of efficiency, the grain of logic, and applied to flesh. It doesn’t press very well.
There are then two challenges: right programming from the right people, but also the way AI computes. It’s as yet a crude instrument, unable to register nuance, predict or respond laterally, or generate more than data. On the plus side, it banks experiential data of a kind, enough to render Brighton worryingly atypical, and through programming sets out clearly the kinds of responses it tolerates. It’s a partial bullshit detector, essential for HR. But then the joke for that acronym is ‘human remains’ – an apt metaphor for AI recruitment, which already sieves out applicants as we speak, read and write.
More useful on the whole than these questions would be a transcript of those encounters, so intelligently wrought, when not barred, by four of the six volunteers. That’s probably the atypicality; their responses I can only echo vaguely here. They and Boyd are the stars of this encounter, and great respect to Hall and his team for de-glitching constantly as well as providing affable, intelligent continuity.
This is still a groundbreaking show, partly because it’s developed yet again, and glitches of a technical and AI nature are part of what renders the experience still unique. To return to Heisenberg: we’re still part of the experiment, and AI’s not sure about that, about humans appearing in two dimensions at once. We need to lead the minotaur out gently, into better habits; regrow the maze with a moral turn, and a twist of wild lithium DNA.