“I am because you are”: African language and perspectives in the future of artificial intelligence

Andrea Teagle

What does it mean to talk about African Artificial Intelligence? At a recent workshop co-hosted by the HSRC and the Centre for Humanities Research, University of the Western Cape, scientists and humanities researchers gathered to explore the impact of language, history and culture on AI, and how African perspectives might allow for the emergence of both different technologies and different ways of relating to them.

The chimp looked around curiously. He leant forward, rocked back, brought his face close to a leaf on the ground and then brushed it aside. He picked up a twig and inserted it into an ant-heap, his breath audible. Raising it up to his mouth, he chattered excitedly. His upper lip drew back and in response, the faces of the people watching softened.

The chimp was not a chimp. It was a wooden mannequin. Or, as Professor Jane Taylor, director of the Laboratory of Kinetic Objects (LoKO) put it, a conglomerate of beings: his creator, Adrian Kohler of the Handspring Puppet Company, the puppeteer Gabriel Marchand, who controlled his right side, and the actress Terry Norton, who controlled his left. He was something else too: the imaginations of the audience. In a matter of seconds, minds discarded the visual evidence of the two actors and the obvious material construct of the chimp, instead noting the micro-movements, the sounds, the light catching the cleverly crafted eyes. The mannequin became live.


A conglomerate of beings: Puppeteer Gabriel Marchand and actress Terry Norton with the chimp mannequin at the performance lecture, Cape Town, 2 March 2020. Picture: Antonio Erasmus, HSRC. 

Taylor was speaking at a performative lecture introducing the workshop “Diverse Artificial Intelligence: Representation, Translation and Embodiment”, hosted by the HSRC in partnership with LoKO. Comprising a diverse range of speakers, the workshop explored people’s relationships with technology and the role of language in imagining and crafting different futures from African perspectives.

“The infant will only become fully human if we believe in its potential to do so,” Taylor argued, the chimp sitting quietly on a table nearby. “...We project ourselves onto one another. And this is the basis of sympathy. Sympathy in such terms might be implicated both in how we imagine identity and how we imagine difference.”

Throughout history, people have had a preoccupation with the ‘not quite human’, with finding the line between likeness and difference. The speakers traced the shifting front of this inquiry into what makes us us: from perceived distinctions between humans and apes, to those between population groups, and now, between human and artificial intelligence.

“We’ve been here before”

Dr Nedine Moonsamy, an English lecturer at the University of Pretoria, argued that commonplace science fiction comparisons between human-AI relations and race relations are more than casual metaphors. Referencing the scholar Louis Chude-Sokei, she argued that this history of racial othering directly informs the problem of relating to AI today.

Historically, the concept of personhood developed in direct response to slavery, as a means of conferring legal protection on black people, writes Chude-Sokei. “As a pliable and politically fraught category, [personhood] depends on animating the inanimate, or giving soul to the soulless. It features the bestowing of ‘life’ or social recognition to objects…”

So, he argues, we have been here before. “The imminent moment when machines become citizen, separate and/or equal, or when things become people and emigrate to our shores, will merely reiterate a not so distant past. This is one where blacks, reduced to object status, [and] denied souls and intelligence... became and merged with human being.”

Although the question of how humans will relate to digital beings often revolves around whether AI achieves free will or consciousness, Moonsamy argued that in fact that question is tangential. As colonial history demonstrates, “humanness” is irrelevant to tolerance. In fact, AI that is too human-like has been shown to elicit more hostility than characters that are more obviously works of imagination. We relate easily to a mannequin, and we can entirely fail to recognise real humanity.

Moonsamy suggested that what is required, if we are to move away from oppression of the ‘other’, is not an act of sympathy or an attempt to make a more ‘perfect’ or optimised intelligence, but acts of vulnerability: the reciprocal embrace of complexity and revealed flaws.

“This is not just about how we will treat our robots,” she said. “The true relationality is where we become curious about how they will treat us.”

AI as a branch of literature

Language is a technology, Moonsamy said, observing that in science fiction, words constitute the building blocks of fictional worlds. Similarly, it is a kind of language that allows programmers to translate those imaginings into reality.

AI might be considered a branch of literature, as much as it is a branch of science, writes Professor Alan Blackwell of the University of Cambridge. “Algorithms are constructed as texts, composed in a programming language. Although programs are relatively formalised texts, resembling film scripts or plays more closely than prose, they are still undoubtedly literary works, protected by copyright, and attributed to one or more authors.”

Seen in this way, AI reflects not only the imaginations but also the biases of the people who create it. The HSRC’s Dr Rachel Adams pointed to the gendered biases reproduced in AI, evident, for example, in the feminised forms of virtual assistants like Siri. In machine learning, bias can occur because of the way the algorithms themselves are written.

AI also reflects the realities and biases in the data it receives. Fed a dataset of labelled images, for example, an algorithm might learn that programmers are white men. “The data reflects the realities, and the reality is biased,” the HSRC’s Dr Michael Gastrow said. Addressing such biases is challenging because it relies on improving datasets, or finding subsets that are more representative across race, gender and other sites of prejudice. That requires even more data collection, which raises new questions around privacy and data exploitation.
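The mechanism Gastrow describes can be made concrete with a minimal sketch. The label counts below are invented purely for illustration (not real statistics): if images tagged “programmer” are skewed towards one group, even a naive model that predicts the majority association will reproduce the skew of its training data.

```python
from collections import Counter

# Hypothetical, illustrative group labels for images tagged "programmer".
# The numbers are made up to show the mechanism, not to describe real data.
labels = (
    ["white_man"] * 80
    + ["white_woman"] * 10
    + ["black_man"] * 5
    + ["black_woman"] * 5
)

counts = Counter(labels)

# A naive model "learns" the majority association in the data:
majority_group, majority_count = counts.most_common(1)[0]

print(majority_group)                 # the group the model over-predicts
print(majority_count / len(labels))   # that group's share of the data
```

The point is not the toy model but the dependency: the prediction is only as representative as the counts that produced it, which is why speakers stressed improving the datasets themselves.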

Professor Antoine Bagula of the University of the Western Cape and other speakers emphasised the importance of developing local AI, both to correct bias and to open new possibilities for technology and its application. As Blackwell argues, if AI is understood as a branch of literature, then how could African AI not be different from AI written elsewhere?

The chimp mannequin gazes out over the audience at the public lecture, Cape Town, 2 March 2020. Picture: Antonio Erasmus, HSRC. 

What African AI will look like will not be uniform, any more than the histories, and technological histories, of different African countries are uniform. As examples, Dr Tegan Bristow, Director of the Fak’ugesi African Digital Innovation Festival, pointed to the contrasting historical landscapes of technology in Johannesburg, where radio was used to stoke ethnic tensions across homelands, and Nairobi, where social media via mobile phones became a powerful democratic tool at a time when there was very little media freedom. “That’s a history that belongs to Kenya and only Kenya,” she said.

The digitisation of African languages

If language is a technology, it is also a weapon, as South African writers and activists like Steve Biko well understood. Anglonormativity – the way that power is conferred on those who speak English fluently, while accented or broken English is often equated with a lack of intelligence – continues to inform the experiences of people in South Africa and many other parts of the world. The absence of African languages in the digital sphere is in some ways a continuation of the subjugation of non-English (and non-digital) speakers.

Mmanape Hlungwane, a sessional lecturer in Sesotho and master’s student in the Wits department of African languages, observed that even when indigenous language options are available at ATMs, for example, people still choose the English option. “It is the narrative that we have; it’s not possible in our heads that we can use [our mother tongues] in that technical field,” she said.

The language of technology is often associated with masculinity, and yet the history of programming is not a male one, as Bristow and Adams pointed out. Indeed, the structure of coding is familiar from feminised work like knitting. Bristow spoke about programmes she is leading to introduce women to coding through beadwork, using this language as a portal to a previously inaccessible and foreign world, while simultaneously acknowledging and giving voice to their own histories and cultures.

The absence of African languages in the digital realm, it was suggested, limits the kinds of AI that can develop. If language is a technology, then clearly mere digital translation is insufficient: it fails to recognise that the building blocks of English and other languages are fundamentally different, both texturally and ideologically. (For the same reason, simply providing translated school texts fails to address the language problem in South African schools.) The localisation of technology, Hlungwane stressed, therefore means creation.

What are the worlds we wish to create? As immediate applications for AI, Bagula spoke about addressing the genomics data gap (most genomic research has been on European populations), the problem of racial bias in facial recognition, and a project for digital translation in African languages.

Technology as an extension of mind  

Arguably, even before technology as we think of it today, the human mind was not bound by the skull. The use of tools to communicate and to outsource executive function dates back to drawing marks in the sand, said the artist Leonard Shapiro, speaking at the workshop.

Shapiro demonstrated how students’ understanding of an object, and their ability to draw it, improves significantly when they’re encouraged to feel it with their hands, to focus on the internal structure rather than the appearance of the surface. Arguably, our own cognition is fundamentally based on an understanding of our bodies in space in relation to other bodies in space. What, then, are the limitations in relating to disembodied intelligence?

Part of the value in imagining African futures is in making room for different methods of knowing, or of approaching the unknown. The scholar Clapperton Chakenetsa Mavhunga writes that the people of pre-colonial Zimbabwe, vedzimbahwe, held a philosophy with a different premise to both capitalism and communism.

“Both the Western and Soviet system start with the individual and move toward the collective (class), whereas vedzimbahwe start from community or the communal,” Mavhunga writes. “[H]ence the sayings: ‘A person is not a person without others’ and ‘It takes a village to raise a child.’”

Critically, this world view moves away from a human-centric perspective. “Vedzimbahwe exhibited humility to learn from animals big and small. To them, animals were no mere fauna or species but indivisible from the human.” Similarly, as the HSRC’s Buhle Khanyile observed, embedded in the isiZulu language – for example in the greeting “Sawubona”, a shortening of “I see you” – is the recognition of consciousness as a phenomenon arising between beings.

The University of Cape Town’s Vedantha Singh suggested several possibilities for our current relationship with technology: that technology is separate from us; that humans and technology come together for specific tasks and then separate; or that “technology and humans are so intrinsically linked to one another that after a while you can’t really tell them apart any more – can’t really tell where the work of the technology started and where the work of the human started”.

In this last view, it makes little more sense to speak of people as separate from technology than people as separate from other people.

Perhaps then, in imagining African futures of AI, there is room for the beginnings of a different way to relate to the perceived ‘other’, and to conceptualise our relationship and responsibility to the earth more broadly: one that allows for vulnerability through understanding ourselves as part of, rather than distinct from, the world. Like the mannequin, we too are a complex of beings; we come alive in the eyes of others.

Read more about Rachel Adams’s Languages of Artificial Intelligence in Africa project here.

Andrea Teagle (@AnzTeagle on Twitter) is a science writer with the Human Sciences Research Council in South Africa.