Note: On Computation, Logical Impossibility and Zen


Shinji Toya
10th Aug 2019
Revised on 28 Sep 2019

Recently, I've been working on a few practice experiments dealing with the limits of computation and computer vision (mainly facial recognition algorithms). I realised that when I find examples of the limits of computation through the practice, they are often found in situations where a kind of paradox or logical impossibility emerges that conflicts with the quantifiable nature of AI operations.

 

1 - The impossible undecidability of categories in a facial recognition system based on supervised machine learning

In this collaborative project with Murad Khan, developed through the SPACE Art + Technology residency, we used image manipulation to locate a pixel that acts as a kind of threshold between different categories of "race" provided by Betaface - a commercial facial recognition system. In other words, by either adding or subtracting a single value (pixel) from a manipulated image of my face, the algorithmic reading switches the racial category of the face between two states - "Asian" and "White". This suggests the presence of a discrete, preprogrammed, logical point in the system at which one "race" must shift to another.

Because a supervised-learning AI cannot invent a new category and computation is a fundamentally discrete operation (e.g. based on the 0/1 binary), the threshold pixel is discrete and conclusive. And, once this threshold pixel is mathematically deduced, we can (re)frame its identity not as something belonging to one single identity (either "Asian" or "White"), but as something that conceptually and ontologically represents the undecidability (switchability) of the racial category. [1] This undecidability as a concept is a logical impossibility for the AI, which lacks indecisiveness (i.e. it has to decide 1 or 0 to do or process anything).
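The discreteness of the decision can be shown in a toy model. This is a minimal illustrative sketch, not the Betaface system; the pixel values, the threshold and the use of the two category names are all hypothetical stand-ins for whatever the commercial system computes internally:

```python
# A toy sketch of the threshold-pixel effect (illustrative only, not the
# Betaface system; values, threshold and categories are hypothetical).

def classify(pixels, threshold=128):
    """A hard, discrete decision: there is no 'undecided' output."""
    mean_brightness = sum(pixels) // len(pixels)
    return "Asian" if mean_brightness >= threshold else "White"

image = [130, 126, 128, 128]      # a hypothetical 4-pixel "image"
print(classify(image))            # one category: "Asian"

image[2] -= 1                     # subtract a single pixel value...
print(classify(image))            # ...and the category flips: "White"
```

The point of the sketch is only that the flip happens at one exact, deducible value: the system has no way to output "somewhere between".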

[1] See Matteo Pasquinelli, Machines that Morph Logic: Neural Networks and the Distorted Automation of Intelligence as Statistical Inference, for more on AI's lack of capacity to invent a new concept.

[An animation made and played on a smartphone for the exhibition of the project]

2 - The impossible vagueness: Sorites Paradox

The limit of computer vision here is related to its intrinsic lack of (human) indecisiveness, as opposed to the decisive discreteness of computation. Furthermore, the perceptual gap between machines and humans may appear as a kind of capacity to perceive vagueness.

Given that, I used the Sorites paradox (which asks: how many grains of sand do we need to remove from a heap of sand before it stops being a heap?), which is supposed to give a sense of the vagueness of perception. This vagueness comes into play where we are unable to be logically decisive about whether a single grain of sand makes a heap a heap: the impossibility of locating the threshold in practice.

Yet, a face detection algorithm answers this question by overlooking the paradox; in the following work-in-progress video sketch, I ask a similar question by testing a face detection algorithm: "how many pixels do we need for a face to emerge in a picture?". And of course, the face detection returns an exact number of pixels without hesitation. [2] The moment at which we start seeing a face or face-like pattern in the image usually differs from that of the computer vision, demonstrating the gap between the two ways of seeing.

 

Again, similarly to the practice of the racial threshold above, in this practice a certain single pixel acts as a threshold for the perception of a face. Yet, the human perception of vagueness is not to be found in the machine "perception" of a face, demonstrating the impossibility of the machine being indecisive and embracing the paradox. In other words, the vagueness (like the undecidability) is the machine's own blind spot, located beyond its logical limit.
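The Sorites-style test can also be sketched as a toy model. Again this is an illustrative stand-in for a real face detector, and the cutoff number is entirely hypothetical; what matters is that the "emergence" of a face happens at one exact count, with no room for "it is starting to look like a face":

```python
# A toy stand-in for a face detection algorithm (the cutoff is hypothetical):
# it answers "how many pixels make a face?" with an exact number, never "maybe".

FACE_PIXEL_CUTOFF = 947  # a hypothetical, arbitrary threshold

def detects_face(num_face_pixels):
    """A discrete yes/no: nothing gradually 'becomes' a face."""
    return num_face_pixels >= FACE_PIXEL_CUTOFF

# Add pixels one at a time and record the exact flip point.
for n in range(FACE_PIXEL_CUTOFF + 1):
    if detects_face(n):
        print(f"A face 'emerges' at exactly {n} pixels")
        break
```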

[2] The analogy with the Sorites paradox was first suggested by Chris Williams.

3 - Zen, Paradox and AI

Given that the use of paradox or contradiction above works in some ways to reveal the limits of AI and computation, I looked at some other discussions of contradiction and paradox. I found Douglas R. Hofstadter (in Gödel, Escher, Bach: An Eternal Golden Braid) discussing the relationship between Zen, paradox and computation. According to him, Zen practices like the koan provide questions that make us confront the limits of logic, reason, naming, categorisation and the other modes of conceptual discretisation of things in the world (or the "dichotomies"), forcing us to move away from these domains of thinking and unlearn. Otherwise, the paradoxical questions of koans can remain nonsensical.

One example of a koan is...
“When both hands are clapped a sound is produced; listen to the sound of one hand clapping.”

Now I'm thinking about what it could mean for an AI to practice Zen through the use of paradox. AI bias arises through categorical decisiveness and the replication of existing cultural biases. Is it possible, then, to let an AI unlearn this decisiveness, symbolically or operationally, by adapting some Zen methods? For a computer to practice Zen, by definition, it would have to give up reason and logic; this means it would have to fail, or become unable, to "compute" - signalling an inherent contradiction for its operation. This is theoretical speculation, but in practice, what do the confrontations with the paradox unfold, and what could they teach us?

One work-in-progress example of this speculative practice can be shown through our confronting the following question:

What is the age of the face in the image below?

If you are unsure, why can't we use what you literally thought in your mind (something like "...?" perhaps?) as a label for the "age" category of this image, which can then be used to train/retrain a facial recognition algorithm? This algorithm would have an emulated indecisiveness (as a defined category, paradoxically) when classifying a face.
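A minimal sketch of this idea, with hypothetical feature vectors and labels and a 1-nearest-neighbour toy standing in for a trained facial recognition model: the viewer's "...?" simply becomes one more discrete class, which is exactly where the paradox becomes visible in code:

```python
# Hypothetical sketch: human indecision ("...?") treated as one more training
# label. The paradox shows itself here: the unsure answer is still returned
# as a hard, decided category.

training_data = [
    # (hypothetical feature vector, "age" label)
    ([0.2, 0.8], "20-30"),
    ([0.5, 0.5], "...?"),    # a viewer who could not decide an age
    ([0.9, 0.1], "60-70"),
]

def nearest_label(features):
    """1-nearest-neighbour: even indecision is output decisively."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_data, key=lambda item: sq_dist(item[0], features))[1]

print(nearest_label([0.45, 0.55]))  # -> "...?" (a decided indecision)
```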

After training an algorithm with many images produced through this procedure, will we have an AI that is both Zen and not Zen, i.e. one that adapts some practice of Zen yet is inherently self-contradictory with regard to Zen's supposed abandonment of reason (reason being what enables computation today)?

And what does this gap between computation and the non-computable part of culture suggest to us in relation to today's algorithmic society?