NGI Salon Explainability with Beatrice Fazi

7/03/2021 - 09:00

As we get closer to pragmatic cybernetics, already functioning in a relatively young form in China, and as we move towards digitizing identity as a capability in a hybrid human-machine reality, it is clear that we cannot do this from the view of applications, services, and infrastructure alone, without an organizational model that takes into account the ethical and theoretical-political aspects of governance.

Cyber-Physical Systems (CPSs) are engineered systems “integrating information technologies, real-time control subsystems, physical components, and human operators in order to influence physical processes by means of cooperative and (semi)automated control functions. [...] key features of CPSs are:

• (1) real-time feedback control of physical processes through sensors and actuators;
• (2) cooperative control among networked subsystems; and
• (3) a threshold of automation level where computers close the feedback control loops in (semi)automated tasks, possibly allowing human control in certain cases.” (Guzman et al. 2019)

The human is seen as an “operator” integrated, alongside components and subsystems, into an ultimate decision-making feedback control loop in which human control is allowed “in certain cases.”

In: van Kranenburg R. et al. (2020) Future Urban Smartness: Connectivity Zones with Disposable Identities. In: Augusto J.C. (eds) Handbook of Smart Cities. Springer, Cham.


"Winner noted that, ‘upon opening the black box’, the risk was of ‘finding it empty’. In a parallel yet distinct sense, we can borrow Winner’s famous expression to consider now whether contemporary XAI’s imperatives of opening the black box are running a similar risk. If there is indeed such a risk, it is less of finding the black box empty than of realising that there is nothing to translate or to render precisely because the possibility of human representation never existed in the first place."

Beyond Human: Deep Learning, Explainability and Representation
M. Beatrice Fazi
First published: 27 November 2020 | Research Article

This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning is its case study. This artificial intelligence (AI) technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle the possibility to ‘re-present’ the algorithmic procedures of feature extraction and feature learning to the human mind. The article thus mobilises the notion of incommensurability (originally developed in the philosophy of science) to address explainability as a communicational and representational issue, which challenges phenomenological and existential modes of comparison between human and algorithmic ‘thinking’ operations.