Explainable embeddings project

Motivation and theoretical outline of the project

Advances in AI offer numerous opportunities for humanity to address diverse challenges, including image recognition, satellite-based identification of danger zones, and text translation. However, a common issue in contemporary AI systems is the lack of transparency and explainability, which impedes the development of fully integrated AI solutions for real-world problems.

To this end, we leverage the mathematical methods of differential geometry (manifold theory, the concepts of affine spaces and geometric algebras, stochastic processes, and dynamical systems analysis) and integrate them with state-of-the-art computer science methods (topological and geometric analysis in deep learning, embedding and dimensionality reduction algorithms). Since high-dimensional data is inevitably non-linear, which creates challenges in interpreting data patterns, developing robust, interpretable embedding methods for such analysis is of high importance (on this, please see our Blog on Infra-data analysis).
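As a minimal illustration of the kind of analysis we have in mind (a sketch only, assuming scikit-learn and its synthetic Swiss-roll dataset, not part of this project's codebase), the snippet below contrasts a linear projection (PCA) with a manifold-learning method (Isomap) on data that lies on a curved two-dimensional surface embedded in three dimensions:

```python
# A sketch: linear vs. manifold-based embeddings of non-linear data.
# Assumes scikit-learn; the Swiss roll stands in for high-dimensional data
# that lies on a curved low-dimensional manifold.
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# Points sampled from a 2D manifold rolled up in 3D; `color` encodes
# position along the roll, i.e. the structure an embedding should preserve.
X, color = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)

# Linear projection: PCA flattens the roll and mixes distant parts of the manifold.
X_pca = PCA(n_components=2).fit_transform(X)

# Manifold learning: Isomap approximates geodesic distances along the surface,
# so the unrolled 2D coordinates better reflect the intrinsic geometry.
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print(X_pca.shape, X_iso.shape)  # both (2000, 2), but with very different geometry
```

On such data, distances in the linear projection conflate points that are far apart along the manifold; this is precisely the interpretability gap that geometric and topological tools are meant to close.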

Through analysis of the latent spaces of AI models, we trace patterns in how AI systems process information, which in turn contributes to the design of explainable AI systems. More broadly, the new mathematical frameworks needed to understand AI systems create a new kind of 'mirror' of how human thinking is organised.
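As a schematic example of what latent-space analysis can look like in practice (a sketch under assumed tooling, PyTorch and scikit-learn, with a hypothetical toy classifier standing in for a real trained model), one can record hidden activations with a forward hook and project them to two dimensions for inspection:

```python
# A sketch: extracting latent representations from a network and projecting
# them to 2D to inspect how inputs are organised internally.
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# Hypothetical toy model standing in for a real trained network.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 3),
)

latents = []

def capture(module, inputs, output):
    # Store the hidden activations (the "latent space") for later analysis.
    latents.append(output.detach())

# Hook the hidden layer so every forward pass records its activations.
model[1].register_forward_hook(capture)

x = torch.randn(256, 20)  # placeholder inputs; real data would go here
with torch.no_grad():
    model(x)

hidden = torch.cat(latents).numpy()                  # (256, 64) latent vectors
coords = PCA(n_components=2).fit_transform(hidden)   # 2D view of the latent space
print(coords.shape)
```

From such latent vectors, geometric or topological descriptors (neighbourhood graphs, curvature estimates, persistent homology) can then be computed to characterise how the model organises information.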

Context: human-machine interaction

We are interested in investigating why modern neural networks, trained on large amounts of data (see recent models here), often fail at simple tasks while being able to solve complex ones. The general phenomenon of grounding embeddings and explainability in current AI systems is of high importance given that human-AI interaction systems are becoming increasingly ubiquitous. Hence we aim to develop interdisciplinary approaches that tackle this problem across the domains of AI and mathematics (neural network-based models in particular), in collaboration with neuroscientists.

Relation to computational linguistics and blind models
We hypothesise that, in order to simplify the problem of understanding why AI models generate 'wrong text', we need to consider the cognition of people with specific abilities (e.g. visually impaired people, or people with limitations in reading or writing text). Through such collaborations we can then understand other dimensions of possible text-generation issues in AI systems (see current projects below).

Project formalisation and funding
This is a collaborative effort by an interdisciplinary team of researchers from computer science, mathematics, physics, digital art, music, and art history. Importantly, the project has several dimensions, including scientific and visual components developed together with researchers at Bell labs (France), and is carried out in collaboration with research partners from SEMF (Spain), LPI (France), and Dark Matter labs (UK).
Current projects: Silbersalz Institute (Germany), Internship student project (Bell labs, France)