Bilateral AI aims to develop the foundations for broad AI systems with interacting sub-symbolic and symbolic components, as depicted in the following figure, which gives an overview of the research agenda of the BILAI Cluster of Excellence:
Eight interconnected Research Modules (RMs) will address the following topics:
- Graph-based structures (RM1)
- Context (RM2)
- Knowledge and Learning (RM3)
- Reasoning (RM4)
- Causality (RM5)
- Explainable AI (RM6)
- Ethical AI Systems (RM7)
- Demonstration and Benchmarking (RM8)
Graph-based structures (RM1)
Graph-based structures are highly relevant to all the essential properties of a broad AI. They are inherently symbolic and often equipped with sub-symbolic attributes such as costs or interaction strengths. They are omnipresent when solving complex tasks, appearing as navigation maps, as social or physical interaction networks, or as object relations. Graphs are ideal for transferring knowledge: their nodes and edges represent learned or known abstractions of real-world entities; their structure is typically very robust; they can be readily adapted to new situations or even constructed on the fly; they allow for advanced reasoning; and they permit the use of efficient algorithms from computer science. Because of their inherently symbolic nature and their suitability for learning and sub-symbolic elements, graphs naturally constitute a promising starting point as a core component of a bilateral AI approach.
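The combination described above can be sketched in a few lines: a toy navigation map (our own illustrative example, not a BILAI artifact) whose nodes are symbolic place labels, whose edge weights are sub-symbolic attributes (travel costs), and over which a classical efficient algorithm, Dijkstra's shortest path, operates.

```python
import heapq

# Hypothetical toy navigation map: symbolic node labels with
# sub-symbolic edge attributes (here, travel costs).
graph = {
    "home":   {"office": 4.0, "park": 2.0},
    "park":   {"office": 1.5, "cafe": 3.0},
    "office": {"cafe": 1.0},
    "cafe":   {},
}

def shortest_path_cost(graph, start, goal):
    """Dijkstra's algorithm: an efficient classical algorithm that
    exploits both the symbolic structure and the numeric weights."""
    frontier = [(0.0, start)]          # (cost so far, node)
    best = {start: 0.0}
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue                   # stale queue entry
        for nbr, w in graph[node].items():
            new_cost = cost + w
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost, nbr))
    return float("inf")

# home -> park -> office -> cafe costs 2.0 + 1.5 + 1.0 = 4.5
print(shortest_path_cost(graph, "home", "cafe"))
```

Adapting such a graph to a new situation amounts to adding or reweighting nodes and edges, while the algorithm itself remains unchanged.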
Context (RM2)
Humans typically use context to solve complex tasks, drawing on associations with recent or past experiences. New situations can be associated with stored ones to provide a context, which makes humans very robust against domain shifts and allows them to adapt quickly. Since only the abstraction of a situation is stored to provide context, memory- and context-based processing is highly suited for knowledge transfer. We will investigate novel approaches to combining symbolic and sub-symbolic representations of context and memory in bilateral AI systems, inspired by the memory system of the brain. Our aim is to improve AI systems with respect to robustness, adaptivity, abstraction, and knowledge transfer using bilateral AI approaches.
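The association step above can be illustrated with a minimal sketch (our own construction, under the assumption that situation abstractions are feature vectors): a new situation is matched against stored abstractions by similarity, and the best match supplies a symbolic context label.

```python
import math

# Illustrative memory: each entry pairs a sub-symbolic abstraction
# (feature vector) with a symbolic context label. Names are hypothetical.
memory = [
    ([1.0, 0.0, 0.2], "kitchen: making coffee"),
    ([0.0, 1.0, 0.1], "street: crossing road"),
    ([0.9, 0.1, 0.3], "kitchen: washing dishes"),
]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recall_context(query):
    """Associate a new situation with the most similar stored
    abstraction, returning its symbolic context label."""
    return max(memory, key=lambda item: cosine(item[0], query))[1]

print(recall_context([0.9, 0.05, 0.25]))  # recalls a kitchen context
```

Because only abstractions are stored, the same memory can serve new but similar situations, which is the basis of the robustness and transfer claims above.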
Knowledge and Learning (RM3)
This research module focuses on the integration of efficient computational logic, symbolic reasoning, and expert knowledge into machine learning. While enhancing deep neural networks with symbolic background knowledge and reasoning has received considerable attention in the recent literature on neuro-symbolic AI, the problem goes far beyond this aspect. We will focus on integrating efficient methods from computational logic for optimizing symbolic machine learning models, as well as on integrating learned knowledge into reasoning modules rooted in expert and domain knowledge bases and knowledge graphs (KGs). By thus combining machine learning with symbolic approaches, this research module will lay a foundation for bilateral AI.
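One common way to inject symbolic background knowledge into learning, sketched here purely for illustration (this is our own toy construction, not the module's specific method), is to compile a logical rule into a penalty term added to the training loss, so that rule-violating predictions cost more.

```python
# Background rule: edible(x) -> not poisonous(x),
# i.e. the world (edible AND poisonous) should get no probability mass.

def rule_penalty(p_edible, p_poisonous):
    """Probability mass assigned to rule-violating worlds, assuming
    (for illustration only) that the two predictions are independent."""
    return p_edible * p_poisonous

def total_loss(task_loss, p_edible, p_poisonous, weight=1.0):
    """Task loss augmented with the symbolic-knowledge penalty.
    `weight` trades data fit against rule compliance."""
    return task_loss + weight * rule_penalty(p_edible, p_poisonous)

# A prediction that violates the rule is penalized more heavily:
print(rule_penalty(0.9, 0.8))   # strong violation
print(rule_penalty(0.9, 0.05))  # nearly consistent with the rule
```

Since the penalty is a smooth function of the predicted probabilities, it can be minimized by gradient descent alongside the ordinary task loss.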
Reasoning (RM4)
This module aims at enhancing reasoning systems as needed for solving complex tasks, using sub-symbolic methods to improve them. The ultimate goal is to transfer the success story of systems like AlphaGo to more general settings, where the rules of logical reasoning replace the much simpler rules of board games. Advanced reasoning engines must be efficient, as required by broad AI systems. Such reasoning systems naturally rely on proper abstraction, are robust, and offer easy knowledge transfer.
Causality (RM5)
This research module aims at revealing and utilizing valid causal mechanisms underlying the data via efficient causal learning and inference methods based on hybrid, semi-symbolic representations that implement symbolic computations. Causal learning is relevant for advanced reasoning, adapts well to new situations as long as the causes remain the same, and facilitates knowledge transfer because it separates causes from effects. Causality shall be used in both symbolic and sub-symbolic techniques in our bilateral AI approach.
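The separation of causes from effects can be made concrete with a minimal structural causal model (a generic textbook-style sketch of our own, not a BILAI method): once the mechanism Y := 2X + noise is known, the effect of an intervention do(X = x) can be predicted directly, rather than estimated from observed correlations alone.

```python
import random

# Toy structural causal model: X -> Y with mechanism Y := 2*X + noise.

def mean_y(do_x=None, n=10_000, seed=0):
    """Average Y under the SCM. If do_x is given, X is set by an
    intervention do(X = do_x); otherwise X follows its natural
    distribution (standard normal)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = do_x if do_x is not None else rng.gauss(0, 1)
        y = 2 * x + rng.gauss(0, 0.1)  # the invariant causal mechanism
        total += y
    return total / n

print(mean_y(do_x=1.0))  # E[Y | do(X=1)], approximately 2
print(mean_y())          # observational E[Y], approximately 0
```

The mechanism (the line computing `y`) stays fixed across both regimes; only the assignment to `x` changes. This invariance is what lets causal knowledge transfer to new situations in which the causes remain the same.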
Explainable AI (RM6)
Explainable AI aims to exploit symbolic techniques in order to make deep neural networks more interpretable and explainable. This includes aligning deep neural networks with symbolic theories, as well as integrating symbolic reasoning into deep learning models. We also aim at improving the generation of natural language explanations and at developing techniques for evaluating explainable AI, overcoming current shortcomings of natural language explanations by AI systems. Explainable AI is a prerequisite for interactions of AI systems with humans and for knowledge transfer both from AI to humans and from humans to AI. Good explanations must also align with human abstraction.
Ethical AI Systems (RM7)
Ethical AI Systems are a pervasive topic across all the RMs. We aim at elaborating effective ways to put policies and norms and the ethical principles of privacy, fairness, and non-maleficence/beneficence into practice in broad AI systems. The results and insights shall provide valuable input to the other RMs and shape their agenda in an ethics-aware manner.
Demonstration and Benchmarking (RM8)
In this cross-cutting research module, we shall provide showcases for the various bilateral methods developed in the other RMs and develop new concrete research questions and challenges to be fed back to these RMs.