Projects
Ongoing Projects
Funded by ERC (2021-2026)
ADIX strives towards a radical re-thinking of explainable AI that can work in synergy with humans within a human-centred but AI-supported society. It aims to define a novel scientific paradigm of deep, interactive explanations that can be deployed alongside a variety of data-centric AI methods to explain their outputs by providing justifications in their support. These justifications can be progressively questioned by humans, and the outputs of the AI methods refined in response to human feedback, within explanatory exchanges between humans and machines. This paradigm is being realised using computational argumentation as the underpinning, unifying theoretical foundation.
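To give a flavour of the computational argumentation underpinning the paradigm, the sketch below (hypothetical illustration, not project code) computes the grounded extension of an abstract argumentation framework: arguments are accepted once all of their attackers are defeated.

```python
def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers are all already defeated.

    arguments: set of argument names
    attacks:   set of (attacker, attacked) pairs
    """
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:  # every attacker is defeated, so accept a
                accepted.add(a)
                # anything attacked by an accepted argument is defeated
                defeated |= {y for (x, y) in attacks if x in accepted}
                changed = True
    return accepted

# Example: a attacks b, b attacks c; a defends c, so {a, c} is accepted.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

In an explanatory exchange, the accepted arguments supporting an output act as its justification, and human feedback amounts to adding or challenging arguments and attacks.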
Funded by Royal Academy of Engineering/JP Morgan (2020-2025)
The project aims to deliver machines, powered by a variety of AI methods, which can engage with humans in two-way explanatory dialogues. The machines will explain their recommendations, and humans will question these explanations and provide feedback. Such explanations can serve to engage humans and foster trust in the methods' outputs.
Funded by UKRI
The project uses AI for real-time analysis of healthcare infodemics, autonomously creating clinical guidance and identifying misinformation.
Funded by Royal Society
This project supports a collaboration with Maurizio Proietti and Emanuele de Angelis at CNR to study the automatic learning of Assumption-based Argumentation (ABA) frameworks from data.
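To illustrate what is being learned, the sketch below (a hypothetical, simplified rendering, not the project's implementation) represents an ABA framework as rules, assumptions, and contraries, and checks whether a sentence is derivable from a set of assumptions.

```python
from dataclasses import dataclass

@dataclass
class ABAFramework:
    rules: dict        # head -> list of bodies (each body a set of sentences)
    assumptions: set   # sentences that can only be assumed, not derived
    contrary: dict     # assumption -> its contrary sentence

    def derivable(self, sentence, support):
        """Check whether `sentence` follows from assumption set `support`
        via the rules (no cycle handling; fine for acyclic examples)."""
        if sentence in support:
            return True
        for body in self.rules.get(sentence, []):
            if all(self.derivable(s, support) for s in body):
                return True
        return False

# Classic example: birds fly, backed by the defeasible assumption "normal",
# whose contrary "abnormal" could be derived for, say, penguins.
aba = ABAFramework(
    rules={"fly": [{"bird", "normal"}], "bird": [set()]},
    assumptions={"normal"},
    contrary={"normal": "abnormal"},
)
print(aba.derivable("fly", {"normal"}))  # True
```

Learning an ABA framework from data then amounts to inducing the rules, assumptions, and contraries so that the framework's accepted conclusions match the observed examples.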