Seminar by Paolo Giudici (U. of Pavia)

Monday, 27 April, at 2:30 PM
Room A109
SAFE Artificial Intelligence

Evaluating the reliability of machine learning classifications remains a fundamental challenge in Artificial Intelligence (AI), particularly when the target variable is multidimensional. Classification variables are expressed on a categorical scale which, at best, is ordinal. Because ordinal data lack a natural metric structure in their underlying space, most conventional distance measures used to assess the accuracy of machine learning classifications cannot be directly or meaningfully applied. In the talk, we develop a mathematical framework for comparing ordinal data based on a family of Rank Graduation (RGXp) metrics. We demonstrate that these metrics quantify the proportion of the variability of the response explained by the predictions, much as the predictive R^2 does for continuous response variables. After establishing theoretical connections between the RGXp family and other prominent metrics in AI, we conduct extensive experiments across diverse datasets and learning tasks to evaluate their empirical performance. The results underscore the versatility, interpretability, and robustness of the RGXp metrics as a principled foundation for developing trustworthy and SAFE AI systems. They also point to the potential of hybrid quantum machine learning models, which appear to further improve robustness.
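To make the R^2 analogy concrete, the sketch below implements a simple rank-based "explained variability" score. This is an illustrative toy, not the RGXp definition from the talk: it sorts observations by the predicted scores and compares the cumulative true response, in that order, against the best (fully concordant) and worst (fully anti-concordant) orderings, yielding 1 for a perfect ranking and 0 for a reversed one. The function name and construction are assumptions for illustration only.

```python
import numpy as np

def rank_explained_variability(y_true, y_pred):
    """Toy rank-based analogue of predictive R^2 for ordinal responses.

    Accumulates the true response in the order induced by the predictions
    and normalizes its distance from the worst-case ordering by the
    distance between the worst and best orderings. Returns 1.0 when the
    predictions rank the observations perfectly, 0.0 when the ranking is
    fully reversed. Assumes y_true is not constant (denominator > 0).
    """
    y_true = np.asarray(y_true, dtype=float)
    order = np.argsort(y_pred)                    # ordering induced by predictions
    cum_pred = np.cumsum(y_true[order])           # response accumulated in that order
    cum_best = np.cumsum(np.sort(y_true))         # ideal (concordant) ordering
    cum_worst = np.cumsum(np.sort(y_true)[::-1])  # worst (anti-concordant) ordering
    return np.sum(cum_worst - cum_pred) / np.sum(cum_worst - cum_best)
```

A perfectly ranked prediction returns 1.0, a reversed one 0.0, and partial concordance falls in between; only the ranks of the predictions matter, which is what makes such scores meaningful for ordinal targets with no metric structure.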