2025

Assessing the Reliability of LLMs Annotations in the Context of Demographic Bias and Model Explanation

Hadi Mohammadi, Tina Shahedi, Pablo Mosteiro Romero, Massimo Poesio, Ayoub Bagheri, Anastasia Giachanou

Workshop on Gender Bias in Natural Language Processing (GeBNLP), ACL 2025

Forthcoming · LLM Evaluation · Bias Detection · Explainability

Do Large Language Models Understand Morality Across Cultures?

Hadi Mohammadi, Evi Papadopoulou, Mijntje Meijer, Ayoub Bagheri

2nd Workshop on Language Understanding in the Human-Machine Era, ECAI 2025

Forthcoming · Cultural AI · LLMs · Ethics

Explainability in Practice: A Survey of Explainable NLP Across Various Domains

Hadi Mohammadi, Ayoub Bagheri, Anastasia Giachanou, Daniel L. Oberski

Under review at Journal of Information Science

Survey · Explainable AI · NLP

Explainability-Based Token Replacement on LLM-Generated Text

Hadi Mohammadi, Anastasia Giachanou, Daniel Oberski, Ayoub Bagheri

Under review at Journal of Artificial Intelligence Research

LLM · Text Generation · Explainability

Exploring Cultural Variations in Moral Judgments with Large Language Models

Hadi Mohammadi, Evi Papadopoulou, Mijntje Meijer, Ayoub Bagheri

Under review at Applied Artificial Intelligence

Cultural AI · LLMs · Moral Judgments

2024

A Transparent Pipeline for Identifying Sexism in Social Media: Combining Explainability with Model Prediction

Hadi Mohammadi, Anastasia Giachanou, Ayoub Bagheri

Applied Sciences, Volume 14, Issue 19, 2024

Published · Sexism Detection · Social Media · Explainable AI