Hadi Mohammadi

About Me

I’m Hadi Mohammadi, a final-year PhD candidate at Utrecht University, specializing in Explainable Natural Language Processing (NLP) within the social sciences. My research focuses on building transparent and interpretable NLP models that bridge the gap between cutting-edge AI and real-world applications, empowering domain experts to better understand and trust model outputs.

With over 6 years of experience in data science and machine learning, including more than 3 years in the Netherlands, I have secured €38,400 in research funding and published my work in leading venues such as ACL, ECAI, and Applied Sciences. My research interests include explainable AI, fairness in machine learning, human–AI collaboration, and leveraging large language models (LLMs) to explore cultural and moral variations across societies.

You can learn more about my work on my personal website and view my CV here.

One of my most recent research papers in NLP was accepted at the GeBNLP Workshop at ACL 2025:

Assessing the Reliability of LLMs Annotations in the Context of Demographic Bias and Model Explanation

ACL 2025
Hadi Mohammadi¹, Tina Shahedi¹, Pablo Mosteiro¹, Massimo Poesio², Ayoub Bagheri¹, Anastasia Giachanou¹
¹ Department of Methodology and Statistics, Utrecht University
² Queen Mary University of London, London, United Kingdom

Figure 1. We instruct LLMs to replicate human annotations for subjective NLP tasks from different perspectives using persona prompting and XAI techniques.
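
As a rough illustration of the persona-prompting setup in Figure 1, the sketch below asks an LLM for a binary sexism label, optionally prefacing the instruction with a demographic persona. The client library, model name, persona fields, and prompt wording are illustrative assumptions, not the exact prompts used in the paper.

from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def annotate(tweet: str, persona: dict | None = None) -> str:
    """Request a binary sexism label, optionally from a demographic persona."""
    system = "You are an annotator who labels tweets as 'sexist' or 'not sexist'."
    if persona:
        system += (
            f" Answer as a {persona['age_group']} {persona['gender']} annotator"
            f" from {persona['region']} with a {persona['education']} education."
        )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper evaluates several GenAI models
        temperature=0,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": f"Tweet: {tweet}\nLabel (sexist / not sexist):"},
        ],
    )
    return response.choices[0].message.content.strip()

print(annotate("example tweet text",
               persona={"age_group": "18-22", "gender": "female",
                        "region": "Europe", "education": "bachelor's degree"}))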

Abstract

Understanding the sources of variability in annotations is crucial for developing fair NLP systems, especially for tasks like sexism detection where demographic bias is a concern. This study investigates the extent to which annotator demographic features influence labeling decisions compared to text content. Using a Generalized Linear Mixed Model, we quantify this influence, finding that while statistically present, demographic factors account for a minor fraction (~8%) of the observed variance, with tweet content being the dominant factor. We then assess the reliability of Generative AI (GenAI) models as annotators, specifically evaluating if guiding them with demographic personas improves alignment with human judgments. Our results indicate that simplistic persona prompting often fails to enhance, and sometimes degrades, performance compared to baseline models. Furthermore, explainable AI (XAI) techniques reveal that model predictions rely heavily on content-specific tokens related to sexism, rather than correlates of demographic characteristics. We argue that focusing on content-driven explanations and robust annotation protocols offers a more reliable path towards fairness than potentially superficial persona simulation.

Key Findings

  • Gender and age group do not significantly influence labeling decisions.
  • Black annotators are far more likely to label tweets as sexist, and Latino annotators are less likely to do so, compared to White annotators.
  • Annotators with a high school degree are significantly less likely to label tweets as sexist.
  • Annotators from Africa are significantly less likely to label tweets as sexist.

GenAI Results

  • Baseline GenAI models show strong performance on this task.
  • Adding demographic persona information provides inconsistent benefits for improving annotation reliability.
  • Guiding model attention with XAI-derived content features shows more consistent improvements, especially when combined with capable models (see the sketch after this list).
  • Focusing on the text itself through XAI methods appears more promising than relying on potentially superficial persona simulation.
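
One way to implement the XAI-guided idea above is to compute token attributions with SHAP over a fine-tuned classifier and turn the most influential content tokens into a hint appended to the annotation prompt. The sketch below assumes a local model path, a positive class at index 1, and a simple hint format; it is not the paper's exact pipeline.

import shap
from transformers import pipeline

# Fine-tuned sexism classifier (path is a placeholder).
clf = pipeline("text-classification", model="path/to/finetuned-bert", return_all_scores=True)

explainer = shap.Explainer(clf)  # SHAP builds a text masker from the pipeline's tokenizer
shap_values = explainer(["example tweet text"])

# Rank tokens by their attribution toward the positive ('sexist') class (assumed index 1).
tokens = shap_values.data[0]
scores = shap_values.values[0][:, 1]
top_tokens = [t for t, s in sorted(zip(tokens, scores), key=lambda x: -x[1])[:5]]

hint = "Pay particular attention to these words: " + ", ".join(t.strip() for t in top_tokens)
# `hint` can then be appended to the GenAI annotation prompt in place of a persona.
print(hint)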

Mixed-Effects Model

We ran a mixed-effects logistic regression model to understand how annotators’ demographic features affect their labeling decisions.
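
For a concrete starting point, the sketch below fits a comparable mixed-effects logistic regression in Python with statsmodels, with random intercepts for tweets and annotators. The column names, file name, and random-effects structure are illustrative assumptions rather than the exact specification used in the paper.

import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical data: one row per (tweet, annotator) pair with a binary sexism label.
df = pd.read_csv("annotations.csv")

# Fixed effects: annotator demographics; variance components: random intercepts
# for tweets and for annotators.
model = BinomialBayesMixedGLM.from_formula(
    "label ~ C(gender) + C(age_group) + C(ethnicity) + C(education) + C(region)",
    vc_formulas={
        "tweet": "0 + C(tweet_id)",
        "annotator": "0 + C(annotator_id)",
    },
    data=df,
)
result = model.fit_vb()  # variational Bayes approximation to the mixed-model posterior
print(result.summary())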

BERT Model

  • We fine-tune a multilingual BERT model for sexism detection, using class weights to handle label imbalance and early stopping to prevent overfitting (a training sketch follows this list).
  • We evaluate the reliability of GenAI models as annotators, including the effect of persona prompting and XAI-guided explanations.
  • We analyze the impact of demographic and content features on model predictions and annotation reliability.
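
A minimal training sketch for the setup above follows. The placeholder texts, class weights, and hyperparameters are illustrative assumptions; only the overall recipe (weighted cross-entropy loss plus early stopping on the validation loss) mirrors what is described in the list.

import torch
from torch import nn
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          EarlyStoppingCallback, Trainer, TrainingArguments)

model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

class TweetDataset(torch.utils.data.Dataset):
    """Tokenizes raw tweets and exposes them as a torch Dataset."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

train_dataset = TweetDataset(["placeholder tweet a", "placeholder tweet b"], [1, 0])
val_dataset = TweetDataset(["placeholder tweet c"], [0])

class_weights = torch.tensor([1.0, 3.0])  # illustrative weights for the minority 'sexist' class

class WeightedTrainer(Trainer):
    """Trainer with a class-weighted cross-entropy loss."""
    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss_fct = nn.CrossEntropyLoss(weight=class_weights.to(outputs.logits.device))
        loss = loss_fct(outputs.logits, labels)
        return (loss, outputs) if return_outputs else loss

args = TrainingArguments(
    output_dir="sexism-bert",
    num_train_epochs=10,
    evaluation_strategy="epoch",   # named eval_strategy in newer transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,   # required for early stopping
    metric_for_best_model="eval_loss",
)

trainer = WeightedTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()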


Paper

BibTeX

@inproceedings{mohammadi2025assessing,
  title={Assessing the Reliability of LLMs Annotations in the Context of Demographic Bias and Model Explanation},
  author={Mohammadi, Hadi and Shahedi, Tina and Mosteiro, Pablo and Poesio, Massimo and Bagheri, Ayoub and Giachanou, Anastasia},
  booktitle={The 6th Workshop on Gender Bias in Natural Language Processing},
  pages={92},
  year={2025}
}