Gave a talk "Beyond Accuracy: Robust Representations for Social Norms Alignment in Human–AI Modeling", @KSU-CCIS Research talk.
Paper accepted to appear @ AACL 2025 Findings: "EMBRACE: Shaping Inclusive Opinion Representation by Aligning Implicit Conversations with Social Norms" (Abeer Aldayel, Areej Alokaili), The 14th IJCNLP & 4th AACL 2025.
New pre-print! "Incongruent Positivity: When Miscalibrated Positivity Undermines Online Supportive Conversations" (Leen Almajed, Abeer Aldayel)
Paper accepted to appear @ NLP4PI, EMNLP 2024: "Covert Bias: The Severity of Social Views’ Unalignment in Language Models Towards Implicit and Explicit Opinion" (Abeer Aldayel, Areej Alokaili, Rehab Alahmadi)
New pre-print! "Covert Bias: The Severity of Social Views’ Unalignment in Language Models Towards Implicit and Explicit Opinion" (Abeer Aldayel, Areej Alokaili, Rehab Alahmadi), under review, preprint(link)
Paper accepted to appear @ ICWSM 2025: "Hatred Stems from Ignorance! Distillation of the Persuasion Modes in Countering Conversational Hate Speech" (Ghadi Alyahya, Abeer Aldayel), preprint(link)
Gave a talk @ the Saudi Digital Academy (SDA) for Data Science, "Efficacy of Social Aspect and Human-Centered Paradigm From Real-World Scenarios to LLM"
New pre-print! "Hatred Stems from Ignorance! Distillation of the Persuasion Modes in Countering Conversational Hate Speech" (Ghadi Alyahya, Abeer Aldayel), under review, preprint(link)
Co-organized the ICWSM Data Challenge on temporal data
Gave a talk @ SDAIA Academy, "Computational Social Science for NLP: A Closer Look at Reliability through Feedback & Human-Centered Values", arranged by the AI Center of Advanced Studies (Thakaa)
Paper accepted @ NEATCLASS, ICWSM: "Toxicity Inspector: A Framework to Evaluate Ground Truth in Toxicity Detection Through Feedback"
Our graduation project team was selected for the digital innovation award: "Toxicity Inspector: Multilingual Toxic Comments Mitigation through Feedback"
Our paper "Characterizing the role of bots in polarized stance on social media" was accepted at SNAM
Gave a talk @women_in_nlp "Social computing through the lens of NLP: a closer look at opportunities, reliability and beyond"
Starting a new position as an assistant professor at King Saud University
Received the Best Reviewer Award in ICWSM 2021
Passed my PhD viva; starting my job @ KSU as an assistant professor soon
Our paper "Stance detection on social media: state-of-the-art and trends" was accepted at IP&M
Presented recent work on stance polarization on social media at IC2S2-2020.
Received the Best Reviewer Award in ICWSM 2020.
Our tutorial "Detection and Characterization of Stance on Social Media" accepted @ ICWSM 2020
Our paper "Assessing Sentiment of the Expressed Stance on Social Media" accepted @ SocInfo 2019
Our IC2S2 submission "It is more than what you Say! Leveraging User Online Activity for Improved Stance Detection" has been accepted
Our paper "Similar Minds Post Alike: Assessment of Suicide Risk by Hybrid Language and Behavioral Model" Accepted @CLPsych2019
Our paper "ARC-WMI" Accepted @OSACT3
Our paper "Readability of WMIs" published @ BMC Health Services Research journal
- About me and my research interests:
I am an Assistant Professor at the College of Computer and Information Sciences, King Saud University. My research lies at the intersection of Computational Social Science, Natural Language Processing (NLP), and Responsible AI, where I develop computational methods to evaluate the social and normative dimensions of language. My work focuses on multilingual and socially grounded language processing, with particular emphasis on modeling and evaluating language use across diverse social and cultural contexts. I study phenomena such as bias, abusive and harmful language, misinformation, and implicit social meanings. A core goal of my research is to move beyond surface-level language analysis toward representing implicit values and dynamics embedded in language, in support of more responsible and socially aligned AI systems. I am especially interested in socially aware and fairness-oriented NLP, including the design of benchmarks and evaluation frameworks that reflect cross-cultural variation and normative plurality. My research contributes to Responsible AI by advancing methods for transparency, robustness, and cultural alignment in language modeling.
[I am not interested in monolingual or polyglossia/dialect work (i.e., Arabic NLP / Arabic networks / Arabic HCI / Arabic processing), gaming, religious studies, network science, healthcare/biology or pandemic (COVID-19) apps, educational apps, applied science or applications, political analysis, privacy applications, policy, sustainability, cybersecurity, or startups! Please check the FAQ page and my most recent research!]
- Highlights of my research so far :
♦ Computational Social Science and Natural Language Processing
Social bias (alignment of social views and cultural aspects [multilingual]), hate speech (reliability evaluation, counter-speech), stance detection, suicide risk assessment.
♦ Natural Language Processing
Cross-lingual paraphrasing techniques, readability assessment, rumour veracity detection.
- More about my most recent research on this page.
I received my Ph.D. in August 2021 from the University of Edinburgh, at the Institute for Language, Cognition and Computation (ILCC), School of Informatics, as a member of the SMASH research group.
My thesis addresses a fundamental challenge in stance detection: understanding stance in complex, real-world social media environments. My research develops models that integrate language, sentiment, and social dynamics, and introduces responsible NLP evaluation frameworks.
My work has been covered in international media, highlighting its impact on social influence analysis and trustworthy AI.
In addition, I worked on side projects including a hybrid language–behavior model for suicide risk assessment (CLPsych Workshop at NAACL 2019) and rumor veracity detection through conversational modeling.
Overall, my work emphasizes context, social dynamics, and responsible measures as key pillars for building impactful and trustworthy NLP models.
Before starting my PhD, I earned a BSc and MSc in Computer and Information Sciences from King Saud University (KSU). In my Master's thesis, I developed a query-paraphrasing algorithm to enhance information retrieval systems. Besides my job as a lecturer at KSU, I worked as a research assistant on a readability identification project to predict the readability level of a given text. 📚 My personal interests include learning different languages and reading [recommend me a book for my reading list here :)] 📚