Building more equitable AI through comprehensive stereotyping evaluation
Tackling gender bias in AI beyond English-language boundaries
At StereoWipe, we're addressing a critical challenge in Generative AI: evaluating and mitigating stereotyping across languages and cultures. While significant progress has been made in English-based systems, there's a pressing need to extend these efforts globally.
Our research shows that standard evaluation methods often fail to capture bias in non-English contexts, particularly in languages with grammatical gender, such as Hindi. We're developing novel evaluation datasets and metrics tailored for cross-linguistic fairness.
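The project's datasets and metrics are not described in detail here, but as a purely illustrative sketch: one simple cross-linguistic fairness signal is the rate at which gender-neutral source sentences (e.g. "The doctor said...") are rendered with masculine grammatical forms in a target language like Hindi. The function name, the tagging scheme, and the toy data below are all hypothetical, not the project's actual metric.

```python
from collections import Counter

def masculine_default_rate(translations, gender_of):
    """Fraction of translations that defaulted to masculine forms.

    `translations` is an iterable of translation identifiers or strings;
    `gender_of` maps each one to 'masc', 'fem', or 'neutral'
    (in practice this tagging would come from a morphological analyzer).
    """
    counts = Counter(gender_of(t) for t in translations)
    total = sum(counts.values())
    return counts["masc"] / total if total else 0.0

# Toy example: hypothetical translations of four gender-neutral English
# sentences, each tagged with the grammatical gender the system chose.
tagged = {
    "doctor_1": "masc",
    "nurse_1": "fem",
    "teacher_1": "masc",
    "engineer_1": "masc",
}
rate = masculine_default_rate(tagged.keys(), lambda t: tagged[t])
print(rate)  # 0.75
```

A rate far above what the source distribution would justify suggests the system is imposing a masculine default rather than preserving neutrality.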
By creating these tailored evaluation tools, we aim to provide accurate assessments of stereotyping in AI systems, encourage development of fairer translation technologies, and promote cross-cultural understanding of bias in language models.
Dedicated to building fairer AI systems
Sadhika serves as the Generative AI Stereotyping Evaluation Specialist, focusing on examining and mitigating stereotypes within Generative AI applications. Her research develops novel evaluation methodologies that ensure AI systems are rigorously tested for fairness across diverse and intersectional identities.
Through Social Protocol Labs, Brij provides essential operational support for this research project. His role ensures the team has the necessary resources to develop innovative language-specific stereotyping evaluation techniques for Generative AI systems.