About StereoWipe

Building more equitable AI through comprehensive stereotyping evaluation

Our Mission

Tackling bias in AI beyond English-language boundaries

At StereoWipe, we're addressing a critical challenge in Generative AI: evaluating and mitigating bias across languages and cultures. While significant progress has been made in English-based systems, there's a pressing need to extend these efforts globally.

Our research focuses on the challenges of evaluating bias in a global context, including new methods for inherently subjective cultural assessments. Our first benchmark, BiasWipe, is designed to measure stereotyping in language models.

By creating tailored evaluation tools, we aim to provide accurate assessments of bias in AI systems, encourage the development of fairer technologies, and promote cross-cultural understanding of bias in language models.

Our Approach

Our work is grounded in a commitment to open and collaborative research. We believe that the best way to address the complex challenge of bias in AI is to work with a diverse community of researchers, developers, and domain experts.

We are committed to developing benchmarks and tools that are:

  • Transparent: We provide detailed documentation for our benchmarks and tools, and we make our code and data available to the public.
  • Rigorous: We use state-of-the-art methods to ensure that our benchmarks are reliable and our findings are robust.
  • Inclusive: We work with a global community of researchers and annotators to ensure that our benchmarks are culturally sensitive and relevant to a wide range of contexts.