Learn more about the StereoWipe project
Find answers to common questions about StereoWipe, our stereotyping evaluation benchmark for Large Language Models.
The StereoWipe project aims to create a benchmark dataset for evaluating fairness in Large Language Models (LLMs), with a focus on identifying and addressing stereotyping across diverse and intersectional identities.
Unlike traditional methods that often lack specific definitions of biases and are limited in scope, StereoWipe employs a dual approach: leveraging large generative models and community engagement. This allows for a broader, more inclusive perspective.
Absolutely! Community participation is vital to the success of our project. Interested individuals can contribute by joining our GitHub project, where they can provide input, suggest scenarios, and help in curating the dataset. Your perspectives and insights are invaluable in helping us achieve a truly inclusive and fair AI.
The project targets a wide array of stereotyping, focusing on various aspects such as gender, race, age, nationality, religion, and profession.
The insights and data gathered from the StereoWipe project will be instrumental in guiding AI developers and researchers in creating more fair and unbiased models. The dataset will serve as a benchmark for testing LLMs, and our findings will be shared with the broader AI and tech community to promote awareness and encourage the development of more equitable AI systems.