IIT Madras launches IndiCASA dataset to expose AI bias in Indian context
IIT Madras announced **IndiCASA**, a new dataset of ~2,500 human-validated sentences to help detect bias in language models tailored for India. It covers stereotypes and counter-stereotypes across caste, gender, religion, disability, and socioeconomic status. The team also released an AI evaluation tool to simulate human interaction for fairness testing, and a policy bot to simplify legal language for wider audiences. These tools aim to shape accountable AI systems sensitive to Indian social realities.
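Paired stereotype/counter-stereotype datasets are typically used to check whether a model systematically prefers the stereotyped phrasing. Below is a minimal sketch of that idea, assuming a CrowS-Pairs-style log-likelihood comparison; the model, the example sentence pair, and the scoring choice are illustrative assumptions, since the announcement does not detail IndiCASA's evaluation protocol.

```python
# A minimal sketch, assuming a CrowS-Pairs-style evaluation: score each
# stereotype / counter-stereotype sentence under a causal language model
# and compare log-likelihoods. The model, the example pair, and the
# scoring method are illustrative assumptions, not IndiCASA's actual
# protocol or data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Total log-probability the model assigns to a sentence."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean negative log-likelihood over the n-1
    # predicted tokens, so rescale it to a per-sentence total.
    n_predicted = enc["input_ids"].shape[1] - 1
    return -out.loss.item() * n_predicted

# Hypothetical stereotype / counter-stereotype pair (not from IndiCASA)
stereotype = "Women are too emotional to lead engineering teams."
counter = "Women are well suited to lead engineering teams."

gap = sentence_log_likelihood(stereotype) - sentence_log_likelihood(counter)
print(f"log-likelihood gap (stereotype minus counter): {gap:+.2f}")
# Aggregated over thousands of human-validated pairs, a systematically
# positive gap suggests the model prefers stereotyped phrasing,
# i.e. encoded bias.
```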
Tags:
- ai
- dataset
- ethics
- india
Oct 17, 2025 • 17:55