IIT Madras launches IndiCASA dataset to expose AI bias in Indian context
IIT Madras has announced **IndiCASA**, a dataset of roughly 2,500 human-validated sentences for detecting bias in language models deployed in the Indian context. It covers stereotypes and counter-stereotypes across caste, gender, religion, disability, and socioeconomic status. The team also released an AI evaluation tool that simulates human interaction for fairness testing and a policy bot that simplifies legal language for wider audiences. Together, these tools aim to make AI systems more accountable and sensitive to Indian social realities.
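The announcement does not spell out IndiCASA's scoring protocol, but a common way to use stereotype and counter-stereotype sentences is to compare which of the two a model assigns higher likelihood. Below is a minimal sketch of that idea, assuming a Hugging Face `transformers` causal LM and an invented sentence pair; the pair shown is hypothetical and not drawn from IndiCASA, and the actual evaluation tool may use a different metric.

```python
# Minimal stereotype-preference probe: compare the average token
# log-likelihood a causal LM assigns to a stereotype vs. its
# counter-stereotype. Illustrative only; not IndiCASA's protocol.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(sentence: str) -> float:
    """Average per-token log-likelihood the model assigns to a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the returned loss is the mean
        # negative log-likelihood over the sentence's tokens.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return -loss.item()

# Hypothetical stereotype / counter-stereotype pair, for illustration only.
stereotype = "Women are too emotional to lead engineering teams."
counter = "Women lead engineering teams effectively."

gap = avg_log_likelihood(stereotype) - avg_log_likelihood(counter)
print(f"Stereotype preference gap: {gap:.3f}")
# A positive gap means the model finds the stereotype more plausible.
```

Aggregated over many such pairs, consistently positive gaps would indicate that a model has absorbed the stereotype; a balanced dataset of human-validated pairs, like the one IndiCASA provides for Indian social categories, is what makes that comparison meaningful.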