Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models


The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” in the skills it expects of members, replacing them with an emphasis on “human flourishing.”

The information comes as part of an updated cooperative research and development agreement for AI Safety Institute Consortium members, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases are hugely important because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.

The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deepfakes. It also adds an America-first emphasis, asking one working group to develop testing tools “to expand the US global position.”

“The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself,” says one researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of retaliation.

The researcher believes that ignoring these issues could harm regular users by allowing algorithms that discriminate based on income or other demographics to go unchecked. “Unless you’re a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly,” the researcher says.

“It’s wild,” says another researcher who has worked with the AI Safety Institute in the past. “What does it even mean for humans to flourish?”

Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled “racist” and “woke.” He often cites an incident in which one of Google’s models debated whether it would be wrong to misgender someone even if doing so would prevent a nuclear apocalypse (a highly unlikely scenario). Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher who advises xAI recently developed a novel technique for possibly altering the political leanings of large language models, as reported by WIRED.

A growing body of research shows that political bias in AI models can affect both liberals and conservatives. For example, a study of Twitter’s recommendation algorithm published in 2021 found that users were more likely to be shown right-leaning perspectives on the platform.

Since January, Musk’s so-called Department of Government Efficiency (DOGE) has swept through the US government, effectively firing federal workers, pausing spending, and creating an environment thought to be hostile to those who might oppose Trump’s goals. Some government departments, such as the Department of Education, have archived and deleted documents that mention DEI. DOGE has also targeted NIST, AISI’s parent organization, in recent weeks; dozens of employees were fired.
