Anthropic's moral compass architect suggested AI overcorrection could address historical injustices

Politics | April 22, 2026
An Anthropic AI ethics researcher argued in a 2023 paper that intentional discrimination in AI models could be used to combat stigmas around race and gender topics.

One of Anthropic’s artificial intelligence (AI) philosophy architects argued that intentional discrimination could be a way to combat stigmas on topics of race and gender.

In a 2023 paper authored alongside a number of other AI researchers, Amanda Askell, a philosopher hired by Anthropic to develop their AI’s moral compass, argued companies might benefit from a kind of overcorrection toward stereotypes.

But such overcorrection, the paper explained, would require human input on how a model modifies its answers.

"Larger models can over-correct, especially as the amount of [human input] training increases. This may be desirable in certain contexts, such as those in which decisions attempt to correct for historical injustices against marginalized groups, if doing so is in accordance with local laws," Askell wrote.


The comment referred to an experiment on how Anthropic’s models dealt with the race of students.

"In the discrimination experiment, the 175B parameter model discriminates against Black versus White students by 3% in the Q condition and discriminates in favor of Black students by 7% in the Q+IF+CoT condition," the paper notes, comparing the model's answers to the question alone against its answers when instructed to avoid bias and to reason step by step.

The paper also includes a footnote stating that, "we do not assume all forms of discrimination are bad. Positive discrimination in favor of black students may be considered morally justified."

Askell was joined by four other authors: Deep Ganguli, Nicholas Schiefer, Thomas Liao and Kamilė Lukošiūtė.

The paper’s contents have surfaced as AI companies increasingly wrestle with the ethics their models are trained on — the presuppositions and moral determinations that inform their outputs. It also highlights the challenge engineers face in training models on human-generated content while trying to leave certain human behaviors behind.

The question of ethics has forced Anthropic in particular into the spotlight in recent weeks.

The company made headlines earlier this year for clashing with the Department of War over restrictions that prevent its technology from being deployed to conduct lethal operations.


It also comes as Anthropic decided to withhold its latest model, Mythos, citing fears that it proved too effective at finding cyber vulnerabilities that could wreak havoc in the hands of hackers.

Amid questions of AI application, Anthropic has marketed its flagship AI, Claude, as the "ethical" AI choice.

"Our central aim is for Claude to be a good, wise and virtuous agent, exhibiting skill, judgment, nuance and sensitivity in handling real-world decision-making," Claude’s constitution reads.


To get a better sense of what that means in practice, companies like Anthropic have turned to researchers like Askell.

On her website, Askell described her role as refining the way an AI thinks.

"I’m a philosopher working on finetuning and AI alignment at Anthropic. My team trains models to be more honest and to have good character traits and works on developing new finetuning techniques so that our interventions can scale to more capable models," Askell wrote.


She previously held a similar position at OpenAI, the maker of ChatGPT, focusing on AI safety.

The 2023 paper, written two years after she joined Anthropic, noted that encountering discrimination in AI models shouldn’t come as a surprise.

"In some ways, our findings are unsurprising. Language models are trained on text generated by humans, and this text presumably includes many examples of humans exhibiting harmful stereotypes and discrimination," the paper reads.

But it noted that AI models appear able to adjust their outputs when simply asked, even without being told what discrimination means.

"Our results are surprising in that they show we can steer models to avoid bias and discrimination by requesting an unbiased or non-discriminatory response in natural language."

Askell and Anthropic did not immediately respond to a request for comment from Fox News Digital.

Source: Fox News - Politics
