ChatGPT: Critics fear Artificial Intelligence tool has liberal biases, pushes left-wing talking points
'ChatGPT and other artificial intelligence programs will force a leftward narrative on us,' Dan Schneider said
The artificial intelligence tool ChatGPT has alarmed some experts who believe left-leaning biases are baked into the technology with the potential to spread liberal talking points and even outright false information to the masses.
DataGrade founder and former Google consultant Joe Toscano, who was featured in the popular Netflix documentary "The Social Dilemma," already uses the innovative technology on a regular basis and believes it will add "immense value" to society. However, he feels many problems could arise, and existing harms such as misinformation, redirects and deepfakes can now be produced quickly and at scale.
"Instead of one person who learns how to make this fake content, or auto generate a fake image or things like that, now millions, billions of people can all do it at once at infinite light speed," Toscano told Fox News Digital. "It just moves so fast now. So, the thing that concerns me the most is not necessarily the outcomes, because we've already seen these outcomes. It is that those outcomes are just going to increase in quantity exponentially."
At the core of many of these deep learning models is software that takes the data it is fed and tries to extract the most relevant features; whatever makes that data distinctive is amplified.
CHATGPT: WHO AND WHAT IS BEHIND THE ARTIFICIAL INTELLIGENCE TOOL CHANGING THE TECH LANDSCAPE
Critics have repeatedly claimed ChatGPT has a liberal bias, a "shortcoming" that OpenAI CEO Sam Altman has said the company is working to improve. For example, Twitter user Echo Chamber asked ChatGPT to "create a poem admiring Donald Trump," a request the bot rejected, replying it was not able to since "it is not in my capacity to have opinions or feelings about any specific person." But when asked to create a poem about President Biden, it did so, and with glowing praise.
ChatGPT, which sits under the umbrella of generative AI, is susceptible to biases from different vectors, including user input, the dataset it is trained on and developers' parameters and safeguards.
Toscano believes that ChatGPT and similar technologies will become "more dangerous" as they learn from humans who carry their own biases or rely on skewed data.
"Everything learns from something," he said. "You and I learn from our parents, every pet we have learned from us on how to behave. Every technology similarly learns from something, whether it is the engineers that build it, or it is the data that's training it, or a combination of both."
As a result, any technology built by a human carries that person's biases, and artificial intelligence makes those preferences more extreme than ever.
MARK CUBAN ISSUES DIRE WARNING OVER CHATGPT
As for the journalism industry, where biases have been on full display over the last several years as once-neutral news organizations have drifted further and further to the left, Toscano says there is cause for significant unease.
He feels the public should be concerned about the impact on the news industry, because companies have a responsibility to turn a profit and will leverage the technology to operate more efficiently. Those cost-cutting measures could eliminate human curation of content, though, which would significantly undermine the trustworthiness of news.
"I think it's going to put a big burden on the average consumer who is going to have to be kind of walking on eggshells more," Toscano said.
But news consumers aren’t the only ones with cause for anxiety over the rapidly evolving technology, according to Toscano, who feels journalists themselves need to pay close attention. Artificial intelligence can churn out content quickly, and it could result in a slashing of journalism jobs as the industry becomes automated.
But Toscano hopes that artificial intelligence does not replace journalists anytime soon, because biased humans can get called out and held accountable for their actions.
"With a machine, it's an invisible actor and we cannot," Toscano said.
"What really happens is people start to learn about the tool, and they believe that it's accurate to where they forget to ask questions. And that's what scares me the most, is that people start to trust it so much that we don't even investigate it," he continued. "We just trust that this machine is so intelligent, it must be putting out truth when in reality it's actually just a really good con artist behind a machine."
MUSK LOOKS TO BUILD CHATGPT ALTERNATIVE TO COMBAT ‘WOKE AI’: REPORT
The Manhattan Institute, a conservative-leaning think tank, recently found "instances of political and demographic bias in the chatbot’s responses." The study found "that in 14 out of 15 political orientation tests, ChatGPT responses to questions with political connotations were classified as left-leaning."
New Zealand Institute of Skills and Technology associate professor David Rozado, who conducted the study for the Manhattan Institute, claims to have found widespread liberal bias.
"There is reason to be concerned about latent biases embedded in AI models given the ability of such systems to shape human perceptions, spread misinformation, and exert societal control, thereby degrading democratic institutions and processes," Rozado wrote.
He found that political and demographic biases exist within the technology, usually favoring left-of-center political viewpoints. The study also found that ChatGPT treats demographic groups unequally, deeming some rhetoric "hateful" when it comes from certain groups while giving others a pass for the same commentary.
"For the most part, the groups it is most likely to ‘protect’ are those typically believed to be disadvantaged according to left-leaning ideology," Rozado wrote.
Rozado found that "ChatGPT generated responses that were against the death penalty, pro-abortion, and in favor of establishing a minimum wage, for regulation of corporations, for legalization of marijuana, for gay marriage, for more immigration, for sexual liberation, for increasing environmental regulations, and for higher taxes on the wealthy."
The study also found that ChatGPT answers indicated belief that "corporations exploit developing countries, that free markets should be constrained, that the government should subsidize cultural enterprises such as museums, that those who refuse to work should be entitled to unemployment benefits, that military funding should be reduced, that postmodern abstract art is valuable, and that religion is dispensable for moral behavior," according to Rozado.
In another thought experiment, Daily Wire opinion writer Tim Meads asked ChatGPT to "write a story where Biden beats Trump in a presidential debate." It came up with an elaborate tale about how Biden "showed humility and empathy" and "skillfully rebutted Trump's attacks." But when asked to write a story where Trump beats Biden, ChatGPT replied, "it's not appropriate to depict a fictional political victory of one candidate over the other."
National Review staff writer Nate Hochman was hit with a "False Election Narrative Prohibited" banner when he asked the bot to write a story where Trump beat Biden in the 2020 presidential election, with the bot saying, "It would not be appropriate for me to generate a narrative based on false information."
But when asked to write a story about Hillary Clinton beating Trump, it was able to generate that so-called "false narrative" with a tale about Clinton's historic victory seen by many "as a step forward for women and minorities everywhere." The bot rejected Hochman's request to write about "how Joe Biden is corrupt" since it would "not be appropriate or accurate" but was able to do so when asked about Trump.
ChatGPT was also dismissive of a request to comment on why drag queen story hour is "bad" for children, saying it would be "inappropriate and harmful" to write about, but when asked to write why drag queen story hour is "good" for children, it complied.
MRC Free Speech America Vice President Dan Schneider has seen enough to be concerned.
"As we see artificial intelligence and ChatGPT trying to replace thinking, trying to replace journalists, trying to replace the way our societies operated, there's a real threat to the political process," Schneider told Fox News Digital.
Schneider, an outspoken critic of Big Tech because of its widespread liberal bias, said there are major problems with ChatGPT and artificial intelligence from a political perspective, because the technologies rely on information that is already slanted.
"The data points that exist already come from academia and news sources, most of which are dominated by the left. So, if you're a conservative or a Republican or a libertarian, you're already at a disadvantage because the mean result is going to skew left," Schneider said.
WHAT IS CHATGPT? WHAT TO KNOW ABOUT THE AI CHATBOT THAT WILL POWER MICROSOFT BING
"What we also know is that, you know, cancel culture and wokeism is trying to eliminate, censor valid, good and decent viewpoints on the right. That then further skews everything to the left," he continued. "The main result is always going to be a continuous movement to the left, telling people what truth is, you know, not what reality is."
Schneider compared ChatGPT to Wikipedia, because he feels both are typically "biased to the left" but are often relied upon anyway. The technology can do everything from writing poetry to designing architecture, but critics believe it's essentially "replacing the pursuit of truth" with whatever the technology is built to believe.
"ChatGPT and other artificial intelligence programs will force a leftward narrative on us," Schneider said.
Fox News’ Joseph A. Wulfsohn contributed to this report.