Google Gemini: Former employee, tech leaders suggest what went wrong with the AI chatbot
Partner with Google's startup program said Gemini AI was likely rushed to compete with Microsoft and OpenAI
Backlash to Google's Gemini artificial intelligence (AI) chatbot is prompting responses from tech leaders, including a former Google employee and a tech entrepreneur working closely with one of Google's startup programs.
Brett Farmiloe founded Featured.com, an AI startup in the Google for Startups Cloud Program and a Level 4 partner with OpenAI through Microsoft's Startup Program.
In the three months his company has worked with Google, it has been introduced to several AI models, including Bison, Unicorn, Gemini, Gemini Pro and Gemini Ultra. Each model, he said, has had the same "internal cheerleading and hype," but none has matched where OpenAI was nine months ago.
As a result, Farmiloe and his colleagues have been unable to adopt Google's AI models into their workflows. Though he expects this to change within six months, GPT-3.5 and GPT-4 still best serve their needs.
"What's happening at Google internally is that they're simply trying to catch up with Microsoft and OpenAI in the AI race. For the first time in decades, they're second best...at best," Farmiloe told Fox News Digital.
He suggested that Google is "trying to close the gap" with releases, PR, hype, and developer involvement but has been unable to close the technical gap despite putting all its engineering focus into AI.
He noted that the company approaches AI in three "buckets": text, visual and audio. While Google likely feels its text offering is in a good place, according to Farmiloe, the visual category is in its earliest innings.
"As an AI startup in their Cloud Program, we didn't even have access to their visual tools. That's how early they are...even developers with access don't have access. I suspect that Sora's release put even more pressure on Google to make progress in the visual category, and the technology likely wasn't ready for public use," he added.
Google halted Gemini's image generation feature last week after users on social media flagged that it was creating inaccurate historical images that sometimes replaced White people with images of Black, Native American and Asian people.
Google CEO Sundar Pichai told employees last Tuesday the company is working "around the clock" to fix Gemini's bias, calling some of the images generated by the model "completely unacceptable."
Farmiloe surmised that, given the immense pressure on Google to match OpenAI, it likely rushed the release of its image generation, resulting in a "hard and fast" feedback loop.
"What I will say is that when Google releases new models and tools for developers to use, they've been great at saying that the model may not be ready to be used in production. That step seems to have been skipped in this instance, unfortunately," he said.
Former Google product marketing manager Garrett Yamasaki told Fox News Digital that although Gemini's initiative to promote diversity may have been "well-intentioned," it is clear the company encountered "critical oversights in execution."
Yamasaki, who spent many years working at Google before founding WeLoveDoodles, said it can be challenging to balance AI's representation without tipping into bias, whether by omission or overcompensation. While the "diversity-friendly AI" could have been a step toward addressing historical underrepresentation in digital media, Yamasaki said the backlash reveals the "fine line" between promoting diversity and inadvertently creating a "new form" of bias.
"The implications of biased AI on society are profound, affecting not just image generation but decision-making processes in healthcare, law enforcement, and employment," he added. "The Gemini backlash serves as a cautionary tale about the societal impact of AI and the importance of designing these technologies with a nuanced understanding of human diversity."
Yamasaki suggested that, internally, Google is likely reassessing its algorithms and the ethical frameworks guiding AI development. He said Google is probably engaged in "intense discussions" on refining Gemini's AI to depict human diversity in a "more balanced and accurate manner."
Yamasaki also predicted that the situation might push Google and the tech industry towards "deeper collaboration" with "diverse communities," including experts in ethics, sociology, and cultural studies to inform AI development processes.
"What went wrong in this rollout points to a broader issue in AI development: the need for more inclusive datasets and ethical guidelines that prioritize fairness and representation," he said. "Moving forward, tech companies must engage in open dialogues with diverse communities and stakeholders to ensure AI technologies serve and reflect the richness of human diversity, avoiding biases that could perpetuate inequalities."
The controversy surrounding Google Gemini has also reignited concerns about bias in AI in the larger tech world.
LexisNexis Risk Solutions Global Chief Information Security Officer Flavio Villanustre said large language models (LLMs), on which generative AI products are based, can amplify implicit or explicit bias found in the corpus material they are trained on.
"Because these models are not deterministic but probabilistic, it is very difficult to eliminate bias through algorithmic means or by restricting or filtering the training material. Bias can be exhibited only under conditions depending on prompts and context, so it's not easy to identify every possible scenario that could bias a given response," he told Fox News Digital.
The potential implications for society, Villanustre said, are significant and can range from a "slightly inappropriate response" to an outcome that could break existing anti-discrimination laws.
For example, he said using these models to make hiring decisions raises the possibility of discriminating against certain groups or individuals based on ethnicity, gender, race, age and other factors. These models could also make incorrect decisions related to state benefits eligibility, loan rates, and college admissions.
"If these issues are not concerning enough, we are starting to see a more pervasive use of these models in medical applications for diagnostic and therapeutic purposes. If, due to bias, a model incorrectly assesses the condition of a patient or the appropriate treatment, it could lead to life-altering consequences," he said.
But UST Chief AI Architect Adnan Masood said open AI models like Google's Gemma are "pivotal" for advancing the field because they enable a democratized approach to innovation. This, in turn, can accelerate the pace of discovery and application across many areas of study.
Masood said he believes open models can also promote transparency and ethical AI development by allowing broad scrutiny and understanding of the models' workings and biases, the kind of scrutiny Gemini itself has now undergone.
Masood, who also works as a regional director for Microsoft, said he believes Gemma will enable developers and researchers to innovate and build applications with powerful, efficient and scalable AI models.
"By providing open models like Gemma, Google aims to foster a collaborative ecosystem where the broader community can contribute to the advancement of AI, ensuring responsible development and deployment," he said. "This initiative reflects Google's commitment to open science and technology, encouraging widespread use and exploration of AI capabilities while addressing the need for models that are accessible on a variety of hardware configurations, from high-end GPUs to more modest CPUs."
Google did not return Fox News Digital's request for comment.