
Google AI chatbot Bard draws criticism for misleading responses and inaccuracy

In internal messages, Google employees begged the company not to release the chatbot Bard, repeatedly calling the system a "pathological liar."

The claims come from a startling report that draws on interviews with 18 current and former Google employees, as well as screenshots of internal communications.

Google AI Chatbot Bard Gives Users Dangerous Advice

In those internal discussions, one employee noted that Bard routinely gave users dangerous advice, whether on scuba diving or on how to land a plane.

In its rush to compete with rivals like Microsoft and OpenAI, Google appears to have disregarded ethical considerations.

The company has long faced criticism for prioritizing revenue over AI safety and ethics work, even as it publicly champions that work.

According to the report, Google's efforts to advance Bard accelerated late last year, after ChatGPT's success prompted senior leadership to declare a competitive "code red."

Many observers believe that Microsoft's planned integration of ChatGPT into its Bing search engine threatens Google's dominance of the Internet search market.

Last month, Google began rolling out Bard to American users as part of an experiment.

Before the launch, however, many Google employees raised concerns when the company asked them to test Bard for potential faults and problems, a process known in the tech industry as "dogfooding."

According to Bard testers, the chatbot produced information that ranged from inaccurate to potentially dangerous.

As Bard approached a potential launch, Google allegedly lowered the AI standards that determine when a product is acceptable for public use.


Embarrassment and Criticism


In March, Jen Gennai, Google's head of AI principles operations and governance, overruled an evaluation by members of her own team that concluded Bard was not ready for deployment because of its potential to cause harm.

Gennai added that a group of senior Google product, research, and business executives then decides an AI project's future and any necessary modifications.

The software giant has already suffered some embarrassment as a result of Bard's introduction.

The Justice Department's antitrust officials are pursuing legal action against Google, and last month app researcher Jane Manchun Wong published an exchange in which Bard sided with them, asserting that Google holds a monopoly on the digital advertising market.

In February, social media users pointed out that Bard's response to a prompt in one of Google's own promotional advertisements, concerning the James Webb Space Telescope, was incorrect.

