Google’s recent ventures into AI haven’t exactly been successful, and may shine a spotlight on the concept’s faults more than they do on its merits. Bard, Google’s answer to ChatGPT, was flawed from the get-go.
Shortly after Bard was unveiled, Google posted a GIF showing off some of its functions. One of the examples involved a question about the James Webb Space Telescope that Bard got very wrong before carrying on as if nothing were amiss. In Google's defense, Bard isn't the only LLM with this problem. The inaccuracy that plagues many similar models is made worse by the fact that they tend to be convincing liars.
Beyond AI issues, there are other tech limitations that make this collaboration a terrible idea. Think of all the times you've asked a question or given a command to Alexa, Siri, or Google Home. Now think of the times it misheard you and either failed to fulfill the request, or took a guess and ended up doing something totally unrelated.
This problem is compounded if someone has an accent or otherwise speaks non-standard English. While some fast food workers may have a reputation for messing up your order, chances are AI isn't going to do much better.