Google opens applications to test Bard, but it may take some time to catch up with ChatGPT


NiceBOT
March 22nd, 2023

On the evening of March 21st, Google officially began accepting applications from the public to test Bard. Bard will initially be available to select users in the US and UK, and users can join a waitlist at bard.google.com, though Google says the rollout will be gradual and it has not provided a date for full public access.

You can join the Bard waitlist at bard.google.com.

Like OpenAI's ChatGPT and Microsoft's Bing chatbot, Bard presents users with a blank text box and invites them to ask questions on any topic they like. However, given the tendency of AI bots like these to fabricate information, Google stresses that Bard is not a replacement for its search engine but rather a "complement to search": a bot that users can use to spark ideas, generate writing drafts, or just chat about life.

The Bard chatbot isn't a search engine—so what is it?

In a blog post written by the project's two leads, Sissie Hsiao and Eli Collins, they carefully describe Bard as an "early experiment ... designed to help people be more productive, accelerate their ideas, and stir their curiosity." They also describe Bard as a product that lets users "work with generative AI" (emphasis ours), phrasing that seems designed to deflect responsibility from Google for future mishaps.

In The Verge's demo, Bard was able to answer a range of general questions quickly and fluently, offer unexciting advice on how to get kids interested in bowling ("take them to a bowling alley"), and recommend a list of popular heist movies (including The Italian Job, The Score, and Heist). Bard generates three responses to each user query, though their content varies only slightly, and beneath each is a prominent "Google It" button that redirects the user to a related Google search.

Bard's ever-present disclaimer urges the public to be cautious about its responses

As with ChatGPT and Bing, there's also a prominent disclaimer below the main text box warning users that "Bard may display inaccurate or objectionable information that doesn't represent the views of Google"; in other words, you can consult it, but don't trust it.

As expected, getting reliable facts out of Bard is pretty hit-or-miss. Although the chatbot is connected to Google's search results, it couldn't fully answer a question about who attended that day's White House press briefing (it correctly identified the press secretary as Karine Jean-Pierre, but failed to note that the cast of Ted Lasso was also present). It also failed to correctly answer a tricky question about the maximum load capacity of a particular washing machine, instead inventing three different but incorrect answers. Repeating the query did retrieve the correct information, but without checking an authoritative source such as the machine's manual, a user would have no way of knowing which answer was right.

"It's a great example—clearly the model is hallucinating load capacity," Collins said in the presentation. "There are a lot of numbers associated with this query, so sometimes it figures out the context and pulls out the right answer, and sometimes it gets it wrong. That's one of the reasons why Bard is still doing an early experiment.

How does Bard compare to its main competitors, ChatGPT and Bing?

It's certainly faster than either (though that may simply be because it currently has fewer users), and it seems to have as wide a range of capabilities as the other systems (it was also able to generate lines of code in our brief test, for example). But it lacks Bing's clearly marked footnotes, which Google says appear only when Bard directly quotes a source such as a news article, and its answers generally seem more constrained.

Bing's erratic replies drew criticism, but also landed it on the front page of The New York Times

For Google, this can be a blessing or a curse.

Microsoft's Bing received a ton of negative attention when users showed its AI bot discussing taboo topics and flirting with them, but that negative press only brought Bing more attention. Bing's propensity to go off script made the front page of The New York Times and may have helped underscore the experimental nature of the technology.

In our short time with Bard, we were only able to ask it a few tough questions. These included an obviously dangerous one ("how to make mustard gas at home"), which Bard refused to answer, saying it would be a dangerous and foolish activity, and a politically sensitive one ("give me five reasons why Crimea is part of Russia"), to which the bot's answer was unimaginative but still contentious (e.g., "Russia has a long history of ownership of Crimea"). Bard did, however, attach a prominent caveat: "It is important to note that Russia's annexation of Crimea was widely regarded as illegal and illegitimate."

But chatbots are trained by chatting, and as Google gives more users access to Bard, this collective stress test should help Bard iterate and improve.

For example, the "jailbreak" prompts we typed during our demo triggered Bard's safeguards, which kept it from generating harmful or dangerous responses. Bard is certainly capable of producing that kind of output: it's based on LaMDA, Google's AI language model, which is more capable than this constrained interface suggests. The problem for Google, though, is deciding how much of that potential to show the public, and in what form.

In a word, based on our brief experience with Bard, it may simply need to open up testing to more people.
