For two decades, Google's search engine has been the dominant gateway to information online, with minimal competition. That changed in November 2022, when OpenAI released ChatGPT, an AI chatbot that reached 100 million active users within its first two months; the Opera browser later integrated it into a sidebar for quick webpage and article summaries.
In February 2023, Google unveiled its own conversational AI chatbot, Google Bard, to counter OpenAI's ChatGPT. The service runs on LaMDA, a language model developed by Google. Unlike ChatGPT, Bard got off to a poor start: its first public demo was sharply criticized by Google employees and outside experts. The most notable error was Bard's mistaken claim about which telescope took the first image of a planet outside the solar system.
Let’s dive in to learn more about the launch of Google AI Bard and its shortcomings.
Google Employees Take on the Introduction of Google AI Bard
Google employees have reportedly criticized how CEO Sundar Pichai handled Bard's unveiling. According to a CNBC report, staffers used the company's internal forum, MemeGen, to voice their opinions about Bard.
Some of the internal memes suggested Pichai deserved the lowest performance rating (Perf NI). Another compared the fallout of two events: the announced layoff of about 12,000 employees, which coincided with a roughly 3% drop in the company's stock, versus Bard's wrong answer, after which the stock fell about 9%.
The rush to release Bard showed elsewhere too: one presenter forgot to bring a phone for the demo, and Google's promotional tweet about the AI contained a factual error regarding the James Webb Space Telescope. That error dealt a big blow to Google, wiping roughly $100 billion off its stock value in a single day.
What is Google’s AI Bard, and How Does it Work?
Google Bard is an experimental AI chat service that works much like ChatGPT. A key difference is that Bard pulls information from the web before generating replies that are meant to be high quality and easy to understand, whereas ChatGPT was trained on a large corpus of text, which helps it understand context and provide human-like responses. Google plans to integrate Bard into its search tools and to offer it to businesses for automated support.
Bard uses LaMDA, a language model that can answer questions while applying context. Bard scans the web for information and then converts it into a conversational response similar to a human's. Beyond search tools, Google intends to bring it to messaging platforms and websites once the chatbot's limited beta phase is complete.
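The retrieve-then-respond flow described above can be sketched in a few lines. This is purely illustrative toy code, not Bard's actual implementation: the word-overlap "relevance" score and the tiny in-memory corpus stand in for real web retrieval and a real language model.

```python
# Toy retrieve-then-generate loop: fetch candidate passages, pick the most
# relevant one, and wrap it in a conversational reply. Illustrative only.

def retrieve(query, corpus):
    """Rank passages by how many words they share with the query (toy score)."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(p.lower().split())), p) for p in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored if score > 0]

def answer(query, corpus):
    """Return the best passage phrased conversationally, like a retrieval bot."""
    passages = retrieve(query, corpus)
    if not passages:
        return "I couldn't find anything on that."
    return f"Here's what I found: {passages[0]}"

corpus = [
    "LaMDA is a conversational language model developed by Google.",
    "The James Webb Space Telescope launched in December 2021.",
]
print(answer("Who developed LaMDA?", corpus))
```

A production system would replace `retrieve` with live web search and `answer` with a generative model conditioned on the retrieved passages.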
Here are the three key features of Google conversational AI chatbot Bard:
- Bard is a self-learning system, so users should receive answers tailored to their specific queries.
- It answers questions conversationally, much as a human might.
- Google released the chatbot to testers using a lightweight model version of LaMDA.
Can Anyone Use It, and is it Free?
Currently, only "trusted testers" can use this conversational AI chatbot. The good news is that the chatbot's AI-powered features will soon be integrated into Google Search, so it's only a matter of time before Google users can experience its capabilities firsthand.
During its AI event in Paris, Google announced that the chatbot's features would also be incorporated into Lens and Maps. For now, Google hasn't confirmed whether Bard will be free.
The Controversies Around Google Bard
The launch of Google's AI, Bard, didn't have a great start. When the chatbot was asked about discoveries made by the James Webb Space Telescope, it replied that the telescope took the first photos of a planet outside the solar system. That information wasn't accurate: the European Very Large Telescope achieved this milestone. Astronomers flagged the error on social media, and subsequent tweets raised concerns about the effectiveness of Google's fact-checking.
Even before Bard's official launch, the language model behind it had come under sharp criticism. Soon after LaMDA's publication, Blake Lemoine, a Google engineer, released a document arguing that LaMDA could be "sentient." The controversy faded after Google placed Lemoine on administrative leave and later dismissed him from the company.
During Bard's release, Google's CEO said that LaMDA's testing phase would incorporate human feedback to ensure that the responses Bard provides are safe and high quality. The developers of LaMDA are also improving the model by letting it draw on external information sources and by updating the information incorporated during the training phase.
What’s the Difference Between ChatGPT and Google’s AI, Bard?
One of the distinct differences between Bard and ChatGPT is the underlying language model. ChatGPT is built on GPT-3.5, a Transformer-based language model from OpenAI, while Bard is built on LaMDA, which Google developed in-house.
Another notable difference is that GPT-3.5's training data extends only to 2021, so some of its information is out of date. The good news is that GPT-4 is expected to be released soon with more recent training data. Bard, by contrast, can index and interpret passages from web pages, so the information it provides can be more current than ChatGPT's.
7 Best Practices When Deploying AI Chatbots
AI will play a significant role in how people work in the future. With intelligent bots, businesses will see improved efficiency in their operations. However, when implementing AI bots, brands should ensure security, privacy, and fairness are built into these systems.
Check out some of the responsible AI practices worth considering.
Continuously Test and Track AI Results
It's crucial to continuously monitor and test your AI chatbot to ensure it works optimally and can be trusted. Monitoring also helps you ensure the system declines to make a prediction when an important input feature is missing, rather than producing an unreliable one.
One way to do it is by performing iterative testing, which involves incorporating different user requirements in the development. It’s also crucial to use the most accurate and reliable data when performing the AI tests to ensure your chatbot behaves as it should.
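One lightweight way to apply the iterative testing above is a regression suite: a fixed set of query/expected-keyword pairs replayed after every model update. The sketch below is a hypothetical example; `bot` is a stand-in for your real chatbot, and the canned replies and test cases are made up for illustration.

```python
# Hypothetical chatbot regression suite. Replace `bot` with a call to your
# deployed system; the canned replies here just make the sketch runnable.

def bot(query):
    canned = {
        "store hours": "We are open 9am-5pm, Monday to Friday.",
        "refund policy": "Refunds are accepted within 30 days of purchase.",
    }
    return canned.get(query, "Sorry, I don't know.")

# Each case pairs a query with a keyword its reply must still contain.
REGRESSION_CASES = [
    ("store hours", "9am-5pm"),
    ("refund policy", "30 days"),
]

def run_regression(reply_fn, cases):
    """Return the cases whose reply no longer contains the expected keyword."""
    return [(q, kw) for q, kw in cases if kw not in reply_fn(q)]

failures = run_regression(bot, REGRESSION_CASES)
print(f"{len(failures)} regression(s)")
```

Re-running this suite after each update turns "continuously test" into a concrete gate: any non-empty failure list blocks the release.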
Factor in User Experience
How people interact with your AI chatbot has a significant impact on its decisions, recommendations, and predictions. You can train the chatbot to give a single answer when that answer clearly satisfies the user's need; in other cases, the system can suggest several options. In practice, achieving good precision with a single answer is harder than offering a few candidates.
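The single-answer-versus-suggestions trade-off can be expressed as a simple confidence gate. The threshold, candidate answers, and scores below are illustrative assumptions, not values from any real product.

```python
# Confidence-gated response: one direct answer when the model is sure,
# a short list of options when it isn't. Threshold is an assumed value.

CONFIDENCE_THRESHOLD = 0.8

def respond(candidates, threshold=CONFIDENCE_THRESHOLD, top_k=3):
    """candidates: list of (answer, confidence) pairs from the model."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    best_answer, best_score = ranked[0]
    if best_score >= threshold:
        return best_answer                 # confident: one direct answer
    return [a for a, _ in ranked[:top_k]]  # unsure: offer a few options

print(respond([("Reset your password via Settings.", 0.93)]))
print(respond([("Option A", 0.5), ("Option B", 0.4), ("Option C", 0.3)]))
```

Tuning the threshold is exactly the precision trade-off the text describes: raise it and you fall back to suggestions more often, lower it and you risk confidently wrong single answers.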
Properly Interpret Raw Data
Insufficient or poorly understood data is one of the main causes of failed AI implementations and misleading results. Before using any data in your AI chatbot, work with business professionals to ensure it is interpreted properly. Go through the data and check for errors, typos, and missing components, and make sure it includes all the data sets you wish to analyze.
It's also important to consider how your data relates to what you wish to predict, and to check that it isn't biased. Once you understand your raw data, you can quickly identify its limitations and set realistic expectations for your system's predictions.
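A minimal data-quality pass can automate the checks above: flag missing or empty fields and use label imbalance as a crude bias proxy. The record fields (`query`, `intent`) and the 90% threshold are assumptions for illustration, not a standard.

```python
# Minimal pre-training data check: missing/empty required fields, plus a
# crude class-imbalance warning as a bias proxy. Field names are made up.

def validate(records, required=("query", "intent")):
    issues = []
    for i, rec in enumerate(records):
        for field in required:
            if field not in rec or not str(rec.get(field, "")).strip():
                issues.append(f"record {i}: missing or empty '{field}'")
    # Warn if one intent label dominates the labeled data.
    intents = [r["intent"] for r in records if r.get("intent")]
    if intents:
        top = max(set(intents), key=intents.count)
        if intents.count(top) / len(intents) > 0.9:
            issues.append(f"possible bias: intent '{top}' covers >90% of records")
    return issues

data = [
    {"query": "hi", "intent": "greeting"},
    {"query": "", "intent": "greeting"},   # empty query
    {"query": "bye"},                      # no intent label
]
for issue in validate(data):
    print(issue)
```

Real projects would extend this with schema validation and statistical bias audits, but even a pass this simple catches the typos and missing components the text warns about.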
Consider Re-Organizing Your IT Infrastructure
Many brands are burdened with complex tech stacks and outdated legacy systems, making deploying AI challenging. If your business operates in such a setting, think of ways to create the proper foundation before deployment.
First, consider how your organization can use AI to improve your product, services, or business process efficiency. All these issues will guide you in the successful creation and execution of your AI strategy.
Consider the Most Relevant Use Cases
For optimal deployment of AI, it’s crucial to find the best use cases in areas like natural language processing and machine learning.
Start by checking how your close competitors have implemented their AI platforms. Additionally, consider leveraging AI accelerators from top cloud service providers already part of your Integration Platform as a Service (iPaaS).
Use Different Metrics While Training the AI
When training the AI, adopt several metrics to help you distinguish between different kinds of experiences and errors. Metrics worth considering include overall system performance, user feedback, and false-positive rates. It's also crucial that your metrics match the goal of your system.
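The three metric families just mentioned can be computed side by side. In this sketch the labels come from a hypothetical intent classifier and the ratings from in-product feedback; all the data is made up for illustration.

```python
# Illustrative metrics for an intent classifier: overall accuracy (system
# performance), per-intent false-positive rate, and average user feedback.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def false_positive_rate(y_true, y_pred, positive):
    """Fraction of non-`positive` examples wrongly labeled as `positive`."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t != positive]
    if not negatives:
        return 0.0
    return sum(p == positive for _, p in negatives) / len(negatives)

y_true = ["refund", "greeting", "refund", "greeting"]
y_pred = ["refund", "refund",   "refund", "greeting"]
ratings = [5, 4, 2, 5]  # e.g. star ratings collected in-product

print("accuracy:", accuracy(y_true, y_pred))
print("FPR(refund):", false_positive_rate(y_true, y_pred, "refund"))
print("avg feedback:", sum(ratings) / len(ratings))
```

Tracking these together shows why a single number misleads: here accuracy looks decent while half of the non-refund queries are misrouted to the refund flow.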
Continuously Perform Updates
The benefit of regularly monitoring and updating your chatbot is that it can incorporate user feedback. Before deploying an updated model, however, consider how the change will affect user experience and the system's overall quality.
In particular, pay close attention to both short- and long-term solutions to AI problems. A quick fix like blocklisting can stop a problem immediately, but it rarely remains the best solution over the years. Knowing how to balance quick resolutions with long-term fixes is crucial.
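Blocklisting, the quick fix mentioned above, amounts to a simple output filter. The terms and refusal message below are placeholders; the point of the sketch is that the filter hides a bad output without addressing why the model produces it, which is why it needs a long-term counterpart such as retraining.

```python
# A blocklist as the archetypal quick fix: redact known-bad outputs today,
# schedule the real fix (retraining, better prompts) separately.
# Terms and refusal text are placeholders.

BLOCKLIST = {"badword", "leakedtoken"}

def filter_reply(reply, blocklist=BLOCKLIST):
    """Return a refusal if any blocklisted term appears in the reply."""
    words = (w.lower().strip(".,!?") for w in reply.split())
    if any(w in blocklist for w in words):
        return "Sorry, I can't share that."
    return reply

print(filter_reply("Here is your answer."))
print(filter_reply("This contains badword in it."))
```

The filter's weakness is also its lesson: trivial rephrasings slip past it, so it buys time rather than solving the underlying problem.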
Three months after the launch of ChatGPT, Google released its competitor, Bard.
Unlike ChatGPT, which runs on the GPT-3.5 language model, Bard is built on LaMDA, a neural-network-based language model developed by Google. Another unique aspect of Google Bard is that it pulls its information from the web, which should keep its responses fresh and high quality.
The first test run of Google Bard wasn't a success because it failed to provide accurate answers, which caused Googlers to question the rush to deploy. When it is fully rolled out to the public, it will be interesting to watch how it fares against ChatGPT.