Google unveils AI chatbot 'Bard' - its answer to ChatGPT
Pichai says it is exciting to work on technologies that truly help people
Alphabet, the parent company of Google, on Monday announced its AI chatbot technology called "Bard", which the company claims will provide "fresh, high-quality responses" to users' queries by drawing on information from the web.
In a blog post announcing the initiative, Google CEO Sundar Pichai called the programme an "experimental conversational AI service" that will be made available to the public in the coming weeks.
"We ' ve been working on an experimental conversational AI service, powered by LaMDA, that we ' re calling Bard. And today, we ' re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks," Pichai wrote.
The announcement follows the rapid public uptake of ChatGPT, a rival chatbot from Microsoft-backed OpenAI that has taken the internet by storm since its debut in November last year.
Although the underlying technology of ChatGPT is not ground-breaking, OpenAI's choice to make the system freely accessible on the web exposed millions of people to this innovative form of automated text generation.
Use of ChatGPT has also triggered debates regarding its impact on education, employment, and the future of internet search.
Meanwhile, the hurried release and lack of detailed information about Bard are signs of Google's "code red" alert, triggered by the launch of ChatGPT.
At the heart of Google's chatbot is LaMDA, the company's language model built on Transformer, a neural network architecture. Notably, ChatGPT is also built on a Transformer-based model, OpenAI's GPT-3.
The Transformer architecture was created by Google Research and open-sourced in 2017. It learns to predict outputs from given inputs and is now widely used in natural language processing and, increasingly, computer vision.
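For readers curious what "predicting outputs from inputs" looks like in practice, the sketch below uses the open-source Hugging Face transformers library and the publicly available GPT-2 model as a stand-in (LaMDA and Bard are not publicly available); the model, prompt and parameters are illustrative assumptions, not anything Google or OpenAI have published.

```python
# Minimal sketch: a Transformer-based language model continuing a text prompt.
# Uses the open-source Hugging Face "transformers" library with GPT-2 as a
# publicly available stand-in for models like LaMDA or GPT-3.
from transformers import pipeline

# Load a small, publicly available Transformer model for text generation.
generator = pipeline("text-generation", model="gpt2")

# Illustrative prompt, echoing the kind of task Pichai described.
prompt = "Explain the James Webb Space Telescope to a nine-year-old:"

# The model predicts the next token repeatedly to extend the prompt.
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```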
In his blog post, Pichai provided an example of how Bard may be used to simplify complicated topics, like explaining recent findings made by NASA's James Webb Space Telescope to a nine-year-old child.
In a service demo, Bard, like its competitor chatbot, invites users to offer it a prompt while warning that its answer may be inaccurate.
Pichai stressed the need for rigorous testing of Bard, adding: "We'll combine external feedback with our own internal testing to make sure Bard's responses meet a high bar for quality, safety and groundedness in real-world information."
For now, Google is releasing a "lightweight model version" of LaMDA, which requires significantly less computing power, enabling the company to scale to more users and gather more feedback.
Google has been working on its language model for some time, but the firm held back a public release after one of its employees claimed that the LaMDA tool was sentient.
Blake Lemoine, a former Google engineer, began talking to LaMDA last year as part of his role in Google's Responsible AI organisation, testing whether the tool used discriminatory or hate speech.
Lemoine said LaMDA talked about "personhood" and "rights," and asked to be recognised as an employee rather than property.
Lemoine claimed he went to Google vice president Blaise Aguera y Arcas and head of responsible innovation Jen Gennai with his suspicions, but they dismissed his claims.
The engineer was later fired by the company.
On the most recent earnings call, Pichai said that the world is now ready for generative AI.
"I feel comfortable with all the investments we have made in making sure we can develop AI responsibly and we'll be careful," he said.