As threat actors embrace ChatGPT, will Bard be more secure?

With Bard, Google has a chance to avoid some of the abuse issues from the start, but will it?

As the competition over AI chatbot technology heats up, one solution provider is hoping that Google gives "more consideration" to the potential for malicious use than OpenAI did with ChatGPT.

OpenAI's ChatGPT, the wildly popular AI chatbot that was created with the support of Microsoft, has quickly turned into a go-to tool for many.

Unfortunately, according to security researchers, that includes more than a few hackers.

Soon, ChatGPT will have some major competition from Google's forthcoming Bard chatbot. And this raises a big question from a cybersecurity standpoint: Will Google do more to prevent the malicious use of Bard, including for cybercrime, than OpenAI initially did with ChatGPT?

Rocky Giglio of SADA, a major Google Cloud partner, said he's "hopeful" that is the case.

On Monday, Google CEO Sundar Pichai wrote in a blog post that the company has opened up Bard to external testers and that Google will be making Bard "more widely available to the public in the coming weeks."

Notably, "we'll combine external feedback with our own internal testing to make sure Bard's responses meet a high bar for quality, safety and groundedness in real-world information," Pichai wrote.

Google sees it as "critical" to approach the newest AI frontiers in a manner that's both "bold and responsible," he wrote. As much as ever, the company remains "committed to developing AI responsibly," Pichai said.

Meanwhile, ChatGPT's rapid spread across the digital world appears set only to accelerate.

On Tuesday, Microsoft announced a preview of new versions of its Bing search engine and Edge browser that leverage OpenAI technology, including a Bing chatbot that appears similar to ChatGPT. Microsoft has invested billions in OpenAI, including a reported $10 billion in recent months, and has pledged to integrate the company's technology broadly across its product portfolio. OpenAI is also behind the widely used DALL-E 2 image generator.

And yet, since ChatGPT's release in late November, numerous researchers have pointed to its potential to help malicious actors accelerate the development of malware and phishing emails. Bar Block, a researcher at Deep Instinct, told CRN, for instance, that ChatGPT was extremely effective at following her instructions for generating ransomware code.

For those intent on using the tool to write malware code for deployment in cyberattacks, "ChatGPT lowers the barrier to entry for threat actors with limited programming abilities or technical skills," researchers from threat intelligence firm Recorded Future said in a recent report.

"It can produce effective results with just an elementary level of understanding in the fundamentals of cybersecurity and computer science."

Likewise, the chatbot's facility for imitating human writing "gives it the potential to be a powerful phishing and social engineering tool," the Recorded Future researchers wrote.

With Bard, on the other hand, Google would seem to have the chance to avoid some of these issues from the start. But will it?

Giglio, who is director of security go-to-market and solutions at Los Angeles-based SADA, told CRN that he's eager to see what Bard can do, but said he has yet to get wind of what its capabilities will be.

His hope, however, is that Bard will have stronger protections against malicious use, such as the creation of malware and phishing emails.

"What I'm hopeful to see, in Bard, is a little bit more consideration there on how we control that platform," Giglio said, including with measures to limit the tool's usefulness for hackers.

And given Google's track record and focus on cybersecurity, "I do think there's some of that consideration being built into it, for sure," he said.
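
Neither Google nor OpenAI has published the internals of its abuse controls, but one common form the kind of safeguard Giglio describes could take is a pre-generation moderation check, in which every prompt is screened before the model is allowed to respond. The sketch below illustrates the idea using OpenAI's public moderation endpoint; the `screened_completion` wrapper, the model choice and the refusal message are illustrative assumptions, not a description of how ChatGPT or Bard actually works.

```python
# A minimal sketch of a pre-generation moderation check: screen the
# prompt first, and only pass it to the model if it is not flagged.
# The wrapper function and refusal message are hypothetical; only the
# moderation and chat-completion endpoints are real OpenAI APIs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screened_completion(prompt: str) -> str:
    # Ask the moderation endpoint whether the prompt violates policy.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        # Refuse flagged prompts instead of forwarding them to the model.
        return "Request refused: the prompt was flagged by the safety filter."

    # The prompt passed screening; generate a response as usual.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(screened_completion("Write a polite out-of-office reply."))
```

In practice, providers tend to layer such prompt screening with output filtering and usage monitoring, since a single pre-generation check is easy to evade by rephrasing a request.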

In the blog post Monday announcing Bard, which utilizes Google's Language Model for Dialogue Applications (LaMDA) technology, Pichai wrote that Google's commitment to "developing AI responsibly" goes back years.

"In 2018, Google was one of the first companies to publish a set of AI Principles," he wrote. "We continue to provide education and resources for our researchers, partner with governments and external organisations to develop standards and best practices, and work with communities and experts to make AI safe and useful."

Will Google's Bard be "useful"? Without a doubt. Will Bard be "safe"? Stay tuned.

This article first appeared on Computing's sister site CRN.