

The revolution in artificial intelligence is at the center of a debate ranging from those who hope it will save humanity to those who predict doom. "And I think these are all things society needs to figure out as we move along. And I think we have to be very thoughtful," Pichai said. "You know, one way we think about it: How do you develop AI systems that are aligned to human values, including morality? This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers and so on."

Society needs to adapt quickly, with regulations for AI in the economy, laws that punish abuse and treaties between nations to make AI safe in the world, Pichai said. He's walking a narrow line in how quickly AI advancements are released. Critics argue the rush to AI comes too fast, but competitive pressure, among tech giants like Google and smaller start-ups, is propelling humanity into the future - ready or not.

Google is holding back on releasing more advanced versions of Bard that can reason, plan and connect to internet search on their own so that the company can do more testing, get more user feedback and develop more robust safety layers, Pichai said.

Google has also built safety filters into Bard to screen for things like hate speech and bias.

Like the humans it's learned from, Bard is flawed. In an essay the AI wrote about economics, it referenced five books; each one was fabricated. This very human trait, error with confidence, is called, in the industry, hallucination. To help cure hallucinations, Bard features a "Google it" button that leads to old-fashioned search.
