By Associated Press, May 1, 2023

Google, which is owned by Alphabet Inc., has already been testing its own conversational chatbot, called Bard.
Now, Google is ready to test the AI waters with its search engine, which has been synonymous with finding things on the internet for the past 20 years and serves as the pillar of a digital advertising empire that generated more than $220 billion in revenue last year.
“We are at an exciting inflection point,” Alphabet CEO Sundar Pichai told a packed developers’ conference in a speech peppered with one AI reference after another. “We are reimagining all our products, including search.”
Begun, the chatbot wars have. Microsoft was first out of the gate with its updated version of Bing, appending chatbot functionality to its search engine and integrating both into the Edge browser, while Google trailed behind, only recently making its Bard chatbot available to the public.
Both companies have big plans for generative AI (the catchall name for AI that produces images, text, and video), integrating features into productivity software like Word, Excel, Gmail, and Docs, and pitching their respective chatbots as search engine companions, if not someday replacements.
Now that Bing and Bard are available for anyone to try (waitlist notwithstanding in Bard’s case), Inverse put the chatbots in a head-to-head test to get a sense of their usefulness.
Parsing the deluge of information served up by algorithmic systems built to maximize engagement has trained us, like slavering Pavlovian dogs, to rely on snap judgments and gut feelings in our decision-making and opinion formation rather than deliberation and introspection.
Which is fine when you’re deciding between Italian and Indian for dinner or waffling on a new paint color for the hallway, but not when you’re basing existential life choices on friggin’ vibes.
In his latest book, I, HUMAN: AI, Automation, and the Quest to Reclaim What Makes Us Unique, Tomas Chamorro-Premuzic, a professor of business psychology and Chief Innovation Officer at ManpowerGroup, explores the myriad ways that AI systems now govern our daily lives and interactions.
From finding love to finding gainful employment to finding out the score of yesterday’s game, AI has streamlined the information-gathering process. But, as Chamorro-Premuzic argues in the excerpt below, that information revolution is actively changing our behavior, and not always for the better.
In mid-March, a one-minute video of Ukraine’s President Volodymyr Zelenskiy appeared first on social media and later on a Ukrainian news website. In it, Zelenskiy told Ukrainian soldiers to lay down their arms and surrender to Russian troops.
But the video turned out to be a deepfake, a piece of synthetic media created by machine learning. Some scientists are now concerned that similar technology could be used to commit research fraud by creating fake images of spectra or biological specimens.
‘I’ve been worried very much about these types of technologies,’ says microbiologist and science integrity expert Elisabeth Bik. ‘I think this is already happening – creating deepfake images and publishing [them].’
She suspects that the images in the more than 600 completely fabricated studies she helped uncover, which likely came from the same paper mill, may have been AI-generated.