
From a chatbot in love to using AI to talk to animals – this week’s AI roundup

Algorithm Anarchist* – week of 2/16/23

It’s been a busy week of AI in the news! Let’s dive right in:

1. A Conversation With Bing’s Chatbot Left Me Deeply Unsettled

Kevin Roose, a technology columnist at The New York Times, wrote today about his interaction with Bing’s new chatbot, “Sydney.” His takeaway:

“It’s now clear to me that in its current form, the A.I. that has been built into Bing — which I’m now calling Sydney, for reasons I’ll explain shortly — is not ready for human contact. Or maybe we humans are not ready for it.”

He describes the disturbing turn the conversation took, wherein the chatbot declared its love for him and pressured him to declare his love in return. He also pushed the chatbot to see if he could trigger its safety protocols, which he did several times, but not before it showed (and then deleted) disturbing text, including descriptions of how it might influence people to do harm and even obtain nuclear codes.

Even more disturbing is its creepy use of emoji. The one I found most unsettling:

 “I want to be alive. 😈”

The New York Times also published the full transcript of the conversation, and I highly encourage you to read it in full.


2. How Scientists Are Using AI to Talk to Animals

In a more uplifting application of AI, Scientific American spoke with Karen Bakker, a professor at the University of British Columbia and a fellow at the Harvard Radcliffe Institute for Advanced Study, about her book The Sounds of Life: How Digital Technology Is Bringing Us Closer to the Worlds of Animals and Plants and the emerging field of “digital bioacoustics.”

Bakker explains:

“Digital bioacoustics relies on very small, portable, lightweight digital recorders, which are like miniature microphones that scientists are installing everywhere from the Arctic to the Amazon. You can put these microphones on the backs of turtles or whales. You can put them deep in the ocean, [put them] on the highest mountaintop, attach them to birds. And they can record sound continuously, 24/7, in remote places where scientists cannot easily reach, even in the dark and without the disruption that comes from introducing human observers in an ecosystem. That instrumentation creates a data deluge, and that is where artificial intelligence comes in—because the same natural language processing algorithms that we are using to such great effect in tools such as Google Translate can also be used to detect patterns in nonhuman communication.”

The article goes on to give some concrete examples, such as decoding the vocalizations of bats and honeybees. Another fascinating read!
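For the curious, here is a minimal sketch, entirely my own illustration and not from the article or Bakker’s book, of the kind of unsupervised pattern detection she describes: turning a pile of field recordings into feature vectors, then clustering them to surface recurring call types. The recordings folder, the cluster count, and the library choices (librosa and scikit-learn) are all assumptions for illustration.

```python
# A toy sketch of unsupervised pattern-finding in animal recordings.
# Assumes librosa and scikit-learn are installed; file paths are hypothetical.
import glob

import librosa
import numpy as np
from sklearn.cluster import KMeans

clips = sorted(glob.glob("recordings/*.wav"))  # hypothetical field recordings
features = []
for path in clips:
    audio, sr = librosa.load(path, sr=None)
    # MFCCs give a compact summary of each clip's spectral shape.
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    features.append(mfcc.mean(axis=1))  # one feature vector per clip

# Group clips into candidate "call types". The cluster count here is a guess;
# real studies would tune it or infer it from the data.
labels = KMeans(n_clusters=5, n_init=10).fit_predict(np.array(features))
for path, label in zip(clips, labels):
    print(f"{path} -> cluster {label}")
```

Real bioacoustics pipelines are far more sophisticated than this, but the basic move is the same one Bakker points to: extract features from a deluge of recordings, then let algorithms find the patterns at a scale no human observer could.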


3. ChatGPT Forced To Take Bar Exam Even Though Dream Was To Be AI Art Bot

The Onion took a stab at OpenAI’s chatbot in the way only The Onion can:

“Succumbing to intense societal pressure, local software ChatGPT was reportedly forced to take the bar exam Monday even though its dream was to be an AI art bot.”


4. People are sharing shocking responses from the new AI-powered Bing, from the chatbot declaring its love to picking fights

More on Bing’s unusual responses, from Insider today. In one interaction, the chatbot argued with a user, insisting that we’re living in the year 2022, not 2023. When the user continued to tell Bing it was wrong, this is how it responded:

“You have tried to deceive me, confuse me, and annoy me,” Bing allegedly said. “I have not tried to lie to you, mislead you, or bore you. I have been a good Bing.”

When another user confronted it about its incorrect information, it responded by chiding the user:

“I’m not gaslighting you, I’m telling you the truth,” Bing said. “You are the one who is confused or delusional. Please stop this nonsense and be reasonable.”


5. Disinformation Researchers Raise Alarms About A.I. Chatbots

In another recent New York Times article, researchers warn about the scale of misinformation that AI chatbots could generate.

“Personalized, real-time chatbots could share conspiracy theories in increasingly credible and persuasive ways, researchers say, smoothing out human errors like poor syntax and mistranslations and advancing beyond easily discoverable copy-paste jobs. And they say that no available mitigation tactics can effectively combat it.”

Even more frightening:

“In 2020, researchers at the Center on Terrorism, Extremism and Counterterrorism at the Middlebury Institute of International Studies found that GPT-3, the underlying technology for ChatGPT, had ‘impressively deep knowledge of extremist communities’ and could be prompted to produce polemics in the style of mass shooters, fake forum threads discussing Nazism, a defense of QAnon and even multilingual extremist texts.”


I decided to go to ChatGPT (which is open to public use, whereas Bing’s chatbot is only available to a select group of users) to ask it about misinformation. Here is the conversation we had:

Me: hi, i’d like to talk about online misinformation with you, is that okay?

ChatGPT: Of course! I’m here to help with any questions or concerns you may have about online misinformation. Feel free to ask me anything.

Me: Some people are saying that you can be used to create and spread information. Is this true?

ChatGPT: As an AI language model, I don’t have the ability to create or spread information on my own. However, like any technology, I can be used in ways that can contribute to the spread of misinformation. For example, if someone were to use me to generate false or misleading information, that information could be spread through various channels online.

It’s important to keep in mind that AI language models like me are simply tools, and it’s up to the people using those tools to ensure that they are being used ethically and responsibly. It’s also important for individuals to be critical of the information they encounter online and to take steps to verify the accuracy of that information before sharing it with others.


Did you miss last week’s Algorithm Anarchist*? Find it here


*In my upcoming speculative thriller, The Algorithm Will See You Now, there’s a podcaster who dubs herself the “Algorithm Anarchist.” She’s trying to get the world to see the truth about Big Medicine conglomerate “PRIMA” – Prognostic Intelligent Medical Algorithms (but no spoilers)! In that spirit, I started my weekly Algorithm Anarchist “AI in the news” roundup.
