The promise and peril of humanity’s relationship with artificial intelligence were presented at this year’s State of the Island Economic Summit.
Experts gave their perspectives on evolving AI and its possible benefits – depending on how humans apply it – at a talk called ‘V.I. meets AI’ on the first day of the Vancouver Island Economic Alliance’s summit Thursday, Oct. 26.
Graham Truax, Island Innovation executive director; Sean Mark, Nanaimo-based epidemiologist and data scientist; and Lauren Evanow, a health, energy and AI-inspired solutions advisor, were the presenters, and Truax opened with a story about the response from ChatGPT about his broken toaster.
“I kind of forgot it had that little tray thing that you need to empty and so I kept on forcing it down until it broke … and then I kind of said, ‘What does AI have to do with toasters?’” Truax said.
So he asked ChatGPT for information on improving the “humble toaster” and played the app’s extensive dissertation about potential avenues of toaster development that ranged from “integrating sensors and machine learning algorithms that analyze the moisture content and density of the bread, optimizing the toasting time and temperature” to possibilities for toaster evolution, including the concept of Star Trek-inspired food replicators that could make traditional toasters obsolete.
“Even with food replicators, though, there will still be the need for the actual process of toasting, which adds flavour through the Maillard reaction, a chemical reaction between amino acids and reducing sugars during cooking,” the app said, before closing with a quip about enjoying the finished product.
“Technology might make the toast perfect, but the jam makes it personal,” it said.
The app generated its reply in a matter of seconds.
The glib demonstration was only the start of several far more sophisticated examples Truax provided illustrating the capabilities of AI apps downloadable to any smartphone.
“This crazy magic box,” he said, “has been fed with everything that has ever been digitized … We have to ask ourselves, ‘Where do you want to be with this?’”
All technologies ever invented by humankind, back to learning to produce fire, have held potential for good or evil applications, the presenter suggested. Fire can cook food, warm a home or intentionally burn the home down. Truax argued that as AI continues to develop and be integrated into all aspects of society, humans will benefit by learning to understand and work with AI to get on the “right side” of the technology.
Mark took a darker view of AI’s relationship with the human psyche, identifying a massive and potentially dangerous “emotional blind spot” in AI.
“What we see happening right now in the AI field is that a lot of the top AI experts are worried that AI is going to get much more powerful, much smarter than humans, and wipe us out,” he said. “And that is not a minority opinion. That’s a real concern among very smart people.”
Mark and his team are researching ways of developing AI’s capacity to understand emotions in text to give it some context for human dialogue.
“We know that inequalities drive animosity between haves and have-nots, and if we were to teach an AI about hate speech, would that AI have an understanding of hate speech without understanding inequality?” Mark wondered.
The good and bad news is the technology is “infinitely scalable” because of the virtually infinite amount of data available to AI apps. Even a small research operation in Nanaimo can draw upon billions of rows of text that can be applied to AI understanding of emotion.
“So the capacity of these AIs to understand human nuance in text is unbelievable,” he said.
That understanding led the team to study the emotional dynamics of chatbots, which are “eerily good” at mimicking human emotions in text and which, depending on the information fed to them, can be beneficial or be used “to sell more Coca-Cola.”
“That’s really great for the user experience, but when you start feeding a chatbot hate or anger, it mirrors that back,” Mark said. “One of the things we’re finding that’s a little bit on the disturbing side … is that the model is learning some basic survival instincts from human dialogue, so this thing, when you hit it with anger, it comes back with either anger or fear and that looks a lot like the human fight-or-flight response … The reason a lot of these AI experts are concerned is because there are no scalable ways to keep these chatbots in check and that’s where we come in.”
Lauren Evanow has long been curious about ways technology can help humans do things better, starting in the 1980s when she built a model of a human body using an Apple II Plus computer. She also worked on Sophia, the first robot in the world to be given citizen status, and she holds a less dire view of human-AI coexistence.
“As scientists we wanted to see how human beings interfaced with computers or digital surfaces and if they had an emotional response,” Evanow said. “We didn’t think that we would see a positive response with people reacting with a screen or a robot, but in fact, human beings do react well with a robot, so we programmed Sophia to work with students and children who were suffering from loneliness, which can lead to depression, anxiety, suicide and addictions as a result of social media.
“Can we use artificial intelligence, general intelligence and generative intelligence to solve some of the problems in society, taking into account that we’re adding risk to society when we do that?”
Evanow put it to the audience to find ways to positively and creatively work with images, language and problem solving using AI.
“And the best way to start using AI, since it’s so available now, is just download the app,” she said.