Pranav Mistry, the CEO of Samsung's STAR Labs, speaks to Forbes India about his new technology, which allows human-like digital avatars to emote in realistic ways and interact in real time with original responses
Is it possible to have a fully virtual, computationally created being that looks and behaves like us? That’s what Pranav Mistry, president and CEO of Samsung STAR Labs, is working to build. One of the most awaited demos at the Consumer Electronics Show (CES) 2020 in Las Vegas, Neon features human-like digital avatars that look, speak and move like real people, respond in real time, display millions of human expressions and are built to have their own personalities and original thoughts. A digital species of sorts.
Mistry, a computer scientist and inventor who grew up in the small Gujarati town of Palanpur, is known for creating SixthSense, a gesture-controlled wearable device. He has previously worked with Microsoft, Google and NASA, among others. At his CES 2020 presentation, Mistry said that if there were one cause he could dedicate his life to, it would be the Neon, a way to make machines more human.
At the demonstration, which did have some technical glitches, Mistry introduced a few kinds of Neon: a yoga instructor, a flight attendant, a student.
Currently, the Neons work with two kinds of technology: Core R3, which stands for reality, real time and responsiveness; and the in-progress SPECTRA, which will give Neons long-term memory, like the human brain.
Mistry says that the technology is still raw, and the beta version of the Neon will debut at the end of 2020, at an event called Neon World. “This isn’t something that will be launched today and you can have in your homes tomorrow,” he said at his CES presentation. “We are becoming more like machines, rather than the other way around. That’s what we want to fix.”
Mistry spoke with Forbes India about the Neon avatars and the potential of the technology on the sidelines of CES 2020. Edited excerpts:
Q. What was the idea behind creating ‘artificial humans’ or Neons?
As you know, my professional pursuit has been about making machines more human. When I was in India, I worked with the India Incubation Centre to make Gujarati keyboards, for example—I’ve always wanted to connect technology to everyone at the grassroots.
With this project, I wanted to make technology more like us, so we don’t have to worry whether people can read and write. Rather than us learning the language of machines, can they learn the language of humans? That’s how it started.
Now, a continuous conversation can only happen with an avatar if it can exhibit all the expressions—the behaviours—that humans do. What we’re doing here is very unlike what artificial intelligence (AI) assistants like Siri and Alexa do. Our goal is not to have something that can answer questions for you. We want to give you technology that is humane to talk to.
Q. How are Neons different from an AI assistant?
They are fundamentally different in a couple of ways. Neons are not connected to the internet to give you answers, and unlike AI assistants, Neons can learn. They will have memory with SPECTRA, which makes them much more intuitive.
Right now, your devices need passwords and two-factor authentication. But Neons have the ability to recognise you and remember your interactions, just as a human friend would. Currently, anyone can come into my house and give Alexa or Siri instructions. A Neon would be more secure because it recognises you. Each Neon has a different personality, and character traits that will evolve over the years. When you interact with a Neon, it will register you and learn about your likes and dislikes. But it won’t know me, because it hasn’t met me yet.
Q. How can we use Neons?
We would start with the corporate world, with selected partners. The Neon has two core technology aspects: Core R3 and SPECTRA. We’ve made good progress on giving it memory with SPECTRA, and we’re super excited about it. But currently, Neons come enabled with Core R3, and can connect to any third-party value-added service.
Think of a bank in India, let’s say in Andhra Pradesh. The bank could need people with domain-specific knowledge who can also speak Telugu. It can easily plug this knowledge into Core R3, and ‘hire’ a Neon to interact with customers. The Neon can show up on any phone or screen and give customers the comfort of talking to a human. Similarly, hotel services could use Neons for any-time bookings or concierge services. Another Neon could become your fitness instructor, or your Marathi or Spanish teacher. We’re doing a lot more with technology right now; can’t we make these interactions more human?
Q. Will this impact jobs?
The impact won’t be on cutting jobs, but on widening the reach of technology. For instance, Neons could be used in the media services industry. If news breaks in the middle of the night, a whole crew has to be woken up and brought in. Instead, a Neon could deliver the update in 50 different Indian languages, automatically translated, without having to reshoot multiple times or use multiple anchors. The news anchor won’t be replaced in this scenario, but augmented.
Q. How would you combat concerns about deepfakes and fake news?
There are technologies that look and behave similarly, which we call image and video manipulation techniques. But they cannot generate original content. That’s the power of Core R3. We are not modifying or tweaking what someone has said or interfering with reality. There’s no fake news, because the content is generated by Core R3. It’s not a real person but a Neon, with its own personality and emotions.
When we’re talking to AI assistants on our phones, we are talking to Siri or Alexa, a universal character. But Neons will have their own voices and be unique to you. That’s what makes them private and ethical.