You’ve heard of the Spike Jonze film, Her, right? In it, a lonely guy slowly falls in love with what is, essentially, Siri on steroids: a voice-activated AI program, voiced by Scarlett Johansson, that’s tailored to his needs and is supposed to help manage his life, but develops a consciousness of its own.

Ever since the film came out – in fact, long before the film came out – people have debated what it would take for a bot to get to the point that it can communicate like a real-life human.

Experiments like the Turing test, for example, are designed to see whether an AI program can trick a user into thinking there’s a person at the other end. The trouble is that, typically, this is achieved by inventing bots that emulate the obtuse, difficult, irrational or paranoid elements of the human character.

For example, a chatbot passed a Turing test a few years ago by pretending to be a 13-year-old boy from Odessa. The idea was that, hey, this kid’s from Odessa so he doesn’t really understand English that well, and, being a 13-year-old boy, he’s also sarcastic, evading answers or mocking the questions instead of providing useful insights.

Should AI Emulate Humans?

This is where the really interesting question comes in: what is the point of AI?

Ok, so if you ask a bot what the weather’s like today and it replies “why do you want to know?” you might think that you’re talking to a person – an annoying, irrational, awkward person. But what’s the point in creating AI that replicates the infuriating elements of people? You have to ask yourself: is the vision to realistically impersonate people, or to do a job that humans do now, but do it better – even if you sound like a robot while you do it?

It’s All About Context

As Adi Azaria, co-founder of Sisense, explains in this panel discussion from the Collision Conference, the point is about creating and meeting expectations.

For example, if you’re using an AI tool like Sisense’s chat bot to find out about your sales figures for the week, you don’t need a system that’s so sophisticated it can understand the subtext of ordinary human conversation, or empathize with you about issues from your childhood or whatnot.

What it needs to do is process the 20 or so possible ways that you could ask it, in natural language, what you’ve sold that week, how many staff you have working on a project etc. – and then answer you clearly and accurately, sending you links to relevant dashboards and breaking this down in text as required.
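To make that concrete, here is a minimal sketch of the kind of intent matching described above: mapping a handful of natural-language phrasings onto a small set of known questions. The intent names and regex patterns are illustrative assumptions, not Sisense’s actual implementation.

```python
import re

# Hypothetical intent table: each intent maps to a few regex patterns
# covering common phrasings of the same underlying business question.
INTENT_PATTERNS = {
    "weekly_sales": [
        r"\bwhat (did|have) (we|i) (sell|sold)\b.*\bweek\b",
        r"\bsales\b.*\b(this|last) week\b",
        r"\bhow (did|are) (we|i) do(ing)?\b.*\bweek\b",
    ],
    "project_headcount": [
        r"\bhow many (staff|people|employees)\b.*\bproject\b",
    ],
}

def detect_intent(utterance: str):
    """Return the first intent whose patterns match the utterance, else None."""
    text = utterance.lower()
    for intent, patterns in INTENT_PATTERNS.items():
        if any(re.search(p, text) for p in patterns):
            return intent
    return None
```

With twenty-odd patterns per intent, a narrow-domain bot can cover most of the ways a user will naturally phrase the same question, without needing any deeper understanding.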

In other words, in a professional context, you don’t need your chatbot to become your soulmate, understanding all the unarticulated background to what you say 90% of the time.

You need it to give you a clear answer to your key questions – actionable answers, not conceptual ideas.

The narrower the domain, the less sophisticated you need this to be. For example, if all you’re asking is for a breakdown of how many deals you’ve closed that week, your bot doesn’t need to comprehend the significance of what you’re asking – that’s an analysis you can do yourself, fast. Rather, it needs to take all those complex data streams, draw out the key numbers and present them to you in a way you understand.

So – if you’re talking about building Scarlett Johansson’s “Samantha” character, sure, you’ll be waiting a good 40 years for innovators to find a way to embed sophisticated human psychology into an AI system.

But if there are only a handful of possible answers to draw from your question, it’s already possible to build a bot that does this to impressive effect – even if you talk to it using natural, everyday language.

In fact, that’s exactly what Sisense has already achieved.

Getting Fast Answers to Business Critical Problems

“A year or two ago, we understood that people want access to their data, but in small doses,” says Adi. “If I ask you a question, you give me one answer and I can move on with my life. That’s the direction business is going in.”

“So we decided to connect our environment, the Sisense environment, to a chatbot framework, [integrated with] Facebook, Slack, Skype – whatever is out there for us to connect to – to allow the end business users to say hey, what are my sales figures for today? How did I do from a quarter point of view? How many employees do I have? And get an instant answer to this kind of question.”

In other words, you can’t ask your Sisense Chatbot to advise you on marketing strategy for the next year. That’s where your expertise comes in. But you CAN ask it to rapidly extract all of the important insights you need that tell you how you’re doing, what’s working and what you should do next.

It automates all the hard graft so that you can save your time for the high-level thinking that drives your business forwards.

We’re Not About to Be Replaced by Robots, Then?

As the Collision panel explained at length, deep learning and AI in general have enormous potential to help you run your business better and more efficiently, but for now at least, they’re no match for the way human beings think.

We’re all familiar with the retro sci-fi vision, popular in stuff like Star Trek or 2001: A Space Odyssey, that you will one day just chat to a computer and it will reply with the sophisticated answers you want. Sure, we’ve had tasters of that through Siri and so on, but it hasn’t really delivered yet. Frankly, since the 1950s, every year people have proclaimed that “real” (i.e. humanoid) AI is around 20 years away. Yet here we are, 60 years later, and it still hasn’t got there.

You cannot have a deep conversation with Siri. AI can’t read subtext. It can’t do common sense reasoning.

What it CAN do is detect and process language. It can identify words and syntactic frames to figure out broadly what you’re talking about, and give you answers so long as they stick to the script. What’s more, it can read your tone and connect you with a human if you seem to be getting frustrated.

At present, bots are great with templates – and since most of our basic business queries are linear, this already proves super helpful in a professional environment. Chatbots allow you to interact with them through conversation: the user interface is your query and the way you frame the query through speech. It’s no Samantha, but it’s powerful nonetheless.
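The template idea can be sketched in a few lines: once an intent is recognized, the bot fills a fixed response template with live numbers. The template wording, intent names, and data fields here are assumptions for illustration only.

```python
# Illustrative response templates keyed by intent. In a real system the
# data dict would come from a query against the analytics backend.
RESPONSE_TEMPLATES = {
    "weekly_sales": "You closed {deals} deals this week, totaling ${revenue:,.0f}.",
    "project_headcount": "There are {headcount} people working on {project}.",
}

def answer(intent: str, data: dict) -> str:
    """Fill the template for a recognized intent, or fall back gracefully."""
    template = RESPONSE_TEMPLATES.get(intent)
    if template is None:
        return "Sorry, I don't know how to answer that yet."
    return template.format(**data)
```

The conversation feels natural to the user, but under the hood the bot never leaves its script – which is exactly why it stays accurate.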

Where’s it Going Next?

As Adi explains, this is hardly the end of the line.

Like the human brain, AI’s knowledge and understanding are cumulative. It works through evolution.

“Everything is baby steps. We know that people have tremendous amounts of data and we’re not trying to solve all of this on the first day,” says Adi. “Any question they have requires huge amounts of feedback.”

Chatbots learn, he says, through the feedback they receive, constantly refining and adapting according to whether they get it right.

“If you ask a question and you get an answer, you react accordingly,” he says. “I can score your question and my answers, and others can make use of that. And that works more like a human than a bot, or a data bot – so it can understand from your feedback if it’s doing good or doing bad and it’s going to reduce the amount of wrong answers and give better answers all the time. It’s a learning system.”
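The feedback loop Adi describes can be sketched as a simple scoring mechanism: each question–answer pairing accumulates a score from user reactions, and the bot prefers the highest-scoring answer it has seen. This is a toy illustration of the idea, not Sisense’s learning system.

```python
from collections import defaultdict

class FeedbackLearner:
    """Toy sketch: score (question, answer) pairs from user feedback."""

    def __init__(self):
        self.scores = defaultdict(float)  # (question, answer) -> running score

    def record_feedback(self, question: str, answer: str, helpful: bool):
        """+1 for a helpful reaction, -1 otherwise."""
        self.scores[(question, answer)] += 1.0 if helpful else -1.0

    def best_answer(self, question: str, candidates: list) -> str:
        """Pick the candidate answer with the highest accumulated score."""
        return max(candidates, key=lambda a: self.scores[(question, a)])
```

Over many interactions, wrong answers are scored down and disappear, while good ones are reinforced – a crude version of the “learning system” described above.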

Just like natural evolution, precisely where AI will go next is uncertain, but one thing is clear: it’s getting smarter, and giving rise to new functions and commercial applications all the time.

“It’s about micro steps,” says Adi. “Micro progressions towards better business use cases. In the next five years, we’re going to see the next phase of evolution.”

View the full Collision Conference Monster Panel conversation here.
