Last week I spent two interesting & insightful days at the ITx Rutherford conference. Artificial Intelligence was a hot topic & it was great to hear so many different views on this technology.
First up, pretty much everyone recognised “AI” as a buzzword covering a number of different technologies & techniques. In many cases “machine learning” is a more accurate label. In some cases “AI” is painted over the top of existing analytical tools in a bid to modernise them.
The consensus seemed to be that, in terms of science-fiction-style “AI”, humanity is a long way off from building a computer with anything remotely close to human-level intelligence.
But I was surprised just how far we’ve got in some areas. Dheeren Velu from CapGemini gave a super interesting presentation - part of which covered IBM’s Project Debater - an “AI” entity that can argue with humans, something that more or less blew my mind.
Here’s Project Debater in action:
Project Debater lost, of course - it was up against world debate champion Harish Natarajan, couldn’t emote & couldn’t read the “feel” of the crowd. Project Debater had only voice & hearing to communicate, while humans use body language, facial expressions & many other non-audio cues.
But listening to Project Debater does make it feel like the future has arrived & AI robots will be in our homes any day now. I’m looking forward to Project Debater’s next outing to see what it’s learnt this year.
Other presenters talked about multiple uses for “AI” services & products, with the whole process being as easy as signing up to an online service, dumping in some data & asking the AI some questions.
So what’s hype & what’s reality? Here are my thoughts so far:
Thought #1: Project Debater is impressive, but it represents a significant investment in a specialised area.
Thought #2: Most “AI” needs vast amounts of quality data to learn.
Thought #3: Most “AI” services are probably less sophisticated than the Google Search you did this morning.
Thought #2 is of particular interest - we’re definitely in the era of big data, where swathes of data are being collected, but that doesn’t mean most businesses & organisations have access to enough quality data about their customers or activities to make a difference.
A really good example is that Tesla Autopilot has reportedly clocked up 3 billion kilometres, yet Tesla vehicles still aren’t capable of fully autonomous driving. Think about that: 3 billion kilometres & Tesla is still learning to drive by itself.
Which leads to Thought #3 - the companies with the data are the companies who are going to benefit from any AI revolution. And that data has to be comprehensive & of exceptionally high quality.
In my library days we talked about a path of data > information > knowledge > wisdom. A lot of the work UpShift does these days is turning datasets into stories that can be understood at a human level.
At our current level of technology, “AI” struggles to convert data into wisdom - unless the task involves distilling mountains of data into very small bites of wisdom. I can’t help but feel we still have a long way to go before generalist AI makes an appearance.
So I left ITx with mixed feelings - excited by the future of technology like Project Debater, but also slightly disappointed (& a little bit relieved) that the “AI Revolution” is still on the horizon, rather than just around the corner.
My take-home was that quality data collection is particularly important if you want to leverage AI technologies, now or in the future. It also pays to apply some critical thinking to people’s use of the term “AI” - it may be sales speak covering any number of existing or older technologies.