Artificial intelligence: Do we know what we're doing?
AI systems already seem close to making better decisions than humans can – a relatively new development. Until now, even the most advanced machine learning systems, while very good at sorting and contextualising massive amounts of data, haven’t been better than us at deciding what to do with their findings.
AI changes all this through one distinguishing quality: its ability to adapt its own behaviour, often in milliseconds – instead of days, weeks or years, as is the case with the average human.
These AI breakthroughs are moving at such a rate that people like SingularityNet founder Ben Goertzel give us between 5 and 30 years before machines are smarter than us.
Daniel Hulme, CEO of AI consultancy Satalia, has a similar view, predicting that by the middle of this century we’ll build an AI system that’s smarter than us in every possible way. In essence, he says, it will be the “last invention that we ever create”. There’s an air of inevitability to his thinking: no matter how advantageous AI may be to us, the fact that it will eventually outsmart us means the outcome for us as a species is largely unknown.
So, if (when?) that point arrives, will AI be benevolent to us, or evil? If this sounds dramatic, imagine a future where AI no longer relies on humans for its decision making. What purpose will we serve for it? What decisions will it make, and will those benefit or harm humanity? Hulme then rightly goes on to question how we define “good” and “evil” in the first place.
Perhaps, he says, the only thing we can do is to prepare for the inevitability of an eventual superintelligence, and then help it leave the planet with us still intact.
There are two predominant schools of thought about AI: one is that as its intelligence scales up, it will eventually result in self-awareness, which some equate to consciousness. The other says that no matter how intelligent something becomes, it can never be conscious. That’s a uniquely human trait, and something a machine can never learn to be. If this is the case, we’ll always be able to control AI, as we’ll always have the upper hand.
Up until now, we’ve thought that creativity was part of this consciousness – our ability to use our imaginations and come up with many potential answers from a single problem – otherwise known as “divergent thinking”.
But as an article in The Atlantic points out, the new field of generative design, for example, uses AI systems that are fed masses of data and then asked to come up with thousands of designs that meet certain criteria. How is this different from creative problem-solving? In essence, AI is displaying divergent thinking as well.
The real problem with all of this is that we don’t actually understand what consciousness is. At all. We can’t even agree on whether it originates in the chemistry of our brains, or whether it’s something that transcends this dimension – something we can’t grasp in our current physical construct.
Philosophical conundrums aside, it’s easy to see the appeal of AI in its business applications, across multiple industries.
Companies like Cisco, for example, are using it to power the next generation of chat and voice assistants for their customers, while US independent financial advisor RAA is using it for cybersecurity defence. In healthcare, on-demand house call company Heal is using AI to help doctors diagnose patients, and to alert them when a patient’s health is deteriorating and a medical intervention could prevent hospitalisation.
There’s no doubt that AI has the potential to change our lives as we know them, in both a business and personal capacity. Hulme predicts that over the coming decade, AI will become commoditised, in that we’ll all have access to it. But despite these clear wins, some are sounding alarm bells about its potential dangers – Elon Musk is one; Bill Gates and Jeff Bezos are others.
It’s curious that, no matter how many tech experts are scratching their heads about it, there are no signs of AI’s development slowing down.
By our very nature, humans are builders. We’re inventors. Perhaps we’ve become fixated on the idea of building the ultimate “thing” – rather than stopping to ask whether the thing we’re building is a good idea for us as a species.
So, the big question is: despite all the benefits AI offers, do we really know what future price we’ll be paying to get them?