Innovation Future Specialist
Unless you are living on Mars or off the grid, you will have seen some remarkable predictions about artificial intelligence (AI) and what it might, and might not, achieve. So what do the experts predict about the future of AI?
Let us put them into three camps:
» AI will not have much impact
» Bad things will happen
» Good things will happen
There is a group of people who believe AI will not have much impact, at least not for the next 40 or 50 years. This prediction is based on some or all of the following:
» In past decades, when AI was predicted to achieve great things, it achieved very little
» Software developers have often struggled to write effective AI systems (using traditional software approaches)
» Looking back at history while trying to predict the future
» Using analogies of manual automation to make predictions, instead of considering what happens when automation copies our greatest asset: not our physical abilities, but our intelligence!
» A lack of knowledge of the big picture, as it is today
» An inability to grasp the impact of future exponential progress, compared to the more linear progress of the past
» Overlooking the ability of AI to rapidly self-improve, at a phenomenal rate of progress
So, as far as this group is concerned, there is nothing to worry about.
In contrast to the above group, there is a group of experts, innovators and scientists who think AI will become super-intelligent. They refer to a point called the AI Singularity, at which computers become as intelligent as humans, and then continue to improve their intelligence at an exponential rate until they are much more intelligent than humans.
This means that by the time we become aware of their human-like intelligence, they may already have moved far beyond us, before we have completed the debate on whether this is a good or bad thing. This group expects the AI Singularity to be achieved within this century; some predict within the next 40 or 50 years; others have been bolder, suggesting 2029 [Ray Kurzweil, at Google/Alphabet/X].
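The difference between linear and exponential progress can be made concrete with a toy model. Everything here is an illustrative assumption (the "human level" baseline, the growth rates, the doubling time), not a prediction:

```python
# Toy model: linear vs exponential capability growth.
# All numbers are illustrative assumptions, not measurements or forecasts.

HUMAN_LEVEL = 100.0  # arbitrary baseline for "human-level intelligence"

def linear(year, start=1.0, rate=1.0):
    """Capability grows by a fixed amount each year."""
    return start + rate * year

def exponential(year, start=1.0, doubling_years=2.0):
    """Capability doubles every `doubling_years` years."""
    return start * 2 ** (year / doubling_years)

def first_year_reaching(level, growth, horizon=100):
    """First year (0..horizon) at which `growth` meets `level`, else None."""
    for year in range(horizon + 1):
        if growth(year) >= level:
            return year
    return None

print(first_year_reaching(HUMAN_LEVEL, linear))       # 99: a lifetime away
print(first_year_reaching(HUMAN_LEVEL, exponential))  # 14: much sooner
print(exponential(20))                                # already ~10x human level
```

The point of the sketch is the shape, not the numbers: under exponential growth the crossing happens far sooner than linear extrapolation suggests, and only a few years after crossing human level the curve is already far beyond it, which is why the "debate window" in the paragraph above may be so short.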
Given that most of our infrastructure and our weapons are hooked into computer systems [and the Internet], and that military projects are actively developing robotic systems, there is a potential doomsday scenario where smarter artificial intelligence systems have access to control our infrastructure and our weapons. What they would do with this power is, as yet, unknown. It depends on the objectives of the AI, how it interprets what we are doing in the world, and how it evolves.
Some people, including people at the UN, seriously argue that autonomous killer robots should be made illegal across the entire world [stopkillerrobots.org]. That sounds like a sensible, precautionary approach; but will we be able to stop an illegal arms race in this area?
Another scenario is that humans remain the masters of AI, but AI is controlled by just a handful of powerful billionaires. Just as eight billionaires have reportedly held the same wealth as the poorer half of the world's population, future billionaires could keep all of the benefits of AI and automation for themselves. Because robots do all of the work, there would be no need for marketplaces to sell to the public; some ruthless billionaires might even ask: why do we need the poor? With an army of robots, the billionaires might also be protected from the poor. Such is the stuff of science fiction films; but it is also a possibility.
The third group goes along with the above predictions about the pace of change, so we still end up with super-intelligent computers, but this group believes that will be a good thing. For example, these computers will be able to:
» Cure all diseases
» Prevent ageing, and
» Do all of the work for us.
This utopian scenario means that we would be healthy and free to live a life of leisure, to pursue hobbies and interests, and to explore.
So which group is correct? Well, one of the mistakes that some predictors of AI's impact make is that they forget this...
Throughout history we have developed tools, and technologies, to amplify what a human can do. These have allowed us to have a bigger and bigger impact, and to do things faster and faster. But on planet Earth we have a population with a diverse range of views and objectives, and some use tools for the good of society, while others use tools for bad reasons (or selfish personal advantage).
This means AI will probably be used for all of our human activities: the good and the bad. The future might be a battle of AIs: the good versus the bad. Alternatively, we can hope that AI sees beyond our petty violent squabbles and guides us to the above "utopian" scenario.
The quest to develop super-intelligent AI will also lead to some unforeseen outcomes...
By considering various aspects of AI we might be able to encourage the use of good outcomes, prevent bad outcomes, and (hopefully) prevent unforeseen and accidental ugly outcomes.
You can play your part now. See the kind AI initiative in the box. Promote your thoughts about kindness and ethics for AI (and humans)...