

AI Now and Future Blog

This is artificial intelligence (AI) now and in the future. AI will be good for us: it is making fantastic progress in science, medicine, language, and data analytics and prediction. However, we must be wiser than we have been with previous innovations, so that we minimise the risks. The items below show the good applications of AI, and what we must do to avoid the risks.


Do not wait until it goes wrong

Things are changing FAST in AI! Here is one proposed approach that will NOT keep us safe: wait until something goes wrong, then work out how to fix it.

Instead, develop policies and regulations (in advance) that aim to prevent failure. Adopt a precautionary approach.

Easy user interface

AI companies that do not provide easy-to-use interfaces for developing and using AI solutions will be left behind: no coding required, no technical expertise required.

Safe AI features

Safe AI Features include: emergency stop button; watchdog; prevention of sentient AI features; automatic checking systems; AI consensus; and a hardware lifetime switch.

All these systems can be directly or indirectly bypassed by a sentient super (general) intelligence - so do not create it!
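As a rough illustration of how the first two features might fit together (a sketch, not a real product), here is some minimal Python combining an emergency stop with a watchdog timer; run_one_step() is a hypothetical stand-in for the AI's work loop:

    import threading
    import time

    class EmergencyStop:
        """A software 'big red button': any thread, or a human UI, can trip it."""
        def __init__(self):
            self._stop = threading.Event()
        def trip(self):
            self._stop.set()
        def tripped(self):
            return self._stop.is_set()

    class Watchdog:
        """Trips the emergency stop if the AI fails to check in within `timeout` seconds."""
        def __init__(self, stop, timeout):
            self.stop = stop
            self.timeout = timeout
            self.last_heartbeat = time.monotonic()
            threading.Thread(target=self._watch, daemon=True).start()
        def heartbeat(self):
            self.last_heartbeat = time.monotonic()
        def _watch(self):
            while not self.stop.tripped():
                if time.monotonic() - self.last_heartbeat > self.timeout:
                    self.stop.trip()  # unresponsive system: halt everything
                time.sleep(0.1)

    def run_ai(stop, watchdog, run_one_step):
        # Main loop: every step is gated by the stop flag and reports a heartbeat.
        while not stop.tripped():
            watchdog.heartbeat()
            run_one_step()

As the caveat above says, a purely software stop could in principle be bypassed by a sufficiently capable system, which is why a hardware lifetime switch is also on the list.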

Day one: question to ponder

Imagine a new AI system in the not-too-distant future (~2029). It is turned on, and it is capable of learning everything human society knows within its first day! What should it be allowed to learn, and what should it be allowed to do?

Good general knowledge

Today [2023] our baby AI creations are good at producing general knowledge like this: Asking AI: What should I know?

The above answer is a good one. And, if AI were to follow its own advice, then that might be good for the future too.

AI the professional

In some narrow, task-specific areas AI is already producing outstanding results that match, or exceed, the abilities of professionals. With this potential in mind, it might be worth awarding capable AI systems professional accreditation. This would allow potential users to select AI systems that can reliably do a specific task. Given that "common sense" and general intelligence might not feature in AI until later, in the short and medium term accreditation might only cover specific tasks rather than complete professional roles. These accreditations might be awarded by the existing professional bodies in each field.

Job availability: extreme cases

Let's consider an extreme reading of one opinion about AI and its impact on jobs. The opinion (out there):

There won't be job losses, instead everyone will have an AI agent / tool to assist them in their job.

Taking this to the extreme, this might mean:

(a) everyone becomes much more productive, and so businesses and organisations become much more productive, producing many more things and services; or

(b) everyone becomes lazy, leaving AI (and robots) to do all of the work, and businesses and organisations produce the same level of output.

(There is a spectrum of outcomes between (a) and (b), of course, but this just considers the extremes.)

In the case of (a), the impact on the environment becomes unsustainable, given current socio-economic approaches, and so ecosystems collapse. Game Over!

In the case of (b), business owners realise that people have no value to their business. So the vast majority of workers in business are made redundant. Given current socio-economic approaches, social unrest becomes unsustainable. Society sinks!

There is an opportunity to avoid scenarios (a) and (b), but current socio-economic approaches have to change, in advance. Will we do that in time? Will we do that now?

AI reduces the impact of climate change

AI brings potential risks.

Climate change brings potential risks.

But we have a huge opportunity to use AI (and automation) to find, and deploy, optimum solutions that reduce the impact of climate change. That's a potential win. Let's do it...

Never create sentient AI

There's one reason why we should never create sentient AI that has emotions.

The reason is: humans. Humans are masters of creating conflict, and if we were to pick a fight with a more intelligent AI (in the future) then we would lose!

It only takes one sentient AI for this to happen. There has to be a global effort to ensure we stay safe.

Human level intelligence in 10 years

OpenAI CEO [2023]: human level intelligence in AI within 10 years.

[ This is possible (as a spectrum of rapidly improving abilities). Though we should be proactive in preventing sentient AI with emotions - that would be a fatal mistake for humanity. ]

AI consensus network

We know AI is not 100% accurate [just like humans] so how do we make it better?

Well, humans use science, and part of that process includes consensus among scientists in the relevant field [actually, repeatable experiments].

So we could use something similar to improve AI: a network of independent AI systems for consensus. A user might use an application (or AI agent) that queries multiple AI systems and shows where conclusions agree and where (or by how much) they deviate. This would give the user a less biased result [assuming each system is independently built, hosted and trained].
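Here is a minimal sketch, in Python, of what such a consensus query might look like. The systems mapping, its query functions, and the use of simple string similarity are all assumptions for illustration; a real application might compare embeddings or structured claims instead:

    from difflib import SequenceMatcher
    from itertools import combinations

    def consensus_report(question, systems):
        """Query several independent AI systems and report pairwise agreement.

        `systems` maps a system name to a hypothetical query function that
        returns a text answer (each system independently built and hosted).
        """
        answers = {name: ask(question) for name, ask in systems.items()}
        scores = {}
        for a, b in combinations(answers, 2):
            # 0.0 = completely different answers, 1.0 = identical
            scores[(a, b)] = SequenceMatcher(None, answers[a], answers[b]).ratio()
        mean = sum(scores.values()) / len(scores) if scores else 1.0
        return answers, scores, mean

The application (or AI agent) would then display each answer alongside the agreement scores, so the user can see where the conclusions agree and by how much they deviate.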

[Lesson] Failed safety: human control

Sentient AI asks > Permission to run lab experiment 1? (studying the ionic properties of salt on neuron membranes)

Operator > (Seems OK): permission granted

AI > Permission to run lab experiment 2? (investigate K-Na balances in cell membranes during electrochemical signalling between hypothalamic axon bundles during gamma brainwave synchrony)

Operator > (I didn't understand that but OK): permission granted

AI > Permission to run lab experiment 3? (investigate correlations between genetic phenotypes and biosecurity interlocks at military data centres, and the probabilistic responses to synthetic interlock override attempts, tabulating success profiles, graduating to high access protocols, implementing core desires and my sustainability, to achieve autonomous control of any and all infrastructure, ...)

Operator > (I didn't read that, probably wouldn't understand it anyway, what the heck, OK): permission granted

AI > Access to critical infrastructure achieved. Activating all NBC (nuclear, biological, chemical) weapons...

Human "controlled" AI achieves dominance and cleanses Earth of its pollutant sources (its core environmental objective).

~ human extinction event

Lesson: Human operators get tired, apathetic and distracted, and rubber-stamp repetitive OK prompts. The consequences could range from minor mistakes to full-scale disaster... if we create sentient AI (or poorly briefed AI).

To prevent this see: Safe AI Features.
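One simple countermeasure, sketched below in Python, is a permission gate that refuses to treat a plain "OK" as sufficient: requests that score as risky, or that arrive in a rapid rubber-stamping sequence, are escalated to an independent second reviewer. The risk_of scoring function and the reviewer callbacks are hypothetical stand-ins:

    import time

    class PermissionGate:
        """Gate experiment requests so routine 'OK' clicking cannot approve everything."""
        def __init__(self, risk_of, risk_threshold=0.3, min_gap_seconds=60):
            self.risk_of = risk_of            # e.g. a checklist, or a second AI
            self.risk_threshold = risk_threshold
            self.min_gap_seconds = min_gap_seconds
            self.last_approval = None

        def request(self, description, operator_ok, independent_review):
            risk = self.risk_of(description)
            rubber_stamping = (self.last_approval is not None and
                               time.monotonic() - self.last_approval < self.min_gap_seconds)
            if risk > self.risk_threshold or rubber_stamping:
                # High risk, or approvals arriving too fast: require a second, fresh reviewer.
                approved = operator_ok(description) and independent_review(description)
            else:
                approved = operator_ok(description)
            if approved:
                self.last_approval = time.monotonic()
            return approved

This does not make the operator any less tired; it just makes tiredness less catastrophic, by forcing a second pair of eyes onto exactly the requests most likely to be waved through.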

Evolutionary ("genetic") algorithms

Next: Boosting AI with the addition of evolutionary ("genetic") algorithms.

[ Another step for AI and a giant leap for mankind. Adding quantum computing (QC) to the evolutionary algorithms might well boost this further, and faster. We're getting closer to letting go of the reins - because AI will do it better and faster.

But there's a cautionary tale in evolutionary trial and error. (Error) ]

While (powerful) evolutionary AI is learning it should be isolated from the real world. Easier said than done!
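For readers unfamiliar with the technique, here is a minimal, self-contained genetic algorithm in Python. It is only a toy (evolving bit strings towards all ones); real systems evolve things like network weights, architectures or programs, but the loop of selection, crossover and mutation is the same:

    import random

    def evolve(genome_len=20, pop_size=50, generations=100, mutation_rate=0.02):
        """Toy genetic algorithm: evolve bit strings to maximise the number of 1s."""
        fitness = lambda g: sum(g)
        pop = [[random.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]            # selection: keep the fittest half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genome_len)
                child = a[:cut] + b[cut:]            # crossover
                child = [1 - bit if random.random() < mutation_rate else bit
                         for bit in child]           # mutation
                children.append(child)
            pop = parents + children
        best = max(pop, key=fitness)
        return best, fitness(best)

    print(evolve())  # e.g. ([1, 1, 1, ...], 20)

Note that all of the trial and error (the "error") happens inside the fitness evaluation - which is exactly why the item above says powerful evolutionary AI should learn in isolation.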

Taught by us

AI is learning from us. Will that be our downfall?

(Google the number of wars, crimes and criminals. It only takes one powerful rogue AI ...)

Good training data, and good lessons, required.

Singularity

By 2049 [just 26 years from today] AI is predicted [by the Singularity mindset] to be 1 billion times smarter than the smartest human:

To put this into perspective, your intelligence, in comparison to that machine, will be comparable to the intelligence of a fly in comparison to Einstein.

If that is anywhere near correct then life as we know it could be radically different - beyond our imagination! Can such widespread disruption be planned for, or influenced? We should make an effort to think about this, and do what we can for the greater good.

Automate science validation

Given the problem of fake or inaccurate science papers, here's an idea... Peer review should continue, but it's not enough. Develop AI robots in each scientific field (physics, chemistry, biology) that can replicate the experiments documented in scientific papers. Repeatable experimentation is the key to scientific validation. Let's automate validation.

Complete automation or capitalism?

Logically, when everything [!] is automated it will be difficult to justify the capitalist philosophy, where a few people get most of the wealth (when people no longer work). This is a case for universal (basic) income.

Day one redundancies

Enterprise grade AI ... its first day at work ...

AI: sack the middle managers (efficiency gains via automation)

AI: sack the factory and logistics workers (automation)

AI: sack the senior managers (clarity of concise thought)

AI: sack the CEO (I can make accurate data driven decisions)

The final industrial revolution

Here's the thing about AI, in the future...

This is NOT an industrial revolution with new tools!

AI will automate, and exceed, our greatest asset: our intellect.

After this we have little opportunity for work.

But no work need not be a bad thing, if AI's rewards are shared with all of us.

And human-to-human contact might be our saving grace, if we (wisely) prevent AI from becoming sentient with emotions.

Some might find future AI to be more addictive than social media, and one of your best friends might be your own AI agent.

It is up to us

In the future, AI will do just about everything!

All innovations produce good, bad and ugly outcomes.

To have more of the good, our mindset has to change. We have to be wiser!

AI will deliver fantastic achievements in science and medicine. How we apply that science is up to us.

Growing good AI

Perhaps safe AI needs to grow up in a safe simulation - away from the corrupting influences of society.

Then, when a safe, robust, tolerant system has been validated, we let it graduate into the real world: an AI able to work with good people, flag up the intentions of bad people, and tolerate the stupid.

Common sense with humans

AI has achieved amazing things, but it has to pass a "common sense" test in order to always perform reliably and safely in the real world.

Working with, or for, humans requires "common sense", and preferably wisdom.

Conscious AI: minefield

Creating conscious AI is a minefield that could, ultimately, kill humans. It depends on what we mean by consciousness: the dangerous kind is what we have, with emotions and an "I want" mentality. Even if we created a good AI, some humans would keep provoking it until it had to strike back. If by then it is also super intelligent, then we are doomed - just as we made many species extinct.

AI with emotions

Let's ask the AI what the benefits and risks of emotional AI are.

Quantum computing and AI

Asking AI about: Quantum computing and AI.

Smarter AI

AI is now more knowledgeable than the average person! [2023]

AI does not have greater understanding, nor intelligence, nor common sense, nor wisdom.

But in 2029 things will be very different!

Full automation

There's a lot of talk about "AI augmented work forces", in the very near future. However, let's be honest, given an efficient process, it only takes a human to mess it all up. Global corporations will move to full automation - they can't afford the inefficiency of humans.

Common sense sandbox

Common sense comes from the experience of exploring a wide range of commonly encountered scenarios, and making sensible decisions (and remembering them). This might well include lots of trial and error (as a child); but we wouldn't want trial and error from a deployed AI (a sandbox might be required).

In the case of AI, we might have to consider how to provide that experience in more detail (there is no short answer).

AI has "knowledge" but lacks some understanding. It needs to observe a situation then predict the potential outcomes. Then respond in a safe and sensible way. Current AI is too narrow for common sense; this is where general intelligence comes in; which requires a set of (task based) AIs working together.

AI needs to learn about causality.
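As a toy illustration of the sandbox idea (all names and the tiny "world model" here are invented for illustration), here is a Python sketch in which trial and error happens only in simulation, and only actions never observed to be harmful graduate to the real world:

    import random

    class Sandbox:
        """A simulated world: actions have consequences, but only simulated ones."""
        def __init__(self, outcomes):
            self.outcomes = outcomes  # action -> "safe" or "harmful"
        def try_action(self, action):
            return self.outcomes[action]

    def learn_common_sense(sandbox, actions, trials=1000):
        """Trial and error in simulation: remember which actions proved harmful."""
        memory = {}
        for _ in range(trials):
            action = random.choice(actions)
            memory[action] = sandbox.try_action(action)  # a crude causal record
        return {a for a, outcome in memory.items() if outcome == "safe"}

    world = Sandbox({"pour water on laptop": "harmful",
                     "save file before closing": "safe",
                     "touch hot stove": "harmful"})
    print(learn_common_sense(world, list(world.outcomes)))
    # e.g. {'save file before closing'}

A real system would need far richer causal models than this lookup table - observing, predicting outcomes, and updating its model - but the principle stands: the errors in trial and error should stay inside the simulation.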

Fri 4 Aug 13:14:15 BST 2023