

Future Risks and Solutions

This page describes some of the future risks presented by challenges from attitudes, population growth, environmental impacts, science, technology, innovation, etc. It also suggests potential solutions. Although this page focuses on risks, it is worth remembering that science, technology and innovation have delivered fantastic benefits and will continue to do so [assuming that the recommended solutions below are adopted] - see Good Innovation.


Privacy Zero

Privacy exposed under the constant glare of the Internet of Things, Big Data and AI.

We are entering an era where trillions of devices will constantly monitor every parameter of interest. These devices will be connected to the Internet, forming what is called the "Internet of Things". Many such devices already exist and monitor a vast array of parameters, from the global scale down to local societies, and even the biological parameters of individuals. This generates vast amounts of real-time data, called Big Data. AI systems can process this data to learn how systems, and people, work.

AI can then use these learned patterns to predict (possible) future events, such as traffic flows, crime hot-spots, your suitability for a specific job, your health status and risks, and the age at which you will die. It gets even more personal if brain scanners become widely available. With these, AI systems can already tell what you are thinking [to an extent], including the images in your mind while you watch a film (or dream). With the availability of so much Big Data and increasingly smart AI systems, it is possible to derive a person's identity from a set of "anonymous" data, and to join up the dots between data points to derive (probable) actions that you took, even where no direct monitoring took place! In short, at a technical level, privacy is rapidly becoming non-existent!
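To make the "joining up the dots" point concrete, here is a minimal sketch of how "anonymous" records can be re-identified by linking quasi-identifiers (postcode, birth date, sex) against a public dataset. All data, field names and values below are invented for illustration.

```python
# Invented data: an "anonymous" health record and a public register entry.
anonymous_health_records = [
    {"postcode": "AB1 2CD", "birth_date": "1980-05-01", "sex": "F", "diagnosis": "asthma"},
]
public_register = [
    {"name": "Jane Doe", "postcode": "AB1 2CD", "birth_date": "1980-05-01", "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_date", "sex")

def link_records(anonymous_records, register):
    """Join the datasets on quasi-identifiers; a unique match reveals an identity."""
    matches = []
    for record in anonymous_records:
        key = tuple(record[f] for f in QUASI_IDENTIFIERS)
        for person in register:
            if tuple(person[f] for f in QUASI_IDENTIFIERS) == key:
                matches.append((person["name"], record["diagnosis"]))
    return matches

print(link_records(anonymous_health_records, public_register))
# [('Jane Doe', 'asthma')] - the "anonymous" diagnosis is now attributable to a person.
```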

Because the benefits of these systems are so great, it is unlikely that they will be banned. So what is the solution to privacy?

Perhaps the only solution is to explicitly outlaw (AI) applications that compile private data for any use without the explicit authorisation of the individual concerned. It should probably go further and require explicit authorisation for every transaction that uses personal data. So, for example, if a corporation wishes to send your medical diagnosis to three health corporations, then the individual should be asked to authorise each recipient corporation (with three requests, or one request with a check-box for each corporation). The request should be transparent, showing the data and the intended use by each recipient. Showing the data allows the individual to spot potential issues with bias and inaccuracy, and gives them greater control over their personal data. The individual should also be given the opportunity to change the default expiry date for the associated data. In the future, authorisation might be automated (to an extent) by the individual's personal AI agent [if they trust it].
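As a minimal sketch of what such a per-transaction authorisation request might contain (the class and field names are hypothetical, not a real standard), it would show the data itself, each intended recipient and use, and a default expiry date the individual can change:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class RecipientUse:
    recipient: str            # e.g. a named health corporation
    intended_use: str         # stated purpose, shown to the individual
    authorised: bool = False  # one explicit check-box per recipient

@dataclass
class ConsentRequest:
    data_shown: dict                                    # the actual data, so bias or inaccuracy can be spotted
    recipients: list = field(default_factory=list)      # list of RecipientUse
    expiry: date = date.today() + timedelta(days=365)   # default retention, user may change it

    def approve(self, recipient_name):
        """Record the individual's explicit authorisation for one recipient."""
        for r in self.recipients:
            if r.recipient == recipient_name:
                r.authorised = True

    def authorised_recipients(self):
        return [r.recipient for r in self.recipients if r.authorised]

request = ConsentRequest(
    data_shown={"diagnosis": "asthma"},
    recipients=[RecipientUse("Health Corp A", "treatment planning"),
                RecipientUse("Health Corp B", "research"),
                RecipientUse("Health Corp C", "insurance pricing")],
)
request.approve("Health Corp A")
print(request.authorised_recipients())   # ['Health Corp A'] - the other two stay unauthorised by default
```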

This example, taken from another context [patient medical data], shows a Privacy user interface.

Tags: ICT AI Machine People

AI Errors

Inaccuracies and bias in AI predictions.

All (computer) models require calibration and validation. Calibration sets parameter values so that a model functions in a realistic manner. Validation indicates how accurate the predictions of a model are, for a range of test scenarios. No model is completely accurate for all scenarios.

In AI, deep learning is probably the dominant computer model. It is calibrated by using training data with the associated correct answers. This might be susceptible to bias, depending on the extent of the training data used and the "correct" answers supplied. If the AI has been trained from a limited set of scenarios then when it encounters radically new scenarios the uncertainty, or error, can become significant.

The role of validation should be to predict how accurately the system performs for various types of scenario. In the case of AI, this validation role might vary radically across organisations. The best AI systems are very impressive: matching or exceeding the abilities of professionals across various tasks. However, this does not mean that all AI systems are equally good. Some organisations might dedicate insufficient effort to calibration (training) and/or validation. It is also worth considering scenarios where even the best AI systems will fail to deliver accurate predictions.

The following points might be helpful for delivering accurate AI systems:

» Develop good practice guidelines for AI calibration (training) and validation, and perhaps create an international (ISO) standard.

» Create an independent review mechanism to ensure standards are maintained.

» Educate AI professionals in good practice, and perhaps create a qualification.

» Be explicitly clear to all users, and the public, what the limitations of a specific AI system are.

» Encourage approaches that make the internal workings of an AI more transparent - provide explanations that show how each prediction was derived.

» Include an accuracy (or uncertainty) range for each prediction.

» Be aware of the fact that sometimes a scenario can have multiple "correct" answers, depending on the perspectives of each stakeholder involved. Develop AI systems that can handle this.

» Develop AI systems that can learn directly from a wide range of people - allowing them to "understand" a range of different perspectives. [e.g. CRMxI Generator]

» Provide personal AI agents that are aware of the user's unique perspective - so as to select the most relevant answers or predictions.

» Provide a network of independently built, owned, trained and validated AI systems, each focused on the same task. By consulting the predictions of each independent system the overall consensus might be less susceptible to bias. And for scenarios where the predictions vary widely across systems, this inconsistency can be used to indicate significant uncertainties for those scenarios. Depending on the number of systems consulted, this might be a task that is best automated by each user's personal AI agent - with the option of transparency, to see the underlying predictions and uncertainties across the network.
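The last point above can be sketched as follows, assuming (hypothetically) that each independent system exposes a simple prediction function returning a number; the consensus is taken across systems and their disagreement is reported as an uncertainty signal.

```python
import statistics

def consult_network(predictors, scenario, disagreement_threshold=0.2):
    """Query independently built/trained systems and summarise their answers."""
    predictions = [predict(scenario) for predict in predictors]
    consensus = statistics.median(predictions)   # robust to a single biased system
    spread = statistics.pstdev(predictions)      # disagreement across the network
    return {
        "consensus": consensus,
        "spread": spread,
        "low_confidence": spread > disagreement_threshold * max(abs(consensus), 1e-9),
        "individual_predictions": predictions,   # kept for transparency
    }

# Toy example with three stand-in "systems":
systems = [lambda s: 10.0, lambda s: 10.5, lambda s: 18.0]
print(consult_network(systems, scenario={"example": True}))
```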

Tags: AI Process

Bad Ugly AI

Artificial Intelligence used for bad intentions, and ugly AI.

AI will probably become our most powerful innovation, and that means more Good, Bad and Ugly Innovations are likely. As we become more dependent on AI, and it automates more systems, the bad and ugly aspects present significant challenges to our global security.

You can see in the above link that people have never been slow to use new innovations for bad purposes - especially for conflict, crime, weapons and warfare! AI will probably be increasingly abused in this way.

Unforeseen and accidental events might result in some AI systems having a detrimental, ugly, impact on society. This may occur for various, as yet unknown, reasons. Currently, there is much speculation as to what some of these reasons could be:

» The AI just misunderstood its brief.

» The brief given to the AI failed to consider the overall impact (e.g. Prof Bostrom's paperclip scenario).

» Bias or inaccuracy in the AI model.

» Including sentience and emotions in an AI - and then threatening its existence or upsetting it!

With regard to bad AI, we need to adopt solutions that prevent people from doing bad things. One of the dominant tools nationally and internationally is the law. However, the law isn't a guarantee of security - given how many criminals, terrorists, aggressive dictators and wars there currently are. In the future, just a few bad actors with powerful AI could present significant dangers. Similarly, some nations are racing to deploy autonomous "killer robots", which come in many forms. There is a worthy international campaign to Stop Killer Robots, but how effective will it be?

The following might reduce the probability of the accidental, ugly, risks (a small software sketch follows the list):

» The solutions proposed in AI Errors (bias and inaccuracy).

» Include human authorisation protocols, wherever possible.

» Include manual override modes, wherever possible.

» Include Emergency Stop buttons, and Power Off switches.
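A minimal sketch, with invented class and function names, of how the last three points might combine in software: an automated action only runs if a human authorises it and the emergency stop has not been engaged.

```python
import threading

class SafetyWrapper:
    """Wrap an automated action behind human authorisation and an emergency stop."""

    def __init__(self, ask_human):
        self.ask_human = ask_human              # callable(prompt) -> True/False
        self.emergency_stop = threading.Event()

    def press_emergency_stop(self):
        self.emergency_stop.set()               # the big red button

    def execute(self, action, description):
        if self.emergency_stop.is_set():
            return "halted: emergency stop engaged"
        if not self.ask_human(f"Authorise: {description}?"):
            return "halted: human authorisation refused"
        return action()                         # only reached with explicit approval

# Stand-in authoriser for the sketch; a real system would prompt a human operator.
wrapper = SafetyWrapper(ask_human=lambda prompt: True)
print(wrapper.execute(lambda: "door opened", "open the loading-bay door"))  # 'door opened'
wrapper.press_emergency_stop()
print(wrapper.execute(lambda: "door opened", "open the loading-bay door"))  # 'halted: emergency stop engaged'
```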

Tags: AI Robot People Purpose Process

Redundant By AI

You are redundant: AI can do that better and cheaper.

One of the biggest surprises, and risks, to society will be the speed and magnitude of the efficiency impact from total automation. Consider the current dominant philosophy of unrestrained global capitalism and ambitious multi-national corporations: increase effectiveness and efficiency (bigger market share and profits, reduced costs, quicker response times, and consistent quality control). Senior management knows that improving effectiveness and efficiency in this way increases profits, and their own salaries and bonuses.

As AI and its companions (sensors, machines and robots) offer solutions that are more effective and efficient than old-fashioned labour and intellect, corporations will rapidly adopt those solutions and abandon the old ways of working. To be blunt, this eventually means making people redundant at all levels throughout an organisation: manual labour in manufacturing and construction, administration, (middle) management, research, science, creativity, innovation, etc. Greed aside, to be fair to the corporations, they have little choice in the global marketplace - they either adopt all of the latest innovative advantages or die as a result of more efficient competition.

This industrial revolution will differ from every previous one in the most significant way: it will automate our greatest asset, our intellect! This means that those predicting we will adapt, as we have in previous revolutions, could be wrong. However, in some contexts the speed of automation might be slower (e.g. in less developed countries, and in some roles that AI might find very challenging, such as manual labour in maintenance roles out in the real world).

Redundancy not only poses socio-economic challenges, it also erodes a sense of purpose and achievement. For some people, this could have a big impact on their mental health and wellbeing. See: People and Purpose (and Health).

The author suggests that this trend, if it continues in its current form, could be as severe as climate change, but its impact will be felt much sooner: before 2040. In other words, this issue is both important and urgent.

Potential solutions to this have been widely debated (e.g. Universal Basic Income). However, there has been no significant socio-economic change to help society embrace, endure or avoid this impact. Any successful solution will need to be radical! The pace and magnitude of these changes could ultimately be decided by society, globally. Society might also want to reserve emotions and empathy for human-only roles. [See the risk of giving AI emotions in Bad Ugly AI.]

As a closing thought on this difficult challenge, there might be a radical opportunity. It is technically possible to give everyone, globally, access to the same AI capabilities. AI coupled with future innovations could allow self-sufficiency to develop at the local level. [Though we might need to agree on how much land, and resources, an individual needs to survive - and make those available accordingly.]

Tags: AI Robot Machine People Purpose

Cyborg Risks

Cyborgs bring hard and soft risks.

Transhumanism

Transhumanism includes modifications made to humans by the addition of implants and/or biological modifications (e.g. genetic engineering). The risks are similar to those of the cyborgs described here.

Here we consider "hardwired" cyborgs to be people who have an implant within their body. This means some people already fall into this category, such as those with medical implants: pacemakers to regulate heartbeat, cochlear implants to aid hearing, and chips to provide sight. Unlike external aids (such as hearing aids and glasses), these cyborgs are unable to remove their device and might have limited or no control over its functionality; and they cannot turn it off.

A broader definition includes "soft" cyborgs, whose devices can be removed from the body. Examples might include hearing aids, glasses, VR headsets/glasses, and exoskeletons. The user can remove and turn off these devices.

The benefits of these devices are that they can restore or enhance human abilities.

The risks from soft cyborgs are relatively low, although some devices may be used in bad contexts such as conflict, crime and warfare.

Some hardwired cyborgs might face more significant risks, particularly with regard to brain implants intended for connection to the Internet and AI, and brain uploads. For example see: Virtual Abuse, Privacy Zero, AI Errors, and Bad Ugly AI.

Cyborgs with significantly enhanced abilities may also pose additional threats to the rest of society. Today, society struggles to deal with disability, equality and racism. In the future, enhanced cyborgs could greatly increase such challenges! Standard humans could become the new "disabled", facing new inequalities, and racism from "superior" cyborgs.

Tags: TransHuman People

Risks After Science

Science will vastly increase our knowledge and capability, in expected and unexpected ways! That is potentially good and risky.

Science is neither good nor bad: science is an objective research activity. It will vastly increase our knowledge and capability, in expected and unexpected ways - within this decade! The potential risks apply when people, organisations, corporations and nations start to apply this new knowledge: see Good, Bad and Ugly Innovation.

This will give rise to increasingly powerful technologies across all sectors. Some call these "exponential technologies", and their impact can be an order of magnitude larger than the technologies that we use today. This means that the resulting disruptive changes could lead to new issues that we have not encountered before, and we might not know how to deal with them. For example:

» What happens if people become "redundant" in various ways? See Redundant By AI.

» What happens if a new technology can remove a whole forest in one year?

» What happens if a critical accident produces permanent, irreversible, on-going damage to crops, humans or the environment?

It is difficult to identify a robust solution that prevents these risks, because scientific knowledge can be used to do good, and so withholding this knowledge (from risky actors) might be considered unethical and stoke inequality. Also, rogue nations are capable of independently developing their own science programmes. We can continue to use international laws and regulators to attempt to prevent these risks. However, the global scene today reveals that some rogue nations ignore the rules, some corporations and businesses bend the rules, and even the occasional scientist crosses the line. There are opportunities to:

» Strengthen the effectiveness of international and national regulators.

» Increase data collection and transparency.

» Develop policies and procedures for safer deployment and application of scientific findings.

» Adopt precautionary approaches, by default.

» Encourage ethical consumer purchasing power, to provide disincentives to those that adopt bad or ugly practices.

» Use online data, science, education, and structured debate to facilitate democratically robust, global, decision making [e.g. GNCP] - independent of a minority of potentially biased leaders focused on greed, power or ignorance.

Tags: Science ICT People Purpose Process

Biotech Risks

The manipulation and utilisation of nature, and the creation of synthetic biotech, will take us into unknown territory.

This is a subset of science and so the points made in Risks After Science apply. More specifically, two aspects of biotech are considered here:

» adjusting or adding to natural organisms (for medical reasons, or to improve attributes); and

» creating new synthetic entities ("lifeforms" or technologies) for new benefits and challenges.

Genetic Engineering

The debate about the safety, or otherwise, of genetic engineering sometimes assumes that if one modification is proven to be safe, then all similar modifications in that category will also be safe. This might not always be the case. For example, there have been concerns that the CRISPR technique might not consistently guarantee the required accuracy. There is a risk that the industry (and regulator) become complacent.

Medical advancements are widely welcomed, but sometimes, despite testing, things go wrong (e.g. Thalidomide). We will see an increasing set of medical treatments become available, and if testing is insufficient, risks could arise. There are two types of risk: controllable and uncontrollable. A medication administered in a conventional way might give rise to adverse acute symptoms, which can be controlled by withdrawing the medication. This is a relatively manageable risk compared with uncontrollable scenarios, such as modifying germ-line DNA and finding out years later (after widespread dispersion) that the modification has dangerous irreversible consequences, propagating from generation to generation (e.g. in human, animal or crop species).

There are significant issues associated with uncontrollable scenarios: potentially unstoppable growing, global, impact; and insufficient funds to stop the impact or compensate those affected. With regard to the latter point, there appear to be no explicit national or international emergency funds to address specific uncontrollable scenarios. Instead, we typically rely on the offending corporation. They are sued for damages and/or expected to resource the cleanup or recovery operation. However, if a future widespread, uncontrollable, disaster takes place the corporation will have insufficient funds to mitigate the impact or compensate victims - instead the corporation will collapse; leaving the rest of society to cope with the disaster. Note that it is possible that such a disaster might be irreversible!

The creation of synthetic "biological" entities could pose very significant risks. We have no reference models to predict with confidence what the outcomes will be. For example, in the laboratory, DNA has been synthesised with an extra "letter" (molecule) not found in natural DNA. Future uses are as yet unknown, but we can speculate that such approaches might be used to create "lifeforms" that have technologically useful properties [e.g. "living", self repairing and growing, quantum computers providing advanced levels of artificial intelligence]. What happens when this hybrid bio-technology entity encounters natural organisms in the environment? What happens if the entity evolves and becomes uncontrollable? What happens if a widely dispersed AI adopts an aggressive attitude to humans, or nature generally? What happens if we accidentally create "grey goo" that consumes all natural resources and organisms?

See also transhuman risks: Cyborg Risks.

Genetic Engineering

Adopt the precautionary principle and robustly test every genetic modification. Avoid assuming it is safe because a previous, similar, modification was safe.

If history has taught us anything, it is that we make mistakes; and so it is wise to assume that more mistakes will follow. However, biotechnology needs to be treated differently to other risks because it can be mobile, dispersed, widespread, self-propagating, evolving, destructive, and potentially unstoppable! So instead of working out how to stop it once a problem has been detected, we should build in mandatory safety systems. For example, for a biotech entity to function it might (see the toy simulation after this list):

» have to get its energy source from an externally provided electromagnetic field (not present in nature)

» receive a periodic or constant artificial chemical enabler

» receive a periodic or constant (digital) authorisation signal, and/or

» have an inbuilt timer that guarantees a maximum limit to its lifetime.
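As a toy simulation (all parameters invented), the last two safeguards above might look like this: the entity only remains active while a recent authorisation signal has been received, and it shuts down permanently once its built-in lifetime expires.

```python
MAX_LIFETIME_HOURS = 72          # invented hard limit built into the entity

def entity_active(hours_since_creation, hours_since_last_authorisation,
                  authorisation_window_hours=1):
    """Return True only while both safeguards are satisfied."""
    if hours_since_creation > MAX_LIFETIME_HOURS:
        return False             # inbuilt timer expired: permanent shutdown
    if hours_since_last_authorisation > authorisation_window_hours:
        return False             # authorisation signal missing: shutdown
    return True

assert entity_active(10, 0.5) is True
assert entity_active(10, 5.0) is False    # lost the authorisation signal
assert entity_active(100, 0.5) is False   # exceeded maximum lifetime
```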

Avoid too much reliance on laws and regulations - accidents will still happen.

Tags: Science ICT AI NanoMatlBio Robot TransHuman Machine Process

Land

The area and quality of land available is critical to survival and quality of life.

Some people might take land for granted, but it is a vital finite resource. It provides an area to capture sunlight and rainfall, supports ecosystems and agriculture, provides habitat, and underpins the infrastructure and activities of human civilisation (buildings, transport, utilities, resources and waste). We should also remember that the land is shared amongst many species, not just humans. It seems that the planet is not big enough for all of its species, given the rate of human activity (population growth, construction, mining, deforestation, agriculture, manufacturing, energy utilisation, water consumption, transport, waste generation, and pollution).

Many of the above activities have a detrimental, long-term, impact on the land and its ability to support ecosystems. Some changes in land use make the risk of flooding worse. Some activities accelerate soil degradation and erosion.

This raises the question: how much land is required per person to, sustainably, have a good quality of life (now and in the future)?
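A rough back-of-the-envelope calculation (approximate 2020-era figures: about 149 million square kilometres of land and a population near 7.8 billion) shows how little land there is per person before anything is reserved for other species or discounted as unusable terrain:

```python
land_area_km2 = 149_000_000      # approximate total land area, including deserts, ice and mountains
population = 7_800_000_000       # approximate 2020 population

hectares_per_person = land_area_km2 * 100 / population   # 1 km^2 = 100 hectares
print(round(hectares_per_person, 1))                     # ~1.9 hectares of land per person
```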

It has been claimed that for everyone to have a good quality of life we would need a bigger planet! Yet the human population continues to grow, whilst destroying good quality land and ecosystems. You could be forgiven for thinking that our global mindset seems focused on self-destruction via unsustainable population growth, increasing demand for food and other resources, and more transport (and global shipping); whilst destroying the land's ability to support our needs.

Out of all the above activities, do any of them significantly help to nurture this vital finite resource? Perhaps not.

There is no need here to reinvent the wheel: there are many studies identifying what we need to do in order to preserve the land and its ecosystems. In addition to implementing their recommendations, there is the need for a paradigm shift. Given the finite nature of the land available and what it can sustainably support, we should ask:

» What quality of life are we aiming for in the entire human population; and how much land does that require?

» How much of the land are we willing to dedicate to other species; and how does that affect the point above?

» How do we restore all of the land, including deserts, to good quality ecosystems?

Note that the associated calculations are difficult and complex. They need to include all human activities that require any form of land use. They also need to anticipate what innovations might be available in the future to help achieve these goals. For example: could some of the land used today for renewable energy sites (solar, wind and hydro) be reclaimed in the future for other uses [e.g. by deploying stratospheric wind power, and orbital solar power]; could food be grown in tall towers within urban areas, or underground; and could mining and its refining activities take place completely underground, without the need to bury vast areas of land in heaps of mining waste?

» Forest Solutions for the 21st Century

Tags: Transport Energy Environment

Food Security

Food security: availability, affordability, nutrition and quality.

Food is critically important to human survival, health and wellbeing. However, even today some people around the world struggle to meet these fundamental needs; and with the current rate of land degradation, population growth, increasing demand, and climate change, things could get worse without significant intervention.

Restoring the land to its optimal state, in a sustainable manner, represents part of the solution.

Adopting efficient agricultural practices, and growing foods that deliver high quantity (and quality), means more people can be fed at a lower environmental (and financial) cost. (It has been reported that raising livestock for food is more demanding of the environment, compared to crops.)

A strategy to deal with the other related challenges would be useful (e.g. population growth, increasing demand, food waste, pollution, and climate change).

Promote circular economies that facilitate local activities for growing, distributing and consuming food; along with the recycling of food and crop waste. This helps to boost local sustainability, and reduces the global environmental impact (from shipping, logistics, and long supply chains).

Innovative technological solutions might help in the future. For example:

» Drones and robots are able to monitor environmental conditions, plant health and growth rates, and provide good growing conditions.

» Indoor LED-powered hydroponic farms are able to grow foods under controlled conditions, with limited environmental impact - when renewable energy and recycling are used (a small monitoring sketch follows this list).

» Progress has also been made in demonstrating lab grown foods and meat. Such processes have the potential to require fewer input resources (e.g. land area and water) and result in a lower environmental impact (e.g. less pollution and fewer contributions to climate change). [However, we should remain mindful that some aspects of the food industry currently deliver low quality processed foods; and so regulations might be required to ensure good quality and transparency throughout this emerging industry.]
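As a minimal sketch of the controlled-conditions point (the target ranges below are invented for illustration, not horticultural guidance), an indoor farm might continuously compare sensor readings against target ranges and raise alerts:

```python
TARGETS = {
    "ph": (5.5, 6.5),
    "water_temp_c": (18.0, 24.0),
    "nutrient_ec_ms_cm": (1.2, 2.4),   # electrical conductivity of the nutrient solution
}

def check_readings(readings):
    """Return any parameters that are missing or outside their target range."""
    alerts = {}
    for name, (low, high) in TARGETS.items():
        value = readings.get(name)
        if value is None or not (low <= value <= high):
            alerts[name] = value
    return alerts

print(check_readings({"ph": 7.1, "water_temp_c": 21.0, "nutrient_ec_ms_cm": 1.8}))
# {'ph': 7.1} -> pH too high, adjust dosing
```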

Tags: NanoMatlBio Environment People

Waste Of Resources

What a waste: In a sustainable world, waste is a resource to be reused and recycled.

On a finite planet resources are finite. Dumping waste hastens resource depletion, and as new sources get harder to find and extract, the costs will increase. Some waste also contributes to pollution and environmental degradation. So there are good reasons to stop dumping our waste, and opportunities to reuse or recycle it.

Existing guidelines for preventing and managing waste suggest this hierarchy:

» Reduce your consumption of products and food

» Reuse products in another way, when their primary function has ended, and

» Recycle waste into new resources, when it can no longer be reused.

[Currently, there are also two lower rungs on this hierarchy but they are to be discouraged as they are not sustainable, and they create pollution. These are: Recover energy from the waste (e.g. incineration to produce heat and power); and Landfill (a wasted resource).]
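The hierarchy can be expressed as a simple decision rule; this sketch just encodes the ordering above (the yes/no inputs stand in for real product and waste assessments):

```python
def preferred_option(can_reduce_consumption, can_reuse, can_recycle):
    """Return the highest rung of the waste hierarchy that applies."""
    if can_reduce_consumption:
        return "Reduce"
    if can_reuse:
        return "Reuse"
    if can_recycle:
        return "Recycle"
    return "Recover energy / landfill (discouraged: not sustainable)"

print(preferred_option(False, True, True))   # 'Reuse'
```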

Adopting the philosophy of Zero Waste allows nations to develop sustainable waste / resource goals. We cannot reuse (or recycle) 100 percent of the resources that we use, but we can make continuous improvements over time. We can also rate the sustainability of processes and products using this Sustainability Scale.

Zero Waste encourages proactive thinking in the product design stage. For example:

» Design robust, long life, products [perhaps mandating 10 year guarantees].

» Build modular systems, so that they can be easily maintained and upgraded.

» Make it easy for the product to be recycled at the end of its life.

Also, avoiding inefficient manufacturing processes and toxic materials will result in greater sustainability.

Tags: Environment Process

Cities

Cities: good, bad or ugly?

Many people live in cities, some in mega-cities with over 10 million people. These high density urban areas can offer benefits of scale, some efficiency savings, and access to a wide range of opportunities, facilities and leisure activities - but it is not all rosy.

The contemporary consensus has predicted that an increasing proportion of the global population will live in cities. But is this prediction accurate?

We see significant issues in cities across the world: poverty, air pollution, poor health, disease, crime, addiction, violence, murder, etc. Since the 2020 pandemic, many people have learnt that they do not need to be in a city to work, and some have expressed a desire to leave the city.

Current and future technologies will mean that the relative benefits of living and working in a city will reduce. For example, physical proximity will become less important with the introduction of advanced virtual and augmented reality, and avatars. [And AI and robotics probably mean fewer opportunities for work anyway: Redundant By AI.]

In future, will more people believe that living in suburban or rural locations offers a cleaner, safer and more relaxing environment - with better health and a sense of wellbeing? A few (pioneers) are already advocating mobile lifestyles and habitats: visiting beautiful destinations on land and at sea.

So other than the issues outlined, what is the risk? If the above prediction, about city growth, is inaccurate then long-term planning goals for cities [e.g. high speed inter-city connections, and disaster prevention measures for rising sea levels] might be misplaced, and the opportunities for investment in other areas might be lost.

When predicting the future, identify conventional assumptions that might no longer be correct and substitute new paradigms as appropriate. For example: technologies between 2030 and 2070.

Tags: Environment People

Pollution

Pollution damages health, reduces lifespans, and deteriorates the environment.

Unlike other species, humans have polluted land, water and air across the planet. This damages health, reduces lifespans, and deteriorates the environment. Millions of people are exposed to unsafe levels of pollution.

Some of these pollutants stay in the environment for a very long time, and they include persistent organic pollutants, atmospheric carbon dioxide, heavy metals, and radioactive material. Emissions of these drive up the cumulative environmental load and the associated risks. It also means that even if all emissions of these were to stop tomorrow we would still have to face the consequences of this pollution for many years. Some of these pollutants have lifetimes (or half lives) spanning hundreds or thousands of years!

Not So Smart Pollution?

A new type of pollution might be on its way... It has been claimed that the Internet of Things might result in trillions of micro-devices and micro-sensors, perhaps becoming as small as dust particles - this has been called smart dust. Would this be bio-degradable or another environmental pollutant?

Other pollutants have much shorter lifetimes, but they can still result in significant adverse impacts. For example, most combustion processes produce pollutants: carbon monoxide (deadly in high concentrations), hydrocarbons (some carcinogenic), nitrogen oxides, particulate matter, and carbon dioxide (driving climate change). High flows of vehicles with internal combustion engines mean that many roadside locations in urban areas have high levels of some of these pollutants; and hot-spots typically exceed safe levels.

Combustion processes are also used in energy generation, heating, cooking, industry, concrete production, and waste incineration. Air pollution filters and catalysts are used in some contexts to reduce the amount of pollution emitted, and large chimneys help to disperse those emissions over a large area - thus reducing concentration levels. The aim is to reduce pollution to "acceptable" levels; but nothing is perfect and some pollutants (e.g. carcinogens) have no safe threshold [though the risk is lower at lower exposure levels].

Beware dumping at sea. And have we learnt enough to safely conduct mining at sea?

Land and water are also polluted by industrial waste, mining waste, agricultural chemicals and fertilisers, litter, and potentially leakage from waste landfill sites. Rivers, seas and underground aquifers are susceptible to pollution. This means both the environment and our food chain are put at risk.

So in summary our actions potentially expose us to toxins via the air we breathe, the water we drink, and the food we eat. Given the cumulative nature of some of these pollutants, this is not sustainable and the risks can increase.

Set and enforce pollution regulations.

Implement effective pollution monitoring networks (perhaps using the Internet of Things).
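A minimal sketch of what such a monitoring network might compute (the locations, readings and limit are invented; real limits must come from the applicable regulation): flag any location whose average reading exceeds the limit.

```python
from statistics import mean

def hotspot_alerts(sensor_readings, limit):
    """sensor_readings: {location: [recent concentration values]}; flag averages above the limit."""
    return {
        location: round(mean(values), 1)
        for location, values in sensor_readings.items()
        if values and mean(values) > limit
    }

readings = {"junction_a": [48, 52, 61], "park_b": [12, 14, 11]}
print(hotspot_alerts(readings, limit=40))   # {'junction_a': 53.7}
```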

If we abandon combustion processes in favour of alternatives (e.g. renewable electric power) and prevent waste then our pollutant emissions should reduce, and that is good for our health and the environment. But perhaps we should also be mindful that our actions can still have adverse reactions, and so we might need to reduce some, or all, of the following: energy consumption; the consumption of products, materials and food; transport; and the population.

The following illustrate future solutions and a method to express "sustainability":

» Solutions to Traffic Pollution in the 21st Century

» Energy Solutions for the 21st Century

» Sustainability Scale

PS: Enforce noise pollution regulations for good health and wellbeing.

Tags: Transport Energy Environment People

Water Risks

We need this valuable resource for drinking, washing, food production, cooking, manufacturing, construction and commerce. (We also use it in our leisure activities.)

Water: drought and flood. Ironically, we increasingly face the significant climate-driven challenge of having too little rainfall (drought), and too much rainfall too quickly (flood) - sometimes in the same place in the same year!

In addition to the rate of rainfall, floods are also made more likely or more severe due to human activity:

» removing forests and vegetation

» laying non-porous surfaces (concrete and tarmac) over large areas, e.g. roads and urban dwellings, and (not surprisingly)...

» building on flood plains.

Water runs off these exposed surfaces more quickly, increasing the load in rivers and flooding flood plains.
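One common approximation of this effect is the Rational Method, which estimates peak runoff as Q = C x i x A (runoff coefficient x rainfall intensity x catchment area). The coefficients below are illustrative order-of-magnitude values, not design figures:

```python
def peak_runoff_m3_per_s(runoff_coefficient, rainfall_mm_per_hr, area_km2):
    """Rational Method: Q = C * i * A, with unit conversion (mm/h * km^2 -> m^3/s)."""
    return runoff_coefficient * rainfall_mm_per_hr * area_km2 / 3.6

storm = dict(rainfall_mm_per_hr=30, area_km2=2.0)
print(round(peak_runoff_m3_per_s(0.2, **storm), 1))   # woodland, C ~ 0.2 -> ~3.3 m^3/s
print(round(peak_runoff_m3_per_s(0.9, **storm), 1))   # tarmac/concrete, C ~ 0.9 -> 15.0 m^3/s
```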

When this resource reaches critically low levels in undeveloped areas, people become seriously ill or die because they are forced to use sources of unsafe (polluted) water, or because crops fail to grow and they face starvation. In some countries this has been happening for decades; and it is predicted that climate change will probably make this worse. This might drive people into urban areas, and contribute to migration. Given hotter and drier regions, people are likely to abandon the most challenging environments and migrate to more temperate regions, unless there is another solution.

Even developed nations are facing the dual challenges of increasing demand for water from growing populations, and unusual rainfall patterns driven by climate change. Reservoir capacity is rarely expanded to cope with increasing demand - instead water restrictions are imposed when supplies dwindle. Also, opportunities to collect water during heavy rainfall are missed - see floods, above.

There are lots of studies and recommendations on how we can prevent floods by better land use and wiser urban planning, e.g.:

» Plant more trees and vegetation upstream to slow water runoff

» Use flood plains to capture excess water flow (e.g. wetlands), preventing downstream flooding

» Discourage urban surfaces and roads that allow runoff - instead capture the rainfall, or slow runoff rates

» Avoid building on flood plains!

For enhanced water security, consider building more reservoir capacity and water infrastructure.

Consider the deployment of smart water infrastructure for more efficient water management (predictions, floods, storage, distribution, and waste water reuse / recycling). [See the Google Doc below.]

Innovative ideas:

» How to collect more water during heavy rainfall, and use it to irrigate farms during times of drought - Water and Irrigation Innovations (Google Doc)

» Turn barren land and deserts green. Deserts could become forests, agricultural zones, and mega-solar-farms. The solar energy allows abundant quantities of clean water to be collected from the seas, to irrigate the land and provide water to local populations; and excess energy is exported.

Tags: Environment People

Conflict

Conflict: A constant danger!

Conflict is often present, and it takes the form of arguments, protests, violence, crime, terrorism, and war!

As illustrated in Global Scenes:People, conflict is typically driven by:

» Insufficient resources to satisfy needs

» Demanding "wants" and greed

» Mental health issues, and drug addiction

» Conflicting ideologies, or

» Status seekers / leaders.

Its impact is destructive to social harmony and wellbeing, and/or threatens survival!

Unless conflict is moderated, or removed, it presents the risk of destroying every global solution. We might call this "Priority Zero", because while there is significant conflict there is zero chance of implementing all of these global solutions. For example, violence, crime and war destroy resources, infrastructure, buildings, utilities, services, technologies, civilisation, and/or human life. Without these things we cannot make progress.

So we must first solve conflict. This has never been done before: we have never removed all conflict. You might want to check right now:

» how many criminals are in prison

» the number of daily crimes and their cost, and

» how many wars there currently are.

A set of grim statistics, and an indication of the size of the challenge.

Not forgetting nuclear weapons!

However, given the radical innovations that will materialise in this decade, and beyond, the impact from significant conflict could become much more devastating! For example, see Bad Ugly AI and Biotech Risks. Unfortunately, risks exist at all levels from rogue individuals, to fanatical groups, and warring nations.

Arguments are often fuelled by false information, myths, and inaccurate facts. We can address this, and improve the quality of debate, by using scientific findings as the basis of our facts. Educating people about the fundamentals of science and its robust research methodology would also complement this solution. We might also promote and encourage open science and citizen science.

Opposite

What is the opposite of conflict? Perhaps it is love, kindness, generosity and tolerance. If we cannot manage all of these initially then start with tolerance; then develop the others.

Despite the critical and imminent need to solve this, it is not clear if society can even agree on a solution. Indeed, even debating potential solutions might provoke conflict!

A huge amount has been written and debated on topics associated with conflict and its resolution. The reader may choose to digest some of that information.

However, given that we are still no nearer an actual solution [remember those stats], the author has chosen a more innovative approach that aims to be fair to all, concise, and logical.

But before we even get started with any changes, conflict will arise. Apathy, and fear of change, prevent some from accepting change. Others might ask "what is in it for me?" or say "that is going to make me worse off" - a valid question and point of view. Meanwhile, those in present danger (e.g. thirst, hunger, oppression, and war) might be more receptive to new ideas - but are unlikely to be key decision makers for global change. [It is not easy, is it?]

So we face two challenges: theorising a solution to conflict; and implementing it.

See: Global Network of Cooperative People

Tags: People Purpose Space

Global Network of Cooperative People (GNCP)

Does the solution to everything start here?

The theory:

» Concentrations of decision making power lead to inequality (see Global Scenes: People: Wants: Status).

» In a fully effective democracy everyone would be aware of the facts, and understand them.

» For optimal outcomes, we need facts provided by our most robust research methodology - that is science.

» By considering all human perspectives, we can develop policies that are fair to everyone.

The proposal:

[1] Be fair to all by giving everyone the opportunity to actively participate in full and effective democracy.

[2] Information will be clear and concise, so that everyone can participate and understand the policies being debated and voted on.

[3] Logic and science will be used to support the process.

[4] Empathy, emotions, and perspectives can be expressed and understood in the (online) debates.

[5] New policies will be fair to all, generic and concise.

Point 1 means giving everyone equal access to the democratic system, and bypassing concentrations of decision making power (e.g. politicians, leaders of nations, business leaders, and judges). It does not necessarily mean making traditional decision makers redundant - they might have desirable expert knowledge. So instead of being decision makers, they may be retained as advisers, if the people wish it so.

With regard to point 2, online education may support this role [perhaps via AI, as illustrated in the "Society Investment Credits" section of Education 2049: Looking forward to the great journey].

Point 5 probably means that neutral language is used, referring only to a generic "person" (e.g. no reference to race, sex, age, or potential future transhumanist status) - thus embedding equality into all policies by definition.

The implementation involves:

» Inviting people around the globe to volunteer as members of a Global Network of Cooperative People (GNCP).

» Building a user-friendly online platform - containing scientific facts, education, policy proposals and debating forums, scenario modelling and predictions, and voting (a minimal data-model sketch follows this list).

» Increasing global access to the Internet.

» Initial experimentation in a few policy areas [e.g. sustainable consumption, or perhaps some of the topics on this page].
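As a minimal sketch of the core data such a platform might track (all class and field names are hypothetical, not a specification), each proposal carries a concise summary, links to supporting evidence, and one revisable vote per member:

```python
from dataclasses import dataclass, field

@dataclass
class PolicyProposal:
    title: str
    summary: str                                             # clear and concise, per point [2]
    supporting_evidence: list = field(default_factory=list)  # links to scientific sources, per point [3]
    votes: dict = field(default_factory=dict)                 # member_id -> True (for) / False (against)

    def cast_vote(self, member_id, in_favour):
        self.votes[member_id] = in_favour    # one member, one (revisable) vote

    def tally(self):
        in_favour = sum(1 for v in self.votes.values() if v)
        return in_favour, len(self.votes) - in_favour

proposal = PolicyProposal("Sustainable consumption", "Prefer suppliers that meet the sustainability scale.")
proposal.cast_vote("member-001", True)
proposal.cast_vote("member-002", False)
print(proposal.tally())   # (1, 1)
```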

Development of the platform might progress in stages, and it probably makes sense to conduct this as an open source project. The educational resources would probably evolve over time, perhaps as each new policy topic is debated. The project may make use of open public data, and make its own information publicly available too. The scenario modelling and predictions represent the most complex functionality and so this might appear much later.

During the experimentation phase, and maybe beyond, the policies arising from GNCP would run in parallel with conventional governance. So rather than being a mandatory system it would be voluntary. Members might gain benefits from their policies, and/or might influence global actors - for example, a sustainable consumption policy might encourage some suppliers to adopt the policy because it gives them access to a global network of customers, while those that do not adopt it lose customers.

The solution might not be easy to achieve, challenges will arise, and it might not be obvious now as to what future policies should be - that is all left to the GNCP to decide... Example policies might include:

An integrated policy framework that considers relative priorities (e.g. needs versus "wants"), costs and benefits, and the facilitation of a viable cohesive socio-economic strategy that is sustainable.

A policy that encourages and supports the development of sustainable local community infrastructure for energy, shelter, water and food. Its implementation might lead to the production of open source designs for technology, software and (3D printed) hardware and accommodation. It might also adopt these 21st century solutions for:

» Forests

» Energy, and

» Traffic Pollution.

The space race, and plans for new colonies on the Moon and Mars, present an additional ideal opportunity to develop the GNCP and its policies.

Tags: People Purpose Space

Virtual Abuse

Human minds used and abused in a realistic virtual reality.

Many years in the future [or is it the past?] ... Humans built a "perfect" reality simulator and many people eventually joined it willingly because it was, apparently, so much better than normal life. Everyone had god like powers, and unlimited lifespans. Volunteers would have their brains [or rather minds] uploaded into the "Sim Platform" of their choice.

What volunteers did not know was that some simulation platforms, run by powerful and persuasive corporations, had hidden issues. Some set demanding work schedules, similar to slave labour. Some system administrators found it entertaining to punish and abuse the minds under their total control. They did so in the knowledge that no one in the outside world would ever know. When regulators asked participants about their quality of life, the platform could make its minds think and say anything. It was the ultimate PR dream for ruthless corporations and dictators. Much of the process was controlled by AI systems, and that led to cases of accidental abuse from AI bias, experimentation, and systems that went rogue. For some volunteers, though, this abuse was not always apparent, as the platform could stimulate their "happy neurons" after a hard day of work, abuse, or experimentation.

Similar, but more subtle, issues arose with some people in the real world that adopted brain implants to connect them to the Internet, AI and corporate systems.

So what is the solution to these potentially terrifying scenarios? Well, as the AI learnt in the old film War Games: "the only winning move is not to play". Given that regulation has always been imperfect, it might be wise to opt out of any system that gives total control of your brain, or mind, to another person, corporation, or AI system.

But even today or in the very near future, people might unexpectedly find themselves awakening in an experimental virtual environment. Currently two large brain scanning medical projects are mapping brains neuron by neuron and storing brain maps in computers. Although these medical studies aim to do good, the experimentation process of running these brains in a computer simulation to learn about the human brain might, accidentally, place those studied minds in the torments of Hell. Imagine awakening with no sensory input (no breathing, heartbeat, sight, hearing or other vital inputs which might as yet be unknown).

The solution to this is, at the very least, to make "no computer simulation of my mind" the default option for those who volunteer their brain to medical science on their death. Only the brave who understand the risks should opt in to experimental computer simulations. They should understand that once they opt in, there is unlikely to be an opt out!

Tags: ICT AI TransHuman People Purpose

Distant Goals

Distant goals get little support.

Long term goals, and those applicable elsewhere (e.g. the other side of the world), face a challenge. Some people will not support them if the benefits are not immediately obvious to them. We see similar challenges with climate change deniers, anti-vaccination attitudes, and those refusing to wear masks in the middle of a pandemic. We can inform and educate people about the facts and the science; and some might see the light. However, it is not enough to get full support. Without full support the commitment to goals will be weakened, and the pace of progress slowed. For some critical events, this might mean that the human population fails to avoid disaster! The doomsday clock is ticking, irrespective of what some people might think.

So where information and education fails, and perhaps even where it succeeds, we could boost the support and enthusiasm for distant goals by including additional short-term benefits relevant to the community supporting those goals. For example, if a project will not satisfy its goal for ten years, then add additional project goals that bring short-term benefits to the supporting community.

For example, a project deploying a renewable energy system in a distant location could be adapted so that the local, goal-supporting community benefits economically by constructing modules for the system; they could also create additional modules to build a local system of their own. This quickly demonstrates socio-economic and environmental benefits locally, and achieves the distant goal.

Tags: People Process Purpose

Joined Up Thinking

Despite the mantra being old, we can still improve efficiency and effectiveness if we do "joined up thinking".

Create integrated strategies across (government) departments.

Tags: People Purpose Process

Wisdom

We would all benefit from more wisdom, and here are a few points that might be worthy of consideration:

» If we all had more love and tolerance then perhaps we would have fewer global problems.

» The "perfect" solution might not be that perfect, as illustrated in this very short story.

» "We cannot be absolutely sure about anything!" - and I am not even sure about that :-)

Tags: People Purpose

Free Support For You

If you are interested in developing any of these solutions then free help is available here. This will answer your questions, elaborate on any of the solutions, and provide advice as you develop your own solutions. Just ask.

Tags: Science ICT AI NanoMatlBio Robot TransHuman Machine Transport Energy Environment Space People Purpose Process

