Artificial Intelligence (AI) is a key emerging technology, with seemingly limitless capabilities. It has fervent supporters who praise its potential to improve our lives and boost socioeconomic conditions. But it also has critics, including high-profile stakeholders, who are vocal about its associated risks.
Are our expectations and fears justified? And how does the European Union fare in the global race to harness the power of AI so that it serves us well?
To find out, I spoke to Darragh Mac Neill, senior industry expert at the European Investment Bank, who specialises in digital technologies.
What is AI?
AI is the simulation of human intelligence in machines. Its capability can range from quite simple AI, referred to as narrow AI, to advanced AI, referred to as general or super AI. Applications of narrow AI are already widespread and include search algorithms and facial recognition, whereas advanced AI can be found in autonomous vehicles or advanced surgical robots in healthcare.
In some domains, such as gaming, AI is already able to perform as well as a human and even outperform human ability.
How does AI work?
Advanced AI requires vast amounts of data, the quantity and quality of which really drive AI effectiveness. The system then extracts certain features from this data and classifies them to produce an output. In machine learning, some human intervention is needed to tell the machine how to extract features. In deep learning, which is a much more advanced level of AI, the machine can teach itself to extract and classify features.
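The distinction Mac Neill draws — a human choosing which features to extract in classic machine learning, versus a deep learning system learning its own features — can be sketched in a few lines of Python. This is a toy illustration with invented data, not an example from the interview: a human has hand-picked "mean brightness" as the feature and a threshold as the rule.

```python
# Toy illustration of classic machine learning: a HUMAN decides which
# feature to extract (mean brightness) and which rule to apply to it.
# In deep learning, the system would learn these features itself.

# Each "image" is a small grid of pixel brightness values (0-255), invented.
bright_image = [[200, 210], [190, 220]]
dark_image = [[30, 40], [20, 50]]

def extract_feature(image):
    """Hand-crafted feature: mean brightness, chosen by a human."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def classify(image, threshold=128):
    """Simple human-chosen rule applied to the extracted feature."""
    return "day" if extract_feature(image) > threshold else "night"

print(classify(bright_image))  # day
print(classify(dark_image))    # night
```

A deep learning system would replace both `extract_feature` and the threshold with parameters learned from many labelled examples.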
Let’s take the example of autonomous driving. The vehicle receives data visually through cameras, as well as through radar and other sensing technologies, to recognise what’s happening in its environment. Simultaneously, it is continuously receiving and monitoring data on how the vehicle is performing. AI uses and classifies that data to see whether the scenario the vehicle is facing requires some type of intervention, and produces an output that lets the vehicle navigate safely through the scenario it perceives.
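The loop described above — fuse perception data with the vehicle’s own state, classify the situation, decide whether to intervene — can be sketched as follows. This is a deliberately simplified toy model: the sensor values, the stopping-distance rule of thumb and the function names are all invented for illustration.

```python
# Toy sketch of the perceive -> classify -> act loop (invented values).

def perceive():
    """Hypothetical fused sensor reading (camera + radar)."""
    return {"obstacle_distance_m": 12.0}

def vehicle_state():
    """Hypothetical self-monitoring data from the vehicle."""
    return {"speed_kmh": 50.0}

def decide(perception, state):
    """Classify the scenario and decide whether intervention is needed."""
    # Rough stopping-distance rule of thumb: (speed / 10)^2 metres.
    stopping_distance = (state["speed_kmh"] / 10) ** 2
    if perception["obstacle_distance_m"] < stopping_distance:
        return "brake"
    return "continue"

print(decide(perceive(), vehicle_state()))  # brake
```

A real system replaces the hand-written rule in `decide` with a trained model, but the structure of the loop is the same.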
How far from advanced AI are we today?
The timeline of AI is always an interesting topic, but it’s something that nobody really knows with certainty. AI has been over-hyped in the past and, although advanced widespread AI is certainly decades away, there are some current applications. For example, the World Health Organisation recently indicated that AI is being used to understand how the current pandemic is dissipating through the population in some regions. However, this type of application is still quite limited.
Autonomous vehicles are a good example of the challenges that AI is looking to address. These systems have been in development for a long time and represent quite a complex application of AI because of the ever-changing environment in which these vehicles will operate. Autonomous vehicles need to achieve a cognitive ability similar to a human’s, so the challenge is to predict the types of situation that will arise.
What are the benefits and risks of AI?
AI is the result of our continuing quest for, among other things, improved safety, convenience and skills. It has the potential to support higher productivity and economic growth, reduce poverty, increase healthcare quality and, through that, life expectancy.
Autonomous vehicles could change and optimise our whole mobility system, reduce accidents and also the number of vehicles being produced, so there’s a climate component associated with it.
Driving licences would become obsolete and asset utilisation would increase, as we could potentially share autonomous vehicles among multiple users. There’s a lot of waste in the existing system, with idle vehicle fleets or buses driving around empty. If vehicles were autonomous, we would be able to have them on call where there’s a need and, instead of driving, we could be shopping or working from the car, increasing productivity and economic activity.
Regarding the risks, there are a number of popular myths. Many of us have the image of an evil-looking robot carrying a weapon, but in actual fact AI will more likely be operating very discreetly in the background. It is unlikely to “ring the doorbell.” It doesn’t need a physical manifestation; it only needs an internet connection. It’s also quite rational and unlikely to exhibit human emotions such as love, hate or empathy.
The legitimate concern is that AI can either be programmed to do something sinister, if it falls into the wrong hands, or teach itself to do something sinister, as AI could eventually match and then surpass our cognitive abilities and learn from its own experiences.
Returning to the autonomous vehicle example, if we ask our vehicle to take us to the airport as quickly as possible, it might get us there by knocking down pedestrians, breaking speed limits or making us car sick. It will have achieved its objective, but disregarded other decision-making influences that a human driver would have applied on the way.
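The misalignment in the airport example — an objective achieved literally while ignoring constraints a human applies implicitly — can be sketched as a route-planning cost function. This is a hypothetical toy model: the routes, travel times and penalty values are invented for illustration.

```python
# Toy model of objective misalignment (invented numbers): a planner that
# optimises travel time alone picks a route a human driver would reject.

routes = [
    {"name": "motorway", "minutes": 30, "speeding": False, "unsafe": False},
    {"name": "shortcut", "minutes": 18, "speeding": True,  "unsafe": True},
]

def naive_cost(route):
    """Misaligned objective: 'as quickly as possible', nothing else."""
    return route["minutes"]

def aligned_cost(route):
    """The same objective plus the implicit constraints a human applies."""
    penalty = 0
    if route["speeding"]:
        penalty += 1000  # breaking the law is effectively off-limits
    if route["unsafe"]:
        penalty += 1000  # so is endangering pedestrians
    return route["minutes"] + penalty

print(min(routes, key=naive_cost)["name"])    # shortcut
print(min(routes, key=aligned_cost)["name"])  # motorway
```

The point of the sketch is that alignment is a property of the objective we specify, not of the optimiser: both planners optimise perfectly, but only one optimises what we actually meant.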
Some very high-profile stakeholders with vested interests in AI are also concerned about its more controversial applications, such as autonomous weapons, or its potential to disrupt the labour market, increase the wealth gap, or jeopardise political and social stability.
These risks can be mitigated, but they need to be mitigated sufficiently in advance. It is essential to have safety protocols in place to make sure that AI is developed within an established framework.
And that’s another reason why it’s taking longer to address the challenges. It’s not only the application itself but also the regulations that are needed in order to make AI safe.
Are there AI regulations?
This will be the first time humans have created something that’s more intelligent than themselves. Past technology developments are difficult to use as a basis, because we’ve never had anything that could potentially be smarter than us and learn faster than us.
Regulations are required around the idea of AI safety. AI has to be developed with safety protocols in place to prevent negative scenarios from happening. We must be able to regulate, and have standards in place to ensure, that AI achieves its objective while using, to the best of its ability, the same kinds of decision mechanisms that a human would use. We need to start thinking about AI safety now to make sure it’s sufficiently mature when these applications start to become available. A core element of this safety framework is making sure that our objectives and the AI’s objectives are aligned.
Currently there is a global race for AI leadership, and the two superpowers are the US and China, with the European Union falling behind. The European Commission recently published a white paper on AI, which is very helpful, but we are a long way behind the two leaders in terms of investment.
All countries have different policies and objectives regarding how significant this safety framework needs to be, but the aspiration is that this will converge into a global standard.
How is the European Investment Bank involved in AI projects?
The applications that we are looking at are obviously those that have a beneficial environmental and societal impact. For example, we worked with a company, Winnow, which has an AI application to manage and reduce food waste in professional kitchens.
We’re primarily focused on financing the research and development, because it’s something that these companies struggle to get financing for. We’re looking to advance these AI applications to a more mature state, which is typically where the R&D effort is focused.
So will machines turn against us one day?
We should err on the side of optimism. What we are striving for is that AI augments human ability rather than replaces it. As long as the goals and objectives that AI is looking to achieve don’t diverge from humanity’s objectives, it has that potential to help us flourish like never before.