The future of artificial intelligence and human rights


The rise of artificial intelligence threatens to shake the foundations of our society and fundamentally change the way it functions. AI can involve robotics or exist purely as software, and this game-changing technology is developing fast. It has the potential to enhance our way of life, but at the same time it could become a vehicle for discrimination, violations of privacy, new types of weapons and other harms.

It is time to think about an ethical framework that can protect human rights.

On 10 September 2018, the European Commission met with faith-based organisations to discuss AI. Those present included Baha’i, Buddhist, Catholic and Protestant representatives, as well as the Quaker Council for European Affairs (QCEA).

During the meeting, QCEA asked about the regulation of AI to prevent discrimination on the basis of characteristics such as race, religion, sexual orientation, and disability. AI operates on algorithms. If human biases (whether intentional or not) are written into algorithms, this may compromise the objectivity of computers and reinforce discrimination.
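To see how this can happen, consider a deliberately simplified, hypothetical sketch (not a description of any real system): a decision rule that learns only from past human decisions will simply absorb and repeat whatever bias those decisions contained.

```python
# A minimal, hypothetical illustration of bias passing from historical data
# into an automated decision rule. All data and group labels are fictional.

from collections import defaultdict

# Fictional past hiring decisions, already shaped by human bias:
# applicants from group "A" were approved far more often than group "B".
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# "Training": learn the historical approval rate for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict(group: str) -> bool:
    """Approve a new applicant if their group was usually approved before."""
    approvals, total = counts[group]
    return approvals / total >= 0.5

# Two otherwise identical applicants receive different outcomes, because the
# rule has quietly inherited the bias present in its training data.
print(predict("A"))  # True
print(predict("B"))  # False
```

Nothing in the code mentions discrimination explicitly; the unfairness enters entirely through the data the system learns from, which is precisely what makes it so hard to spot and to regulate.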

AI is all about technology that thinks, learns and adapts for itself. What a challenge for regulation!

Another concern is the loss of jobs to automated machines. One study predicted that 47% of jobs could be replaced by AI, whilst a more widely accepted OECD study estimated only 9%. Widespread job loss can devastate communities, and as QCEA said in the meeting, the current momentum of the far-right in Europe makes the next few years a particularly dangerous time for it to happen.

The reality, as expressed by the European Commission, is that many jobs will not be completely replaced; instead, their tasks will be shared between humans and machines. The meeting heard how this raises questions about the nature of work in our society and whether a universal basic income should be considered.

One participant expressed concern about proposals that machines be given legal personality: the humans who create these machines should not escape responsibility for the consequences of their actions. In addition, several faith groups raised concerns about the Global South (in particular, Africa) being left behind and not considered in the development of any global ethical framework.

Other human rights issues involving AI include lethal autonomous weapons, privacy implications (such as facial recognition software), and the threat to freedom of expression if governments were to use AI to police the Internet.

Funding military AI

The European Defence Fund (2021-27) is set to increase EU funding for arms research dramatically, and there is no specific exclusion for lethal autonomous weapons systems. QCEA called for restrictions on the development of AI systems that can take a human life without human control. The ethics committee needs to consider seriously how potential EU funding for the arms industry might be used to develop autonomous weapons.

Europe’s positive contribution

The big players in AI, China and the USA, are leaving an ethical gap that the EU could fill. China has shown little interest in ethical considerations, and the USA appears disengaged at present. Some have said that it’s too late for the EU to take a leading position on AI, as the rest of the world has raced ahead. However, the European Commission doesn’t agree.

Fast progress on AI has been made in a few specific areas, and Europe has the advantage of good research labs and promising AI startup businesses. A European ethical framework on AI would be welcomed in some parts of the world and could become the global standard. For this reason, engagement here in Brussels is a must for organisations interested in AI.

In the absence of global governance, AI is an example of why the European Union adds value. Countries need to work together to have any chance of regulating global technological developments.

Next steps for the EU

A high-level expert group is developing guidelines for the use of AI in Europe, guided by a statement produced by the European Group on Ethics in Science and New Technologies. One of the proposals being considered is that private sector companies would need to sign up to an ethics charter. Private companies are said to be positive about ethical guidelines, but some worry that this could draw attention away from hard law and regulation. At present, the European Commission is clear that it understands the need for both ethics and regulation.

More detailed EU proposals on the ethics of AI will be developed by December 2018, with a consultation process running until March 2019. Given the huge potential impact of AI, this should be a full public consultation, although it may end up being more low-key. In the meantime, anyone can join the EU’s consultation platform on AI, the AI Alliance, where you can post questions and see what officials have written.

Europe’s AI moment is now, but it must not forget its best asset – its founding value of protecting human rights.

 

Isabel Hamilton (American University) and Andrew Lane (QCEA)

 
