The European Commission is looking to set up ethical guidelines for artificial intelligence, recommending that AI systems be transparent, secure, subject to human oversight and built on reliable algorithms. They should also comply with privacy and data protection rules.
The European Commission has stated that companies working with artificial intelligence need to put accountability mechanisms in place to prevent the misuse of AI. The statement comes as the Commission looks to set up new ethical guidelines for AI, a technology that is being abused by authoritarian regimes.
Among its recommendations, the European Commission stated that AI projects should be transparent, secure, have human oversight and rely on reliable algorithms. They should also be subject to privacy and data protection rules.
This new EU initiative taps into a global debate over when, or even whether, companies should put ethical concerns before business interests, and how tough a line regulators can afford to take on new projects without stifling innovation.
“The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies,” said the European Commission’s digital chief, Andrus Ansip, in a statement.
While AI can help improve healthcare, detect fraud and cybersecurity threats, strengthen financial risk management and tackle climate change, it can also be used to support unscrupulous business practices and authoritarian governments.
In 2018, the EU executive enlisted 52 experts from academia, industry bodies and companies, including Google, Bayer, SAP and Santander, to help it draft the principles.
Organizations and companies can sign up for a pilot phase in June 2019, after which the experts will review the results and the Commission will decide on the next steps.
IBM Europe Chairman Martin Jetter said the guidelines “set a global standard for efforts to advance AI that is ethical and responsible.”