EU publishes draft guidelines for “trustworthy” AI

The EU Commission’s High-Level Expert Group on AI recently published its draft Ethical Guidelines for Trustworthy AI. These Guidelines, which will be primarily relevant to those involved in the design, development and deployment of AI systems in the EU, are open to feedback until 18 January 2019.

The Guidelines follow the Commission’s release of its coordinated plan to boost AI-based innovation in Europe and are a significant step towards fulfilling that plan’s commitment to clear ethics guidelines for the development and use of AI in the EU.

The Guidelines propose a set of ethical principles for AI, expand those principles into 10 requirements for trustworthy AI, and suggest methods for implementing those requirements. They also set out a series of self-assessment questions for identifying whether the principles and requirements are being met.

Whilst the Guidelines, which will be finalised in March 2019, will not be legally binding, institutions will be invited to endorse them voluntarily, and they may therefore carry considerable soft-law weight. The AI Expert Group will separately consider what regulation may need to be revised, adapted or introduced and will publish its recommendations in May 2019.

What is considered to be AI?

The Guidelines define AI as:

“systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal.”

AI developers and users will need to assess carefully whether their systems fall within this definition. This will no doubt prove difficult in many cases, given the questions of interpretation the definition raises (for example, whether a particular system pursues a “complex goal” or performs “reasoning”).

What is Trustworthy AI?

Trustworthy AI has two components:

  1. It should be human-centric and respect fundamental rights, applicable regulation and core principles, ensuring an ethical purpose; and
  2. It should be technically robust and reliable.

What are the 5 key ethical principles?

  1. Do good (which can include commercial prosperity).
  2. Do no harm (which is defined very broadly to include psychological, financial and social harm and which can raise difficult questions in practice, such as for an AI vehicle which faces a decision on whether to injure a pedestrian or another road user).
  3. Preserve human agency (including a right for human users to opt out and to have knowledge of their interaction with an AI).
  4. Be fair (which includes protecting against discrimination and which should include effective redress if harm occurs).
  5. Operate transparently (which should include measures to ensure informed consent by users and measures to ensure accountability).

What are the 10 requirements?

To realise trustworthy AI, the 5 ethical principles are expanded into the following 10 requirements:

  1. Accountability
  2. Data governance
  3. Design for all
  4. Governance of AI Autonomy
  5. Non-discrimination
  6. Respect for human autonomy
  7. Respect for privacy
  8. Robustness
  9. Safety
  10. Transparency

Each requirement raises complex issues and, whilst existing regulation such as the GDPR may adequately cover some aspects of these requirements, the accompanying commentary suggests the AI Expert Group may well identify areas where this is not currently the case.

What methods can achieve Trustworthy AI?

The Guidelines recommend technical and non-technical methods to achieve the requirements, including a recommendation that organisations appoint a person (internal or external) or committee to advise on ethical issues.

The Guidelines also urge organisations to adopt codes of conduct to demonstrate their acceptance of ethical AI use (including the option of incorporating the Guidelines).

How do you know you are compliant?

The Guidelines include a non-exhaustive list of questions that relevant stakeholders can use as a self-assessment guide.

Regulation at the EU level?

We are expecting more AI-related publications from the AI Expert Group over the next few months, including:

  • final guidelines in March 2019; and
  • policy and investment recommendations in May 2019.

This latter publication will feed into the European Commission’s assessment as to whether existing legislation is fit for purpose.

Separately, the Expert Group on Liability and New Technologies will assist the European Commission in drawing up guidance on the Product Liability Directive. The results of this work will be particularly relevant to the European AI sector, given the role of the product liability regime in providing for extra-contractual liability and the ongoing debate as to whether AI systems already fall within its scope.

All this activity suggests a real possibility of future AI-specific regulation at the European level – we will keep you updated. For more information, please refer to our AI Toolkit or contact one of our team.