Since Isaac Asimov developed his three laws of robotics, ethical issues around artificial intelligence have become firmly established in the public consciousness. But what about trustworthiness?
Firms employing artificial intelligence in any element of their business must ensure their AI solutions can be trusted and relied upon to deliver appropriate outcomes.
Lack of trust can lead to a lack of investment
AI development is at a pivotal point right now. The technological advances in recent years have moved science fiction staples into the here and now. But if AI doesn’t win hearts and minds, it won’t be seen as a viable investment and opportunities for innovation may be put at risk.
Earlier this year, the European Commission introduced the ‘Ethics Guidelines for Trustworthy AI’, helping firms across the EU establish best practice and appropriate standards for the use of the evolving technology. The guidelines aim to offer a degree of assurance to the public and to stakeholders that AI is being applied fairly and effectively.
The guidelines state that trustworthy AI should meet three criteria:

1 It must be lawful, complying with all applicable laws and regulations

2 It must be ethical, adhering to ethical principles and values, including:

Respect for human autonomy, incorporating opportunities for human choice

Prevention of harm, be it mental or physical

Fairness, including promoting equal rights and preventing bias

Explicability around how AI decisions are reached and transparency around the processes involved

3 It must be robust, with reliable and safe outputs, taking into account the context it was designed to be used in
Most of these criteria should be straightforward to achieve, but some elements, such as explicability, will not be so easy. A problem unique to AI is that developers themselves may not be able to explain how a system reached its conclusion, making it intrinsically hard to govern and to embed fairness.
Applying seven key guidelines
Expanding on these three points, the European Commission put forward seven supporting guidelines:
1 Human agency and oversight should be factored into AI design. AI should support fundamental human rights and, if these rights are threatened, a fundamental rights impact assessment should be conducted.
2 Technical robustness and safety are paramount: AI should be resilient and reliable, accurate, and able to reproduce its results.
3 Privacy and data governance must be maintained, including control over data access and protecting the integrity of data used for machine learning.
4 Transparency is essential and it should always be clear whether a person is dealing with an AI system or a human. All AI decision making must be traceable and explainable where possible. Where it is not possible, it is important to consider whether the use of AI is appropriate in that context.
5 Diversity, non-discrimination and fairness should underpin all AI actions, and measures should be taken to avoid unfair bias in both algorithms and accessibility.
6 Environmental and societal well-being should be considered throughout the development and deployment of AI, ensuring this is done in a way that is sustainable and benefits society as a whole.
7 Accountability should be clearly established for AI systems at every stage of the development and deployment cycle. This means algorithms, data and designs should be auditable with mitigating controls in place to reduce associated risks.
Establishing best practice
These guidelines are not enforceable but should be viewed as best practice. Organisations should review their existing AI programmes, and the governance frameworks around them, and assess the degree to which they meet the above criteria. Where AI is being applied, organisations should be transparent about its use, and all stakeholders should be given assurance that it is lawful, ethical and robust.
For more information on developing trustworthy AI, please contact David Royle.