Guendalina Dondé of the Institute of Business Ethics explains how companies can empower their employees and other stakeholders to use artificial intelligence efficiently, effectively and ethically

So, hands up who was woken up by Alexa this morning? Or now has Google Home finding their favourite radio station for them? Or had fun over the holidays trying to get Siri to tell them a joke? Artificial intelligence is now more accessible than ever and is becoming mainstream.

The rapid development and evolution of AI technologies, while unleashing opportunities for business and communities across the world, have prompted a number of important overarching questions that go beyond the walls of academia and hi-tech research centres in Silicon Valley.

Governments, business and the public alike are demanding more accountability in the way AI technologies are used, and are trying to find solutions to the legal and ethical issues that will arise from the growing integration of AI into people's daily lives.

AI technologies are not ethical or unethical, per se. The real issue is around the use that business makes of AI, which should never undermine human ethical values.

The Institute of Business Ethics, together with organisations and technology experts, has identified the 10 founding values and principles that should form the framework for the use of artificial intelligence in business. This framework, which goes by the acronym ARTIFICIAL, will help to guide decision-making.

Ethics, compliance and sustainability practitioners, boards and senior leadership – anyone responsible for implementing ethics programmes and for upholding corporate ethical values – should also feel able to challenge and guide the development and use of AI within their organisations using this framework.

Companies need to ensure that the AI systems they use produce correct, precise and reliable results. To do so, algorithms need to be free from biases and systematic errors deriving, for example, from an unfair sampling of a population, or from an estimation process that does not give accurate results.

It is worth noting that in some instances, because AI can learn from data gathered from humans, human biases can be reflected in the machine’s decision-making. This indicates how, even in the era of artificial intelligence, influencing human behaviour to embed ethical values should remain at the forefront of every conversation about business ethics.
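The point about human biases being reflected in a machine's decision-making can be made concrete with a small sketch. The scenario below is entirely invented for illustration: a naive "model" trained on biased hiring records simply learns the historical hire rate per group, and so reproduces the bias for equally skilled candidates.

```python
# A minimal, hypothetical sketch of how bias in training data propagates
# into a model's decisions. The "historical" records are invented: group
# "A" was hired at a higher rate than group "B" at identical skill levels.
from collections import defaultdict

# (group, skill_score, hired) - invented, biased history
history = [
    ("A", 7, True), ("A", 5, True), ("A", 6, True), ("A", 4, False),
    ("B", 7, False), ("B", 5, False), ("B", 6, True), ("B", 4, False),
]

# "Train" a naive model: estimate the hire rate for each group
outcomes = defaultdict(list)
for group, _skill, hired in history:
    outcomes[group].append(hired)
model = {g: sum(h) / len(h) for g, h in outcomes.items()}

# The learned scores now favour group A over group B purely because the
# historical data did - the bias was inherited, not programmed.
print(model)
```

Nothing in the code is malicious; the unfairness enters entirely through the sampling of the data, which is why testing for such effects before deployment matters.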

Many organisations include in their code of ethics (or similar document) guidance to support individual decision-making. This could be applied in a similar manner before adopting or using AI.

Key questions to ask include:

  • What is the purpose of our work, and what AI do we need to achieve it?

  • Do we understand how these systems work? Are we in control of this technology?

  • Who benefits and who carries the risks related to the adoption of the new technology?

  • Who bears the costs? Would it be considered fair if it became widely known?

  • What are the ethical dimensions and what values are at stake?

  • What might be the unexpected consequences?

  • Do we have other options that are less risky?

  • What is the governance process for introducing AI?

  • Who is responsible for AI?

  • How is the impact of AI to be monitored?

  • Have the risks of its usage been considered?


(credit: Tonmaso79/Shutterstock Inc)

To be at the forefront in the use of AI, business decision-makers, employees, customers and the public need to be able to understand and talk about its implications. It is essential that companies know the impact and side effects that new technologies might have on their business and stakeholders.

The topic of AI and its applications and ethical implications for business is broad and requires a complex multi-stakeholder approach. However, there are some measures that organisations can adopt to minimise the risk of ethical lapses due to improper use of AI technologies:

  • Design new and more detailed decision-making tools for meta-decisions to help ensure that the people who design algorithms and construct AI systems act in line with the company’s ethical values. This can come in the form of dedicated company policies that ensure proper testing and appropriate sign-off from relevant stakeholders, both internally and externally.

  • Engage with third parties for the design of AI algorithms only if they commit to similar ethical standards: the design of these systems might be outsourced and it is important to conduct ethical due diligence on business partners. A similar principle applies to clients and customers to whom AI technologies are sold. Testing a third-party algorithm in a specific situation is also important to ensure accuracy.

  • Establish a multi-disciplinary ethics research unit to examine the implications of AI research and potential applications; and be proactive in publishing its working papers to internal and external stakeholders.

  • Introduce “ethics tests” for AI machines, where they are presented with an ethical dilemma. Measure how they respond in such situations in order to predict likely outcomes in a real-life dilemma, and therefore assume responsibility for what the machines will do.

  • Empower people through specific training courses and communication campaigns in order to enable them to use AI systems efficiently, effectively and ethically. These training courses should be directed not only at the technical personnel building the tool, but also at senior business stakeholders who should understand the assumptions, limitations and inner workings of AI technology.
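One way the "ethics tests" suggested above might work in practice is a counterfactual probe: present the system with paired inputs that differ only in a protected attribute and flag any case where the decision changes. The decision function below is a deliberately flawed, invented stand-in, not any real system; the test harness is a sketch of the general idea.

```python
# A hypothetical "ethics test" harness: probe a decision function with
# inputs that differ only in a protected attribute and collect the cases
# where that attribute alone changed the outcome.

def decide(applicant):
    # Toy stand-in for an AI system, deliberately biased for illustration:
    # it applies a hidden penalty to group "B".
    score = applicant["skill"]
    if applicant["group"] == "B":
        score -= 2  # the kind of flaw the test should catch
    return score >= 5

def ethics_test(decide, cases, attribute, values):
    failures = []
    for case in cases:
        # Run the same case once per attribute value and compare outcomes
        results = {decide(dict(case, **{attribute: v})) for v in values}
        if len(results) > 1:
            failures.append(case)
    return failures

cases = [{"group": "A", "skill": s} for s in range(3, 9)]
print(ethics_test(decide, cases, "group", ["A", "B"]))
```

Any cases printed are ones where group membership alone flipped the decision; in a real programme, such failures would feed back into the sign-off and monitoring processes described above.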


Guendalina Dondé of IBE

A key element of the IBE's ARTIFICIAL framework is learning and communication. Employees and other stakeholders need to be empowered to take personal responsibility for the consequences of their use of AI, and they need to be provided with the skills to do so: not only the technical skills to build or use it, but also an understanding of the potential ethical implications it can have. It is important that companies improve their communications around AI, so that people feel that they are part of its development and not its passive recipients, or even victims.

Ensuring business leaders are informed about these technologies and how they work is essential to prevent unintentional misuse. However, it is important that businesses engage with external stakeholders as well, including media reporters and the general public, to improve their understanding of the technologies in use and ensure that they can assess more accurately the impact of AI on all our lives.

Guendalina Dondé is Senior Researcher at the Institute of Business Ethics. The IBE's latest briefing, Business Ethics and Artificial Intelligence, is a free download.

Main image credit: Phonlama Photo

This is part of our in-depth briefing on AI.


