Recent years have seen artificial intelligence advance at unprecedented speed. These rapid technological developments hold significant promise and potential benefits, but with great power comes great responsibility.
As this technology has permeated all aspects of our everyday lives, it has raised key ethical questions about its impact on society.
To reduce the risks and negative consequences of deploying AI, we must tackle the full spectrum of ethical, social, and legal aspects of this emerging technology so that we can build trustworthy AI systems that are used responsibly.
Why AI triggers ethical questions
Before the current wave of AI developments, it was believed that this technology would replace human involvement only in basic, repetitive tasks by automating routine jobs. However, as computers grew more powerful, AI became more sophisticated, propelled by the vast data sets that became available as the world became increasingly digitised. Today, we have robust specialised AI systems such as the GPT-3 and GPT-4 text-generating models and the DALL-E 2 and Stable Diffusion image-generating models, with many more in the works.
As AI is increasingly designed not only to enhance but, at times, to replace human cognitive capabilities, a prudent course of action is to evaluate its far-reaching impact. Just as ethical principles and legal frameworks have governed human behaviour for generations, these established rules must now be recalibrated to accommodate technology designed to emulate human thought, as the decisions it makes grow in importance and impact.
Key ethical questions posed by AI
The deployment of AI technology has broad implications that affect various aspects of society, from the economy and socio-economic dynamics to legal frameworks.
To ensure that this powerful technology is used in a way consistent with widely held human values, AI ethics is intended to serve as a set of guiding principles and rules.
AI is a potent tool, fuelled by the enormous amounts of data made available through the rapid advancement of digitisation. Some of this data is privacy-sensitive.
When AI feeds on this personal data, a privacy invasion may occur. For example, using this data to build search algorithms or recommendation engines may violate people's privacy rights and risk their independence in decision-making.
Today, algorithms may determine who gets a loan or whom to invite for a job interview. They also guide doctors in treating patients and judges in sentencing decisions.
A common assumption is that technology remains impartial, yet reality diverges significantly. As products of human creation, algorithms can never be entirely objective. Instead, they are affected by biased human judgments as they mirror the perspectives of their creators and the data they learn from. If this systemic bias is left unaddressed, AI can pose a risk to people's well-being and, at times, even their lives.
AI can provide data analysis that is faster, more thorough, and often more accurate than human analysts. As it can collect and analyse vast volumes of data that far exceed human capabilities, it is often used to provide decision recommendations.
As a result, it has transformed work and contributed to advancing human resources management. For example, AI is believed to provide a competitive advantage in recruitment by enabling a better understanding of the workforce, simplifying the hiring process, and mitigating bias.
But it is becoming evident that algorithms cannot eliminate discrimination on their own, because their decisions are shaped by the data they receive. If the underlying data is biased, algorithms will only perpetuate that bias, inequality, or discrimination.
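How biased historical data flows straight into biased recommendations can be illustrated with a toy sketch. The records and the deliberately simple frequency-based "model" below are entirely hypothetical; no real hiring data, system, or library is implied:

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, qualified, hired).
# In this made-up history, group "B" candidates were hired less often
# even when they were equally qualified.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
    ("B", True, False),
]

# A naive "model": learn the historical hire rate per group,
# looking only at qualified candidates.
hire_stats = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, qualified, hired in history:
    if qualified:
        hire_stats[group][0] += int(hired)
        hire_stats[group][1] += 1

def recommend_interview(group):
    """Recommend an interview if the group's past hire rate was at least 50%."""
    hired, total = hire_stats[group]
    return hired / total >= 0.5

# Equally qualified candidates receive different recommendations,
# because the model has simply learned the historical skew.
print(recommend_interview("A"))  # True  (past hire rate 100%)
print(recommend_interview("B"))  # False (past hire rate ~33%)
```

The point of the sketch is that nothing in the code is explicitly discriminatory; the disparity comes entirely from the skewed data the model was given.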
AI is rapidly being utilised to create new works in the creative industries, including music and literature. Today, new texts and compositions may be produced using specialised AI algorithms, raising concerns about who owns the copyright to these AI-generated works.
AI in legislation
Amidst growing ethical concerns accompanying the widespread integration of AI systems into our daily lives, this emerging technology has gained prominence in numerous legislative initiatives. These include UNESCO's Recommendation on Ethical AI, the Council of Europe's "Towards Regulation of AI Systems" report, the OECD's AI Principles, and the European Commission's Ethics Guidelines for Trustworthy AI.
UNESCO's Recommendation, a pioneering global standard for AI ethics, highlights the importance of safeguarding human rights and dignity. Having garnered unanimous support from all 193 member states, it promotes essential principles such as transparency and equity while advocating for human oversight of AI systems.
It is often argued that existing data privacy laws already regulate many key data-related aspects of AI.
In Europe, the General Data Protection Regulation stands as a comprehensive privacy law.
In contrast, the US features a patchwork of state-specific privacy laws that cover areas relevant to AI. A dozen states have passed comprehensive state privacy laws, while a comprehensive federal data privacy and security law, like the American Data Privacy and Protection Act, is awaited. The White House also announced a blueprint for an AI Bill of Rights.
Data sharing refers to the process of allowing a single set of data resources to be available to multiple users, including private and public entities.
Europe's Data Governance Act is intended to create trust in data sharing in line with data protection legislation. This is achieved through various tools, from technical solutions such as anonymisation and data pooling to legally binding commitments by data reusers.
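One of the technical solutions mentioned above, anonymisation, can be sketched in miniature. The record fields and the salt below are hypothetical, and real anonymisation requires far more care (for example, assessing re-identification risk across the whole dataset); this is only an illustration of two common steps, pseudonymising a direct identifier and generalising a quasi-identifier:

```python
import hashlib

SALT = "example-salt"  # hypothetical; in practice a secret, protected value

def pseudonymise(record):
    """Replace the direct identifier with a salted hash and coarsen the age."""
    result = dict(record)
    # Replace the name with a short, salted SHA-256 token.
    token = hashlib.sha256((SALT + record["name"]).encode()).hexdigest()[:12]
    result["name"] = token
    # Generalise the exact age into a 10-year band to reduce
    # the risk of re-identification from quasi-identifiers.
    low = (record["age"] // 10) * 10
    result["age"] = f"{low}-{low + 9}"
    return result

record = {"name": "Jane Doe", "age": 34, "department": "Finance"}
shared = pseudonymise(record)
print(shared["age"])                   # 30-39
print(shared["name"] != record["name"])  # True: the raw name never leaves
```

Because the hash is salted and deterministic, the same person maps to the same token across datasets, which is what enables pooled analysis without exchanging raw identifiers.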
On the other side of the Atlantic, the National Strategy to Advance Privacy-Preserving Data Sharing and Analytics is intended to maximise the benefits of data sharing equitably by promoting trust and addressing risks inherent in data-sharing activities.
Europe's General Data Protection Regulation imposes strict requirements on data protection for AI systems that process employee personal data. At the same time, the US Algorithmic Accountability Act seeks to address bias and discrimination in AI systems deployed in employment decisions.
How to develop an ethical AI policy and framework
The Alan Turing Institute published Understanding Artificial Intelligence Ethics and Safety, which provides a practical approach to implementing AI ethics. The framework is based on four guiding values throughout the innovation lifecycle: respect, connect, care, and protect, as well as four principles: fairness, accountability, sustainability, and transparency. Finally, a process-based governance framework operationalises these values and principles through transparent design and implementation processes of an AI project.
How HLB can help
Boasting a team of experts well-versed in the complexities of emerging technologies such as AI, HLB is well-equipped to provide guidance on how to leverage this technology in an ethical manner to enhance business growth while ensuring legal compliance. If you are looking to leverage cutting-edge technology but also develop and implement ethical frameworks for responsible and sustainable AI development and deployment, get in touch today!