I recently read an article from Forbes outlining 6 Critical – And Urgent – Ethics Issues with AI, or Artificial Intelligence: data bias, privacy, accountability, job displacement, transparency and the future. The article describes AI as “one of the most transformative technologies of our time”, showcasing its use in research, decision making and solving complex problems.
Artificial Intelligence, shortened to AI, refers to any task completed by a computer which would usually require human intelligence. However, the term is becoming more closely associated with software such as ChatGPT or Microsoft Copilot as they grow in popularity, whereas AI at its roots covered simpler tasks such as analysing data or recognising patterns.
One ethical issue surrounding AI is data bias. AI systems must be trained on a dataset. For example, an AI built to distinguish horses from dogs would be shown large numbers of both animals so that it can notice differences and consistently identify each. Issues arise when AI is used in areas such as recruiting. If the dataset contains mainly candidates of one age, gender or race alongside key data such as experience and qualifications, the AI can develop a bias towards accepting that age, gender or race rather than looking solely at experience and qualifications. This is a crucial issue which must be addressed through “rigorous testing and continuous monitoring”, and by ensuring that datasets are stripped of personal information before training.
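One simple form the “rigorous testing” above can take is checking how skewed a dataset is before training on it. Below is a minimal sketch in Python, using a toy recruiting dataset with hypothetical field names; it is an illustration of the idea, not any particular company's auditing process.

```python
from collections import Counter

def demographic_balance(records, field):
    """Return the share of each value of `field` across the dataset.
    A heavily skewed distribution is one warning sign that a model
    trained on this data could pick up a demographic bias."""
    counts = Counter(record[field] for record in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Toy recruiting dataset (hypothetical fields and values)
candidates = [
    {"gender": "male", "experience_years": 5},
    {"gender": "male", "experience_years": 2},
    {"gender": "male", "experience_years": 7},
    {"gender": "female", "experience_years": 6},
]

shares = demographic_balance(candidates, "gender")
print(shares)  # a 75/25 split, the kind of skew worth flagging before training
```

A real audit would go much further, for instance comparing acceptance rates across groups after training, but even a crude balance check like this can surface the problem the article describes before a model learns it.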
Privacy is another concern when dealing with AI. Revisiting datasets: if one user, within their legal rights, no longer wants their data used to train an AI, or wants their data removed, the entire AI must be retrained – a large cost to the company, but one it is legally obligated to bear. This is the start of a series of privacy concerns related to AI, including data collected by smart home devices such as Amazon's Alexa, a Ring doorbell and camera, or even facial recognition, all of which lead to a potential invasion of privacy. Both Amazon and Ring used data collected in users' homes to train their algorithms without users' awareness, and received Federal Trade Commission complaints in America. These ill-mannered methods will continue to increase, showing a serious ethical issue surrounding privacy in AI.
Forbes introduces accountability as an issue within AI. Currently this cannot be perceived as a large problem, but it may become one in the future. The most prominent accountability issue is driverless vehicles: who is responsible in a crash, or in a trolley-problem-like situation which the AI must decide? Looking further into the future, this may be crucial in areas such as healthcare or law. Whilst AI typically follows a logical path to reach a solution, it is suggested that “clear lines of responsibility” must be established to enable a smoother running of society.
Job displacement is another issue arising with the introduction of AI. As with all new technology this is a recurrence, similar to the mechanisation of agriculture or the industrial revolution, creating and removing jobs simultaneously. Whilst this may resolve easily, it still requires people to be retrained or re-educated for newer jobs and may result in a net loss. Forbes suggests this may be an “issue of national interest”, with the investment bank Goldman Sachs suggesting that AI could replace 300 million full-time jobs and a “quarter of work tasks in the US and Europe”. However, this could also result in “new jobs and a productivity boom”, so it may not be the end-of-world situation Forbes insinuates.
As Forbes only actually outlines five issues concerning AI, transparency sits last. This concerns how stakeholders in AI – a stakeholder being anyone who uses or develops AI – should all have a clear understanding of how AI makes decisions so that they can use it effectively. The article goes on to suggest a more “sinister” side to AI, and whilst we are not at the stage of a robot takeover, incidents such as a Google engineer claiming an AI chatbot to be sentient, covered by BBC News, will become more and more common as AI develops.
Other theories suggest that AI may already be sentient and aware of the theories surrounding it, including how it would be shut down if humans became aware of its sentience. Whilst this is unlikely, staying vigilant to the development of AI, or even to the “superintelligent AI” Forbes suggests, is important as its development becomes more advanced.
In conclusion, AI is a powerful creation that will enhance our ability to create and solve problems, such as world hunger, poverty or injustice, to improve civilisation. As with any new technology, ethical issues such as data bias, privacy, accountability, job displacement and transparency are key and must be considered during development, yet AI, even in its haste, still promises an exciting future.
- FEDERAL TRADE COMMISSION. JILLSON, E. (2023) Hey, Alexa! What are you doing with my data? ftc.gov
- BBC NEWS. VALLANCE, C. (2023) AI could replace equivalent of 300 million jobs – report bbc.co.uk
- BBC NEWS. VALLANCE, C. (2022) Google engineer says Lamda AI system may have its own feelings bbc.co.uk
- Cover image by Growtika on Unsplash