Article Type: Review Article
Date of acceptance: December 2024
Date of publication: December 2024
DOI: 10.5772/acrt.20240020
Copyright: ©2024 The Author(s). Licensee: IntechOpen. License: CC BY 4.0
This study conducted a systematic literature review to examine the trajectory of AI research over the past five years, from 2019 to 2023, focusing on emerging ethical and social concerns related to the deployment of AI technologies. The study also aimed to enhance the understanding and promotion of robust AI ethics for societal benefit. The explosive rise of the internet, AI and mobile technology has dramatically changed how we live, work, consume, learn and communicate. AI is improving the quality of human life, but if unregulated it poses dangers of unintended, disastrous and undesirable outcomes. Cyberattacks on critical infrastructure networks pose grave threats, exponentially increasing the risks of fatalities and service breakdowns. AI can rapidly diagnose rare diseases, robots can perform precision surgeries and chatbots can write assignments for students. AI is also used for surveillance, for monitoring financial activities and in autonomous military weapon systems. Two hundred and twenty-five publications from the Scopus database were selected to determine the central themes, the affordances and constraints of AI, and the principles that enhance public trust and accountability. Results show an upward trajectory in AI ethics research, from 6.2% of the sample in 2019 to 40.3% in 2023, and reveal the emerging ethical and social concerns in major socioeconomic domains. Results also show that AI collects data about individuals and that data breaches can have catastrophic consequences. The growing complexity and opacity of AI systems make their decision-making hard to understand, hindering the accountability of developers and deployers, and AI algorithms may be biased against minorities, perpetuating prejudices. The study contributes to the ongoing discourse on the ethical and societal concerns surrounding unregulated AI adoption, and the issues identified may assist policymakers in developing frameworks and policies for AI usage.
Keywords: AI ethics, AI policy, artificial intelligence, business, education
The unprecedented growth of the internet, rapid digitalisation, advancements in mobile technologies and social media have significantly increased the gathering, processing, storage and sharing of information. Artificial intelligence (AI) and related technologies allow us to virtually control our homes, bodies, cars, entertainment and city spaces [1, 2]. Hildebrandt [1] defined AI as technology that makes predictions and classifications by detecting patterns and correlations in large datasets with remarkable speed and accuracy. AI systems are trained on historical and current data to uncover patterns, learn to make decisions and improve themselves; poor predictions result if the training data is inadequate, inaccurate, biased or outdated. The power of AI is propelled by ubiquitous, pervasive devices complemented by the Internet of Things (IoT), which governs decision-making with far-reaching consequences in politics, industry, commerce, security, the judiciary, science, healthcare, transport and education [2]. While AI holds transformative potential for the global economy and human society, it also poses significant risks that must be mitigated through proper regulation.
The AI ecosystem involves large datasets, experts, algorithms, developers and users. Questions have repeatedly been raised about liability and accountability when AI systems malfunction or produce unintended consequences or undesirable outcomes: is the operator accountable for the unexpected harmful behaviour of AI, or the developer who created it? Research reveals that 85% of AI projects deliver erroneous and unintended outcomes because of biased developers, faulty design, inadequate data and inaccurate algorithms [3]. One area of focus is the right to privacy: AI systems collect large amounts of data about individuals for analysis and storage. If unregulated, this data may end up in the wrong hands or be used for commercial exploitation. Governments also collect personal data, which is analysed, interpreted, shared and stored, raising privacy and trust issues if it is leaked.
The world in which AI applications are deployed is unstable and unequal, with past biases and discrimination dampening the technology's transformative promise. AI relies on large datasets, whose collection, processing, analysis and sharing may violate privacy rights if data breaches occur. Attention should be given to the transparency, openness and accountability of datasets to eliminate biases while training AI algorithms. The decisions and actions of AI systems have societal and ethical implications. Technology leaders have expressed fear that AI is developing faster than efforts to regulate its use and can pose considerable threats if world leaders fail to control and regulate its development [4].
The impacts of digital ethics can be viewed through different lenses: the psychological impact relates to effects on individual people, the social impact encompasses collective and environmental effects, and the political impact concerns effects on the organisational structures of society [5]. The psychological impact of AI ethics relates to issues such as protection from undue manipulation, consenting to data collection and knowing when one is interacting with a non-human agent. The social impact encompasses concerns around justice and fairness. Democratic processes, rights and the economy fall under the political impact of AI ethics. Addressing these multifaceted concerns requires an interdisciplinary approach that integrates insights from various fields.
This study sought to bring empirical evidence and clarity to the debate on the necessity of establishing AI ethics frameworks to address the technology's potential negative societal and economic impacts. Unlike other studies, it focuses on the application of AI in major socioeconomic sectors and the most debated ethical and social issues, making an important academic contribution to the growing discourse around AI ethics. Most research on AI ethics has been done by scholars from the Global North; this work contributes nuanced and diverse literature from scholars in the Global South.
Using a literature review and thematic analysis method, relevant articles for the study and the identification of major themes were sourced from Scopus, a central electronic bibliographic database. Scopus was chosen because it is one of the largest peer-reviewed scientific databases, with extensive citations from leading AI journals, and it supports advanced search options that enable researchers to construct a comprehensive search query. The keywords of the search string used to identify appropriate articles were "Artificial Intelligence" AND "Ethics" OR "Bias" OR "Social Concerns", and the search was executed on 2 December 2023.
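The paper does not report the exact field codes used. One plausible rendering of this search string in Scopus advanced-search syntax, with the grouping and the 2019–2023 date window made explicit, would be:

```
TITLE-ABS-KEY("Artificial Intelligence" AND ("Ethics" OR "Bias" OR "Social Concerns"))
AND PUBYEAR > 2018 AND PUBYEAR < 2024
```

The parenthesisation here is an assumption: as reported, the string is ambiguous about whether each OR term was conjoined with "Artificial Intelligence".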
The search yielded 265 articles published between 2019 and 2023. The results were exported to Microsoft Excel 2016 for data preprocessing and analysis.
This systematic literature review included published, peer-reviewed articles, conference papers and other scholarly publications focused on ethical, social or regulatory issues related to the development and application of AI technologies. The included articles were published between 1 January 2019 and 31 December 2023 and written in English. Only primary research studies and review papers were considered; other publication types, such as editorials, commentaries and letters, were excluded, as were articles that did not cover emerging conversations on AI ethics and social concerns or the possible applications, affordances and constraints of AI. Additionally, only publications whose full text was freely accessible or available through institutional subscriptions, and that met a minimum quality threshold, were selected. Studies that did not meet the inclusion criteria were excluded from this review. The screening and selection process was conducted by two independent reviewers.
The initial literature search identified 265 potentially relevant papers, which were screened on the basis of their titles, narrowing the selection to 253 papers. Next, the abstracts of these 253 papers were reviewed to assess their relevance to the research goal; this step also involved screening for duplicate and irrelevant titles, leaving 239 papers for further evaluation. The third stage was a comprehensive review, during which the full text of each remaining article was critically analysed for alignment with the study's objectives. After this detailed screening and synthesis process, a final set of 225 articles meeting the inclusion criteria was selected for the review.
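As an illustration only, this four-stage funnel could be tallied from the exported records roughly as follows. This is a minimal sketch: the file and column names are hypothetical, and the authors worked in Microsoft Excel rather than Python.

```python
import pandas as pd

# Hypothetical CSV export of the Scopus results (265 records, 2019-2023).
records = pd.read_csv("scopus_export.csv")

# Stage 1: title screening (265 -> 253). "title_relevant" is a hypothetical
# boolean column recording the two reviewers' decisions.
titles = records[records["title_relevant"]]

# Stage 2: abstract review plus removal of duplicate titles (253 -> 239).
abstracts = titles.drop_duplicates(subset="title")
abstracts = abstracts[abstracts["abstract_relevant"]]

# Stage 3: full-text review against the inclusion criteria (239 -> 225).
included = abstracts[abstracts["full_text_eligible"]]

# Expected counts per the text: 265 253 239 225
print(len(records), len(titles), len(abstracts), len(included))
```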
Thematic analysis was used to systematically review the literature and identify the prominent socioeconomic themes and the ethical and social issues associated with the use of AI technologies. The selected articles were reviewed extensively to identify recurring socioeconomic themes in an inductive process involving multiple close readings and the identification of key thematic sections where the ethical and social concerns of AI were the focus.
Results show that research on AI ethics and social concerns has risen steadily over the past five years, from 6.2% of the sample in 2019 to 13.7% in 2020, 18.9% in 2021 and 23% in 2022. Most of the articles used in this study (40.3%) were published in 2023, as shown in Figure 1. The rising interest among scholars is accompanied by growing attention from governments, industry and other stakeholders to improve research on AI ethics to protect humanity.
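These yearly figures are shares of the 225 included articles. As an illustration (a minimal sketch with hypothetical file and column names), such shares can be computed directly from a coding sheet:

```python
import pandas as pd

# Hypothetical coding sheet: one row per included article, with its
# publication year and the dominant socioeconomic theme assigned to it.
articles = pd.read_csv("included_articles.csv")  # 225 rows

# Share of articles per year, e.g. 2019: 6.2 ... 2023: 40.3
year_share = (articles["year"].value_counts(normalize=True)
              .sort_index() * 100).round(1)

# The thematic shares reported below can be computed the same way.
theme_share = (articles["theme"].value_counts(normalize=True) * 100).round(1)

print(year_share)
print(theme_share)
```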
The leading themes that emerged from the qualitative data analysis are health (31%), general AI applications (27%), surveillance (6.6%), education (6.2%), robots (4.4%), business (4%), computer vision (3.5%), the judiciary (3.5%) and ChatGPT (2.2%). Emerging topics include earth observation, law enforcement, autonomous vehicles and recommender systems, each at around 2%, while the remaining topics accounted for less than 1% each, as shown in Figure 2. The study also aimed to capture the emerging conversations on AI ethics among researchers and explored AI's possible benefits, pitfalls and current applications as ethical and social concerns continue to rise. The issues highlighted here can assist policymakers in developing frameworks and policies that guide AI use. The following sections expound on some of the broad socioeconomic areas that AI has transformed and the ethical issues that arise.
Nearly a third (31%) of the publications on AI ethics and social concerns focus on healthcare. AI-powered health applications reduce the cost of care, improve access to healthcare and help uphold the fundamental right to life [5, 6]. Research confirms that AI-driven image recognition systems can detect skin cancer more accurately and precisely than the best human dermatologists [7]. IBM's Watson, an AI-powered diagnostic system, diagnosed a rare form of leukaemia in ten minutes by comparing the patient's condition with over 20 million oncology records [8], something human specialists had failed to do for months. AI systems are rivalling and outperforming specialists in diagnosing rare conditions such as brain cancer [7]. AI-powered robots also assist surgeons in performing precision surgical operations.
Without policies, such systems endanger human autonomy, dignity and self-fulfilment. Ethical and human rights concerns are being raised as robots provide company and comfort to patients, assist caregivers in lifting or bathing patients and help ailing patients to eat [9]. AI-powered health systems require large amounts of data to train their algorithms, posing privacy risks in the collection, processing and sharing of health-related information. As AI-powered healthcare becomes more powerful, the risks associated with supervision and control will increase. Scholars have raised ethical concerns regarding the accountability and responsibility that arise from the unwanted behaviour or effects of AI-based healthcare systems [10]. AI-based disease surveillance systems pose risks of human rights violations, including the politicisation of care and medical access [11, 12]. A hacked insulin pump or oxygen tank would have catastrophic consequences for public health.
Businesses and governments adopt AI to tap into significant opportunities such as improved decision-making, reduced inefficiencies, better outcomes and transformed business models [5]. Financial institutions use AI to detect fraud and to expand credit access to previously excluded communities. In the United Kingdom, a revenue system called 'Connect' uses algorithms and big data to detect discrepancies between tax submissions and total declared income [13]. The system sifts through credit card records and other financial information from bank accounts, employee benefits and all possible sources of income to detect tax discrepancies, with those suspected of tax evasion shortlisted for further investigation. AI systems provide advice on stock trading, autonomously executing millions of commands in microseconds without human intervention [14]. In human resources, AI-based hiring systems reduce human decision-makers' bias regarding applicants' race, religion, language, age, appearance or sex [15]. AI systems evaluate potential job applicants before they are called for interviews, potentially eliminating prejudice and unfair labour practices [16].
China's central bank collects data from over 800 million customers to develop a public credit history system intended to fix societal issues; this significantly reduces forged and fake credit scores [17]. The system aims to regulate the financial activities of businesses and individuals and to monitor tax evasion and academic dishonesty, among other things. Yet AI-based systems can also entrench bias: despite similar resumes, research shows that AI-controlled recruitment systems are 50% more likely to call back individuals with 'white-sounding' names than 'black-sounding' names, thereby perpetuating discrimination [18]. Unauthorised access to devices that store financial information has grave consequences for governments, institutions and individuals, as hackers can empty bank accounts.
Nearly a tenth (8.4%) of the publications on AI ethics and social concerns focused on education. AI systems have transformed education, from natural language processing (NLP) and recommender systems to intelligent tutoring systems. Recommender systems use AI algorithms to assist students in choosing a career path, programme, course or learning materials [19]. AI-based personalised learning systems mimic dynamic human teaching by offering learners independent learning paths, assessing students and providing appropriate feedback or resources for a better educational experience [20, 21]. AI text generators are used in academic writing, journalism and document summarisation, applying NLP methods to generate text that conforms to the grammar and syntax of the language [22]. Text generators are built on the Generative Pre-trained Transformer-3 (GPT-3) and its successors, models with over 175 billion parameters [23]. ChatGPT is an AI-driven language processing system capable of producing articles and other text materials that human evaluators and text detection systems often cannot distinguish from human writing [23, 24].
Students use AI text generators to create academic text quickly and efficiently [25], which may limit their creative thinking, knowledge, skills and competencies [21]. Text generators also provide instant responses to queries in natural language, thus supporting continuous learning, but this raises alarms about potential misuse resulting in fake news or plagiarised content. AI text generators are improving rapidly while educators remain reactive, and teachers will soon require policy guidelines on how to identify and evaluate work generated by AI and synthesised by the learner. Other potential threats include misinformation, phishing and pretexting capabilities that may support social engineering by hackers and others seeking to breach systems.
More than a quarter (27%) of the research focused on the ethical and social concerns of general AI applications. AI technologies are steadily approaching superhuman intelligence [26–28], and concerted effort is required to ensure that policies and regulations protect humanity from unintended consequences [4]. AI systems require constant review and monitoring so that they do not merely serve commercial interests at the expense of fairness. AI can exacerbate human bias and discrimination; for example, Google's face recognition system mistakenly classified black people as gorillas [29]. Everyday cases of AI bias include chatbots learning hate speech while interacting with users [30], sexual discrimination based on societal norms [31] and the use of race or socioeconomic status to grant credit. Stigmatisation occurs when data analytics systems recommend that police deploy more patrols in specific neighbourhoods based on race, ethnicity, religion or social status.
Although it may be challenging to consent to every step of data collection, Saheb [32] contends that citizens must be made aware when their information is captured, rather than simply being assumed to have consented to the collection, sharing and storage of their personal information. AI actions must be transparent, open and accountable, allowing stakeholders to obtain information and an explanation of the action or result produced; this is currently difficult because AI operates as a black box, in which the steps leading to an outcome are hidden [10, 19, 33]. Moreover, the entire value chain of actors participating in an AI system is very complex, and determining responsibility, traceability and accountability is laborious and difficult. AI algorithms depend on data, and if that data carries societal prejudices against minority, ethnic or racial groups, AI may perpetuate such biases [31, 34, 35]. Biases may arise from cultural and societal practices or be perpetuated by the system designer and developer. Researchers have revealed that the AI field is overwhelmingly white and male-dominated and will therefore reconstruct and perpetuate historical power imbalances, gender stereotypes and biases [32]. Ethical issues arising from AI require an interdisciplinary approach involving technology, stakeholders and governing policies.
Almost a tenth (8%) of the publications on AI ethics and social concerns focused on robots and computer vision. The proliferation of diverse robotic systems integrated into our day-to-day activities will inevitably give rise to a wider range of ethical quandaries requiring careful consideration and governance frameworks [36]. For robots to be truly ethical, they need the capacity for moral reasoning: to evaluate the potential outcomes of their actions and make decisions that can be defended from an ethical standpoint, even in highly complex and conflicting scenarios [37]. The potential vulnerability of autonomous vehicle systems to hacking highlights important risks that need careful consideration and mitigation through robust cybersecurity measures and fail-safes [38, 39]; the consequences could be catastrophic if cars are essentially weaponised against innocent people.
Computer vision allows computers to understand the content and meaning of images, perceive and describe the real-world scene that humans see in one or more images, and reconstruct key properties of that scene, such as its shape, lighting and colour distributions [40]. The literature highlights the wide-ranging application domains of AI-driven image recognition technology, from healthcare, where AI-powered diagnostic tools have shown promising results in areas like medical imaging analysis [8], to security and surveillance, where image analysis systems have been instrumental in enhancing threat detection capabilities [41].
Biases and errors in the algorithms underlying image recognition can have very harmful consequences, leading to wrongful arrests and profiling, and perpetuating offensive stereotypes and dehumanising classifications of certain racial and ethnic groups [43]. Small AI-driven motion-activated cameras distinguish animals from humans to counter poaching [41], yet these could be repurposed by terrorists for enemy identification or for criminal use.
Several studies reveal the use of AI applications in the security sector to flag potential terrorists and other criminal elements using facial recognition systems, and militaries can apply AI to autonomous weapon systems [10]. Military systems using AI-enhanced computer vision include self-driving tanks and armoured vehicles that can detect targets without human intervention [44]. The United States is refining its Maven project, in which machine learning algorithms improve the analysis of drone video footage, and is carrying out extensive experiments with swarms of autonomous combat drones designed to overwhelm an enemy's defences with great precision and accuracy [45]. Japan has developed a maritime patrol aircraft that uses computer vision and AI to identify vessels, potential threats and targets [46].
In the military domain, computer vision data can be poisoned to mislead AI target recognition systems into attacking non-military sites such as civilian infrastructure [47]. A similar strategy was used during World War II, when imaginary armies of inflatable tanks and plane decoys deceived the Nazis about troop positions. However, critics argue that the main threat of autonomous weapons lies not in conventional warfare but in the risk of their use by non-state actors, by criminals or in asymmetric conflicts [48]. Scholars have also raised ethical concerns regarding the accountability and responsibility arising from illegal or unwanted behaviour of AI-based image recognition technologies [10].
Research shows that one of the emerging areas is the use of AI in the justice system. AI-driven systems can analyse large volumes of court papers and assess the likelihood of defendants being convicted [5, 33]. AI systems are built on self-learning algorithms trained on historical data [5, 49, 50]; if such data contains biases based on gender, race, class or religion, the result may be biased decisions that prejudice minority groups and deepen social exclusion. The London Metropolitan Police deployed a facial recognition tool with racial biases that was less accurate for black people and minority ethnic groups [51]. The COMPAS algorithm used in courts in the United States was prejudiced against black inmates: it predicted their probability of re-offending to be twice that of white inmates, implying that white inmates were low-risk and could receive parole or lighter sentences [52].
ROSS, an AI-based application, processes over a million pages per second and uses natural language to answer attorneys' questions about laws and cases [53]. The app shows the potential of AI to expand legal services to disadvantaged communities that cannot afford lawyers' fees and have historically been shut out. Participants in the judiciary ecosystem must support AI's ethical, social, moral and responsible integration. The use of AI systems in judicial decision-making has raised questions of fairness, openness, transparency and accountability, as these systems may be built on data with inherent biases [54]. As the judiciary adopts AI-based algorithms to automate operations from pre-trial to prosecution, these algorithms must pass ethical and social tests.
AI systems will continue to perpetuate societal biases and disadvantage minorities unless there is a deliberate effort to correct historical injustices, social exclusion and existing structural biases [31]. AI-based judgements that use structured algorithms are expected to be more transparent and open. However, the biggest challenge lies in the available training data: if that data contains prejudices, AI systems will perpetuate them. Researchers and practitioners are discussing the opportunities and challenges of harnessing AI for the judicial system and its implications for the rule of law and human rights.
Although some areas, such as infrastructure, attracted a small number of studies, scholars report that AI-driven technologies are used to build applications that manage energy grids, public safety, assets, natural resources and the environment, promoting e-governance. Industries are using IoT to manage production lines and optimise supply chains and manufacturing. AI-driven technologies are now central to the management of public infrastructure such as power systems, nuclear plants, transportation systems and smart buildings. The integration of digital technologies with IoT and AI can assist cities in transforming into smart cities, with heightened information sharing and digital control of city-wide infrastructure and resources [55]. This gives residents added remote control and management of heating and cooling systems, intrusion detection systems, entertainment systems and lighting.
IoT-powered sensors can monitor the environment and predict floods and earthquakes to save lives, and meteorological services are applying AI in weather forecasting with very high accuracy [56]. AI can monitor power systems to assess grid stability, support peak-load planning and real-time metering, and detect power failures, avoiding disastrous consequences for human life and financial investments as well as disruptions [57]. IoT devices are used in electrical grids to monitor network conditions and track energy flow.
Transportation systems benefit from AI applications in traffic flow and accident prediction: traffic lights can be adjusted to clear congested routes [58], bringing convenience to the public. AI-powered systems can improve traffic flow by letting traffic signals communicate among themselves and make the adjustments needed to reduce congestion [59]. In-vehicle and roadside IoT devices improve the collection and monitoring of transportation data, and this has supported the rise of self-driving cars built on automated reasoning, deep learning and computer vision [60]. AI-powered computer vision and unmanned aerial vehicles (UAVs) are being deployed to monitor electrical infrastructure for cable defects, vandalism and damaged insulation [61]. Water treatment facilities are fitted with IoT devices that monitor water quality by measuring common contaminants such as natural organic matter (NOM), persistent pollutants and heavy metals [62].
When cybercriminals penetrate networks covering critical national infrastructure governed by IoT systems, such as power and water utilities, transportation systems, hospitals and public administration, the potential for loss of human life rises exponentially [39, 63]. A large-scale cybersecurity attack on London's rail, gas and telephone networks would cost the country an estimated £111 million a day [57]. In recent years, the US Department of Homeland Security reported over 400 intrusions into the country's energy installations, with 75% of major companies attacked in 2018 [64]. Hackers can bypass infrastructural systems such as vehicle control systems, with catastrophic effects on public transportation. Such threats carry serious ramifications for individuals, institutions and governments, raising security, data privacy and system-breach concerns and triggering ethical and social alarm over the elevated threat to human life, public health and service provision.
Interestingly, 6.6% of the publications on AI ethics and social concerns were focused on surveillance. The application of AI in surveillance has progressed beyond science fiction and is significantly impacting the military, policing and other government agencies. Incorporating AI-based surveillance has proved pivotal in all aspects of socioeconomic development, including healthcare, security, transportation and manufacturing [65]. An estimated 43% of the world’s countries have invested in AI for surveillance, computer vision, facial recognition and intelligent policing [66]. In the transport sector, intelligent traffic surveillance systems are implemented to improve public safety, monitor suspicious behaviour to root out terrorists and reduce trespassing and negligent deaths [67].
Ethical concerns have been raised when authoritarian regimes use AI-based technologies for surveillance in the transport sector and at public gatherings [68]. Over 1 million Uighur Muslims in China have been arrested with the help of AI-powered surveillance systems [12]. Von Blomberg [17] noted that China's social credit system shows how governments can use AI and digital technologies to regulate and monitor citizens' activities, raising ethical and social questions about AI use by states. Growing evidence shows that governments use this credit system to spy on their citizens. Such surveillance systems are also being tested in many developing countries, undermining democracy. AI-powered surveillance is broad in scope, covering everyone from individuals petitioning the government to terrorism suspects; these technologies can therefore be deployed against anyone whose activities may be viewed as anti-government [69].
Scholars note that AI-based surveillance may harm the human rights and privacy of the general public, especially those who engage in democratic processes such as protests and other anti-establishment initiatives enshrined in their constitutions [70]. As governments partner with technology companies to develop AI surveillance systems, it is essential to develop ethical guidelines that hold companies socially and ethically liable for the consequences of their technology [71]. As more countries adopt AI-powered surveillance technologies, frameworks and strategies are needed to counter misuse by states and to support moral and accountable use [72].
Research on AI ethics and social concerns has steadily increased from 6.2% in 2019 to 40.3% in 2023, with notable growth each year: 13.7% in 2020, 18.9% in 2021 and 23% in 2022. The findings highlight key AI affordances and their ethical implications, with nearly a third of studies focusing on healthcare. This underscores the need for continuous dialogue on ethical AI deployment, critical for fostering trust, ensuring fair treatment and enhancing the overall quality of care in healthcare settings. More than a quarter of the publications focused on general AI applications, highlighting the urgent need for comprehensive policies to mitigate the risks of advanced technologies, particularly as they approach superhuman intelligence. Research on surveillance highlights the importance of establishing ethical guidelines and oversight in military and law enforcement applications to protect privacy and civil liberties. Additionally, almost a tenth of the studies focused on robotics, pointing to the growing integration of AI into daily life and necessitating governance frameworks to address emerging ethical dilemmas. A similar share examined AI in education, indicating that while AI can improve teaching and learning, it should be implemented in a manner that respects students' rights and promotes equitable learning environments. Overall, these findings call for interdisciplinary collaboration among ethicists, technologists and domain experts, as well as public engagement in discussions about AI ethics, to create robust frameworks that safeguard societal values and rights.
A new dawn has broken across major socioeconomic sectors, and a digital revolution powered by AI has reconfigured the way we live and work. AI systems are deployed in an uncertain and volatile world, challenging technology companies to build safe and reliable AI systems that protect citizens. The research community has responded to calls from governments, industry and other stakeholders to improve research on AI ethics to safeguard humanity. This study revealed a steady rise in research on AI ethics and social concerns, from 6.2% in 2019 to 40.3% in 2023, reflecting rising interest among scholars, stakeholders and governments over five years.
Data analysis showed that the largest share of the research covered AI ethics in healthcare, demonstrating AI's ability to transform healthcare by improving outcomes and efficiency across the medical field while raising ethical concerns. The next most popular theme addressed the general ethical concerns arising from AI integration in general applications, indicating a need for responsible development to address bias and discrimination, privacy, transparency, manipulation of information and unequal access.
Research on surveillance applications stresses the importance of ethical oversight in the security sector, including the military and law enforcement, to ensure that AI technologies are used responsibly and in a manner that upholds human rights and ethical standards. Furthermore, the focus on robotics emphasises the need for governance frameworks ensuring that AI-driven robotics is developed and implemented in a manner that respects human rights and promotes societal well-being. Additionally, research examining AI in education highlights ethical issues related to data privacy and bias, emphasising the need to ensure that these technologies enhance learning while respecting students' rights and dignity. Overall, these findings underscore the necessity of interdisciplinary collaboration and public engagement in developing comprehensive ethical frameworks for AI.
The study revealed the breadth and depth of AI-enabled transformation across significant socioeconomic sectors of the global economy. It also presented an integrated view of the affordances and constraints of AI-powered systems and the ethical and social principles that enhance public trust and accountability.
The findings show that the ethical, legal and social issues arising from AI's undesirable actions must be addressed. The study also revealed that AI development is relentless and disruptive, with profound and often irreversible outcomes, yet regulation of its development remains opaque and slow. The decisions and consequences of an AI system must be transparent and open in order to establish public trust and the accountability of implementers. The predictions and actions of AI-driven technologies are challenging to follow because the technology operates as a black box, making oversight difficult for policymakers, users, the public and governments. The potentially catastrophic effects of AI system failure or illegal action require governments and policymakers to craft frameworks and policies that protect the public, not least because personal information is collected, processed and stored without citizens' consent.
This study conducted a systematic literature review to analyse and consolidate the ethical and societal issues surrounding AI technologies over the past five years, from 2019 to 2023, with the aim of enhancing our understanding and promoting robust measures to mitigate negative societal and economic impacts. It presented the application of AI in major socioeconomic sectors and the most debated ethical and social issues, making an important academic contribution to the growing discourse around AI ethics.
In essence, the study bridges the academic and practical realms, offering a structured approach to categorising and mapping major socioeconomic sectors in order to streamline and focus the ongoing debate. It also provides a synoptic overview of topical discussions and ethical issues of interest to governments, practitioners, academics and developers as they strive for accountable and responsible use of AI. Scholars in developing countries will increasingly encounter these ethical questions as more developing countries adopt AI. When AI systems are deployed, it is essential to ensure safety, fairness, privacy, reliability and trust.
The study has a few limitations. Some trending and topical issues may have been missed because a single large database was used; restricting the search to the Scopus database significantly reduced the sample size. Content from grey literature was also omitted, and the study did not distinguish between machine-learning-based AI and AI-based expert systems, whose ethical and social impacts differ.

Empirical research is required to consolidate the current literature on AI's benefits, drawbacks and ethical concerns. This study recommends more education and awareness-raising among stakeholders, from learners and practitioners to developers and government, on the potential of AI and the ethical issues that arise from its irresponsible use. Since AI is multidisciplinary, courses on awareness and responsible use should be introduced across all disciplines, from primary school to university. Future research may provide empirical findings on the drawbacks and benefits of AI, consider the attendant social and ethical issues, and examine how national policies can be applied to technologies that are developed globally and used across borders. The evidence provided in this study appeals to technology companies, policymakers and governments to enact policies and frameworks regulating AI use to mitigate bias and social exclusion and improve accountability.
This research did not receive external funding from any agencies.
Not Applicable.
Source data is not available for this article.
The authors declare no conflict of interest.
© The Author(s) 2024. Licensee IntechOpen. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.