
Volume 9, Issue 8, August – 2024 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165 https://doi.org/10.38124/ijisrt/IJISRT24AUG1131

The Erosion of Journalistic Integrity: How AI-Driven Fake News and Deepfakes Complicate Truth Verification in Journalism
Samad Uthman
Agence France-Presse (AFP)

Abstract:- The introduction and consequent proliferation of Artificial Intelligence (AI) and deepfakes have created new challenges for journalists worldwide. These technologies have made it alarmingly easy to generate and disseminate fake news, complicating the verification process and undermining journalistic integrity. The rapid spread of AI-driven misinformation not only burdens journalists with the task of distinguishing fact from fiction but also erodes public trust in the media. This paper explores the implications of AI and deepfakes for truth verification in journalism, highlighting the ethical dilemmas faced by journalists in this new digital landscape. By examining the impact on public perception and the challenges of maintaining credibility, the study underscores the need for robust verification tools and ethical guidelines to safeguard the integrity of journalism in the age of AI.

Keywords:- Artificial Intelligence, Deepfakes, Journalism, Misinformation, Public Trust, Truth Verification

I. INTRODUCTION

The proliferation of artificial intelligence (AI) is undoubtedly one of the most transformative developments of the twenty-first century (Brynjolfsson et al., 2019; Dupont, 2013). AI has been continuously reshaping industries and revolutionising how tasks are performed and completed (Dauvergne, 2020). For students, AI has become an invaluable tool, enabling them to write more effectively and perform grammar checks with the help of AI-powered writing assistants (Papa et al., 2021; Singh & Hiran, 2022). These tools not only improve the quality of their work but also save time and enhance learning experiences (Li, 2020). In the business world, companies leverage AI bots to optimise sales and marketing strategies, allowing for more personalised customer interactions and efficient data analysis (Ljepava, 2022; Mariani et al., 2022; Nalini et al., 2021). Teachers, too, have embraced AI, using it to plan lessons, tailor educational content to individual student needs, and discover innovative methods for delivering classes (Fitra, 2021). The integration of AI across various fields has undeniably made work easier and more efficient, offering new possibilities and transforming traditional processes (Chakraborty, 2022; Kraus et al., 2022).

The world has rapidly evolved from traditional AI applications, which primarily focused on tasks like data analysis and pattern recognition, to the more advanced realm of generative AI. This new wave of AI technologies, including models like OpenAI's ChatGPT and Google's Gemini (formerly Bard), goes beyond simply processing information; these models can create entirely new content. Generative AI models are capable of producing text, images, and even videos that are highly convincing, often indistinguishable from human-made content. This shift marks a significant milestone in AI's development, as the ability to generate content autonomously introduces both opportunities and challenges across various industries (Oksymets, 2024).

In journalism, the rise of generative AI has profound implications for integrity and the spread of fake news. While these technologies can assist journalists in content creation and idea generation, they also make it easier to fabricate information and create deepfakes, which can be used to deceive the public. The realism and scale at which generative AI can produce false narratives threaten the credibility of news outlets, as distinguishing between authentic and AI-generated content becomes increasingly difficult (Dalalah & Dalalah, 2023). This erosion of trust complicates truth verification and places a greater burden on journalists to ensure the accuracy and authenticity of the information they publish.

Journalism is no exception to this AI-driven transformation (Helberger et al., 2022; Helberger & Diakopoulos, 2023). Journalists now use AI for an array of tasks, ranging from news writing and idea generation to sorting through large volumes of information and even broadcasting (Arets et al., 2024; Moravec et al., 2024; Oksymets, 2024). AI algorithms can quickly scan through data, identify trends, and generate news articles in a fraction of the time it would take a human journalist (Diakopoulos, 2019; Stray, 2021). This has led to more efficient newsrooms and the ability to cover a broader range of topics. Additionally, AI tools assist in curating content, tailoring news to the preferences of individual readers, and helping journalists stay ahead in an increasingly fast-paced news environment (Singh & Pathania, 2024). The use of AI in journalism has expanded the capabilities of news organisations, enabling them to reach wider audiences and improve the overall quality of their reporting (Kaklij et al., 2020).


However, despite these advantages, the integration of AI into journalism has brought significant challenges, particularly in the areas of truth verification and combating misinformation (Kertysova, 2018). The same technology that enables AI to assist journalists can also be used to create convincing fake news and deepfakes, making it increasingly difficult to distinguish between real and fabricated content (Thomson et al., 2022). The ease with which AI can generate false information, combined with the rapid spread of such content through digital platforms, poses a serious threat to journalistic integrity. Journalists now face the daunting task of verifying the authenticity of news and images in an environment where sophisticated AI tools can mimic reality with alarming accuracy (Thomson et al., 2022). This challenge has sparked growing concern within the industry, as the line between fact and fiction becomes blurred, undermining public trust in the media and complicating the role of journalists as gatekeepers of truth (Sharma et al., 2022).

II. AI-DRIVEN MISINFORMATION AND FAKE NEWS

Artificial intelligence (AI) is often characterised in two seemingly contradictory ways. One perspective defines AI as intelligence that replicates human intelligence and behaviour. Conversely, AI is also described as intelligence that contrasts with natural, or human, intelligence. As Russell and Norvig (2016) explain, the first set of definitions evaluates AI's success "in terms of fidelity to human performance," while the second set of definitions assesses AI's success "against an ideal performance measure, called rationality" (p. 1).

To gain a clearer understanding of fake news, Wardle and Derakshan (2017) differentiate between three key categories: misinformation, disinformation, and malinformation. Misinformation is characterised by the unintentional sharing of false information without any harmful intent. Disinformation, in contrast, involves the intentional creation and distribution of false information with the specific purpose of causing harm or deceiving others. Malinformation, however, refers to the dissemination of true information with the intent to inflict harm, often by revealing private or sensitive details to the public. These distinctions help to clarify the diverse motivations and impacts associated with the spread of fake news.

Fig 1: The Psychology of Fake News (Wardle and Derakshan, 2017)
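For readers approaching the problem computationally, the taxonomy above can be read as two independent properties of a piece of content: whether it is false and whether it is shared with intent to harm. The following minimal Python sketch encodes that reading; it is purely illustrative, and the class and function names are assumptions of this illustration rather than anything drawn from Wardle and Derakshan (2017).

    from dataclasses import dataclass

    # Illustrative encoding of the taxonomy: each category is distinguished by
    # two properties, whether the content is false and whether it is shared
    # with intent to harm.
    @dataclass(frozen=True)
    class InformationDisorder:
        name: str
        is_false: bool
        intent_to_harm: bool

    CATEGORIES = (
        InformationDisorder("misinformation", is_false=True, intent_to_harm=False),
        InformationDisorder("disinformation", is_false=True, intent_to_harm=True),
        InformationDisorder("malinformation", is_false=False, intent_to_harm=True),
    )

    def classify(is_false: bool, intent_to_harm: bool) -> str:
        # Map the two properties back to a category label.
        for category in CATEGORIES:
            if (category.is_false, category.intent_to_harm) == (is_false, intent_to_harm):
                return category.name
        return "accurate information shared without intent to harm"

    # Example: false content shared without harmful intent is misinformation.
    assert classify(is_false=True, intent_to_harm=False) == "misinformation"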

Artificial intelligence is used to create fake news, leading to the spread of misinformation. Inobemhe et al. (2020), echoing McGonagle (2017), defined fake news as "the deliberate misinformation about an issue or a matter" (p. 156). Lazer et al. (2018) define fake news as information that is fabricated to resemble news media content in form, often overlapping with information disorders such as misinformation or misleading information. According to Pate et al. (2019), the proliferation of fake news has emerged as a significant societal threat, prompting calls for urgent government intervention through legislative measures. They emphasise that the impact of fake news on society is substantial and cannot be disregarded. Similarly, Pulido et al. (2020) view fake news as a public health concern, asserting that it poses harm to individuals exposed to it. This perspective underscores that fake news represents a serious public health risk for those who encounter it.


AI-driven fake news refers to misinformation that is deliberately created and spread using artificial intelligence technologies (Broussard et al., 2019). This type of fake news often involves automated systems that generate, curate, or amplify false information with the intent to deceive or mislead the public (Collins et al., 2021). The creation of AI-driven fake news typically involves natural language processing algorithms that can produce convincing articles, social media posts, or even entire websites that appear legitimate (Humprecht, 2020). The purpose behind such content can range from political manipulation to financial gain, as it can easily sway public opinion, cause panic, or discredit individuals or organisations (Karnouskos, 2020).

Deepfakes, on the other hand, are a specific subset of AI-driven content that involves the use of machine learning algorithms to create highly realistic, yet entirely fabricated, audio and visual content (Masood et al., 2023; Mustak et al., 2023; Vasist & Krishnan, 2023). This technology relies on deep learning techniques, particularly generative adversarial networks (GANs), to superimpose one person's face onto another's body or alter their voice (Shen et al., 2018; Yadav & Salmani, 2019); a simplified sketch of this generator-discriminator setup is given after Fig 2 below. The implications of deepfakes are profound, as they can make it appear that someone said or did something they never did, leading to potential damage to reputations, political instability, or even threats to national security (Striuk & Kondratenko, 2023). The ability to create such convincing fake content challenges traditional methods of truth verification, making it increasingly difficult to distinguish between real and fake media (Kalpokas & Kalpokiene, 2022).

Deepfakes are highly realistic synthetic media, typically videos or images, created using artificial intelligence to convincingly superimpose someone's likeness onto another person's body or voice. This technology has advanced rapidly, making it increasingly difficult to distinguish between genuine and fabricated content. Deepfakes have become a powerful tool in the spread of misinformation, with several high-profile instances illustrating their impact. One notable example is the deepfake video of former U.S. President Barack Obama, in which his likeness was manipulated to deliver a speech he never gave. The video, created by filmmaker Jordan Peele in 2018, was initially intended to raise awareness about the potential dangers of deepfakes (Peele, n.d.).

Fig 2: Obama’s deepfake video (https://www.youtube.com/watch?v=AmUC4m6w1wo)
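For readers unfamiliar with the mechanism, the following deliberately minimal Python (PyTorch) sketch illustrates the generator-discriminator dynamic that GANs rely on: a generator learns to produce samples that a discriminator cannot tell apart from real ones. It is illustrative only; the layer sizes, random stand-in data, and training schedule are assumptions made for brevity, and real deepfake systems use far larger convolutional networks, face-alignment pipelines, and large training datasets.

    import torch
    import torch.nn as nn

    LATENT_DIM, DATA_DIM, BATCH = 16, 64, 32  # illustrative sizes only

    # Generator: maps random noise to a synthetic sample.
    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 128), nn.ReLU(),
        nn.Linear(128, DATA_DIM), nn.Tanh(),
    )
    # Discriminator: estimates the probability that a sample is real.
    discriminator = nn.Sequential(
        nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    real_data = torch.randn(512, DATA_DIM)  # stand-in for real image/audio features

    for step in range(1000):
        # Train the discriminator to separate real samples from forgeries.
        real = real_data[torch.randint(0, len(real_data), (BATCH,))]
        fake = generator(torch.randn(BATCH, LATENT_DIM)).detach()
        d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1))
                  + loss_fn(discriminator(fake), torch.zeros(BATCH, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Train the generator to fool the discriminator.
        fake = generator(torch.randn(BATCH, LATENT_DIM))
        g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

As training alternates between the two objectives, the generator's outputs become progressively harder to distinguish from the real data; deepfake face-swapping pipelines apply the same adversarial idea to footage of a specific person.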


American director Jordan Peele also created a photorealistic video of Obama to show the world how dangerous AI can be and how much more demanding truth verification has become. As shown in Fig 3, the clip opens with a deepfake Obama educating the public about misinformation before Peele splices in the original footage of himself delivering the lines. In the meantime, however, some people had already circulated the video, along with other AI-recreated fakes showing Obama and other celebrities saying nefarious things.

Fig 3: Jordan Peele shows himself in the video (Peele, n.d.)

However, its realistic portrayal underscored how easily such technology could be used to spread fake news, leading to public confusion and the potential for significant political ramifications.

Another significant instance occurred in 2020, during the COVID-19 pandemic, when a deepfake of Indian actress Deepika Padukone surfaced, showing her endorsing a false remedy for the virus. The video, which quickly went viral, was eventually debunked, but not before it contributed to the spread of dangerous misinformation. Such instances highlight the potential for deepfakes not only to distort reality but also to endanger public health and safety by spreading false information (Reuters, 2023).

Social media platforms and digital algorithms play a crucial role in the proliferation of both AI-driven fake news and deepfakes. These platforms, designed to maximise user engagement, often rely on algorithms that prioritise sensational or controversial content, inadvertently amplifying the reach of misinformation. Once fake news or deepfakes are posted, they can quickly go viral, spreading across networks and reaching millions of users within minutes. The speed and scale at which this content spreads make it challenging for fact-checkers and journalists to keep up, allowing false information to embed itself in public consciousness before it can be corrected (Sharma et al., 2022).

III. THE IMPACTS OF AI-CREATED FAKE NEWS AND DEEPFAKES ON JOURNALISM

The advent of artificial intelligence (AI) has brought about transformative changes across various sectors, including journalism. While AI has enhanced the efficiency and capabilities of newsrooms, it has also introduced unprecedented challenges. Among the most significant are the proliferation of AI-created fake news and deepfakes, which threaten to undermine the core principles of journalism: truth and integrity. These AI-driven technologies have blurred the line between reality and fabrication, making it increasingly difficult for journalists and the public to distinguish between genuine news and falsehoods (Iqbal et al., 2023). As a result, the integrity of journalism is at risk, with far-reaching implications for public trust and the role of the media in society.

Challenges in Truth Verification

The rapid advancement of artificial intelligence (AI) technologies has revolutionised many aspects of journalism, offering tools that enhance efficiency and broaden the scope of news coverage. However, these same technologies have also introduced significant challenges, particularly in the realm of truth verification. One of the primary difficulties lies in distinguishing real content from AI-generated fakes, especially as deepfake technology becomes increasingly sophisticated. Deepfakes, which involve the manipulation of audio, video, or images to create realistic but entirely fabricated content, have made it exceedingly difficult for journalists and the public to differentiate between authentic and false information (Leiser, 2022).


Journalists have traditionally relied on established methods of fact-checking, cross-referencing sources, and using their judgement to assess the credibility of information. However, the rise of AI-driven fake news has complicated this process. AI can generate false articles, photos, and videos that are nearly indistinguishable from real ones, leaving even experienced journalists struggling to verify the authenticity of content. The sheer volume of information available online further exacerbates this challenge, as newsrooms are inundated with content that must be vetted and verified in real time. This has placed an enormous burden on journalists and fact-checkers, who must now contend with a new level of complexity in their work (Cheruiyot & Ferrer-Conill, 2018).

The verification process has also become more time-consuming, as journalists must use specialised tools and software to detect AI-generated fakes. This often involves checking the metadata of images and videos, analysing patterns of dissemination on social media, and employing AI-driven fact-checking tools that can detect anomalies in content (a minimal, illustrative sketch of such a metadata check appears at the end of this section). While these methods can be effective, they are not foolproof, and the speed at which fake news spreads online often outpaces the ability to debunk it. This lag between the creation of fake content and its detection can result in false information reaching millions of people before it is corrected, causing lasting damage to the reputation of news organisations and the public's trust in journalism.

Erosion of Public Trust

The proliferation of AI-driven misinformation has had a profound impact on public trust in the media. As deepfakes and fake news become more prevalent, the public's ability to discern truth from falsehood diminishes, leading to widespread scepticism and cynicism toward news sources. The erosion of trust in journalism is particularly concerning, as a well-informed public is essential for a functioning democracy. When people cannot trust the information they receive, it undermines the foundation of informed decision-making and erodes the social contract between the media and its audience (Samuel-Okon et al., 2024).

Several case studies highlight the damaging effects of AI-driven misinformation on public trust. For example, in the run-up to the 2020 U.S. presidential election, manipulated videos of political figures, including a doctored clip of House Speaker Nancy Pelosi appearing to slur her speech, went viral on social media (CBS News, 2019). Although the video was quickly debunked, it had already been viewed millions of times, leading to widespread confusion and reinforcing existing partisan divides. This incident exemplifies how quickly AI-generated and manipulated content can spread and how difficult it is to restore trust once it has been compromised.

Another notable case occurred during the COVID-19 pandemic, when a deepfake of Indian actress Deepika Padukone surfaced online, falsely showing her endorsing a remedy for the virus. The video, which was widely circulated before it was exposed as a fake, contributed to the spread of dangerous misinformation at a time when accurate information was critical for public health. Such instances illustrate how AI-driven fake news can exploit the credibility of public figures to disseminate false information, further eroding trust in both the media and the individuals involved (Mookdarsanit & Mookdarsanit, 2021).

The long-term impact of these incidents is a growing distrust of traditional news sources, as people become increasingly uncertain about the accuracy of the information they consume. This scepticism is exacerbated by the fact that AI-generated content is often designed to look and feel like legitimate journalism, making it more difficult for the average person to identify fake news. As a result, many people turn to alternative sources of information, such as social media and partisan outlets, which can further fragment the media landscape and contribute to the spread of misinformation (Chakraborty, 2022).

Ethical Dilemmas

The rise of AI-generated content in journalism has also introduced a host of ethical dilemmas that challenge the core principles of the profession. One of the most pressing ethical issues is the responsibility of journalists to verify the authenticity of the content they report. With the increasing prevalence of AI-driven fake news and deepfakes, journalists must be more vigilant than ever in ensuring that the information they publish is accurate and trustworthy. However, this responsibility comes with significant challenges, as the pressure to deliver news quickly often conflicts with the need for thorough verification (Kaur & Sharma, 2024).

The balance between speed and accuracy is a longstanding issue in journalism, but the advent of AI-driven misinformation has made it even more difficult to navigate. In today's fast-paced news environment, journalists are expected to report breaking news as it happens, often with little time for fact-checking. This creates a tension between the imperative to be first with the story and the ethical obligation to ensure that the information is correct. The risk of inadvertently spreading AI-generated fake news is heightened by this pressure, leading to potential harm to individuals, organisations, and the public's trust in the media (Chakraborty, 2022).

Another ethical dilemma revolves around the use of AI tools in journalism itself. While AI can assist journalists in various tasks, such as content generation and data analysis, it also raises questions about the integrity of the profession. For instance, the use of AI to write news articles or generate ideas for stories could potentially compromise the originality and authenticity of journalistic work. Moreover, reliance on AI tools could lead to a homogenisation of news content, where the same AI-generated narratives are repeated across multiple outlets, reducing diversity in reporting and limiting the range of perspectives available to the public.
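Picking up the metadata-checking step mentioned under Challenges in Truth Verification above, the following minimal Python sketch reads the EXIF metadata of an image with the Pillow library. It is only a first-pass signal for a human fact-checker: missing metadata does not prove fabrication, AI-generated images can carry fabricated metadata, and the file name used here is purely illustrative.

    from PIL import Image
    from PIL.ExifTags import TAGS

    def inspect_metadata(path: str) -> dict:
        # Return a human-readable dict of EXIF tags; empty if the image has none.
        with Image.open(path) as img:
            exif = img.getexif()
            return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    if __name__ == "__main__":
        tags = inspect_metadata("submitted_photo.jpg")  # illustrative file name
        if not tags:
            print("No EXIF metadata found - flag for closer manual verification.")
        else:
            for key in ("Make", "Model", "DateTime", "Software"):
                print(key, "->", tags.get(key, "<missing>"))

In practice, newsrooms combine such checks with reverse image search, provenance standards such as C2PA content credentials, and human judgement, since no single automated signal is decisive.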


The ethical challenges posed by AI in journalism extend beyond individual journalists to news organisations and the broader media industry. Newsrooms must establish clear guidelines for the use of AI tools, ensuring that they are used responsibly and transparently. This includes disclosing when AI-generated content is used and implementing rigorous fact-checking procedures to verify the authenticity of information. Additionally, there is a need for industry-wide standards to address the ethical implications of AI in journalism, including the potential impact on employment, as AI-driven automation could reduce the need for human journalists in certain roles.

IV. AI AND JOURNALISTIC INTEGRITY: CRITICAL NIGERIAN PERSPECTIVES

In today's digital age, the rapid advancement of artificial intelligence (AI) has revolutionised various sectors, including journalism. While AI offers tremendous potential for enhancing news production and dissemination, it also presents significant challenges, particularly concerning the integrity of journalistic practices. The rise of generative AI tools has made it increasingly easy to create and spread fake news, deepfakes, and other forms of misinformation. These developments have profound implications for the credibility of media outlets and the trust that the public places in the news they consume. This section examines critical perspectives from Nigeria and Africa on how AI-driven technologies have influenced journalism, with a focus on the ethical dilemmas, challenges in truth verification, and the broader impact on democratic processes. Through the lens of recent events, it explores the complexities introduced by AI and generative AI in the journalistic field and underscores the urgent need for effective strategies to safeguard journalistic integrity in the face of these technological advancements.

In recent years, Nigeria and Africa more broadly have witnessed several instances where AI-driven technologies have been used to create and spread fake news, significantly impacting public perception and socio-political stability (Gbaden et al., 2024). One notable example occurred during the 2019 Nigerian general elections, when AI-generated fake news circulated widely on social media platforms. False reports about candidates' health, election results, and fabricated endorsements were disseminated, creating confusion and tension among voters. The AI tools used to create these fake stories made them appear credible, making it difficult for both the public and journalists to distinguish between factual information and falsehoods, thereby undermining the democratic process (Christian, 2024).

Another instance of AI-generated misinformation occurred during the COVID-19 pandemic in Africa. Deepfake videos and AI-generated articles spreading false information about the virus, including fake cures and conspiracy theories, proliferated across the continent (Okpara, 2023). In Nigeria, a deepfake video surfaced in which a supposed health official falsely claimed that the virus was a government hoax aimed at controlling the population. This misinformation spread quickly, leading to public scepticism about the pandemic's severity and the government's response, which hindered efforts to contain the virus and protect public health (Okpara, 2023).

Another example can be seen in the widespread protests across Africa against police brutality, particularly during Nigeria's #EndSARS movement in 2020. AI-generated fake news was used to discredit the protests by creating and spreading false narratives about violent actions by protesters or fabricated government statements. These AI-generated images and videos were shared on social media, causing confusion and fear among the public. This not only disrupted the peaceful nature of the protests but also complicated efforts by journalists and fact-checkers to provide accurate coverage, further eroding public trust in the media and the authenticity of information being reported (Oladele & Ayetiran, 2023).

A more recent example was seen in the 2023 Nigerian general elections, which highlighted the growing influence of AI technologies, particularly in the creation and dissemination of deepfakes that significantly complicated the work of journalists (Ekpang et al., 2023). During the elections, AI-generated content was used to manipulate public perception, spread misinformation, and create fake narratives around political candidates. For instance, a deepfake audio purportedly capturing a conversation between the Labour Party's presidential candidate, Peter Obi, and Bishop David Oyedepo was circulated widely. The audio, which was presented as a genuine conversation urging religious votes, was later dismissed as a deepfake by Obi and his party. Despite the denials, the audio had already been viewed over 10.3 million times, illustrating how quickly AI-generated content can spread and influence public opinion. This scenario placed immense pressure on journalists to verify the authenticity of such content, making their work more strenuous and time-consuming as they navigated the challenges of distinguishing between real and fabricated information (Eleanya, 2023).

Similarly, another instance involved an audio clip allegedly featuring Atiku Abubakar, Governor Ifeanyi Okowa, and Aminu Tambuwal plotting to rig the presidential election by compromising key officials. The deepfake audio, which was released shortly before the election, was designed to damage the reputations of the politicians involved and sow doubt among the electorate. Like the Obi audio, this deepfake required extensive verification efforts from journalists, who had to counter the spread of false information in real time. The challenge of identifying and disproving these deepfakes underscored the complexities introduced by generative AI in the journalistic field, particularly during high-stakes events like national elections. The rapid proliferation of such content has made truth verification more arduous, highlighting the urgent need for more robust tools and strategies to combat the spread of AI-generated misinformation in journalism (Eleanya, 2023).


V. CONCLUSION

The convergence of AI-driven fake news and deepfakes has ushered in an era of unprecedented challenges for journalists (Kalpokas & Kalpokiene, 2022). The ability to create highly convincing synthetic media has eroded public trust in traditional news sources, making it increasingly difficult to discern fact from fiction. This crisis of credibility necessitates a fundamental rethinking of journalistic practices, including the adoption of advanced detection tools, rigorous verification processes, and transparent reporting methodologies, which together can strengthen journalistic work and restore credibility (Whyte, 2020). Moreover, fostering media literacy among the public is crucial to equip individuals with the skills to critically evaluate information and resist the spread of misinformation (Kalpokas & Kalpokiene, 2022).

In conclusion, the future of journalism hinges on a proactive response to the threats posed by AI-generated disinformation. By investing in technological advancements, upholding ethical standards, and prioritising transparency, the media industry can reclaim its role as a trusted source of information. Collaboration between journalists, technologists, and policymakers is essential to develop comprehensive strategies for combating fake news and deepfakes, safeguarding democratic discourse, and preserving the integrity of journalism in the digital age (Whyte, 2020).

REFERENCES

[1]. Al Jazeera. (2022, November 20). FIFA makes $7.5bn in revenue for Qatar World Cup. Www.aljazeera.com. https://www.aljazeera.com/sports/2022/11/20/fifa-revenue-hits-7-5b-for-current-world-cup-period.
[2]. Arets, D., Brugman, M., & de Cooker, J. (2024). AI-powered editorial systems and organizational changes. SMPTE Motion Imaging Journal, 133(2), 58-65.
[3]. Broussard, M., Diakopoulos, N., Guzman, A. L., Abebe, R., Dupagne, M., & Chuan, C. H. (2019). Artificial intelligence and journalism. Journalism & Mass Communication Quarterly, 96(3), 673-695.
[4]. Brynjolfsson, E., Rock, D., & Syverson, C. (2019). Artificial intelligence and the modern productivity paradox. The Economics of Artificial Intelligence: An Agenda, 23, 23-57.
[5]. CBS News. (2019, May 26). Doctored Nancy Pelosi video highlights threat of "deepfake" tech. Www.cbsnews.com. https://www.cbsnews.com/news/doctored-nancy-pelosi-video-highlights-threat-of-deepfake-tech-2019-05-25/
[6]. Chakraborty, U. (2020). Artificial Intelligence for All: Transforming Every Aspect of Our Life. BPB Publications.
[7]. Cheruiyot, D., & Ferrer-Conill, R. (2018). "Fact-checking Africa": Epistemologies, data and the expansion of journalistic discourse. Digital Journalism, 6(8), 964-975.
[8]. Christian, E. C. (2024). Explainability imperative of generative artificial intelligence: Navigating the moral dilemma of AI in Nigeria and charting a path for the future. Universal Library of Arts and Humanities, 1(2).
[9]. Dalalah, D., & Dalalah, O. M. (2023). The false positives and false negatives of generative AI detection tools in education and academic research: The case of ChatGPT. The International Journal of Management Education, 21(2), 100822.
[10]. Dauvergne, P. (2020). AI in the Wild: Sustainability in the Age of Artificial Intelligence. MIT Press.
[11]. Diakopoulos, N. (2019). Automating the News: How Algorithms Are Rewriting the Media. Harvard University Press.
[12]. Dupont, A. (2013). Intelligence for the twenty-first century. In Twenty-First Century Intelligence (pp. 15-39). Routledge.
[13]. Ekpang, J. E., Iyorza, S., & Ekpang, P. O. (2023). Social media and artificial intelligence: Perspectives on deepfakes use in Nigeria's 2023 general elections. KIU Interdisciplinary Journal of Humanities and Social Sciences, 4(2), 109-124.
[14]. Eleanya, F. (2023). "How AI, deepfakes are deployed to shape political outcomes". Business Day. https://businessday.ng/technology/article/how-ai-deepfakes-are-deployed-to-shapepolitical-outcomes/
[15]. Gbaden, C. J., Gambo, S., & Shem, W. (2024). Challenges and prospects of artificial intelligence in Nigerian journalism practice: A narrative review. ALSYSTECH Journal of Education Technology, 2(2), 110-124.
[16]. Helberger, N., & Diakopoulos, N. (2023). The European AI Act and how it matters for research into AI in media and journalism. Digital Journalism, 11(9), 1751-1760.
[17]. Helberger, N., van Drunen, M., Moeller, J., Vrijenhoek, S., & Eskens, S. (2022). Towards a normative perspective on journalistic AI: Embracing the messy reality of normative ideals. Digital Journalism, 10(10), 1605-1626.
[18]. Hirsch, A. (2018, July 25). Mesut Özil reminds us why minorities have more than one identity. The Guardian. https://amp.theguardian.com/commentisfree/2018/jul/25/mesut-ozil-minorities-identity-germany-football-turkish-heritage?__twitter_impression=true.


[19]. Humprecht, E. (2020). How do they debunk "fake news"? A cross-national comparison of transparency in fact checks. Digital Journalism, 8(3), 310-327.
[20]. Iqbal, A., Shahzad, K., Khan, S. A., & Chaudhry, M. S. (2023). The relationship of artificial intelligence (AI) with fake news detection (FND): A systematic literature review. Global Knowledge, Memory and Communication.
[21]. Kaklij, V., Shah, V., Kunal, M., & Mandawkar, M. U. (2020). Microlearning based content-curation using artificial intelligence for learning experience platform: A survey. IJRAR - International Journal of Research and Analytical Reviews, E-ISSN 2348-1269.
[22]. Kalpokas, I., & Kalpokiene, J. (2022). From GANs to deepfakes: Getting the characteristics right. In Deepfakes: A Realistic Assessment of Potentials, Risks, and Policy Regulation (pp. 29-39). Cham: Springer International Publishing.
[23]. Karnouskos, S. (2020). Artificial intelligence in digital media: The era of deepfakes. IEEE Transactions on Technology and Society, 1(3), 138-147.
[24]. Kertysova, K. (2018). Artificial intelligence and disinformation: How AI changes the way disinformation is produced, disseminated, and can be countered. Security and Human Rights, 29(1-4), 55-81.
[25]. Kraus, N., Kraus, K., Shtepa, O., Hryhorkiv, M., & Kuzmuk, I. (2022). Artificial intelligence in established of industry 4.0. WSEAS Transactions on Business and Economics, (19), 1884-1900.
[26]. Leiser, M. R. (2022). Bias, journalistic endeavours, and the risks of artificial intelligence. In Artificial Intelligence and the Media (pp. 8-32). Edward Elgar Publishing.
[27]. Ljepava, N. (2022). AI-enabled marketing solutions in marketing decision making: AI application in different stages of marketing process. TEM Journal, 11(3), 1308-1315.
[28]. Lyjak, A. (2023, January 10). The finances behind the 2022 World Cup. Michigan Journal of Economics. https://sites.lsa.umich.edu/mje/2023/01/10/the-finances-behind-the-2022-world-cup/
[29]. Mariani, M. M., Perez-Vega, R., & Wirtz, J. (2022). AI in marketing, consumer research and psychology: A systematic literature review and research agenda. Psychology & Marketing, 39(4), 755-776.
[30]. Masood, M., Nawaz, M., Malik, K. M., Javed, A., Irtaza, A., & Malik, H. (2023). Deepfakes generation and detection: State-of-the-art, open challenges, countermeasures, and way forward. Applied Intelligence, 53(4), 3974-4026.
[31]. Miroshnichenko, A. (2018). AI to bypass creativity. Will robots replace journalists? (The answer is "yes"). Information, 9(7), 183.
[32]. Mookdarsanit, P., & Mookdarsanit, L. (2021). The COVID-19 fake news detection in Thai social texts. Bulletin of Electrical Engineering and Informatics, 10(2), 988-998.
[33]. Moravec, V., Hynek, N., Gavurova, B., & Kubak, M. (2024). Everyday artificial intelligence unveiled: Societal awareness of technological transformation. Oeconomia Copernicana, 15(2), 367-406.
[34]. Mustak, M., Salminen, J., Mäntymäki, M., Rahman, A., & Dwivedi, Y. K. (2023). Deepfakes: Deceptions, mitigations, and opportunities. Journal of Business Research, 154, 113368.
[35]. Nalini, M., Radhakrishnan, D. P., Yogi, G., Santhiya, S., & Harivardhini, V. (2021). Impact of artificial intelligence (AI) on marketing. Int. J. of Aquatic Science, 12(2), 3159-3167.
[36]. Okpara, N. (2023). COVID-19 reportage in Nigeria: Digital media ethics, viral lies, and lessons learned. In Humanistic Crisis Management: Lessons Learned from COVID-19 (pp. 139-162). Cham: Springer International Publishing.
[37]. Oksymets, V. (2024). The impact of artificial intelligence on journalism practices and content creation (Doctoral dissertation).
[38]. Oladele, T. M., & Ayetiran, E. F. (2023). Social unrest prediction through sentiment analysis on Twitter using support vector machine: Experimental study on Nigeria's #EndSARS. Open Information Science, 7(1), 20220141.
[39]. Papa, R., Jackson, K. M., Visone, J. D., Black, G., Abernathy, D., Dereshiwsky, M., & Petroff, P. D. (2021). AI transforms twentieth-century learning. In Artificial Intelligence, Human Agency and the Educational Leader (pp. 1-32). Cham: Springer International Publishing.
[40]. Peele, J. (n.d.). Obama Deep Fake. Ars Electronica Center. Retrieved August 14, 2024, from https://ars.electronica.art/center/en/obama-deep-fake/
[41]. Peele, J. (n.d.). Obama Deep Fake. Ars Electronica Center. Retrieved August 14, 2024, from https://ars.electronica.art/center/en/obama-deep-fake/
[42]. Reuters. (2023, December 14). Bollywood star or deepfake? AI floods social media in Asia. The Hindu. https://www.thehindu.com/sci-tech/technology/bollywood-star-deepfake-ai-floods-social-media-asia/article67636885.ece
[43]. Reuters. (2023, December 14). Bollywood star or deepfake? AI floods social media in Asia. The Hindu. https://www.thehindu.com/sci-tech/technology/bollywood-star-deepfake-ai-floods-social-media-asia/article67636885.ece
[44]. Rivera, J. (2018, November 6). "Shut Up and Dribble" review: LeBron's gift to all the haters. GQ. https://www.gq.com/story/shut-up-and-dribble-review
[45]. Russell, S. J., & Norvig, P. (2016). Artificial Intelligence: A Modern Approach. Pearson.


[46]. Samuel-Okon, A. D., Akinola, O. I., Olaniyi, O. O., Olateju, O. O., & Ajayi, S. A. (2024). Assessing the effectiveness of network security tools in mitigating the impact of deepfakes AI on public trust in media. Archives of Current Research International, 24(6), 355-375.
[47]. Sharma, K., Qian, F., Jiang, H., Ruchansky, N., Zhang, M., & Liu, Y. (2019). Combating fake news: A survey on identification and mitigation techniques. ACM Transactions on Intelligent Systems and Technology (TIST), 10(3), 1-42.
[48]. Shen, T., Liu, R., Bai, J., & Li, Z. (2018). "Deep fakes" using generative adversarial networks (GAN). Noiselab, University of California, San Diego.
[49]. Singh, B., & Pathania, A. K. (2024). AI-driven content creation and curation in digital marketing education: Tools and techniques. International Journal of Engineering Science and Humanities, 14(Special Issue 1), 14-26.
[50]. Singh, S. V., & Hiran, K. K. (2022). The impact of AI on teaching and learning in higher education technology. Journal of Higher Education Theory and Practice, 22(13), 135.
[51]. McDonald, S. N. (2018, November 3). "Shut Up and Dribble" is more than just a public clapback from LeBron James to Laura Ingraham. Andscape. https://andscape.com/features/shut-up-and-dribble-showtime-more-than-clapback-from-lebron-james-to-laura-ingraham/
[52]. Striuk, O. S., & Kondratenko, Y. P. (2023). Generative adversarial networks in cybersecurity: Analysis and response. In Artificial Intelligence in Control and Decision-Making Systems: Dedicated to Professor Janusz Kacprzyk (pp. 373-388). Cham: Springer Nature Switzerland.
[53]. Thomson, T. J., Angus, D., Dootson, P., Hurcombe, E., & Smith, A. (2022). Visual mis/disinformation in journalism and public communications: Current verification practices, challenges, and future opportunities. Journalism Practice, 16(5), 938-962.
[54]. Vasist, P. N., & Krishnan, S. (2023). Engaging with deepfakes: A meta-synthesis from the perspective of social shaping of technology theory. Internet Research, 33(5), 1670-1726.
[55]. Whyte, C. (2020). Deepfake news: AI-enabled disinformation as a multi-level public policy challenge. Journal of Cyber Policy, 5(2), 199-217.
[56]. Yadav, D., & Salmani, S. (2019, May). Deepfake: A survey on facial forgery technique using generative adversarial network. In 2019 International Conference on Intelligent Computing and Control Systems (ICCS) (pp. 852-857). IEEE.
