AI-Powered Resilience: Combating Sophisticated, Coordinated, and Adversarial Multi-Modal Disinformation Campaigns

Executive Summary
The digital age has witnessed a significant surge in the prevalence and sophistication of disinformation campaigns. These campaigns, often characterized by their coordinated nature, adversarial intent, and use of multiple media formats, pose a growing threat to individuals, organizations, and democratic processes. Simultaneously, Artificial Intelligence (AI) has emerged as a powerful force, playing a dual role as both an enabler of these sophisticated disinformation tactics and a critical tool in their detection and mitigation. This report delves into the intricate landscape of AI-powered disinformation, examining its characteristics, the ways in which AI facilitates its creation and spread, and the burgeoning field of AI-driven resilience aimed at countering these threats. The analysis encompasses the technical methodologies employed in AI-based detection, the inherent challenges and limitations of these approaches, the crucial ethical considerations that must guide their development and deployment, and the future trends shaping the battle against disinformation. Ultimately, the report underscores the necessity of a multi-faceted strategy that integrates technological innovation with robust policy frameworks, comprehensive educational initiatives, and the indispensable expertise of human analysts to foster a more resilient and trustworthy information ecosystem.
Defining the Modern Disinformation Threat:
Disinformation campaigns represent a significant cybersecurity challenge in the contemporary digital environment. These campaigns are defined as deliberate efforts to disseminate false or misleading information with the intention to deceive or manipulate target audiences for various objectives, including political or financial gain.1 While the fundamental concept of spreading false information is not new, the scale and sophistication of modern campaigns have been amplified by technological advancements.
Sophisticated disinformation campaigns often employ a range of intricate techniques. These include the creation of false narratives, where stories that are grossly distorted or entirely fabricated are crafted and propagated. Emotional manipulation is another common tactic, leveraging content that triggers strong emotions such as fear or anger to increase user engagement and drive broader dissemination of the false information. Furthermore, these campaigns frequently exploit existing societal or political divisions, aiming to sow discord and confusion by intensifying these fractures.1 The intentional and calculated nature of these strategies distinguishes sophisticated disinformation from mere misinformation, which is the unintentional sharing of inaccurate information.2
The element of coordination is crucial in amplifying the reach and impact of disinformation. Coordinated campaigns involve organized groups or networks of actors working in concert to spread specific messages and mislead target audiences. These operations often rely on a mix of authentic and inauthentic social media accounts, including bots and fake personas, to create a deceptive impression of widespread support or opposition to a particular narrative.4 The goal of such coordination is to manipulate public debate for strategic purposes, whether financial or political.8
Adding another layer of complexity, many disinformation campaigns are adversarial in nature. Adversarial campaigns are strategic deception efforts orchestrated to confuse, paralyze, and polarize an audience. These campaigns often involve state or non-state actors with malicious intent, seeking to undermine trust in institutions, damage reputations, or incite social unrest.1 The tactics employed can range from circulating incorrect information to creating uncertainty and undermining the legitimacy of official information sources.9
Contemporary disinformation campaigns are increasingly characterized by their multi-modal nature. This involves the strategic use of a combination of different media formats, including text, images, audio, and video, to enhance the believability and overall engagement of the false information.3 This multi-modal content often includes manipulated media such as deepfakes, which utilize AI algorithms to generate highly realistic but ultimately false images and videos.15 The integration of various modalities can significantly increase the persuasiveness of disinformation and make it considerably more challenging to debunk.19
The role of AI in the evolution of disinformation tactics is pivotal. AI technologies, particularly Natural Language Processing (NLP) and generative AI models, are now being leveraged to create and disseminate falsehoods with remarkable speed and at a massive scale.14 AI's capabilities extend to generating highly convincing fake content, encompassing everything from sophisticated deepfakes that can realistically depict individuals saying or doing things they never did, to AI-generated articles that are virtually indistinguishable from content produced by credible news organizations.1 Furthermore, AI plays a significant role in facilitating coordinated inauthentic behavior through the automated creation and management of fake online personas, the deployment of bots to amplify false narratives, and the overall automation of content dissemination across various digital platforms.1 The confluence of these factors has significantly lowered the barrier to entry for creating and spreading sophisticated disinformation, enabling a wider range of actors, including non-state entities, to launch impactful campaigns.
AI as an Enabler of Disinformation:
Artificial Intelligence has become a potent tool in the hands of those seeking to create and disseminate disinformation. Its capabilities in content generation, facilitating coordination, and enhancing personalization have significantly amplified the reach and impact of these malicious campaigns.
AI, particularly through Large Language Models (LLMs), has revolutionized the creation of text-based disinformation. These models can generate fluent and seemingly accurate text, making it considerably easier to produce believable fake news articles, social media posts, and other forms of written content designed to deceive.14 The ability of AI to mimic human writing styles and even adapt its tone to suit different audiences makes it a powerful tool for crafting persuasive falsehoods.
The landscape of image-based disinformation has also been transformed by generative AI tools such as Generative Adversarial Networks (GANs) and diffusion models. These technologies can produce highly realistic fake images that are often indistinguishable from authentic photographs, even to the trained eye.1 Projects like ThisPersonDoesNotExist.com, which utilizes GANs to create realistic but entirely synthetic faces, have demonstrated the potential for AI to generate seemingly credible visual content for disinformation campaigns.17
In the realm of audio, AI-powered voice cloning technologies have emerged as a significant enabler of disinformation. These technologies can create realistic audio deepfakes, allowing malicious actors to impersonate individuals and spread false statements attributed to them.29 The ability to convincingly replicate a person's voice opens up new avenues for scams, hoaxes, and political manipulation. The creation of fake audio messages from public figures, as seen in examples targeting elections, underscores the potential for this technology to mislead and influence public opinion.31
Furthermore, advancements in AI have extended to video generation, with tools like OpenAI's Sora demonstrating the capability to create highly detailed and realistic fake videos from simple text prompts.27 This capability represents a significant leap in the sophistication of AI-generated disinformation, as realistic fake video footage can be exceptionally persuasive and difficult to debunk. The increasing sophistication and accessibility of these AI-powered content generation tools across text, image, audio, and video formats are collectively amplifying the potential scale and impact of disinformation campaigns, making it easier for malicious actors to create and disseminate compelling falsehoods across the digital landscape.
AI also plays a crucial role in facilitating Coordinated Inauthentic Behavior (CIB). AI-powered bots can rapidly spread disinformation across various online platforms, operating simultaneously and creating a deceptive illusion of widespread consensus around a particular narrative.1 These bots can be programmed to engage with content, like and share posts, and even participate in online discussions, making it appear as though there is genuine grassroots support for a particular viewpoint. Moreover, AI can be employed to generate extensive networks of fake personas and websites, further enhancing the believability of disinformation messages.11 These fake online identities can be used to amplify narratives, lend credibility to false claims, and create echo chambers that reinforce disinformation. Additionally, AI-driven social bots are becoming increasingly adept at mimicking human behavior online, making them more challenging to identify as part of coordinated inauthentic activity.25 By learning from vast datasets of human interactions, these bots can generate more natural-sounding text, engage in more realistic conversations, and exhibit patterns of behavior that are less easily distinguishable from those of genuine users. This automation and scaling of CIB through AI enable malicious actors to create the false impression that their narratives have broad support and legitimacy, while simultaneously making it more difficult to trace the true origins of these campaigns.
Beyond content generation and coordination, AI significantly enhances the scale and personalization of disinformation campaigns. AI algorithms possess the capability to analyze vast quantities of user data harvested from social media platforms, search engines, and other online sources. This data analysis allows malicious actors to precisely target specific individuals or demographic groups with disinformation messages that are tailored to their unique biases, vulnerabilities, and interests.1 Furthermore, AI-powered micro-targeting and geofencing techniques can be employed to deliver disinformation to highly specific audiences based on their online behavior, geographic location, and even their social connections.1 By understanding the psychological profiles and online habits of their targets, disinformation campaigns can craft messages that are more likely to resonate and be accepted as true. AI can also personalize the content of disinformation itself, adapting the language, tone, and even the specific claims made to align with the individual's pre-existing beliefs and values.23 This level of personalization significantly increases the persuasiveness of disinformation and the likelihood that individuals will engage with it, believe it, and share it further within their networks. The ability of AI to scale and personalize disinformation in this manner poses a profound threat to public opinion, trust in institutions, and the integrity of democratic processes, as individuals are increasingly exposed to highly targeted and seemingly credible falsehoods.
AI-Driven Resilience: Detection and Countermeasures:
In response to the growing threat of AI-powered disinformation, the field of AI itself is being leveraged to develop sophisticated detection and countermeasures. Various AI methodologies are proving effective in identifying different forms of disinformation, ranging from text and images to audio and coordinated inauthentic behavior.
For text-based disinformation, Natural Language Processing (NLP) techniques offer powerful analytical capabilities. NLP can analyze the linguistic characteristics of text, including its language, sentiment, and overall structure, to identify patterns that are indicative of disinformation.14 These patterns can include the use of emotionally charged language, grammatical inconsistencies, or the presence of claims that contradict established facts. Furthermore, machine learning algorithms, encompassing supervised learning, deep learning, and advanced transformer models like BERT, are being trained to classify news articles and social media posts as either genuine or fake based on these linguistic features and comparisons with verified information.39 Explainable AI (XAI) methods are also contributing to this effort by helping to identify the specific linguistic features that influence a model's prediction, thereby improving the transparency and trustworthiness of these detection systems and aiding in the identification and mitigation of potential biases.42
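As a toy illustration of the surface-level linguistic cues mentioned above (emotionally charged wording, exclamation density, all-caps emphasis), the following sketch scores short texts against a small hand-picked lexicon. This is purely illustrative: the word list and weights are invented assumptions, and a production system would learn such signals from labeled corpora with models such as BERT rather than hard-code them.

```python
import re

# Toy lexicon of emotionally charged terms. A real system would learn such
# cues from labeled data rather than hard-code them (assumption for illustration).
CHARGED_TERMS = {"shocking", "outrage", "destroy", "secret", "exposed", "terrifying"}

def linguistic_cues(text: str) -> dict:
    """Extract simple surface features often cited as disinformation signals."""
    words = re.findall(r"[a-z']+", text.lower())
    n_words = max(len(words), 1)
    raw_tokens = text.split()
    return {
        "charged_ratio": sum(w in CHARGED_TERMS for w in words) / n_words,
        "exclaim_density": text.count("!") / max(len(text), 1),
        "caps_ratio": sum(t.isupper() and len(t) > 1 for t in raw_tokens)
                      / max(len(raw_tokens), 1),
    }

def cue_score(text: str) -> float:
    """Combine features into a rough 0-1 'suspicion' score (weights illustrative)."""
    f = linguistic_cues(text)
    score = 6.0 * f["charged_ratio"] + 20.0 * f["exclaim_density"] + 2.0 * f["caps_ratio"]
    return min(score, 1.0)

sensational = "SHOCKING secret EXPOSED!!! They will destroy everything!"
neutral = "The committee published its quarterly budget report on Tuesday."
print(cue_score(sensational) > cue_score(neutral))
```

Such surface features alone produce many false positives (legitimate breaking news can also be emotive), which is why modern detectors combine them with semantic comparison against verified information.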
AI is also playing a crucial role in the detection of image and video disinformation, particularly deepfakes. AI models can be trained to identify subtle inconsistencies in facial features, unnatural vocal patterns, and other visual or auditory artifacts that are often present in manipulated media.18 Frequency domain analysis techniques can be employed to detect unique "fingerprints" left by AI image generation processes, providing a valuable method for distinguishing between real and synthetic images.43 Convolutional Neural Networks (CNNs) have demonstrated the ability to learn the granular characteristics of AI-generated videos, enabling them to effectively identify synthetic video content.32 Additionally, AI-powered tools can analyze the metadata associated with images and videos and even embed digital watermarks as a means of verifying their authenticity and detecting subsequent manipulations.18
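The frequency-domain idea can be sketched with a toy spectral statistic: upsampling layers in some generator architectures leave periodic, high-frequency artifacts, so the fraction of spectral energy above a radial cutoff can separate a smooth "natural-like" image from a block-upsampled one. The images below are synthetic stand-ins constructed for the demonstration, not real photographs or GAN outputs, and the 0.25 cycles/pixel cutoff is an illustrative assumption.

```python
import numpy as np

def high_freq_fraction(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of 2-D spectral energy above a radial frequency cutoff (cycles/pixel)."""
    spec = np.abs(np.fft.fft2(img)) ** 2
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    r = np.hypot(fx, fy)
    return float(spec[r > cutoff].sum() / spec.sum())

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
X, Y = np.meshgrid(x, x)
natural_like = np.sin(2 * X) + np.cos(3 * Y)  # smooth, low-frequency content only

# Nearest-neighbour upsampling of noise mimics the periodic artifacts that
# some generator upsampling layers leave behind (illustrative stand-in).
synthetic_like = np.repeat(np.repeat(rng.normal(size=(16, 16)), 4, axis=0), 4, axis=1)

print(high_freq_fraction(synthetic_like) > high_freq_fraction(natural_like))
```

Real forensic systems learn these spectral "fingerprints" from large corpora of generator outputs rather than relying on a single fixed cutoff.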
The detection of audio deepfakes and manipulated audio media is another area where AI is proving invaluable. AI-powered voice analysis techniques can identify anomalies in speech patterns, such as unnatural pauses or changes in intonation, that may indicate artificial generation.44 Acoustic analysis, combined with machine learning algorithms, can recognize patterns and inconsistencies in audio that are characteristic of synthetic speech.29 Specialized tools, such as Pindrop® Pulse, utilize sophisticated AI algorithms to analyze audio segments for synthetic elements with a high degree of accuracy.44 These AI models are often trained on extensive datasets of both real and AI-generated speech to enhance their ability to differentiate between the two.
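As a toy proxy for the intonation analysis described above, the sketch below measures how much the frame-level zero-crossing rate (a crude pitch surrogate) varies across a clip; a perfectly flat tone stands in for over-regular synthetic delivery. All signals here are synthesized for the demonstration, and real detectors use learned acoustic embeddings rather than this single statistic.

```python
import numpy as np

def frame_zcr(signal: np.ndarray, frame: int = 400) -> np.ndarray:
    """Zero-crossing rate per frame: a crude proxy for pitch/intonation movement."""
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    return (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)

def intonation_variability(signal: np.ndarray) -> float:
    """Std-dev of frame-level ZCR; unnaturally flat delivery scores near zero."""
    return float(frame_zcr(signal).std())

sr = 8000
t = np.arange(sr * 2) / sr
# "Natural-ish" stand-in: pitch glides over time. Flat tone stands in for
# the over-regular prosody sometimes seen in synthetic speech (assumption).
natural_like = np.sin(2 * np.pi * (120 + 40 * np.sin(2 * np.pi * 0.7 * t)) * t)
flat_tone = np.sin(2 * np.pi * 140 * t)

print(intonation_variability(natural_like) > intonation_variability(flat_tone))
```

The design choice here mirrors the general principle in the text: the detector does not need to model speech content, only statistical regularities that differ between natural and generated audio.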
AI is also being effectively used to analyze Coordinated Inauthentic Behavior (CIB). Machine learning models can examine the network connections and online behavior of social media accounts to identify clusters of accounts exhibiting unnatural interaction patterns, which are often indicative of CIB.25 Graph-based anomaly detection and time-series analysis are specific AI techniques that can reveal coordinated activity typical of bot-driven disinformation campaigns.25 Furthermore, multi-modal data fusion, which involves combining and analyzing text, image, and behavioral data associated with accounts, can significantly enhance the accuracy of CIB detection.25 AI can also identify patterns in how content is shared across networks, such as instances of co-retweeting or the sharing of identical URLs by multiple accounts in a short period, which can be strong indicators of coordinated activity.46
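The co-sharing signal described above, identical URLs posted by multiple accounts within a short window, can be sketched in plain Python. The event log, account names, and 60-second window below are hypothetical values chosen for the illustration.

```python
from collections import defaultdict
from itertools import combinations

# Toy event log: (account, url, timestamp_seconds). In practice these would
# come from platform sharing data; the sample below is hypothetical.
events = [
    ("acct_a", "http://example.com/story", 0),
    ("acct_b", "http://example.com/story", 20),
    ("acct_c", "http://example.com/story", 45),
    ("acct_d", "http://example.com/story", 86400),  # a day later: organic-looking
    ("acct_e", "http://example.com/other", 10),
]

def coordinated_pairs(events, window=60):
    """Pairs of accounts sharing the same URL within `window` seconds --
    a common co-sharing indicator of coordinated inauthentic behaviour."""
    by_url = defaultdict(list)
    for acct, url, ts in events:
        by_url[url].append((ts, acct))
    pairs = set()
    for shares in by_url.values():
        shares.sort()
        for (t1, a1), (t2, a2) in combinations(shares, 2):
            if a1 != a2 and abs(t2 - t1) <= window:
                pairs.add(frozenset((a1, a2)))
    return pairs

print(sorted(sorted(p) for p in coordinated_pairs(events)))
```

In a full system these pairs would form edges of a coordination graph, on which clustering or graph-based anomaly detection identifies the suspicious communities mentioned above.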
The proactive identification of disinformation campaigns is another crucial area where AI is being applied. AI can power early warning systems that continuously monitor online platforms for emerging malicious trends, sudden spikes in specific narratives, and other indicators that may signal the beginning of a coordinated disinformation effort.23 Predictive models, driven by AI algorithms, can analyze the patterns of information spread and identify content that has the potential to become highly viral, allowing for early intervention by fact-checkers and platform administrators.38 Additionally, AI-powered systems can track the evolution of disinformation narratives over time and identify connections between seemingly disparate pieces of false information, providing a more holistic understanding of ongoing campaigns.48
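A minimal version of the spike-monitoring idea: flag any day whose narrative mention count exceeds the trailing week's mean by several standard deviations. The counts, window, and threshold below are illustrative assumptions, not values from any deployed system.

```python
import statistics

def spike_alerts(counts, window=7, threshold=3.0):
    """Indices of days whose count exceeds the trailing mean by `threshold`
    standard deviations -- a minimal early-warning heuristic."""
    alerts = []
    for i in range(window, len(counts)):
        hist = counts[i - window:i]
        mu = statistics.fmean(hist)
        sigma = statistics.pstdev(hist) or 1.0  # guard against zero variance
        if (counts[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Hypothetical daily mentions of one narrative: quiet baseline, then a burst.
daily_mentions = [12, 9, 11, 10, 13, 8, 12, 11, 10, 250, 240]
print(spike_alerts(daily_mentions))
```

Note that the day after the burst is not re-flagged, because the spike itself inflates the trailing statistics; production systems handle this with robust baselines or change-point detection rather than a plain z-score.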
Automated fact-checking and verification platforms represent a significant application of AI in combating disinformation. AI-powered tools can automatically extract factual claims from large volumes of text, compare these claims against existing databases of verified information and reliable sources, and provide a determination of their accuracy.49 Platforms like ClaimBuster and Full Fact utilize AI algorithms to identify claims that are likely to be false or misleading and flag them for further review or automatic debunking.51 AI can also assist human fact-checkers by identifying and clustering similar claims, as well as by providing relevant contextual information, thereby increasing the efficiency and scalability of the fact-checking process.49
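A heavily simplified sketch of the claim-matching step: compare an incoming claim against a small database of already fact-checked claims using token overlap. Platforms such as ClaimBuster use far richer semantic matching; the claims, verdicts, and similarity threshold below are hypothetical.

```python
import re

# Hypothetical mini-database of previously fact-checked claims and verdicts.
VERIFIED = {
    "the moon landing was filmed in a studio": "false",
    "vitamin c cures the common cold": "false",
    "the eiffel tower is in paris": "true",
}

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z']+", text.lower()))

def match_claim(claim, db=VERIFIED, min_sim=0.5):
    """Return (verdict, best_match) for the most similar verified claim,
    using Jaccard token overlap as a stand-in for semantic matching."""
    best, best_sim = None, 0.0
    claim_toks = tokens(claim)
    for known in db:
        known_toks = tokens(known)
        sim = len(claim_toks & known_toks) / len(claim_toks | known_toks)
        if sim > best_sim:
            best, best_sim = known, sim
    if best_sim >= min_sim:
        return db[best], best
    return "unverified", None

verdict, matched = match_claim("Moon landing filmed in a studio")
print(verdict)
```

Clustering near-duplicate claims with exactly this kind of similarity measure is also how AI assists human fact-checkers in triaging large volumes of repeated falsehoods.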
Finally, AI is being explored for its potential in generating counter-narratives to combat the spread of harmful disinformation. AI can be used to create non-aggressive, fact-based responses to hate speech and misinformation, providing alternative perspectives and accurate information.14 Large Language Models (LLMs) can be leveraged to generate persuasive counter-arguments that are contextually relevant and incorporate background knowledge from diverse sources.55 AI-powered chatbots can also be deployed to engage with individuals who have been exposed to conspiracy theories or other forms of disinformation, providing accurate information and attempting to debunk false claims.36 This capability to generate and disseminate counter-narratives at scale offers a promising avenue for mitigating the impact of disinformation campaigns.
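One minimal, hedged sketch of the counter-narrative idea: assemble a grounded prompt for an LLM from a false claim and a list of verified facts. The actual model call is omitted, and the template wording and example inputs are illustrative only, not a prescribed format from any of the systems cited above.

```python
def counter_narrative_prompt(claim, facts, tone="calm, non-confrontational"):
    """Assemble an LLM prompt for a fact-based counter-narrative.
    (Illustrative template only; the model call itself is omitted.)"""
    fact_lines = "\n".join(f"- {f}" for f in facts)
    return (
        f'A false claim is circulating: "{claim}"\n'
        f"Verified facts:\n{fact_lines}\n"
        f"Write a short, {tone} response that corrects the claim, "
        "cites only the facts above, and avoids ridiculing the reader."
    )

prompt = counter_narrative_prompt(
    "5G towers spread viruses",
    ["Viruses cannot propagate over radio waves.",
     "Public-health agencies have found no link between 5G and illness."],
)
print(prompt)
```

Constraining the model to cite only the supplied facts reflects the requirement in the text that counter-narratives be non-aggressive and grounded, rather than free-form generation that could itself hallucinate.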
Challenges and Limitations in AI-Powered Disinformation Defense:
Despite the significant potential of AI in combating disinformation, there are numerous challenges and limitations that must be addressed to ensure its effectiveness and responsible use.
One of the primary challenges is the constantly evolving sophistication of disinformation tactics. Those who create and spread disinformation are continuously adapting their techniques to evade detection by AI systems.36 This includes the increasing use of multi-modal formats, which combine text, images, audio, and video in novel ways to enhance believability, making detection more difficult for existing AI tools that may be primarily trained on single modalities.14 Furthermore, sophisticated campaigns often employ subtle manipulation techniques and nuanced language that AI models, which typically rely on pattern recognition, struggle to interpret accurately.38 This ongoing arms race between those who create disinformation and those who seek to detect it necessitates continuous advancements in AI detection capabilities to keep pace with these evolving tactics.
Algorithmic bias and fairness concerns represent another significant limitation in AI-powered disinformation defense. AI models are trained on vast datasets, and if these datasets contain societal biases, the models can inadvertently perpetuate and even amplify these biases in their detection of disinformation.60 This can lead to content moderation algorithms disproportionately flagging content from marginalized communities as disinformation or hate speech, while potentially overlooking similar content from more privileged groups.36 Ensuring fairness and inclusivity in AI-powered disinformation detection is therefore not only an ethical imperative but also a significant technical challenge that requires careful attention to data curation, model design, and ongoing monitoring.
A critical vulnerability of AI models used for disinformation detection is their susceptibility to adversarial attacks. These attacks involve subtle manipulations of the input data, often imperceptible to humans, that are specifically designed to mislead the AI model into making incorrect classifications.69 Deepfake detectors and broader disinformation detection systems are both vulnerable to these types of attacks, which can effectively inhibit their performance and allow malicious actors to evade detection and continue spreading false information.76 The potential for even slight alterations to content to completely fool an AI detection system highlights the need for more robust and resilient models.
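The mechanism can be illustrated with a toy linear "detector" and an FGSM-style evasion (a gradient-sign step against the model's score). For a linear model the input gradient is simply the weight vector, which makes the attack transparent; the weights, features, and step size below are invented for the demonstration and do not describe any real detector.

```python
import numpy as np

# Toy linear "detector": score = sigmoid(w.x + b); score > 0.5 means "flag as fake".
w = np.array([1.2, -0.8, 2.0, 0.5])
b = -0.3

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detect(x):
    """Return True when the detector flags the input as synthetic."""
    return bool(sigmoid(w @ x + b) > 0.5)

# A feature vector the detector correctly flags.
x = np.array([0.9, -0.5, 0.8, 0.2])

# FGSM-style evasion: step each feature against the gradient of the score.
# For a linear model that gradient is w, so the perturbation is -eps * sign(w);
# eps controls how large the (ideally imperceptible) change is.
eps = 0.8
x_adv = x - eps * np.sign(w)

print(detect(x), detect(x_adv))
```

The same principle scales to deep detectors, where the gradient is obtained by backpropagation and the perturbation can be made visually or acoustically imperceptible, which is precisely why robustness to such attacks is an active research requirement.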
The accuracy and reliability of AI-powered disinformation detection in real-world scenarios can also be problematic. Current deepfake detection technologies, while promising, may have limited effectiveness when applied to diverse real-world data that differs from the controlled conditions under which they were trained.18 Similarly, AI-powered fact-checking tools may struggle to accurately discern between truly reliable and less trustworthy sources of information, potentially leading to inaccuracies in their assessments.81 Factors such as variations in lighting, facial expressions, video and audio quality, and the specific methods used to create the deepfake can all impact the accuracy of AI detection models.18
Finally, detecting nuanced and contextual disinformation presents a significant hurdle for AI. AI tools often struggle with understanding the subtleties of human language, including sarcasm, irony, and cultural references, which can lead to both false positives (incorrectly flagging legitimate content) and false negatives (missing actual disinformation).14 Disinformation that relies on manipulating the context of information or strategically omitting essential details is also particularly challenging for AI to identify.82 Furthermore, AI's limitations in comprehending intent can make it difficult to distinguish between satire, which may intentionally present false information for humorous or critical purposes, and deliberate deception aimed at causing harm.44 These challenges underscore the need for continued research and development of AI models that possess a deeper understanding of human communication and the complexities of context.
The Human-AI Partnership and Ethical Considerations:
While AI offers significant capabilities in combating disinformation, human expertise remains indispensable for effective and ethical defense. A collaborative partnership between AI technologies and human analysts is crucial for navigating the complexities of this evolving threat.
Human fact-checkers and domain experts play a vital role in verifying complex claims, providing nuanced judgments, and understanding the broader context of disinformation campaigns in ways that AI alone cannot replicate.38 Human oversight is also essential for identifying and mitigating biases that may be present in AI algorithms, ensuring that detection systems are fair and do not disproportionately impact certain communities or viewpoints.61 Hybrid systems that effectively combine the scalability and efficiency of AI with the critical thinking and contextual awareness of human experts have often demonstrated superior performance in detecting and countering disinformation compared to either approach in isolation.44
The development and deployment of AI countermeasures against disinformation must be guided by strong ethical frameworks. Key principles within these frameworks include transparency, accountability, and respect for privacy.36 Transparency is crucial for ensuring that the public understands how AI detection models work and for building trust in their outcomes.14 Accountability mechanisms are necessary to establish clear lines of responsibility for the decisions and actions taken by AI-powered systems in identifying and addressing disinformation.36 Furthermore, privacy considerations must be carefully addressed in the collection, storage, and use of data for training and deploying AI detection systems to ensure that individuals' rights are protected.36
A particularly complex ethical challenge lies in balancing the fundamental right to freedom of expression with the pressing need to combat harmful disinformation.2 Overly aggressive content moderation by AI systems, while intended to curb the spread of falsehoods, can inadvertently lead to censorship and the suppression of legitimate speech, particularly if the AI models are not sufficiently nuanced in their understanding of context and intent.36 Therefore, it is essential to establish clear guidelines for content moderation and to implement transparent appeals processes that allow users to challenge decisions made by AI systems, ensuring accountability and maintaining user trust.85 Striking the appropriate balance requires ongoing dialogue and careful consideration of the potential impacts of both disinformation and the measures taken to counter it on democratic values and individual rights.
Building a Resilient Information Ecosystem:
Combating the multifaceted threat of AI-powered disinformation requires a comprehensive approach that extends beyond technological solutions to encompass educational initiatives, policy frameworks, and collaborative efforts aimed at building a more resilient information ecosystem.
Media and digital literacy education play a foundational role in equipping individuals with the critical thinking skills necessary to evaluate the credibility of online information and identify various forms of disinformation.1 These educational programs teach individuals how to navigate the digital landscape safely and responsibly, including recognizing the red flags that often accompany misinformation and understanding the tactics used in disinformation campaigns.36 Furthermore, educating the public about the capabilities and limitations of AI can foster a healthy sense of skepticism towards online content, encouraging individuals to question the information they encounter rather than accepting it at face value.96 By empowering citizens with these skills, societies can build greater resilience against the manipulative effects of disinformation.
The policy and regulatory landscape surrounding AI and disinformation is also evolving rapidly as governments and regulatory bodies increasingly recognize the need to address these challenges.3 The European Union's Digital Services Act, for example, includes specific measures aimed at tackling disinformation online, placing obligations on large online platforms to assess and mitigate the risks their services pose to society, including the spread of false information.26 Policymakers are also actively considering regulations related to the labeling of AI-generated content, which would provide users with greater transparency about the origin of the media they consume, as well as measures to increase the overall transparency of AI algorithms used in content moderation and information dissemination.23 These policy and regulatory efforts are essential for establishing accountability, setting standards for the responsible development and deployment of AI technologies, and creating a more secure information environment.
Effective countermeasures against AI-powered disinformation also require fostering strong collaboration between a diverse range of stakeholders. This includes technology platforms, which play a critical role in the dissemination of information; governments, which are responsible for setting policy and regulatory frameworks; academic and research institutions, which contribute to the development of detection tools and the understanding of disinformation dynamics; and civil society organizations, which advocate for media literacy and information integrity.14 The sharing of information, threat intelligence, and best practices among these stakeholders is crucial for developing a comprehensive understanding of the evolving disinformation landscape and for enabling coordinated responses to campaigns.23 Public-private partnerships can also play a vital role in facilitating the development and deployment of effective countermeasures against increasingly sophisticated synthetic disinformation.95 This collaborative, multi-stakeholder approach is essential for addressing the complex and rapidly changing challenges posed by AI-powered disinformation and for building a more resilient and trustworthy information ecosystem for all.
Future Trends and the Path Forward:
The field of AI-powered disinformation and the corresponding resilience mechanisms are both rapidly evolving, with several key trends shaping the future landscape.
Next-generation AI models are anticipated to incorporate increasingly advanced techniques for analyzing and understanding the intricate relationships between different modalities of information, including text, images, audio, and video.114 Advancements in multimodal learning will enable AI systems to process and integrate diverse data types more effectively, leading to more accurate and nuanced detection of disinformation that strategically combines various media formats.114 Furthermore, future AI models may leverage sophisticated knowledge graphs and enhanced reasoning capabilities to better understand the context and assess the veracity of multi-modal content, moving beyond surface-level pattern recognition to deeper semantic analysis.108
Emerging technologies and ongoing research efforts are also poised to significantly advance the field of AI-driven disinformation detection. There is a strong focus on developing more robust and deployable AI detectors that can maintain high levels of accuracy and reliability when applied to real-world scenarios, which often present more variability and complexity than controlled research environments.120 Explainable AI (XAI) is expected to play an increasingly crucial role in making AI detection systems more transparent in their decision-making processes, thereby fostering greater trust among users and stakeholders.14 The integration of uncertainty quantification (UQ) into AI detection systems will provide valuable insights into the confidence levels associated with predictions, further enhancing the reliability and interpretability of these tools.14 The field of AI-driven media forensics will also continue to advance, offering increasingly sophisticated techniques for analyzing and authenticating digital content across various media types, which will be critical in identifying manipulated media used in disinformation campaigns.121
The future of maintaining information integrity in the face of increasingly sophisticated AI-powered disinformation will likely be characterized by a greater emphasis on synergistic partnerships between AI tools and human expertise.14 It is anticipated that AI will increasingly handle the initial screening and analysis of the vast volumes of online data, flagging potentially problematic content for further review by human experts. This division of labor will allow human analysts to focus their expertise on more complex and nuanced cases that require critical thinking, contextual understanding, and subject-matter knowledge that AI may still lack.36 Furthermore, there is a trend towards the development of more user-friendly AI tools that can be readily adopted by individuals to assess the credibility of the information they encounter online, empowering them to become more discerning consumers of digital content.27 This synergistic partnership between increasingly sophisticated AI tools and the critical thinking abilities of both experts and informed citizens represents the most promising path forward in the ongoing effort to maintain information integrity in an increasingly complex digital world.
Conclusion
The analysis presented in this report underscores the intricate and evolving relationship between Artificial Intelligence and the challenge of disinformation. AI serves as a double-edged sword, both empowering malicious actors to create and disseminate sophisticated, coordinated, and adversarial multi-modal disinformation campaigns, and providing the tools necessary to detect and counter these threats. The increasing sophistication of AI-generated content across text, image, audio, and video formats, coupled with its ability to facilitate coordinated inauthentic behavior and personalized targeting, presents a significant and growing challenge to information integrity.
However, the advancements in AI also offer a powerful arsenal of methodologies and tools for building resilience against disinformation. From NLP and machine learning for text analysis to advanced techniques for detecting manipulated media and analyzing coordinated behavior, AI is proving to be an indispensable asset in the fight for truth in the digital age. Nevertheless, the limitations of AI in understanding nuanced language, its vulnerability to adversarial attacks, and the ever-present risk of algorithmic bias necessitate a cautious and ethical approach.
The future of combating disinformation lies in a holistic and adaptive strategy that recognizes the crucial role of the human element. A synergistic partnership between increasingly sophisticated AI technologies and the critical thinking, contextual understanding, and ethical judgment of human experts and an informed public is essential. Furthermore, ongoing research and development in AI-driven detection methods, coupled with effective policy frameworks, comprehensive media and digital literacy education, and robust collaboration among technology platforms, governments, and research institutions, will be critical in building a more resilient and trustworthy information ecosystem for the future. The path forward requires a continued commitment to innovation, ethical responsibility, and a shared understanding of the complex interplay between technology and society in the ongoing battle against disinformation.
Works cited
What Is a Disinformation Campaign? | CrowdStrike, accessed May 17, 2025, https://www.crowdstrike.com/en-us/cybersecurity-101/social-engineering/disinformation-campaign/
Navigating the Landscape of Misinformation and Disinformation: Background - New America, accessed May 17, 2025, https://www.newamerica.org/future-security/reports/navigating-the-landscape-of-misinformation-and-disinformation/background/
Disinformation Meaning & Definition - Zevo Health, accessed May 17, 2025, https://www.zevohealth.com/glossary/disinformation/
pmc.ncbi.nlm.nih.gov, accessed May 17, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10060790/#:~:text=Coordinated%20inauthentic%20behavior%20(CIB)%20is,across%20multiple%20social%20media%20platforms.
Inauthentic Behavior | Transparency Center, accessed May 17, 2025, https://transparency.meta.com/policies/community-standards/inauthentic-behavior/
Coordinated Influence Operations— Fear, Uncertainty and Doubt | Bipartisan Policy Center, accessed May 17, 2025, https://bipartisanpolicy.org/blog/coordinated-influence-operations/
Coordinated inauthentic behavior - Semantic interoperability for platform governance, accessed May 17, 2025, https://platformglossary.info/coordinated-inauthentic-behavior/
Understanding Coordinated Inauthentic Behavior (CIB): What it is and How it Impacts the General Public - Vicarius, accessed May 17, 2025, https://www.vicarius.io/vsociety/posts/understanding-coordinated-inauthentic-behavior-cib-what-it-is-and-how-it-impacts-the-general-public
Disinformation attack - Wikipedia, accessed May 17, 2025, https://en.wikipedia.org/wiki/Disinformation_attack
Winning in the Information Environment: Recent Successes in Combating Adversary Disinformation and Recommendations for the Future - Air University, accessed May 17, 2025, https://www.airuniversity.af.edu/Wild-Blue-Yonder/Articles/Article-Display/Article/3630158/winning-in-the-information-environment-recent-successes-in-combating-adversary/
Don't be a target: How to identify adversarial propaganda > U.S. ..., accessed May 17, 2025, https://www.cybercom.mil/Media/News/Article/3551070/dont-be-a-target-how-to-identify-adversarial-propaganda/
GAO-24-107600, FOREIGN DISINFORMATION: Defining and Detecting Threats, accessed May 17, 2025, https://www.gao.gov/assets/gao-24-107600.pdf
FMNV: A Dataset of Media-Published News Videos for Fake News Detection - arXiv, accessed May 17, 2025, https://arxiv.org/html/2504.07687v3
AI in Disinformation Detection - Applied Cybersecurity & Internet Governance, accessed May 17, 2025, https://www.acigjournal.com/AI-in-Disinformation-Detection,200200,0,2.html
Fighting fake news with multimodal AI - Almawave, accessed May 17, 2025, https://www.almawave.com/en-fake-news/
Factsheet 4: Types of Misinformation and Disinformation - UNHCR, accessed May 17, 2025, https://www.unhcr.org/innovation/wp-content/uploads/2022/02/Factsheet-4.pdf
The evolving role of AI-generated media in shaping disinformation campaigns - DFRLab, accessed May 17, 2025, https://dfrlab.org/2025/05/01/the-evolving-role-of-ai-generated-media-in-shaping-disinformation-campaigns/
Science & Tech Spotlight: Combating Deepfakes | U.S. GAO, accessed May 17, 2025, https://www.gao.gov/products/gao-24-107292
Full article: A Picture Paints a Thousand Lies? The Effects and Mechanisms of Multimodal Disinformation and Rebuttals Disseminated via Social Media - Taylor & Francis Online, accessed May 17, 2025, https://www.tandfonline.com/doi/full/10.1080/10584609.2019.1674979
Multi-modal Misinformation Detection: Approaches, Challenges and Opportunities - arXiv, accessed May 17, 2025, https://arxiv.org/pdf/2203.13883
The Russian Disinformation Threat: Active Campaigns in 2024 - Kettering Foundation, accessed May 17, 2025, https://kettering.org/the-russian-disinformation-threat-active-campaigns-in-2024/
AI in disinformation detection - Applied Cybersecurity & Internet Governance, accessed May 17, 2025, https://www.acigjournal.com/pdf-200200-124467?filename=AI%20in%20Disinformation.pdf
AI and the Future of Disinformation Campaigns | Center for Security and Emerging Technology, accessed May 17, 2025, https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns-2/
Don't be a target: How to identify adversarial propaganda - Air Force Reserve Command, accessed May 17, 2025, https://www.afrc.af.mil/News/Article/3550266/dont-be-a-target-how-to-identify-adversarial-propaganda/
Adversarial Machine Learning in Detecting Inauthentic Behavior on Social Platforms, accessed May 17, 2025, https://aithority.com/machine-learning/adversarial-machine-learning-in-detecting-inauthentic-behavior-on-social-platforms/
20240805-CIB-detection-tree.pdf - EU DisinfoLab, accessed May 17, 2025, https://www.disinfo.eu/wp-content/uploads/2024/08/20240805-CIB-detection-tree.pdf
AI and the spread of fake news sites: Experts explain how to counteract them, accessed May 17, 2025, https://news.vt.edu/articles/2024/02/AI-generated-fake-news-experts.html
The role of artificial intelligence in disinformation | Data & Policy | Cambridge Core, accessed May 17, 2025, https://www.cambridge.org/core/journals/data-and-policy/article/role-of-artificial-intelligence-in-disinformation/7C4BF6CA35184F149143DE968FC4C3B6
Audio Deepfake Detection: What Has Been Achieved and What Lies Ahead - MDPI, accessed May 17, 2025, https://www.mdpi.com/1424-8220/25/7/1989
How a Spanish media group created an AI tool to detect audio deepfakes to help journalists in a big election year | Reuters Institute for the Study of Journalism, accessed May 17, 2025, https://reutersinstitute.politics.ox.ac.uk/news/how-spanish-media-group-created-ai-tool-detect-audio-deepfakes-help-journalists-big-election
How AI-generated disinformation might impact this year's elections and how journalists should report on it | Reuters Institute for the Study of Journalism, accessed May 17, 2025, https://reutersinstitute.politics.ox.ac.uk/news/how-ai-generated-disinformation-might-impact-years-elections-and-how-journalists-should-report
On the Trail of Deepfakes, Drexel Researchers Identify 'Fingerprints' of AI-Generated Video, accessed May 17, 2025, https://drexel.edu/news/archive/2024/April/machine-learning-generative-ai-video-detection
Decoding Disinformation: Automating the Detection of Coordinated, accessed May 17, 2025, https://jerrygaolondon.substack.com/p/decoding-disinformation-automating
www.cisa.gov, accessed May 17, 2025, https://www.cisa.gov/sites/default/files/publications/tactics-of-disinformation_508.pdf
Countering AI-powered disinformation through national regulation: learning from the case of Ukraine - Frontiers, accessed May 17, 2025, https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1474034/full
Using Responsible AI to combat misinformation - Trilateral Research, accessed May 17, 2025, https://trilateralresearch.com/responsible-ai/using-responsible-ai-to-combat-misinformation
GenAI and the battle against misinformation - Duke Corporate Education, accessed May 17, 2025, https://www.dukece.com/insights/genai_and_the_battle_against_misinformation/
The use of artificial intelligence in counter-disinformation: a world wide (web) mapping, accessed May 17, 2025, https://www.frontiersin.org/journals/political-science/articles/10.3389/fpos.2025.1517726/full
Application of Artificial Intelligence Techniques to Detect Fake News: A Review - MDPI, accessed May 17, 2025, https://www.mdpi.com/2079-9292/12/24/5041
A comparison of artificial intelligence models used for fake news detection - The Distant Reader, accessed May 17, 2025, https://distantreader.org/stacks/journals/bulletin/bulletin-1680.pdf
How to Fight AI-Generated Fake News — With AI | Built In, accessed May 17, 2025, https://builtin.com/artificial-intelligence/fight-ai-generated-fake-news
Enhancing Disinformation Detection with Explainable AI and Named Entity Replacement, accessed May 17, 2025, https://arxiv.org/html/2502.04863v1
New tools use AI 'fingerprints' to detect altered photos, videos - Binghamton News, accessed May 17, 2025, https://www.binghamton.edu/news/story/5109/new-tools-use-ai-fingerprints-to-detect-altered-photos-videos
AI in the Role of Combating Misinformation | Pindrop, accessed May 17, 2025, https://www.pindrop.com/article/ai-in-the-role-of-combating-misinformation/
Deepfake Detection Technology - Pindrop, accessed May 17, 2025, https://www.pindrop.com/solution/deepfake-detection/
Detecting Influence Campaigns on X with AI and Network Science - USC Viterbi, accessed May 17, 2025, https://viterbischool.usc.edu/news/2024/05/detecting-influence-campaigns-on-x-with-ai-and-network-science/
A new AI early warning system to combat disinformation and prevent violence previewed in the Bulletin of Atomic Scientists - Kroc Institute for International Peace Studies, accessed May 17, 2025, https://kroc.nd.edu/news-events/news/a-new-ai-early-warning-system-to-combat-disinformation-and-prevent-violence-previewed-in-the-bulletin-of-atomic-scientists/
Detection of fake news campaigns using graph convolutional networks | palowise.ai, accessed May 17, 2025, https://www.palowise.ai/blog/palowise-papers/detection-of-fake-news-campaigns-using-graph-convolutional-networks/
How AI-Powered Fact-Checking Can Help Combat Misinformation | Ivy Exec, accessed May 17, 2025, https://ivyexec.com/career-advice/2025/how-ai-powered-fact-checking-can-help-combat-misinformation
Norwegian start-up Factiverse to revolutionize fact-checking with AI - Hello Future, accessed May 17, 2025, https://hellofuture.orange.com/en/factiverse-reliable-ai-fact-checking-in-more-than-100-languages/
Full Fact AI, accessed May 17, 2025, https://fullfact.org/ai/
Fact-checking AI with Lateral Reading - Using AI tools in Research - Research Guides at Texas A&M University-Corpus Christi, accessed May 17, 2025, https://guides.library.tamucc.edu/AI/lateralreadingAI
ClaimBuster: Automated Live Fact-Checking - IDIR Lab, accessed May 17, 2025, https://idir.uta.edu/claimbuster/
13 AI-Powered Tools for Fighting Fake News - The Trusted Web, accessed May 17, 2025, https://thetrustedweb.org/ai-powered-tools-for-fighting-fake-news/
Fact-based Counter Narrative Generation to Combat Hate Speech | OpenReview, accessed May 17, 2025, https://openreview.net/forum?id=GrvxqI3XL4&referrer=%5Bthe%20profile%20of%20Suman%20Kalyan%20Maity%5D(%2Fprofile%3Fid%3D~Suman_Kalyan_Maity2)
Counter-narratives: paradigm shift in the fight against hate and fake-news - FBK Magazine, accessed May 17, 2025, https://magazine.fbk.eu/en/news/counter-narratives-paradigm-shift-in-the-fight-against-hate-and-fake-news/
MIT study: An AI chatbot can reduce belief in conspiracy theories, accessed May 17, 2025, https://mitsloan.mit.edu/ideas-made-to-matter/mit-study-ai-chatbot-can-reduce-belief-conspiracy-theories
CISOs Urged to Prepare for Evolving Disinformation Tactics - Nexus Connect, accessed May 17, 2025, https://nexusconnect.io/articles/cisos-urged-to-prepare-for-evolving-disinformation-tactics
Artificial Intelligence and Disinformation in - Brill, accessed May 17, 2025, https://brill.com/view/journals/shrs/29/1-4/article-p55_55.xml
How to detect Fake News with AI | Founderz, accessed May 17, 2025, https://founderz.com/blog/detecting-fake-news-with-ai/
The Benefits and Risks of AI in Content Moderation | Datafloq, accessed May 17, 2025, https://datafloq.com/read/benefits-risks-ai-content-moderation/
Content Moderation - Mozilla Foundation, accessed May 17, 2025, https://foundation.mozilla.org/en/campaigns/trained-for-deception-how-artificial-intelligence-fuels-online-disinformation/content-moderation/
Biases in Artificial Intelligence: How to Detect and Reduce Bias in AI Models - Onix-Systems, accessed May 17, 2025, https://onix-systems.com/blog/ai-bias-detection-and-mitigation
FYS 101: Algorithmic Bias and Misinformation - Research Guides - Syracuse University, accessed May 17, 2025, https://researchguides.library.syr.edu/c.php?g=1092612&p=9865553
What Is AI Bias? | IBM, accessed May 17, 2025, https://www.ibm.com/think/topics/ai-bias
The Intersection of AI, Fake News, and Racial Bias - Business Wire, accessed May 17, 2025, https://www.businesswire.com/blog/ai-fake-news-racial-bias
When AI Gets It Wrong: Addressing AI Hallucinations and Bias, accessed May 17, 2025, https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/
The Limitations of Automated Tools in Content Moderation - New America, accessed May 17, 2025, https://www.newamerica.org/oti/reports/everything-moderation-analysis-how-internet-platforms-are-using-artificial-intelligence-moderate-user-generated-content/the-limitations-of-automated-tools-in-content-moderation/
Adversarial Attacks: The Hidden Risk in AI Security, accessed May 17, 2025, https://securing.ai/ai-security/adversarial-attacks-ai/
6 Key Adversarial Attacks and Their Consequences - Mindgard AI, accessed May 17, 2025, https://mindgard.ai/blog/ai-under-attack-six-key-adversarial-attacks-and-their-consequences
The Threat of Adversarial AI - Wiz, accessed May 17, 2025, https://www.wiz.io/academy/adversarial-ai-machine-learning
How do AI attacks impact people, organizations, and society? - AIShield, accessed May 17, 2025, https://boschaishield.com/resources/blog/how-do-ai-attacks-impact-people-organizations-and-society/
What Is Adversarial AI in Machine Learning? - Palo Alto Networks, accessed May 17, 2025, https://www.paloaltonetworks.com/cyberpedia/what-are-adversarial-attacks-on-AI-Machine-Learning
What is Adversarial AI? Uncovering the Risks and Strategies to Mitigate | FPT Software, accessed May 17, 2025, https://fptsoftware.com/resource-center/blogs/what-is-adversarial-ai-uncovering-the-risks-and-strategies-to-mitigate
Adversarial Attacks on AI-Generated Text Detection Models: A Token Probability-Based Approach Using Embeddings - arXiv, accessed May 17, 2025, https://arxiv.org/html/2501.18998v2
XAI-Based Detection of Adversarial Attacks on Deepfake Detectors - OpenReview, accessed May 17, 2025, https://openreview.net/forum?id=7pBKrcn199
Fake It Until You Break It: On the Adversarial Robustness of AI-generated Image Detectors - arXiv, accessed May 17, 2025, https://arxiv.org/html/2410.01574v2
XAI-Based Detection of Adversarial Attacks on Deepfake Detectors - arXiv, accessed May 17, 2025, https://arxiv.org/html/2403.02955v2
Deepfake detectors don't work in the real world - Information Age | ACS, accessed May 17, 2025, https://ia.acs.org.au/article/2025/deepfake-detectors-don-t-work-in-the-real-world.html
[2502.10920] Do Deepfake Detectors Work in Reality? - arXiv, accessed May 17, 2025, https://arxiv.org/abs/2502.10920
'Garbage in, garbage out': AI fails to debunk disinformation, study finds - VOA, accessed May 17, 2025, https://www.voanews.com/a/garbage-in-garbage-out-ai-fails-to-debunk-disinformation-study-finds/7830414.html
What is disinformation? - Bundesregierung, accessed May 17, 2025, https://www.bundesregierung.de/breg-de/service/datenschutzhinweis/disinformation-definition-1911048
AI and Content Moderation - Chase India, accessed May 17, 2025, https://www.chase-india.com/media/faengjxy/ai-and-content-moderation.pdf
The Role Of Humans And AI In Social Media's Battle Against Misinformation - Forbes, accessed May 17, 2025, https://www.forbes.com/councils/forbestechcouncil/2024/02/12/the-role-of-humans-and-ai-in-social-medias-battle-against-misinformation/
AI in content moderation : all you need to know - iPleaders, accessed May 17, 2025, https://blog.ipleaders.in/ai-in-content-moderation-all-you-need-to-know/
The Role of Explainability in Collaborative Human-AI Disinformation Detection - ACM FAccT, accessed May 17, 2025, https://facctconference.org/static/papers24/facct24-146.pdf
The Dual Nature of AI in Information Dissemination: Ethical Considerations - PMC - PubMed Central, accessed May 17, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11522648/
Policy Brief - Ensuring Ethical AI Practices to counter Disinformation - Media Futures, accessed May 17, 2025, https://mediafutures.eu/wp-content/uploads/2023/09/MediaFutures_Policy-Briefs_Ensuring-Ethical-AI-Practices-to-counter-Disinformation.pdf
AI Misinformation: Concerns and Prevention Methods - GlobalSign, accessed May 17, 2025, https://www.globalsign.com/en/blog/ai-misinformation-concerns-and-prevention
Regulating disinformation with artificial intelligence - European Parliament, accessed May 17, 2025, https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624279/EPRS_STU(2019)624279_EN.pdf
Countering AI-powered disinformation through national regulation: learning from the case of Ukraine - PMC - PubMed Central, accessed May 17, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11747593/
Content Moderation Challenges : Fusion CX Services, accessed May 17, 2025, https://www.fusioncx.com/blog/content-moderation/content-moderation-challenges/
Content Moderation in a New Era for AI and Automation | Oversight Board, accessed May 17, 2025, https://www.oversightboard.com/news/content-moderation-in-a-new-era-for-ai-and-automation/
AI and Misinformation - 2024 Dean's Report, accessed May 17, 2025, https://2024.jou.ufl.edu/page/ai-and-misinformation
Disinformation Countermeasures and Artificial Intelligence - Frontiers, accessed May 17, 2025, https://www.frontiersin.org/research-topics/58930/disinformation-countermeasures-and-artificial-intelligence
Exploring How to Build Community-Level Resilience Against Disinformation - PEN America, accessed May 17, 2025, https://pen.org/report/how-to-build-community-level-resilience-against-disinformation/
The Science of Disinformation: Cognitive Vulnerabilities and Digital Manipulation, accessed May 17, 2025, https://moderndiplomacy.eu/2025/02/09/the-science-of-disinformation-cognitive-vulnerabilities-and-digital-manipulation/
Building Resilience to Misinformation: An Instructional Toolkit: Glossary - Research Guides, accessed May 17, 2025, https://libguides.ucalgary.ca/c.php?g=731350&p=5264938
How AI can also be used to combat online disinformation - The World Economic Forum, accessed May 17, 2025, https://www.weforum.org/stories/2024/06/ai-combat-online-misinformation-disinformation/
The Case for News Literacy Skills in an AI World | AASA, accessed May 17, 2025, https://www.aasa.org/resources/resource/the-case-for-news-literacy-skills-in-an-ai-world
Helping Students Spot Misinformation Online | NEA - National Education Association, accessed May 17, 2025, https://www.nea.org/nea-today/all-news-articles/helping-students-spot-misinformation-online
Enhancing media literacy skills in the age of AI - eSchool News, accessed May 17, 2025, https://www.eschoolnews.com/innovative-teaching/2024/11/08/enhancing-media-literacy-skills-in-the-age-of-ai/
Media Literacy Education and AI | Harvard Graduate School of Education, accessed May 17, 2025, https://www.gse.harvard.edu/ideas/education-now/24/04/media-literacy-education-and-ai
Tackling disinformation and promoting digital literacy - European Union - Learning Corner, accessed May 17, 2025, https://learning-corner.learning.europa.eu/learning-materials/tackling-disinformation-and-promoting-digital-literacy_en
How to teach students critical thinking skills to combat misinformation online, accessed May 17, 2025, https://www.apa.org/monitor/2024/09/media-literacy-misinformation
The Importance of Media Literacy In Countering Disinformation - EDMO, accessed May 17, 2025, https://edmo.eu/areas-of-activities/media-literacy/the-importance-of-media-literacy-in-countering-disinformation/
CounterCloud - AI powered disinformation experiment - Futurist.com, accessed May 17, 2025, https://futurist.com/2023/10/29/countercloud-ai-powered-disinformation-experiment/
Design of AI Applications to Counter Disinformation - UMass Boston, accessed May 17, 2025, https://www.umb.edu/news/recent-news/design-ai-applications-counter-disinformation/
How to spot AI fake news – and what policymakers can do to help | USC Price, accessed May 17, 2025, https://priceschool.usc.edu/news/ai-election-disinformation-biden-california-europe/
Regulating AI Deepfakes and Synthetic Media in the Political Arena, accessed May 17, 2025, https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
Countering Disinformation Effectively: An Evidence-Based Policy Guide, accessed May 17, 2025, https://carnegieendowment.org/research/2024/01/countering-disinformation-effectively-an-evidence-based-policy-guide
Challenging misinformation in the age of AI: Building a collaborative future, accessed May 17, 2025, https://allianceforscience.org/blog/2025/03/challenging-misinformation-in-the-age-of-ai-building-a-collaborative-future/
AI Startups and the Fight Against Mis/Disinformation Online: An Update, accessed May 17, 2025, https://www.gmfus.org/news/ai-startups-and-fight-against-misdisinformation-online-update
5 AI Trends Shaping the Future of Public Sector in 2025 | Google Cloud Blog, accessed May 17, 2025, https://cloud.google.com/blog/topics/public-sector/5-ai-trends-shaping-the-future-of-the-public-sector-in-2025
LLM-Enhanced multimodal detection of fake news | PLOS One, accessed May 17, 2025, https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0312240
Multimodal Learning In AI: Introduction, Current Trends, and Future - Scribble Data, accessed May 17, 2025, https://www.scribbledata.io/blog/multimodal-learning-in-ai-introduction-current-trends-and-future/
Reinforced Adaptive Knowledge Learning for Multimodal Fake News Detection, accessed May 17, 2025, https://ojs.aaai.org/index.php/AAAI/article/view/29618/31048
AnyGPT: The Next Generation of Multimodal Large-Scale Language Models Integrating Image, Speech, and Text | AI-SCHOLAR, accessed May 17, 2025, https://ai-scholar.tech/en/articles/large-language-models/anygpt
Multimodal AI: Everything You Need to Know - DataStax, accessed May 17, 2025, https://www.datastax.com/guides/multimodal-ai
Challenges and Safeguards against AI-Generated Disinformation - Hoover Institution, accessed May 17, 2025, https://www.hoover.org/challenges-and-safeguards-against-ai-generated-disinformation
Revolutionizing Investigations: The Impact of AI in Digital Forensics, accessed May 17, 2025, https://www.cyberdefensemagazine.com/revolutionizing-investigations-the-impact-of-ai-in-digital-forensics/
Keeping Pace with Rapid Advances in Generative Artificial Intelligence - UL Research Institutes, accessed May 17, 2025, https://ul.org/news/keeping-pace-with-rapid-advances-in-generative-artificial-intelligence/
Call for Papers: Special Issue on Unveiling Truth: Exploring the Frontier of AI and ML in Multimedia Forensics, accessed May 17, 2025, https://www.computer.org/digital-library/magazines/mu/cfp-unveiling-frontier-ai-mm-forensics
MediFor: Media Forensics - DARPA, accessed May 17, 2025, https://www.darpa.mil/research/programs/media-forensics
About The National Center for Media Forensics - College of Arts & Media - CU Denver, accessed May 17, 2025, https://artsandmedia.ucdenver.edu/cam-areas-of-study/national-center-for-media-forensics/about-the-national-center-for-media-forensics
Media Forensics Hub - Clemson University, accessed May 17, 2025, https://www.clemson.edu/centers-institutes/watt/hub/
How AI Quakes the Digital Forensics Landscape – Cyber - University of Hawaii-West Oahu, accessed May 17, 2025, https://westoahu.hawaii.edu/cyber/forensics-weekly-executive-summmaries/how-ai-quakes-the-digital-forensics-landscape/
Artificial Intelligence in Social Media Forensics: A Comprehensive Survey and Analysis, accessed May 17, 2025, https://www.mdpi.com/2079-9292/13/9/1671
(PDF) Artificial Intelligence in Social Media Forensics: A Comprehensive Survey and Analysis - ResearchGate, accessed May 17, 2025, https://www.researchgate.net/publication/380147461_Artificial_Intelligence_in_Social_Media_Forensics_A_Comprehensive_Survey_and_Analysis
Digital Forensics: The Good, the Bad, and the AI-Generated - ACEDS, accessed May 17, 2025, https://aceds.org/digital-forensics-the-good-the-bad-and-the-ai-generated-aceds-blog/
Key Trends in Digital Forensics for 2025: Technological Innovation and Core Challenges, accessed May 17, 2025, https://www.salvationdata.com/knowledge/key-trends-in-digital-forensics-for-2025/
CEO Lee Reiber: The Digital Forensics Landscape in 2025 - What Lies Ahead?, accessed May 17, 2025, https://www.oxygenforensics.com/en/resources/digital-forensics-trends-2025/
Future Trends in AI and Digital Forensics - ResearchGate, accessed May 17, 2025, https://www.researchgate.net/publication/387692303_Future_Trends_in_AI_and_Digital_Forensics
Enhancing, not replacing, human expertise with AI - Magnet Forensics, accessed May 17, 2025, https://www.magnetforensics.com/blog/enhancing-not-replacing-human-expertise-with-ai-in-digital-forensics/
Digital Forensics Trends for 2025 - Exterro, accessed May 17, 2025, https://www.exterro.com/resources/white-papers/digital-forensics-trends-for-2025
Future Trends in AI and Digital Forensics - IGI Global, accessed May 17, 2025, https://www.igi-global.com/chapter/future-trends-in-ai-and-digital-forensics/367321
AI-Driven Digital Forensics: Market Trends, Technologies, and Competition - KBV Research, accessed May 17, 2025, https://www.kbvresearch.com/blog/ai-driven-digital-forensics/
The Future Of Digital Forensics: Trends And Technologies - Brandefense, accessed May 17, 2025, https://brandefense.io/blog/drps/the-future-of-digital-forensics-trends-and-technologies/
Human-AI Cooperation to Tackle Misinformation and Polarization, accessed May 17, 2025, https://cacm.acm.org/research/human-ai-cooperation-to-tackle-misinformation-and-polarization/
Exploring Multidimensional Checkworthiness: Designing AI-assisted Claim Prioritization for Human Fact-checkers - arXiv, accessed May 17, 2025, https://arxiv.org/html/2412.08185v2
EMERGING TECHNOLOGIES AND AUTOMATED FACT-CHECKING: TOOLS, TECHNIQUES AND ALGORITHMS - Edam, accessed May 17, 2025, https://edam.org.tr/Uploads/Yukleme_Resim/pdf-28-08-2023-23-40-14.pdf
How Generative AI Is Helping Fact-Checkers Flag Election Disinformation, But Is Less Useful in the Global South, accessed May 17, 2025, https://gijn.org/stories/how-generative-ai-helps-fact-checkers/
“Fact-checking” fact checkers: A data-driven approach | HKS Misinformation Review, accessed May 17, 2025, https://misinforeview.hks.harvard.edu/article/fact-checking-fact-checkers-a-data-driven-approach/
Due Diligence: Disinformation, AI and the Charitable Landscape in 2025 | NPTrust, accessed May 17, 2025, https://www.nptrust.org/philanthropic-resources/philanthropist/due-diligence-disinformation-ai-and-the-charitable-landscape-in-2025/
(PDF) Artificial intelligence in the battle against disinformation and misinformation: a systematic review of challenges and approaches - ResearchGate, accessed May 17, 2025, https://www.researchgate.net/publication/388421309_Artificial_intelligence_in_the_battle_against_disinformation_and_misinformation_a_systematic_review_of_challenges_and_approaches
Putting AI to work in the fight against disinformation | EBU Technology & Innovation, accessed May 17, 2025, https://tech.ebu.ch/news/2023/12/putting-ai-to-work-in-the-fight-against-disinformation
Human-AI Collaboration Investigating Social Media Disinformation, accessed May 17, 2025, https://cyberinitiative.org/research/funded-projects/2021-curbing-disinformation/ai-investigating-social-media-disinformation.html
AI and the Future of Disinformation Campaigns | Center for Security and Emerging Technology, accessed May 17, 2025, https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns/
MiRAGeNews: Multimodal Realistic AI-Generated News Detection - arXiv, accessed May 17, 2025, https://arxiv.org/html/2410.09045
Multimodal Misinformation Detection by Learning from Synthetic Data with Multimodal LLMs - arXiv, accessed May 17, 2025, https://arxiv.org/html/2409.19656v1