Bani Adam
Senator (1k+ posts)
Indo-Pakistani Four-Day Kashmir War: New Frontiers in Disinformation
(Excerpts Follow) The New York Times recently analysed a new set of satellite images from Maxar Technologies and Planet Labs and cross-referenced them with claims made by Indian and Pakistani officials during the conflict. The article concludes that, from what can be discerned so far, the majority of the damage in the conflict was inflicted by Indian forces, who also appear to have limited damage to their own facilities more effectively than Pakistani officials had indicated. Inevitably, it is in both countries’ interest to demonstrate the strength of their military capabilities, both in inflicting damage on a rival and in protecting their own assets. Both sides have proclaimed the effectiveness of their own attacks and the ineffectiveness of their rival’s, but according to this new satellite imagery, India’s strikes appear to have been the more effective overall.
Images show a strike on Pakistan’s Bholari air base, less than 100 miles from Karachi. Indian officials claimed to have struck an aircraft hangar at this location, and satellite imagery shows substantial damage to what appears to be a hangar. A hole 60 feet wide is visible in the images, which experts say is consistent with the impact of a missile. Images also show a strike at Nur Khan air base, located 15 miles from the Pakistani Army Headquarters and considered the most sensitive military target India struck across the four days.
By contrast, satellite images of the specific Indian military sites that Pakistan claims to have damaged show far less impact. One of the highest-profile strikes claimed by Pakistani authorities was on India’s Udhampur air base, which was said to have been “destroyed” by the conclusion of the offensive. Although one death has been confirmed at the site, satellite images show minimal damage and certainly nothing resembling destruction.
At the time of the attack on the base, multiple false depictions of the strike were circulated. In one case, a video of a factory fire in Rajasthan, with black smoke billowing from the site, was falsely presented as footage of the attack on the Udhampur air base. The video was likely chosen because the abundance of dark smoke created the illusion of a severe attack and heavy damage. As previously mentioned, the “drama” of this depiction would have ensured that algorithms circulated the footage widely, whilst also creating an impression of the power of Pakistani weapons and the damage they can inflict. Satellite imagery has directly contradicted this impression.
Additionally, an AI-generated video was shared online that also claimed to depict the destruction of the Udhampur air base. Fabricated audio was added to the video, claiming that while Pakistan had been unable to destroy this air base during the 1965 and 1971 wars, its modern JF-17 fighter jets had succeeded in 2025. While it has not been possible to trace the creator of this video, the narrative it projects is very much to the state’s advantage, showcasing the power of Pakistan’s new and modernised weaponry.
Although the creator cannot be identified, AI detection tools have indicated that there is a 99.7% chance the video was made with AI, leaving little doubt that this was a carefully crafted disinformation product. As more satellite imagery becomes available in the coming days, it should be possible to continue cross-referencing images with official statements to better understand which reports have been accurate and which have not.
While AI-generated disinformation has been prevalent, the recycling of pre-existing footage appears to be the most common form of disinformation deployed. Analysis by The Independent revealed that one video allegedly showing Pakistani jets striking Indian territory was in fact footage from the video game Battlefield 3. Another viral clip of supposed Pakistani retaliation turned out to show the 2020 Beirut port explosion, again demonstrating how “dramatic” false footage is being deliberately selected.
Not only have fake visual sources been circulated, but it has also become clear that people have been impersonating official spokespeople to create further chaos and confusion. A fake Indian advisory has been identified that spread panic by setting out false civil defence protocols warning civilians to stock up on food and medicine. Such instructions can trigger panic-buying and sudden supply shortages, and they were evidently used to stir fear and unrest.
The Pakistani state has continued to deny any role in the spread of disinformation; however, it seems not even its civilian and military leadership can agree on the narratives they communicate. A BBC report has traced a number of contradictory statements that together demonstrate the very active role the state has played in misleading people. The BBC highlighted how Pakistan’s Information Minister Attallah Tarar stated that between 40 and 50 Indian soldiers had been killed by Pakistan, while the defence minister put the total at 25.
Ultimately, although it is not possible to debunk and trace every instance of disinformation and concretely prove who created it, it is clear that disinformation has been used extensively and deliberately during this conflict.
SOURCE
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Disrupting malicious uses of AI: June 2025
(Excerpts Follow) The accounts that we banned engaged in two primary workstreams. The most prolific involved generating short social media comments in English and Chinese, with a few in Urdu. We identified many of these comments being posted on TikTok and X, with some additional content appearing on Reddit, Facebook, and various websites. A typical pattern involved posting an initial comment from a “main” account—often apparently created solely for that post—followed by a series of reply comments from other accounts. This behavior appeared designed to create a false impression of organic engagement.
On TikTok, the commenting accounts used screen names in a variety of languages and alphabets—often unrelated to the language of the content they posted. For example, an account with a Korean name posted a comment in Urdu. One video posted to Facebook matched a video also shared by the network on TikTok; however, unlike its TikTok counterpart, the Facebook version was not accompanied by AI-generated comments.

Other content, in English and Urdu, targeted Pakistani activist Mahrang Baloch, who has publicly criticized China’s investments in Balochistan. A TikTok account and Facebook Page linked to the network posted a video falsely accusing Baloch of appearing in a pornographic film. The operation then generated hundreds of short comments in both languages to simulate widespread engagement. In total, we observed 220 comments produced, while the TikTok video displayed 199 comments, suggesting that the majority of visible engagement was AI-generated.

SOURCE