Secretary Blinken’s Participation in “Freedom House” Ceremony
PJ Media reports that Secretary of State Antony Blinken recently attended a ceremony hosted by “Freedom House,” a nonprofit organization reported to receive as much as 90% of its funding from the U.S. government.
Utilizing AI to Combat Russian Disinformation
During the ceremony, Secretary Blinken disclosed the government’s use of artificial intelligence (AI) to combat the spread of false information referred to as “Russian disinformation.” The State Department has developed the Ukraine Content Aggregator, an AI-powered online platform that collects and shares verified instances of Russian disinformation with global partners.
Critics have pointed out instances where the FBI, invoking the term “Russian disinformation,” discredited credible reporting, such as its attempt to suppress the Hunter Biden laptop story. This raises concerns about the U.S. government’s potential influence over its own elections, even as it works to combat foreign influence.
Detecting and Countering Foreign Disinformation
The Ukraine Content Aggregator detects and monitors foreign disinformation campaigns, giving governments the information needed to devise countermeasures. The State Department also uses AI to coordinate responses to cyberattacks and false information, with the stated aims of safeguarding national interests and maintaining a secure cyber environment. In addition, it advocates for global online-security standards by promoting transparency and government accountability in online activity.
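The design of the Ukraine Content Aggregator has not been made public, so the following is a purely hypothetical sketch of what a “collect, classify, and share” pipeline could look like. Every name in it is invented for illustration, and the keyword-matching step stands in for whatever AI classifier such a system might actually use.

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    """One piece of collected online content (hypothetical structure)."""
    source: str
    text: str
    flagged: bool = False
    matched_terms: list = field(default_factory=list)

def classify(item: ContentItem, suspicious_terms: list) -> ContentItem:
    """Stand-in for an AI classifier: flags items containing known
    suspicious phrases. A real system would use a trained model,
    not keyword matching."""
    hits = [t for t in suspicious_terms if t in item.text.lower()]
    item.flagged = bool(hits)
    item.matched_terms = hits
    return item

def aggregate(items: list, suspicious_terms: list) -> list:
    """Collect only the flagged items, i.e. the subset that would be
    shared with partner governments."""
    return [it for it in (classify(i, suspicious_terms) for i in items)
            if it.flagged]
```

A caller would feed collected posts through `aggregate` and forward the flagged subset; unflagged items simply pass through untouched.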
According to PJ Media:
Secretary of State Antony Blinken appeared recently at an awards ceremony for the “Freedom House” — a nonprofit “founded on the core conviction that freedom flourishes in democratic nations where governments are accountable to their people.” Close to 90% of the organization’s funding comes directly from the U.S. government, providing the capacity to launder social control initiatives through the guise of “human rights.”
We can trust the government, of course, to use this technology responsibly because it lied for seven straight years about non-existent Russian interference in the 2016 and 2020 elections.
The announcement of the anti-“Russian disinformation” AI project comes on the heels of the Durham report, which found that the years-long Russiagate hoax should never have gotten off the ground and that the FBI had insufficient justification to ever launch its investigation in the first place.
According to the U.S. State Department:
In response, the State Department has developed an AI-enabled online Ukraine Content Aggregator to collect verifiable Russian disinformation and then to share that with partners around the world. We’re promoting independent media and digital literacy. We’re working with partners in academia to reliably detect fake text generated by Russian chatbots…
As a system that reflects the data on which it’s trained – including the biases embedded in that data – AI can, of course, amplify discrimination and enable abuses.
It also runs the risk of strengthening autocratic governments, including by enabling them to exploit social media even more effectively to manipulate their people and sow division among and within their adversaries.
Critics argue that the State Department’s framing, which attributes all disinformation about Ukraine to Russia, oversimplifies the complex dynamics at play. They raise concerns about the department’s tendency to release only approved information while disregarding potentially relevant whistleblower documents. Moreover, critics point out that the term “Russian disinformation” has been used in the United States to discredit verified reports, allowing the government to suppress news stories it finds unfavorable or cannot immediately verify, even when those stories have no direct connection to Ukraine.
In addition, critics caution against the unintended consequences of AI technology, suggesting that it has the potential to inadvertently strengthen autocratic governments. By utilizing AI to exploit social media platforms, these governments can manipulate information, sow division among adversaries, and create internal strife within their own nations. The worry is that the increased effectiveness of AI-driven manipulation may further empower autocratic regimes, exacerbating the erosion of democratic values and stifling freedom of expression.
If the government has this technology, it’s most likely outdated.