International Affairs Forum

Around the World, Across the Political Spectrum

Preventing Malevolent AI Usage while Embracing Economic Progress


By Zac Knapp

Early in the morning of January 3rd, 2026, everything came to a standstill. Airstrikes could be heard across Caracas, the Venezuelan capital: the U.S. had just begun a military operation to capture Nicolas Maduro. In the days after the operation, videos from the streets of Venezuela spread across the internet like wildfire. People were crying tears of joy and celebrating Maduro's capture; the feeling was palpable. One video was an especially heavy watch, showing an elderly woman clutching a Venezuelan flag in the street, tears in her eyes. But something was off about these videos.

On a second watch, the voices sounded slightly robotic and the movements a little janky. This wasn't the fault of the cameraman or low-quality equipment: the footage was AI-generated. In fact, all of the videos coming out of Caracas described above were AI-generated. Starting on TikTok, the videos jumped to other platforms like Instagram and X, where they continued to spread. While the original TikTok posts carried a warning label telling users the videos weren't real, the further they spread, the harder they became to moderate. Each share and download pushed a false narrative, and those who genuinely welcomed the operation found themselves instantly discredited.

The First Step

While the rapid development of AI has helped sustain America's economic dominance, it has also meant that the technology's abilities have far outpaced the capacity of governments to regulate it safely. Tools originally meant to streamline work and take on mundane tasks have become unintentional vehicles for spreading disinformation.

While creating and passing widely accepted legislation at any level can be a daunting task, it remains one of the most effective ways that governments can rein in unregulated AI usage. One of the main barriers American policymakers would face is the economic consequence of limiting the scope of AI's uses. In a political culture that has historically prioritized the economic growth associated with artificial intelligence, constraining this vital industry will prove difficult. The more integrated these programs become in American society, the harder it will be to change the status quo. Moreover, as these companies grow in usage, so does their need for resources. AI datacenters, while detrimental to the environment, create numerous job opportunities, something many policymakers welcome.

Another barrier this type of legislation would face is the threat of infringing on America's free-market tech environment. Silicon Valley is currently the world's hub of tech development, but its entrepreneurs could decide to move their businesses elsewhere. While that is an extreme outcome, the principle remains the same: such moves would be seen as threatening to America's entrepreneurial persona.

There are numerous other potential barriers to passing legislation like this. The federal government's spat with Anthropic in March over unrestricted access to Claude is a case in point. Lobbying groups in Washington are other potential opponents, and these groups could very likely push a decision back toward the status quo. The future remains uncertain.

Mix and Match

So, how can more protections be put in place on these AI services? As the previous section makes clear, relying solely on legal action to reshape the AI environment will be tricky, but not impossible. The better approach is to set legal precedents restricting how AI can be used while providing economic incentives to AI firms willing to implement stricter safeguards.

Examples of state legislation on AI usage can be found in California, which has passed several relevant acts: the Generative AI Training Data Transparency Act (AB 2013), the California AI Transparency Act (SB 942 / 1000), the Companion Chatbot Law (SB 243), and the Transparency in Frontier Artificial Intelligence Act. These pieces of legislation all seek to limit divisive AI usage. But while they lay out effective measures to curb harmful uses of AI and to notify viewers that they are viewing AI content, the legislation applies only to the state of California.

To create more widely accepted norms around what AI may be used for, a state could pair such rules with economic incentives. Recalling the case between Anthropic and the United States government, it is clear that not all AI firms strictly favor unregulated markets; Anthropic proved its resolve in sticking by its own internal principles. A state like California, having set guidelines for what may be created with AI within its borders, could engage in directed industrial policy that offers better economic incentives to companies with strongly held internal principles like Anthropic's.

This could be done through mechanisms like California's research and development tax credit system, or even through selective corporate tax rate cuts for AI firms that comply with the legislation.

What this combination of carrot-and-stick legislation would do is promote these safety standards in more widely acceptable ways, through better potential long-term payoffs. Combine that with how competitive the AI market is at the moment, and firms will fall in line so as not to lose their comparative advantages over rivals. This is especially true in a state like California, home to some of the largest AI companies on the planet. Moreover, this combination of legal action and economic incentive could make it more appetizing for other states to follow suit, laying the groundwork for the norms of the industry moving forward.

Don’t Stop the Wave, Ride it

In the months since the AI deepfakes of Venezuelans celebrating Maduro's capture, people have grown more wary of AI-generated content. In the weeks after the initial strikes in Iran, videos on social media supposedly "coming from the ground" have been met with more hesitation. What this shows is that patience for AI-generated content is diminishing, and even more so for misleading media. Moreover, most Americans no longer trust the media the way they used to, with the Pew Research Center finding that 57% of Americans have "little to no confidence in the media at all."

America's AI firms stand at a precipice. These companies can either continue as they have, keeping their services mostly unregulated, or work toward what their customers really want: more regulated AI content. Companies are already starting to turn. On March 25th, Sora AI was shut down in the face of both economic sustainability issues and public backlash over the content it had created; on March 26th, OpenAI, Sora's owner, shelved ChatGPT's explicit mode amid public disgust at the illegal pornographic material it was creating; and on March 27th, Judge Rita Lin sided with Anthropic in its legal battle with the federal government.

The pendulum is swinging, and America's AI companies have started to follow the change in momentum. It would be foolish for states not to act on this. Through these government-initiated incentives, the United States can continue pushing the industry forward, creating positive spillovers into other industries, all while prioritizing safeguards to prevent malevolent AI usage. This will deliver the changes people want while continuing America's AI dominance.

Zac Knapp is a senior at the University of Idaho, triple-majoring in Economics, Political Science, and International Relations.
