Regulating AI

AI Regulation, Bans, and the Right to Read: A Glimpse Into Our Digital Future
By Ron Gula | Gula Tech Adventures

In this week’s edition of Gula Tech Adventures, I dove into a crucial conversation that affects nearly every industry, consumer, and policymaker on the planet: AI regulation. From banning technologies to demanding transparency, and ultimately navigating the murky waters of copyright and digital rights management (DRM), the choices we make today around AI policy could define how freely we live and innovate in the decades to come.

Let’s unpack where we are, what we’re headed toward, and what lessons we should take from the past — and even science fiction.

1. The Many Faces of AI Regulation: Bans, Labels, and Lawsuits

When it comes to AI regulation, we’re seeing activity in three main areas:

  • Banning technology

  • Creating transparency standards

  • Protecting data copyrights and trademarks

Each is being shaped differently at the government, corporate, and societal levels.

AI Bans: From Drones to TikTok to Smuggled Intelligence

Bans on technology aren't new. The U.S. government, especially the Department of Defense (DoD), has long prohibited the use of certain foreign hardware like DJI drones and Huawei routers, largely over national security concerns related to China.

Now AI tools are starting to enter similar territory.

Imagine an AI that sends sensitive user data — perhaps via a car app, drone camera, or even malware scanner — back to servers in China or Russia. If we don’t fully understand what data it’s collecting, where it’s sending it, and how it's used, it may be just as dangerous as compromised hardware. Banning AI apps for this reason, especially when embedded in critical infrastructure or national defense, seems inevitable.

Even platforms like TikTok, still legal in the U.S. but heavily scrutinized, are caught in this regulatory crossfire. Why? Because ownership, data privacy, and national allegiance matter more than ever in our AI age.

Transparency: Knowing What’s in Your AI

While bans are blunt instruments, transparency is more nuanced. It's about giving people and institutions the information they need to make safe choices. That includes:

  • Where the AI model was developed

  • What data it was trained on

  • Whether it sends data back to foreign governments

  • How it handles your information and who it shares it with

Security labeling is one proposal already gaining traction — think of it like a nutrition label, but for AI software. These labels could tell users if an AI app is secure, private, and compliant.

As AI moves from centralized cloud models to decentralized edge deployments — embedded in cars, homes, factories, or personal robots — such labeling becomes critical. Without centralized oversight, the trustworthiness of each individual AI becomes harder to verify.
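
To make the nutrition-label idea concrete, here is a minimal sketch in Python of what a machine-readable AI label might contain. The schema, field names, and values are illustrative assumptions on my part, not an existing standard:

    from dataclasses import dataclass, field

    # Hypothetical machine-readable "AI nutrition label."
    # Field names are illustrative, not part of any real standard.
    @dataclass
    class AILabel:
        model_name: str
        developed_in: str             # country or jurisdiction of origin
        training_data_sources: list   # e.g., ["public web crawl", "licensed news archives"]
        sends_data_offshore: bool     # does inference traffic leave the user's jurisdiction?
        data_shared_with: list        # third parties that receive user data
        certifications: list = field(default_factory=list)  # e.g., ["SOC 2"]

    label = AILabel(
        model_name="example-edge-assistant",
        developed_in="USA",
        training_data_sources=["public domain text", "licensed datasets"],
        sends_data_offshore=False,
        data_shared_with=[],
        certifications=["SOC 2"],
    )
    print(label)

A vendor could publish a label like this alongside each model, so an enterprise buyer, a regulator, or even the device itself could check it automatically before deployment.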

The Next Big Frontier: Copyright, DRM, and Data Licensing

But the most transformational regulatory fight might not be about bans or transparency — it’s about who owns the data that trains AI.

Take the New York Times vs. OpenAI lawsuit. The Times claims its copyrighted journalism was used without permission to train OpenAI’s large language models. Similar lawsuits from other publishers, artists, and content platforms are brewing.

This gets to the heart of the matter: What rights do creators have when their work becomes training fodder for machines?

We’ve seen this debate before — during the rise of MP3s, Napster, iTunes, and YouTube. Now we’re heading toward an AI-era version of DRM, where models may need licenses to be trained on specific datasets. You might even have AI models labeled and categorized based on what kinds of data they’re allowed to be trained on: public domain, paid licenses, Creative Commons, etc.
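
Here is a hedged sketch, again in Python, of what license-aware training could look like. The license categories and the allow-list policy are my own illustration, not a description of any real DRM scheme:

    # Hypothetical license-aware filtering of training data.
    # Categories and policy are illustrative assumptions.
    ALLOWED_LICENSES = {"public-domain", "cc-by", "paid-license"}

    datasets = [
        {"name": "old_books",      "license": "public-domain"},
        {"name": "news_archive",   "license": "paid-license"},
        {"name": "scraped_forums", "license": "unknown"},  # no clear rights
    ]

    def licensed_for_training(dataset):
        """Return True only if the dataset's license permits model training."""
        return dataset["license"] in ALLOWED_LICENSES

    training_corpus = [d for d in datasets if licensed_for_training(d)]
    for d in training_corpus:
        print(f"training on {d['name']} ({d['license']})")
    # "scraped_forums" is excluded: no verifiable license, no training.

The point isn't the code; it's that once datasets carry machine-readable license terms, "what was this model allowed to train on?" becomes an auditable question rather than a courtroom guess.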

2. From Sci-Fi to Reality: The “Right to Read” and Digital Dystopia

To help paint this picture, I referenced Richard Stallman’s prescient 1997 short story, The Right to Read. The tale imagines a future where reading someone else’s book without paying is a crime, enforced through pervasive DRM, surveillance, and software restrictions.

In this world:

  • Every book is embedded with a copyright monitor

  • Reading someone else’s digital copy is a federal crime

  • Debuggers and free software are illegal

  • Sharing knowledge or lending a computer is considered “piracy”

While the story may seem extreme, it parallels many real-world debates:

  • Who controls what you read, stream, or use?

  • What rights do users have in sharing digital content?

  • Could AI regulation take us closer to this kind of restricted society?

If we let copyright and DRM concerns shape every AI policy, we risk losing the very innovation and freedom that these technologies promise. But if we fail to protect intellectual property, we erode trust, value, and creative incentive.

It’s a balancing act — one we must get right.

3. What Happens When AI Becomes Humanlike?

The urgency around regulation will skyrocket once AI systems become physically embodied.

Today, we interact with AI through screens and keyboards. But tomorrow’s AI will walk, drive, and talk — in cars, delivery bots, home assistants, and more. Once these machines are part of our physical world, societal demands for accountability and transparency will explode.

Imagine a robot injuring someone due to a faulty algorithm. Or a car assistant revealing private data mid-drive. The public won’t accept a black-box explanation. We’ll want to know:

  • Who trained it?

  • On what data?

  • Who’s responsible?

Regulation isn’t just about code — it’s about trust.

4. Closing Thoughts: From Chaos to Clarity

The AI regulation landscape is murky, fragmented, and evolving fast. But some principles are becoming clear:

  • Expect more bans — especially on foreign-made apps and tools with unclear data flows.

  • Transparency is coming — in the form of AI nutrition labels, security certifications, and on-prem standards.

  • Copyright will shape the future — with data licensing and model training rights leading to AI-based DRM.

If we don’t get ahead of this, we risk falling into a surveillance-driven, DRM-heavy, creativity-stifled future. But if we act smart — with open standards, strong ethics, and clear policies — we can unlock an AI-powered era of innovation and trust.

At Gula Tech Adventures, we’re keeping a close watch on these developments — and investing in companies that want to build AI responsibly.

If this conversation interests you, check out our other AI-focused content on YouTube or reach out on LinkedIn. We’re creating short animations, virtual podcast episodes, and educational tools to help founders, policymakers, and the public navigate this fast-changing landscape.

Thanks for reading — and here’s to building an AI future we can believe in.

– Ron Gula
Gula Tech Adventures

 
