Running powerful artificial intelligence on your own devices—laptops, phones, home servers, edge boxes—is increasingly possible and increasingly contested. The right to compute local‑first AI means retaining the legal and practical ability to run AI models locally without being forced into cloud-only, permissioned services controlled by a handful of corporations.
“Protecting” this right has three distinct layers: protecting yourself from restrictive laws that could criminalize local model use, protecting your devices from vendor lock-in that channels you toward cloud APIs, and protecting your data from being forced onto other people’s servers where it can be surveilled, monetized, or censored.
The good news? You can act now. Choose open hardware and software. Back state legislation like Montana’s groundbreaking Right to Compute Act. Resist contracts that prohibit local inference. Between 2025 and 2027, a wave of state AI laws will shape whether computation remains a fundamental right or becomes a licensed privilege.
This is a practical, step-by-step article. Let’s get to work.
What is the “Right to Compute,” and why does it matter for local‑first AI?
The right to compute is the freedom to own, freely access, and employ computation resources and AI systems without discriminatory restrictions from governments, utilities, or platform vendors. It’s the digital age equivalent of owning a printing press and being allowed to use it.
Real policy is already catching up to this concept. Montana’s Right to Compute Act (SB 212), signed into law by Governor Gianforte in 2025, made Montana the first state to enshrine computational rights in statute. Senator Daniel Zolnikov championed the bill, arguing that computational freedom is as essential to individual freedom in the modern era as property rights were in previous centuries.
New Hampshire and Idaho introduced proposed constitutional amendments in 2025 that would elevate the right to compute to constitutional protection. These efforts treat computation as infrastructure for free expression, economic growth, and human decision making—not a luxury that can be switched off at a regulator’s whim.
What does this mean for local‑first AI specifically?
It means being able to run models like Llama-3, Mistral, or Gemma on your own GPU instead of routing every query through centralized vendors who:
- Log your prompts and training data
- Can change terms of service to ban certain uses overnight
- May comply with government actions that restrict generated content
- Impose discriminatory rates or throttle access based on undisclosed criteria
Privacy is the obvious benefit—no forced data uploads to data centers you don’t control. But resilience matters too. Local AI tools work when cloud services are censored, down, or priced out of reach. And competition depends on it: if only a few hyperscalers can legally run powerful AI systems, economic competitiveness collapses into oligopoly.
As AI becomes more capable, governments and large firms have growing incentives to centralize control. The time to articulate and defend a right to compute is now, before the window closes.
The policy landscape: how state and federal AI rules can affect local compute
States are driving AI policy in the United States while federal regulation remains fragmented. President Trump’s early 2025 repeal of President Biden’s executive order on AI removed the primary federal framework, leaving a patchwork of state legislation to fill the gap.
Analysts tracking the 2025-2026 sessions describe three broad approaches emerging across the states:
| Approach | Examples | Implications for Local AI |
|---|---|---|
| Innovation-first | Montana, New Hampshire | Explicitly protects computational rights; light-touch regulation |
| Protection-first | California, parts of New York | Heavier guardrails; risk of overbroad restrictions on high-risk AI |
| Balanced/in flux | Texas, Colorado | Middle path; outcomes depend on lobbying and legislative details |
Montana’s SB 212 exemplifies the innovation-first model. It prohibits state entities from unreasonably burdening computational activities, requires any restriction to serve a compelling government interest and be narrowly tailored, and bars utilities from imposing discriminatory rates on residents running personal servers or AI workloads.
California’s approach includes laws like SB 53, which imposes transparency and risk-assessment requirements on developers of large frontier models. While those rules target corporate providers, vague definitions of “high-risk AI” and automated decision-making technology could inadvertently sweep in researchers or small teams running local inference.
The danger lies in poorly drafted AI regulations that:
- Define “high-risk” so broadly that hobbyist fine-tuning triggers compliance burdens
- Require pre-use notices or registration for any model above a certain parameter count
- Create liability for publishing open-source models without extensive audits
- Mandate that significant decisions involving AI only occur through approved, cloud-hosted systems
Some proposals copy the EU’s precautionary approach, while others explicitly resist “Europeanization” and seek frameworks that don’t unreasonably burden innovation. Users need to monitor bills in their own states and advocate for language that protects local compute—not just corporate cloud AI.

Legal and civic steps to protect your right to run AI locally
Protecting your right to compute isn’t just about buying the right hardware. It requires civic engagement with the state AI laws being written right now.
Track state legislation actively
Use public tools to monitor bills in your state:
- Official state legislature websites with bill tracking and alerts
- Services like PolicyNote, LegiScan, or state-specific trackers
- RSS feeds or email alerts for keywords relevant to AI law
Search for these specific terms in bill text:
- “Right to compute” or “computational resources”
- “High-risk AI” or “automated decision-making”
- “Model inference” or “end-user devices”
- “Generative AI tool” or “frontier models”
When you find relevant bills, read the full text. Summaries can obscure language that matters for local compute.
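If you export tracker results to a simple spreadsheet, a few lines of Python can flag the bills worth reading in full. Here is a minimal sketch; the CSV file name and column names are hypothetical placeholders for whatever your tracker exports:

```python
# Minimal sketch: flag bills whose title or summary mentions compute-related terms.
# Assumes a CSV exported from your bill tracker with (hypothetical) columns:
# bill_id, title, summary.
import csv

KEYWORDS = [
    "right to compute", "computational resources", "high-risk ai",
    "automated decision", "model inference", "generative ai", "frontier model",
]

def flag_bills(csv_path: str) -> list[dict]:
    """Return bills whose title or summary contains any tracked keyword."""
    flagged = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = f"{row.get('title', '')} {row.get('summary', '')}".lower()
            hits = [kw for kw in KEYWORDS if kw in text]
            if hits:
                flagged.append({"bill": row.get("bill_id"), "matched": hits})
    return flagged

if __name__ == "__main__":
    for item in flag_bills("bills_2026_session.csv"):
        print(item["bill"], "->", ", ".join(item["matched"]))
```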
Support pro-compute bills
Montana’s SB 212 provides a template other states may copy between 2025 and 2027. If similar legislation appears in your state:
- Email or call your state legislators expressing support
- Submit written testimony for committee hearings
- Attend hearings in person when possible—physical presence signals constituent interest
- Ask for explicit language protecting local use of open-source models on personal hardware
Request that bills include carve-outs for:
- Personal, non-commercial use of AI tools
- Research and experimentation on privately-owned devices
- Development and distribution of open-source AI frameworks
Push back on restrictive proposals
When you encounter bills that threaten local compute:
- Request exemptions for personal, non-commercial local AI use
- Advocate for risk-based regulation focused on actual harms—fraud, critical infrastructure attacks, financial or lending services discrimination—rather than generic limits on computing power
- Argue that pre-crime restrictions on compute mirror the failed logic of the 1990s crypto wars
Point out that restricting local AI doesn’t stop bad actors (who will ignore the law) but does harm:
- Researchers who can’t experiment without corporate permission
- Small businesses priced out of cloud AI
- Journalists and activists who need privacy from surveillance
- Rural users with limited connectivity
Join advocacy organizations
Organizations actively working on these issues include:
- RightToCompute.ai and similar decentralization initiatives
- Digital rights groups like EFF that have experience with encryption and device freedom battles
- Local tech clubs and hacker spaces tracking AI legislation
- Industry groups representing small AI startups and open-source developers
Constitutional amendments (like New Hampshire’s 2025 proposal) and statutory protections both matter. Support whichever vehicle is realistic in your state’s political environment.
Key policy principles you should demand from lawmakers
When engaging with legislators, ask for specific protections:
Non-discrimination in infrastructure access
Utilities and networks essential to computation should not impose discriminatory rates targeting home GPU servers, small AI labs, or high-bandwidth research. Montana’s law explicitly bars this. Other states should follow.
Due process and narrow tailoring
Any restriction on compute must be narrowly tailored to a demonstrable public safety or public health need. The burden of proof should be on the government, not the citizen. Vague references to “AI safety” shouldn’t justify blanket bans.
Local-use carve-outs
Laws should explicitly exempt:
- Running models privately on devices you own
- Encryption and privacy-preserving computation
- Research, education, and experimentation
- Modifications to models for lawful purposes
Open-source protection
Development and distribution of open-source models and tooling should be shielded from vague liability. Holding tool creators responsible for all possible misuse would destroy the open AI ecosystem, just as similar theories would have killed the web browser.
Suggest that lawmakers anchor AI law to flexible frameworks like NIST’s AI Risk Management Framework rather than writing rigid technology-specific rules that become obsolete within years.
Technical choices that preserve your local‑first AI freedom
Your hardware, OS, and software choices can protect or erode your right to compute—regardless of what any law says. Platform lock-in is a policy choice by vendors, and you can choose differently.
Choose hardware that respects ownership rights
When selecting devices for local AI:
Prefer open bootloaders and OS flexibility
- Desktop PCs remain the gold standard for user control
- Some laptops allow alternative OS installation without fighting secure boot
- ARM-based single-board computers and self-hosted edge boxes offer full control
- Avoid devices that require cloud accounts just to complete initial setup
Ensure sufficient compute for inference
Consumer GPUs released in 2024-2026 with 8-24 GB of VRAM can run models from roughly 7B up to 70B parameters when quantized:
- Entry level (8GB): Small models, fast responses
- Mid-range (16GB): Most practical local LLMs
- Enthusiast (24GB+): Larger models, less quantization needed
NPUs in recent laptops and phones can also run smaller models efficiently, but check whether the vendor locks NPU access to approved apps.
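To sanity-check whether a given card can hold a model, a rough rule of thumb is parameter count times bits per weight divided by eight, plus overhead for the KV cache and runtime buffers. A minimal sketch of that arithmetic, with the 20% overhead factor as an illustrative assumption rather than a measured figure:

```python
# Back-of-envelope VRAM estimate for a quantized model:
# weights ~= parameter_count * bits_per_weight / 8, plus a rough
# overhead factor for the KV cache and runtime buffers (assumed 20%).
def estimate_vram_gb(params_billions: float, bits_per_weight: float,
                     overhead: float = 0.20) -> float:
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9  # decimal GB

for params, bits in [(7, 4), (13, 4), (70, 4), (70, 2.5)]:
    print(f"{params}B at {bits}-bit ~ {estimate_vram_gb(params, bits):.1f} GB")
```

The numbers make the tiers above concrete: a 4-bit 7B model fits comfortably in 8 GB, while a 70B model needs very aggressive quantization or partial CPU offload even on a 24 GB card.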
Use open-source AI stacks
Build your local AI setup on foundations you control:
Frameworks and runtimes
- PyTorch for general ML development
- llama.cpp for efficient CPU/GPU inference of LLMs
- Other specialized runtimes for vision and audio models
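As a concrete example of the llama.cpp route, here is a minimal local-inference sketch using its Python bindings (llama-cpp-python). The GGUF path is a placeholder for a quantized model you have already downloaded and license-checked:

```python
# Minimal local inference sketch using llama.cpp's Python bindings
# (pip install llama-cpp-python). The GGUF file below is a placeholder
# for whatever quantized model you have downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct-q4_k_m.gguf",  # hypothetical path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if VRAM allows
)

result = llm(
    "Explain in two sentences why local inference matters for privacy.",
    max_tokens=128,
)
print(result["choices"][0]["text"])
```

Nothing here touches the network once the weights are on disk, which is the whole point.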
Open models with permissive licenses
- Llama-3 community versions
- Mistral and Mixtral variants
- Gemma from Google (check specific license terms)
- Numerous 2024-2025 open models from independent labs
Always verify the license of any model you download. Some “open” models restrict commercial use or require attribution. A model you fine-tune from those weights may inherit the same license obligations.
Avoid cloud-tethered traps
Watch for AI tools and “appliances” that:
- Disable core features when offline
- Restrict sideloading of models or plugins
- Require periodic “license verification” via internet
- Include terms of service that let the vendor remotely update or restrict functionality
Mobile AI apps are particularly prone to these patterns. A local assistant that stops working when you’re on airplane mode isn’t truly local.
Build a sovereign stack
Create at least one computing environment you fully control:
- A small Linux machine (desktop, mini-PC, or repurposed laptop)
- Local LLM inference (llama.cpp or similar)
- Local image generation if needed
- Vector database for document retrieval (RAG)
- All running on encrypted storage you physically possess
This “sovereign stack” is your fallback. When cloud services change terms, when vendors decide certain topics are forbidden, when your internet goes down—your sovereign stack keeps working.
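As a sketch of the retrieval piece, here is a tiny local RAG loop: embed a handful of documents with a locally run embedding model, then rank them against a query by cosine similarity. It assumes the sentence-transformers package; the model name is just a small, commonly used example, and everything runs on your own machine after the initial download:

```python
# Tiny local retrieval sketch (the RAG piece of a sovereign stack):
# embed documents with a locally run embedding model and rank them
# against a query by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Montana's SB 212 bars discriminatory utility rates on home servers.",
    "Quantized 7B models run comfortably on an 8 GB consumer GPU.",
    "Full-disk encryption protects model weights and chat history at rest.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small example model
doc_vecs = model.encode(docs, normalize_embeddings=True)

def search(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    return [docs[i] for i in np.argsort(scores)[::-1][:top_k]]

print(search("What does Montana's law say about utilities?"))
```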

Avoiding future lock-in: contracts, DRM, and remote attestation
The next frontier of control isn’t just law—it’s technology policy embedded in the devices themselves.
Read terms of service carefully
Software licenses for AI tools, cloud services, and OS updates may:
- Prohibit running “unapproved” models
- Ban benchmarking or publishing performance comparisons
- Reserve the right to disable features remotely
- Require arbitration in distant jurisdictions
Decline contracts that give vendors power to remotely disable local compute or enforce usage caps unrelated to safety.
Watch for remote attestation requirements
Some operating systems and chipmakers are tying AI features to “trusted computing” environments that require remote attestation—proving to a server that you’re running approved software. In 2024-2026, this is emerging in:
- Enterprise security contexts (sometimes reasonable)
- Consumer AI features (often about control, not safety)
When evaluating products, prefer those that:
- Let you disable remote attestation for personal workloads
- Don’t bundle AI “safety” enforcement with digital rights management
- Provide local administrative override capabilities
Firmware and driver concerns
Some GPU driver stacks now require cloud accounts or telemetry consent to unlock full performance. This trend may accelerate. Prefer:
- Vendors with a track record of supporting open-source drivers
- Hardware with published specs that enable community driver development
- Private ownership of firmware-level control where possible
Data, privacy, and security: protecting the content of your computation
Protecting your right to compute means nothing if your inputs and outputs leak to third parties. Local-first AI dramatically reduces exposure compared to cloud inference, but only if you actually keep data local.
Minimize telemetry and data harvesting
Modern operating systems and AI tools often include opt-out (not opt-in) telemetry:
- Prompts and completions may be uploaded for “product improvement”
- Voice assistants may have cloud processing even for “local” features
- “AI suggestions” in productivity apps may send document context to servers
Audit your settings. Turn off:
- Prompt/response logging to cloud services
- “Help improve AI” or similar feedback mechanisms
- Automatic syncing of local AI conversation histories to cloud notes apps
Use offline-capable models for sensitive tasks
For genuinely sensitive work (legal drafts, health notes, business strategy, privileged attorney-client communications, anything that feeds a consequential decision), use models that run entirely offline:
- Download model weights to local storage
- Disconnect from network during sensitive sessions
- Verify no background processes are phoning home
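One practical way to check that last point is to list established outbound connections while your “offline” session is running. A minimal sketch using psutil (assumed installed; some platforms require elevated privileges to see other processes’ connections):

```python
# Quick check that nothing is phoning home during a "local-only" session:
# list established outbound TCP connections and the processes behind them.
import psutil

def outbound_connections():
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
            try:
                name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                name = "unavailable"
            yield name, f"{conn.raddr.ip}:{conn.raddr.port}"

for proc, remote in outbound_connections():
    print(f"{proc:<25} -> {remote}")
```

An empty list during a sensitive session is what you want to see.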
Secure your local AI infrastructure
If you’re running a home AI server or NAS hosting models:
- Enable full-disk encryption
- Use strong authentication (not just passwords)
- Segment your network so AI servers aren’t directly exposed
- Keep systems updated against known vulnerabilities
A compromised local AI setup is a single, catastrophic point of failure. Excessive government interference isn’t the only threat; criminal hackers love unpatched home servers too.
Compliance and risk management for power users and small teams
For small businesses, researchers, and startups running local clusters, a lightweight risk management policy helps demonstrate responsibility:
Map your uses against existing law
You don’t need new AI-specific statutes to know that:
- Using AI to commit fraud is illegal
- Deploying discriminatory automated decision-making in lending violates civil rights law
- Generating certain illegal content is criminal regardless of the tool
Document what you’re using local AI for and why it’s lawful.
Adopt lightweight governance
Inspired by NIST’s AI Risk Management Framework:
- Document model sources and versions
- Record intended uses and known limitations
- Log significant decisions where AI played a role
- Review periodically for drift or misuse
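None of this requires special tooling. A minimal sketch: append one JSON line per model registration, AI-assisted decision, or periodic review. The file name and field names are illustrative, not a prescribed schema:

```python
# Minimal governance log: append one JSON line per event.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_governance_log.jsonl")

def log_event(kind: str, **details) -> None:
    """Append a timestamped governance record (model source, decision, review)."""
    record = {"ts": datetime.now(timezone.utc).isoformat(), "kind": kind, **details}
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# One example of each record type suggested above.
log_event("model_registered",
          model="mistral-7b-instruct", version="q4_k_m",
          source="self-hosted GGUF", intended_use="internal drafting")
log_event("decision",
          summary="shortlisted vendor quotes", ai_role="ranking assistant",
          human_reviewer="A. Smith")
log_event("periodic_review", finding="no drift observed", next_review="2026-09-01")
```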
Why this matters
Having internal policies and logs helps rebut claims that local compute is inherently unsafe or unmanageable. When you can show deliberate, documented practices, you’re a responsible user—not a reckless risk.
Participation in voluntary standards and industry codes of conduct demonstrates that strong ownership rights for local AI can coexist with robust safety practices.
Building a local‑first AI movement: from personal practice to public norms
Legal rights solidify when they reflect widely adopted social practices. Running local AI tools daily—and being visible about it—helps normalize the behavior and makes it harder to restrict.
Share your setup (carefully)
Without exposing sensitive details:
- Blog about your local AI configuration
- Present at meetups and conferences
- Answer questions in forums and community spaces
- Demonstrate that “normal people” run local AI for legitimate purposes
Every visible user makes the practice harder to demonize.
Support the ecosystem
Contribute to or fund open-source AI projects that prioritize on-device performance:
- Inference engines optimized for consumer hardware
- Quantization and optimization tools
- Model releases with permissive licenses
- Documentation making local AI accessible to beginners
The Frontier Institute and similar organizations study and advocate for open AI development. Their research informs the action plans of lawmakers and industry groups.
Join or create advocacy efforts
Existing initiatives to study or join:
- RightToCompute.ai
- Haltia.AI and decentralization-focused groups
- Local chapters of digital rights organizations
- University AI ethics and policy groups
Organize “local-first AI” workshops teaching:
- How to install and run a local LLM
- How to keep data private
- How to avoid vendor lock-in
- How to engage with state legislation
Public familiarity reduces fear-based policy. When legislators understand that local AI is used by their constituents for mundane, beneficial tasks—not just by bad actors—they’re less susceptible to lobbyists framing local compute as exotic or dangerous.
Build coalitions
Effective technology policy advocacy brings together unlikely allies:
- Civil liberties organizations (privacy, free speech)
- Small AI startups (economic competitiveness)
- Hardware hackers and maker communities (device freedom)
- Academic researchers (scientific openness)
Joint position papers signed by such diverse groups carry weight in 2026 legislative sessions.

What to watch in 2025–2027: signals your right to compute may be at risk
Stay alert for early-warning signs that computational freedom is under threat:
Legislative red flags
- Bills that cap or license GPU ownership or “excessive” home energy use for computing
- Proposals treating all model training or fine-tuning as high risk by default
- Requirements that any AI inference require connection to a government-approved monitoring system
- State attorney general powers to seize equipment or block internet access based on suspected AI misuse
Infrastructure threats
- Utility tariffs that quietly penalize running personal servers or persistent AI workloads
- ISPs imposing discriminatory rates on traffic patterns associated with local AI
- Cloud providers lobbying for regulations that exclude self-hosted alternatives
Platform lock-in escalation
- OS or hardware updates that remove the option to run unsigned local AI code
- GPU drivers that require “safety verification” to unlock compute capabilities
- Mobile platforms banning apps that enable local model inference
How to stay informed
- Set alerts in state bill databases for terms like “artificial intelligence,” “automated decision,” “computational resources,” and “generative AI”
- Follow 2026 and 2027 state policy outlooks from law firms and technology policy organizations
- Track news from states with large tech sectors (California, New York, Washington, Massachusetts) and states experimenting with right-to-compute models (Montana, New Hampshire, Idaho, Utah)
Vigilance now prevents emergency mobilization later.
Key takeaways
- The right to compute is emerging as a fundamental right in the digital age—Montana became the first state to protect it in 2025, with New Hampshire and other states considering constitutional amendments
- State AI laws vary dramatically; some protect local compute while others could inadvertently criminalize it through overbroad definitions
- Civic engagement matters: track legislation, support pro-compute bills, push back on restrictions, and join advocacy organizations
- Technical choices are political choices: open hardware, open-source AI stacks, and avoiding cloud-tethered tools preserve your freedom regardless of law
- Privacy and security practices validate local AI—document your uses, secure your systems, and demonstrate that private ownership of computing power is responsible
- Building a visible movement normalizes local AI and makes restrictive regulation politically harder
Take action this month
Don’t wait for the next legislative session to start protecting your right to compute:
- Audit your stack. Identify one cloud AI dependency you could replace with a local alternative.
- Spin up a local model. Install llama.cpp or a similar runtime and run one open-source LLM on your own hardware before the end of the month.
- Contact your legislators. Send one email to your state senator or representative asking where they stand on the right to compute and whether they support protections modeled on Montana’s Right to Compute Act.
- Share what you learn. Write a post, give a talk, or just tell a friend. Every person running local AI is one more argument against restrictions.
The right to compute isn’t granted by governments—it’s exercised by citizens. Start exercising yours today.
Your friend,
Wade
