Key Takeaways
- Opposition to AI has surged since late 2022 when tools like ChatGPT, Midjourney, and DALL·E went mainstream, sparking concerns that span job losses, creative theft, surveillance, political manipulation, and environmental damage.
- Resistance to AI is not simply fear of new technology or ignorance: many critics are technically proficient experts, artists, educators, and industry insiders who understand exactly what these systems do. The online shorthand "AI bad" compresses a wide range of ethical and societal concerns about harmful uses and negative impacts of artificial intelligence.
- Four core anxieties thread through the backlash: loss of work and skills, erosion of human creativity and identity, concentration of power in big tech and governments, and real-world harms like scams, deepfakes, and carbon emissions. A fifth worry is growing: the loss of human connection and authenticity as more everyday interaction runs through systems that lack emotional depth.
- Most people who dislike AI are not calling for total abolition. They want specific reforms like opt-in training data, mandatory labeling, deepfake bans, labor protections, and transparent environmental reporting, driven in part by the fear that AI-generated fake content will erode trust in media and democratic institutions.
- Understanding why some people hate AI helps clarify what concrete changes they are actually demanding, rather than dismissing the backlash as knee-jerk technophobia.
This blog post walks through the main reasons people are against AI, using concrete examples from 2023–2025 rather than abstract hypotheticals. The goal is not to promote AI or condemn it, but to map the arguments so you can understand where the backlash comes from and decide what you think.
Why Is There So Much Anger About AI Right Now?
When OpenAI released ChatGPT in November 2022, the initial reaction was mostly curiosity and excitement. Here was a chatbot that could write essays, debug code, and hold surprisingly coherent conversations. Within two months, it had over 100 million users. Midjourney and Stable Diffusion were generating photorealistic images from text prompts. The future had arrived, and it seemed fun.
Then the backlash began. By mid-2023, the mood had shifted dramatically. Artists filed lawsuits. Writers protested. Tech workers watched colleagues get laid off while executives talked about AI-powered restructuring. Educators scrambled to update academic integrity policies. Climate activists pointed to soaring data center emissions. And across social media, you could find people who genuinely hate AI—not just people who are skeptical, but people who view the entire project as morally wrong.
What happened? The short answer is that generative AI touched too many nerves at once. Most people never had strong feelings about spam filters or recommendation algorithms, older AI systems that worked quietly in the background. But generative AI produces complete outputs that feel human: essays, artwork, music, speech. Many people have also noticed quality problems with AI-generated content, from fake-looking images to robotic customer service, and those failures undermine trust. It competes directly with human activity in domains that many people consider central to identity and meaning.
The opposition comes from many groups who rarely agree on anything else. You have illustrators suing Stability AI over training data, tech workers worried about their jobs, privacy advocates concerned about surveillance, educators watching students bypass critical thinking, and climate activists tracking water usage at data centers. Some critics lean left, some lean right, and many are just people who feel something important is being taken from them without consent. Public imagination swings between utopian and dystopian extremes, from unprecedented benefits to severe societal disruption, and the concerns in between range from job loss to ethical dilemmas and environmental impact.

For the rest of this article, I will unpack the most common reasons people are against AI, grouping them into economic, cultural, political, ethical, environmental, and philosophical concerns. While some extreme claims are outliers, most arguments discussed here are grounded in specific events, lawsuits, or studies from 2023–2025. One theme recurs throughout: AI-generated content is widely seen as devaluing human creativity and livelihoods in the arts.
Economic Fears: Jobs, Wages, and Hype-Driven Layoffs
The tech and media industries experienced a brutal layoff wave from 2022 through 2025. Over 260,000 tech jobs were cut in 2023 alone, across companies like Google, Meta, and Microsoft. In many earnings calls, executives justified these cuts with references to AI-powered restructuring, efficiency gains, and automation. As AI adoption increases, fewer people are required in various roles, feeding widespread concerns about job displacement.
For workers watching this unfold, the message was clear: AI tools are coming for your job, and companies are excited about it. There is significant concern that AI will automate tasks at scale, resulting in mass unemployment in both blue-collar and white-collar industries.
Specific examples fuel these fears. Customer service departments have begun replacing staff with chatbots, and a 2022 study predicted that chatbots would become the main customer service channel for roughly 25% of companies by 2027. McDonald's has been testing Dynamic Yield AI for ordering since 2021 and expanded the system through 2023. Miso Robotics' Flippy, the burger-flipping robot that debuted in 2018, has been working fry stations at White Castle locations since 2020. GitHub Copilot now automates an estimated 40–50% of routine coding tasks, reducing demand for junior developers who traditionally built skills through those assignments.
Critics argue that much of this AI adoption is less about genuine productivity and more about pleasing investors. Adding AI features—however superficial—became a reliable way to boost stock prices or secure funding during 2023–2024. Companies slapped AI labels on products that barely used the technology, chasing valuations rather than solving real problems. Many rushed into AI without a sustainable business model, risking long-term failure by not analyzing market gaps or building on proven models. This pattern looks familiar to anyone who watched the crypto and NFT hype cycles of 2017–2022.
The hype is also fueled by a research arms race: major tech companies showcase their advances toward AI and AGI to attract investor confidence, shape market perceptions, and maintain a competitive advantage in the industry.
Research from 2023 found that workers using large language models complete tasks faster but often produce lower-quality or less original work. This faux productivity creates a troubling dynamic: employees report feeling more efficient while actually delivering weaker results. Meanwhile, consulting firms project $15.7 trillion in global AI economic impact by 2030, and predictions indicate that 40% of jobs could be disrupted by AI-powered automation by 2030, creating pressure for rapid deployment regardless of whether the technology delivers on its promises.
Many people see the AI boom as another speculative bubble driven by greed, FOMO, and hype. Ordinary workers bear the layoff risks while executives chase sky-high valuations. Critics advocate for slower deployment, stronger labor protections, retraining programs, union involvement in automation decisions, and taxation of AI profits to fund safety nets. They argue that using AI to replace humans should require more money invested in supporting those humans, not less.
Environmental costs compound the economic critique: AI data centers consume vast quantities of water for cooling, straining local water supplies (a concern covered in more depth below).
Cultural and Creative Concerns: Is AI Hollowing Out Human Art and Learning?
Between 2023 and 2025, AI-written books flooded Amazon Kindle—some estimates suggest 10–20% of new ebooks by mid-2024 were AI-generated. AI music tools like Udio began mimicking famous artists’ voices. Image generators learned to replicate the distinct styles of studios like Studio Ghibli, sparking outrage among animators and illustrators who had spent decades developing those aesthetics.
For many artists, this feels like plagiarism, or at minimum parasitism. Generative AI systems train on billions of images, songs, and texts scraped from the internet without permission or payment. They then produce outputs in the style of specific creators, outputs that compete directly with the people whose work was absorbed. When someone generates a Ghibli-style scene for free in seconds, the illustrators who would have been hired for that commission lose both income and recognition.
The backlash has been concrete. In 2023, authors including Sarah Silverman filed class-action suits against OpenAI and Meta. Artists sued Stability AI over training data. Illustrators on ArtStation organized protests, plastering "No AI Art" banners across their portfolios. Social media erupted over AI-generated Ghibli-style scenes that circulated without any acknowledgment of the human work they mimicked.
These concerns extend beyond established professionals. Companies now use AI storyboards instead of human concept artists. Publishers experiment with AI-written genre fiction to cut costs. Record labels test synthetic voices rather than hiring session vocalists. The fear is that cheap AI content will devalue creative labor across the board, leaving humans competing against machines trained on their own work.

The deskilling argument extends to education. Since 2022, teachers and professors have watched students submit AI-generated homework with minimal critical thinking. Unlike calculators or search engines, which assist with subtasks, generative AI can produce entire essays, codebases, or artworks. Students who bypass the difficult thinking steps may end up with a shallower understanding of writing, coding, language learning, and problem-solving, and there is growing concern that habitual reliance on AI for tasks and decisions erodes critical thinking, memory retention, and creativity, leaving students passive and less skilled.
In educational settings, AI models are often tested on logic puzzles, such as the classic river-crossing problem involving a wolf, a goat, and a cabbage. While AI can sometimes provide an answer, it often struggles to adapt to variations of the problem and fails to explain its reasoning, highlighting the limitations of current large language models in true problem-solving.
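For contrast, here is what a complete, inspectable solution to that puzzle looks like: a short breadth-first search over the state space. This is a minimal sketch of my own (the state encoding and function names are invented for this post, not taken from any benchmark), but unlike a language model it handles any rule variation you encode, and every step of its reasoning can be traced.

```python
from collections import deque

ITEMS = {"wolf", "goat", "cabbage"}
UNSAFE = [{"wolf", "goat"}, {"goat", "cabbage"}]  # pairs that can't be left alone

def safe(bank):
    """A bank is safe if no forbidden pair is on it without the farmer."""
    return not any(pair <= bank for pair in UNSAFE)

def solve():
    # State: (items on the left bank, is the farmer on the left?).
    start = (frozenset(ITEMS), True)
    goal = (frozenset(), False)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, farmer_left), path = queue.popleft()
        if (left, farmer_left) == goal:
            return path
        here = left if farmer_left else ITEMS - left
        for cargo in [None, *here]:  # cross alone, or carry one item
            new_left = set(left)
            if cargo is not None:
                (new_left.remove if farmer_left else new_left.add)(cargo)
            new_left = frozenset(new_left)
            # The bank the farmer just left must be safe unattended.
            unattended = new_left if farmer_left else ITEMS - new_left
            state = (new_left, not farmer_left)
            if safe(unattended) and state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo or "nothing"]))

print(solve())  # e.g. ['goat', 'nothing', 'wolf', 'goat', 'cabbage', 'nothing', 'goat']
```

Change the items or the unsafe pairs and the search still finds a valid plan, or proves none exists. That kind of systematic adaptation is exactly what critics say current chatbots fail to demonstrate.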
Some opponents want clear labeling of AI-generated work, outright bans on AI in certain contests and journals (Nature adopted such policies in 2023), and new norms in schools where AI is used transparently rather than secretly. The point is not necessarily to ban AI tools entirely, but to preserve spaces where human effort and skill still matter, and to prevent the loss of human connection and authenticity that comes from replacing people with systems lacking emotional depth.
Power, Politics, and “Big Tech Bullshit”
For many critics, anger at AI is inseparable from anger at big tech companies like Meta, Google, Microsoft, and OpenAI. These firms have poured billions into AI development—Microsoft alone invested $13 billion in OpenAI by 2023—while maintaining minimal transparency about training data, safety processes, or lobbying efforts. OpenAI spent $1.6 million on U.S. lobbying in 2023. Different approaches to legal and regulatory frameworks are being debated globally, but there is no consensus on the best way to govern these powerful technologies.
This pattern feels familiar to anyone who watched the crypto boom. From 2017 to 2022, many of the same influencers, venture capitalists, and tech bros hyped tokens, NFTs, and blockchain games. When that bubble deflated, the same crowd pivoted to AI. For skeptics, generative AI looks like another VC-fueled fad, dressed up in revolutionary rhetoric but primarily designed to extract money from investors and eventually from everyone else.
The "big tech hedge bet" view holds that giants invest in AI partly to avoid disruption and partly to create subscription-based platforms that lock in customers. The result is walled gardens where only a handful of cloud providers can afford state-of-the-art models, while smaller players must rent access. Regulators and the public have little visibility into how these systems actually work, which makes unpredictable consequences harder to spot as AI continues to spread.
Surveillance concerns add another layer. AI-powered facial recognition has expanded through services like Clearview AI, and AI's ability to process vast amounts of personal data raises fears of mass surveillance. Predictive policing tools like LAPD's PredPol have shown documented biases. Risk-scoring systems in finance and welfare make consequential decisions about real people using opaque algorithms. Critics see AI as a tool that concentrates control in the hands of corporations and governments, with ordinary people left as subjects rather than participants.
Political manipulation fears intensified during 2023–2024. Fake Biden robocalls disrupted the New Hampshire primary in January 2024, prompting an FCC investigation. Fabricated Zelenskyy surrender videos circulated online. As deepfakes become more convincing, citizens face growing difficulty knowing what is real. For some people, being against AI is as much about distrusting the institutions that build and deploy it as about the algorithms themselves.
AI systems trained on biased historical data can perpetuate or magnify discrimination in hiring, lending, healthcare, and criminal justice, raising further concerns about fairness and accountability.
Ethical and Legal Objections: Plagiarism, Bias, and Autonomy
Concerns about AI span immediate ethical and economic issues as well as long-term existential risks: job displacement, perpetuation of bias, lack of transparency, and environmental impact all recur in the arguments against developing and deploying these systems. Beyond political concerns, many people believe current AI systems have already crossed ethical and legal lines, regardless of whether superintelligence ever arrives.
The plagiarism and copyright debate centers on training data. AI tools learned from copyrighted books, code repositories, newspapers, and artwork. In December 2023, The New York Times sued OpenAI and Microsoft, alleging that their models could regurgitate verbatim passages from paywalled articles. Even if courts ultimately accept transformative use defenses, critics argue that regurgitation-plus-rewording is still morally wrong when the original creators received nothing.
Bias and discrimination present another set of problems. Amazon's hiring tool, exposed in 2018, discriminated against women. A 2019 NIST study found that facial recognition systems misidentified darker-skinned faces at rates up to 100 times higher than lighter-skinned faces. Large language models trained on flawed data can reflect and amplify stereotypes in hiring, policing, and lending decisions. These are not hypothetical future harms; they are documented failures already affecting real people.
Opacity and lack of transparency are major concerns because AI systems often function as black boxes, making it difficult to interpret their internal decision-making. In this respect they resemble the human brain: both are complex systems whose inner workings are not fully understood, and that opacity fuels skepticism about claims regarding their reasoning and consciousness.
Autonomy and consent raise philosophical questions. People cannot opt out of having their online content scraped and used for training. They cannot see or delete how their words and images have influenced a model. Critics view this as a violation of personal and collective agency—your creative work, your writing, your likeness can all be absorbed without asking.
Professional codes of conduct are evolving in response. Universities updated academic integrity policies in 2023–2024 to address AI-assisted writing. Medical bodies like the AMA issued principles emphasizing human oversight in diagnostics. Law firms faced sanctions when AI-hallucinated citations appeared in court filings, as in the 2023 Mata v. Avianca case. The conversation about how much reliance on AI is acceptable, in settings where human judgment and accountability are supposed to be central, remains unresolved.
As people lean on AI for advice about relationships, mental health, or life decisions, there is also a risk of subtle manipulation. Outputs could be nudged by commercial incentives, hidden prompts, or policy goals without users being aware. Many who are against AI specifically call for tighter law: clearer copyright rules for training data, mandatory transparency around datasets and model behavior, and liability when AI systems are used negligently.
Concrete Harms: Deepfakes, Scams, and Information Decay
In 2024, a finance employee in Hong Kong authorized transfers of $25 million after a deepfake video call convinced him he was speaking with his company's CFO. The FTC logged billions of dollars in U.S. fraud losses in 2023, with impersonation scams among the largest categories and AI-enabled variants rising sharply. Non-consensual explicit deepfakes targeted Taylor Swift in early 2024, prompting proposed federal legislation. These are not speculative risks; they are harms that have already happened.
Deepfake technology advanced significantly between 2018 and 2025, becoming far more accessible and convincing. Anyone with a laptop and free tools can now generate realistic fake speeches, news clips, or compromising footage. The same capability that lets hobbyists create funny videos lets bad actors commit devastating fraud.

The fraud angle scales globally. AI-written phishing emails evade filters with constantly evolving variants. Voice synthesis imitates CEOs or relatives to demand urgent wire transfers. Chatbots sound like genuine customer support while harvesting credentials. The tools that make AI useful for productivity also make scams cheaper and more convincing.
Information quality faces its own crisis. AI-generated SEO articles now fill search results; some 2024 studies detected AI content in 60% of top results for certain queries. Low-quality ebooks crowd marketplaces. A widely cited 2023 study, later published in Nature, warned of "model collapse" when AI systems train on their own outputs, leading to gradual degradation. This "AI slop" clogs the internet with content optimized for algorithms rather than humans.
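Model collapse is easy to demonstrate in miniature. The sketch below is my own toy illustration, not the setup from the paper: each generation fits a Gaussian to samples drawn from the previous generation's fit. Information lost in each round of estimation never comes back, and on most random seeds the fitted standard deviation decays toward zero, a distributional analogue of AI slop converging on bland sameness.

```python
import random
import statistics

random.seed(0)

N_SAMPLES = 20     # small samples make the effect visible within a few hundred rounds
GENERATIONS = 200

# Generation 0: "real" data from a standard normal distribution.
data = [random.gauss(0, 1) for _ in range(N_SAMPLES)]

for gen in range(GENERATIONS + 1):
    mu = statistics.fmean(data)    # "train" a model: estimate mean and spread
    sigma = statistics.stdev(data)
    if gen % 25 == 0:
        print(f"gen {gen:3d}: mean={mu:+.3f}, std={sigma:.3f}")
    # Each new generation trains only on synthetic samples from the last fit.
    data = [random.gauss(mu, sigma) for _ in range(N_SAMPLES)]
```

Watching the printed standard deviation shrink generation after generation gives some intuition for why researchers worry about models ingesting ever more of their own output.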
Fact-checkers, journalists, and courts struggle to keep up. Verifying the authenticity of images, audio, documents, and testimony becomes harder every year. This undermines trust in media, justice systems, and democratic processes. People who are against AI often point to these already-visible harms as evidence that regulation and guardrails are years behind the technology.
Environmental and Infrastructure Concerns: Energy, Water, and Hardware
Training large AI models requires enormous computing power. Estimates suggest training GPT-4-scale systems consumed around 50 gigawatt-hours of electricity, roughly the annual usage of over 4,500 average U.S. households. Inference at global scale adds continuous load as billions of queries are served daily. Analysts projected in 2024 that data centers could consume as much as 8% of U.S. power by 2030.
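The household comparison is simple arithmetic, so it is worth checking. Here is a quick back-of-the-envelope sketch, assuming an EIA-style average of roughly 10,800 kWh of electricity per U.S. household per year (that average is my assumption, not a figure from this post's sources):

```python
# Sanity-check the training-energy comparison above.
TRAINING_ENERGY_GWH = 50
KWH_PER_HOUSEHOLD_YEAR = 10_800   # assumed average annual U.S. household usage

training_kwh = TRAINING_ENERGY_GWH * 1_000_000  # 1 GWh = 1,000,000 kWh
households = training_kwh / KWH_PER_HOUSEHOLD_YEAR
print(f"{TRAINING_ENERGY_GWH} GWh ≈ {households:,.0f} household-years of electricity")
# -> 50 GWh ≈ 4,630 household-years
```

Assume a lower per-household figure, as in many European countries, and the equivalent household count rises accordingly, which is one reason published comparisons vary so widely.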
Water usage presents another concern. Google's U.S. data centers used 5.6 billion gallons (roughly 21 billion liters) in 2022, up 20% with AI workloads. Cooling these facilities consumes vast amounts of freshwater, sometimes in drought-prone regions. Protests in Chile in 2023 targeted Google over water consumption, highlighting tensions between tech expansion and local resources.

Climate activists draw comparisons to crypto mining. Both are resource-hungry technologies—similar emissions, hardware waste, and power grid stresses—but AI gets marketed as essential progress rather than speculative gambling. Critics argue that even if AI can help optimize energy grids or climate modeling, its net impact may be negative if unchecked growth in compute outpaces efficiency gains.
Calls to slow AI expansion often include demands for transparent reporting on energy and water usage, sustainable data center siting, and alignment of AI projects with genuine social benefit rather than pure profit. While some companies pledge renewable energy commitments, skeptics note that adding new load still strains grids, and efficiency improvements cannot offset exponential growth in demand.
Identity, Autonomy, and the Fear of Losing What Makes Us Human
Beyond practical concerns, many people are unsettled by AI for psychological and philosophical reasons. Generative AI blurs the line between human and machine in domains like art, writing, and conversation—areas that many consider central to identity and meaning.
Chatbots that mimic empathy or therapy create discomfort. AI companions and romance bots attract users but also critics who see them as hollow simulations that cheapen genuine human interaction. Synthetic influencers rack up followers while raising questions about authenticity. Some find comfort in these tools; others see them as substitutes that erode our ability to connect with real people.
When AI performs creative tasks—writing poems, composing music, designing characters—it can feel like an attack on cherished ideas of human uniqueness. If a machine can produce something indistinguishable from what a person creates, what does that say about talent, effort, and the meaning we derive from making things? For some, this represents progress; for others, it represents loss. There are growing concerns about the possibility of artificial general intelligence (AGI) that surpasses humans in learning, reasoning, and adaptation. Some experts argue that advanced, self-improving AGI could surpass human intelligence and pose an existential risk to humanity.
Autonomy worries extend to everyday life. Recommender systems already shape what people read, watch, and buy—roughly 70% of Netflix views come from algorithmic recommendations. As large language models become more integrated into decision-making, critics fear subtle erosion of individual agency. Choices get nudged by opaque systems optimizing for engagement, revenue, or other goals that may not align with user interests.
For these critics, being against AI is partly about defending human dignity, self-determination, and the value of doing things the hard way. They argue that life should not be reduced to optimization problems, that slow and imperfect human activity has worth even when machines could do it faster. This is not just convenience at stake—it is what kind of future generations will inherit.
Is Being Against AI the Same as Being Against Technology?
Many AI skeptics and opponents are not Luddites in the caricatured sense. They often embrace other technologies, such as smartphones, open-source software, and traditional machine learning, but see generative AI as a qualitatively different risk. In their view, arguments that applied to earlier technologies cannot simply be transferred to this one.
Historical comparisons are instructive. The Jacquard loom displaced weavers in the early 1800s. Photography threatened portrait painters. Recorded music changed how musicians earned a living. Calculators worried math teachers. The internet disrupted entire industries. In each case, society eventually adapted—but not without genuine costs to specific groups of people, and not without fights over who would bear those costs.
Some critics draw a hard line at tools that replace cognitive and creative labor rather than physical or repetitive tasks. When technology changes how people think, learn, and form identity, the stakes feel higher than when it changes how they lift boxes or sort mail. This is one reason people who happily use word processors might hate AI that writes entire documents.
Many critics call not for permanent bans, but for democratic control: public debate, regulation, labor input, and ethical frameworks. They want decisions about powerful tools to involve more than just CEOs and investors. The real divide may be less pro-tech versus anti-tech and more about who gets to decide when and how these systems are built and deployed.
How People Want AI to Change: Common Demands and Proposed Limits
Even among people strongly against AI as currently practiced, many articulate specific reforms rather than total abolition. Understanding these demands helps clarify what the backlash is actually about.
Common demands include:
| Category | Specific Demands |
|---|---|
| Training data | Explicit opt-in with compensation for creative work; transparency about what datasets include |
| Labeling | Watermarking and mandatory disclosure of AI-generated media; C2PA metadata standards (see the sketch after this table) |
| Deepfakes | Criminal penalties for non-consensual sexual deepfakes; restrictions on election-related disinformation (e.g., U.S. DEFIANCE Act 2024) |
| Labor | Shorter workweeks or job guarantees if productivity rises; retraining programs; union consultation before automation |
| Taxation | Profits from AI-driven automation taxed to fund social safety nets |
| Transparency | Disclosure of datasets and model capabilities; independent bias and safety audits; public-interest research access |
| Compute limits | Caps on training frontier models; temporary pauses until governance frameworks exist |
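To make the labeling row concrete: content-credential standards like C2PA attach signed provenance metadata to media files. The sketch below is a deliberately simplified stand-in for that idea, not the C2PA format (the record fields and function names are invented for this post). It shows the core mechanism, binding a disclosure label to the exact bytes of a file, so that any edit to the file invalidates the label.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(path: str, generator: str) -> dict:
    """Toy content credential: binds a disclosure label to a file's bytes.

    Real schemes like C2PA add cryptographic signatures and a standardized
    manifest; this sketch only demonstrates the hash-binding idea.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "asset_sha256": digest,
        "label": "AI-generated",
        "generator": generator,
        "created": datetime.now(timezone.utc).isoformat(),
    }

def verify(path: str, record: dict) -> bool:
    """The label only holds while the file's bytes still match the record."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == record["asset_sha256"]

if __name__ == "__main__":
    with open("demo.png", "wb") as f:   # stand-in asset for the demo
        f.write(b"not a real image, just demo bytes")
    record = make_provenance_record("demo.png", generator="example-model-v1")
    print(json.dumps(record, indent=2))
    print("intact:", verify("demo.png", record))  # True until the file changes
```

Real content credentials go further by signing the record so it cannot be forged, but the dependency on the original bytes is why critics pair labeling demands with requirements that platforms preserve metadata rather than strip it on upload.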
Groups like the Artists Rights Society pushed for opt-in training data in 2024. The Future of Life Institute’s 2023 pause letter gathered over 33,000 signatures. The EU AI Act established tiered risk requirements in 2024. These efforts show that people who are worried about AI are not just complaining—they are organizing around concrete policy goals.
Understanding why people are against AI, honestly and on their own terms, helps clarify what changes might actually address their concerns. The conversation is not about stopping technology entirely. It is about shaping technology so that more people benefit and fewer people get harmed.

Frequently Asked Questions
Q1: Are people who oppose AI just afraid of losing their jobs?
Job loss is a major concern, especially in writing, design, customer service, and programming. But it is not the only driver of opposition. Critics also worry about plagiarism, surveillance, deepfakes, climate impact, and erosion of human creativity and autonomy. Many vocal opponents are secure academics or professionals who are unlikely to be immediately automated, which suggests their objections extend beyond personal economic fear.
Q2: Is using AI always unethical or “cheating”?
Ethical judgments depend heavily on context. Using AI to brainstorm ideas or summarize your own notes is different from submitting AI-written work as your own in school, court, or professional settings. Many institutions are still updating their rules. If you are concerned, check specific policies—university honor codes, workplace guidelines, journal submission requirements—rather than assuming AI use is automatically acceptable or forbidden.
Q3: Can AI be regulated effectively, or is it too late?
While some damage is hard to undo—training data has already been scraped—many key decisions remain in play. The EU AI Act established tiered requirements in 2024. The Biden administration issued executive orders on AI safety in 2023. National debates continue over data protection, deepfake laws, and liability frameworks. Critics argue these are only first steps, but the idea that regulation is impossible ignores significant ongoing efforts.
Q4: Do all experts agree that current AI is overhyped?
Expert opinions are sharply divided. Some researchers and executives warn about existential risks and promise massive productivity gains. Others argue that large language models are powerful but fundamentally limited pattern-matchers, caught in a hype bubble similar to crypto. I would encourage treating claims of both utopia and doom with skepticism, and paying attention to who benefits from particular narratives about AI’s future.
Q5: Is it reasonable to use AI cautiously while still sharing many of these concerns?
Many people adopt exactly this stance. They use AI tools for specific tasks—drafting, translation, coding assistance—while advocating for stronger rights for creators, privacy protections, and environmental safeguards. The issue is not a binary for or against AI but an ongoing negotiation over how, where, and under whose control these tools should be developed and used. You can personally use AI while still being aware of its problems and pushing for better guardrails.
Your Friend,
Wade
