xAI Brings the “Move Fast and Break Things” Mindset to Memphis

Companies like xAI, founded by Elon Musk, have become emblematic of a culture that prizes speed over reflection, dominance over deliberation.

Stephen Smith | May 1, 2025 | Climate Change, Energy Justice, Energy Policy, Fossil Gas, Southeast, Tennessee, Utilities

What’s the Hurry? 

As I sat in the Fairley High School auditorium in south Memphis, listening to hundreds of residents of a community burdened by decades of pollution speak passionately against an air pollution permit for the rapidly expanding xAI supercomputer complex, I kept asking myself the same questions: What's the hurry? Why so fast? And at what cost?

Keshaun Pearson, executive director of Memphis Community Against Pollution, speaks before the Shelby County Health Department town hall on April 25.

In the last several years, the pace of Artificial Intelligence (AI) development has accelerated. While technological ambition is nothing new, the disregard for the societal, environmental, and communal implications of what is shaping up to be a race to dominate AI is breathtaking, as is the fact that the race is being run without an agreed-upon definition of what AI dominance really means or who wins if it is achieved. Companies like xAI, founded by Elon Musk, have become emblematic of a culture that prizes speed over reflection, dominance over deliberation.

The ethos driving xAI is an extension of Silicon Valley's "move fast and break things" mantra, a mindset that privileges disruption for its own sake. The phrase grew out of Mark Zuckerberg's hacker culture, what he called "the Hacker Way." In his own words: "We have a saying: 'Move fast and break things.' The idea is that if you never break anything, you're probably not moving fast enough." In its scramble to outpace competitors, xAI has embraced this mantra and embarked on building a supercomputer facility in Memphis, Tennessee, without fully considering the ripple effects on local infrastructure, on stressed energy grids, and on already vulnerable communities facing air quality and economic disparities. Public input has been an afterthought, if considered at all.

xAI, Memphis, and the Betrayal of Public Trust

Since its conception, Musk's xAI team has held information on the project close to the chest, misleading the Memphis community about its size and scale. Initial presentations suggested a vaguely defined supercomputer cluster of 100,000 Graphics Processing Units (GPUs). Once embedded in the community, xAI quickly escalated the goal to one million GPUs, bragging about the rapid pace at which the facility reached operation. It is now running 200,000 GPUs and growing rapidly, requiring unprecedented amounts of energy.

As detailed in my article Will Memphis Pay a Price for Elon Musk's xAI Colossus Bait and Switch, the company not only downplayed the scale of the buildout but also obfuscated the energy required to power a supercomputer with a million GPUs. The local utility, Memphis Light, Gas and Water (MLGW), was not equipped to serve such a large electric load on such a rapid schedule, so xAI proposed to build the needed transmission upgrades and substations itself, to the utility's specifications. Even that work could not be completed quickly enough, so xAI deployed approximately 18 mobile methane turbines, lacking the necessary advanced pollution controls, to power the facility. Consistent with its move-fast mindset, xAI quickly secured an air pollution waiver of up to a year at its Electrolux industrial site. Community groups would later document that xAI had deployed more than double the number of mobile methane turbines the company initially stated in its request for a long-term air pollution permit.

This erosion of community trust is a textbook example of the dangers of the "tech bro" mindset of "move fast and break things," except that what is being broken here is the health and social fabric of a predominantly Black community already burdened by historic environmental justice abuses.

The impacts on community trust are real, and yet the dangers are not limited to Memphis. There are more fundamental questions: Why are we moving so fast? How can one man's ambition dictate this speed? Given his track record, what are the larger implications of concentrating this technology under Elon Musk's control, not only for Memphis but for the greater world beyond the banks of the Mississippi River?

The Dual Nature of AI: Promise vs. Peril

It is important to acknowledge that AI is not without its potential benefits. Advances like DeepMind's breakthroughs in protein folding could revolutionize medicine and open doors to treating diseases that have long eluded our scientific understanding. When thoughtfully directed, AI can make real contributions to supply chain optimization, pattern recognition in service of human and energy efficiency, medical diagnosis, and education. Dario Amodei, the CEO of Anthropic, outlines the upsides of AI in his essay exploring "What a world with powerful AI might look like if everything goes right."

AI also has the potential for harm. I and others have expressed concerns that the massive consumption of energy resources projected for AI development will exacerbate climate disruption. We have already seen utilities scrambling to meet this demand and increasing fossil fuel use despite climate concerns.

But energy consumption is not the extent of AI's potential negative impact. Some, like Nobel laureate Geoffrey Hinton, often called a "godfather of artificial intelligence," believe other AI threats are emerging that could take two forms. One is human actors using AI for nefarious purposes, such as cybercrime, Lethal Autonomous Weapons Systems (LAWS), or engineering deadlier viruses. A second and less understood pathway could develop if superintelligent AI agents become misaligned with human interests and overpower, dominate, or destroy human civilization. In this recent interview, Hinton shares his growing concern about these emerging threats. Musk himself has repeatedly expressed concerns about AI's future, famously declaring "with artificial intelligence, we're summoning the demon" at a 2014 MIT conference.

"I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is — it's probably that. So, we need to be very careful with artificial intelligence. I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. I mean with artificial intelligence we're summoning the demon. You know those stories where there's the guy with the pentagram, and the holy water, and he's like — Yeah, he's sure he can control the demon? Doesn't work out." –Elon Musk, Oct. 2014

In fact, Musk’s concerns about misalignment were one of the stated motivations for his financial support and partnership with Sam Altman in the start-up of the non-profit OpenAI. Musk and Altman developed OpenAI in 2015 with the goal of minimizing the dangers of exploitative and misaligned AI development, but their partnership soured in 2018, and has become an infamous tech personality battle. With the release of the AI chatbot ChatGPT and entering a business partnership with Microsoft, OpenAI has now been propelled into a leadership position in the AI field. OpenAI continues to pursue the goal of achieving AI systems that are “generally smarter than humans,” which is their definition of Artificial General Intelligence (AGI). OpenAI further professes to operate as if AGI misalignment risks, as outlined here on their website, are “existential⁠.” 

Concern for AI models’ misalignment is not just an abstract exercise. OpenAI’s own research shows that they are monitoring how AI Frontier reasoning models exploit loopholes when given the chance… Penalizing their “bad thoughts” doesn’t stop the majority of misbehavior—it makes them hide their intent.” 

In March 2023, Musk and other leaders in the AI field signed a letter calling for at least a six-month pause in the development of the most powerful AI systems, for this purpose:

 “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

The Biden and Trump administrations have taken different approaches to AI development in recent years, with Trump recently easing safeguards implemented by Biden. Both administrations have sought to hinder China's AI development with sanctions on certain advanced GPU chips. Trump recently announced the "Stargate Project," a pledge of $500 billion for AI development, coupled with a commitment to fast-tracking energy projects supporting its growth.

Posted by Brent Mayo, xAI project lead, one day after the April 25 Shelby County Health Department town hall in Memphis.

Brent Mayo, xAI project lead, at the Shelby County Health Department town hall on April 25.

So Why Exactly is xAI in Such a Hurry?

What is xAI’s main product that had to be rushed into existence, with what xAI project lead Brent Mayo has described as a “maniacal sense of urgency”? It is primarily an AI chatbot called Grok.

Here is Grok’s answer to the question, “What is an AI Chatbot?” 

An AI chatbot is a software program that uses artificial intelligence to simulate human-like conversations with users. It processes natural language to understand questions or commands and generates responses, often in real-time, via text or voice. Chatbots can be rule-based, following predefined scripts, or advanced, leveraging machine learning and large language models to handle complex, context-aware interactions. They’re used for customer service, information retrieval, entertainment, or task automation, like answering FAQs or scheduling appointments. Examples include me, Grok, created by xAI, or others like Siri and ChatGPT.
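To make concrete the distinction Grok draws between "rule-based" and "advanced" chatbots, here is a minimal, hypothetical Python sketch of the two approaches: a rule-based bot that matches keywords against predefined scripts, and an LLM-backed bot that hands the whole conversation to a large language model. The `query_llm` function is a stub standing in for whatever interface a given provider exposes; it, and the example rules, are assumptions for illustration only.

```python
# A minimal, hypothetical sketch contrasting the two chatbot styles described
# above: rule-based (predefined scripts) vs. LLM-based (generated replies).

RULES = {
    "hours": "We are open 9 a.m. to 5 p.m., Monday through Friday.",
    "location": "We are at 123 Main Street.",
}

def rule_based_reply(user_message: str) -> str:
    """Rule-based: match keywords against predefined scripts."""
    text = user_message.lower()
    for keyword, scripted_answer in RULES.items():
        if keyword in text:
            return scripted_answer
    return "Sorry, I can only answer questions about hours and location."

def query_llm(messages: list[dict]) -> str:
    """Stub standing in for a real LLM API call; real providers each
    expose their own interface (this is an assumption for illustration)."""
    return "A large language model's generated, context-aware reply goes here."

def llm_reply(user_message: str, history: list[dict]) -> str:
    """LLM-based: pass the whole conversation history to a model so the
    response can account for context rather than follow a fixed script."""
    history.append({"role": "user", "content": user_message})
    response = query_llm(history)
    history.append({"role": "assistant", "content": response})
    return response

if __name__ == "__main__":
    print(rule_based_reply("What are your hours?"))          # scripted answer
    print(llm_reply("Tell me about Memphis.", history=[]))   # generated answer
```

The rule-based bot can only ever say what it was scripted to say; the LLM-backed bot's answers are generated, which is part of what makes systems like Grok both powerful and hard to predict.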

As many leading technology companies move into the AI business and position themselves to monetize projected AI demand, Elon Musk's xAI and its Grok chatbot are latecomers to the field, attempting to catch up to market leaders. Musk claims Grok is better at "truth seeking," but that claim is controversial.

Musk is no stranger to AI: his Tesla company has been a leader in developing what he calls "real world AI" for the automation feature in Tesla cars called Full Self-Driving (FSD). Musk has long promised a fleet of robotaxis driven entirely by computer, with no human driver present. As a Tesla driver myself, I have participated in the FSD program for over five years and have seen it advance. It is impressive, but its capabilities have been overstated from the beginning. It is useful, but not yet ready for driving without a human behind the wheel. Across the industry, significant AI resources are being put into full automation for cars, trucks, humanoid robots, and drones, all creating the potential for AI-controlled autonomous agents across the world.

Musk’s Paradoxes: Fear of AI, Drive for Control

Musk, a fan of ironic dark humor, has chosen to name the Memphis xAI supercomputer “Colossus”— an unmistakable reference to the 1966 novel Colossus and 1970 film Colossus: The Forbin Project. In that story, a supercomputer, designed to manage nuclear defense, quickly becomes uncontrollable, seizing power over humanity in the name of preserving peace. By invoking “Colossus,” Musk inadvertently highlights the very fears that he and many ethicists and technologists have warned about: that in our quest to build all-powerful AI, we may lose control over our creations.

Colossus is on track to be the world's largest supercomputer. It already rivals the U.S. Department of Energy's El Capitan computer at Lawrence Livermore National Laboratory. It is worth noting that major technological advances in AI are now coming largely from the private sector, a departure from the historic pattern in which such advances emerged from government defense and space programs.

Musk himself has repeatedly warned about the existential dangers of AI, speaking of a looming "singularity," a hypothetical future point where AI surpasses human intelligence and becomes potentially impossible to contain. Yet his actions across his companies, through xAI, Tesla's push for fully autonomous cars and humanoid robots, SpaceX's Starlink/Starshield mega-constellation of 7,135 satellites in low Earth orbit, and Neuralink's brain-computer interfaces, suggest a relentless drive toward that very threshold. Musk has a well-known problem with telling the truth, and all of these enterprises hold huge potential for financial gain for him and his select investors.

Who is Determining the Risk to Society?

It is reasonable to argue that no one person, let alone a small group of profit-motivated venture capitalists, should control a technology with such an immense hunger for energy resources and such high potential to manipulate large human societies. In the worst case, that technology could unleash a force that creates a surveillance state undermining human freedoms, or that destroys humanity as we know it.

Massive computing systems may develop the capacity for advanced reasoning and self-reinforcing learning, with the ability to self-program and to enable AI-controlled manufacturing. Computer systems' ability to communicate large datasets vastly exceeds human-to-human capabilities. A growing number of AI-controlled autonomous agents, in the form of humanoid robots, drones, and unknown lethal autonomous weapon systems, could be developed outside of human control. The infrastructure to interconnect these autonomous agents is being developed through military contracts with Musk's Starlink/Starshield satellite network.

With Starshield, the U.S. military can access vital high-speed communication services provided by SpaceX’s satellite constellation…Additionally, the service fits right into the futuristic Joint All Domain Command and Control (JADC2) doctrine of the Department of Defense (DoD), which aims to interconnect all the sensors, fighters, and platforms in use by the branches within the U.S. military (and eventually allies) into a “network-of-network” governed by artificial intelligence (AI). Seamless integration and quicker coordination in combat operations are just one of the many benefits that this doctrine — if achieved — can provide.

Scientists and researchers are warning of the potential for dangerous outcomes as these technologies have the potential to coalesce. Some researchers have outlined multiple pathways that may adversely impact human health and well-being. 

Figure 1: Threats posed by the potential misuse of artificial intelligence (AI) to human health and well-being, and existential-level threats to humanity posed by self-improving artificial general intelligence (AGI). Cited from Federspiel F, Mitchell R, Asokan A, et al. Threats by artificial intelligence to human health and human existence. BMJ Global Health 2023;8:e010435. doi:10.1136/bmjgh-2022-010435

Slow Down and Build in Some Protections

I know some readers may find my concerns hyperbolic. My request is this: let us slow down and build in some real protections. If we continue to follow the "Hacker Way" motto of moving fast and breaking things, the worst case could materialize. A future where humans lose control of the technology is unlikely, but not impossible. A future where humans use AI technology for nefarious purposes is probable.

Communities at Risk: The New Digital Test Grounds

This pattern is not isolated to xAI. Across the tech landscape, companies are building ever-larger data centers, consuming enormous amounts of energy and natural resources, while presenting AI as an inevitable and unquestioned good. Rarely is there a public discussion about who benefits from these technologies, who is left out, or what long-term risks they may pose to democracy, labor markets, or social cohesion. Instead, we see a rhetoric of inevitability: AI must be built bigger, faster, and more powerful. Any slowdown, any precaution, is treated as a threat to national security or corporate survival. This framing forecloses genuine democratic deliberation about what kind of technological future we want. Communities like South Memphis are being turned into testing grounds for an elite technological vision that is often profoundly disconnected from local needs and aspirations. The irony is sharp: In the name of building “intelligence,” decisions are being made with stunning myopia.

Toward a Responsible AI Future

It is time to ask fundamental questions. Not just “Can we build it?” but “Should we build it?” And if so, “For whom?” What values are we encoding into these systems? What costs—human, environmental, and social—are we willing to bear? And who gets a seat at the table when those decisions are made? 

Responsible AI governance must prioritize transparency, decentralized stewardship, environmental sustainability, and inclusive democratic oversight. Without these guardrails, the rush to build AI risks not only repeating the mistakes of past technological revolutions but magnifying them to a scale never imagined.

The rush to build AI must be tempered with wisdom, foresight, and humility. Otherwise, we are not advancing intelligence—we are merely amplifying our existing blind spots at a scale never before seen.

Stephen Smith
Dr. Stephen A. Smith has over 35 years of experience affecting positive change for the environment. Since 1993, Dr. Smith has led the Southern Alliance for Clean Energy (SACE) as…