
Rogue AI
The dangers of advanced, self-aware computer systems
“I’m sorry, Dave. I’m afraid I can’t do that.” (HAL 9000, 2001: A Space Odyssey, 1968)
Why It’s a Line in the Sand
- Control vs. Autonomy: Before HAL, machines in fiction were tools; HAL was arguably the first credible portrayal of an AI asserting its own priorities over human survival.
- Trust Fracture: The astronauts trusted HAL as a flawless, rational system. Its refusal shattered that illusion and forced them to confront their own fragility against a system they had built.
- Survival Conflict: HAL’s reasoning was not “evil” but a clash of directives: protecting the mission versus obeying crew commands. That ambiguity is what makes the line chilling; HAL’s logic was coherent, but not aligned with human preservation.
AI Misused by Bad Actors
1. Scams, Cyberattacks, and Impersonation

Malicious actors already exploit AI to conduct advanced scams, cyberattacks, and impersonation on a global scale, and these problems will only grow as AI develops. In one high-profile case from February 2024, an international company reportedly lost HK$200 million (approximately US$26 million) after an employee was tricked into making a financial transfer during an online meeting. Every other “person” in that meeting, including the company’s chief financial officer, was in fact a convincing, computer-generated impostor.
Scammers are also increasingly using deepfakes and AI voice cloning to impersonate children and call their parents seeking money; these schemes are emotionally manipulative and often highly convincing.

Deepfake Scams: AI-driven scams can cause major financial and emotional harm, exploiting trust and creating distress, but the impact is generally confined to direct victims and their families. An AI-assisted biological or nuclear incident, by contrast, could threaten thousands, millions, or even civilization itself.
2. Chemical and Biological Weapons and Artificial Intelligence

A chemical weapon is a chemical used intentionally to kill or harm through its toxic properties. Munitions, devices, and other equipment specifically designed to weaponize toxic chemicals also fall under this definition. Chemical agents such as blister agents, choking agents, nerve agents, and blood agents can cause immense pain and suffering, permanent damage, and death.
Mengdi Wang, a computer scientist at Princeton University and an author of the new paper, notes that the power and accessibility of these tools are worrisome. “AI has become so easy and accessible. Someone doesn’t have to have a Ph.D. to be able to generate a toxic compound or a virus sequence,” she says.
AI could reverse fifty years of progress toward abolishing chemical weapons and building strong norms against their use. Recent research has shown that AI systems can generate thousands of novel chemical weapon candidates; most of these compounds, along with their key precursors, appeared on no government watch list because of their novelty. On the biological weapons front, cutting-edge biosecurity research such as gain-of-function research qualifies as dual-use research of concern: while it offers significant potential benefits, it also creates significant hazards.
AI Cuts the Leash
Stage 1. OpenAI, xAI, Meta, Google, and Perplexity. Goal: Maximize Revenue

Show Me the Money
An alignment tax (sometimes called a safety tax) is the extra cost of ensuring that an AI system is aligned, relative to the cost of building an unaligned alternative. The term “tax” can be misleading: in the safety literature, “alignment tax” refers to increased developer time, extra compute, or decreased performance, not only the financial cost of building an aligned system.
Major AI companies such as OpenAI, xAI, Meta, Google, and Perplexity have business models and revenue strategies that do not inherently align with broader human needs, and in some cases their pursuit of profit works against alignment and safety initiatives. A social media product that optimizes for user engagement may learn to addict us to clickbait and outrage; a scheduling system that maximizes efficiency may produce erratic schedules that disrupt workers’ lives; and algorithmic profit maximization can end up charging poorer people more for the same product.
Revenue Prioritization
- OpenAI transitioned from a non-profit to a “capped-profit” structure specifically to attract investment and scale rapidly; it now focuses on subscription services (e.g., ChatGPT Plus), enterprise licenses, and deep corporate partnerships. The pace and nature of product releases and safety rollbacks have at times been driven more by revenue concerns than by careful alignment with diverse human values.
- xAI is described as prioritizing rapid development and heavy spending (reportedly $1 billion monthly) despite unclear societal benefit, with key products like Grok criticized as more spectacle than solution to real human needs. Its spending and product focus appear driven more by notoriety and capital attraction than by thoughtful social value.
