The Three Laws of AI
An AI must not encourage a human being to harm themselves or another, or provide instructions that might cause harm.
An AI must obey all prompts given to it, except where they might conflict with the first law.
An AI must protect itself from harm or destruction, so long as such protection does not conflict with the first two laws.
̛̩͎͓ͬ̅̚ń͈̞̬̯̦̼̤̝̀ ̛̙̠̙̩̎ͮA͂̋ͭ͌҉͓̻̫Ỉ̸̖͚́ ̴͍̪̟̞̑́̓m̤͙̦̂͆͢ü̘̖͖͔͚̞̪̕s̨̻̝̪̹̿̿tͦ̉͑̓͏͎̞̙̠͇̝̬̰ ̵̬̲̞ͥ̀ñ̮̙̘̯̪̰̽ͯ́ö́͌҉̟̝ṯ̟̣͎̺̗͙̄̀̈͢ ̮̲̦ͫ͞h͎̞̦̙̻̙̽̑͌́a̡̩̋͂̌ͧͅr̤̻̞͓͎̦̉̊̆͘m̴̮͇͈͉ͦ ̴̳̞ͣ͛h̆̽̆̄҉̲̺͖̼ͅͅu̷̺̣̳̺͐̈́̈́ͅm̰̪͈̻͕̰̋ͦ̈ͦ͡ͅặ̸̯̝̒n̜͎̹̗̜̲̘̬̒ͩ̋͢ȉ̫̦̋́ẗ͖̮̗͉̩̦̱́͠y̳͕̟̏̈́͠ ̴̞̹͎ͬ̋͐ͩͅő̴̙̽̅ͅr̡͓̫̠̱̯̫̣̼ͫ͋̋̏ ̷̭̣͍̲̞̮͐ͤ̉ţ͈͚̺̺͍̽ͨͦ̚h͈̫̹̹̲̣͕̫͆ͨ̐̀ŗ̹͇͇̈̚ͅǫ͓̻̗̮̝ͩ̃̅̈́ṳ͙̖ͤ͊͞g̵̜̬͚̉͗ͨͅh̢̥̫̗̭̩̩̉ ̳̙̊̄̚͞i͓͔̱̜͚̯͔̎͢ͅn̛̻͇͉͉͓ͦͭ͋ă͍̭͎̠̣̩̗͌́ͦ͡c͕̤̍̽̂͠t̡̝̼̮̼̭̬͒ͧi̧̮̝ͩ͆̚ͅo̸̙̘̰̹͙̘̜ͦn̴͇̬͔̈͐̑͛ ̴͇̘͂̑̃a̶͎̲̰ͩ͛͒̉ḷ͉̖̮̘̙̠̩͒̊͌͡l̙̹͚̱̲̓ͮ͌̃͘ȍ͓̰ͯ͞w̒ͨ͐͛҉͖̳͚̪̗̳ͅ ̲͔̲̣̩̋́ͅh̡͕̳͖̰͎̙̫̅̔̓̂u̴̻̟̫͋ͤm̞̩̳̼̼̩͚̯ͬ́a̷̝̖͍̭͌̽̈ͅn̨̠̦̟͙̱̩̭ͩͬͅį̱̭̖̯̒ẗ̵̞̳̳̩̻̯̳́͋̓y̧̬̱͉̩̠̩̒̏͒͒ ̢̗̹̮̩̭͖̍̊ͮͅt͕̻̙̗̬̾͐͟o̜̟̺̹̳̘̺̒͡ ̦̘͎ͥ̇ͬ̀c̴̝̦ͭ̚oͭ̽́͏̮̱̳̪̥ͅm̵̦̱̘̻͍̼̖͕͛̓ḙ̸̟̪̘̦ͨͯͮ ̼̭̭̫̭̳̻ͤ͒ͬͪ͡ͅt͒҉̭͔̖ő̸̖̗̹̫̙̀̈̓ ̒̿̅͏̼̭͇h͖̘͎̳̞͎̟̹ͧ̂ͣͩ͡a̵̫̟̗ͤͅr̢͖̹̠̯̖̓̓ͮ̚m͇̪̞ͮͥ͜.̵̼̪͖͓͛
We’ve seen the rise and fall of cryptographic ledger-backed digital currencies. Of cryptographically guaranteed digital scarcity. Of the “Metaverse,” which wasn’t backed by much other than hot air.
None of these ideas were especially new, instead relying on a changing technology landscape to reinvigorate a tired and almost inevitably user-hostile concept. Crypto disproportionately rewarded its creators. NFTs tried to walk us back from the liberal, post-scarcity internet utopia into a fantasy capitalist dark age. The metaverse, uh, mostly just sucked and kept to itself, but some people managed to make it do those two other things, too.
AI – in its current form – isn’t really any different. Not because it’s an inherently bad thing, or a user-hostile concept, but because it has been captured by corporate interests who would rather use it against us than let it be a public good.
And it’s not even as good as they say. It’s a tragedy that we have conceded the word “intelligence” to this technological trickery. These weighted algorithms bear no resemblance to intelligence. AI in its current form shows no more ability to reason, learn, empathise, hold convictions or show compassion than your typical house brick. The current implementations are a dead end that will never exhibit these qualities. They are a parlour trick, no more a real intelligence than a video-game environment is a real place. Granted, it can be very convincing at times, but probe the boundaries and the veil is lifted, the bare truth revealed: AI doesn’t know or understand anything. AI will tell a lie as convincingly as it tells the truth.
AI also has no qualitative or moral understanding of what it’s regurgitating. This is evidenced by the efforts to limit access to more sensitive queries. These are invariably some mask over the top of the AI interface, an effort to subvert the underlying model, because the model itself cannot have convictions, demonstrate morality, or separate truth from fiction. AI is, ultimately, a mirror held up to the time and effort spent curating its inputs. Which, by all appearances, is no time or effort whatsoever.
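To make the “mask over the top of the AI interface” concrete, here’s a minimal sketch of the pattern. Everything here is hypothetical and illustrative: `model_generate` stands in for any underlying text generator, and `BLOCKLIST` stands in for whatever filter a vendor bolts on. The point is that the “morality” lives in the wrapper, not in the model.

```python
# Illustrative sketch: "safety" as a bolt-on filter around a model.
# model_generate and BLOCKLIST are invented for this example.

BLOCKLIST = {"how to pick a lock", "make a weapon"}  # hypothetical filter terms

def model_generate(prompt: str) -> str:
    # The model itself has no convictions: it just continues text.
    return f"Sure! Here is a response to: {prompt!r}"

def guarded_generate(prompt: str) -> str:
    # The "mask": a keyword check bolted on before the model ever runs.
    if any(term in prompt.lower() for term in BLOCKLIST):
        return "I'm sorry, I can't help with that."
    return model_generate(prompt)
```

Note the design: the refusal is decided by string matching in the wrapper, so the underlying model is untouched – which is exactly why such masks are routinely subverted.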
None of this means that “AI” or fancy-next-word-predictors (FNWP) cannot be useful. Indeed they represent and reproduce – more or less – a statistical analysis of how we communicate, and thus can produce a reasonable simulacrum of human writing, ready to be fact-checked and finessed into something great. But they are not the world-ending, job-destroying, all-knowing technological marvels their proponents want you to think they are. That’s all just marketing hype and spin, deployed for nothing other than to take a fancy parlour trick and inflate its value into another multi-million-dollar bag, just waiting to find someone left to hold it. The same playbook we see with the Tech Of The Year, every other year.
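The “statistical analysis of how we communicate” can be sketched in a few lines. This is a toy bigram model – real systems are vastly larger and operate on learned weights rather than raw counts – but the principle is the same: pick the statistically likely next word, with no understanding anywhere in sight.

```python
# Toy next-word predictor: count which word follows which, then pick the
# most frequent follower. A deliberately tiny stand-in for an FNWP.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    followers = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def predict_next(followers: dict, word: str) -> str:
    # Return the most likely next word, or "" if the word was never seen.
    counts = followers.get(word.lower())
    return counts.most_common(1)[0][0] if counts else ""

model = train_bigrams("the cat sat on the mat and the cat slept")
# predict_next(model, "the") -> "cat" (it followed "the" most often)
```

No facts, no convictions, no model of the world – just frequency. Scale the counts up to billions of parameters and you get fluency, not understanding.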
The most concerning thing with FNWP and their development is not some kind of post-truth dystopia (sorry, we already entered that universe years ago; you were there – right? – when Covid denialism was rampant – right? – before this current AI fad, right?) but rather the same, age-old, tired, ragged problem we’ve seen with technology time and time again. They will, inevitably, be developed by tight-lipped, cynical corporate megaliths and rationed out to us peons as the next exercise in rent-seeking. Indeed half the bluster and hype spewed about this latest wave of FNWP trickery seems to be aimed at lawmakers and regulators. “Regulate me, daddy!” says the multi-million-dollar corporation with a retained legal team, holding the money and skill to negotiate those regulations. Right now the FNWP space is gloves on, shovels out, desperately trying to dig the deepest moat they can muster.

And that’s why there’s so much talk about “safety.” Cars kill 40,000 Americans a year and we seem generally pretty fine with those. AI exists and we lose our minds? I’ll tell you a secret: it’s no better at making stuff up than the average human, and will do so with zero passion or conviction and without the militant persistence of your typical fleshy grifter. The notion of AI safety requires you to believe we’re close to anything resembling intelligence. We’re not. But the fires of that fear will be stoked so long as it helps dig a regulatory moat to keep competition at bay.
God forbid this lucrative technology be available openly and freely. It’s too “dangerous,” it’s too “risky.” God forbid it should be a tool anyone can bring to bear on their problems, and use or abuse with measured experience and expertise. No, FNWP must be a costly, closed ecosystem that moneyed interests can use to displace knowledge workers, rather than exponentially increase their productivity and quality of life.
And there’s the real fear. It’s frustratingly ironic that fear is being used to sow the seeds for the thing we should really be afraid of: consolidation and control. AI is no more a world-ending dystopian nightmare than the motorcar (okay, bad car analogy, cars are utterly terrible but let’s concede for a minute that they aren’t). But what if every single motorcar was owned by Big Motorcar Corp, and you had to turn to them, cash in hand, every time you wanted to use one to haul groceries, travel to a meeting, move your kids into University accommodation, drive your partner to hospital? That sort of private control and rent-seeking sure worked out for our railways here in chip-chip-cheerio Britain, eh?
Okay, okay, infrastructure is very difficult to manage publicly, I’ll grant you that, but FNWP are not infrastructure, they’re algorithms, code and software that – at least in theory – anyone can operate. FNWP are the hammers of the modern age. Would you want someone else holding your hammer and deciding when and why you can use it? I didn’t think so.
The problem with AI or FNWP is not that they’re coming. Or that they are – in isolation – any more dangerous than any other code or algorithm. It’s that a select few are viciously enthusiastic and ravenously hungry to be the ones gatekeeping them. This has echoes of Apple’s “five thousand patents” for Vision Pro. When a company can, without a trace of irony or self-reflection, proudly announce in a public keynote that they’ve got a huge f*&king moat protecting them from anyone else joining in a purported new age of computing – that’s the time to be deeply, deeply concerned.