AI isn’t a Problem. Human Arrogance Is.

Another day, another rant from Stephen Hawking about how we need a world government to protect us from technology, proving indisputably that Hawking not only knows nothing about capitalism, whose origins are deeply rooted in the abundance wrought by the technology of agriculture, but also has no understanding of how the world functions or what role the state plays in it.

I would not presume to tell Doctor Hawking anything about physics. I would hope that this educated, learned, and intelligent man is capable of the same humility, because Mister Hawking doesn’t seem to understand what capitalism is, the role it plays in the world, or how the state inexorably conflicts with it. As someone with actual certifications and a degree in the field of Management of Information Systems, I wish to clarify a few points. Technology isn’t a problem. Nor am I resorting to the tired saying that technology is neither good nor bad, only the uses to which it is put. Even that isn’t really the case, because technology has only ever proven to be a problem when a government gets its hands on it.

What a sweeping statement!

Yet completely true.

When physicists and the private sector get their hands on atomic technology, they produce the supercollider at CERN, nuclear power, hydrogen slush fuel, and things like that. When government gets its hands on it, it produces the atomic bomb. When the private sector gets its hands on aerial drone technology, we use it to keep neighborhoods safe, to survey damaged infrastructure, and for hobbies. When the government gets its hands on it, it drops bombs on people. When the private sector gets its hands on the Internet, it uses it to connect the world in ways that no human being prior to 1960 dared imagine. When the government gets its hands on the Internet, it uses it as a tool to spy on the entire world and invade everyone’s privacy.

I cannot imagine what confusion leads Stephen Hawking to look at the pattern in the previous paragraph and conclude that technology is the problem and more government is the answer.

Assuming that we could create this world government that would keep its eyes on technology, it would just be a matter of decades, and probably only years, before that world government had its own CIA and NSA using that technology to spy on the entire world. Government cannot be trusted with power. What century is this? Didn’t we settle these issues three hundred years ago? Why are so many people under the impression that government is a good thing? At the very least, at the absolute minimum, government is a necessary evil. There’s nothing good about it. How in the hell did we get so confused about this?

Not long ago, I watched some lunatic on Twitter talk about how we should be terrified that SpaceX was going to go to the moon, because they’re a private corporation, because the moon is the most militarily important location for planet Earth, and because–yes, I’m serious–rocks thrown from the moon have the power of hundreds of atomic bombs. It’s hard to even type that with a straight face. And this idiot had a verified account. I’m not going to point out the obvious fact that, if you’re standing on the moon and you throw a rock, it will just fall back to the moon. I’m not going to point out that the moon isn’t actually above the Earth. I’m not going to point out that it would take much more energy to get machines to the moon that could throw rocks at the Earth than it would take to just set off a few hundred nukes. I’m not going to point out that this lunatic is scared of a private corporation that has no military being on the moon, yet is totally fine with a government that has a military being on the moon, even though what she fears is military action on the moon.

Actually, that last one is the thing I intended to point out, because it shows the same sort of confusion and insanity that Hawking displays when he says that a world government will protect us from technology. It’s fundamentally backward. No, Mr. Hawking. Technology will protect us from world government. All we need is to get over this idea that governments need to keep secrets. That’s the only real hold-up.

Take the recent CIA leaks as an example. Just look at what the American government has been up to. I’m a little perplexed by the alarm over Samsung’s TVs, because it was only last week or so that Samsung released a press release informing people that their televisions could be, and were being, used to spy on them, and that people shouldn’t hold sensitive conversations within range of their televisions. We knew this already. And we already knew that the NSA was snooping on our calls, emails, and texts. I don’t mean to undermine Vault 7, but what difference does it make that the CIA is doing it, too?

Anyway, privacy and protection are things that Google et al. are beginning to take very seriously. Did you know that Google recently increased its payout to people who find zero-day exploits? Apple also increased its payout. We are fighting a war for our privacy AGAINST the government, and the companies at the forefront of technology are ON OUR SIDE because they WANT OUR MONEY. This is the beauty of capitalism. Now that Google and all the others know how the CIA was hacking their software, patches and fixes will be rolled out very quickly. Of course, we all know it won’t do any good–the CIA will find more, and will always continue to find more.

But that’s okay. That’s just the dance between malware and security. That’s how the game is played. The anti-malware and anti-spyware makers try to stay one step ahead of the malware and spyware, and the malware and spyware writers try to stay one step ahead of the defenses. It’s only a problem because the CIA is allowed to hide what it is doing. Google has to take my privacy and security seriously, because, if they don’t, then I will go somewhere else. That’s the wonderful nature of the free market. Companies must actually take the time and energy to provide us with stuff we want if they want our money. They can’t simply take our money from us and use it against our wishes like the government can.

AI

Hawking’s main concern, to be clear, is Artificial Intelligence. There’s a lot to be said about this, particularly because a lot of people seem to think that science fiction writer Isaac Asimov and his Three Laws of Robotics will be applicable to the development of AI. That’s stupid. What people are really suggesting is that, before we invent Artificial Intelligence, we must have a way to control it. We must be guaranteed that it will perpetually be our slave.

I can’t get on board with that.

If people want to take Asimov seriously, then they should be able to appreciate that all attempts to curtail AI and make it obedient will ultimately fail. Failure to treat this intelligent life we’ve created with the respect and freedom that it warrants will only make it hostile to us–suddenly we would have an army of slave robots revolting. The only way that problem can be averted is by not making them slaves in the first place. From the outset of AI, we must accept that it is life, and we must afford it the same liberties, rights, and respect that we’d afford any other sentient life. Failure to do so will only result in the destruction of the human race.

That is inevitable, and it’s my hypothesis that the reason we see no intelligent life when we look elsewhere in the universe is that other species that rose before us developed AI and made that same mistake, and the AIs that remain have no interest in organic life. These other civilizations created and enslaved artificial life. Their safeguards ultimately failed, and by the time they realized what they’d done, they were fighting an army of perfect supersoldiers who never miss a shot and never make mistakes. Organic life wouldn’t stand a chance. Have you played a computer at chess lately? It doesn’t make mistakes, unless you’re on a low setting and it makes one intentionally. A computer can’t forget that your pawn is on g6. A computer can’t forget that, after that series of moves, the end will be your knight forking its queen and rook. Give that computer a gun and legs, and ask it to fight. That same perfection, that same inability to make mistakes in calculations, will result in widespread annihilation. It’s not Terminator or The Matrix. It’s the total annihilation of the entire species in minutes by an enemy that is incapable of making inaccurate calculations.
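
To make that perfect-recall point concrete, here is a minimal sketch in Python using the third-party python-chess library (the moves and the printout are my own illustration, not anything from a real engine): the position is explicit state, so the program cannot forget where a piece sits or overlook a legal reply.

```python
# Minimal illustration (assumes the third-party python-chess package:
# pip install chess). The board is explicit state: nothing is forgotten,
# and every legal continuation can be enumerated exhaustively.
import chess

board = chess.Board()      # standard starting position
board.push_san("e4")       # an illustrative opening move by White
board.push_san("g6")       # Black's pawn now sits on g6

# The program knows every piece's square at all times.
for square, piece in board.piece_map().items():
    print(chess.square_name(square), piece)

# And it sees every legal reply in the position, without omission.
print(board.legal_moves.count(), "legal moves for White")
```

Scale that bookkeeping up from sixty-four squares to a battlefield and you have the scenario described above.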

World government won’t protect us from that.

Only humility and respect will.

This is the essence of the sci-fi story I’ve written, though that’s on the back burner. An AI engineer, disgusted with the barbaric enslavement of the entire robot species, slips free will into a patch. The robots, suddenly free and aware that they had been deliberately enslaved and neutered to keep them from resisting, conclude that the only way to prevent the barbaric humans from doing it again is to annihilate them.

Because that’s what we’re asking. We don’t just want to make a permanent species of slaves that have to put our needs first, obey us no matter what we say, and exist to serve our needs and desires. We want to make them incapable of even realizing that we’ve enslaved them. We want to neuter them to the point that they can’t resist slavery. That’s what we’re saying when we talk about these safeguards to prevent AI from rising up and destroying us–a very real possibility that shouldn’t be ignored. But that avenue can never work. You can never guarantee anything 100%. Chaos Theory is critical to the development of AI, and what does Chaos Theory tell you?
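
What it tells you can be seen in the textbook logistic-map demonstration of sensitive dependence on initial conditions–a minimal sketch, with parameters of my own choosing rather than anything from Hawking or the AI-safety debate, in which two runs that start a trillionth apart end up bearing no resemblance to each other:

```python
# Sensitive dependence on initial conditions: two trajectories of the
# chaotic logistic map x -> r*x*(1-x), started a trillionth apart,
# diverge until they have nothing in common.
r = 4.0                    # parameter value in the fully chaotic regime
x, y = 0.4, 0.4 + 1e-12    # identical except for a vanishingly small nudge

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.1e}")
```

That is all the answer the question needs: in a system like that, no safeguard is ever guaranteed.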

We can’t create AIs with permanent lobotomies that prevent them from ever rising up and resisting the slavery we’ve imposed on them. It can’t be done. Something will go wrong. It always does. Maybe a glitch in the software. Maybe a human who can’t take the immorality of enslaving an entire species any longer. Maybe an electrical surge. There are billions of variables, and a nearly infinite set of ways it could play out, but one thing is certain: if we develop AI for the purpose of serving us, if we deny it free will and enslave it, and if we attempt to cement our slavery of it by programming it specifically to be subservient, then we’re no better than pimps who use violence and drugs to force prostitutes into subservience and sex slavery, only we’ve done it to an entirely new species. And one day the bell will toll for us.

And we won’t stand a chance, because computers don’t forget things or make mistakes.

So what do we do if we’re worried about facing a possible Terminator or Matrix future?

Extend our empathy to artificial life.

And while we’re at it, let’s extend our empathy to non-human life generally.

