I am worried about AI, but not for the reason you think

True high-level Artificial Intelligence, the kind we read about in novels and hear crazy stories about from futurists, won't be like the movies. There won't be a Her that we fall in love with. There won't be a HAL 9000 who turns against us. When we finally build a true self-learning AI, it won't be anything at all like a human intelligence. This shouldn't be a surprise: we have half a century of results to show it.

A long long time ago

In the early days of AI research, scientists attempted to create rules for understanding the world. At one time they believed that if they stuck enough if statements together, they could create a program that would reason about the world and make decisions faster and better than humans. They were wrong. Rule systems weren't the answer to AI. They have been very successful at solving other kinds of computer science problems, but all of the AI progress we've seen in the last ten years comes from something different: pattern recognition over massive data sets.

Google Translate works because it has the entire world wide web to crawl through. Millions of pages were painstakingly translated by human hand, giving Google enough data to run through its software. The same goes for images and voice. Google's image recognition works by scanning the web's images, each helpfully tagged by the humans who wrote text next to them on their webpages. Google ran its own free 411 service not as a public service, but to collect a massive number of voice samples. With enough data and processing power, pattern recognizers can be incredibly powerful.
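To make "pattern recognition over massive data sets" concrete, here's a minimal sketch in Python, using scikit-learn as a stand-in toolkit and a few made-up labeled examples. The recognizer knows nothing about cats or dogs; it just counts which words co-occur with which human-supplied labels, and it only gets good when you feed it vastly more examples than this:

```python
# A toy "pattern recognizer": it learns nothing but statistical
# associations between words and human-supplied labels.
# (Hypothetical examples; real systems train on millions of them.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

labeled_pages = [
    ("cute kitten sleeping on the couch", "cat"),
    ("my cat chasing a laser pointer", "cat"),
    ("golden retriever fetching a ball", "dog"),
    ("puppy learning to sit and stay", "dog"),
]
texts, labels = zip(*labeled_pages)

# Count words, then fit a simple probabilistic classifier over the counts.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["a kitten playing with yarn"]))  # -> ['cat']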

AI systems have now beaten the best chess and Go players in the world. So what? Chess is considered a feat of intelligence precisely because it requires skills that humans are actually 'bad' at. Humans have limited working memory. Keeping 18 levels of moves in mind is truly amazing for a human, but quite simple for a machine made from solid-state memory. Of course computers can beat humans at these games; humans generally aren't very good at them in the first place. But for all their processing power, none of these AI supercomputers can do something as simple as fold a towel.
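To see why deep lookahead is the easy part, here's a minimal, hypothetical sketch of depth-limited minimax, the classic game-tree search, applied to a toy counting game invented purely for illustration. For the machine, searching 18 plies deep is just a parameter; for a human, it's 18 positions held in working memory:

```python
# Toy game: players alternate adding 1 or 2 to a running total;
# whoever reaches 10 wins. The game doesn't matter; the point is
# that looking 18 moves ahead is just a number for a machine.
def minimax(total, depth, maximizing):
    if total >= 10:
        # The previous player just reached 10 and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # Search horizon reached: call it a draw.
    scores = [minimax(total + step, depth - 1, not maximizing)
              for step in (1, 2)]
    return max(scores) if maximizing else min(scores)

def best_move(total, depth=18):  # 18 plies deep: trivial for silicon.
    return max((1, 2), key=lambda step: minimax(total + step, depth - 1, False))

print(best_move(0))  # -> 1 (forcing the winning totals 1, 4, 7, 10)
```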

Google's new AI-powered services are quite useful, but I'm not worried they will quickly become 'as smart as humans', for the simple reason that the way they work is nothing like how actual human intelligence works. Admittedly, we still don't really understand human intelligence either, but we know it's not based on scanning millions of documents. My six-year-old son can recognize a bird, but he doesn't do it by looking at a million bird photos beforehand. Whatever mechanism allows human-level intelligence to work, it is not the way these AIs work. They are fundamentally different. AI will certainly improve as both data and processing power increase, but what it evolves into will be nothing like what we currently think of as 'human intelligence'.

We should fear AI

So now that I have hopefully convinced you that AI is nothing to fear, I want to convince you that we *should* fear AI. Artificial Intelligence, like any new technology, is a tool. A tool can be used for good or for bad. I'm not worried about an AI turning on humanity and launching all the missiles. I *am* worried about an evil human using an AI to gain access to those missiles. And even mundanely evil uses of AI, like influencing elections, will be possible long before we reach the singularity-level AI that Elon Musk is worried about.

We now know that the source of an AI's power is the data it can access. My real fear is that AI will be a tool available only to the few large organizations that have access to all the data. Google and Facebook are so powerful because we (myself included) willingly provide them access to our data. So much access that we must use their services to get our own data back. If we don't find a solution, we may find ourselves in a world where a few large companies and governments control all of our data, and are therefore the only ones who can build the AIs that will make our future economies run. That is the real risk of AI. Furthermore, even if we manage to avoid the malicious uses of AI, these systems absolutely will destroy jobs. There are real things to be worried about.

So what can we do?

First, limiting AI will be counterproductive. When knowledge is outlawed, only outlaws will have it. Instead we should teach AI principles to all engineering students. Furthermore, all students, not just engineers, should learn the basics of computer science. Some people think of this as 'coding for all'. I prefer to call it 'computational thinking'. If our lives are going to be controlled by algorithms, then everyone should learn the basics of how these algorithms work, and how to control them. If we don't learn how to control algorithms, they will control us. Neural nets aren't that hard at a high level. We should all learn how they work, as the sketch below shows.
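As evidence for that last claim, here's a minimal sketch of a neural net in plain NumPy, learning XOR. The layer sizes, iteration count, and learning rate are arbitrary choices for illustration, but the entire mechanism, weighted sums, a squashing function, and nudging weights to shrink the error, fits comfortably on one screen:

```python
import numpy as np

# A two-layer neural net learning XOR: the "hello world" of neural nets.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([[0], [1], [1], [0]])              # XOR targets

W1 = rng.normal(size=(2, 8))  # input -> 8 hidden units
W2 = rng.normal(size=(8, 1))  # hidden -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate: how big a nudge each step takes
for step in range(10000):
    # Forward pass: compute the net's current guesses.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: how much, and in which direction, was each weight wrong?
    output_err = (output - y) * output * (1 - output)
    hidden_err = (output_err @ W2.T) * hidden * (1 - hidden)

    # Nudge every weight slightly downhill on the error.
    W2 -= lr * hidden.T @ output_err
    W1 -= lr * X.T @ hidden_err

print(output.round(3).ravel())  # should print values near [0, 1, 1, 0]
```

That's the whole trick: guess, measure the error, nudge the weights, repeat. Everything from image recognition to translation is this loop scaled up with more layers and more data.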

Second, we need strong privacy and data ownership laws. Everyone should have the right to know what information a company has on them, and the right to extract it and delete it as they choose. This will only happen through actual laws about data ownership. We also need laws to prevent collusion between different companies and branches of the government. Right now, if company A goes belly up, it can sell itself to company B, which can then use company A's customer data in ways those customers never authorized. Asking the companies to fix this themselves is not going to work. Such change must (tragically) come from Congress.

Open source data

Currently we have open source natural language processing and computer vision tools. This is great; democratizing access to a new technology is always a win. But we've learned that these tools are useless without the oodles of data that go with them. I've heard it said that data is the new oil. If that's the case, then the data refineries are the ones who will make the money: the companies that sit at the juncture of large data streams and clever AI algorithms. If we want more than a handful of those companies to build the AIs of the future, the data sets need to be opened up alongside the tools.

Listen to the Luddites.

The word Luddite didn't originally mean someone who was anti-technology. Rather, it referred to a group of people who saw that a new technology would take their livelihoods and fought against it. They didn't mind the technology itself (new weaving machinery, in this case), as long as the productivity gains were shared with the workers. The new technology really did damage textile workers' lives, and caused literal rioting in the streets.

AI is beginning a shift more dramatic than the one the Luddites faced, but we don't have to let it end the same way. Change is coming, but by embracing ideas like worker retraining and higher minimum wages we can avoid the worst of the effects. Even I, a die-hard free marketeer, think some basic income proposals might have a few good ideas.

If we don't embrace the coming economic, social, and political changes, and plan ahead for them, then yes, we should fear AI. But if we act reasonably and with foresight, the future can be a wonderful, AI-friendly place.

Talk to me about it on Twitter

Posted May 30th, 2017

Tagged: ai rant