In recent months, I have read a lot about artificial intelligence (AI). These are my (very bold and probably wrong) predictions about AI.
The first question is: when will we create a true AI? Well, I cannot give a date. The problem is that we cannot define 'intelligence' precisely. If we could, creating artificial intelligence would be easy. I believe someone will invent a useful definition of intelligence and create an AI shortly after. There will probably be more than one person who does this within a relatively short time frame (a few years). In the decade afterwards, there will be some discussion about which definition of intelligence is correct, and probably some fighting about who was first. At some point the general population will (more or less arbitrarily) decide, and a definitive answer will emerge on Wikipedia. When will this short time frame be? Let me boldly predict (in 2014) that 2023 will be one of those years.
I started my research from Eliezer Yudkowsky's writing. Thus, one big question is whether AIs will be friendly or a threat. First, once we can create artificial intelligence, we can probably boost it to levels unknown in any human. If such an AI can improve itself, this means rapidly increasing intelligence. It is very hard to predict how such a superintelligence will behave. Extrapolating from the leverage intelligence gives humans in our world, we can assume that mass media will ascribe 'god-like' power to a superintelligence. Yudkowsky talks a lot about utility functions, because he assumes an AI must have one to make its decisions. I am not so sure about this assumption, because I am not sure whether humans have a utility function. The danger Yudkowsky sees is that it seems impossible to define a utility function which unmistakably benefits mankind. Thus an AI will probably decide at some point that humans need not survive.
Now, do humans have a utility function? Not in a mathematical sense. We mostly have goals, which are usually in conflict with each other. (Example: earn a lot of money and spend time with your loved ones.) Without a utility function, predicting AI behavior is even harder. Thus, if we build a human-like AI, we can only hope it will be friendly. If it is not, we can only hope it is imprisoned or not god-like. I believe that if you take a random human and give him god-like powers, he will probably not be friendly to all of mankind. I believe it is likely that simulating human minds will be possible at some point. However, increasing intelligence this way is probably much harder. I believe a simulated human mind with superintelligence will not happen in the next 20 years (before 2034).
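To make the distinction concrete, here is a minimal sketch (in Python, with made-up goal names and weights) of what a utility function in the mathematical sense looks like, compared to the bundle of conflicting goals humans actually seem to have:

```python
# A utility function in the mathematical sense: every possible outcome
# is mapped to a single number, so any two outcomes can be compared.
# The weights here are entirely made up for illustration.
def utility(outcome):
    return 2.0 * outcome["money"] + 1.5 * outcome["family_time"]

# What humans seem to have instead: several goals that pull in
# different directions, with no agreed-upon way to trade them off.
goals = {
    "money":       lambda outcome: outcome["money"],
    "family_time": lambda outcome: outcome["family_time"],
}

work_more = {"money": 10, "family_time": 2}
work_less = {"money": 4,  "family_time": 8}

# With a utility function, the choice is mechanical:
print(utility(work_more), utility(work_less))  # 23.0 vs. 20.0 -> work more

# Without one, each goal ranks the options differently and there is no
# single 'correct' answer, which is why predicting the behavior of an
# agent without a utility function is so much harder.
for name, g in goals.items():
    print(name, g(work_more), g(work_less))
```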
Nevertheless, I believe we can create a kind of artificial intelligence (KAI) in the next few years. KAI is not human-like in the sense that a brain with neuronal connections is at its core. It will be a set of algorithms and very different from a mammal's brain. It will not be considered 'real intelligence', despite the fact that KAI will pass any test at least as well as a human. In most cases it will be better than humans, just as algorithms already drive better than humans today. It will take more time until most of this is economically feasible, but ultimately I believe that machines will be able to replace humans in any job. However, since we understand them in detail, a discussion about personality or 'human rights for machines' will not really occur. Rather, we will discuss whether humans are really 'more' than machines or primates. Maybe consciousness will be considered a mere mechanism which allowed us to create more complex social structures. Ultimately, KAI will not give us a good definition of intelligence. This means many sci-fi stories will become feasible, but the questions those stories asked about sentience will still not be answered. Teenagers will laugh at the idea of rescuing a robot because it is considered sentient. Their robots are regularly backed up to the cloud, and insurance will replace a damaged one with only a little recent memory lost. Maybe the yellow press will raise horror visions of evil robots when an accident occurs. Programmers will laugh at the idea of robots being good or evil and improve the fail-safe mechanisms.
When we finally find a good definition of human intelligence, we might discover that we created AI long before. For the most part, this definition will be irrelevant. Algorithms will rule the world without being sentient. They will not be infallible, but they will usually be better than humans. AIs will probably be used to fight wars, but they will be considered tools, with humans responsible for their programming. Think of high-frequency trading, which is like an economic war acted out by algorithms.