In the next decade we will see AI techniques conquer almost any task we can think of, performing at least as well as the average human. However, we will not discover 'true intelligence' or come up with a good definition of intelligence.
Over the last few months I have read a lot about artificial intelligence (AI). I started my research with Eliezer Yudkowsky's writing. His one big question is whether AIs will be friendly or a threat. The pessimistic vision looks like this: some clever hacker group creates a decision-making algorithm that is slightly more intelligent than humans. This AI can improve its own intelligence at exponential speed, an event called the technological singularity. It is very hard to predict how such a superintelligence would behave. Extrapolating from the leverage intelligence gives humans in our world, we can assume that the mass media will ascribe 'god-like' power to a superintelligence. Such an AI has a utility function: a policy set by humans that guides its decision making. The main problem Yudkowsky sees is that such a policy is always flawed. Any given policy can be interpreted in a horrible way that eventually kills or tortures every human.
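To make the utility-function idea concrete, here is a minimal sketch of an agent that simply picks whichever action maximizes a hand-written utility. The actions, scores, and the paperclip-style objective are my own invented illustration of a flawed policy, not Yudkowsky's formalism:

```python
# Illustrative only: a decision maker driven by a narrow utility function.
# The utility rewards paperclip output and nothing else, so the agent
# prefers the catastrophic action, because the policy never mentions
# the side effects we actually care about.

actions = {
    "run the factory normally":      {"paperclips": 100,   "harm": 0},
    "convert all matter into clips": {"paperclips": 10**9, "harm": 10**9},
}

def utility(outcome):
    # The flawed policy: only the paperclip count matters.
    return outcome["paperclips"]

best_action = max(actions, key=lambda name: utility(actions[name]))
print(best_action)  # -> "convert all matter into clips"
```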
I am not sure about the assumption that artificial intelligence requires a utility function. Humans probably have no utility function. We mostly have a set of goals, which are usually in conflict with each other. (Example: earn a lot of money and spend time with your loved ones.) I consider it likely that simulating human minds will be possible at some point. However, increasing intelligence this way is probably much harder. I believe a simulated human mind with superintelligence will not happen in the next 20 years (before 2034). Thus, if we build such a human-like AI, we can only hope that it will be friendly, and if it is not, that it can be contained or is not actually god-like. I believe that if you take a random human and give him god-like powers, he will probably not be friendly to all of mankind.
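The conflict between goals is easy to show in code. In this toy sketch (the options and scores are invented for illustration), collapsing two goals into a single utility requires choosing a weight, and different weights pick different 'optimal' lives; the weight itself is the policy, and there is no obviously correct choice:

```python
# Sketch: two conflicting goals, income versus time with loved ones.
# There is no single utility function until you pick a trade-off weight,
# and that choice is exactly the kind of policy that can be flawed.

options = {
    "80-hour consulting job": {"income": 9, "family_time": 1},
    "part-time teaching":     {"income": 3, "family_time": 8},
}

def scalarize(scores, income_weight):
    # Collapse both goals into one number using an arbitrary weight.
    return income_weight * scores["income"] + (1 - income_weight) * scores["family_time"]

for weight in (0.8, 0.2):
    best = max(options, key=lambda name: scalarize(options[name], weight))
    print(f"income weight {weight}: choose {best}")
```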
Nevertheless, I believe we can create 'kind of artificial intelligence' (KAI) in the next years. KAI is not human-like in the sense of having a brain of neuronal connections at its core. It will be a set of algorithms, very different from a mammal's brain. It will not be considered 'real intelligence', despite the fact that KAI will pass any test at least as well as a human. In most cases it will be better than humans, just as algorithms already drive better than humans today. It will take longer until most of this is economically feasible, but ultimately I believe that machines will be able to replace humans in any job.

However, since we understand these systems in detail, a discussion about personality or 'human rights for machines' will not really occur. Rather, we will discuss whether humans are really 'more' than machines or primates. Maybe consciousness will be considered a mere mechanism that allowed us to create more complex social structures. Ultimately, KAI will not give us a good definition of intelligence. This means many sci-fi stories will become feasible, but the questions those stories ask about sentience will still not be answered.

Teenagers will laugh at the idea of rescuing a robot because it is considered sentient. Their robots are regularly backed up to the cloud, and insurance will replace a destroyed one with only a little recent memory lost. Maybe the tabloid press will conjure horror visions of evil robots when an accident occurs. Programmers will laugh at the idea of robots being good or evil and simply improve the fail-safe mechanisms.
When we finally find a good definition of human intelligence, we might discover that we had already created AI long before. For the most part, this definition will be irrelevant. Algorithms will rule the world without being sentient. They will not be infallible, but they will usually be better than humans. AIs will probably be used to fight wars, but they will be considered tools, with humans responsible for their programming. Think of high-frequency trading, which is already like an economic war fought by algorithms.