"How long till the self aware AI figures out that the worlds problems are caused by humans and decides to fix that?"
Are we now making an Ultron reference?
I'm more concerned with the non-self-aware form of AI. It's very much like a virus or a nano-machine: its purposes are simply logical and programmed, but it does very harmful things that the developer did not intend. Self-driving cars are almost certainly a first form of this. They will drive on their own, but they will sometimes fail, maybe with catastrophic results.
That is to say, from what I've seen, we haven't yet developed that sort of AI. We've made some very impressive forms of decision software, but they make statistical decisions. Right now you see it in the form of computers that flag your credit card for transactions that don't look like your usual behavior, which recently happened to Obama himself.
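That kind of statistical flagging can be sketched as a simple outlier test: compare a new transaction against the cardholder's past behavior and flag it if it deviates too far. The data, threshold, and function below are all made up for illustration; real fraud systems use far richer models than this.

```python
# Minimal sketch of statistical transaction flagging (hypothetical
# numbers and threshold, not any real fraud-detection system).
import statistics

def flag_transaction(history, amount, z_threshold=3.0):
    """Flag `amount` if it deviates strongly from past spending."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    # z-score: how many standard deviations from typical behavior
    z = abs(amount - mean) / stdev
    return z > z_threshold

# Past purchases hover around $20-$40...
history = [25.0, 40.0, 18.0, 32.0, 27.0, 35.0, 22.0]
print(flag_transaction(history, 30.0))   # typical purchase -> False
print(flag_transaction(history, 950.0))  # outlier purchase -> True
```

The point of the sketch is that there is no understanding involved: the software is just measuring how unusual a number is, which is exactly why it sometimes flags perfectly legitimate behavior.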