It makes me laugh how many people assume that an AI capable of wiping out humanity would need to have reached sentience. It wouldn't. Strong AI doesn't need to exist for the wrong set of rules to become part of a mesh of expert systems: the kinds of AI we already use to sort Google searches, direct self-driving cars, anticipate what Amazon should display in your suggested purchases, and decide when to flip on the traction control or when to allow the landing gear to lower. I work for a company that makes retail software, and many of our products have reached the point where they make business decisions for big-brand retailers: when and what to order from whom, who to schedule for which days, what the best price is under a given set of conditions. So much data goes into these decision systems that humans already have a hard time second-guessing them, and rarely outperform them. And these are just collections of rules, decision trees. They are weak AIs, and many of them take their input from other weak AIs. Given the wrong combination of goals and data, what makes us think we could stand up to the 3-ton machines we've built? There's no inherent property of machines that says they have to kill us, but it's at least worth keeping in mind that something could go wrong. Take a second look at all the things we no longer directly control.
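To make the "weak AIs feeding other weak AIs" point concrete, here's a toy sketch in Python. Every function name and number is invented for illustration (no real retail system works exactly like this); the point is just how a bad data point flows through two trivial rule systems with no human sanity check in between:

```python
# Hypothetical sketch: two "weak AI" rule systems chained together.
# All names, rules, and thresholds are made up for illustration.

def forecast_demand(recent_sales):
    """Weak AI #1: a trivial rule, forecast = last period's sales plus 10%."""
    return recent_sales[-1] * 1.10

def reorder_decision(forecast, on_hand):
    """Weak AI #2: order whatever the forecast says we're short by."""
    shortfall = forecast - on_hand
    return max(0, round(shortfall))

# A glitch upstream (a miskeyed sale of 50000 units instead of 500)
# flows through both systems unquestioned:
sales = [480, 510, 50000]  # bad data point, nothing flags it
order = reorder_decision(forecast_demand(sales), on_hand=300)
print(order)  # → 54700: a massive, wrong order nobody second-guessed
```

Neither function is "wrong" in isolation; the damage comes from chaining them over unvalidated data, which is the same shape of failure at any scale.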
Anecdotally, my first job was in a structural tube mill (high-carbon steel roll-forming, the stuff that holds up the elevators you ride; we also made the tube for part of that border fence). The computer that ran the mill was literally a small shack built inside the factory building; to work on it, we walked into it. A sensor circuit got disconnected somehow, and even with a lockout on the saw, one of my coworkers lost three fingers when it just fired up. That kind of malfunction can crash the stock market, crash planes, fire drones, shut down the cooling systems at your favorite nuke plant, or a million other things. And that kind of bug, in a program with global access and an internet of things, becomes far more damaging with weak AI calling the shots.
Machines don't need to become sentient to become a risk; it just takes more people putting machine guns and Hellfires on patrol drones and handing them a stack of pictures of people we want dead to match facial features against. Technology is a tool, just like a gun, and ignoring the possible consequences can put you in the same situation. It doesn't have to be aware to kill.
All that said, I still don't expect this to be how the world will end.