Drudge Retort: The Other Side of the News

It makes me laugh how many people assume that an AI capable of wiping out humanity would need to have reached sentience. It wouldn't. Strong AI doesn't need to exist for the wrong set of rules to work its way into a mesh of expert systems: the kinds of AI we already use for sorting Google searches, directing self-driving cars, anticipating what Amazon should display in your suggested purchases, deciding when to flip on the traction control or when to allow the landing gear to lower. I work for a company that builds retail software, and many of our products have reached the point where they make business decisions for big-brand retailers: when and what to order from whom, who to schedule for which days, what the best price is under a given set of conditions. So much data feeds these decision systems that humans already have a hard time second-guessing them, and rarely outperform them. And these are just collections of rules, decision trees. These are weak AIs, and many of them take their input from other weak AIs. Given the wrong combination of rules, goals, and data, what makes us think we could stand up to the three-ton machines we built? There is no inherent property of machines that says they have to kill us, but it is at least a good idea to keep in mind that something could go wrong. Take a second look at all the things we no longer directly control.
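As a toy illustration of the point above (a made-up sketch, not any real retail system): two chained rule-based "weak AIs", where the second acts on the first's output without questioning it, so one bad sensor reading cascades into a bad action.

```python
# Toy sketch: two chained rule-based "weak AIs". Neither is sentient;
# each is just a small decision rule, but a bad reading in stage one
# propagates into a bad action in stage two.

def demand_forecast(last_week_sales, sensor_ok):
    # Stage 1: a trivial ordering rule. A broken sensor silently
    # reports zero sales, and the rule has no sanity check.
    if not sensor_ok:
        last_week_sales = 0          # garbage in...
    return last_week_sales * 1.1     # "order 10% more than last week"

def ordering_system(forecast):
    # Stage 2: acts on stage 1's output without second-guessing it.
    if forecast <= 0:
        return "cancel all orders"   # ...garbage out, at scale
    return f"order {round(forecast)} units"

print(ordering_system(demand_forecast(1000, sensor_ok=True)))   # normal case
print(ordering_system(demand_forecast(1000, sensor_ok=False)))  # cascading failure
```

The failure mode is the one described above: neither stage is "wrong" by its own rules; the combination plus bad data is what does the damage.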
Anecdotally, my first job was in a structural tube mill (high-carbon steel roll-forming: the stuff that holds up the elevators you ride; we also made tube for part of that border fence), and the computer that ran the mill was literally a small shack built inside the factory building; to work on it, we walked into it. A sensor circuit somehow got disconnected, and even with a lockout on the saw, one of my coworkers lost three fingers when it just fired up. That kind of malfunction can crash the stock market, crash planes, fire drones, shut down the cooling systems at your favorite nuke plant, or do a million other things. A bug like that, in a program with global access and an Internet of Things to act through, can do far more damage with weak AI calling the shots.
Machines don't need to become sentient to become a risk; it just takes more people putting machine guns and Hellfires on patrol drones and handing them a stack of pictures of people we want dead to match facial features against. Technology is a tool, just like a gun, and ignoring the possible consequences can land you in the same situations. It doesn't have to be aware to kill.
All that said, I still don't expect this to be how the world will end.

The article title says "green energy" and "failure," so cue the knee-jerk "I told you so" responses. Oh, and also the many people who can't read or learn new tricks.

"[Google's failure] prompted us to reconsider the economics of energy. What's needed, we concluded, are reliable zero-carbon energy sources so cheap that the operators of power plants and industrial facilities alike have an economic rationale for switching over soon -- say, within the next 40 years. Let's face it, businesses won't make sacrifices and pay more for clean energy based on altruism alone. Instead, we need solutions that appeal to their profit motives...
Consider an average U.S. coal or natural gas plant that has been in service for decades; its cost of electricity generation is about 4 to 6 U.S. cents per kilowatt-hour. Now imagine what it would take for the utility company that owns that plant to decide to shutter it and build a replacement plant using a zero-carbon energy source. The owner would have to factor in the capital investment for construction and continued costs of operation and maintenance -- and still make a profit while generating electricity for less than $0.04/kWh to $0.06/kWh...
In the electricity sector, that bottom line comes down to the difference between the cost of generating electricity and its price. In the United States alone, we're aiming to replace about 1 terawatt of generation infrastructure over the next 40 years. This won't happen without a breakthrough energy technology that has a high profit margin...
We're glad that Google tried something ambitious with the RE-C initiative, and we're proud to have been part of the project. But with 20/20 hindsight, we see that it didn't go far enough, and that truly disruptive technologies are what our planet needs."
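The arithmetic in the quoted passage is easy to check. A back-of-the-envelope sketch (only the 4-6 cents/kWh and 1-terawatt-over-40-years figures come from the quote; the rest is plain arithmetic, and the full-output assumption is mine):

```python
# Back-of-the-envelope check on the quoted figures.
# An incumbent coal/gas plant generates at roughly 4-6 US cents per kWh,
# so a replacement zero-carbon plant must beat that *after* covering
# its own capital, operation, and maintenance costs.
incumbent_cost_low, incumbent_cost_high = 0.04, 0.06  # $/kWh, from the quote

# The quote targets replacing ~1 terawatt of US generation over 40 years:
terawatts_to_replace = 1.0
years = 40
gw_per_year = terawatts_to_replace * 1000 / years
print(f"Replacement pace needed: {gw_per_year:.0f} GW of new capacity per year")

# What one year's worth of that capacity could sell, assuming (unrealistically)
# it runs at full output all year:
hours_per_year = 8760
kwh_per_year = gw_per_year * 1e6 * hours_per_year  # 1 GW = 1e6 kW
revenue_low = kwh_per_year * incumbent_cost_low
print(f"At 4 cents/kWh, {gw_per_year:.0f} GW running flat out sells about "
      f"${revenue_low / 1e9:.0f} billion of electricity per year")
```

That pace (tens of gigawatts of new zero-carbon capacity every year, undercutting plants whose construction costs were paid off decades ago) is why the authors conclude that only a high-margin breakthrough technology gets built voluntarily.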

There. There's your lernin'. Smart people use their mistakes to learn lessons that improve their chances of success in the future (instead of never trying, like [-redacted-] would have us do). Google learned that even if they could get renewable energy cheaper than coal, we still wouldn't switch unless someone could make butt-tons of money off it (see, this is why we can't have nice things). Nothing will change until it's a huge jump (or we fall off the cliff, like Nullifidian says).


Home | Breaking News | Comments | User Blogs | Stats | Back Page | RSS Feed | RSS Spec | DMCA Compliance | Privacy | Copyright 2014 World Readable