Drudge Retort: The Other Side of the News


Jeff, yes, but a combustion engine couldn't also become a plow, a PC, a laser printer, or an ATM with little change. Robotics and automation is a different kind of shift than the move to looms. You aren't the buggy driver this time; you are the horse.
ANYTHING that can be done by following an algorithmic process can be automated now; it's just a matter of how fast we learn the algorithm, and even that is increasingly being done by computers.
New jobs can be created by the destruction of the old, but when it is faster to build a machine or write a script to fill the new job than to train a person, why train a person? You can buy new machines and copy all the skill of one to any or all of the others, instantly building a skilled workforce.
I work in software. Every time I run into a clerical task that might have gone to another person, I write a tool to do it instead. Instead of having a human help run a process across multiple platforms, I write scripts and feed them to automation tools. This is increased productivity: twenty years ago it would have taken 20 people or more to do the same work I do, and I'm not particularly special in this.

And as long as I can create tools, no one will be hired to work under me. When I get promoted, no one is hired to fill my old position; I represent the vertical stack of every position I held before my current one. The team I work on has been whittled down by attrition from about 20 people to six. We turn out the same amount of work; we are just more productive per person. Other projects are started, but with the same expectations of output as our team, meaning fewer people are hired overall as time goes on.

Ironically, I've replaced the majority of a team dedicated to automation testing by writing tools that better parse natural-language descriptions of processes. I've also made my job easier to hand off by documenting my tools, so when I leave, a new person can fill my role without much training, and my replacement will be paid less to do the same work I do. A lot of jobs, even in skilled work, aren't that hard to replace with automation once someone is motivated to do so.
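To make the point concrete, here is a minimal sketch of the kind of clerical task I mean. The task and data are invented for illustration (not any real tool of mine): tallying statuses from a report that someone might otherwise compile by hand in a spreadsheet.

```python
import csv
import io

# Hypothetical clerical task: count report rows by status, replacing a
# manual tally. Normalizes case so "Open", "open", and "OPEN" match.
def summarize_statuses(report_csv: str) -> dict:
    counts = {}
    for row in csv.DictReader(io.StringIO(report_csv)):
        status = row["status"].strip().lower()
        counts[status] = counts.get(status, 0) + 1
    return counts

raw = "id,status\n1,Open\n2,closed\n3,Open\n4,OPEN\n"
print(summarize_statuses(raw))  # → {'open': 3, 'closed': 1}
```

Twenty lines like this, run on a schedule, is one fewer thing a person gets hired to do.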

The freedom of speech promised by the First Amendment has never meant being free to say whatever you want in public. Just seven years after the Bill of Rights was ratified, the Alien and Sedition Acts were signed, and people were arrested for publishing such things as:
calling the Adams administration a "continual tempest of malignant passions" and the President a "repulsive pedant, a gross hypocrite and an unprincipled oppressor"
calling the administration "ridiculous pomp, foolish adulation, and selfish avarice."

I can't remember exactly, but I think the current standard for limiting free speech is whether what is said incites imminent lawless action (the older test was "clear and present danger"). I think yelling "kill the cops" and calling out one specific cop could be seen as reason to at least bring him in for questioning under that guideline. Attempting to kill the cops, or anyone, can probably be expected to be met with whatever force is necessary to nullify the threat. Since a tackle wouldn't have done ----, I can see why they opened fire.

I'm just as anti abuse-of-police-power as the next guy, but this particular example doesn't exactly strengthen the argument you are trying to make. This one is too close to the middle; it would be like the other side saying a dog barking and baring its teeth justifies shooting it.

I believe it would be possible to do this and remain yourself, but it wouldn't be a quick upload process; it would be a gradual transition in substrate over time.
(Maybe this discussion will help that make sense to you: reviewthefuture.com. They cover most of your "it's just two beings" arguments too; jump forward to about 25 minutes for the discussion of gradual replacement. Warning to the Nulli types: the hosts are technotopians like me.)
As you gradually replace parts of your brain, at some point more of your consciousness is generated through the digital substrate than the biological one, unless consciousness turns out to be solely analog. Or who knows, maybe souls are necessary. If I were already reaching the end of my life, I'd be willing to give it a shot without knowing for sure.

As a linguistics nerd (linguistics is my minor, focusing on the structure of language because I like writing parsers for automation), I have to shrug my shoulders and say "So?" Unlike French or German, English does not have a governing body that dictates official language standards. We do have what is thought of as Standard English, and vernacular Englishes do not fall within that standard, but people still understand them. English standards are enforced only by social pressure: acceptance by the highest tiers of society, the most educated and the most powerful.

But how many of you know the four cases of the third-person masculine pronouns? Not many, I'd bet, since that practice has died off. How about when to use thou and you? No? Also a dead practice of English. English changes, has been changing for the past 1,300+ years, and will continue to do so. "Them," "their," "they"? Not even English originally: those words beginning with eth (eth (ð) is the voiced dental fricative we associate with 'th,' rather than the voiceless version, which is represented as theta (θ)) came from Old Norse and "polluted" the "proper English" of the south of England, but did they ruin the language? "Knight," "enough," and "what" used to be pronounced exactly as they are spelled, but country bumpkins and vernacular speakers simplified the pronunciations. Innovation in the language comes from the outcasts.

Standard English has always represented the powerful, educated, and wealthy, and non-standard English has always represented the solidarity of a class or region of people not accepted by the elite. All the responses that can be paraphrased as "You can't get a job with Ebonics" illustrate this. The top tiers of society have always had an interest in controlling the language, but if they had been successful we'd probably all be speaking French (or hell, even Latin).
As to the protests, I see them as being about the same thing protests are always about: someone doesn't feel like they are getting their fair shake, so every grievance becomes a reason to rally to their cause. These aren't illiterate kids; they are getting marked down on their dissertation proposals in graduate school (i.e., more educated than half of us). I think race has become a sensitive issue again because so many in power refuse to acknowledge the real structural differences in opportunity that remain. Instead, the issue is ignored, and in reaction the aggrieved overstate their side, leading to a loss of legitimacy for their cause and more ignoring by those not affected. It's not what is being protested that indicates the problem; it's that there is a perceived need to protest at all. The upside is that the participants still believe someone will listen when they protest peacefully.

It makes me laugh how attached people are to the idea that an AI capable of wiping out humanity needs to have reached sentience. It wouldn't. Strong AI doesn't need to exist for the wrong set of rules to become part of a mesh of expert systems, the kinds of AI we already use: for sorting Google searches, for directing self-driving cars, for anticipating what Amazon should display in your suggested purchases, for deciding when to flip on the traction control or when to allow lowering the landing gear.

I work for a company that makes retail software, and many of our products have reached the point where they make business decisions for big-brand retailers: when and what to order from whom, who to schedule for what days, what the best price is under a number of conditions. So much data goes into these decision systems that humans already have a hard time second-guessing them, and they rarely outperform them. And these are just collections of rules, decision trees. These are weak AIs, and many of them take their input from other weak AIs. Given the wrong combination with the wrong goal and the wrong data, what makes us think we can stand up to the three-ton machines we built? There is no inherent property of machines that says they have to kill us, but it is at least a good idea to keep in mind that something could go wrong. Take a second look at all the things we don't directly control any more.
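The "collections of rules" point can be illustrated with a toy pricing cascade. The rules and numbers here are invented for illustration, not any real retail system: the point is that a "weak AI" business decision is just conditions over input data, with no sentience anywhere, and its output feeds other automated systems downstream.

```python
# Toy "weak AI" pricing rule cascade (invented rules and numbers).
# Each branch is a plain business rule; stack enough of these and the
# system is making decisions no human is second-guessing.
def suggest_price(base: float, inventory: int, competitor: float) -> float:
    price = base
    if inventory > 500:        # overstocked: discount to move units
        price *= 0.85
    if competitor < price:     # undercut by a competitor: match minus a hair
        price = competitor * 0.99
    return round(price, 2)

print(suggest_price(base=10.00, inventory=800, competitor=9.00))  # → 8.5
```

Nothing here is intelligent, yet chain a few hundred such rules across ordering, scheduling, and pricing systems that feed each other, and the wrong goal plus the wrong data produces outcomes nobody chose.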
Anecdotally, my first job was in a structural tube mill (high-carbon steel roll-forming, the stuff that holds up the elevators you ride; we also made the tube for part of that border fence), and the computer that ran the mill was literally a small shack built within the factory building; to work on it, we walked inside it. A sensor circuit got disconnected somehow, and even with a lockout on the saw, one of my coworkers lost three fingers when it just fired up. This kind of malfunction can crash the stock market, crash planes, fire drones, shut down cooling systems at your favorite nuke plant, or a million other things. This kind of bug in a program with global access and an Internet of Things can be much more damaging with weak AI calling the shots.
Machines don't need to become sentient to become a risk; it just takes more people putting machine guns and Hellfires on patrol drones and giving them a bunch of pictures of people we want dead to match facial features against. Technology is a tool, just like a gun, and ignoring the possible consequences could put you in the same situation. It doesn't have to be aware to kill.
All that said, I still don't expect this to be how the world will end.

Article title says "green energy" and "failure"? Time to come up with knee-jerk "I told you so" responses. Oh, and also, many people who can't read or learn new tricks.

"[Google's failure] prompted us to reconsider the economics of energy. What's needed, we concluded, are reliable zero-carbon energy sources so cheap that the operators of power plants and industrial facilities alike have an economic rationale for switching over soon -- say, within the next 40 years. Let's face it, businesses won't make sacrifices and pay more for clean energy based on altruism alone. Instead, we need solutions that appeal to their profit motives...
Consider an average U.S. coal or natural gas plant that has been in service for decades; its cost of electricity generation is about 4 to 6 U.S. cents per kilowatt-hour. Now imagine what it would take for the utility company that owns that plant to decide to shutter it and build a replacement plant using a zero-carbon energy source. The owner would have to factor in the capital investment for construction and continued costs of operation and maintenance -- and still make a profit while generating electricity for less than $0.04/kWh to $0.06/kWh...
In the electricity sector, that bottom line comes down to the difference between the cost of generating electricity and its price. In the United States alone, we're aiming to replace about 1 terawatt of generation infrastructure over the next 40 years. This won't happen without a breakthrough energy technology that has a high profit margin...
We're glad that Google tried something ambitious with the RE-C initiative, and we're proud to have been part of the project. But with 20/20 hindsight, we see that it didn't go far enough, and that truly disruptive technologies are what our planet needs."
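The excerpt's economics can be sketched as back-of-envelope arithmetic. The plant numbers below are invented for illustration (not real plant data, and financing costs and discounting are ignored); the point is just that a replacement plant's all-in cost must come in under the incumbent's roughly $0.04 to $0.06 per kWh before any utility has a profit motive to switch.

```python
# Simplified cost-per-kWh estimate: spread capital over the plant's
# lifetime, add yearly operation and maintenance, divide by yearly output.
# (Ignores financing and discounting; illustrative only.)
def cost_per_kwh(capital: float, annual_om: float, lifetime_years: int,
                 annual_kwh: float) -> float:
    yearly_cost = capital / lifetime_years + annual_om
    return yearly_cost / annual_kwh

# Hypothetical 1 GW zero-carbon plant: $3B to build, $80M/yr to run,
# 40-year life, ~4.38 billion kWh/yr at a 50% capacity factor.
lcoe = cost_per_kwh(capital=3e9, annual_om=8e7, lifetime_years=40,
                    annual_kwh=4.38e9)
print(f"${lcoe:.3f}/kWh")  # → $0.035/kWh, under the 4-6 cent bar
```

Only when that number beats the incumbent's, with margin left over for profit, does the article's "economic rationale for switching" exist.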

There. There's your lernin. Smart people use their mistakes to learn lessons that improve their chances of success in the future (instead of never trying, like [-redacted-] would have us do). Google learned that even if they could get renewable energy to be cheaper than coal, we still wouldn't switch unless someone could make butt-tons of money off it (see, this is why we can't have nice things). Nothing will change until it's a huge jump (or we fall off the cliff, like Nullifidian says).

Copyright 2015 World Readable