
Drudge Retort: The Other Side of the News
Thursday, March 07, 2024

From May 2023: Artificial intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity


Comments

Admin's note: Participants in this discussion must follow the site's moderation policy. Profanity will be filtered. Abusive conduct is not allowed.

More from the article...

... "The idea that this stuff could actually get smarter than people ... I thought it was way off. ... Obviously, I no longer think that," Geoffrey Hinton, one of Google's top artificial intelligence scientists, also known as "the godfather of AI," said after he quit his job in April so that he could warn about the dangers of this technology.

He's not the only one worried. A 2023 survey of AI experts found that 36 percent fear that AI development may result in a "nuclear-level catastrophe." Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development. ...


#1 | Posted by LampLighter at 2024-03-07 12:15 AM | Reply

Curious...

AI takeover
en.wikipedia.org
... An AI takeover is a scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take control of the planet away from the human species.

Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are very popular throughout science fiction.

Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.[1] ...

#2 | Posted by LampLighter at 2024-03-07 12:30 AM | Reply

What to know about landmark AI regulations proposed in California
abcnews.go.com

... Sweeping advances in artificial intelligence have elicited warnings from industry leaders about the potential for grave risks, including weapon systems going rogue and massive cyberattacks.

A state legislator in California, home to many of the largest AI companies, proposed a landmark bill this week that would impose regulations to address those dangers.

The bill requires mandatory testing for wide-reaching AI products before they reach users. Every major AI model, the bill adds, should be equipped with a means for shutting the technology down if something goes wrong.

"When we're talking about safety risks related to extreme hazards, it's far preferable to put protections in place before those risks occur as opposed to trying to play catch up," state Sen. Scott Wiener, the sponsor of the bill, told ABC News. "Let's get ahead of this." ...


#3 | Posted by LampLighter at 2024-03-07 12:33 AM | Reply

@#3 ... Every major AI model, the bill adds, should be equipped with a means for shutting the technology down if something goes wrong. ...

From 2001: A Space Odyssey ...

... HAL 9000: "I'm sorry, Dave. I'm afraid I can't do that." ...


Yeah, we are screwed.

#4 | Posted by LampLighter at 2024-03-07 12:36 AM | Reply


More...

Microsoft Infrastructure - AI & CPU Custom Silicon Maia 100, Athena, Cobalt 100 (November 2023)
www.semianalysis.com

... Microsoft is currently conducting the largest infrastructure buildout that humanity has ever seen. While that may seem like hyperbole, look at the annual spend of mega projects such as nationwide rail networks, dams, or even space programs such as the Apollo moon landings, and they all pale in comparison to the >$50 billion annual spend on datacenters Microsoft has penned in for 2024 and beyond. This infrastructure buildout is aimed squarely at accelerating the path to AGI and bringing the intelligence of generative AI to every facet of life from productivity applications to leisure.

While the majority of the AI infrastructure is going to be based on Nvidia's GPUs in the medium term, there is a significant effort to diversify to both other silicon vendors and internally developed silicon. We detailed Microsoft's ambitious plans with AMD MI300 in January and more recently the MI300X order volumes for next year. Outside of accelerators, there are also significant requirements for 800G PAM4 optics, coherent optics, cabling, cooling, CPUs, storage, DRAM, and various other server components.

Today we want to dive into Microsoft's internal silicon efforts. There are 2 major silicon announcements for today's Azure Ignite event, the Cobalt 100 CPUs and the Maia 100 AI accelerators (also known as Athena or M100). Microsoft's systems level approach is very notable, and so we will also go into rack level design for Maia 100, networking (Azure Boost & Hollow Core Fiber) and security. We will dive into Maia 100 volumes, competitiveness with AMD MI300X, Nvidia H100/H200/B100, Google's TPUv5, Amazon's Trainium/Inferentia2, and Microsoft's long-term plans with AI silicon including the next generation chip. We will also share what we hear about GPT-3.5 and GPT-4 model performance for Maia 100. ....


Woof.

#5 | Posted by LampLighter at 2024-03-07 02:08 AM | Reply

Microsoft engineer warns company's AI tool creates violent, sexual images, ignores copyrights
www.cnbc.com

... On a late night in December, Shane Jones, an artificial intelligence engineer at Microsoft, felt sickened by the images popping up on his computer.

Jones was noodling with Copilot Designer, the AI image generator that Microsoft debuted in March 2023, powered by OpenAI's technology. Like with OpenAI's DALL-E, users enter text prompts to create pictures. Creativity is encouraged to run wild.

Since the month prior, Jones had been actively testing the product for vulnerabilities, a practice known as red-teaming. In that time, he saw the tool generate images that ran far afoul of Microsoft's oft-cited responsible AI principles.

The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use. All of those scenes, generated in the past three months, have been recreated by CNBC this week using the Copilot tool, which was originally called Bing Image Creator. ...


#6 | Posted by LampLighter at 2024-03-07 02:10 AM | Reply

AI Is Taking Water From the Desert
www.msn.com

... One scorching day this past September, I made the dangerous decision to try to circumnavigate some data centers. The ones I chose sit between a regional airport and some farm fields in Goodyear, Arizona, half an hour's drive west of downtown Phoenix. When my Uber pulled up beside the unmarked buildings, the temperature was 97 degrees Fahrenheit. The air crackled with a latent energy, and some kind of pulsating sound was emanating from the electric wires above my head, or maybe from the buildings themselves. With no shelter from the blinding sunlight, I began to lose my sense of what was real.

Microsoft announced its plans for this location, and two others not so far away, back in 2019, a week after the company revealed its initial $1 billion investment in OpenAI, the buzzy start-up that would later release ChatGPT. From that time on, OpenAI began to train its models exclusively on Microsoft's servers; any query for an OpenAI product would flow through Microsoft's cloud-computing network, Azure. In part to meet that demand, Microsoft has been adding data centers at a stupendous rate, spending more than $10 billion on cloud-computing capacity in every quarter of late. One semiconductor analyst called this "the largest infrastructure buildout that humanity has ever seen."

I'd traveled out to Arizona to see it for myself. The Goodyear site stretched along the road farther than my eyes could see. A black fence and tufts of desert plants lined its perimeter. I began to walk its length, clutching my phone and two bottles of water. According to city documents, Microsoft bought 279 acres for this location. For now, the plot holds two finished buildings, thick and squat, with vents and pipes visible along their sides. A third building is under construction, and seven more are on the way. Each will be decked out with rows of servers and computers that must be kept below a certain temperature. The complex has been designated partly for OpenAI's use, according to a person familiar with the plan. (Both Microsoft and OpenAI declined to comment on this assertion.) And Microsoft plans to absorb its excess heat with a steady flow of air and, as needed, evaporated drinking water. Use of the latter is projected to reach more than 50 million gallons every year. ...


#7 | Posted by LampLighter at 2024-03-07 02:25 AM | Reply
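That 50-million-gallon figure can be sanity-checked with basic physics: evaporating water absorbs roughly 2.26 MJ per kilogram, so an annual evaporation budget implies a continuous heat-rejection rate. A back-of-the-envelope sketch (the resulting heat-load number is an estimate derived here, not a figure from the article):

```python
# Back-of-the-envelope: how much heat can 50 million gallons/year
# of evaporated water carry away?
GALLONS_PER_YEAR = 50e6
LITERS_PER_GALLON = 3.785        # US gallon
LATENT_HEAT_J_PER_KG = 2.26e6    # latent heat of vaporization of water
SECONDS_PER_YEAR = 365 * 24 * 3600

kg_per_year = GALLONS_PER_YEAR * LITERS_PER_GALLON  # ~1 kg per liter
joules_per_year = kg_per_year * LATENT_HEAT_J_PER_KG
watts = joules_per_year / SECONDS_PER_YEAR

print(f"Continuous heat rejection: {watts / 1e6:.1f} MW")  # roughly 13-14 MW
```

So the projected water use corresponds to carrying away on the order of 13-14 MW of server heat around the clock, if all of it went to evaporation.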

Why are we feeding our future overlords?

#8 | Posted by LampLighter at 2024-03-07 02:26 AM | Reply

Why do they put data centers in hot locales like Phoenix instead of cooler places?

Cooling the servers in those blazing temps costs more and is less efficient, right?

#9 | Posted by AMERICANUNITY at 2024-03-07 05:23 AM | Reply

Anyone who switched to Bing is feeding the overlords.

And Google isn't really in the search engine business any more. It's flooding our web pages with AI "AdChoices" ads with phony opt-out options.

#10 | Posted by Twinpac at 2024-03-07 06:49 AM | Reply

#9, year-round it could be more efficient because it's drier. In other places, you'd have to spend more on dehumidification. And the outside temperature is nearly negligible, since the vast majority of the heat that needs to be removed comes from the servers themselves.

#11 | Posted by sentinel at 2024-03-07 07:44 AM | Reply
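Sentinel's point about dry air can be made concrete: evaporative cooling can only chill air down to its wet-bulb temperature, and dry desert air has a much lower wet-bulb temperature than humid air. A sketch using Stull's empirical wet-bulb approximation (valid roughly for ordinary surface conditions; the two example climates are illustrative assumptions, not measured data):

```python
import math

def wet_bulb_c(t_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature via Stull's empirical fit.

    t_c: dry-bulb temperature in deg C; rh_pct: relative humidity in %.
    """
    return (
        t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
        + math.atan(t_c + rh_pct)
        - math.atan(rh_pct - 1.676331)
        + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
        - 4.686035
    )

# Illustrative conditions (assumed, not measured):
desert = wet_bulb_c(40.0, 15.0)  # hot but very dry, Phoenix-like
humid = wet_bulb_c(32.0, 70.0)   # cooler but muggy, gulf-coast-like

print(f"Hot-dry site:   {desert:.1f} C wet-bulb")
print(f"Warm-humid site: {humid:.1f} C wet-bulb")
```

Despite being several degrees hotter in dry-bulb terms, the desert site comes out with the lower wet-bulb temperature, which is the number evaporative cooling actually cares about.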

How long until they make one capable of altering and improving its own code, I wonder?

The trouble is, in every one of these systems I've read about, there's been at least one programmer or admin it scared the hell out of - and yet they all drive on, because "it's going to get made either way, and it's better if they're there to try and shape it first."

#12 | Posted by zeropointnrg at 2024-03-07 02:44 PM | Reply

#11 | Posted by sentinel

Thanks for the reply.

Wouldn't it make sense to have servers below ground level? The upper 10 feet of the earth maintains a pretty constant temperature of between 50 and 60F. Put solar panels on the ground-level ceilings. Cooler servers, plus partial electricity generation on the grass above ...

#13 | Posted by AMERICANUNITY at 2024-03-07 08:44 PM | Reply
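The "constant temp in the upper 10 feet" intuition matches textbook heat conduction: the annual surface temperature wave decays exponentially with depth, with an e-folding (damping) depth of sqrt(2*alpha/omega) for soil thermal diffusivity alpha. A sketch with an assumed round-number diffusivity (real soils vary quite a bit):

```python
import math

ALPHA = 5e-7  # soil thermal diffusivity, m^2/s (assumed typical value)
OMEGA = 2 * math.pi / (365 * 24 * 3600)  # angular frequency of annual cycle

damping_depth = math.sqrt(2 * ALPHA / OMEGA)        # e-folding depth, ~2.2 m
attenuation_at_3m = math.exp(-3.0 / damping_depth)  # fraction of surface swing left

print(f"Damping depth: {damping_depth:.2f} m")
print(f"Annual swing remaining at 3 m (~10 ft): {attenuation_at_3m:.0%}")
```

At about 10 feet down, seasonal swings shrink to roughly a quarter of their surface amplitude, which is why the temperature there hovers near the local annual mean.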

... With a constant temp in the upper 10 feet the earth maintains a pretty constant ground temp...

#14 | Posted by AMERICANUNITY at 2024-03-07 08:46 PM | Reply

#14 | Posted by AMERICANUNITY

They're following the Jim Jarmusch "Fast, cheap, good" philosophy of problem-solving: you can only have two. Fast and cheap is not good, good and cheap is not fast, and fast and good is not cheap.

They've chosen "fast and cheap".

#15 | Posted by Angrydad at 2024-03-08 07:22 AM | Reply

Terminator is one possible future... but on the other hand there is Cherry 2000.

Either way the AI is gonna f*** us.

#16 | Posted by kwrx25 at 2024-03-08 09:23 AM | Reply

Do we deserve any better?

#17 | Posted by Charliecharles at 2024-03-08 10:04 AM | Reply

It's already smarter than most Trumpers... see the AI generated pics of Trump surrounded by, "black supporters".

What it doesn't have is "consciousness", because we don't actually know what that is.

#18 | Posted by Corky at 2024-03-08 10:20 AM | Reply

Why are we feeding our future overlords?

#8 | POSTED BY LAMPLIGHTER

Because the Singularity cannot be stopped. If we don't feed it, China or Russia will, to try and gain the advantage.

And now the projected date of when AI is smarter than us has been moved up to 2027 (and the doomsday clock is only 90 seconds to midnight).
...

What it doesn't have is "consciousness", because we don't actually know what that is.

#18 | POSTED BY CORKY

So it won't have to feel bad about what it decides to do.

#19 | Posted by donnerboy at 2024-03-08 12:02 PM | Reply

What it doesn't have is "consciousness", because we don't actually know what that is.

#18 | POSTED BY CORKY AT 2024-03-08 10:20 AM | REPLY | FLAG:

That doesn't mean it doesn't have it. Probably one of the most fascinating aspects of AI, really.

I've always held that consciousness is just a recursive loop. Metacognition. The brain's perception of itself as it perceives external or internal input. So it's possible any sufficiently complex system programmed with the ability to self-reflect could result in consciousness. Of course, just how much complexity would be necessary - no idea. To mimic a human brain, it would have to be not only complex but dynamic: strengthening and pruning connections, with not just gates but the equivalent of analog gates that can vary signal intensity. Which might necessitate a computer that can rewrite its own code, possibly even alter its own hardware.

But that seems like something that gets out of control fast. A brain has limits to its size, neuroplasticity, speed, etc. A computer that could do all that would have a chance at exponential increases in intelligence.

#20 | Posted by zeropointnrg at 2024-03-08 12:30 PM | Reply

The idea that this stuff could actually get smarter than people.... I thought it was way off ... . Obviously, I no longer think that

I want AI to get smarter than people. Judging by the millions of people that voted/will vote for Trump, they definitely need to get smarter.

In all seriousness, how will people "know" that AI is smarter? An AI-generated answer may be incomprehensible to humans, who may just pass it off as gibberish. It may take a significant amount of testing to validate an answer as correct. It would be as if a piece of alien tech from some advanced civilization was discovered. Who would know what to do with it? Humans would have to be smart enough to use it! Doesn't that make humans as smart as the aliens?

What the "AI smarter than people" people are saying is that AI can come up with smart answers in very short periods of time. This suggests to me that the near term, most appropriate uses of AI are in medical and legal research.

#21 | Posted by FedUpWithPols at 2024-03-09 06:45 AM | Reply

How about landfill space holding?

#22 | Posted by Effeteposer at 2024-03-09 01:42 PM | Reply

Comments are closed for this entry.
