February 26, 2024


Geoffrey Hinton and Yoshua Bengio, two of the three Turing Award-winning researchers honored for their pioneering work on neural networks and often considered the “godfathers” of the modern artificial intelligence movement, signed the statement, as did other leading researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s artificial intelligence research efforts, had not signed on as of Tuesday.)

The statement comes at a moment of growing concern about the potential harms of artificial intelligence. Recent advances in so-called large language models — the type of system used by ChatGPT and other chatbots — have raised fears that artificial intelligence could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of jobs.

Some believe that, if nothing is done to stop it, artificial intelligence could become powerful enough to cause societal disruption within a few years, though researchers have yet to explain how that might happen.

These fears are shared by many industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, racing to build faster than their competitors — poses serious risks and should be more tightly regulated.

This month, Altman, Hassabis and Amodei met with President Joe Biden and Vice President Kamala Harris of the United States to discuss regulation of artificial intelligence. In testimony before the US Senate after the meeting, Altman warned that the risks of advanced artificial intelligence systems were serious enough to warrant government intervention, and he called for regulation of AI’s potential harms.

Dan Hendrycks, executive director of the Center for AI Safety, said in an interview that the open letter represented a “coming out” for some industry leaders who had expressed concern — but only in private — about the risks of the technology they are developing.

“There’s a misconception, even in the AI community, that there is only a handful of doomsayers,” Hendrycks said. “But, in fact, many people privately express concern about these things.”

Some skeptics argue that artificial intelligence technology is still too immature to pose an existential threat. When it comes to today’s AI systems, they are more worried about short-term problems, such as biased and incorrect answers, than about long-term dangers.

But others have argued that artificial intelligence is improving so rapidly that it has already surpassed human performance in some areas and will soon do so in others. They say the technology has shown signs of advanced capabilities and understanding, raising fears that “artificial general intelligence” (AGI), a type of artificial intelligence that can match or exceed human performance in a wide variety of tasks, may not be far off.

In a blog post published last week, Altman and two other OpenAI executives proposed several ways to responsibly manage powerful AI systems. They called for cooperation among the leading AI makers, more technical research into large language models, and the formation of an international AI safety organization, similar to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.

Altman has also voiced support for rules that would require makers of large AI models to register for a government-issued license.

In March, more than 1,000 technologists and researchers signed another open letter calling for a six-month pause on the development of the largest AI models, citing concerns about “an out-of-control race to develop and deploy ever more powerful digital minds.”

That letter, coordinated by another AI-focused nonprofit, the Future of Life Institute, was signed by Elon Musk and other well-known tech leaders, but did not have many signatures from the leading AI labs.

The brevity of the new statement from the Center for AI Safety — just 22 words in English — is meant to unite AI experts who might disagree about the nature of specific risks or the measures to prevent them, but who share general concerns about powerful AI systems, Hendrycks said.

“We didn’t want to push a very large menu of 30 potential interventions,” he said. “When that happens, the message gets diluted.”

The statement was initially shared with a few high-profile artificial intelligence experts, including Hinton, who quit his job at Google this month so that he could speak more freely, he said, about the potential harms of AI. From there, it made its way to several of the major artificial intelligence labs, where some employees signed on.

The urgency of the warnings from AI leaders has grown as millions of people turn to chatbots for entertainment, companionship and increased productivity, and as the underlying technology improves at a dizzying pace.

“I think if this technology goes wrong, it can go quite wrong,” Altman told the Senate subcommittee. “We want to work with the government to prevent that from happening.”

Kevin Roose is a technology columnist and the author of “Futureproof: 9 Rules for Humans in the Age of Automation.”

Source: NYT en Español