
OpenAI's former policy lead criticizes the company for 'rewriting' its AI safety history | TechCrunch


A high-profile former OpenAI policy researcher, Miles Brundage, took to social media on Wednesday to criticize OpenAI for, in his view, "rewriting the history" of its deployment approach to potentially risky AI systems.

Earlier this week, OpenAI published a document outlining its current philosophy on AI safety and alignment, the process of designing AI systems that behave in desirable and explainable ways. In the document, OpenAI said that it sees the development of AGI, roughly defined as AI systems that can do any task a human can, as a "continuous path" that requires "iteratively deploying and learning" from AI technologies.

"In a discontinuous world [...] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT-2," OpenAI wrote. "We now view the first AGI as just one point along a series of systems of increasing usefulness [...] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system."

But Brundage contends that GPT-2 did, in fact, warrant abundant caution at the time of its release, and that this was "100% consistent" with OpenAI's iterative deployment strategy today.

"OpenAI's release of GPT-2, which I was involved in, was 100% consistent with [and] foreshadowed OpenAI's current philosophy of iterative deployment," Brundage wrote in a post on X. "The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution."

Brundage, who joined OpenAI as a research scientist in 2018, was the company's head of policy research for several years. On OpenAI's "AGI readiness" team, he had a particular focus on the responsible deployment of language generation systems, such as OpenAI's AI chatbot platform ChatGPT.

GPT-2, which OpenAI announced in 2019, was an ancestor of the AI systems powering ChatGPT. GPT-2 could answer questions about a topic, summarize articles, and generate text at a level sometimes indistinguishable from that of humans.

While GPT-2 and its outputs may look basic today, they were state-of-the-art at the time. Citing the risk of malicious use, OpenAI initially refused to release GPT-2's source code, opting instead to give selected news outlets limited access to a demo.

The decision was met with mixed reviews from the AI industry. Many experts argued that the threat posed by GPT-2 had been exaggerated, and that there was no evidence the model could be abused in the ways OpenAI described. The AI-focused publication The Gradient went so far as to publish an open letter requesting that OpenAI release the model, arguing it was too technologically important to hold back.

OpenAI eventually released a partial version of GPT-2 six months after the model's unveiling, followed by the full system several months after that. Brundage believes this was the right approach.

"What part of [the GPT-2 release] was motivated by or premised on thinking of AGI as discontinuous? None of it," he said in a post on X. "What's the evidence this caution was 'disproportionate' ex ante? Ex post, it prob[ably] would have been OK, but that doesn't mean it was responsible to YOLO it [sic] given the information at the time."

More broadly, Brundage fears that the document aims to set up a burden of proof where "concerns are alarmist" and "you need overwhelming evidence of imminent dangers to act on them." This, he argues, is a "very dangerous" mentality for advanced AI systems.

"If I were still working at OpenAI, I would be asking why this [document] was written the way it was, and what exactly OpenAI hopes to achieve by poo-pooing caution in such a lop-sided way," Brundage said.

OpenAI has historically been accused of prioritizing "shiny products" at the expense of safety, and of rushing product releases to beat rival companies to market. Last year, OpenAI dissolved its AGI readiness team, and a string of AI safety and policy researchers left the company for rivals.

Competitive pressures have only ramped up. Chinese AI lab DeepSeek captured the world's attention with its openly available R1 model, which matched OpenAI's o1 "reasoning" model on a number of key benchmarks. OpenAI CEO Sam Altman has admitted that DeepSeek has lessened OpenAI's technological lead, and said that OpenAI would "pull up some releases" to compete better.

There's a lot of money on the line. OpenAI loses billions annually, and the company has reportedly projected that its annual losses could triple to $14 billion by 2026. A faster product release cycle could benefit OpenAI's bottom line in the near term, but possibly at the expense of safety in the long term. Experts like Brundage question whether the trade-off is worth it.


