Explainer: What may have caused OpenAI board to fire Sam Altman
In a surprising move, OpenAI, the artificial intelligence research lab, ousted its CEO, Sam Altman, raising eyebrows and leaving shareholders in the dark.
While concerns about the rapid advancement of AI technology may have played a role in Altman's termination, the handling of the situation has drawn criticism from various quarters, reports CNN.
The decision to remove Altman, credited with steering OpenAI from obscurity to a $90 billion valuation, was made abruptly, catching even major stakeholders like Microsoft off guard.
The CNN report suggests that Microsoft, OpenAI's most important shareholder, was unaware of Altman's dismissal until just before the public announcement, causing a significant drop in Microsoft's stock value.
OpenAI employees, including co-founder and former president Greg Brockman, were also blindsided, leading to Brockman's subsequent resignation. The sudden departure of key figures prompted rumors of Altman and former employees planning to launch a competing startup, posing a threat to OpenAI's years of hard work and achievements, said the report.
The situation was complicated by the peculiar structure of OpenAI's board. The company is a nonprofit that controls a for-profit subsidiary, OpenAI LP, established by Altman, Brockman, and Chief Scientist Ilya Sutskever. The for-profit arm's rapid push to a $90 billion valuation clashed with the nonprofit board that retained ultimate control, culminating in Altman's dismissal, the report also said.
The tipping point appears to have been Altman's announcement at a recent developer conference that OpenAI intends to provide tools for creating personalised versions of ChatGPT. This move, seen as too risky by the board, may have triggered Altman's removal.
Altman's warnings about the potential dangers of AI and the need for regulatory limits indicate a clash between innovation and safety within OpenAI. The board's concerns about Altman's pace of development, while perhaps justified, were mishandled, leading to a crisis that could have been avoided.
The aftermath sees OpenAI scrambling to reverse the decision and entice Altman back. The incident has strained relations with Microsoft, which now demands a seat on the board. OpenAI's future hangs in the balance, with possibilities ranging from Altman's return to his launching a rival startup, the report also said.
In the end, OpenAI finds itself in a precarious position, facing potential internal upheaval and external challenges, highlighting the importance of strategic decision-making in the rapidly evolving field of artificial intelligence.
Human drama at OpenAI: Board reportedly ‘in discussion’ with Sam Altman to return as CEO
The OpenAI board is reportedly "in discussion" with Sam Altman regarding his potential return as Chief Executive Officer (CEO) after he was suddenly fired on Friday (November 17, 2023), according to The Verge.
Quoting sources close to the matter, The Verge reported that Altman is “ambivalent” about coming back and would want significant governance changes.
Earlier, many staffers of OpenAI, the US-based AI research and deployment company that developed ChatGPT, gave the board an ultimatum: resign and bring back Sam Altman and Greg Brockman, the OpenAI chairman who resigned in protest at Altman's firing.
As per The Verge, the board had initially agreed to resign, clearing the way for Altman and Brockman to return. However, its stance appears to have shifted since then.
A source close to Altman told The Verge that if he decides to start a new company, those staffers will go with him, which could send OpenAI into free fall.
Following Altman's termination as CEO, a string of senior researchers has resigned from OpenAI.
Meanwhile, in a memo sent to OpenAI staffers, an executive at the company reportedly said "we remain optimistic" about bringing back Sam Altman, The Verge reports, quoting The Information. The Verge, however, could not confirm whom the executive meant by "we."
ChatGPT-4: All you need to know
OpenAI’s ChatGPT-4 is the latest iteration of the groundbreaking Generative Pre-trained Transformer (GPT) series. Building on the success of its predecessors, GPT-4 offers enhanced capabilities, improved performance, and a more user-friendly experience. GPT-4 was publicly released on March 14, 2023, making it accessible to users worldwide. Let’s explore how to use ChatGPT-4, its new features, and more.
New Features of OpenAI's ChatGPT-4
OpenAI highlights three significant advancements in this next-generation language model: creativity, visual input, and longer context. According to OpenAI, GPT-4 demonstrates substantial improvements in creativity, excelling in both generating and collaborating with users on creative endeavors. Let’s see some of the top new features of ChatGPT-4.
Can Understand More Advanced Inputs
One of the major breakthroughs of GPT-4 lies in its enhanced capacity to comprehend intricate and nuanced prompts. OpenAI reports that GPT-4 performs at a level comparable to human experts on diverse professional and academic benchmarks.
This was demonstrated by subjecting GPT-4 to a range of standardized tests designed for humans, including the SAT, the bar exam, and the GRE, without any test-specific training. Remarkably, GPT-4 not only handled these exams successfully but also consistently outperformed its predecessor, GPT-3.5, across the assessments.
GPT-4 supports more than 26 languages, including less widely spoken ones like Latvian, Welsh, and Swahili. When assessed on three-shot accuracy using a translated MMLU benchmark, GPT-4 surpassed the English-language performance of GPT-3.5 and of other prominent LLMs such as PaLM and Chinchilla in 24 of the languages tested.
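For context, "three-shot accuracy" means the model is prompted with three worked examples before the question being scored. The sketch below shows that prompting pattern only; the questions are invented for illustration, and real MMLU items are multiple-choice with a different template.

```python
# Illustrative sketch of a three-shot prompt: three solved Q/A examples
# precede the test question, whose answer the model must complete.
# All questions and answers here are made up for illustration.

def build_three_shot_prompt(examples, question):
    """Format three worked Q/A pairs followed by the unanswered test question."""
    assert len(examples) == 3, "three-shot means exactly three examples"
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {question}\nA:")  # the model fills in the final answer
    return "\n\n".join(parts)

prompt = build_three_shot_prompt(
    [("2 + 2 = ?", "4"),
     ("What is the capital of France?", "Paris"),
     ("What is H2O commonly called?", "Water")],
    "Which is the largest planet in the solar system?",
)
```

The benefit of few-shot prompting is that the examples establish the expected format and reasoning style without any fine-tuning of the model itself.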
Multimodal Functionality
In contrast to its predecessor, GPT-3.5, GPT-4 introduces a remarkable advancement: multimodal capability. The model can process not only text prompts but also image prompts.
This groundbreaking feature enables the AI to accept an image as input, interpret it, and explain it as effectively as a text prompt. The model seamlessly handles images of varying sizes and types, including documents that combine text and images, hand-drawn sketches, and even screenshots.
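As a rough illustration of how such an image-plus-text prompt is structured, here is a sketch based on the OpenAI chat-completions message format; the model name, image URL, and exact field layout are assumptions, and an actual call would require the OpenAI SDK, an API key, and network access.

```python
# Sketch of building a GPT-4 image-plus-text request. The model name,
# image URL, and payload layout are illustrative assumptions; this only
# constructs the request, it does not send it.

def build_vision_request(question: str, image_url: str) -> dict:
    """Build a chat-completions-style payload mixing text with an image."""
    return {
        "model": "gpt-4-vision-preview",  # illustrative model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_vision_request(
    "What does this hand-drawn sketch depict?",
    "https://example.com/sketch.png",  # placeholder URL
)
# Roughly, the actual call would then be:
#   client = openai.OpenAI()
#   response = client.chat.completions.create(**request)
```

Mixing text and image parts in a single user message is what lets the model explain a screenshot or hand-drawn sketch alongside a written question.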
Enhanced Steerability
OpenAI further claims that GPT-4 exhibits a remarkable level of steerability. Notably, it has become stronger in staying true to its assigned character, reducing the likelihood of deviations when deployed in character-based applications.
Developers now have the ability to prescribe the AI’s style and task by providing specific instructions within the system message. These messages enable API users to customize the user experience extensively while operating within defined parameters. To ensure model integrity, OpenAI is also actively working on enhancing the security of these messages, as they represent the most common method for potential misuse.
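A minimal sketch of the system-message mechanism described above, again using the chat-completions message format; the persona, prompts, and model name are all illustrative, and sending the request would require the OpenAI SDK and an API key.

```python
# Sketch of steering GPT-4 with a system message. The persona, prompts,
# and model name are illustrative; this builds the request but does not
# send it.

def build_steered_request(persona: str, user_prompt: str) -> dict:
    """Build a chat-completions-style payload whose system message fixes the AI's style and task."""
    return {
        "model": "gpt-4",
        "messages": [
            # The system message sets character and constraints; the model
            # is meant to stay within them for the rest of the conversation.
            {"role": "system", "content": persona},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_steered_request(
    "You are a Socratic tutor: never state the answer outright, "
    "only ask guiding questions.",
    "Why does the sky look blue?",
)
```

Because the system message sits outside the user's own input, it gives developers a lever for style and task that end users cannot easily override, which is also why OpenAI treats it as a key surface to harden against misuse.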
'Out of control' AI race: Elon Musk, top tech personalities call for a pause
Several of the most important personalities in tech are urging artificial intelligence labs to halt training of the most powerful AI systems for at least six months, citing "profound risks to society and humanity."
Elon Musk was among the hundreds of tech CEOs, educators, and researchers who signed the letter, which was released by the Future of Life Institute, a nonprofit Musk has backed, reports CNN.
The letter comes only two weeks after OpenAI launched GPT-4, a more powerful version of the technology that powers ChatGPT, the popular AI chatbot application.
In early testing and a company demo, the system was shown drafting lawsuits, passing standardized exams, and building a working website from a hand-drawn design, the report said.
According to the letter, the delay should apply to AI systems "more powerful than GPT-4." It also stated that the suggested pause should be used by impartial experts to collaboratively establish and execute a set of standard protocols for AI tools that are safe "beyond a reasonable doubt."
"Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources," the letter said. "Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control."
If a pause is not implemented quickly, the letter says, governments should step in and impose a moratorium.
Experts in artificial intelligence are growing worried about the potential for biased answers, the spread of disinformation, and the implications for consumer privacy. These technologies have also raised concerns about how AI might disrupt professions, allow students to cheat, and change humans' relationship with technology.
The letter hinted at a broader dissatisfaction, within and beyond the industry, with the fast pace of AI progress. Governments in China, the EU, and Singapore have already introduced early versions of AI governance frameworks.
ChatGPT maker releases tool to help teachers detect if AI wrote homework
The maker of ChatGPT is trying to curb its reputation as a freewheeling cheating machine with a new tool that can help teachers detect if a student or artificial intelligence wrote that homework.
The new AI Text Classifier launched Tuesday (January 31, 2023) by OpenAI follows a weeks-long discussion at schools and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.
OpenAI cautions that its new tool – like others already available – is not foolproof. The method for detecting AI-written text “is imperfect and it will be wrong sometimes,” said Jan Leike, head of OpenAI’s alignment team tasked to make its systems safer.
“Because of that, it shouldn’t be solely relied upon when making decisions,” Leike said.
Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched on November 30, 2022, as a free application on OpenAI’s website. And while many found ways to use it creatively and harmlessly, the ease with which it could answer take-home test questions and assist with other assignments sparked panic among some educators.
By the time schools opened for the new year, New York City, Los Angeles and other big public school districts began to block its use in classrooms and on school devices.
The Seattle Public Schools district initially blocked ChatGPT on all school devices in December but then opened access to educators who want to use it as a teaching tool, said Tim Robinson, the district spokesman.
“We can’t afford to ignore it,” Robinson said.
The district is also discussing possibly expanding the use of ChatGPT into classrooms to let teachers use it to train students to be better critical thinkers and to let students use the application as a “personal tutor” or to help generate new ideas when working on an assignment, Robinson said.
School districts around the country say they are seeing the conversation around ChatGPT evolve quickly.
“The initial reaction was ‘OMG, how are we going to stem the tide of all the cheating that will happen with ChatGPT,’” said Devin Page, a technology specialist with the Calvert County Public School District in Maryland. Now there is a growing realization that “this is the future” and blocking it is not the solution, he said.
“I think we would be naïve if we were not aware of the dangers this tool poses, but we also would fail to serve our students if we ban them and us from using it for all its potential power,” said Page, who thinks districts like his own will eventually unblock ChatGPT, especially once the company’s detection service is in place.
OpenAI emphasized the limitations of its detection tool in a blog post Tuesday, but said that in addition to deterring plagiarism, it could help to detect automated disinformation campaigns and other misuse of AI to mimic humans.
The longer a passage of text, the better the tool is at detecting whether an AI or a human wrote it. Type in any text, such as a college admissions essay or a literary analysis of Ralph Ellison’s “Invisible Man,” and the tool will label it on a five-point scale: “very unlikely,” “unlikely,” “unclear if it is,” “possibly,” or “likely” AI-generated.
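OpenAI has not published the score thresholds behind these labels, but conceptually the classifier maps an estimated probability of AI authorship onto the five-point scale. A purely hypothetical sketch, with all cutoffs invented for illustration:

```python
# Hypothetical illustration only: OpenAI has not disclosed the thresholds
# its AI Text Classifier uses. This sketch merely shows how an estimated
# AI-authorship probability could map onto the five labels listed above.

def classifier_label(p_ai: float) -> str:
    """Map an estimated probability of AI authorship to a five-point label (invented cutoffs)."""
    if p_ai < 0.10:
        return "very unlikely"
    elif p_ai < 0.45:
        return "unlikely"
    elif p_ai < 0.90:
        return "unclear if it is"
    elif p_ai < 0.98:
        return "possibly"
    else:
        return "likely"
```

Whatever the real cutoffs, a thresholded scale like this explains why the tool hedges: most scores fall in the middle bands rather than at the confident extremes.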
But much like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings yet often confidently spits out falsehoods or nonsense, the classifier offers little insight into how it arrives at a result.
“We don’t fundamentally know what kind of pattern it pays attention to, or how it works internally,” Leike said. “There’s really not much we could say at this point about how the classifier actually works.”
Higher education institutions around the world also have begun debating responsible use of AI technology. Sciences Po, one of France’s most prestigious universities, prohibited its use last week and warned that anyone found surreptitiously using ChatGPT and other AI tools to produce written or oral work could be banned from Sciences Po and other institutions.
In response to the backlash, OpenAI said it has been working for several weeks to craft new guidelines to help educators.
“Like many other technologies, it may be that one district decides that it’s inappropriate for use in their classrooms,” said OpenAI policy researcher Lama Ahmad. “We don’t really push them one way or another. We just want to give them the information that they need to be able to make the right decisions for them.”
It’s an unusually public role for the research-oriented San Francisco startup, now backed by billions of dollars in investment from its partner Microsoft and facing growing interest from the public and governments.
France’s digital economy minister Jean-Noël Barrot recently met in California with OpenAI executives, including CEO Sam Altman, and a week later told an audience at the World Economic Forum in Davos, Switzerland, that he was optimistic about the technology. But the minister, a former professor at the Massachusetts Institute of Technology and the French business school HEC in Paris, said there are also difficult ethical questions that will need to be addressed.
“So if you’re in the law faculty, there is room for concern because obviously ChatGPT, among other tools, will be able to deliver exams that are relatively impressive,” he said. “If you are in the economics faculty, then you’re fine because ChatGPT will have a hard time finding or delivering something that is expected when you are in a graduate-level economics faculty.”
He said it will be increasingly important for users to understand the basics of how these systems work so they know what biases might exist.