OpenAI
OpenAI board unanimously rejects Elon Musk's $97.4b proposal
OpenAI says its board of directors has unanimously rejected a $97.4 billion takeover bid by Elon Musk.
“OpenAI is not for sale, and the board has unanimously rejected Mr Musk’s latest attempt to disrupt his competition,” said a statement Friday from Bret Taylor, chair of OpenAI's board.
OpenAI attorney William Savitt in a letter to Musk's attorney Friday said the proposal “is not in the best interests of OAI’s mission and is rejected.”
Musk, an early OpenAI investor, began a legal offensive against the ChatGPT maker nearly a year ago, suing for breach of contract over what he said was the betrayal of its founding aims as a nonprofit.
OpenAI has increasingly sought to capitalize on the commercial success of generative AI. But the for-profit company is a subsidiary of a nonprofit entity that's bound to a mission — which Musk helped set — to safely build better-than-human AI for humanity's benefit. OpenAI is now seeking to more fully convert itself to a for-profit company, but would first have to buy out the nonprofit's assets.
Throwing a wrench in those plans, Musk and his own AI startup, xAI, and a group of investment firms announced a bid Monday to buy the nonprofit that controls OpenAI. Musk in a court filing Wednesday further detailed the proposal to acquire the nonprofit’s controlling stake.
Savitt's letter Friday said that court filing added “new material conditions to the proposal. As a result of that filing, it is now apparent that your clients’ much-publicized ‘bid’ is in fact not a bid at all.” In any event, “even as first presented,” the board has unanimously rejected it, Savitt said.
Musk has alleged in the lawsuit that OpenAI is violating the terms of his foundational contributions to the charity. Musk had invested about $45 million in the startup from its founding until 2018, his lawyer has said.
He escalated the legal dispute late last year, adding new claims and defendants, including OpenAI's business partner Microsoft, and asking for a court order that would halt OpenAI’s for-profit conversion. Musk also added xAI as a plaintiff, claiming that OpenAI was also unfairly stifling business competition. A judge is still considering Musk's request but expressed skepticism about some of his claims in a court hearing last week.
1 month ago
Alibaba unveils AI model, claims it surpasses DeepSeek, ChatGPT
Chinese tech behemoth Alibaba has introduced its Qwen 2.5-Max AI model, boldly asserting that it has outpaced DeepSeek’s renowned DeepSeek-V3 model, reports First Post.
Unveiled on the first day of the Lunar New Year, the launch of Qwen 2.5-Max highlights the intensifying competition within China’s AI sector, reflecting the pressure DeepSeek’s rapid ascent has exerted not only on global competitors but also on domestic ones.
Alibaba's cloud division revealed on WeChat that Qwen 2.5-Max outperformed OpenAI’s GPT-4o, DeepSeek-V3, and Meta’s Llama-3.1-405B across various performance metrics. The timing of this announcement, during the Lunar New Year festivities, underscores the urgency felt by Chinese firms to maintain competitiveness against DeepSeek, which has made waves in the AI market since its January debut.
DeepSeek’s Market Disruption
DeepSeek’s sudden success, beginning with the launch of its AI assistant powered by the DeepSeek-V3 model on January 10 and followed by the R1 model on January 20, has disrupted the tech industry. The Chinese startup’s cost-effective approach to developing powerful AI has raised concerns in Silicon Valley, particularly as investors question the high development costs associated with leading US companies. In response, Chinese competitors are racing to enhance their models.
ByteDance, for example, updated its flagship AI model shortly after DeepSeek’s R1 release, claiming it surpassed OpenAI’s o1 on the AIME benchmark, a test of mathematical reasoning drawn from the American Invitational Mathematics Examination. This move highlights how DeepSeek’s swift rise has spurred action among domestic firms, with Alibaba’s latest release being a response to DeepSeek’s innovations.
The Emergence of a New Competitor: Kimi k1.5 from Moonshot
Complicating the race further is Moonshot AI’s new Kimi k1.5 model, which launched just days after DeepSeek’s R1. Kimi k1.5 is being regarded as a direct rival to both DeepSeek’s models and OpenAI’s o1, with reports suggesting it outperforms both on key benchmarks. Unlike DeepSeek-R1, which lacks multimodal capabilities, Kimi k1.5 is a multimodal model capable of processing and reasoning across text, images, and code, giving it a substantial advantage for tasks requiring both visual and textual data.
Kimi k1.5 has also been developed at a fraction of the cost compared to similar cutting-edge AI models in the US, positioning Moonshot AI as a growing force in the global AI arena. Its advanced reinforcement learning techniques further enhance its versatility, making it highly adaptable to a range of applications.
Shifting Dynamics in China’s AI Industry
China’s rising influence in AI is becoming increasingly apparent, as companies like DeepSeek, Alibaba, and Moonshot AI challenge the longstanding dominance of US tech giants. The launch of DeepSeek-V2 last May ignited an AI price war in China, prompting Alibaba’s cloud division to reduce prices by up to 97%. This price-cutting strategy has since become common practice among Chinese firms, including Baidu and Tencent, as they strive to develop AI models that can compete with OpenAI and other US-based giants.
DeepSeek, led by Liang Wenfeng, has taken a distinct approach, operating more like a research lab with a lean team of graduates and PhD students. Liang’s vision of achieving artificial general intelligence (AGI) with significantly lower overhead than larger tech companies contrasts with the more costly, hierarchical models of Alibaba and other Chinese tech giants.
As China’s AI sector continues to evolve at a rapid pace, its influence on the global market grows. The competition between DeepSeek, Moonshot AI, and Alibaba marks a crucial shift in the AI landscape, with these startups and tech giants pushing the boundaries of AI development. The race for AI supremacy is underway, and China is leading the way.
1 month ago
ChatGPT faces outage, users worldwide report problems
OpenAI’s generative AI chatbot, ChatGPT, is currently undergoing significant disruptions, preventing users from engaging in chats or accessing their previous conversations, reports NDTV.
Nearly 4,000 users had reported problems on the outage-tracking site Downdetector, the report said.
The outages seem to be affecting not only ChatGPT but also other OpenAI services, leading to speculation that the GPT-4o and GPT-4o mini models are down, causing the wider disruptions. Users have been sharing their experiences and reactions on platforms like X.com and Instagram.
In a recent podcast appearance, OpenAI CEO Sam Altman shared his vision of a future where AI surpasses human intellect, saying this shift will feel like a natural part of life for future generations.
"My kid is never going to grow up being smarter than AI," Altman commented on the Re: Thinking podcast with Adam Grant, acknowledging that AI will outperform humans in many areas. "Of course, it's smarter than us. Of course, it can do things we can't, but also who really cares?" he added.
1 month ago
Top Free Prompt Engineering Courses Online in 2025
Prompt engineering has become a critical skill in the age of artificial intelligence (AI), empowering users to create clear and effective instructions for generating accurate outputs. To meet growing demand, several online platforms are offering free online prompt engineering courses that teach you how to unlock the full potential of AI tools.
Best Free Online Prompt Engineering Courses
ChatGPT for Everyone by OpenAI and Learn Prompting
This 1-hour beginner-level course, led by Sander Schulhoff and Shyamal Anadkat, introduces the fundamentals of ChatGPT and generative AI. It explains how ChatGPT works, its diverse applications, and techniques for crafting effective prompts. The syllabus covers using ChatGPT as a personal assistant, enhancing productivity, and creating content.
Participants will learn prompt-writing strategies, including role assignment, while also gaining an understanding of ethical considerations and ChatGPT's limitations. Real-world case studies further enhance the learning experience. The course is free, self-paced, and offers a certificate of completion through Learn Prompting Plus to showcase your skills.
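In practice, role assignment usually amounts to pinning the model's persona in a system message ahead of the user's request. Below is a minimal sketch using the OpenAI Python library; the model name, tutor persona, and prompt wording are illustrative assumptions, not taken from the course:

```python
# A minimal role-assignment sketch using the OpenAI Python library (pip install openai).
# The model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model works here
    messages=[
        # The system message carries the assigned role.
        {"role": "system", "content": "You are a patient high-school physics tutor."},
        {"role": "user", "content": "Explain Newton's second law with one everyday example."},
    ],
)
print(response.choices[0].message.content)
```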
Read more: 10 Best Free AI Image Generators in 2025
Course Link: https://learnprompting.org/courses/chatgpt-for-everyone
Free Prompt Engineering Course by Simplilearn
This 1-hour beginner-level course offers a free and comprehensive introduction to AI, NLP, and prompt engineering. It covers the fundamentals of AI and NLP, the concept and applications of prompt engineering, types of prompts, and techniques for creating effective and engaging prompts.
Taught by industry experts, the course combines theory with practical examples, real-world case studies, and hands-on exercises to enhance learning. Upon completion, participants receive a certificate, which can be shared on LinkedIn. Perfect for AIML engineers, chatbot developers, and data scientists, this course equips learners to design and optimize prompts for conversational AI systems.
Read more: 12 Most In-Demand Tech Skills for 2025: Stay Ahead in the Job Market
Course Link: https://www.simplilearn.com/prompt-engineering-free-course-skillup
Prompt Engineering for Everyone by IBM
Led by Antonio Cangiano, IBM’s AI specialist, this 5-hour beginner-level course offers a comprehensive introduction to prompt engineering. It uses notes, audio recordings, and hands-on labs to teach the art of crafting compelling prompts. The course covers foundational techniques, such as Persona and Interview Patterns, and advanced approaches like Chain-of-Thought and Tree-of-Thought prompting.
Learners will also explore bias mitigation, verbosity control, and IBM's Watsonx Prompt Lab. An optional final project allows participants to apply their knowledge. This free course, offering a certificate, is perfect for professionals aiming to revolutionize their interactions with AI systems.
Course Link: https://community.ibm.com/community/user/watsonx/blogs/nickolus-plowden/2023/10/15/learn-to-build-with-ai-series
Read more: 10 Best Free AI Infographic Generators for 2025: Transform Ideas into Stunning Visuals
Essentials of Prompt Engineering by AWS on Coursera
This 1-hour beginner-level course by Amazon Web Services introduces the foundational concepts of prompt engineering. It covers crafting, refining, and optimizing prompts, with techniques such as zero-shot, few-shot, and chain-of-thought prompting. Participants will also learn to identify and mitigate potential risks in prompt engineering.
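The three techniques named above differ mainly in how much scaffolding the prompt itself carries; the sketch below contrasts their typical shapes (task wording invented for illustration, not course material):

```python
# Illustrative prompt shapes for zero-shot, few-shot, and chain-of-thought prompting.
# The tasks and example reviews are invented for demonstration.

# Zero-shot: the bare task, no examples.
zero_shot = (
    "Classify the sentiment of this review as positive or negative: "
    "'The battery died in an hour.'"
)

# Few-shot: a handful of labeled examples precede the real input.
few_shot = """Classify the sentiment of each review as positive or negative.
Review: 'Great screen, fast shipping.' -> positive
Review: 'Stopped working after a week.' -> negative
Review: 'The battery died in an hour.' ->"""

# Chain-of-thought: explicitly ask for step-by-step reasoning before the answer.
chain_of_thought = (
    "A phone costs $480 after a 20% discount. What was the original price? "
    "Reason step by step, then state the final answer."
)
```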
A hands-on assignment allows learners to apply the skills acquired. Offered via a free trial with an optional $49/month subscription, the course includes a certificate upon completion. Updated in July 2024, this course is ideal for those interested in AI/ML and generative AI, providing in-demand skills for a competitive edge.
Course Link: https://www.coursera.org/learn/essentials-of-prompt-engineering
Advanced Prompt Engineering by Learn Prompting
Designed for intermediate to advanced learners, this 1-week course led by Sander Schulhoff provides in-depth training on advanced prompt engineering techniques. It explores concepts like in-context learning, chain-of-thought (CoT) prompting, problem decomposition, and self-criticism methods to craft effective prompts for complex AI applications.
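Self-criticism, one of the methods listed, is often implemented as a second pass in which the model critiques and then revises its own draft. A minimal sketch, assuming the OpenAI Python library and an illustrative model name:

```python
# A minimal self-criticism loop: draft, critique, revise.
# Model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

draft = ask("Draft a three-sentence summary of why decomposing a problem into steps helps an AI model.")
critique = ask(f"List any vague or inaccurate claims in this summary:\n{draft}")
final = ask(f"Rewrite the summary to address the critique.\nSummary:\n{draft}\nCritique:\n{critique}")
print(final)
```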
Read more: How to Detect an AI-generated Image
Learners will enhance their understanding of AI tools like ChatGPT, DALL·E 3, GPT-3.5, and GPT-4. Taught by a renowned AI expert, the course combines theory with practical strategies, offering a certificate upon completion. It is available with a free trial; full access to all paid courses costs $39/month via Learn Prompting Plus.
Course Link: https://learnprompting.org/courses/advanced-prompt-engineering
Prompt Engineering Specialization by Vanderbilt University on Coursera
Led by Dr. Jules White, this beginner- to intermediate-level specialization spans 1 month (10 hours/week) and teaches participants to use generative AI for automation, productivity, and intelligence augmentation. The course includes three modules: composing queries for ChatGPT, advanced data analysis, and trusted generative AI.
Participants will gain hands-on experience in crafting prompts, automating tasks, and applying AI tools to real-world scenarios like social media content creation, data visualization from Excel, and PDF information extraction. The course is free with a trial and offers a certificate from Vanderbilt University upon completion.
Read more: Best Text-to-Speech Software
Course Link: https://www.coursera.org/specializations/prompt-engineering
Prompt Engineering and Advanced ChatGPT on edX
The Advanced ChatGPT course is an intermediate-level program designed to teach advanced techniques for using ChatGPT effectively. Spanning one week with 1-2 hours of learning per week, the course covers critical areas such as advanced prompting methods to generate accurate and engaging responses.
Learners explore how ChatGPT can be applied across various industries like healthcare, finance, education, and customer service. The course also addresses the integration of ChatGPT with tools like NLP and ML for developing sophisticated chatbot applications. Additionally, it discusses ChatGPT's limitations and how to mitigate them to build more robust applications. This self-paced course is free with limited access, but a certificate can be earned for $40.
Course Link: https://www.edx.org/learn/computer-programming/edx-advanced-chatgpt
Read more: Free Online AI Courses by Harvard University from Basic to Advanced Levels
Takeaways
These free online prompt engineering courses offer excellent opportunities to master AI tools like ChatGPT and enhance your skills in crafting effective prompts. With courses catering to different levels, from beginners to advanced learners, they provide valuable insights, hands-on exercises, and certification options to help you excel in AI applications and improve productivity in various industries.
2 months ago
Ex-OpenAI engineer who voiced legal concerns about technology dies
Suchir Balaji, a former OpenAI engineer and whistleblower who helped train the artificial intelligence systems behind ChatGPT and later said he believed those practices violated copyright law, has died, according to his parents and San Francisco officials. He was 26.
Balaji worked at OpenAI for nearly four years before quitting in August. He was well-regarded by colleagues at the San Francisco company, where a co-founder this week called him one of OpenAI's strongest contributors who was essential to developing some of its products.
“We are devastated to learn of this incredibly sad news and our hearts go out to Suchir’s loved ones during this difficult time,” said a statement from OpenAI.
Balaji was found dead in his San Francisco apartment on Nov. 26 in what police said “appeared to be a suicide. No evidence of foul play was found during the initial investigation.” The city's chief medical examiner's office confirmed the manner of death to be suicide.
His parents, Poornima Ramarao and Balaji Ramamurthy, said they are still seeking answers, describing their son as a “happy, smart and brave young man” who loved to hike and recently returned from a trip with friends.
Balaji grew up in the San Francisco Bay Area and first arrived at the fledgling AI research lab for a 2018 summer internship while studying computer science at the University of California, Berkeley. He returned a few years later to work at OpenAI, where one of his first projects, called WebGPT, helped pave the way for ChatGPT.
“Suchir’s contributions to this project were essential, and it wouldn’t have succeeded without him,” said OpenAI co-founder John Schulman in a social media post memorializing Balaji. Schulman, who recruited Balaji to his team, said what made him such an exceptional engineer and scientist was his attention to detail and ability to notice subtle bugs or logical errors.
“He had a knack for finding simple solutions and writing elegant code that worked,” Schulman wrote. “He’d think through the details of things carefully and rigorously.”
Balaji later shifted to organizing the huge datasets of online writings and other media used to train GPT-4, the fourth generation of OpenAI's flagship large language model and a basis for the company's famous chatbot. It was that work that eventually caused Balaji to question the technology he helped build, especially after newspapers, novelists and others began suing OpenAI and other AI companies for copyright infringement.
He first raised his concerns with The New York Times, which reported them in an October profile of Balaji.
He later told The Associated Press he would “try to testify” in the strongest copyright infringement cases and considered a lawsuit brought by The New York Times last year to be the “most serious.” Times lawyers named him in a Nov. 18 court filing as someone who might have “unique and relevant documents” supporting allegations of OpenAI's willful copyright infringement.
His records were also sought by lawyers in a separate case brought by book authors including the comedian Sarah Silverman, according to a court filing.
“It doesn’t feel right to be training on people’s data and then competing with them in the marketplace,” Balaji told the AP in late October. “I don’t think you should be able to do that. I don’t think you are able to do that legally.”
He told the AP that he gradually grew more disillusioned with OpenAI, especially after the internal turmoil that led its board of directors to fire and then rehire CEO Sam Altman last year. Balaji said he was broadly concerned about how its commercial products were rolling out, including their propensity for spouting false information known as hallucinations.
But of the “bag of issues” he was concerned about, he said he was focusing on copyright as the one it was “actually possible to do something about.”
He acknowledged that it was an unpopular opinion within the AI research community, which is accustomed to pulling data from the internet, but said “they will have to change and it’s a matter of time.”
He had not been deposed and it’s unclear to what extent his revelations will be admitted as evidence in any legal cases after his death. He also published a personal blog post with his opinions about the topic.
Schulman, who resigned from OpenAI in August, said he and Balaji coincidentally left on the same day and celebrated with fellow colleagues that night with dinner and drinks at a San Francisco bar. Another of Balaji’s mentors, co-founder and chief scientist Ilya Sutskever, had left OpenAI several months earlier, which Balaji saw as another impetus to leave.
Schulman said Balaji had told him earlier this year of his plans to leave OpenAI and that Balaji didn't think that better-than-human AI known as artificial general intelligence “was right around the corner, like the rest of the company seemed to believe.” The younger engineer expressed interest in getting a doctorate and exploring “some more off-the-beaten path ideas about how to build intelligence,” Schulman said.
Balaji's family said a memorial is being planned for later this month at the India Community Center in Milpitas, California, not far from his hometown of Cupertino.
2 months ago
Italy fines OpenAI for ChatGPT data privacy violations
Italy’s data protection watchdog said Friday it has fined OpenAI 15 million euros ($15.6 million) after wrapping up a probe into the collection of personal data by the U.S. artificial intelligence company's popular chatbot ChatGPT.
The country’s privacy watchdog, known as Garante, said its investigation showed that OpenAI processed users’ personal data to train ChatGPT “without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users”.
OpenAI dubbed the decision “disproportionate” and said it will appeal.
“When the Garante ordered us to stop offering ChatGPT in Italy in 2023, we worked with them to reinstate it a month later,” an OpenAI spokesperson said Friday in an emailed statement. “They’ve since recognized our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period.”
OpenAI added, however, it remained “committed to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights.”
The investigation, launched last year, also found that OpenAI didn’t provide an “adequate age verification system” to prevent users under 13 years of age from being exposed to inappropriate AI-generated content, the watchdog said.
The Italian authority also ordered OpenAI to launch a six-month campaign on different Italian media to raise public awareness about ChatGPT, specifically in regard to data collection.
The booming popularity of generative artificial intelligence systems like ChatGPT has drawn scrutiny from regulators on both sides of the Atlantic.
Regulators in the U.S. and Europe have been examining OpenAI and other companies that have played a key part in the AI boom, while governments around the world have been drawing up rules to protect against risks posed by AI systems, led by the European Union's AI Act, a comprehensive rulebook for artificial intelligence.
2 months ago
OpenAI’s Whisper invents speech, creating phrases no one said
Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”
But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.
Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.
More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”
The full extent of the problem is difficult to discern, but researchers and engineers said they frequently have come across Whisper’s hallucinations in their work. A University of Michigan researcher conducting a study of public meetings, for example, said he found hallucinations in 8 out of every 10 audio transcriptions he inspected, before he started trying to improve the model.
A machine learning engineer said he initially discovered hallucinations in about half of the over 100 hours of Whisper transcriptions he analyzed. A third developer said he found hallucinations in nearly every one of the 26,000 transcripts he created with Whisper.
The problems persist even in well-recorded, short audio samples. A recent study by computer scientists uncovered 187 hallucinations in more than 13,000 clear audio snippets they examined.
That trend would lead to tens of thousands of faulty transcriptions over millions of recordings, researchers said.
Such mistakes could have “really grave consequences,” particularly in hospital settings, said Alondra Nelson, who led the White House Office of Science and Technology Policy for the Biden administration until last year.
“Nobody wants a misdiagnosis,” said Nelson, a professor at the Institute for Advanced Study in Princeton, New Jersey. “There should be a higher bar.”
Whisper also is used to create closed captioning for the Deaf and hard of hearing — a population at particular risk for faulty transcriptions. That's because the Deaf and hard of hearing have no way of identifying fabrications that are “hidden amongst all this other text,” said Christian Vogler, who is deaf and directs Gallaudet University’s Technology Access Program.
OpenAI urged to address problem
The prevalence of such hallucinations has led experts, advocates and former OpenAI employees to call for the federal government to consider AI regulations. At minimum, they said, OpenAI needs to address the flaw.
“This seems solvable if the company is willing to prioritize it,” said William Saunders, a San Francisco-based research engineer who quit OpenAI in February over concerns with the company's direction. “It’s problematic if you put this out there and people are overconfident about what it can do and integrate it into all these other systems.”
An OpenAI spokesperson said the company continually studies how to reduce hallucinations and appreciated the researchers' findings, adding that OpenAI incorporates feedback in model updates.
While most developers assume that transcription tools misspell words or make other errors, engineers and researchers said they had never seen another AI-powered transcription tool hallucinate as much as Whisper.
Whisper hallucinations
The tool is integrated into some versions of OpenAI’s flagship chatbot ChatGPT, and is a built-in offering in Oracle and Microsoft’s cloud computing platforms, which service thousands of companies worldwide. It is also used to transcribe and translate text into multiple languages.
In the last month alone, one recent version of Whisper was downloaded over 4.2 million times from open-source AI platform HuggingFace. Sanchit Gandhi, a machine-learning engineer there, said Whisper is the most popular open-source speech recognition model and is built into everything from call centers to voice assistants.
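For context, transcribing audio with the open-source model takes only a few lines; this minimal sketch assumes the openai-whisper Python package, with the model size and file name chosen for illustration:

```python
# A minimal transcription sketch with the open-source openai-whisper package
# (pip install openai-whisper); model size and file name are illustrative.
import whisper

model = whisper.load_model("base")        # downloads weights on first use
result = model.transcribe("meeting.mp3")  # returns a dict with "text" and "segments"
print(result["text"])

# Whisper tends to hallucinate during pauses or background noise, so flagging
# segments the model itself considers likely non-speech is a prudent check.
for seg in result["segments"]:
    if seg["no_speech_prob"] > 0.5:  # threshold is an assumption, not an official guideline
        print("Review this segment:", seg["text"])
```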
Professors Allison Koenecke of Cornell University and Mona Sloane of the University of Virginia examined thousands of short snippets they obtained from TalkBank, a research repository hosted at Carnegie Mellon University. They determined that nearly 40% of the hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.
In an example they uncovered, a speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.”
But the transcription software added: “He took a big piece of a cross, a teeny, small piece ... I’m sure he didn’t have a terror knife so he killed a number of people.”
A speaker in another recording described “two other girls and one lady.” Whisper invented extra commentary on race, adding "two other girls and one lady, um, which were Black.”
In a third transcription, Whisper invented a non-existent medication called “hyperactivated antibiotics.”
Researchers aren’t certain why Whisper and similar tools hallucinate, but software developers said the fabrications tend to occur amid pauses, background sounds or music playing.
OpenAI recommended in its online disclosures against using Whisper in “decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes.”
Transcribing doctor appointments
That warning hasn’t stopped hospitals or medical centers from using speech-to-text models, including Whisper, to transcribe what’s said during doctor’s visits, freeing up medical providers to spend less time on note-taking or report writing.
Over 30,000 clinicians and 40 health systems, including the Mankato Clinic in Minnesota and Children’s Hospital Los Angeles, have started using a Whisper-based tool built by Nabla, which has offices in France and the U.S.
That tool was fine tuned on medical language to transcribe and summarize patients’ interactions, said Nabla’s chief technology officer Martin Raison.
Company officials said they are aware that Whisper can hallucinate and are mitigating the problem.
It’s impossible to compare Nabla’s AI-generated transcript to the original recording because Nabla’s tool erases the original audio for “data safety reasons,” Raison said.
Nabla said the tool has been used to transcribe an estimated 7 million medical visits.
Saunders, the former OpenAI engineer, said erasing the original audio could be worrisome if transcripts aren't double checked or clinicians can't access the recording to verify they are correct.
“You can't catch errors if you take away the ground truth,” he said.
Nabla said that no model is perfect, and that theirs currently requires medical providers to quickly edit and approve transcribed notes, but that could change.
Privacy concerns
Because patient meetings with their doctors are confidential, it is hard to know how AI-generated transcripts are affecting them.
A California state lawmaker, Rebecca Bauer-Kahan, said she took one of her children to the doctor earlier this year, and refused to sign a form the health network provided that sought her permission to share the consultation audio with vendors that included Microsoft Azure, the cloud computing system run by OpenAI’s largest investor. Bauer-Kahan didn't want such intimate medical conversations being shared with tech companies, she said.
“The release was very specific that for-profit companies would have the right to have this,” said Bauer-Kahan, a Democrat who represents part of the San Francisco suburbs in the state Assembly. “I was like ‘absolutely not.’”
Ben Drew, a spokesman for John Muir Health, the network that provided the form, said the health system complies with state and federal privacy laws.
4 months ago
OpenAI ready to launch Orion AI Model by Dec 2024
OpenAI, the developer of the widely-used ChatGPT platform, has revealed plans to launch its latest AI model, codenamed Orion, by the end of 2024.
According to a report from The Verge, the company aims to initially make the model available exclusively to select business partners.
Following the launch of OpenAI o1, Orion is expected to be a significant step forward in artificial intelligence, building upon the advancements of previous models.
Orion promises enhancements in reasoning, problem-solving, and language processing, addressing key challenges like AI hallucinations through advanced synthetic data techniques. While OpenAI’s new model is internally viewed as the successor to GPT-4, there has been no confirmation on whether it will be labelled as GPT-5 upon release.
In keeping with its phased rollout strategy, OpenAI will initially provide Orion to its close business partners rather than releasing it broadly via ChatGPT. This limited-access approach will enable these partners to develop specialised products and features using the cutting-edge platform before a broader public release.
Microsoft Collaboration on Azure
Microsoft, OpenAI’s primary partner in AI model deployment, is expected to host Orion on its Azure platform as early as November. Microsoft engineers have reportedly been preparing for the rollout, which is anticipated to cater to industries where accuracy and reliability are paramount, such as healthcare and finance.
This strategic collaboration allows OpenAI to strengthen its presence in the rapidly advancing AI sector, competing with other tech giants like Google DeepMind and Meta.
OpenAI has been developing Orion for several months, utilising synthetic data generated by the recently launched OpenAI o1—an advanced model designed to approach human-like AI capabilities.
OpenAI o1 has demonstrated substantial improvements in handling complex, multistep challenges and generating code. Notably, the model is said to perform at a level similar to PhD students in benchmark tasks within the fields of physics, chemistry, and biology.
As OpenAI continues to evolve its AI offerings, the introduction of Orion aims to further push boundaries in artificial intelligence applications across various industries. Although the launch date remains tentative, with the potential for adjustments, Orion’s release is set to mark a major milestone in AI development, reflecting OpenAI’s ambitions to lead the AI landscape amid growing competition.
4 months ago
OpenAI Unveils 'Swarm': A flexible AI-driven framework for multi-agent research
OpenAI has quietly launched Swarm, a new experimental framework designed to advance the collaboration and interaction of multiple AI agents.
This innovative initiative offers developers a comprehensive toolkit to create AI systems that can operate autonomously, performing complex tasks with minimal human intervention.
Despite the low-key release, the introduction of Swarm has significant implications for the future of AI.
OpenAI positions Swarm as a research and educational experiment, similar to the early days of ChatGPT when it was released in 2022.
The framework is now available on GitHub, enabling developers to explore its potential for building multi-agent AI systems.
A Glimpse into the Future of AI Collaboration
Swarm provides an insight into a future where AI systems can autonomously collaborate across different tasks and sources of information.
The framework allows developers to create AI agents that can work together in networks, tackling sophisticated tasks. These agents can potentially perform activities across multiple websites or even act on behalf of users in real-world situations.
OpenAI emphasises that Swarm is not a commercial product but rather a "cookbook" for experimental code.
According to Shyamal Anadkat, an OpenAI researcher, "Swarm is not an official OpenAI product. Think of it more like a cookbook—experimental code for building simple agents. It’s not intended for production use and won’t be maintained."
How Swarm Works: Agents and Handoffs
At the heart of Swarm lies its focus on two key components: Agents and Handoffs. Agents in the Swarm system are AI entities equipped with specific instructions and tools, enabling them to autonomously perform tasks.
When needed, these agents can "hand off" their responsibilities to other agents, facilitating smooth task delegation.
This design allows for the breakdown of complex tasks into smaller, manageable steps, distributed among multiple agents. For example, an agent might retrieve and process data, then hand off the task of data transformation to another agent. This flexibility makes Swarm particularly useful for workflows and operations requiring intricate, multi-step processes.
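A minimal sketch of the agents-and-handoffs pattern, modeled on the examples in the openai/swarm GitHub repository; the agent names, instructions, and user request are illustrative, and the details of this experimental API may change:

```python
# Agents-and-handoffs sketch modeled on openai/swarm's README examples.
# Agent names, instructions, and the user request are illustrative.
from swarm import Swarm, Agent

client = Swarm()

data_agent = Agent(
    name="Data Agent",
    instructions="Retrieve the requested data and summarize it plainly.",
)

def transfer_to_data_agent():
    """A handoff is simply a function that returns another Agent."""
    return data_agent

triage_agent = Agent(
    name="Triage Agent",
    instructions="Decide which specialist agent should handle the request.",
    functions=[transfer_to_data_agent],
)

response = client.run(
    agent=triage_agent,
    messages=[{"role": "user", "content": "Summarize last week's sales figures."}],
)
print(response.messages[-1]["content"])
```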
Concerns Over Jobs
While Swarm offers exciting possibilities, it has also sparked concerns over its potential impact on the job market and the risks of autonomous AI.
One of the primary concerns is job displacement. With AI systems like Swarm becoming more autonomous and efficient, some fear that human roles, particularly in white-collar jobs, could be replaced by automated networks of AI agents.
Others argue that rather than eliminating jobs, such technologies may lead to job reshaping, where human workers collaborate with AI systems.
Security risks and biases in AI-driven decisions are also major points of concern. Autonomous systems operating without human oversight could malfunction or make biased decisions, potentially posing serious security threats.
OpenAI is aware of these risks and encourages developers to use custom evaluation tools to assess the performance of their agents thoroughly. The company underscores the need for responsible AI development as the conversation around balancing innovation with ethical concerns continues.
A Research Tool with Far-Reaching Potential
Though Swarm is experimental, its release marks a significant step in the development of multi-agent AI systems.
As developers explore its capabilities, the framework is expected to play an important role in shaping the future of AI, particularly in terms of collaboration and autonomy.
For now, OpenAI's Swarm stands as a powerful research tool, offering a glimpse into what AI could achieve while also highlighting the importance of careful oversight and responsible innovation.
Source: Agencies
5 months ago
ChatGPT being used to influence US elections, alleges OpenAI
OpenAI has disclosed alarming instances of its artificial intelligence models, including ChatGPT, being misused by cybercriminals to create fake content aimed at influencing US elections.
The findings underscore the growing challenge AI poses to cybersecurity and election integrity, raising fresh concerns about the role of emerging technologies in shaping democratic processes.
The report, revealed on Wednesday, details how AI tools like ChatGPT have been exploited to generate persuasive, coherent text at an unprecedented scale.
Cybercriminals have used the technology to craft fake news articles, social media posts, and even fraudulent campaign materials intended to mislead voters.
These AI-generated messages are often sophisticated enough to mimic the style of legitimate news outlets, making it increasingly difficult for the average citizen to discern truth from fabrication.
One of the most concerning trends highlighted in the report is the ability of malicious actors to tailor disinformation campaigns to specific demographics. By leveraging data mining techniques, cybercriminals can analyse voter behaviour and preferences, creating targeted messages that resonate with particular audiences.
This level of personalisation enhances the impact of disinformation, allowing bad actors to exploit existing political divisions and amplify societal discord.
AI-Driven ‘Disinformation’
The US Department of Homeland Security has also raised concerns about the potential for foreign interference in the upcoming November elections.
According to US authorities, Russia, Iran, and China are reportedly using AI to spread divisive and fake information, posing a significant threat to election integrity.
These countries have allegedly employed artificial intelligence to generate disinformation aimed at manipulating public opinion and undermining trust in the democratic process.
The report from OpenAI indicates that the company has thwarted over 20 attempts to misuse ChatGPT for influence operations this year alone.
In August, several accounts were blocked for generating election-related articles, while in July, accounts from Rwanda were banned for producing social media comments intended to influence that country's elections. Although these attempts have so far failed to gain significant traction or achieve viral spread, OpenAI emphasises the need for vigilance, as the technology continues to evolve.
Challenges
The speed at which AI can produce content poses significant challenges for traditional fact-checking and response mechanisms, which struggle to keep pace with the flood of false information.
This dynamic creates an environment where voters are bombarded with conflicting narratives, complicating their decision-making processes and potentially eroding trust in democratic institutions.
OpenAI’s findings also highlight the potential for AI to be used in automated social media campaigns. The ability to rapidly generate content allows bad actors to skew public perception and influence voter sentiment in real time, particularly during critical moments in the run-up to elections.
Despite the limited success of these operations to date, the potential for AI-driven disinformation to disrupt elections remains a serious concern.
Greater Vigilance
In response to these developments, OpenAI has called for increased collaboration between technology companies, governments, and civil society to address the misuse of AI in influence operations.
The company is also enhancing its own monitoring and enforcement mechanisms to detect and prevent the misuse of its models for generating fake or harmful content.
As artificial intelligence continues to reshape the information landscape, OpenAI’s report serves as a stark reminder of the need to balance technological innovation with robust safeguards.
The stakes are high, and the ability to maintain the integrity of democratic processes in the age of AI will require coordinated efforts and proactive strategies from all stakeholders involved.
5 months ago