OpenAI pauses operations for a week amid Meta’s billion-dollar talent battle
In a move sending shockwaves through Silicon Valley, OpenAI is shutting down operations for an entire week. Officially, the company cites employee burnout as the reason. However, the timing raises serious questions, especially as Meta aggressively courts OpenAI’s top talent with eye-popping offers. To many, the break feels less like a wellness initiative and more like a defensive response in the intensifying battle for AI expertise.
Why is OpenAI shutting down?

According to OpenAI, the week-long pause is intended to help employees recover after enduring relentless, months-long stretches of 80-hour work weeks. The decision comes amid mounting internal concerns over burnout, fatigue, and declining morale across teams. Yet, the timing of the break coincides with Meta's aggressive efforts to poach OpenAI staff, leading many to suspect the shutdown is as much about damage control as it is about employee well-being.
Meta’s aggressive talent poaching

Meta is making no secret of its recruitment ambitions. Reports suggest the company is offering signing bonuses as high as $100 million to attract leading AI researchers and engineers, particularly those trained at OpenAI. Several former OpenAI employees have already migrated to Meta’s FAIR division and its newly revitalized AGI research teams. With OpenAI staff grappling with exhaustion and feelings of being undervalued, Meta's lucrative offers are proving hard to resist — and Meta is fully aware of the opportunity.
Internal response at OpenAI

Leaked internal memos from OpenAI's Chief Research Officer Mark Chen and CEO Sam Altman reveal the company’s growing unease. Chen admitted to heightened anxiety within teams and encouraged staff to "reconnect with the mission." Meanwhile, Altman has reportedly pledged to revamp compensation packages, improve internal recognition, and called for unity to resist external recruitment pressures. However, many insiders feel these promises have come too late, and Meta's offers are simply too enticing.
Risks and growing fears

There is widespread concern that Meta will use OpenAI's shutdown week to accelerate its recruitment efforts, potentially blindsiding the company. While OpenAI’s technical teams are expected to rest, Meta's recruiters remain active. Only OpenAI’s executive leadership will continue working during the break — a clear sign that management views the situation as more than a routine wellness measure.
Broader implications for OpenAI and the AI industry

This shutdown exposes two escalating issues: the unsustainable working conditions at AI labs racing toward Artificial General Intelligence (AGI), and the fierce competition for elite talent. For OpenAI, the pause marks both a moment of vulnerability and a critical cultural inflection point. How the company navigates this period could not only determine its future but also influence the broader trajectory of the AI industry itself.
With inputs from Hindustan Times
4 months ago
How ChatGPT and other AI tools are changing the teaching profession
Ana Sepúlveda, a math teacher in Dallas, wanted to make geometry exciting for her 6th grade honors class. Knowing her students are passionate about soccer, she decided to connect the subject to the sport. To help, she turned to ChatGPT.
Within seconds, the AI provided a detailed five-page lesson plan, complete with a theme: “Geometry is everywhere in soccer — on the field, in the ball, and even in stadium designs!” The plan explained how shapes and angles are used in the game, suggested discussion questions like “Why are these shapes important to soccer?” and proposed a project for students to create their own soccer field or stadium using rulers and protractors.
“AI has completely changed the way I work,” said Sepúlveda, who teaches at a bilingual school and uses ChatGPT to translate materials into Spanish. “It’s helping me plan lessons, communicate with parents, and keep students more engaged.”
Teachers nationwide are increasingly using AI tools to create quizzes, lesson plans, and worksheets, assist with grading, and reduce administrative work. Many say this technology allows them to focus more on teaching.
A Gallup and Walton Family Foundation survey released Wednesday found that 6 in 10 K-12 public school teachers in the U.S. used AI tools during the last academic year. The survey, conducted in April with over 2,000 teachers, showed AI use is most common among high school educators and those early in their careers.
According to Gallup research consultant Andrea Malek Ash, teachers who use AI weekly reported saving an average of six hours per week, suggesting AI could help reduce teacher burnout.
States are issuing guidelines for using AI tools in classrooms
As concerns grow over students misusing AI tools, many schools are introducing guidelines and providing training to ensure teachers use the technology responsibly and avoid shortcuts that could negatively impact student learning.
Currently, around two dozen U.S. states have issued AI-related guidance for schools, but how consistently these rules are applied across classrooms varies, according to Maya Israel, an associate professor of educational technology and computer science education at the University of Florida.
“We need to make sure AI doesn’t replace a teacher’s professional judgment,” Israel emphasized. She added that while AI can be useful for basic tasks like grading multiple-choice tests, it struggles with more complex assessments requiring nuance. Students should also have a way to report unfair or inaccurate grading, with the final grading decision left to the teacher.
AI tools are already saving time for many educators. Roughly 8 in 10 teachers who use AI say it helps reduce workload by assisting with tasks such as creating worksheets, quizzes, or handling administrative duties. About 6 in 10 report that AI has improved the quality of their work, particularly in adapting materials for students or providing feedback.
Mary McCarthy, a high school social studies teacher near Houston, said AI has transformed her teaching and improved her work-life balance by easing lesson planning and other tasks. Training provided by her school district has also helped her demonstrate responsible AI use to students.
“If all we say is ‘AI is bad, and kids will get lazy,’ then of course that’s what will happen if we don’t guide them,” McCarthy said. “As the adult in the room, I see it as my duty to help them learn how to use this tool responsibly.”
Teachers say the technology is best used sparingly
Since the launch of ChatGPT in late 2022, opinions on the use of artificial intelligence in education have changed significantly. Many schools initially banned the technology, but over time, educators have begun exploring ways to integrate it into classrooms. Despite this shift, concerns remain. According to a recent study, nearly half of teachers worry that students' reliance on AI could harm their ability to think critically, work independently, or persevere through problem-solving tasks.
However, teachers also believe that understanding AI better helps them recognize when students overuse it. Colorado high school English teacher Darren Barkett, for example, says AI-generated assignments often lack grammatical errors and contain unusually complex language—both signs that a chatbot was involved. Barkett himself uses ChatGPT for lesson planning and grading multiple-choice tests and essays.
In suburban Chicago, middle school art teacher Lindsay Johnson uses only AI tools approved by her school to ensure student privacy. She introduces AI technology later in the creative process so students can build confidence in their own abilities first.
For her eighth graders' final project, Johnson asked students to draw a portrait of someone influential in their lives. After they finished the facial details, she offered them the option to use generative AI for designing the background. She relied on an AI feature in Canva, a design platform vetted by her school district's IT team for safety and privacy.
“My goal as an art teacher is to show students the range of tools available and help them understand how to use those tools properly,” Johnson said. Interestingly, some students declined the AI assistance. “About half the class said, ‘I already have a vision, and I want to complete it myself,’” she added.
5 months ago
OpenAI pulls Jony Ive partnership details after court ruling in trademark dispute
A promising collaboration between OpenAI CEO Sam Altman and renowned iPhone designer Jony Ive to create a new AI hardware product has encountered a legal obstacle after a federal judge ordered a temporary halt to the marketing of the venture.
Last month, OpenAI announced the acquisition of io Products, a product and engineering firm co-founded by Ive, in a deal reportedly worth nearly $6.5 billion.
However, the project quickly ran into legal trouble when a startup named IYO, which is also working on AI hardware, filed a trademark infringement complaint. The startup claims its product was pitched to Altman’s personal investment firm and Ive's design company in 2022 and alleges that the new venture's name is confusingly similar to its own.
U.S. District Judge Trina Thompson ruled on Friday that IYO has a sufficiently strong trademark infringement claim to move forward with legal proceedings, scheduling a hearing for October. Until then, Altman, Ive, and OpenAI are barred from “using the IYO mark, and any mark confusingly similar thereto, including the IO mark, in connection with the marketing or sale of related products.”
In response to the ruling, OpenAI removed all references to the new venture from its website, including the original May 21 announcement. The replaced webpage now displays a message stating the content is "temporarily down due to a court order" and adds, "We don’t agree with the complaint and are reviewing our options."
IYO CEO Jason Rugolo welcomed the court's decision, issuing a statement on Monday asserting that the company will firmly defend its brand and technology.
"IYO will not roll over and let Sam and Jony trample on our rights, no matter how rich and famous they are," Rugolo said.
5 months ago
New York Times signs first AI content licensing deal with Amazon
The New York Times Company has signed a multiyear agreement to license its content to Amazon for AI-related uses, marking the newspaper’s first such deal in the generative AI space.
Announced on May 29, the partnership comes as the Times continues its legal battle against OpenAI and Microsoft over alleged copyright infringement involving the use of its journalism to train AI systems.
According to Variety, the agreement will bring The New York Times’s editorial content to various Amazon platforms, enhancing customer experiences across the tech giant’s services.
According to the companies, the collaboration aims to make the Times’s original content more accessible within Amazon products, including direct links to Times offerings, and reflects a shared commitment to delivering global news and perspectives via AI.
Under the deal, Amazon will license content from The New York Times, including NYT Cooking and The Athletic sports publication. This includes the real-time display of summaries and brief excerpts on Amazon products such as Alexa, and the use of content to train Amazon’s proprietary foundation AI models.
New York Times CEO Meredith Kopit Levien said, “This deal is consistent with our long-held principle that high-quality journalism is worth paying for. It aligns with our deliberate approach to ensuring that our work is valued appropriately, whether through commercial deals or through the enforcement of our intellectual property rights.”
The Times’s move reflects the broader, mixed response of media companies to the rise of artificial intelligence, with some opting for licensing partnerships while others pursue legal action.
Last month, The Washington Post, owned by Amazon founder Jeff Bezos, entered a “strategic partnership” with OpenAI.
6 months ago
OpenAI Codex: A Pair Programmer to Shape the Future Coding Paradigm
OpenAI, the company behind the phenomenally popular chatbot ChatGPT, launched its next-generation coding agent, Codex, on May 16, 2025. Designed to streamline several fundamental aspects of software programming, Codex adds fresh momentum to the rapidly growing field of automated coding. Let’s take a deeper look at what Codex is and what it can do.
What is Codex?
OpenAI Codex is an AI model that translates natural language into computer code. Built on OpenAI's o3 reasoning model technology and trained on vast codebases, it powers tools like GitHub Copilot, enabling users to write, understand, and automate code tasks across multiple programming languages using plain English instructions. Codex can access GitHub repositories, drawing on the billions of lines of code already stored there, and uses that knowledge to come up with solutions to complex tasks.
Development History of OpenAI Codex
Codex dates back to August 2021, roughly a year after OpenAI introduced GPT-3, a large language model trained on text. To adapt GPT-3's text-generation abilities to programming, developers fine-tuned a 12-billion-parameter version of the model on code and named it Codex. This initial version of Codex could write SQL queries, convert conversations into Python code, and build UI components.
Later, from 2022 to 2023, Codex continued evolving in smaller versions and was integrated into GPT-4 and ChatGPT Pro. Recently, OpenAI has given Codex its own environment and advanced coding functionalities. In doing so, the company has also shifted Codex's underlying model from GPT-3 to codex-1, a reasoning model fine-tuned for software engineering.
What Can Codex Do?
The new Codex model is adept at executing multiple coding-related tasks. Here is a quick overview:
Conversational Coding
Perhaps one of the most groundbreaking features of Codex is the ability to translate plain English into code. This allows users to instruct the AI with phrases and receive accurate, ready-to-run scripts across a range of programming languages.
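To make this concrete, here is a hypothetical exchange: the plain-English prompt appears as a comment, and the function beneath it is the kind of ready-to-run script a Codex-style model typically returns. Both the prompt and the function are invented for illustration and are not actual Codex output.

```python
# Prompt (plain English): "Write a function that returns the n-period
# moving average of a list of numbers."
#
# A plausible AI-generated answer:

def moving_average(values, n):
    """Return the n-period moving averages of `values` as a list."""
    if n <= 0 or n > len(values):
        return []
    # Slide a window of size n across the list and average each window.
    return [sum(values[i:i + n]) / n for i in range(len(values) - n + 1)]

print(moving_average([1, 2, 3, 4, 5], 2))  # [1.5, 2.5, 3.5, 4.5]
```

Even for a result this clean, reviewing the generated code before running it remains the user's responsibility, a point the concerns section below returns to.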
Developing Software
OpenAI’s new code models are transforming the software landscape by allowing developers to generate functional code in seconds. With simple prompts, users can create entire programs without manually writing each line.
Analysing Data
From spreadsheets to large CSV files, OpenAI’s code tools can now analyse, clean, and visualise data with minimal user input. Journalists, researchers, and business analysts can quickly generate charts or summaries by simply uploading a file and asking questions.
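As an illustration, the snippet below is the sort of summary script such a tool might generate when asked to "summarise the price column" of an uploaded CSV. The sample data and column names are invented for this example, and only the Python standard library is used.

```python
import csv
import io
import statistics

# Inline sample data standing in for an uploaded CSV file.
raw = """product,price
widget,4.00
gadget,6.50
gizmo,9.50
"""

# Parse the 'price' column and report simple summary statistics.
prices = [float(row["price"]) for row in csv.DictReader(io.StringIO(raw))]
print(f"count={len(prices)} mean={statistics.mean(prices):.2f} max={max(prices):.2f}")
# count=3 mean=6.67 max=9.50
```

In practice the user never sees this code unless they ask for it; they upload a file, pose a question, and receive the printed summary or a chart.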
Educational Companion for Coding Learners
The models serve as a real-time tutor, capable of explaining code line by line, debugging errors, and answering conceptual questions. For students and self-learners, this offers an accessible alternative to traditional learning methods.
Accelerating AI-Powered Applications
Startups and enterprises alike are leveraging these tools to make prototypes of AI-driven apps without the need for large development teams, cutting costs and speeding up innovation cycles.
Common Issues and Concerns on the Rise
Accuracy and Reliability
While OpenAI's models can generate functional code with impressive speed, experts caution that the outputs are not always correct or optimal. In complex scenarios, the AI may produce code that looks fine but contains subtle bugs, inefficiencies, or security flaws. Relying on such code without proper review could pose risks, particularly in sensitive applications like healthcare, finance, or cybersecurity.
Security and Misuse Risks
OpenAI’s code generation capabilities could be exploited to create malicious software. Although safeguards are in place, the potential for misuse remains a real concern. Cybersecurity experts warn that making code generation widely available at this scale could empower bad actors.
Impact on Programming Jobs
As these tools improve, there's a growing debate about the impact on employment. While some argue that AI will complement human developers by automating repetitive tasks, others worry it could eventually reduce demand for entry-level programmers, especially in roles focused on routine coding or bug fixing.
Code Licensing Confusion
The use of AI-generated code raises legal questions around ownership and licensing. Developers and companies are seeking clarity on how such content can be safely used in commercial products.
Skill Dilution
Some educators and software veterans fear that easy access to AI-generated code may hinder learning. If new developers rely too heavily on tools like Codex, they may struggle to build a deep understanding of how software works. Over time, this could lead to a generation of coders with limited problem-solving skills or creative confidence.
Conclusion
OpenAI Codex brings a transformative shift in how we code, making programming faster and more open. While it offers immense potential to streamline development and learning, thoughtful use, ethical oversight, and continued human expertise are essential to harnessing its power responsibly in the evolving tech landscape.
6 months ago
OpenAI reverses course and says its nonprofit will continue to control its business
After months of exploring a transition to a for-profit model, OpenAI announced on Monday that it is changing direction and will keep its nonprofit entity in charge of the organization behind ChatGPT and other AI technologies.
“In light of feedback from civic leaders and discussions with the Attorneys General of California and Delaware, we’ve decided the nonprofit will maintain control,” CEO Sam Altman wrote in a message to staff.
Altman and Bret Taylor, the chair of OpenAI’s nonprofit board, explained that while the board chose to preserve nonprofit oversight, they are introducing a new strategy to support the company’s growth.
As part of what Taylor referred to as a “recapitalization,” the existing for-profit branch of the organization will be restructured into a public benefit corporation—one that must weigh both shareholder interests and its broader mission.
Shareholders will also receive stock, and a cap on profits for some investors will be lifted as part of the new plan. Altman said the changes would make it easier for the for-profit to behave more like a normal company.
Taylor declined to say Monday how large of an ownership stake the nonprofit will have in the new public benefit corporation. He said in a call with reporters that the nonprofit will choose the board members of the public benefit corporation and, at first, they will likely be the same people who now sit on OpenAI’s nonprofit board.
Public benefit corporations were first created in Delaware in 2013 and other states have adopted the same or similar laws that require the companies to pursue not just profit but a social good. Public benefit corporations, which include Amalgamated Bank and the online education platform Coursera, need to define that social good, which can vary broadly, when they incorporate.
Altman said that converting from a limited liability company to a public benefit corporation “just sets us up to be a more understandable structure to do the things that a company of our scope has to do.”
“There’s so much more demand to use AI tools than we thought there was going to be,” Altman said. Getting access to more capital will make it easier for OpenAI to pursue mergers and acquisitions “and other normal things companies would do,” Altman said.
OpenAI’s co-founders, including Altman and Tesla CEO Elon Musk, originally started it as a nonprofit research laboratory on a mission to safely build what’s known as artificial general intelligence, or AGI, for humanity’s benefit. Nearly a decade later, OpenAI has reported its market value as $300 billion and counts 400 million weekly users of ChatGPT, its flagship product.
OpenAI first outlined plans last year to convert its core governance structure but faced a number of challenges. One is a lawsuit from Musk, who accuses the company and Altman of betraying the founding principles that led Musk to invest in the charity and tried to block the conversion to a for-profit. A federal judge last week dismissed some of Musk’s claims and allowed others to proceed to a trial set for next year.
OpenAI also faced scrutiny from the top law enforcement officers in Delaware, where the company is incorporated, and California, where it operates out of a San Francisco headquarters. The California attorney general’s office said in a statement that it was reviewing the plan and, “This remains an ongoing matter — and we are in continued conversations with Open AI.”
The attorney general’s office in Delaware did not immediately return a request for comment.
Altman said he still expects a large investment from Japanese technology giant SoftBank Group, which in February announced plans to set up a joint company with OpenAI to push AI services.
6 months ago
Judge allows newspaper copyright lawsuit against OpenAI to proceed
A federal judge has ruled that The New York Times and other newspapers can proceed with a copyright lawsuit against OpenAI and Microsoft seeking to end the practice of using their stories to train artificial intelligence chatbots.
U.S. District Judge Sidney Stein of New York on Wednesday dismissed some of the claims made by media organizations but allowed the bulk of the case to continue, possibly to a jury trial.
“We appreciate Judge Stein’s careful consideration of these issues," New York Times attorney Ian Crosby said in a statement. “As the order indicates, all of our copyright claims will continue against Microsoft and Open AI for their widespread theft of millions of The Times’s works, and we look forward to continuing to pursue them.”
The judge's ruling also pleased Frank Pine, executive editor of MediaNews Group and Tribune Publishing, owners of some of the newspapers that are part of a consolidated lawsuit in a Manhattan court.
“The claims the court has dismissed do not undermine the main thrust of our case, which is that these companies have stolen our work and violated our copyright in a way that fundamentally damages our business,” Pine said in a statement.
Stein didn't explain the reasons for his ruling, saying that would come “expeditiously.”
OpenAI said in a statement it welcomed “the court’s dismissal of many of these claims and look forward to making it clear that we build our AI models using publicly available data, in a manner grounded in fair use, and supportive of innovation.”
Microsoft declined to comment.
The Times has said OpenAI and its business partner Microsoft have threatened its livelihood by effectively stealing billions of dollars worth of work by its journalists, in some cases spitting out Times’ material verbatim to people who seek answers from generative artificial intelligence like OpenAI’s ChatGPT.
8 months ago
OpenAI board unanimously rejects Elon Musk's $97.4b proposal
OpenAI says its board of directors has unanimously rejected a $97.4 billion takeover bid by Elon Musk.
“OpenAI is not for sale, and the board has unanimously rejected Mr Musk’s latest attempt to disrupt his competition," said a statement Friday from Bret Taylor, chair of OpenAI's board.
OpenAI attorney William Savitt in a letter to Musk's attorney Friday said the proposal “is not in the best interests of OAI’s mission and is rejected.”
Musk, an early OpenAI investor, began a legal offensive against the ChatGPT maker nearly a year ago, suing for breach of contract over what he said was the betrayal of its founding aims as a nonprofit.
OpenAI has increasingly sought to capitalize on the commercial success of generative AI. But the for-profit company is a subsidiary of a nonprofit entity that's bound to a mission — which Musk helped set — to safely build better-than-human AI for humanity's benefit. OpenAI is now seeking to more fully convert itself to a for-profit company, but would first have to buy out the nonprofit's assets.
Throwing a wrench in those plans, Musk and his own AI startup, xAI, and a group of investment firms announced a bid Monday to buy the nonprofit that controls OpenAI. Musk in a court filing Wednesday further detailed the proposal to acquire the nonprofit’s controlling stake.
Savitt's letter Friday said that court filing added “new material conditions to the proposal. As a result of that filing, it is now apparent that your clients’ much-publicized ‘bid’ is in fact not a bid at all.” In any event, “even as first presented,” the board has unanimously rejected it, Savitt said.
Musk has alleged in the lawsuit that OpenAI is violating the terms of his foundational contributions to the charity. Musk had invested about $45 million in the startup from its founding until 2018, his lawyer has said.
He escalated the legal dispute late last year, adding new claims and defendants, including OpenAI's business partner Microsoft, and asking for a court order that would halt OpenAI’s for-profit conversion. Musk also added xAI as a plaintiff, claiming that OpenAI was also unfairly stifling business competition. A judge is still considering Musk's request but expressed skepticism about some of his claims in a court hearing last week.
9 months ago
Alibaba unveils AI model, claims it surpasses DeepSeek, ChatGPT
Chinese tech behemoth Alibaba has introduced its Qwen 2.5-Max AI model, boldly asserting that it has outpaced DeepSeek’s renowned DeepSeek-V3 model, reports Firstpost.
Unveiled on the first day of the Lunar New Year, the launch of Qwen 2.5-Max highlights the intensifying competition within China’s AI sector, reflecting the pressure DeepSeek’s rapid ascent has exerted not only on global competitors but also on domestic ones.
Alibaba's cloud division revealed on WeChat that Qwen 2.5-Max outperformed OpenAI’s GPT-4o, DeepSeek-V3, and Meta’s Llama-3.1-405B across various performance metrics. The timing of this announcement, during the Lunar New Year festivities, underscores the urgency felt by Chinese firms to maintain competitiveness against DeepSeek, which has made waves in the AI market since its January debut.
DeepSeek’s Market Disruption
DeepSeek’s sudden success, beginning with the launch of its AI assistant powered by the DeepSeek-V3 model on January 10 and followed by the R1 model on January 20, has disrupted the tech industry. The Chinese startup’s cost-effective approach to developing powerful AI has raised concerns in Silicon Valley, particularly as investors question the high development costs associated with leading US companies. In response, Chinese competitors are racing to enhance their models.
ByteDance, for example, updated its flagship AI model shortly after DeepSeek’s R1 release, claiming it surpassed OpenAI’s o1 in the AIME benchmark test, which measures an AI model’s ability to solve challenging competition-level mathematics problems. This move highlights how DeepSeek’s swift rise has spurred action among domestic firms, with Alibaba’s latest release being a response to DeepSeek’s innovations.
The Emergence of a New Competitor: Kimi k1.5 from Moonshot
Complicating the race further is Moonshot AI’s new Kimi k1.5 model, which launched just days after DeepSeek’s R1. Kimi k1.5 is being regarded as a direct rival to both DeepSeek’s models and OpenAI’s o1, with reports suggesting it outperforms both on key benchmarks. Unlike DeepSeek-R1, which lacks multimodal capabilities, Kimi k1.5 is a multimodal model capable of processing and reasoning across text, images, and code, giving it a substantial advantage for tasks requiring both visual and textual data.
Kimi k1.5 has also been developed at a fraction of the cost compared to similar cutting-edge AI models in the US, positioning Moonshot AI as a growing force in the global AI arena. Its advanced reinforcement learning techniques further enhance its versatility, making it highly adaptable to a range of applications.
Shifting Dynamics in China’s AI Industry
China’s rising influence in AI is becoming increasingly apparent, as companies like DeepSeek, Alibaba, and Moonshot AI challenge the longstanding dominance of US tech giants. The launch of DeepSeek-V2 last May ignited an AI price war in China, prompting Alibaba’s cloud division to reduce prices by up to 97%. This price-cutting strategy has since become common practice among Chinese firms, including Baidu and Tencent, as they strive to develop AI models that can compete with OpenAI and other US-based giants.
DeepSeek, led by Liang Wenfeng, has taken a distinct approach, operating more like a research lab with a lean team of graduates and PhD students. Liang’s vision of achieving artificial general intelligence (AGI) with significantly lower overhead than larger tech companies contrasts with the more costly, hierarchical models of Alibaba and other Chinese tech giants.
As China’s AI sector continues to evolve at a rapid pace, its influence on the global market grows. The competition between DeepSeek, Moonshot AI, and Alibaba marks a crucial shift in the AI landscape, with these startups and tech giants pushing the boundaries of AI development. The race for AI supremacy is underway, and China is leading the way.
10 months ago
ChatGPT faces outage, users worldwide report problems
OpenAI’s generative AI chatbot, ChatGPT, is currently undergoing significant disruptions, preventing users from engaging in chats or accessing their previous conversations, reports NDTV.
Nearly 4,000 users had reported problems on the outage tracking site Downdetector, it said.
The outages seem to be affecting not only ChatGPT but also other OpenAI services, leading to speculation that the GPT-4o and GPT-4o mini models are down, causing the wider disruptions. Users have been sharing their experiences and reactions on platforms like X.com and Instagram.
In a recent podcast appearance, OpenAI CEO Sam Altman shared his vision of a future where AI surpasses human intellect, saying this shift will feel like a natural part of life for future generations.
"My kid is never going to grow up being smarter than AI," Altman commented on the Re: Thinking podcast with Adam Grant, acknowledging that AI will outperform humans in many areas. "Of course, it's smarter than us. Of course, it can do things we can't, but also who really cares?" he added.
10 months ago