OpenAI pauses operations for a week amid Meta’s billion-dollar talent battle
In a move sending shockwaves through Silicon Valley, OpenAI is shutting down operations for an entire week. Officially, the company cites employee burnout as the reason. However, the timing raises serious questions, especially as Meta aggressively courts OpenAI’s top talent with eye-popping offers. To many, the break feels less like a wellness initiative and more like a defensive response in the intensifying battle for AI expertise.
Why is OpenAI shutting down?
According to OpenAI, the week-long pause is intended to help employees recover after enduring relentless, months-long stretches of 80-hour work weeks. The decision comes amid mounting internal concerns over burnout, fatigue, and declining morale across teams. Yet the timing of the break coincides with Meta's aggressive efforts to poach OpenAI staff, leading many to suspect the shutdown is as much about damage control as it is about employee well-being.
Meta's aggressive talent poaching
Meta is making no secret of its recruitment ambitions. Reports suggest the company is offering signing bonuses as high as $100 million to attract leading AI researchers and engineers, particularly those trained at OpenAI. Several former OpenAI employees have already migrated to Meta's FAIR division and its newly revitalized AGI research teams. With OpenAI staff grappling with exhaustion and feelings of being undervalued, Meta's lucrative offers are proving hard to resist — and Meta is fully aware of the opportunity.
Internal response at OpenAI
Leaked internal memos from OpenAI's Chief Research Officer Mark Chen and CEO Sam Altman reveal the company's growing unease. Chen admitted to heightened anxiety within teams and encouraged staff to "reconnect with the mission." Meanwhile, Altman has reportedly pledged to revamp compensation packages and improve internal recognition, and has called for unity to resist external recruitment pressures. However, many insiders feel these promises have come too late, and Meta's offers are simply too enticing.
Risks and growing fears
There is widespread concern that Meta will use OpenAI's shutdown week to accelerate its recruitment efforts, potentially blindsiding the company. While OpenAI's technical teams are expected to rest, Meta's recruiters remain active. Only OpenAI's executive leadership will continue working during the break — a clear sign that management views the situation as more than a routine wellness measure.
Broader implications for OpenAI and the AI industry
This shutdown exposes two escalating issues: the unsustainable working conditions at AI labs racing toward Artificial General Intelligence (AGI), and the fierce competition for elite talent. For OpenAI, the pause marks both a moment of vulnerability and a critical cultural inflection point. How the company navigates this period could not only determine its future but also influence the broader trajectory of the AI industry itself.
Source: With inputs from Hindustan Times
4 months ago
Prof Yunus urges Meta to find ways to fight disinformation more effectively
Chief Adviser Professor Muhammad Yunus on Wednesday urged Meta, which operates several social media and communication platforms, including Facebook, Instagram, Threads, Messenger and WhatsApp, to find an effective way to combat disinformation that disrupts social harmony and spreads hatred.
“This (disinformation) is a big problem. You must find a way to fight it,” said the Chief Adviser.
Prof Yunus made the comment when Simon Milner, Meta's Vice President of Public Policy for APAC, and Ruzan Sarwar, Public Policy Manager, met him at the State Guest House Jamuna.
"Bangladesh is a densely populated country. One wrong word can destabilise the whole country. Some people do it deliberately," said the Chief Adviser.
Milner said they were ready to engage with the interim government of Bangladesh to counter disinformation, especially ahead of the upcoming general election next year, and had had meetings with different Bangladeshi authorities and rights groups in the past few days.
“We have had a dedicated team for Bangladesh for the last five years,” he said.
The Chief Adviser said Meta platforms, especially Facebook, have the potential to promote business growth, but can also be dangerous if ethical standards are not maintained.
Chief Adviser's Special Assistant on Posts, Telecommunications and Information Technology, Faiz Ahmad Taiyeb, was among others present on the occasion, according to Chief Adviser's press wing.
Taiyeb urged Meta to strengthen its Bangla language capability, noting that Meta's large language model AI relies heavily on English, which limits its usefulness in Bangladesh.
On Tuesday, the Meta officials held a meeting with ICT ministry officials, where the Bangladesh side, citing recent research, urged Meta to increase investment in Bengali large language models and AI-based sentiment analysis in Bengali, as well as to increase the number of human reviewers to tackle fake news and misinformation.
The ICT Division has called on Meta to strengthen the enforcement of its community standards in the context of Bangladesh by recruiting more Bangladeshi content reviewers who possess a deeper understanding of local language, culture and sensitivities.
Besides, the Bangladeshi side has requested Meta to deploy cache servers and edge routers within the country to improve service efficiency, optimise bandwidth and protect Personally Identifiable Information (PII).
Representatives of Bangladesh Police and BTRC who were present at Tuesday's meeting asked Meta to shorten the time it takes to remove harmful posts in order to protect citizen safety.
The police also asked for Meta's cooperation in providing proactive, faster responses in threat detection and crime detection, as well as alerts for misinformation and disinformation, incitement to mob violence, and suicide risk.
5 months ago
Rise in harmful content on Facebook following Meta's moderation rollback
Meta's latest Integrity Report shows worrying spike in violent posts and harassment after policy shift aimed at easing restrictions on political expression.
Facebook has seen a notable rise in violent content and online harassment following Meta’s decision to ease enforcement of its content moderation policies, according to the company’s latest Integrity Report.
The report, the first since Meta overhauled its moderation strategy in January 2025, reveals that the rollback of stricter content rules has coincided with a drop in content removals and enforcement actions — and a spike in harmful material on its platforms, including Instagram and Threads.
Meta’s shift, spearheaded by CEO Mark Zuckerberg, was aimed at reducing moderation errors and giving more space for political discourse. However, the company now faces growing concern that the relaxed rules may have compromised user safety and platform integrity.
Violent Content and Harassment on the Rise
The report shows that violent and graphic content on Facebook increased from 0.06–0.07 per cent in late 2024 to 0.09 per cent in the first quarter of 2025. While the percentages appear small, the scale is significant for a platform used by billions.
Likewise, bullying and harassment rates rose in the same period. Meta attributed this to a March spike in violating content, noting a slight rise from 0.06–0.07 per cent to 0.07–0.08 per cent. These increases mark a reversal of a downward trend in harmful content seen in previous years.
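To see why small prevalence percentages still matter at Facebook's scale, a rough back-of-envelope calculation helps. The sketch below assumes, purely for illustration, 100 billion content views per day across the platform; that figure is an assumption, not one reported by Meta.

```python
# A minimal back-of-envelope calculation: translating prevalence percentages
# into implied absolute daily view counts. The daily-views figure is an
# assumed, illustrative number, not one reported by Meta.

daily_content_views = 100_000_000_000      # assumption: 100 billion views/day

prevalence_late_2024 = 0.0007              # 0.07 per cent
prevalence_q1_2025 = 0.0009                # 0.09 per cent

views_before = daily_content_views * prevalence_late_2024
views_after = daily_content_views * prevalence_q1_2025

print(f"Implied violent-content views/day, late 2024: {views_before:,.0f}")
print(f"Implied violent-content views/day, Q1 2025:  {views_after:,.0f}")
print(f"Additional daily views implied by the shift:  {views_after - views_before:,.0f}")
```

Under that assumption, a 0.02-percentage-point rise corresponds to roughly 20 million additional daily views of violent content.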
Content Removals and Enforcement Plummet
The rise in harmful posts comes as Meta dramatically reduces enforcement activity. Only 3.4 million pieces of content were actioned under its hate speech policy in Q1 2025 — the lowest since 2018. Spam removals also fell sharply, from 730 million at the end of 2024 to 366 million in early 2025. Additionally, the number of fake accounts removed dropped from 1.4 billion to 1 billion.
Meta’s new enforcement strategy focuses primarily on the most severe violations, such as child exploitation and terrorism, while areas previously subject to stricter moderation — including immigration, gender identity, and race — are now framed as protected political expression.
The definition of hate speech has also been narrowed. Under the revised rules, only direct attacks and dehumanising language are flagged. Content previously flagged for expressing contempt or exclusion is now permitted.
Shift in Fact-Checking Strategy
In another major change, Meta has scrapped its third-party fact-checking partnerships in the United States, replacing them with a crowd-sourced system known as Community Notes. The system, now active across Facebook, Instagram, Threads, and even Reels, relies on users to flag and annotate questionable content.
While Meta has yet to release usage data for the new system, critics warn that such an approach could be vulnerable to manipulation and bias in the absence of independent editorial oversight.
Fewer Errors, Says Meta
Despite the concerns, Meta is presenting the new moderation approach as a success in terms of reducing errors. The company claims moderation mistakes in the United States dropped by 50 per cent between the final quarter of 2024 and Q1 2025. However, it has not disclosed how this figure was calculated. Meta says future reports will include more transparency on error metrics.
“We are working to strike the right balance between overreach and under-enforcement,” the report states.
Teen Protections Remain in Place
One area where Meta has not scaled back enforcement is in content directed at teenagers. The company has maintained strict protections against bullying and harmful content for younger users and is introducing dedicated Teen Accounts across its platforms to improve content filtering.
Meta also highlighted growing use of artificial intelligence, including large language models, in its moderation systems. These tools are now exceeding human performance in some cases and can automatically remove posts from review queues if no violation is detected.
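As a rough illustration of the kind of automated triage described above, the sketch below drops a reported post from a human review queue when a classifier finds no likely violation. The classifier, threshold, and queue structure are simplified assumptions for illustration only, not Meta's actual systems.

```python
# Minimal sketch of LLM-assisted moderation triage: score reported posts and
# keep only likely violations in the human review queue. The classifier is a
# stand-in stub; real systems would call a trained model.

from dataclasses import dataclass

@dataclass
class ReportedPost:
    post_id: str
    text: str

def classify_violation(post: ReportedPost) -> float:
    """Stub classifier returning a violation probability in [0, 1]."""
    flagged_terms = ("attack", "threat")  # placeholder heuristic, not a real policy
    return 0.9 if any(t in post.text.lower() for t in flagged_terms) else 0.1

def triage(queue: list[ReportedPost], threshold: float = 0.5) -> list[ReportedPost]:
    """Keep only posts the model considers likely violations for human review."""
    return [p for p in queue if classify_violation(p) >= threshold]

queue = [
    ReportedPost("1", "I will attack you"),
    ReportedPost("2", "Great game last night!"),
]
print([p.post_id for p in triage(queue)])  # -> ['1']; post 2 leaves the queue
```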
As Meta pushes ahead with its looser content policies, experts and users alike will be watching closely to see whether the company can truly balance free expression with safety — or whether its platforms risk becoming breeding grounds for harmful content.
Source: With inputs from agencies
6 months ago
Meta's Movie Mate to Chat with M3GAN in Theater: Interactive Experience or Distraction?
Meta has launched an experimental feature, Movie Mate. It allows theatregoers to engage in real-time conversation with the AI character M3GAN during screenings. M3GAN, a horror sci-fi blend, already blurs the line between technology and terror. Now, the viewing experience itself takes a bold new turn. Let’s explore whether this move signals a thrilling innovation or an unwelcome audience distraction.
Introducing Movie Mate
Blumhouse, the production house behind ‘M3GAN,’ partnered with Meta to pilot a new kind of theatrical tech experiment: Movie Mate. This interactive chatbot was designed not to scare but to engage, prompting viewers to tap away on their handheld screens. As the movie progressed, users received timed trivia drops and AI-generated quips, all in sync with the onscreen chaos. Meta, the parent company of Facebook and Instagram, pitched this as a bold strategy to pull distracted audiences back into theatre seats.
The rollout, however, wasn’t grand or heavily publicised. On April 30, 2025, Blumhouse and Universal quietly reintroduced ‘M3GAN,’ the horror-sci-fi tale of AI gone rogue. The horror movie marks the first title to feature Movie Mate integration. The occasion was Blumhouse’s “Halfway to Halloween” event, a nod to fans of the spooky season.
Roughly 70 attendees gathered at AMC Universal CityWalk in Los Angeles. As the lights dimmed, a soft glow lit up the auditorium, dozens of phones ready for digital dialogue.
During the opening scenes, the app delivered around ten messages. One, cheeky and in-character, asked, “Do you think they’re inventing other dolls like me?” A simple yes was met with the wry retort, “Don’t be delulu.”
People’s Response
At first glance, the novelty of Movie Mate sparked genuine curiosity among some theatregoers. As the AI chatbot introduced itself in the voice of M3GAN, the initial reactions were light-hearted and intrigued. The tech felt fresh, almost playful, in its attempt to personalise the cinematic experience.
But that early excitement didn't hold for long. Many audience members found themselves second-guessing the very act of looking at their phones. The fear of seeming rude or disruptive in a darkened theatre overshadowed the fun. Several admitted to ignoring most of the chatbot's messages until the film had ended, unable to fully balance the screen in their hands with the one in front of them.
Industry observers echoed this ambivalence. Trade publications quietly questioned its staying power, suggesting that it leaned more toward a marketing stunt than meaningful innovation. Mashable noted that the real intent may be to hype ‘M3GAN 2.0,’ set for June 27, 2025, and boost Meta’s push for second-screen engagement.
M3GAN Makers’ Take
Blumhouse isn’t blind to shifting audience behaviour. Data from the National Research Group shows nearly 20% of moviegoers aged 6–17 text during films, despite theatre rules. Rather than resist the habit, the studio sees it as a chance to redirect it in support of the film experience.
The M3GAN maker believed the experiment was worthwhile, particularly because younger audiences seemed to embrace it. From their perspective, this indicated a growing appetite for interactive elements that enhance, rather than replace, traditional moviegoing.
The criticism hasn’t gone unnoticed, but the studio seems unbothered. They acknowledge that this approach won’t appeal to everyone and that not all innovations are meant for universal acceptance.
There also seems to be a strategic vision behind the app. Blumhouse executives view it as a way to turn movie releases into events: experiences that generate buzz and draw younger viewers to participate. They see younger audiences as eager to interact with the media they consume.
Even so, there’s recognition of its limitations. For now, Blumhouse and Universal have no plans to extend Movie Mate to upcoming titles, including ‘M3GAN 2.0’.
Where Movie Theaters Stand
Movie Mate fits into a broader vision from Meta’s CEO, Mark Zuckerberg, who aims to integrate AI chatbots across his platforms and beyond. In Zuckerberg’s futuristic world, these chatbots are designed for personalised, fun interactions at any moment. This includes, it seems, even during movie screenings.
The timing for such a move comes amid significant challenges for the movie industry. Comscore reports a 33% drop in the North American box office since 2019, despite hits like ‘A Minecraft Movie’ and ‘Sinners,’ as the pandemic accelerated the shift to streaming. Given this, Hollywood is looking for innovative ways to lure audiences back into theatres.
However, not all movie theatres are eager to embrace this new technology. Alamo Drafthouse Cinema, known for catering to passionate film lovers, opted out of the AI chatbot experiment. Even several smaller theatres across the country followed in their footsteps.
Yet, the major chains, Regal Cineworld and AMC Entertainment, decided to test the waters. Their only condition: ticket buyers would be fully informed about what they were signing up for before entering the theatre.
Influential Backdrop
In many ways, it’s the younger audience that’s reshaping the theatre experience—often on their own terms.
Take A Minecraft Movie from Warner Bros., for instance. It turned into a surprise hit last month, not just for the story on screen but for the chaos unfolding in the aisles. Teen viewers ignored theatre etiquette. They shouted lines like ‘chicken jockey,’ climbed on each other’s shoulders, tossed popcorn in the air, and filmed it all for TikTok.
The trend isn’t isolated. Universal’s Wicked soared to USD 754 million last year, helped by fans who turned screenings into sing-alongs and social media events. Similarly, Sony’s ‘Anyone but You’ (2023) pulled in a surprising USD 220 million after viewers began recording their reactions to the finale.
The creators behind ‘M3GAN’ paid close attention to all of this.
Takeaways
Meta's Movie Mate app brought real-time chatbot interaction into theatres, aiming to engage audiences during ‘M3GAN’ screenings. Reactions were mixed—some curious, others hesitant or distracted. Blumhouse viewed it as a chance to enrich the experience for younger fans. Meanwhile, theatres remained divided, with major chains on board and boutique cinemas opting out. In this early phase of AI-led second-screening, whether it’s an interactive experience or a distraction remains unsettled.
7 months ago
Meta hit with fines by Turkey after refusing to restrict content
Meta said it has been hit with a hefty fine for resisting Turkish government demands to limit content on Facebook and Instagram.
President Recep Tayyip Erdogan's government has been trying to restrict opposition voices on social media after widespread protests erupted following the arrest of Istanbul's mayor, who's a key rival.
“We pushed back on requests from the Turkish government to restrict content that is clearly in the public interest, and have been fined by them as a consequence,” the company said in a statement.
The social media company did not disclose the size of the fine, except to say it was “substantial” and did not provide any more details about the content in question. The Associated Press has approached the Turkish government for comment.
“Government requests to restrict speech online alongside threats to shut down online services are severe and have a chilling effect on people’s ability to express themselves,” Meta said.
In recent years the Turkish government has increasingly sought to bring social media companies under its control. When protests erupted following the March 19 arrest of opposition Istanbul Mayor Ekrem Imamoglu, many social media platforms such as X, Instagram and Facebook were blocked.
More than 700 individual X accounts, including those belonging to journalists, media outlets, civil society organizations and student groups, were blocked, according to the Media and Law Studies Association. X said it would object.
Dozens have been arrested for social media posts deemed to be supporting the protests.
8 months ago
Meta to test crowd-sourced fact-checking using X's model
Meta will begin testing its crowd-sourced fact-checking initiative, Community Notes, on March 18, following the model used by Elon Musk's X, the company announced on Thursday.
Meta had previously discontinued its fact-checking programme in January, with CEO Mark Zuckerberg stating that fact-checkers had become “politically biased,” echoing criticisms long voiced by conservatives. However, media experts and social media researchers expressed deep concern over the policy change.
“The decision not only eliminates a valuable resource for users but also lends credibility to the widespread disinformation narrative that fact-checking is politically biased. Fact-checkers play a crucial role by providing essential context to viral claims that mislead millions on Meta,” said Dan Evon, lead writer for RumorGuard, the News Literacy Project’s digital tool that curates fact checks and educates people on identifying misinformation.
Meta first introduced fact-checking in December 2016 following Donald Trump’s election, responding to concerns about the spread of “fake news” on its platforms. For years, the company partnered with over 100 organisations across more than 60 languages to combat misinformation. The Associated Press withdrew from Meta’s fact-checking programme more than a year ago.
Community Notes will eventually replace fact checks, though not immediately. Meta stated that potential contributors in the U.S. can begin signing up for the programme, but their notes will not be visible right away.
“We will start by gradually and randomly admitting people from the waitlist and will take time to test the writing and rating system before any notes are published publicly,” Meta explained.
Meta emphasised that it would not determine what content gets rated or written, and notes will only be published if contributors with diverse viewpoints reach a broad consensus. Unlike the previous fact-checking system, where flagged misinformation saw reduced distribution, posts with Community Notes will not face penalties.
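A simplified sketch of this cross-viewpoint consensus rule appears below: a note is published only if raters from at least two different viewpoint groups each broadly agree it is helpful. The grouping and thresholds are illustrative assumptions; Meta has not published the details of its rating algorithm.

```python
# Conceptual sketch of a "diverse viewpoints must agree" publication rule.
# Groupings and thresholds are illustrative assumptions only.

from collections import defaultdict

def should_publish(ratings, min_agreement=0.8, min_raters_per_group=3):
    """ratings: list of (viewpoint_group, found_helpful: bool) tuples."""
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)

    if len(by_group) < 2:            # need raters from at least two viewpoints
        return False
    for votes in by_group.values():  # every group must independently agree
        if len(votes) < min_raters_per_group:
            return False
        if sum(votes) / len(votes) < min_agreement:
            return False
    return True

ratings = [("A", True), ("A", True), ("A", True),
           ("B", True), ("B", True), ("B", False)]
print(should_publish(ratings))  # False: group B agreement is only about 0.67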
Fact checks will remain in place outside the U.S. for now, though Meta intends to expand Community Notes globally in the future.
8 months ago
French publishers, authors sue Meta for AI copyright infringement
French publishers and authors have announced legal action against Meta, alleging that the social media giant used their works without permission to train its artificial intelligence model.
On Wednesday, three trade groups stated that they were suing Meta in a Paris court, accusing the company of the “massive use of copyrighted works without authorisation” to train its generative AI model.
The National Publishing Union, representing book publishers, has highlighted that "numerous works" from its members are appearing in Meta’s data pool, according to the group’s president, Vincent Montagne, in a joint statement.
Meta has not responded to a request for comment. The company has introduced generative AI-powered chatbot assistants for users on Facebook, Instagram, and WhatsApp.
Montagne accused Meta of engaging in “noncompliance with copyright and parasitism.”
Another trade group, the National Union of Authors and Composers, which represents 700 writers, playwrights, and composers, stated that the lawsuit is necessary to protect its members from “AI that plunders their works and cultural heritage to train itself.”
The union is also concerned about AI generating “fake books” that compete with real publications, said the group’s president, François Peyrony.
The third organisation involved in the lawsuit, the Société des Gens de Lettres, represents authors. Together, they demand the “complete removal” of data directories Meta created without authorisation to train its AI model.
Under the European Union’s comprehensive Artificial Intelligence Act, generative AI systems must comply with the 27-nation bloc’s copyright regulations and be transparent about the material used for training.
This case is the latest example of the ongoing conflict between the creative and publishing industries and technology firms over data and copyright.
Last month, British musicians released a silent album in protest against proposed changes to the U.K. government’s artificial intelligence laws, which artists fear could undermine their creative control.
Meanwhile, media and technology company Thomson Reuters recently won a legal dispute against a now-defunct legal research firm over fair use in AI-related copyright cases. Other cases, involving visual artists, news organisations, and others, continue to progress through U.S. courts.
8 months ago
Ex-Meta official’s 'explosive dispatch' on company to be published
An insider account, described as an "explosive dispatch" about seven pivotal years at Facebook/Meta, is set to be released next week, reports AP.
Flatiron Books announced on Wednesday that Careless People will be published on Tuesday. Written by Sarah Wynn-Williams, Meta's former director of global public policy who left Facebook in 2018, the book delves into the inner workings of the company.
The publisher's statement explains that Careless People takes readers behind the scenes of Meta’s boardrooms, private jets, and meetings with world leaders, shedding light on the appetites, excesses, blind spots, and priorities of executives Mark Zuckerberg, Sheryl Sandberg, and Joel Kaplan.
Wynn-Williams portrays them as deeply flawed, self-serving individuals, indifferent to the consequences of their actions on others for the sake of their own gain.
Flatiron further shares that Wynn-Williams will offer detailed accounts of Zuckerberg’s attempts to expand Meta in China and her efforts to urge the company to tackle hate speech and misinformation online.
The book also touches on distressing incidents of workplace harassment and misogyny, as well as the challenges of working motherhood at a time when Sheryl Sandberg was globally recognised for urging women to "Lean In."
8 months ago
Meta to invest up to $65 billion in AI projects in 2025
Meta Platforms, led by CEO Mark Zuckerberg, is planning a substantial investment of up to $65 billion (₹559,051 crore) in artificial intelligence initiatives for 2025.
The funds will primarily be directed towards building a massive new data centre and expanding the company’s AI workforce, Zuckerberg revealed in a Facebook post on Friday.
The proposed data centre is set to be so expansive that it could occupy a significant portion of Manhattan. Meta aims to bring online approximately one gigawatt of computing power in 2025 and anticipates ending the year with over 1.3 million graphics processing units (GPUs).
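For a rough sense of how those two figures relate, the arithmetic below divides one gigawatt across 1.3 million GPUs, giving an average power budget of under a kilowatt per accelerator; cooling and other data-centre overhead are ignored here as a simplifying assumption.

```python
# Rough arithmetic only: average power per GPU implied by the stated figures,
# ignoring cooling, networking and other data-centre overhead.

total_compute_power_watts = 1_000_000_000   # ~1 gigawatt
gpu_count = 1_300_000                       # ~1.3 million GPUs

watts_per_gpu = total_compute_power_watts / gpu_count
print(f"Average power budget per GPU: {watts_per_gpu:.0f} W")   # ~769 W
```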
“This is a massive effort, and over the coming years it will drive our core products and business, unlock historic innovation, and extend American technology leadership,” Zuckerberg stated.
Significant AI Investment
Meta has been heavily investing in AI over recent years, recently committing $10 billion (₹86,440 crore) to a new data centre in Louisiana.
Additionally, the company has acquired advanced computer chips to support products such as its AI assistant and Ray-Ban smart glasses. Zuckerberg also highlighted plans to significantly expand Meta's AI teams in 2025.
The announcement arrives shortly after OpenAI, SoftBank Group, and Oracle Corp. unveiled a $100 billion (₹864,404 crore) joint venture, named Stargate, aimed at developing data centres and AI infrastructure across the US.
Increased Capital Expenditure
Meta’s planned 2025 capital expenditure marks a 50% rise compared to its estimated spending for 2024, and more than double the amount allocated in 2023. The company is expected to release finalised 2024 capital expenditure figures when it announces its fourth-quarter earnings on 29 January.
Wall Street analysts had anticipated Meta would allocate $51.3 billion (₹443,444 crore) for 2025, according to Bloomberg-compiled estimates.
While Meta shares initially dipped during pre-market trading following the announcement, they later rose by as much as 1.7% after markets opened in New York. Broadcom Inc., a key provider of chip design services to Meta, also saw its stock climb by up to 3.9%.
Balancing Overspending and Strategic Positioning
Zuckerberg acknowledged concerns about potential overspending in AI, reiterating comments he made in July. “There’s a meaningful chance that a lot of the companies are over-building now,” he noted, “but the downside of being behind is that you’re out of position for the most important technology for the next 10 to 15 years.”
Unconventional Disclosure
The decision to share Meta’s spending plans on Facebook, five days ahead of the company’s quarterly earnings announcement, deviates from typical corporate practice.
Such projections are usually issued alongside financial results or via formal regulatory filings. However, federal regulators have ruled that social media platforms are suitable for companies to disclose material information to investors.
Robert Schiffman, Senior Credit Analyst at Bloomberg Intelligence, commented positively on the announcement. “Meta’s sharp increase in 2025 capital spending … may be its best use of capital, driving future growth and positioning itself as a leader in AI capabilities,” he remarked.
Source: With inputs from news wires
10 months ago
Dr Yunus urges Meta to intensify efforts against misinformation, fake news
Chief Adviser Prof Muhammad Yunus on Thursday urged Meta to step up efforts to tackle misinformation and fake news being spread through Facebook in Bangladesh.
Dr Yunus said "oligarchs and politicians" linked to the "toppled dictatorship of Sheikh Hasina" siphoned off tens of billions of dollars from Bangladesh during her 15 years of rule.
"These people are now spending their fortune to spread lies and misinformation about Bangladesh," the Chief Adviser told Sir Nick Clegg, the head of global affairs at Meta, on the sidelines of the World Economic Forum annual meeting in the Swiss city.
Clegg is also a former deputy prime minister of the United Kingdom.
Probir Mehta, Director of Policy Planning of Meta; Lamiya Morshed, SDGs Affairs Principal Coordinator; and Ambassador Tareq Md Ariful Islam, Bangladesh's Permanent Representative in Geneva, also joined the meeting.
When Facebook launched in 2004, it changed the way people connect. Apps like Messenger, Instagram and WhatsApp further empowered billions around the world.
Now, Meta is moving beyond 2D screens toward immersive experiences like augmented, virtual and mixed reality to help build the next evolution in social technology.
Clegg said Facebook would continue to do fact-checking and digital verification in Bangladesh as it is an important country, with its population the world's eighth largest, Chief Adviser's Deputy Press Secretary Abul Kalam Azad Majumder told UNB.
Meta's decision to stop fact-checking in the United States would not be applicable for Bangladesh and countries in Europe, he said.
He said Facebook would likely scale up its digital verification service in Bangladesh and would explore ways to enable fact-checking by users, similar to how Wikipedia operates.
During the half-hour-long meeting, Clegg also offered Meta's expertise in drafting a new cybersecurity law. "We have a lot of experience here," he said.
The Meta global affairs chief said Llama, the company's recently launched open-source large language model, could help revolutionise health care, farming and education.
He hoped it would be popular among users in Bangladesh.
Prof Yunus asked Meta to organise month-long training programs on Llama in Bangladesh. "It will open up new opportunities for Bangladesh's young people," he said.
10 months ago