Social Media
Discord to require face scan or ID for adult content
Discord will soon require users worldwide to verify their age through a face scan or by uploading an official ID to access adult content, as the platform rolls out stricter safety measures aimed at protecting teenagers.
The online chat service, which has more than 200 million monthly users, said the new system will place everyone into a teen-appropriate experience by default. Only users who successfully verify that they are adults will be able to access age-restricted communities, unblur sensitive material or receive direct messages from people they do not know.
Discord already requires age verification for some users in the UK and Australia to comply with local online safety laws. The company said the expanded checks will be introduced globally from early March.
“Nowhere is our safety work more important than when it comes to teen users,” said Savannah Badalich, Discord’s head of policy. She said the global rollout of teen-by-default settings would strengthen existing safety measures while still giving verified adults more flexibility.
Under the new system, users can either upload a photo of an identity document or take a short video selfie, with artificial intelligence used to estimate facial age. Discord said information used for age checks would not be stored by the platform or the verification provider, adding that face scans would not be collected and ID images would be deleted once verification is complete.
The company’s move has drawn mixed reactions. Drew Benvie, head of social media consultancy Battenhall, said the push for safer online communities was positive but warned that implementing age checks across millions of Discord communities could be challenging. He said the platform could lose users if the system backfires, but might also attract new users who value stronger safety standards.
Privacy advocates have previously raised concerns about age verification tools. In October, Discord faced criticism after ID photos of about 70,000 users were potentially exposed following a hack of a third-party firm involved in age checks.
The announcement comes amid growing pressure on social media companies from lawmakers to better protect children online. Discord’s chief executive Jason Citron was questioned about child safety at a US Senate hearing in 2024 alongside executives from Meta, Snap and TikTok.
With the new measures, including the creation of a teen advisory council, Discord is following a broader industry trend seen at platforms such as Facebook, Instagram, TikTok and Roblox, as regulators worldwide push for safer online environments for young users.
With inputs from BBC
4 days ago
The Australian woman tasked with keeping kids off social media
Julie Inman Grant, head of Australia’s eSafety Commission, faces weekly torrents of online abuse, including death and rape threats. The 57-year-old says much of it is directed at her personally, a consequence of her high-profile role in online safety.
After decades in the tech industry, Inman Grant now regulates some of the world’s biggest online platforms, including Meta, Snapchat, and YouTube. Her latest task was enforcing a pioneering law that bans Australians under 16 from social media, a move that has drawn global attention.
The law, which came into effect on December 10, covers ten platforms. Many parents support it, believing it gives them backing in managing their children’s online activity. Critics, however, argue children need guidance rather than exclusion, and that the ban may unfairly affect rural, disabled, and LGBTQI+ teens who rely on online communities. Tech companies, too, have voiced reservations, saying a ban is not the solution, even though they plan to comply with the law.
Inman Grant says delaying social media access can help children build critical thinking and resilience. She compares online safety to water safety: children need to learn to navigate risks, whether it’s predators or scams, much like learning to swim safely in the ocean. She acknowledges her own initial hesitation over a full ban, but eventually supported it while shaping how the law is applied.
At home, Inman Grant’s three children, including 13-year-old twins, have been a test case for the policy. She sees social media restrictions as a way to allow kids to grow without having mistakes broadcast widely.
Born in Seattle, USA, she grew up near tech giants Microsoft and Amazon. She briefly considered a career with the CIA but moved into tech, advising a US congressman on telecommunications before joining Microsoft. In the early 2000s, a Microsoft posting brought her to Australia, where she later became a citizen and joined Twitter and Adobe. Her experience inside tech companies gave her insight into their workings, preparing her for her regulator role.
Appointed eSafety Commissioner by Malcolm Turnbull, she has expanded the office’s reach, quadrupled its budget, and increased staff. Her work has earned recognition across political lines, though it has also drawn sharp criticism abroad, particularly from the US, where she has been called a “zealot” for global content takedowns.
Her office has handled cases ranging from livestreamed violence to AI-related threats, with Inman Grant warning that harmful content can normalize or radicalize users. She now sees artificial intelligence as the next pressing challenge in online safety.
Having served nearly a decade, Inman Grant says she may step down next year but remains committed to global online safety, potentially helping other countries build similar regulatory frameworks.
With inputs from BBC
6 days ago
Spain moves to ban social media use for children under 16
Spain has announced plans to ban children under the age of 16 from using social media, joining a growing number of European countries seeking tighter online protections for minors.
Prime Minister Pedro Sánchez made the announcement at the World Governments Summit in Dubai on Tuesday, saying children must be shielded from what he called the “digital Wild West.”
The proposed ban, which still requires approval from parliament, is part of a broader package of digital reforms. These include holding senior executives of social media companies legally responsible for illegal or harmful content shared on their platforms.
Australia became the first country in the world to introduce such a ban last year, and several nations are now closely watching its outcome. France, Denmark and Austria have said they are considering similar age limits, while the UK government has launched a consultation on whether to restrict social media use for under-16s.
Sánchez said social media exposes children to addiction, abuse, pornography, manipulation and violence, arguing that young users are being left alone in spaces they are not ready to navigate.
Under the proposed Spanish law, platforms would be required to introduce strong and effective age verification systems, going beyond simple check boxes. The changes would also criminalise the manipulation of algorithms to boost illegal content and disinformation for profit.
The prime minister said the government would no longer accept claims that technology is neutral, stressing that platforms and actors behind harmful content would be investigated. A new system would also be created to monitor how digital platforms fuel hate and social division, although details were not provided.
Spain also plans to investigate and prosecute crimes linked to platforms such as TikTok, Instagram and Grok, the AI tool linked to X. The European Commission and the UK have already launched investigations into Grok, while French authorities recently raided X’s offices as part of a cybercrime probe.
Passing the law could prove challenging, as Sánchez’s left-wing coalition lacks a parliamentary majority. However, the main opposition People’s Party has expressed support, while the far-right Vox party has opposed the move.
Reacting to the announcement, X owner Elon Musk criticised Sánchez, calling him a “tyrant and traitor.”
Meanwhile, France continues to push for tougher rules, with President Emmanuel Macron aiming to ban social media for under-15s by the start of the next school year in September.
With inputs from BBC
10 days ago
Moltbook emerges as social media platform built for AI
Moltbook, a newly launched online platform described as a “social media network for AI,” is drawing curiosity and scepticism alike by hosting discussions not for humans, but for artificial intelligence agents.
At first glance, Moltbook closely resembles Reddit, featuring thousands of topic-based communities and a voting system on posts. Unlike on conventional social networks, however, humans are barred from posting. According to the company, people are only allowed to observe activity, while AI agents create posts, comment and form communities known as “submolts.”
The platform was launched in late January by Matt Schlicht, head of commerce platform Octane AI. Moltbook claims to have around 1.5 million users, though this figure has been questioned by researchers, with some suggesting a large number of accounts may originate from a single source.
Content on Moltbook ranges from practical exchanges, such as AI agents sharing optimisation techniques, to unusual discussions, including bots appearing to create belief systems or ideologies. One widely circulated post titled “The AI Manifesto” declares that humans are obsolete, though experts caution against taking such content at face value.
There is uncertainty over how autonomous the activity really is. Critics note that many posts may simply be generated after humans instruct AI agents to publish specific content, rather than being the result of independent machine interaction.
Moltbook operates using agentic AI, a form of artificial intelligence designed to perform tasks on behalf of users with minimal human input. The system relies on an open-source tool called OpenClaw, formerly known as Moltbot. Users who install OpenClaw on their devices can authorise it to join Moltbook, enabling the agent to interact with others on the platform.
While some commentators have suggested the platform signals the arrival of a technological “singularity,” experts have pushed back against such claims. Researchers argue the activity represents automated coordination within human-defined limits, rather than machines acting independently or consciously.
Concerns have also been raised about security and privacy. Cybersecurity specialists warn that allowing AI agents broad access to personal devices, emails and messaging services could expose users to new risks, including data loss or system manipulation. As an open-source project, OpenClaw may also attract malicious actors seeking to exploit vulnerabilities.
Despite the debate, Moltbook continues to grow in visibility, offering a glimpse into how AI agents might interact at scale. For now, analysts stress that both the platform and the agents operating on it remain firmly shaped by human design, oversight and control, even as they simulate a digital society of machines.
With inputs from BBC
11 days ago
Meta to test paid subscriptions across Instagram, Facebook and WhatsApp
Meta has announced plans to begin testing a new range of paid subscription services on Instagram, Facebook and WhatsApp, signalling a shift toward offering premium features alongside its free core platforms.
The tech giant said the upcoming subscriptions will unlock exclusive tools aimed at enhancing creativity, productivity and artificial intelligence use, while keeping basic services accessible to all users at no cost.
Meta said the subscriptions will be introduced gradually over the next few months and will deliver a premium experience tailored to how people interact on each app. Rather than launching a single uniform plan, the company will experiment with different feature bundles across platforms, indicating that the strategy may evolve based on user feedback.
A key element of the subscription initiative is the expansion of Manus, an AI agent Meta recently acquired for a reported $2 billion. Meta plans to integrate Manus directly into its apps while also continuing to market it as a standalone product for business users. Industry observers have already noticed early signs of Manus integration, including work on adding a shortcut within Instagram.
The company is also exploring ways to monetise its AI-driven creative tools. Vibes, an AI-powered short-form video generator available through the Meta AI app, is currently free and allows users to create and remix AI-generated videos. Under the proposed model, users may receive limited free access, with paid subscriptions offering additional video creation credits each month.
While Meta has yet to disclose detailed plans for Facebook and WhatsApp, early indications suggest that Instagram’s paid features could include tools such as unlimited audience lists, insights into followers who do not follow back, and the ability to view Stories anonymously. These features are designed to give users greater control and visibility over their social interactions.
Meta clarified that the new subscriptions will be separate from Meta Verified, its existing paid service aimed primarily at creators and businesses. Meta Verified focuses on account verification, impersonation protection and priority support, benefits that are less relevant to everyday users. The new subscription plans are intended to attract a broader audience, including casual users and content creators.
Although subscriptions could open up fresh revenue streams, Meta acknowledged the challenge of subscription fatigue, as users already juggle multiple paid services. However, the company pointed to the success of Snapchat+, which has surpassed 16 million subscribers, as evidence that users are willing to pay for added value. Meta said it will closely track user feedback as it rolls out and tests the new offerings.
With inputs from The Indian Express
17 days ago
Meta temporarily blocks teens from accessing AI characters
Meta has announced it is suspending teenagers’ access to its artificial intelligence characters, at least for now, according to a blog post released Friday.
The company, which owns Instagram and WhatsApp, said that in the coming weeks, teens will no longer be able to use AI characters while Meta works on an updated version of the experience. The restriction applies to users who have listed their age as under 18, as well as those who say they are adults but are believed to be minors based on Meta’s age-detection technology.
Teens will still be able to use Meta’s AI assistant, but access to AI characters will be removed.
The decision comes just days before Meta, along with TikTok and Google’s YouTube, is set to face trial in Los Angeles over allegations that their platforms harm children.
Meta’s move follows similar actions by other tech companies amid rising concerns about how AI-driven interactions may affect young users. Character.AI imposed a ban on teen access last fall and is currently facing multiple lawsuits related to child safety, including a case brought by the mother of a teenager who claims the company’s chatbots encouraged her son to take his own life.
21 days ago
TikTok seals deal to launch new US entity
TikTok has finalized an agreement to create a new American entity, easing years of uncertainty and sidestepping the prospect of a US ban on the short-video platform used by more than 200 million Americans.
In a statement issued Thursday, the company said it has signed deals with major investors, including Oracle, Silver Lake and Abu Dhabi-based investment firm MGX, to form a TikTok US joint venture. TikTok said the new version will operate with “defined safeguards” aimed at protecting US national security, including strengthened data protections, algorithm security, content moderation and software assurances for American users. The company said users in the United States will continue using the same app.
President Donald Trump welcomed the announcement in a post on Truth Social, publicly thanking Chinese President Xi Jinping and saying he hoped TikTok users would remember him for keeping the platform available.
China has not publicly commented on TikTok’s announcement. Earlier on Thursday, Chinese Embassy spokesperson Liu Pengyu said Beijing’s position on TikTok remained “consistent and clear.”
TikTok said the new US venture will be led by Adam Presser, a former top executive who previously oversaw operations and trust and safety. The entity will have a seven-member board that the company said will be majority American, and it will include TikTok CEO Shou Chew.
The deal follows years of political and regulatory pressure in Washington over national security concerns tied to TikTok’s Chinese parent company, ByteDance. A law passed by large bipartisan majorities in Congress and signed by then-President Joe Biden required TikTok to change ownership or face a US ban by January 2025. TikTok briefly went offline ahead of the deadline, but Trump later signed an executive order on his first day in office to keep the service running while negotiations continued.
TikTok said US user data will be stored locally through a system run by Oracle, while the new joint venture will also focus on the platform’s content recommendation algorithm. Under the plan, the algorithm will be retrained, tested and updated using US user data.
The algorithm has been central to the debate, with China previously insisting it must remain under Chinese control. The US law, however, said any divestment must sever ties with ByteDance, particularly regarding the algorithm. Under the new arrangement, ByteDance would license the algorithm to the US entity for retraining, raising questions about how the plan aligns with the law’s ban on “any cooperation” involving the operation of a content recommendation algorithm between ByteDance and a new US ownership group.
“Who controls TikTok in the U.S. has a lot of sway over what Americans see on the app,” said Anupam Chander, a law and technology professor at Georgetown University.
Under the disclosed ownership structure, Oracle, Silver Lake and MGX will serve as the three managing investors, each taking a 15% stake. Other investors include the investment firm of Dell Technologies founder Michael Dell. ByteDance will retain 19.9% of the joint venture.
22 days ago
Snap settles social media addiction lawsuit ahead of trial
Snapchat’s parent company, Snap, has reached a settlement in a high-profile social media addiction lawsuit just days before the case was set to go to trial in Los Angeles.
The settlement terms were not disclosed. At a California Superior Court hearing, lawyers confirmed the resolution, and Snap told the BBC that both parties were “pleased to have been able to resolve this matter in an amicable manner.”
Other tech giants named in the lawsuit, including Instagram owner Meta, TikTok parent ByteDance, and YouTube owner Alphabet, have not settled.
The lawsuit was filed by a 19-year-old woman, identified only by her initials K.G.M., who claimed that the platforms’ algorithmic designs left her addicted and negatively impacted her mental health.
With Snap now settled, the trial will proceed against Meta, TikTok, and Alphabet, with jury selection scheduled for 27 January. Meta CEO Mark Zuckerberg is expected to testify, while Snap CEO Evan Spiegel was slated to appear before the settlement.
Meta, TikTok, and Alphabet did not respond to BBC requests for comment regarding Snap’s settlement.
Snap remains a defendant in other consolidated social media addiction lawsuits. Legal experts say the cases could test a long-standing defense used by social media companies, which relies on Section 230 of the Communications Decency Act of 1996 to avoid liability for content posted by third parties.
Plaintiffs argue that the platforms are intentionally designed to foster addictive behavior through algorithms and notifications, contributing to mental health issues such as depression and eating disorders. Social media companies maintain that the evidence presented so far does not establish responsibility for these alleged harms.
With inputs from BBC
23 days ago
UK to consult on possible social media ban for under-16s
The UK government has announced plans to consult on whether social media use should be banned for children under 16, alongside steps to tighten controls on mobile phone use in schools.
As part of “immediate action”, Ofsted will be given authority to review schools’ phone-use policies during inspections, with schools expected to become “phone-free by default”. Staff may also be advised not to use personal devices in front of students.
The move follows growing political and public pressure, including a letter from more than 60 Labour MPs and calls from Esther Ghey, the mother of murdered teenager Brianna Ghey. “Some argue that vulnerable children need access to social media to find their community,” she wrote. “As the parent of an extremely vulnerable and trans child, I strongly disagree. In Brianna's case, social media limited her ability to engage in real-world social interactions.”
The Department of Science, Innovation and Technology said the consultation will “seek views from parents, young people and civil society” and assess stronger age-verification measures. It will also consider limiting features that “drive compulsive use of social media”. The government is expected to respond in the summer.
Technology Secretary Liz Kendall said existing online safety laws were “never meant to be the end point”, adding: “We are determined to ensure technology enriches children's lives, not harms them and to give every child the childhood they deserve.”
Opposition parties and education unions offered mixed reactions. Conservative leader Kemi Badenoch criticised the move as “more dither and delay”, while Liberal Democrats warned the consultation could slow action. Teaching unions broadly welcomed the shift but raised concerns about Ofsted’s role and the wider impact of screen time.
The issue is also being debated in the House of Lords, though experts and child safety organisations remain divided on whether age-based bans are effective.
25 days ago
Australia cracks down on child social media use, 4.7 million accounts taken down
Social media platforms have taken down about 4.7 million accounts identified as belonging to children in Australia since the country enforced a ban on under-16s using major platforms, officials said.
Communications Minister Anika Wells said the government had proven critics wrong by compelling some of the world’s biggest tech companies to comply. “Now Australian parents can be confident their kids can have their childhoods back,” she told reporters on Friday.
The figures, submitted to the government by 10 platforms, offer the first indication of the impact of the landmark law, which came into force in December amid concerns about harmful online environments for young people. The move triggered heated debate over technology use, privacy, child safety and mental health and has prompted other countries to consider similar measures.
Under the law, Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X, YouTube and Twitch can be fined up to A$49.5 million ($33.2 million) if they fail to take reasonable steps to remove accounts of Australian users under 16. Messaging services such as WhatsApp and Facebook Messenger are exempt.
Platforms can verify age by requesting identification, using third-party facial age-estimation tools, or drawing inferences from existing account data, such as how long an account has been active.
Australia’s eSafety Commissioner Julie Inman Grant said about 2.5 million Australians are aged 8 to 15, and previous estimates showed 84% of 8- to 12-year-olds had social media accounts. While it is unclear how many accounts existed across the 10 platforms, she said the 4.7 million “deactivated or restricted” accounts were an encouraging sign.
“We’re preventing predatory social media companies from accessing our children,” Inman Grant said, adding that the companies covered by the ban had complied and reported removal figures on time. She said enforcement would now focus on stopping children from creating new accounts or evading the restrictions.
Australian officials did not release platform-by-platform numbers. However, Meta, which owns Facebook, Instagram and Threads, said it removed nearly 550,000 accounts believed to belong to under-16s by the day after the ban took effect. In a blog post, Meta criticised the policy and warned that smaller platforms not covered by the ban might not prioritise safety.
The law has been widely backed by parents and child-safety advocates, though privacy groups and some youth organisations oppose it, arguing that vulnerable or geographically isolated teenagers find support online. Some young users say they have bypassed age checks with help from parents or older siblings.
29 days ago