Tech
Australia implements world-first social media ban for children under 16
Australia on Wednesday launched a landmark social media ban for children under 16, with Prime Minister Anthony Albanese hailing it as a step to give families control over tech giants and protect young users.
The new law affects major platforms including Facebook, Instagram, TikTok, Snapchat, X, YouTube, Reddit, Threads, Kick, and Twitch. Companies face fines of up to 49.5 million Australian dollars ($32.9 million) if they fail to remove accounts of underage users. The ban is enforced by eSafety Commissioner Julie Inman Grant, who said platforms already have the data and technology to comply. Notices will be sent to the companies Thursday, and preliminary compliance results will be reported by Christmas.
The measure has drawn mixed reactions. Many children posted farewell messages, while some tried to bypass age restrictions using face-altering tricks or VPNs. Communications Minister Anika Wells warned that attempts to evade detection would eventually fail, as platforms are required to routinely monitor accounts.
Albanese acknowledged implementation challenges, saying the law “won’t be perfect” but emphasized social responsibility for tech firms. Supporters cited online dangers as a key motivation, including the death of Mac Holdsworth, a sextortion victim, which inspired his father to advocate for age restrictions.
Young advocates like 12-year-old Flossie Brodribb praised the ban for promoting safer, healthier childhoods, while some families in the entertainment industry raised concerns about its impact on social media-based careers.
Privacy safeguards are included in the law. Platforms may use existing data, age-estimation technology, or third-party verification, but cannot compel users to submit government ID or use collected information for secondary purposes without consent, according to Privacy Commissioner Carly Kind.
Albanese and reform supporters framed the ban as a global example, signaling that Australia’s approach could inspire similar measures worldwide.
2 days ago
Social media ban for children under 16 starts in Australia
Australia has implemented a world-first law banning children under 16 from accessing social media platforms, a move Prime Minister Anthony Albanese described as empowering families and curbing the influence of tech giants.
The ban, effective Wednesday, affects platforms including Facebook, Instagram, TikTok, Snapchat, Reddit, YouTube, X, Threads, Kick, and Twitch. Companies failing to comply face fines of up to 49.5 million Australian dollars ($32.9 million). Parents reported some children were upset upon being locked out, and a few attempted to bypass age restrictions using virtual private networks or facial modifications.
The law will be enforced by eSafety Commissioner Julie Inman Grant, who said platforms already have the data and technology to implement the rules. Notices will be sent Thursday requiring details on account closures and age verification, with public updates expected before Christmas.
Albanese acknowledged the rollout would be challenging and “won’t be perfect,” emphasizing social responsibility for tech companies. Communications Minister Anika Wells said over 200,000 TikTok accounts had already been deactivated, warning children trying to evade detection would eventually be caught.
Advocates hailed the move as a vital step for child safety online. Wayne Holdsworth, whose son died in an online sextortion scam, called the law “a start” to protect children. Twelve-year-old Flossie Brodribb said the ban would help kids grow up “healthier, safer and more connected to the real world.”
Some families, however, warned of financial impacts. Simone Clements said the law affects her 15-year-old twins, who rely on social media for their careers as actors, models, and influencers.
Source: AP
2 days ago
Google facing new EU antitrust probe over content used for AI
Google is facing fresh antitrust scrutiny in Europe as EU regulators on Tuesday opened a new investigation into the company’s use of online content to develop its artificial intelligence models and services.
The European Commission, the bloc’s top competition watchdog, is examining whether Google violated EU rules by using content from web publishers and YouTube uploads for AI purposes without compensating creators or allowing them to opt out. Regulators are particularly concerned about two services — AI Overviews, which produces automated summaries at the top of search results, and AI Mode, which provides chatbot-style responses.
The probe will also assess whether Google uses YouTube videos under similar terms to train its generative AI models while restricting access for rival developers.
Officials said they aim to determine whether Google gave itself an unfair competitive edge through restrictive conditions or privileged access to content.
Google said the complaint “risks stifling innovation” and vowed to continue working with news and creative industries as they transition into the AI era.
The investigation falls under the EU’s traditional competition rules, not the newer Digital Markets Act designed to curb Big Tech dominance.
EU competition chief Teresa Ribera said AI innovation must not undermine core societal principles.
Last week, the Commission launched a separate antitrust probe into WhatsApp’s AI policy and fined Elon Musk's platform X €120 million for digital rule violations, prompting criticism from Trump administration officials.
The EU is “agnostic” about company nationality and focuses solely on potential anti-competitive behavior, spokeswoman Arianna Podesta said.
Google will be able to respond to the concerns, and U.S. authorities have been notified. The case has no deadline and could lead to fines of up to 10% of Google’s global annual revenue.
Source: AP
2 days ago
Microsoft to invest $17.5 billion in India for AI and Cloud infrastructure
Microsoft on Tuesday announced its largest-ever investment in Asia, pledging $17.5 billion over the next four years to expand India’s cloud computing and artificial intelligence infrastructure.
CEO Satya Nadella revealed the plan on X following a meeting with Indian Prime Minister Narendra Modi in New Delhi. He said the investment aims to help India develop “infrastructure, skills, and sovereign capabilities” to support its AI ambitions.
The announcement highlights intensifying global competition among tech giants in India, one of the world’s fastest-growing digital markets. In October, Google committed $15 billion to establish its first AI hub in Visakhapatnam.
Nadella’s three-day India visit includes policy discussions and participation in AI-focused events in Bengaluru and Mumbai. The government has set ambitious targets to become a global AI and semiconductor hub, offering incentives to attract multinational technology firms.
Microsoft, which has been in India for over three decades and employs more than 22,000 people, plans to scale up cloud and data center operations nationwide, including a new hyperscale data center expected to go live by mid-2026.
Source: AP
3 days ago
Bangladesh inks MoU with Thales Alenia Space to boost earth observation capabilities
Bangladesh Satellite Company Limited (BSCL) on Tuesday signed a Memorandum of Understanding (MoU) with Thales Alenia Space of Italy to enhance the country’s capacity in Earth Observation (EO) systems and expand the use of satellite imagery.
The MoU was signed at the conference room of the Posts and Telecommunications Division at the Secretariat in the presence of Foyez Ahmed Tayyeb, Special Assistant to the Chief Adviser, and Antonio Alessandro, Ambassador of Italy to Bangladesh.
Under the MoU both organisations will collaborate on local skills development, knowledge transfer, and pilot applications of EO data to support national priorities such as disaster management, climate monitoring, agriculture, and urban planning.
Foyez Ahmed thanked Italian Ambassador Antonio Alessandro for Italy’s continued support to Bangladesh, calling Italy “a trusted friend and partner.”
He said Bangladesh is prioritising advanced technologies, especially satellite and space-based solutions to strengthen land management, agriculture, disaster monitoring, climate resilience and national security.
“Every year around 25,000 technology graduates enter our workforce. Creating opportunities for them is our national responsibility,” he said, stressing the need for a National Satellite Image Repository and unified data access for all ministries.
He called for greater collaboration with Thales in capacity building, institutional training, university partnerships and cybersecurity, noting that global best practices can help Bangladesh accelerate digital transformation.
Foyez Ahmed said the MoU will open new doors of cooperation between Bangladesh and Italy in emerging technologies.
Italian Ambassador Antonio Alessandro expressed his pleasure at attending the ceremony.
He highlighted the significance of the partnership in Earth observation and satellite technologies, noting its strategic importance for national planning, disaster management, and environmental monitoring.
Antonio Alessandro said the programme combines optical and radar observation, which is particularly suited for Bangladesh given its weather conditions.
This partnership marks the beginning of a long-term collaboration between Italy and Bangladesh in advanced technologies.
The Ambassador praised Bangladesh’s commitment to digitalisation, modernisation, and technological advancement, emphasising Italy’s readiness to support the country’s journey toward becoming a technology-driven nation.
Post and Telecommunications Secretary Abdun Naser Khan, BSCL Managing Director and CEO Dr. Muhammad Imadur Rahman, representatives of Thales Alenia Space, and other officials from the Posts and Telecommunications Division were present at the event.
3 days ago
Paramount launches hostile bid for Warner Bros., aiming to top Netflix’s $72 billion offer
Paramount on Monday unveiled a hostile takeover attempt for Warner Bros. Discovery, setting the stage for an intense showdown with rival bidder Netflix for control of the company behind HBO, CNN and one of Hollywood’s most iconic studios — and with it, enormous influence over America’s entertainment industry.
The move comes just days after Warner executives agreed to Netflix’s $72 billion acquisition proposal. Paramount’s rival offer, valued at $74.4 billion, bypasses Warner’s leadership and appeals directly to shareholders with a richer deal that also includes purchasing Warner’s entire business — including its cable networks, which Netflix does not want.
Paramount said it went hostile only after making several earlier proposals that Warner management largely ignored following the company’s October announcement that it was open to a sale.
In its message to investors, Paramount emphasized that its bid includes $18 billion more in cash than Netflix’s and argued it would face fewer regulatory hurdles under President Donald Trump, who often inserts himself into major corporate decisions.
Over the weekend, Trump suggested that a Netflix–Warner merger “could be a problem” because of its potential market dominance and said he planned to review the deal personally.
Netflix, however, insists Warner will ultimately reject Paramount’s offer and that both regulators and Trump will support its acquisition. Co-CEO Ted Sarandos pointed to several conversations he has had with Trump focused on Netflix’s hiring and growth. “The president’s interest is the same as ours — protecting and creating jobs,” Sarandos said Monday.
Political spotlight intensifies
Paramount’s bid gained immediate attention in Washington, where lawmakers from both parties raised concerns about how the competing deals might affect streaming prices, movie theater jobs, and the diversity of media voices.
Paramount CEO David Ellison — whose family has deep ties to Trump — said the company had submitted six proposals to Warner over the last three months. He argued that his offer would strengthen Hollywood, boost competition rather than reduce it, and increase the number of films released in theaters.
Regulatory filings also revealed another possible advantage for Paramount: an investment firm run by Trump’s son-in-law Jared Kushner plans to join the deal. Also participating are sovereign wealth funds from three Persian Gulf countries, widely believed to be Saudi Arabia, Abu Dhabi and Qatar — nations where Trump’s family business has recently expanded with major real estate partnerships.
Recent editorial changes at CBS News, such as installing Bari Weiss as editor-in-chief after Paramount’s acquisition of The Free Press, could also appeal to conservatives who view the network as historically left-leaning.
Trump remains unpredictable
Despite the connections, Trump’s involvement may not favor Paramount consistently. On Monday, he criticized the company for allowing 60 Minutes to interview Rep. Marjorie Taylor Greene, calling the network “NO BETTER THAN THE OLD OWNERSHIP.”
The struggle for control of Warner escalated Friday when Netflix unexpectedly announced it had struck a deal with Warner management to acquire the studios behind “Harry Potter,” HBO Max, and the DC franchise.
Netflix’s proposal includes cash and stock valued at $27.75 per Warner share, for a total enterprise value of $82.7 billion including debt. Paramount is offering $30 per share and values the deal at $108 billion including assumed debt. Its offer expires Jan. 8 unless extended.
However, the two bids are difficult to compare because they would result in different acquisitions. Netflix’s offer only proceeds after Warner spins off its cable networks, meaning CNN and Discovery are excluded — and the transaction is unlikely to close for at least a year.
Although the DOJ typically evaluates such mergers, Trump has broken precedent by taking a hands-on approach, alarming experts. Usha Haley of Wichita State University said Trump’s personal interest may be driven by a desire for “greater control over the media,” pointing to Paramount’s ties to Trump supporter Larry Ellison.
John Mayo, an antitrust expert at Georgetown, noted that although political rhetoric may intensify, DOJ analysts are likely to maintain nonpartisan standards regardless of the administration.
On Monday, Paramount shares rose 9%, Warner Bros. climbed 4.4%, and Netflix stock dropped 3.4%.
3 days ago
AI-powered police body cameras tested on Edmonton’s “high-risk” watch list
Police in Edmonton, Canada, have begun testing artificial intelligence–enabled body cameras capable of recognizing about 7,000 people on a “high-risk” watch list — a trial that could signal a major shift toward adopting facial recognition technology long deemed too invasive for law enforcement in North America.
The program marks a sharp turn from 2019, when Axon Enterprise, Inc., the top body-camera manufacturer, backed away from facial recognition amid serious ethical concerns. Now, the new pilot — launched last week — is drawing intense scrutiny well beyond Edmonton, the northernmost city in North America with over a million residents.
Barry Friedman, the former chair of Axon’s AI ethics board who once helped block the technology, told the Associated Press he fears the company is moving ahead without adequate transparency, public discussion or expert review.
“These tools carry major costs and risks,” said Friedman, now an NYU law professor. “There must be clear evidence of their benefits before deploying them.”
Axon CEO Rick Smith insists the Edmonton trial is not a full-scale rollout but “early-stage field research” to evaluate performance and determine proper safeguards.
Testing the system in Canada allows the company to gather independent insights and refine oversight frameworks before any future U.S. consideration, Smith wrote in a blog post.
Edmonton police say the system is meant to enhance officer safety by detecting individuals flagged as violent, armed, dangerous or high-risk. The main list contains 6,341 names, with another 724 listed for serious outstanding warrants.
“We want this focused strictly on serious offenders,” said Ann-Li Cooke, Axon’s director of responsible AI.
The outcome could influence policing globally: Axon dominates the U.S. body-camera market and is expanding in Canada, recently beating Motorola Solutions for an RCMP contract. Motorola says it can enable facial recognition on its cameras but has purposely chosen not to use the feature for proactive identification — at least for now.
Alberta’s government mandated police body cameras provincewide in 2023 to increase accountability and speed up investigations. But real-time facial recognition remains divisive, with critics warning of surveillance overreach and racial bias. Some U.S. states have restricted the technology, while the European Union banned public real-time face scanning except in extreme cases.
In contrast, the U.K. has embraced it, with London’s system contributing to 1,300 arrests in two years.
Details about Edmonton’s pilot remain limited. Axon declined to disclose which third-party facial recognition model it uses. Police say the trial runs only in daylight through December due to Edmonton’s harsh winters and lighting challenges.
About 50 officers are participating, but they won’t see any real-time match alerts; results will be reviewed afterward. In the future, police hope it may warn officers of nearby high-risk individuals when responding to calls.
Privacy concerns are growing. Alberta’s privacy commissioner received a privacy impact assessment only on Dec. 2 — the day the trial was publicly announced — and is now reviewing it.
University of Alberta criminologist Temitope Oriola said Edmonton’s past tensions with Indigenous and Black communities make this experiment particularly sensitive. “Edmonton is essentially a testing ground,” he said. “It could lead to improvements — but that’s not guaranteed.”
Axon acknowledges accuracy challenges, especially under poor lighting, long distances or angles that disproportionately affect darker-skinned people. It insists every match will undergo human verification and says part of the test is determining how human reviewers must be trained to reduce risks.
Friedman argues Axon must release its findings — and that decisions about such technology shouldn’t be left to police agencies or private companies alone.
“A pilot can be valuable,” he said. “But it requires transparency and accountability. None of that is happening here. They’ve found a department willing to proceed, and they’re simply moving forward.”
4 days ago
Massachusetts court reviews lawsuit accusing Meta of making Facebook and Instagram addictive for minors
Massachusetts’ Supreme Judicial Court heard arguments Friday in a state lawsuit claiming Meta intentionally engineered Facebook and Instagram features to be addictive for young users.
Attorney General Andrea Campbell’s 2024 lawsuit asserts that Meta designed these features to boost profits, affecting hundreds of thousands of Massachusetts teens who use the platforms.
State Solicitor David Kravitz argued that the case focuses solely on Meta’s own design tools, saying the company’s internal research shows these features encourage addictive behavior. He emphasized that the claims do not involve Meta’s algorithms or moderation practices.
Meta rejected the accusations on Friday, insisting it has long worked to support youth safety. Company attorney Mark Mosier argued the lawsuit would improperly penalize Meta for standard publishing activities, which he said are protected under the First Amendment.
Mosier added: “If the state claimed the speech was false or fraudulent, its argument would be stronger. But acknowledging the content is truthful places this squarely under First Amendment protection.”
Several justices, however, seemed more focused on Meta’s engagement-driving functions—such as persistent notifications—rather than the content itself.
Justice Dalila Wendland said the state’s complaint centers on “incessant notifications designed to exploit teenagers’ fear of missing out,” not on Meta spreading false information.
Justice Scott Kafker questioned Meta’s argument that this is simply about choosing what to publish: “This isn’t about publishing—it’s about capturing attention. The content doesn’t matter; the goal is to keep users looking.”
Meta faces multiple state and federal lawsuits accusing the company of creating features—like nonstop notifications and infinite scrolling—to hook young users.
In 2023, 33 states sued Meta, alleging it illegally collected data on children under 13 without parental consent. Several states, including Massachusetts, filed separate lawsuits targeting addictive design and youth harms.
Investigative reports starting with a 2021 Wall Street Journal series revealed Meta knew Instagram could be harmful to teens, especially girls, citing internal research showing that 13.5% of teen girls said the app exacerbated suicidal thoughts and 17% said it worsened eating disorders.
Critics argue Meta has failed to meaningfully address youth mental-health risks. A 2024 report by whistleblower Arturo Bejar and several nonprofits claimed Meta prioritized publicity-friendly features over substantive safety improvements.
Meta said the report distorted the company’s efforts to protect teens.
5 days ago
EU fines Elon Musk’s X €120 million for violating social media regulations
The European Union on Friday slapped a 120 million euro ($140 million) fine on X, Elon Musk’s social media platform, for violating the bloc’s digital governance rules — a move likely to heighten tensions with Washington over issues of online speech.
The penalty follows a two-year investigation under the EU’s Digital Services Act (DSA), which requires major platforms to better protect users, curb illegal or harmful content, and increase transparency or face heavy sanctions. This is the first formal non-compliance ruling issued under the DSA.
EU officials said X committed three violations involving transparency, prompting the fine. The decision risks angering U.S. President Donald Trump, whose administration has criticized European digital rules as unfairly aimed at American tech firms.
U.S. Secretary of State Marco Rubio condemned the penalty on X, calling it an attack on American companies and citizens. Musk echoed Rubio’s message. Vice President JD Vance also accused the EU of trying to punish X for refusing to “censor” content.
EU officials rejected those claims. Commission spokesperson Thomas Regnier insisted the enforcement action is based solely on legal standards, not political motives or the nationality of companies.
X did not immediately respond to requests for comment.
Regulators first laid out their concerns in mid-2024, focusing on X’s blue checkmark system, which they described as a “deceptive design” that could mislead users and expose them to manipulation. Prior to Musk’s 2022 takeover, the badges signified verified public figures. Musk’s decision to sell checkmarks for $8 a month, without robust verification, left users unable to reliably assess account authenticity, the Commission said.
Officials also criticized X’s ad transparency database, which — under EU law — must display all ads, their funders, and target audiences. The Commission said X’s database suffers from poor design, limited accessibility, and long delays, hindering efforts to detect fraud and influence operations.
Additionally, the platform was accused of blocking researchers from accessing public data, limiting their ability to study risks faced by European users.
“Misleading users with blue checkmarks, hiding ad information, and restricting researchers have no place online in the EU,” said Henna Virkkunen, the Commission’s executive vice-president for tech sovereignty, security and democracy.
In a separate DSA case concluded Friday, TikTok agreed to modify its ad database to meet EU transparency standards.
6 days ago
EU fines Elon Musk’s X €120 million over Digital Services Act violations
European Union regulators on Friday imposed a €120 million ($140 million) fine on Elon Musk’s social media platform X for violating the bloc’s digital rules, citing risks that users could be exposed to scams and manipulation.
The European Commission’s decision follows a two-year investigation under the EU’s Digital Services Act (DSA), a wide-ranging law that obliges platforms to take responsibility for user protection and remove harmful or illegal content, with fines for noncompliance.
The Commission said X, formerly Twitter, breached three transparency rules under the DSA. Specifically, X’s blue checkmarks were deemed “deceptively designed,” potentially exposing users to scams. The platform also failed to meet ad database transparency standards, with delays and access barriers hindering research on digital ads, their sponsors, and target audiences.
“Deceiving users with blue checkmarks, obscuring ad information, and restricting researcher access have no place online in the EU. The DSA protects users,” said Henna Virkkunen, EU executive vice-president for tech sovereignty, security, and democracy.
The company did not immediately respond to requests for comment. The decision underscores EU regulators’ efforts to enforce stricter accountability for tech platforms and could provoke reactions from U.S. officials, who have previously criticized Brussels’ digital rules.
Source: AP
7 days ago