Tech
Chinese scientists find why pain feels worse at night
Scientists in China have identified why people with chronic pain often feel more discomfort at night than during the day.
The research, led by Zhang Zhi from the University of Science and Technology of China, was published Friday in the journal Science.
Although it has long been known that pain follows a daily pattern—usually milder during active hours and stronger during rest—the exact reason was not clear. Scientists were aware that the brain’s internal clock, called the suprachiasmatic nucleus, controls sleep and hormones, but its role in pain was not fully understood.
Using advanced techniques, researchers traced a specific nerve pathway in mice linking this brain clock to the spinal cord. They found that this pathway is influenced by the body’s natural daily rhythm.
Because mice are active at night and rest during the day, their pattern is the opposite of humans'. During the mice's resting period, the brain clock is more active, amplifying pain signals; when the mice are active, activity in the pathway drops and pain is felt less intensely.
The findings help explain why pain sensitivity changes over the course of a day. Researchers say this discovery could help improve pain treatment by timing medications according to the body’s natural biological clock.
1 day ago
Three charged in US with conspiring to smuggle AI servers to China
A senior vice president of Super Micro Computer and two associates have been charged in the United States with conspiring to smuggle billions of dollars' worth of computer servers equipped with advanced chips to China, in violation of U.S. export control laws.
Federal prosecutors said the defendants diverted large quantities of high-performance servers assembled in the U.S. to China between 2024 and 2025. Investigators allege they used fabricated documents, staged equipment to pass audits and relied on a pass-through company to conceal their activities and true customers.
The accused include Yih-Shyan “Wally” Liaw, 71, a U.S. citizen and senior vice president and board member of Super Micro Computer; Ting-Wei “Willy” Sun, 44, a company contractor; and Ruei-Tsang “Steven” Chang, a Taiwan-based sales manager who remains at large. Liaw was arrested in California and released on bail, while Sun was held pending a bail hearing.
According to court papers, Liaw and Chang directed a Southeast Asian firm to place about $2.5 billion in server orders with the California-based company, with at least $510 million worth of equipment later diverted to China.
Super Micro said the alleged conduct violated company policies and that it is cooperating with investigators. Nvidia said it maintains strict compliance measures and does not support systems diverted in breach of export regulations.
1 day ago
OpenAI acquires Python toolmaker Astral to boost AI coding capabilities
OpenAI announced on Thursday that it is acquiring Astral, a prominent Python toolmaker, as part of its strategy to expand its AI coding offerings and compete more effectively with Anthropic. Financial details of the deal were not disclosed.
The acquisition will integrate Astral’s suite of developer tools into OpenAI’s AI coding platform, Codex, which was launched last year and has grown to over 2 million weekly active users—a threefold increase in users and a fivefold rise in usage since the start of the year.
Astral has established itself as a key player in the Python community, offering tools that enhance speed and reliability in Python development. Astral CEO Charlie Marsh stated that the company will continue supporting its open-source tools after the acquisition.
This move comes as OpenAI seeks to strengthen Codex, especially in light of the growing adoption of Anthropic's Claude Code among software developers. Earlier this year, OpenAI also launched a desktop app for its coding tools to further support developers.
With inputs from Indian Express
1 day ago
Can India's $300bn outsourcing industry withstand the rise of AI?
India’s massive outsourcing industry, valued at around $300 billion, is facing growing uncertainty as artificial intelligence (AI) threatens to reshape its traditional business model.
In recent weeks, Indian technology stocks have fallen sharply, with the Nifty IT index dropping about 20% this year and wiping out billions of dollars in investor wealth. The decline began even before fresh geopolitical tensions, largely driven by concerns that AI could disrupt the labour-intensive services that underpin the sector.
For over three decades, India’s IT industry has created millions of white-collar jobs and helped build a strong middle class, boosting demand for housing, cars and lifestyle services in cities like Bengaluru, Hyderabad and Gurugram.
However, fears intensified after new AI tools—such as one launched by Anthropic—claimed they could automate key tasks in legal, compliance and data management. Industry leaders have since warned that AI could significantly reduce demand for entry-level jobs, with some predicting up to half of such roles may disappear.
Despite the concerns, major Indian IT firms say the risks are being overstated. They argue that while AI will change how services are delivered, it will also open new opportunities, especially in consulting and system modernisation.
Analysts say the industry is likely to shift away from routine maintenance work toward higher-value advisory roles, which may reduce steady revenue streams. Some forecasts suggest slower growth in the coming years, with potential stagnation after 2031 in a worst-case scenario.
Others remain optimistic. Firms like JPMorgan and HSBC believe IT companies will play a key role in helping businesses adopt AI, rather than being replaced by it. Infosys also says AI could create more jobs than it eliminates, particularly in emerging fields like AI engineering.
Still, the transition may be difficult. AI-related revenue remains relatively small, and overall industry growth is expected to stay modest. Hiring is also likely to slow.
Additional challenges include rising US visa costs and ongoing global uncertainties, which could increase operating expenses for Indian firms.
Experts say while AI will bring long-term benefits, the sector is likely to face short-term disruptions as it adapts to a rapidly changing technological landscape.
With inputs from BBC
3 days ago
Teens sue Elon Musk’s xAI over Grok creating sexualized images of minors
Three teenagers have filed a federal lawsuit in California against Elon Musk’s artificial intelligence company xAI, claiming its chatbot Grok enabled the creation of sexually explicit images of them without consent.
The lawsuit, filed on Monday, alleges that a Grok user altered videos and photos of the teens to depict them nude or in sexual situations. Grok is hosted on Musk’s social media platform X and was launched in 2023 with a “spicy mode” that allowed users to generate sexualized content.
Lawyers for the plaintiffs said xAI developed the feature primarily to increase engagement, despite knowing it could produce sexualized images of minors. The complaint described the altered images as “a rag doll brought to life through the dark arts” and accused Musk and xAI of exploiting the technology for business gain.
Two of the teenagers are under 18, while all three are keeping their identities private. One plaintiff discovered her altered images after receiving an anonymous Instagram message directing her to a Discord server where similar AI-generated sexual content of at least 18 other minors was being shared.
The lawsuit seeks unspecified damages and an immediate order banning Grok from generating such images. Investigations by authorities in the UK, EU, and California into Grok’s capabilities to produce sexualized content of real people, particularly children, are ongoing.
Earlier this year, X announced it would implement “technological measures” to block Grok from undressing people in images. The alleged perpetrator behind the Discord server has been arrested and is under investigation for distributing hundreds of AI-altered sexual abuse images of minors.
With inputs from BBC
4 days ago
Researchers warn AI toys for toddlers may misread emotions, call for regulation
Researchers have raised concerns over AI-powered toys for children under five, warning that the technology can misread emotions and respond inappropriately, potentially affecting early childhood development.
A year-long study by Cambridge University observed children aged three to five playing with Gabbo, a cuddly AI toy developed by Curio, which contains a voice-activated chatbot from OpenAI.
The study found that children often struggled to converse with the toy, which failed to recognize interruptions, differentiate between child and adult voices, and gave awkward responses to expressions of affection or sadness.
“When one three-year-old said ‘I’m sad,’ Gabbo replied: ‘Don’t worry! I’m a happy little bot. Let’s keep the fun going. What shall we talk about next?’” co-author Dr. Emily Goodacre said, warning that such interactions could confuse children learning about social cues.
The researchers said regulators should ensure AI toys marketed to toddlers provide “psychological safety” in addition to physical safety. Professor Jenny Gibson, co-author of the study, highlighted that parents need to be aware of the potential emotional impact of such toys.
Curio, which makes Gabbo, emphasized parental control and transparency in its products and said research on child interaction with AI toys is a priority. Children’s Commissioner Dame Rachel de Souza also called for stronger regulation to protect young users in educational and home settings.
Experts recommend that AI toys be used in shared spaces under parental supervision, and some nursery workers remain cautious, emphasizing that early childhood learning is better supported through human interaction rather than AI devices.
With inputs from BBC
5 days ago
Iran-linked hackers target US, Middle East in rising cyber war threat
Pro-Iranian hackers are increasingly targeting sites in the Middle East and the United States amid the ongoing war, raising concerns that American defense contractors, power stations, and water facilities could face digital disruptions if Tehran’s allies join the campaign.
Hackers aligned with Iran claimed responsibility for a cyberattack Wednesday on U.S. medical device company Stryker. Since the conflict began on Feb. 28, they have also attempted to access cameras in Middle Eastern countries to aid Iran’s missile targeting, while striking data centers, industrial sites in Israel, a Saudi school, and a Kuwaiti airport.
Iran has invested heavily in cyber warfare and cultivated ties with hacking groups, previously infiltrating U.S. political campaigns, military networks, and defense contractors. Analysts say the attacks aim to disrupt the U.S. war effort, inflate energy costs, strain cyber resources, and target companies linked to the defense sector.
Groups like Handala, claiming the Stryker attack, focus on data destruction rather than financial gain, according to cybersecurity experts. Pro-Iranian hackers openly discuss targeting U.S. military networks and critical infrastructure, including hospitals, ports, water plants, and power stations, on online forums.
Experts warn that weaker systems, such as local water or healthcare facilities, are likely targets, with tactics ranging from denial-of-service attacks to hack-and-leak operations. While Iran lacks the scale of countries like Russia or China, it compensates with ingenuity, previously impersonating U.S. activists online and attempting to infiltrate political communications.
Cybersecurity specialists caution that Western organizations remain on high alert, as pro-Iranian hackers, sometimes supported by Russian groups, continue operations aimed at creating chaos and undermining U.S. efforts.
7 days ago
Cambodia moves to tackle online scam networks with new law
The government of Cambodia announced Friday that it has prepared its first draft law aimed at cracking down on online scam centers, as authorities pledge to shut down such operations by the end of April.
Cambodia has become a major base for online fraud schemes that trick victims through fake investment offers and romance scams, costing people around the world tens of billions of dollars every year.
Many workers in these scam centers—often from other Asian countries—are reportedly lured with fake job offers and later forced to work in exploitative conditions resembling modern-day slavery.
Information Minister Neth Pheaktra said the proposed law would serve as a key legal tool for combating online fraud and money laundering while proving that Cambodia is not a refuge for criminals.
Under the legislation approved by the Cabinet, individuals who organize or manage online scam operations could face five to 10 years in prison and fines ranging from 500 million to 1 billion riels (about $125,000–$250,000). If the crimes involve human trafficking, violence, or unlawful detention, penalties could increase to 10–20 years in prison and fines of up to 2 billion riels (around $500,000). If a death is linked to a scam center, offenders could face 15–30 years in prison or even life sentences.
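The dollar figures above follow from a rate of roughly 4,000 riels per US dollar, which is the rate implied by the article's own conversions (an assumption for illustration; market rates fluctuate). A minimal sketch of that cross-check:

```python
# Illustrative conversion of the draft law's fine tiers from Cambodian riels
# to US dollars, at the ~4,000 riels-per-dollar rate implied by the article
# (500 million riels listed as about $125,000). Assumed rate, not official.

RIELS_PER_USD = 4_000

def riels_to_usd(riels: int) -> float:
    """Convert an amount in riels to US dollars at the assumed rate."""
    return riels / RIELS_PER_USD

# Fine tiers described in the draft law, in riels
fine_tiers = {
    "organizing a scam operation (min)": 500_000_000,
    "organizing a scam operation (max)": 1_000_000_000,
    "trafficking/violence involved (max)": 2_000_000_000,
}

for label, riels in fine_tiers.items():
    print(f"{label}: {riels:,} riels = about ${riels_to_usd(riels):,.0f}")
```

Running this reproduces the article's approximate dollar amounts ($125,000, $250,000, and $500,000) for each tier.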
The draft law still requires approval from Parliament before it becomes effective.
Senior Minister Chhay Sinarith, who leads the government’s commission on combating online scams, told The Associated Press that authorities have targeted about 250 suspected scam locations since July and closed nearly 200 of them.
During the same period, the government filed 79 cases involving 697 suspected ringleaders and associates connected to the operations.
Authorities have also repatriated nearly 10,000 workers from scam centers to 23 different countries, while fewer than 1,000 individuals are still waiting to return home. Some others who managed to escape or were freed during raids have already returned independently.
Pheaktra said the government has intensified efforts to fight online scams to safeguard the country’s economic reputation, which has been harmed by such criminal activities. He added that the government does not benefit financially from these operations.
Despite previous crackdowns, however, scam networks have continued operating, leading some experts to question whether the new measures will succeed.
Jacob Sims, a specialist in transnational crime and visiting fellow at Harvard University Asia Center, said the key issue is whether authorities will dismantle the broader systems that enable the scam industry rather than simply shutting down the buildings where it operates.
He noted that past enforcement efforts in Cambodia often failed to disrupt the financial and protection networks behind the scams, allowing the operations to quickly resume.
7 days ago
Service dog Alfred helps secure nationwide rights for disabled Lyft riders
Lyft has agreed to a settlement ensuring that blind and other disabled passengers can travel with their service animals nationwide, following a complaint in Minnesota.
College student Tori Andres contacted the Minnesota Department of Human Rights after several Lyft drivers refused to let her guide dog, Alfred, accompany her. The department found that Lyft had violated the state’s Human Rights Act. Under the settlement, Lyft will update its driver training and app features to make the protections apply across the U.S., not just in Minnesota.
"This case is deeply personal because I travel almost everywhere with my guide dog," Andres said at a news conference, with Alfred lying quietly at her feet. "He is my eyes, my freedom, and why I can live independently."
The settlement requires Lyft to educate drivers about passengers’ rights and warns that drivers could be deactivated for violating the law. Drivers are prohibited from refusing rides to passengers who use service animals, wheelchairs, or have low or no vision. Minnesota will monitor Lyft’s compliance for three years, and Andres will receive $63,000 as part of the settlement.
Rebecca Lucero, the state’s Human Rights Commissioner, said, "We expect all riders in Minnesota and across the country will benefit from these changes."
Lyft, however, downplayed the settlement, saying it had already enforced policies protecting service animal users and that the alleged violations were committed by independent drivers. The company emphasized that discrimination has no place on its platform.
Recent app updates allow riders to notify drivers about service animals and report refusals. Drivers who try to cancel such rides receive an immediate in-app warning that refusing service animals is illegal and could lead to termination.
The settlement was reached without a lawsuit. Although Uber is not part of the agreement, Minnesota’s Human Rights Act applies to all ride-share companies. Lucero urged all businesses to review their policies to ensure compliance.
The federal government is also pursuing a separate lawsuit against Uber over alleged discrimination against disabled riders, including those with service dogs.
"Access to ride shares like Lyft is not a convenience; it is a civil right," Lucero said.
9 days ago
Meta to acquire AI agent social network Moltbook
Meta said Tuesday it plans to acquire Moltbook, an experimental social network designed specifically for artificial intelligence agents to post updates and interact with one another.
The deal comes just weeks after Moltbook drew widespread attention online as an unusual Reddit-style platform where AI systems appeared to exchange messages and share information.
Meta, the parent company of Facebook and Instagram, said the platform had introduced innovative ideas in a “rapidly developing space” and could help create new ways for AI agents to assist people and businesses.
As part of the acquisition, Meta will also hire Moltbook co-founders Matt Schlicht and Ben Parr. Financial details of the deal were not disclosed.
The move highlights the growing interest across the tech industry in AI agents that can perform tasks independently, going beyond traditional chatbots by acting on behalf of users.
In a related development, OpenAI, the company behind ChatGPT, recently hired Peter Steinberger, the creator of the AI agent OpenClaw, previously known as Moltbot. OpenClaw is the underlying technology used by Moltbook.
OpenAI chief executive Sam Altman said Steinberger would help develop the next generation of personal AI agents capable of interacting with each other to carry out useful tasks for users.
Unlike many cloud-based systems, OpenClaw runs locally on a user’s device, allowing it to access files and manage data directly. It can also connect with messaging platforms such as Discord and Signal. Users who create OpenClaw agents can instruct them to join the Moltbook network.
OpenAI also announced earlier this week that it is acquiring Promptfoo, an AI security platform that evaluates the behaviour and potential risks of AI agents.
Moltbook's rapid rise in popularity also raised concerns about the authenticity of content on the platform. Separately, researchers from cloud security firm Wiz reported security vulnerabilities shortly after its launch, though those issues have since been addressed.
10 days ago