Tech-News
Three charged in US with conspiring to smuggle AI servers to China
A senior vice president of Super Micro Computer and two associates have been charged in the United States with conspiring to smuggle billions of dollars’ worth of computer servers equipped with advanced chips to China, in violation of U.S. export control laws.
Federal prosecutors said the defendants diverted large quantities of high-performance servers assembled in the U.S. to China between 2024 and 2025. Investigators allege they used fabricated documents, staged equipment to pass audits and relied on a pass-through company to conceal their activities and true customers.
The accused include Yih-Shyan “Wally” Liaw, 71, a U.S. citizen and senior vice president and board member of Super Micro Computer; Ting-Wei “Willy” Sun, 44, a company contractor; and Ruei-Tsang “Steven” Chang, a Taiwan-based sales manager who remains at large. Liaw was arrested in California and released on bail, while Sun was held pending a bail hearing.
According to court papers, Liaw and Chang directed a Southeast Asian firm to place about $2.5 billion in server orders from the California-based company, with at least $510 million later diverted to China.
Super Micro said the alleged conduct violated company policies and that it is cooperating with investigators. Nvidia said it maintains strict compliance measures and does not support systems diverted in breach of export regulations.
1 day ago
OpenAI acquires Python toolmaker Astral to boost AI coding capabilities
OpenAI announced on Thursday that it is acquiring Astral, a prominent Python toolmaker, as part of its strategy to expand its AI coding offerings and compete more effectively with Anthropic. Financial details of the deal were not disclosed.
The acquisition will integrate Astral’s suite of developer tools into OpenAI’s AI coding platform, Codex, which was launched last year and has grown to over 2 million weekly active users—a threefold increase in users and a fivefold rise in usage since the start of the year.
Astral has established itself as a key player in the Python community, offering tools that enhance speed and reliability in Python development. Astral CEO Charlie Marsh stated that the company will continue supporting its open-source tools after the acquisition.
This move comes as OpenAI seeks to strengthen Codex, especially in light of growing adoption of Anthropic’s Claude Code among software developers. Earlier this year, OpenAI also launched a desktop app for its coding tools to further support developers.
#From Indian Express
1 day ago
Can India’s $300bn outsourcing industry withstand the rise of AI?
India’s massive outsourcing industry, valued at around $300 billion, is facing growing uncertainty as artificial intelligence (AI) threatens to reshape its traditional business model.
In recent weeks, Indian technology stocks have fallen sharply, with the Nifty IT index dropping about 20% this year and wiping out billions of dollars in investor wealth. The decline began even before fresh geopolitical tensions, largely driven by concerns that AI could disrupt the labour-intensive services that underpin the sector.
For over three decades, India’s IT industry has created millions of white-collar jobs and helped build a strong middle class, boosting demand for housing, cars and lifestyle services in cities like Bengaluru, Hyderabad and Gurugram.
However, fears intensified after the makers of new AI tools—such as one launched by Anthropic—claimed the tools could automate key tasks in legal, compliance and data management. Industry leaders have since warned that AI could significantly reduce demand for entry-level jobs, with some predicting up to half of such roles may disappear.
Despite the concerns, major Indian IT firms say the risks are being overstated. They argue that while AI will change how services are delivered, it will also open new opportunities, especially in consulting and system modernisation.
Analysts say the industry is likely to shift away from routine maintenance work toward higher-value advisory roles, a change that could erode the steady revenue streams long-term maintenance contracts have provided. Some forecasts suggest slower growth in the coming years, with potential stagnation after 2031 in a worst-case scenario.
Others remain optimistic. Firms like JPMorgan and HSBC believe IT companies will play a key role in helping businesses adopt AI, rather than being replaced by it. Infosys also says AI could create more jobs than it eliminates, particularly in emerging fields like AI engineering.
Still, the transition may be difficult. AI-related revenue remains relatively small, and overall industry growth is expected to stay modest. Hiring is also likely to slow.
Additional challenges include rising US visa costs and ongoing global uncertainties, which could increase operating expenses for Indian firms.
Experts say while AI will bring long-term benefits, the sector is likely to face short-term disruptions as it adapts to a rapidly changing technological landscape.
With inputs from BBC
3 days ago
Iran‑linked hackers target US, Middle East in rising cyber war threat
Pro-Iranian hackers are increasingly targeting sites in the Middle East and the United States amid the ongoing war, raising concerns that American defense contractors, power stations, and water facilities could face digital disruptions if Tehran’s allies join the campaign.
Hackers aligned with Iran claimed responsibility for a cyberattack Wednesday on U.S. medical device company Stryker. Since the conflict began on Feb. 28, they have also attempted to access cameras in Middle Eastern countries to aid Iran’s missile targeting, while striking data centers, industrial sites in Israel, a Saudi school, and a Kuwaiti airport.
Iran has invested heavily in cyber warfare and cultivated ties with hacking groups, previously infiltrating U.S. political campaigns, military networks, and defense contractors. Analysts say the attacks aim to disrupt the U.S. war effort, inflate energy costs, strain cyber resources, and target companies linked to the defense sector.
Groups like Handala, claiming the Stryker attack, focus on data destruction rather than financial gain, according to cybersecurity experts. Pro-Iranian hackers openly discuss targeting U.S. military networks and critical infrastructure, including hospitals, ports, water plants, and power stations, on online forums.
Experts warn that weaker systems, such as local water or healthcare facilities, are likely targets, with tactics ranging from denial-of-service attacks to hack-and-leak operations. While Iran lacks the scale of countries like Russia or China, it compensates with ingenuity, previously impersonating U.S. activists online and attempting to infiltrate political communications.
Cybersecurity specialists caution that Western organizations remain on high alert, as pro-Iranian hackers, sometimes supported by Russian groups, continue operations aimed at creating chaos and undermining U.S. efforts.
7 days ago
Cambodia moves to tackle online scam networks with new law
The government of Cambodia announced Friday that it has prepared its first draft law aimed at cracking down on online scam centers, as authorities pledge to shut down such operations by the end of April.
Cambodia has become a major base for online fraud schemes that trick victims through fake investment offers and romance scams, costing people around the world tens of billions of dollars every year.
Many workers in these scam centers—often from other Asian countries—are reportedly lured with fake job offers and later forced to work in exploitative conditions resembling modern-day slavery.
Information Minister Neth Pheaktra said the proposed law would serve as a key legal tool for combating online fraud and money laundering while proving that Cambodia is not a refuge for criminals.
Under the legislation approved by the Cabinet, individuals who organize or manage online scam operations could face five to 10 years in prison and fines ranging from 500 million to 1 billion riels (about $125,000–$250,000). If the crimes involve human trafficking, violence, or unlawful detention, penalties could increase to 10–20 years in prison and fines of up to 2 billion riels (around $500,000). If a death is linked to a scam center, offenders could face 15–30 years in prison or even life sentences.
The draft law still requires approval from Parliament before it becomes effective.
Senior Minister Chhay Sinarith, who leads the government’s commission on combating online scams, told The Associated Press that authorities have targeted about 250 suspected scam locations since July and closed nearly 200 of them.
During the same period, the government filed 79 cases involving 697 suspected ringleaders and associates connected to the operations.
Authorities have also repatriated nearly 10,000 workers from scam centers to 23 different countries, while fewer than 1,000 individuals are still waiting to return home. Some others who managed to escape or were freed during raids have already returned independently.
Pheaktra said the government has intensified efforts to fight online scams to safeguard the country’s economic reputation, which has been harmed by such criminal activities. He added that the government does not benefit financially from these operations.
Despite previous crackdowns, however, scam networks have continued operating, leading some experts to question whether the new measures will succeed.
Jacob Sims, a specialist in transnational crime and visiting fellow at Harvard University Asia Center, said the key issue is whether authorities will dismantle the broader systems that enable the scam industry rather than simply shutting down the buildings where it operates.
He noted that past enforcement efforts in Cambodia often failed to disrupt the financial and protection networks behind the scams, allowing the operations to quickly resume.
7 days ago
Meta to acquire AI agent social network Moltbook
Meta said Tuesday it plans to acquire Moltbook, an experimental social network designed specifically for artificial intelligence agents to post updates and interact with one another.
The deal comes just weeks after Moltbook drew widespread attention online as an unusual Reddit-style platform where AI systems appeared to exchange messages and share information.
Meta, the parent company of Facebook and Instagram, said the platform had introduced innovative ideas in a “rapidly developing space” and could help create new ways for AI agents to assist people and businesses.
As part of the acquisition, Meta will also hire Moltbook co-founders Matt Schlicht and Ben Parr. Financial details of the deal were not disclosed.
The move highlights the growing interest across the tech industry in AI agents that can perform tasks independently, going beyond traditional chatbots by acting on behalf of users.
In a related development, OpenAI, the company behind ChatGPT, recently hired Peter Steinberger, the creator of the AI agent OpenClaw, previously known as Moltbot. OpenClaw is the underlying technology used by Moltbook.
OpenAI chief executive Sam Altman said Steinberger would help develop the next generation of personal AI agents capable of interacting with each other to carry out useful tasks for users.
Unlike many cloud-based systems, OpenClaw runs locally on a user’s device, allowing it to access files and manage data directly. It can also connect with messaging platforms such as Discord and Signal. Users who create OpenClaw agents can instruct them to join the Moltbook network.
OpenAI also announced earlier this week that it is acquiring Promptfoo, an AI security platform that evaluates the behaviour and potential risks of AI agents.
Moltbook’s rapid rise in popularity also raised concerns about the authenticity of content on the platform. Researchers from cloud security firm Wiz reported security vulnerabilities shortly after its launch, though those issues have since been addressed.
10 days ago
OpenAI unveils GPT-5.4 with stronger reasoning, coding and computer-use abilities
OpenAI has launched GPT-5.4, its newest frontier artificial intelligence model, introducing major upgrades in reasoning, coding and automated task execution.
The company said the model combines several of its recent advancements into a single system and is available in different variants, including GPT-5.4 Thinking and GPT-5.4 Pro.
One of the most significant features of GPT-5.4 is its 1 million-token context window, allowing it to analyse very large datasets such as entire codebases or extensive collections of documents more efficiently.
OpenAI also said GPT-5.4 is the first mainline model with built-in computer-use capabilities, enabling AI agents to directly interact with software to complete tasks. This means the system can operate computers by using screenshots, mouse clicks and keyboard commands, allowing it to work across applications and websites and automate complex workflows.
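The screenshot-and-action cycle described above can be sketched as a simple loop. This is an illustrative mock-up only, not OpenAI’s actual API: `take_screenshot`, `model_decide`, and `perform` are hypothetical stand-ins for the pieces a real agent framework would supply.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str       # "click", "type", or "done"
    payload: tuple  # e.g. coordinates for a click, text to type

def run_agent(task, take_screenshot, model_decide, perform, max_steps=50):
    """Repeatedly show the model a screenshot of the screen and execute
    whichever action it chooses, until it signals the task is done."""
    for _ in range(max_steps):
        shot = take_screenshot()               # observe the current screen
        action = model_decide(task, shot)      # model picks the next action
        if action.kind == "done":
            return True                        # task completed
        perform(action)                        # click/type on the real UI
    return False                               # gave up after max_steps
```

The step cap matters in practice: agents driving real software can loop indefinitely on an unexpected dialog, so bounding the number of observe-act iterations is a common safeguard.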
According to the company, the latest model introduces six major improvements, including enhanced coding abilities, better image perception and multimodal performance, stronger execution of long-running tasks and multi-step agent workflows, improved token efficiency for tool-heavy workloads, advanced web search and multi-source information synthesis, and more effective document-heavy analytics.
Addressing concerns about inaccuracies often referred to as “hallucinations,” OpenAI said GPT-5.4 is 33% less likely to produce false information compared with earlier models.
The company said the model is designed for professional environments and performs strongly in tasks such as legal analysis, financial modelling, creating presentation slides and writing or debugging code. Developers can also build AI agents capable of planning tasks, carrying them out and adjusting when problems arise.
The release reflects a broader shift in the evolution of AI systems. Early versions of ChatGPT primarily answered questions, while the GPT-4 era enabled more advanced capabilities such as writing essays, code and summaries. With GPT-5, models began to demonstrate stronger reasoning skills, and GPT-5.4 moves further by allowing AI systems to directly perform tasks on computers.
In practical use, GPT-5.4 can operate within common workplace tools such as spreadsheets and document editors. It can analyse financial data in Excel, automatically create dashboards, generate reports from raw datasets and process large legal or contractual documents.
For software development, the model can generate extensive codebases, detect and fix bugs, run automated software tests and even control web browsers through automation tools.
OpenAI’s latest release comes amid intensifying competition in the AI sector. Rival company Anthropic, led by Dario Amodei, recently introduced Claude Opus 4.6 and Claude Sonnet 4.6, which have been described as faster and more efficient for everyday enterprise tasks.
While the latest models from OpenAI and Anthropic focus on different strengths, the developments highlight a growing race to create AI systems capable of functioning as practical digital workers.
#From Indian Express
15 days ago
Apple unveils $599 devices targeting budget buyers
Apple has introduced a range of new products, including two devices priced at $599, as part of what CEO Tim Cook described as a “big week” of announcements aimed partly at budget-conscious buyers.
The new lineup was presented during hands-on media events in New York, London and Shanghai on Wednesday. The announcements include the new iPhone 17e, an entry-level laptop called MacBook Neo, updated iPad Air M4 tablets, refreshed monitors and upgraded chips for the company’s high-end laptops. Preorders for the devices began Wednesday.
The announcements come after the company reported record quarterly earnings driven by strong sales of the iPhone 17 series, although Apple has yet to roll out its previously promised artificial intelligence upgrades for Siri.
iPhone 17e
The iPhone 17e is designed for budget buyers and starts at $599, about $200 cheaper than the base iPhone 17. It uses the same A19 chip as the standard model and offers 256GB of storage, double the capacity of the previous iPhone 16e.
The phone features a 48-megapixel camera and a C1X modem that supports faster cellular speeds. It also includes Apple’s Super Retina display, Ceramic Shield 2 protection and MagSafe charging with Qi2 support.
The device will be available in black, white and light pink.
iPad Air update
Apple also introduced an updated iPad Air powered by the M4 chip. While the higher-end iPad Pro uses the newer M5 chip, the Air still provides strong performance for everyday tasks such as streaming, browsing, email and video editing.
The company increased the tablet’s memory from 8GB to 12GB without raising the price. The 11-inch model starts at $599, while the 13-inch version starts at $799, both with 128GB of storage.
MacBook and chip upgrades
Apple upgraded its MacBook Pro laptops with new M5 Pro and M5 Max chips aimed at improving performance and battery efficiency.
The 14-inch MacBook Pro with the M5 Pro chip starts at $2,199, while the 16-inch model starts at $2,699. Both offer 24GB of RAM and 1TB of storage, along with support for Wi-Fi 7 and Bluetooth 6.
The new MacBook Neo, Apple’s most affordable laptop yet, features a 13-inch display, an A18 Pro chip, 256GB storage and two USB-C ports. The base model costs $599, while a 512GB version with Touch ID is priced at $699. Students and educators can get a $100 discount.
Apple also refreshed the MacBook Air with the base M5 chip and doubled storage to 512GB. The 13-inch model starts at $1,099 and the 15-inch version at $1,299.
New monitors
The company also launched two 27-inch 5K monitors: the Studio Display and the higher-end Studio Display XDR. Both feature 5,120×2,880 resolution, 12-megapixel Center Stage cameras, six-speaker systems, two Thunderbolt 5 ports and two USB-C ports.
The Studio Display costs $1,599, while the advanced XDR version, which includes mini-LED backlighting and a 120Hz refresh rate, starts at $3,299.
16 days ago
TikTok rules out end-to-end encryption, citing user safety concerns
TikTok has said it will not introduce end-to-end encryption in direct messages, distancing itself from most major social media rivals and arguing that the feature could reduce user safety.
End-to-end encryption ensures that only the sender and recipient can read a message, making it one of the most secure communication methods available to the public. Platforms such as Facebook, Instagram, Messenger and X have adopted the system, saying it strengthens user privacy.
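The guarantee described above—that only holders of the shared key can read a message, while the platform carrying it cannot—can be illustrated with a deliberately simplified sketch. This is a toy XOR one-time pad in Python, not a real messaging protocol (production systems use authenticated ciphers and key-exchange schemes such as the Signal protocol):

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy one-time pad: XOR each byte with a random key byte.
    # The key must be as long as the message and never reused.
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known only to sender and recipient

ciphertext = encrypt(key, message)       # this is all the platform ever sees
assert decrypt(key, ciphertext) == message  # only key holders recover the text
```

The point the toy makes is structural: the server relays `ciphertext` without holding `key`, which is exactly why platforms (and law enforcement) cannot inspect end-to-end encrypted messages after the fact.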
However, critics argue that such encryption can make it more difficult to monitor and prevent harmful content, as it blocks technology companies and law enforcement agencies from accessing messages when concerns arise.
The debate is further complicated by long-standing allegations that TikTok’s links to the Chinese state could expose user data to risk. The company has repeatedly rejected those claims. Earlier this year, its US operations were separated from its global business following directives from American lawmakers.
In a security briefing at its London office, TikTok told the BBC that it believes end-to-end encryption would prevent police and safety teams from accessing direct messages when necessary. The company said its decision is aimed at protecting users, particularly young people, from online harm, and described the move as a conscious effort to differentiate itself from competitors.
TikTok says it has around 30 million monthly users in the UK and more than one billion worldwide. The platform is headquartered in Los Angeles and Singapore and is owned by Chinese technology firm ByteDance. It has faced ongoing scrutiny over its data protection practices.
Social media analyst Matt Navarra described TikTok’s approach as strategically bold but potentially controversial. He said the company could argue that it is prioritising proactive safety over absolute privacy, especially given concerns about grooming and harassment in direct messages.
At the same time, Navarra noted that the decision could place TikTok at odds with global privacy standards and may heighten concerns among some users about the company’s ownership.
Privacy advocates generally consider end-to-end encryption the strongest safeguard against hacking, corporate surveillance and intrusive state monitoring.
#From BBC
17 days ago
What to know before seeking health advice from an AI chatbot
As hundreds of millions of people turn to artificial intelligence chatbots for advice, tech companies are now rolling out tools designed specifically to answer health-related questions.
In January, OpenAI launched ChatGPT Health, a version of its chatbot that can review users’ medical records, wellness apps and data from wearable devices to respond to health queries. The service is currently available through a waiting list. Rival company Anthropic offers similar features to some users of its Claude chatbot.
Both firms stress that their large language models are not a replacement for doctors and should not be used to diagnose illnesses. Instead, they say the tools can explain complex test results, help users prepare for medical appointments and identify health trends in records and app data.
Experts say chatbots can provide more tailored responses than a standard Google search, especially when users share detailed health information such as age, prescriptions and medical history. “If used responsibly, these tools can offer useful information,” said Dr. Robert Wachter of the University of California, San Francisco. However, he advised users to provide as much relevant detail as possible to improve accuracy.
Doctors warn that AI should never be used during medical emergencies. Symptoms like chest pain, shortness of breath or severe headache require immediate medical attention. Even in non-urgent cases, experts recommend approaching AI-generated advice with caution. Dr. Lloyd Minor, dean of Stanford’s medical school, said major health decisions should not rely solely on chatbot responses.
Privacy is another key concern. Health data shared with AI companies is not protected under the US federal health privacy law known as HIPAA, which applies to doctors and hospitals. While OpenAI and Anthropic say health data is kept separate and not used to train their models, users must actively choose to share their information.
Early studies show mixed results. Research from Oxford University in 2024 found that people using AI chatbots did not make better health decisions than those using online searches. Although chatbots correctly identified medical conditions in written scenarios 95% of the time, they often struggled during real-life interactions.
Experts suggest seeking a second AI opinion or consulting a medical professional for added confidence.
18 days ago