The Russian space agency says its Luna-25 spacecraft has crashed into the moon.
Russia's robotic Luna-25 lander crashed into the moon after spinning into an uncontrolled orbit, the country's space agency Roscosmos reported on Sunday.
“The apparatus moved into an unpredictable orbit and ceased to exist as a result of a collision with the surface of the moon,” read a statement from the agency.
Roscosmos said it lost contact with the spacecraft on Saturday after it ran into trouble while preparing for its pre-landing orbit, reporting an “abnormal situation” that its specialists were analyzing. “During the operation, an abnormal situation occurred on board the automatic station, which did not allow the maneuver to be performed with the specified parameters,” Roscosmos said in a Telegram post.
The spacecraft was scheduled to land on the south pole of the moon on Monday, racing to land on Earth’s satellite ahead of an Indian spacecraft.
The lunar south pole is of particular interest to scientists, who believe the permanently shadowed polar craters may contain water. The frozen water in the rocks could be transformed by future explorers into air and rocket fuel.
The launch earlier this month was Russia's first moon mission since 1976, when the country was part of the Soviet Union.
Tecno collaborates with Vogue to capture 'style in motion' at London Fashion Week
TECNO has partnered with VOGUE to capture the essence of “Style in Motion” during London Fashion Week.
TECNO teamed up with VOGUE to capture the underlying emotions and individuality that lie within the realm of minimalistic designs, according to a press release.
Under the guidance of renowned photographer Aria Shahrokhshahi, TECNO's CAMON 20 Series redefined fashion expression by capturing the power of emotional storytelling, it said.
“TECNO and VOGUE's collaboration is a celebration of emotional storytelling and the profound impact it can have on the fashion landscape. By emphasizing the connection between fashion and emotions, they have redefined the way fashion is perceived and experienced,” added the release.
TECNO also announced the upcoming TECNO CAMON 20 Series Fashion Festival in Bangladesh, scheduled to take place on July 17. The festival will showcase the latest trends and designs, bringing together fashion enthusiasts, industry professionals, and influencers from Bangladesh.
As fashion continues to evolve, TECNO's collaboration with VOGUE has set a new standard for capturing and celebrating the emotions and stories behind fashion. Through the integration of technology and fashion, they have paved the way for a future where style and emotions go hand in hand, the release also said.
Apple unveils a $3,500 headset as it wades into the world of virtual reality
Apple on Monday unveiled a long-rumored headset that will place its users between the virtual and real world, while also testing the technology trendsetter's ability to popularize new-fangled devices after others failed to capture the public's imagination.
After years of speculation, Apple CEO Tim Cook hailed the arrival of the sleek goggles — dubbed "Vision Pro" — at the company's annual developers conference held on a park-like campus in Cupertino, California, that Apple's late co-founder Steve Jobs helped design. The device will be capable of toggling between virtual reality, or VR, and augmented reality, or AR, which projects digital imagery while users can still see objects in the real world.
“This marks the beginning of a journey that will bring a new dimension to powerful personal technology," Cook told the crowd.
Although Apple executives provided an extensive preview of the headset's capabilities during the final half hour of Monday's event, consumers will have to wait before they can get their hands on the device and prepare to pay a hefty price to boot. Vision Pro will sell for $3,500 once it's released in stores early next year.
“It's an impressive piece of technology, but it was almost like a tease,” said Gartner analyst Tuong Nguyen. “It looked like the beginning of a very long journey."
Instead of merely positioning the goggles as another vehicle for exploring virtual worlds or watching more immersive entertainment, Apple framed the Vision Pro as the equivalent of owning an ultrahigh-definition TV, surround-sound system and high-end camera bundled into a single piece of hardware.
“We believe it is a stretch, even for Apple, to assume consumers would pay a similar amount for an AR/VR headset as they would for a combination of those products,” D.A. Davidson analyst Tom Forte wrote in a Monday research note.
Despite such skepticism, the headset could become another milestone in Apple’s lore of releasing game-changing technology, even though the company hasn’t always been the first to try its hand at making a particular device.
Apple's lineage of breakthroughs dates back to a bow-tied Jobs peddling the first Mac in 1984 — a tradition that continued with the iPod in 2001, the iPhone in 2007, the iPad in 2010, the Apple Watch in 2014 and its AirPods in 2016.
The company emphasized that it drew upon its past decades of product design during the years it spent working on the Vision Pro, which Apple said involved more than 5,000 different patents.
The headset will be equipped with 12 cameras, six microphones and a variety of sensors that will allow users to control it and various apps with just their eyes and hand gestures. Apple said the experience won't cause the recurring nausea and headaches that similar devices have caused in the past. The company also developed a technology to create a three-dimensional digital version of each user to display during video conferencing.
Although Vision Pro won't require physical controllers that can be clunky to use, the goggles will have to either be plugged into a power outlet or a portable battery tethered to the headset — a factor that could make it less attractive for some users.
“They’ve worked hard to make this headset as integrated into the real world as current technology allows, but it’s still a headset,” said Insider Intelligence analyst Yory Wurmser, who nevertheless described the unveiling as a “fairly mind-blowing presentation.”
Even so, analysts are not expecting the Vision Pro to be a big hit right away. That's largely because of the hefty price, but also because most people still can't see a compelling reason to wear something wrapped around their face for an extended period of time.
If the Vision Pro turns out to be a niche product, it would leave Apple in the same bind as other major tech companies and startups that have tried selling headsets or glasses equipped with technology that either thrusts people into artificial worlds or projects digital images onto scenery and things that are actually in front of them — a format known as “augmented reality.”
Facebook founder Mark Zuckerberg has been describing these alternate three-dimensional realities as the “metaverse.” It's a geeky concept that he tried to push into the mainstream by changing the name of his social networking company to Meta Platforms in 2021 and then pouring billions of dollars into improving the virtual technology.
But the metaverse largely remains a digital ghost town, although Meta's virtual reality headset, the Quest, remains the top-selling device in a category that so far has mostly appealed to video game players looking for even more immersive experiences. Cook and other Apple executives avoided referring to the metaverse in their presentations, describing the Vision Pro as the company's first leap into “spatial computing” instead.
The response to virtual, augmented and mixed reality has been decidedly ho-hum so far. Some of the gadgets deploying the technology have even been derisively mocked, with the most notable example being Google's internet-connected glasses released more than a decade ago.
Microsoft also has had limited success with HoloLens, a mixed-reality headset released in 2016, although the software maker earlier this year insisted it remains committed to the technology.
Magic Leap, a startup that stirred excitement with previews of a mixed-reality technology that could conjure the spectacle of a whale breaching through a gymnasium floor, had so much trouble marketing its first headset to consumers in 2018 that it has since shifted its focus to industrial, health care and emergency uses.
Wedbush Securities analyst Dan Ives estimated Apple will sell just 150,000 of the headsets during its first year on the market before escalating to 1 million headsets sold during the second year — a volume that would make the goggles a mere speck in the company's portfolio.
By comparison, Apple sells more than 200 million of its marquee iPhones a year. But the iPhone wasn't an immediate sensation, with sales of fewer than 12 million units in its first full year on the market.
Google is giving its dominant search engine an artificial-intelligence makeover
Google on Wednesday disclosed plans to infuse its dominant search engine with more advanced artificial-intelligence technology, a drive that's in response to one of the biggest threats to its long-established position as the internet's main gateway.
The gradual shift in how Google's search engine runs is rolling out three months after Microsoft's Bing search engine started to tap into technology similar to that which powers the artificially intelligent chatbot ChatGPT, which has created one of Silicon Valley's biggest buzzes since Apple released the first iPhone 16 years ago.
Google, which is owned by Alphabet Inc., already has been testing its own conversational chatbot called Bard. That product, powered by technology called generative AI that also fuels ChatGPT, has only been available to people accepted from a waitlist. But Google announced Wednesday that Bard will be available to all comers in more than 180 countries and more languages beyond English.
Bard's multilingual expansion will begin with Japanese and Korean before adding about 40 more languages.
Now Google is ready to test the AI waters with its search engine, which has been synonymous with finding things on the internet for the past 20 years and serves as the pillar of a digital advertising empire that generated more than $220 billion in revenue last year.
"We are at an exciting inflection point," Alphabet CEO Sundar Pichai told a packed developers conference in a speech peppered with one AI reference after another. "We are reimagining all our products, including search."
More AI technology will be coming to Google's Gmail with a "Help Me Write" option that will produce lengthy replies to emails in seconds, and a tool for photos called "Magic Editor" that will automatically doctor pictures.
The AI transition will begin cautiously with the search engine that serves as Google's crown jewel.
The deliberate approach reflects the balancing act that Google must negotiate as it tries to remain on the cutting edge while also preserving its reputation for delivering reliable search results — a mantle that could be undercut by artificial intelligence's penchant for fabricating information that sounds authoritative.
The tendency to produce deceptively convincing answers to questions — a phenomenon euphemistically described as "hallucinations" — has already been cropping up during the early testing of Bard, which like ChatGPT, relies on still-evolving generative AI technology.
Google will take its next AI steps through a newly formed search lab where people in the U.S. can join a waitlist to test how generative AI will be incorporated in search results. The tests also include the more traditional links to external websites where users can read more extensive information about queried topics. It may take several weeks before Google starts sending invitations to those accepted from the waitlist to test the AI-injected search engine.
The AI results will be clearly tagged as an experimental form of technology and Google is pledging the AI-generated summaries will sound more factual than conversational — a distinct contrast from Bard and ChatGPT, which are programmed to convey more human-like personas. Google is building in guardrails that will prevent the AI baked into the search engine from responding to sensitive questions about health — such as, "Should I give Tylenol to a 3-year-old?" — and finance matters. In those instances, Google will continue to steer people to authoritative websites.
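Google has not published how these guardrails work; one minimal way a search pipeline could route sensitive health and finance queries to authoritative sources instead of a generative model is a simple topic filter. The keyword lists and function names below are purely illustrative assumptions, not Google's implementation:

```python
# Hypothetical topic filter: send sensitive queries to curated,
# authoritative links instead of an AI-generated summary.
SENSITIVE_KEYWORDS = {
    "health": ["tylenol", "dosage", "symptom", "medication"],
    "finance": ["invest", "loan", "mortgage", "tax"],
}

def route_query(query: str) -> str:
    """Decide whether a query gets an AI summary or curated links."""
    q = query.lower()
    for topic, keywords in SENSITIVE_KEYWORDS.items():
        if any(k in q for k in keywords):
            return f"authoritative_links:{topic}"  # skip the AI summary
    return "ai_summary"  # safe to generate an AI overview

print(route_query("Should I give Tylenol to a 3-year-old?"))  # authoritative_links:health
print(route_query("best hiking trails near Shanghai"))        # ai_summary
```

A production system would rely on trained classifiers rather than keyword lists, but the routing principle is the same: detect the sensitive category first, then fall back to conventional results.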
Google isn't predicting how long it will be before its search engine will include generative AI results for all comers. The Mountain View, California, company has been under intensifying pressure to demonstrate how its search engine will maintain its leadership since Microsoft began to load AI into Bing, which remains a distant second to Google.
The potential threat caused the stock price of Google's parent, Alphabet Inc., to initially plunge, although it has recently bounced back to where it stood when Bing announced its AI plans to great fanfare. More recently, The New York Times reported Samsung is considering dropping Google as the default search engine on its widely used smartphones, raising the specter that Apple might adopt a similar tactic with the iPhone unless Google can show its search engine can evolve with what appears to be a forthcoming AI-driven revolution.
As it begins to ingrain AI in its search engine, Google is aiming to make Bard smarter by connecting with the next generation of a massive data set known as a "large language model," or LLM, that fuels it. The LLM that Bard relies on is dubbed Pathways Language Model, or PaLM. The AI in Google's search engine will draw upon the next-generation PaLM2 and another technology known as a Multitask Unified Model, or MUM.
Although people will have to wait to see how Google's search engine will deploy generative AI to find answers, a new tool will be immediately available. Google is adding a new filter called "Perspectives" that will focus on what people are saying online about whatever topic is entered into the search engine. The new feature will be placed alongside existing search filters for news, images and video.
Chatbots posing as 'journalists' operating AI-generated ‘content farms’: Investigation
Chatbots posing as journalists have been running almost 50 AI-generated “content farms”, according to an investigation by the anti-misinformation tracker NewsGuard.
The websites in question churn out information on politics, health, the environment, money and technology at a “high volume,” generating a quick turnover of material they can saturate with advertisements for profit, according to the investigation, The Guardian reports.
“Some publish hundreds of articles a day,” NewsGuard’s McKenzie Sadeghi and Lorenzo Arvanitis said. “Some of the content advances false narratives. Nearly all of the content features bland language and repetitive phrases, hallmarks of artificial intelligence.”
In all, 49 webpages in seven languages — English, Chinese, Czech, French, Portuguese, Tagalog, and Thai — have been identified as being "entirely or mostly" created by AI language models. Almost half of the sites had no obvious evidence of ownership or control, and only four could be contacted, said the report.
One such site, Famadillo.com, said it had an “expert” use AI to edit old articles that nobody read anymore, while GetIntoKnowledge.com acknowledged using "automation at some points where they are extremely needed."
Searching for typical error messages produced by services such as ChatGPT led to the discovery of AI-generated content. “All 49 sites identified by NewsGuard had published at least one article containing error messages commonly found in AI-generated texts, such as ‘my cutoff date in September 2021’, ‘as an AI language model’ and ‘I cannot complete this prompt’, among others,” the report also said.
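The detection approach the report describes, searching pages for telltale error phrases that leak from AI chatbots into published copy, can be sketched as a simple substring scan. This is an illustration of the general technique, not NewsGuard's actual tooling:

```python
# Telltale phrases that often leak into AI-generated articles,
# drawn from the examples quoted in the NewsGuard report.
TELLTALE_PHRASES = [
    "as an ai language model",
    "my cutoff date in september 2021",
    "i cannot complete this prompt",
]

def looks_ai_generated(article_text: str) -> bool:
    """Return True if the text contains a known AI error phrase."""
    lowered = article_text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

sample = "Sorry, as an AI language model I cannot complete this prompt."
print(looks_ai_generated(sample))  # True
```

Such phrase matching only catches careless sites that publish raw chatbot errors verbatim; content farms that lightly edit their output would evade it, which is why the report also relied on stylistic hallmarks like bland, repetitive language.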
While the sites share AI authorship, their levels of success vary: ScoopEarth.com has 124,000 Facebook followers for its celebrity biographies, while others, such as the finance site FilthyLucre.com, have not attracted a single follower on any platform.
AI 'godfather' Geoffrey Hinton quits Google, warns of dangers
Geoffrey Hinton, a prominent figure in the field of artificial intelligence (AI), has resigned from Google, according to a statement he made to the New York Times.
Hinton, who is considered the godfather of AI, expressed regret for his work and warned of the potential dangers associated with AI chatbots, reports BBC.
"Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be," he told BBC.
Hinton also acknowledged that his age, 75, played a role in his decision to retire from Google.
"I'm 75, so it's time to retire," he said.
Current AI systems like ChatGPT are the result of Dr. Hinton's groundbreaking work in the fields of deep learning and neural networks.
However, the cognitive psychologist and computer scientist told the BBC that the chatbot may soon surpass the amount of knowledge that a human brain can store.
"At the moment, what we're observing is that things like GPT-4 much surpass a person in terms of its broad understanding. It's not as skilled at reasoning, but it can already make simple decisions,” he said.
He suggested that these chatbots could soon surpass human intelligence and expressed concern about "bad actors" who might use AI for nefarious purposes.
He praised Google for their responsible approach to AI and emphasized the need for caution and vigilance in the development of these technologies.
"I actually want to say something positive about Google. And they're more credible if I'm not affiliated with Google," said Hinton.
Google's top scientist, Jeff Dean, said in a statement, "We remain committed to a responsible approach to AI."
China auto show highlights intense electric car competition
Global and Chinese automakers plan to unveil more than a dozen new electric SUVs, sedans and muscle cars this week at the Shanghai auto show, their first full-scale sales event in four years in a market that has become a workshop for developing electrics, self-driving cars and other technology.
Automakers are competing to roll out faster, more luxurious, more feature-drenched electric vehicles in the technology's biggest, most crowded market. The ruling Communist Party has invested billions of dollars in subsidies to buy an early lead in an emerging industry. Established global brands face intense competition from Chinese rivals.
For the first time since 2019, executives are flying in from the United States, Europe and Japan for the world's biggest auto show after anti-virus curbs that blocked most travel into China were lifted in December. Auto shows in the industry's biggest market went ahead during the pandemic, but on a smaller scale. Global brands were represented by employees of their China operations.
Drivers in the world's biggest auto market bought 5.4 million pure-electric vehicles last year, or about two-thirds of the global total of 8 million, plus 1.5 million gasoline-electric hybrids. That was more than one-quarter of total auto sales of 23.6 million. This year's EV sales are forecast to rise another 30%.
"Consumers lost interest in gasoline cars. That is the biggest challenge for foreign brands to compete in China," said industry analyst John Zeng of LMC Automotive. "They are going to have to show their best EV products."
Beijing is winding down government support and shifting the burden to automakers by requiring them to earn credits for EV sales. Manufacturers are pouring billions of dollars into developing models that can compete on price and features without subsidies. Many are forming partnerships to share soaring costs.
Auto Shanghai 2023 fills the cavernous Shanghai exhibition center, a 1.5 million-square-meter (16 million-square-foot) subcontinent of a building that is among the world's biggest.
Volkswagen AG, the country's top-selling brand, says it plans to display 28 models, half of them electrified. VW says it will debut its ID.7 limousine, which promises a 700-kilometer (435-mile) range on one charge.
China's BYD Auto, which competes with Tesla Inc. for the title of world's biggest-selling electric automaker, says it will display for the first time its U9 supercar from its luxury Yangwang brand. The automaker says the U9, with a 1 million yuan ($145,000) sticker price, can accelerate from zero to 100 kph (60 mph) in two neck-straining seconds.
China's auto sales peaked in 2017 at 24.7 million but collapsed in 2020 to 20.2 million after dealerships closed as part of efforts to contain COVID-19. They are recovering but are yet to return to the pre-pandemic level.
The ruling party's support for EV development is part of plans to gain wealth and global influence by transforming China into a creator of profitable technologies.
That campaign has strained relations with Washington and other trading partners, which are cutting off access to advanced processor chips used by makers of smartphones, electric cars and other high-tech products. China's own foundries can supply low-end chips used in many cars but not processors for artificial intelligence and other advanced functions.
Sales of gasoline-electric hybrids and pure-electric vehicles rose 26.2% over a year ago in the first three months of 2023 to 1.6 million, according to the China Association of Auto Manufacturers. Sales of pure electrics rose 14.4% to 1.2 million while hybrids increased 75.1% to 433,000.
Tesla and some other brands cut prices by 5% to 15% starting in January after sales growth slowed, though to still-robust levels compared with the slack U.S. and European markets. That prompted warnings the squeeze on an industry with dozens of fledgling brands might force smaller automakers into mergers or out of business.
China also is, along with the United States, a leader in development of self-driving taxis and trucks.
Baidu Inc., best known as a search engine operator, is the most prominent among developers that also include Pony.ai. Geely Group, owner of Volvo Cars, Lotus and Polestar, has announced plans for satellite-linked autonomous vehicles. Network equipment maker Huawei Technologies Ltd. is working on self-driving mining and industrial vehicles.
In 2022, Baidu and Pony.ai received China's first licenses to offer autonomous ride-hailing services in Beijing, with a safety driver aboard to take over in the event of an emergency. That came 18 months after Alphabet Inc.'s Waymo started driverless ride-hailing service in Phoenix, Arizona.
"We see very strong support from the government," said Jason Low of Canalys.
At the auto show, Chinese brand Aito plans to display its new M5 SUV with autonomous technology developed in an alliance with Huawei Technologies Ltd. The telecom equipment maker is expanding into the auto and other industries after U.S. sanctions imposed in a feud with Beijing over technology crushed Huawei's smartphone business.
China's market is so huge that even brands whose strongest selling point is roaring, gasoline-powered engines are embracing electrics.
BMW AG says its whole vehicle lineup at Auto Shanghai will be electrified. The German sport luxury brand says it will unveil two new models, the i7 M70L and XM Red Label, and show its M760Le in China for the first time.
Italy's Maserati, a Stellantis unit known for using high-performance Ferrari engines, plans to unveil its first electric SUV and says its electric sports car will get an Asia premiere.
Chinese luxury EV brand NIO Inc., which competes with Tesla at the premium end of the market, plans to display its latest SUV, the ES6. It promises a 610-kilometer (380-mile) range on one charge.
Mercedes-Benz plans to unveil an electric SUV under its luxury Maybach brand along with two other SUVs. The company also has EV joint ventures with BYD Auto and Geely Group.
Toyota says it plans to unveil two new models in its bZ line of zero-emissions vehicles. Nissan plans to display its Max-Out electric convertible concept car. Honda is debuting a new prototype for its China-focused e:N electric brand.
Despite such investments, Western and Japanese brands need to be more aggressive about EV development to keep up with China's rapid evolution, said LMC's Zeng. He said many take too long to create models abroad without Chinese input.
"The model they bring to China lags behind Chinese models by three or four years in driving range and equipment," Zeng said. "They have to learn to design and test cars in China for China."
Are robot waiters the future of restaurant industry?
You may have already seen them in restaurants: waist-high machines that can greet guests, lead them to their tables, deliver food and drinks and ferry dirty dishes to the kitchen. Some have cat-like faces and even purr when you scratch their heads.
But are robot waiters the future? It's a question the restaurant industry is increasingly trying to answer.
Many think robot waiters are the solution to the industry's labor shortages. Sales of them have been growing rapidly in recent years, with tens of thousands now gliding through dining rooms worldwide.
"There's no doubt in my mind that this is where the world is going," said Dennis Reynolds, dean of the Hilton College of Global Hospitality Leadership at the University of Houston. The school's restaurant began using a robot in December, and Reynolds says it has eased the workload for human staff and made service more efficient.
But others say robot waiters are little more than a gimmick, with a long way to go before they can replace humans. They can't take orders, and many restaurants have steps, outdoor patios and other physical obstacles they can't adapt to.
"Restaurants are pretty chaotic places, so it's very hard to insert automation in a way that is really productive," said Craig Le Clair, a vice president with the consulting company Forrester who studies automation.
Still, the robots are proliferating. Redwood City, California-based Bear Robotics introduced its Servi robot in 2021 and expects to have 10,000 deployed by the end of this year in 44 U.S. states and overseas. Shenzhen, China-based Pudu Robotics, which was founded in 2016, has deployed more than 56,000 robots worldwide.
"Every restaurant chain is looking toward as much automation as possible," said Phil Zheng of Richtech Robotics, an Austin-based maker of robot servers. "People are going to see these everywhere in the next year or two."
Li Zhai was having trouble finding staff for Noodle Topia, his Madison Heights, Michigan, restaurant, in the summer of 2021, so he bought a BellaBot from Pudu Robotics. The robot was so successful he added two more; now, one robot leads diners to their seats while another delivers bowls of steaming noodles to tables. Employees pile dirty dishes onto a third robot to shuttle back to the kitchen.
Now, Zhai only needs three people to do the same volume of business that five or six people used to handle. And they save him money. A robot costs around $15,000, he said, but a person costs $5,000 to $6,000 per month.
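Zhai's figures imply a very short payback period. A quick back-of-the-envelope calculation, using the numbers he cites (and the low end of his wage range), shows a robot recouping its purchase price in about three months:

```python
robot_cost = 15_000    # one-time purchase price Zhai cites, in dollars
monthly_wage = 5_000   # low end of the $5,000-$6,000/month he cites

# Months until the robot's price equals the wages it replaces
payback_months = robot_cost / monthly_wage
print(payback_months)  # 3.0
```

The comparison leaves out robot maintenance, downtime and the tasks a robot can't do, but it makes clear why operators facing labor shortages find the economics attractive.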
Zhai said the robots give human servers more time to mingle with customers, which increases tips. And customers often post videos of the robots on social media that entice others to visit.
"Besides saving labor, the robots generate business," he said.
Interactions with human servers can vary. Betzy Giron Reynosa, who works with a BellaBot at The Sushi Factory in West Melbourne, Florida, said the robot can be a pain.
"You can't really tell it to move or anything," she said. She has also had customers who don't want to interact with it.
But overall the robot is a plus, she said. It saves her trips back and forth to the kitchen and gives her more time with customers.
Labor shortages accelerated the adoption of robots globally, Le Clair said. In the U.S., the restaurant industry employed 15 million people at the end of last year, but that was still 400,000 fewer than before the pandemic, according to the National Restaurant Association. In a recent survey, 62% of restaurant operators told the association they don't have enough employees to meet customer demand.
Pandemic-era concerns about hygiene and adoption of new technology like QR code menus also laid the ground for robots, said Karthik Namasivayam, director of hospitality business at Michigan State University's Broad College of Business.
"Once an operator begins to understand and work with one technology, other technologies become less daunting and will be much more readily accepted as we go forward," he said.
Namasivayam notes that public acceptance of robot servers is already high in Asia. Pizza Hut has robot servers in 1,000 restaurants in China, for example.
The U.S. was slower to adopt robots, but some chains are now testing them. Chick-fil-A is trying them at multiple U.S. locations, and says it's found that the robots give human employees more time to refresh drinks, clear tables and greet guests.
Marcus Merritt was surprised to see a robot server at a Chick-fil-A in Atlanta recently. The robot didn't seem to be replacing staff, he said; he counted 13 employees in the store, and workers told him the robot helps service move a little faster. He was delighted that the robot told him to have a great day, and expects he'll see more robots when he goes out to eat.
"I think technology is part of our normal everyday now. Everybody has a cell phone, everybody uses some form of computer," said Merritt, who owns a marketing business. "It's a natural progression."
But not all chains have had success with robots.
Chili's introduced a robot server named Rita in 2020 and expanded the test to 61 U.S. restaurants before abruptly halting it last August. The chain found that Rita moved too slowly and got in the way of human servers. And 58% of guests surveyed said Rita didn't improve their overall experience.
Haidilao, a hot pot chain in China, began using robots a year ago to deliver food to diners' tables. But managers at several outlets said the robots haven't proved as reliable or cost-effective as human servers.
Wang Long, the manager of a Beijing outlet, said his two robots have both broken down.
"We only used them now and then," Wang said. "It is a sort of concept thing and the machine can never replace humans."
Eventually, Namasivayam expects that a certain percentage of restaurants — maybe 30% — will continue to have human servers and be considered more luxurious, while the rest will lean more heavily on robots in the kitchen and in dining rooms. Economics are on the side of robots, he said; the cost of human labor will continue to rise, but technology costs will fall.
But that's not a future everyone wants to see. Saru Jayaraman, who advocates for higher pay for restaurant workers as president of One Fair Wage, said restaurants could easily solve their labor shortages if they just paid workers more.
"Humans don't go to a full-service restaurant to be served by technology," she said. "They go for the experience of themselves and the people they care about being served by a human."
Italy temporarily blocks ChatGPT over privacy concerns
Italy is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach as it investigates a possible violation of stringent European Union data protection rules, the government's privacy watchdog said Friday.
The Italian Data Protection Authority said it was taking provisional action “until ChatGPT respects privacy,” including temporarily barring the company from processing Italian users' data.
U.S.-based OpenAI, which developed ChatGPT, didn’t immediately return a request for comment Friday.
While some public schools and universities around the world have blocked the ChatGPT website from their local networks over student plagiarism concerns, it’s not clear how Italy would block it at the national level.
The move also is unlikely to affect applications from companies that already have licenses with OpenAI to use the same technology driving the chatbot, such as Microsoft’s Bing search engine.
The AI systems that power such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writings they have ingested.
The Italian watchdog said OpenAI must report within 20 days what measures it has taken to ensure the privacy of users' data or face a fine of up to either 20 million euros (nearly $22 million) or 4% of annual global revenue.
The agency's statement cited the EU's General Data Protection Regulation and noted that ChatGPT suffered a data breach on March 20 involving “users' conversations" and information about subscriber payments.
OpenAI earlier announced that it had to take ChatGPT offline on March 20 to fix a bug that allowed some people to see the titles, or subject lines, of other users’ chat history.
“Our investigation has also found that 1.2% of ChatGPT Plus users might have had personal data revealed to another user,” the company said. “We believe the number of users whose data was actually revealed to someone else is extremely low and we have contacted those who might be impacted.”
Italy's privacy watchdog said there was no legal basis to justify OpenAI's “massive collection and processing of personal data” used to train the platform's algorithms, and noted that the company does not notify users whose data it collects.
The agency also said ChatGPT can sometimes generate — and store — false information about individuals.
Finally, it noted there's no system to verify users' ages, exposing children to responses "absolutely inappropriate to their age and awareness.”
The watchdog's move comes as concerns grow about the artificial intelligence boom. A group of scientists and tech industry leaders published a letter Wednesday calling for companies such as OpenAI to pause the development of more powerful AI models until the fall to give time for society to weigh the risks.
“While it is not clear how enforceable these decisions will be, the very fact that there seems to be a mismatch between the technological reality on the ground and the legal frameworks of Europe” shows there may be something to the letter's call for a pause “to allow for our cultural tools to catch up,” said Nello Cristianini, an AI professor at the University of Bath.
San Francisco-based OpenAI's CEO, Sam Altman, announced this week that he’s embarking on a six-continent trip in May to talk about the technology with users and developers. That includes a stop planned for Brussels, where European Union lawmakers have been negotiating sweeping new rules to limit high-risk AI tools, as well as visits to Madrid, Munich, London and Paris.
European consumer group BEUC called Thursday for EU authorities and the bloc’s 27 member nations to investigate ChatGPT and similar AI chatbots. BEUC said it could be years before the EU's AI legislation takes effect, so authorities need to act faster to protect consumers from possible risks.
“In only a few months, we have seen a massive take-up of ChatGPT, and this is only the beginning,” Deputy Director General Ursula Pachl said.
Waiting for the EU’s AI Act “is not good enough as there are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people.”
Not magic: Opaque AI tool may flag parents with disabilities
For the two weeks that the Hackneys’ baby girl lay in a Pittsburgh hospital bed weak from dehydration, her parents rarely left her side, sometimes sleeping on the fold-out sofa in the room.
They stayed with their daughter around the clock when she was moved to a rehab center to regain her strength. Finally, the 8-month-old stopped batting away her bottles and started putting on weight again.
“She was doing well and we started to ask when can she go home,” Lauren Hackney said. “And then from that moment on, at the time, they completely stonewalled us and never said anything.”
The couple was stunned when child welfare officials showed up, told them they were negligent and took their daughter away.
“They had custody papers and they took her right there and then,” Lauren Hackney recalled. “And we started crying.”
More than a year later, their daughter, now 2, remains in foster care. The Hackneys, who have developmental disabilities, are struggling to understand how taking their daughter to the hospital when she refused to eat could be seen as so neglectful that she’d need to be taken from her home.
They wonder if an artificial intelligence tool that the Allegheny County Department of Human Services uses to predict which children could be at risk of harm singled them out because of their disabilities.
The U.S. Justice Department is asking the same question. The agency is investigating the county’s child welfare system to determine whether its use of the influential algorithm discriminates against people with disabilities or other protected groups, The Associated Press has learned. Later this month, federal civil rights attorneys will interview the Hackneys and Andrew Hackney’s mother, Cynde Hackney-Fierro, the grandmother said.
Lauren Hackney has attention-deficit hyperactivity disorder that affects her memory, and her husband, Andrew, has a comprehension disorder and nerve damage from a stroke suffered in his 20s. Their baby girl was just 7 months old when she began refusing to drink her bottles. Facing a nationwide shortage of formula, they traveled from Pennsylvania to West Virginia looking for some and were forced to change brands. The baby didn’t seem to like it.
Her pediatrician first reassured them that babies sometimes can be fickle with feeding and offered ideas to help her get back her appetite, they said.
When she grew lethargic days later, they said, the same doctor told them to take her to the emergency room. The Hackneys believe medical staff alerted child protective services after they showed up with a baby who was dehydrated and malnourished.
That’s when they believe their information was fed into the Allegheny Family Screening Tool, which county officials say is standard procedure for neglect allegations. Soon, a social worker appeared to question them, and their daughter was sent to foster care.
Over the past six years, Allegheny County has served as a real-world laboratory for testing AI-driven child welfare tools that crunch reams of data about local families to try to predict which children are likely to face danger in their homes. Today, child welfare agencies in at least 26 states and Washington, D.C., have considered using algorithmic tools, and jurisdictions in at least 11 have deployed them, according to the American Civil Liberties Union.
The Hackneys’ story — based on interviews, internal emails and legal documents — illustrates the opacity surrounding these algorithms. Even as they fight to regain custody of their daughter, they can’t question the “risk score” Allegheny County’s tool may have assigned to her case because officials won’t disclose it to them. And neither the county nor the people who built the tool have ever explained which variables may have been used to measure the Hackneys’ abilities as parents.
“It’s like you have an issue with someone who has a disability,” Andrew Hackney said in an interview from their apartment in suburban Pittsburgh. “In that case … you probably end up going after everyone who has kids and has a disability.”
As part of a yearlong investigation, the AP obtained the data points underpinning several algorithms deployed by child welfare agencies, including some marked “CONFIDENTIAL,” offering rare insight into the mechanics driving these emerging technologies. Among the factors they have used to calculate a family’s risk, whether outright or by proxy: race, poverty rates, disability status and family size. They include whether a mother smoked before she was pregnant and whether a family had previous child abuse or neglect complaints.
What they measure matters. A recent analysis by ACLU researchers found that when Allegheny's algorithm flagged people who accessed county services for mental health and other behavioral health programs, that could add up to three points to a child’s risk score, a significant increase on a scale of 20.
Allegheny County spokesman Mark Bertolet declined to address the Hackney case and did not answer detailed questions about the status of the federal probe or critiques of the data powering the tool, including by the ACLU.
“As a matter of policy, we do not comment on lawsuits or legal matters,” Bertolet said in an email.
Justice Department spokeswoman Aryele Bradford declined to comment.
NOT MAGIC
Child welfare algorithms plug vast amounts of public data about local families into complex statistical models to calculate what they call a risk score. The number that’s generated is then used to advise social workers as they decide which families should be investigated, or which families need additional attention — a weighty decision that can sometimes mean life or death.
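The scoring described above — weighted factors summed into a number on a fixed scale, such as the 20-point scale the ACLU analysis cites — can be illustrated with a toy sketch. This is purely hypothetical: the actual variables and weights of tools like the Allegheny Family Screening Tool are not public in this form, and every feature name and weight below is invented for illustration.

```python
# Hypothetical illustration of a weighted-feature risk score.
# None of these feature names or weights come from any real tool.

def risk_score(features, weights, max_score=20):
    """Sum weighted feature values and clamp to a 0..max_score scale."""
    raw = sum(weights.get(name, 0) * value for name, value in features.items())
    return max(0, min(max_score, round(raw)))

# Invented weights. Note how proxy variables (e.g. benefit receipt,
# use of behavioral health services) can raise the score regardless
# of anything a parent actually does.
weights = {
    "prior_neglect_referrals": 2.0,
    "received_ssi": 2.0,               # disability proxy
    "behavioral_health_services": 3.0,
    "juvenile_justice_record": 1.0,
}

family = {
    "prior_neglect_referrals": 1,
    "received_ssi": 1,
    "behavioral_health_services": 1,
    "juvenile_justice_record": 0,
}

print(risk_score(family, weights))  # → 7
```

In a sketch like this, a family's score rises simply because certain records exist about them, which is the critics' core objection: the inputs measure circumstances and history, not present-day parenting.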
A number of local leaders have tapped into AI technology while under pressure to make systemic changes, such as in Oregon during a foster care crisis and in Los Angeles County after a series of high-profile child deaths in one of the nation’s largest county child welfare systems.
LA County’s Department of Children and Family Services Director Brandon Nichols says algorithms can help identify high-risk families and improve outcomes in a deeply strained system. Yet he could not explain how the screening tool his agency uses works.
“We’re sort of the social work side of the house, not the IT side of the house,” Nichols said in an interview. “How the algorithm functions, in some ways is, I don’t want to say is magic to us, but it’s beyond our expertise and experience.”
Nichols and officials at two other child welfare agencies referred detailed questions about their AI tools to the outside developers who created them.
In Larimer County, Colorado, one official acknowledged she didn’t know what variables were used to assess local families.
“The variables and weights used by the Larimer Decision Aide Tool are part of the code developed by Auckland and thus we do not have this level of detail,” Jill Maasch, a Larimer County Human Services spokeswoman, said in an email, referring to the developers.
In Pennsylvania, California and Colorado, county officials have opened up their data systems to the two academic developers who select data points to build their algorithms. Rhema Vaithianathan, a professor of health economics at New Zealand’s Auckland University of Technology, and Emily Putnam-Hornstein, a professor at the University of North Carolina at Chapel Hill’s School of Social Work, said in an email that their work is transparent and that they make their computer models public.
“In each jurisdiction in which a model has been fully implemented we have released a description of fields that were used to build the tool, along with information as to the methods used,” they said by email.
A 241-page report on the Allegheny County website includes pages of coded variables and statistical calculations.
Vaithianathan and Putnam-Hornstein’s work has been hailed in reports published by UNICEF and the Biden administration alike for devising computer models that promise to lighten caseworkers’ loads by drawing from a set of simple factors. They have described using such tools as a moral imperative, insisting that child welfare officials should draw from all data at their disposal to make sure children aren’t maltreated.
Through tracking their work across the country, however, the AP found their tools can set families up for separation by rating their risk based on personal characteristics they cannot change or control, such as race or disability, rather than just their actions as parents.
In Allegheny County, a sprawling county of 1.2 million near the Ohio border, the algorithm has accessed an array of external data, including jail, juvenile probation, Medicaid, welfare, health and birth records, all held in a vast countywide “data warehouse.” The tool uses that information to predict the risk that a child will be placed in foster care two years after a family is first investigated.
County officials have told the AP they’re proud of their cutting-edge approach, and even expanded their work to build another algorithm focused on newborns. They have said they monitor their risk scoring tool closely and update it over time, including removing variables such as welfare benefits and birth records.
Vaithianathan and Putnam-Hornstein declined the AP’s repeated interview requests to discuss how they choose the specific data that powers their models. But in a 2017 report, they detailed the methods used to build the first version of Allegheny’s tool, including a footnote that described a statistical cutoff as “rather arbitrary but based on trial and error.”
“This footnote refers to our exploration of more than 800 features from Allegheny’s data warehouse more than five years ago,” the developers said by email.
That approach is borne out in their design choices, which differ from county to county.
In the same 2017 report, the developers acknowledged that using race data didn’t substantively improve the model’s accuracy, but they continued to study it in Douglas County, Colorado, though they ultimately opted against including it in that model. To address community concerns that a tool could harden racial bias in Los Angeles County, the developers excluded people’s criminal history, ZIP code and geographic indicators, but have continued to use those data points in the Pittsburgh area.
When asked about the inconsistencies, the developers pointed to their published methodology documents.
“We detail various metrics used to assess accuracy — while also detailing ‘external validations,’” the developers said via email.
When Oregon’s Department of Human Services built an algorithm inspired by Allegheny’s, it factored in a child’s race as it predicted a family’s risk, and also applied a “fairness correction” to mitigate racial bias. Last June, the tool was dropped entirely due to equity concerns after an AP investigation in April revealed potential racial bias in such tools.
Justice Department attorneys cited the same AP story last fall when federal civil rights attorneys started inquiring about additional discrimination concerns in Allegheny’s tool, three sources told the AP. They spoke on the condition of anonymity, saying the Justice Department asked them not to discuss the confidential conversations. Two said they also feared professional retaliation.
IQ TESTS, PARENTING CLASS
With no answers on when they could get their daughter home, the Hackneys’ lawyer in October filed a federal civil rights complaint on their behalf that questioned how the screening tool was used in their case.
Over time, Allegheny’s tool has tracked if members of the family have diagnoses for schizophrenia or mood disorders. It’s also measured if parents or other children in the household have disabilities, by noting whether any family members received Supplemental Security Income, a federal benefit for people with disabilities. The county said that it factors in SSI payments in part because children with disabilities are more likely to be abused or neglected.
The county also said disabilities-aligned data can be “predictive of the outcomes” and it “should come as no surprise that parents with disabilities … may also have a need for additional supports and services.” In an emailed statement, the county added that elsewhere in the country, social workers also draw on data about mental health and other conditions that may affect a parent’s ability to safely care for a child.
The Hackneys have been ordered to take parenting classes and say they have been taxed by all of the child welfare system’s demands, including IQ tests and downtown court hearings.
People with disabilities are overrepresented in the child welfare system, yet there’s no evidence that they harm their children at higher rates, said Traci LaLiberte, a University of Minnesota expert on child welfare and disabilities.
Including data points related to disabilities in an algorithm is problematic because it perpetuates historic biases in the system and it focuses on people’s physiological traits rather than behavior that social workers are brought in to address, LaLiberte said.
The Los Angeles tool weighs if any children in the family have ever gotten special education services, have had prior developmental or mental health referrals or used drugs to treat mental health.
“This is not unique to caseworkers who use this tool; it is common for caseworkers to consider these factors when determining possible supports and services,” the developers said by email.
Before algorithms were in use, the child welfare system had long distrusted parents with disabilities. Into the 1970s, they were regularly sterilized and institutionalized, LaLiberte said. A landmark federal report in 2012 noted parents with psychiatric or intellectual disabilities lost custody of their children as much as 80 percent of the time.
Across the U.S., it’s extremely rare for any child welfare agencies to require disabilities training for social workers, LaLiberte’s research has found. The result: Parents with disabilities are often judged by a system that doesn’t understand how to assess their capacity as caregivers, she said.
The Hackneys experienced this firsthand. When a social worker asked Andrew Hackney how often he fed the baby, he answered literally: two times a day. The worker seemed appalled, he said, and scolded him, saying babies must eat more frequently. He struggled to explain that the girl’s mother, grandmother and aunt also took turns feeding her each day.
FOREVER FLAGGED
Officials in Allegheny County have said that building AI into their processes helps them “make decisions based on as much information as possible,” and noted that the algorithm merely harnesses data social workers can already access.
That can include decades-old records. The Pittsburgh-area tool has tracked whether parents were ever on public benefits or had a history with the criminal justice system — even if they were minors at the time, or if it never resulted in charges or convictions.
The AP found those design choices can stack the deck against people who grew up in poverty, hardening historical inequities that persist in the data, or against people with records in the juvenile or criminal justice systems, long after society has granted redemption. And critics say that algorithms can create a self-fulfilling prophecy by influencing which families are targeted in the first place.
“These predictors have the effect of casting permanent suspicion and offer no means of recourse for families marked by these indicators,” according to the analysis from researchers at the ACLU and the nonprofit Human Rights Data Analysis Group. “They are forever seen as riskier to their children.”
As child welfare algorithms become more common, parents who have experienced social workers’ scrutiny fear the models won’t let them escape their pasts, no matter how old or irrelevant their previous scrapes with the system may have been.
Charity Chandler-Cole, who serves on the Los Angeles County Commission for Children and Families, is one of them. She landed in foster care as a teen after being arrested for shoplifting underwear for her younger sister. Then as an adult, she said, social workers once showed up at her apartment after someone spuriously reported that a grand piano was thrown at her nephew who was living at her home — even though they didn’t own such an instrument.
The local algorithm could tag her for her prior experiences in foster care and juvenile probation, as well as the unfounded child abuse allegation, Chandler-Cole says. She wonders if AI could also properly assess that she was quickly cleared of any maltreatment concerns, or that her nonviolent offense as a teen was legally expunged.
“A lot of these reports lack common sense,” said Chandler-Cole, now the mother of four and CEO of an organization that works with the court system to help children in foster care. “You are automatically putting us in these spaces to be judged with these labels. It just perpetuates additional harm.”
Chandler-Cole’s fellow commissioner Wendy Garen, by contrast, argues “more is better” and that by drawing on all available data, risk scoring tools can help make the agency’s work more thorough and effective.
GLOBAL INFLUENCE
Even as their models have come under scrutiny for their accuracy and fairness, the developers have started new projects with child welfare agencies in Northampton County, Pennsylvania, and Arapahoe County, Colorado. The states of California and Pennsylvania, as well as New Zealand and Chile, have also asked them to do preliminary work.
And as word of their methods has spread in recent years, Vaithianathan has given lectures highlighting screening tools in Colombia and Australia. She also recently advised researchers in Denmark and officials in the United Arab Emirates on how to use technology to target child services.
“Rhema is one of the world leaders and her research can help to shape the debate in Denmark,” a Danish researcher said on LinkedIn last year, regarding Vaithianathan’s advisory role related to a local child welfare tool that was being piloted.
Last year, the U.S. Department of Health and Human Services funded a national study, co-authored by Vaithianathan and Putnam-Hornstein, that concluded that their overall approach in Allegheny could be a model for other places.
HHS’ Administration for Children and Families spokeswoman Debra Johnson declined to say whether the Justice Department’s probe would influence her agency’s future support for an AI-driven approach to child welfare.
Especially as budgets tighten, cash-strapped agencies are desperate to find more efficient ways for social workers to focus on children who truly need protection. At a 2021 panel, Putnam-Hornstein acknowledged that “the overall screen-in rate remained totally flat” in Allegheny since their tool had been implemented.
Meanwhile, foster care and the separation of families can have lifelong developmental consequences for the child.
A 2012 HHS study found 95% of babies who are reported to child welfare agencies go through more than one caregiver and household change during their time in foster care, instability that researchers noted can itself be a form of trauma.
The Hackneys’ daughter already has been placed in two foster homes and has now spent more than half of her short life away from her parents as they try to convince social workers they are worthy.
Meanwhile, they say they're running out of money in the fight for their daughter. With barely enough left for food from Andrew Hackney’s wages at a local grocery store, he had to shut off his monthly cell phone service. They’re struggling to pay for the legal fees and gas money needed to attend appointments required of them.
In February, their daughter was diagnosed with a disorder that can disrupt her sense of taste, according to Andrew Hackney’s lawyer, Robin Frank, who added that the girl has continued to struggle to eat, even in foster care.
All they have for now are twice-weekly visits that last a few hours before she’s taken away again. Lauren Hackney’s voice breaks as she worries her daughter may be adopted and soon forget her own family. They say they yearn to do what many parents take for granted — put their child to sleep at night in her own bed.
“I really want to get my kid back. I miss her, and especially holding her. And of course, I miss that little giggly laugh,” Andrew Hackney said, as his daughter sprang toward him with excitement during a recent visit. “It hurts a lot. You have no idea how bad.”