Apple expected to unveil sleek headset aimed at thrusting the masses into alternate realities
Apple appears poised to unveil a long-rumored headset that will place its users between the virtual and real worlds, while also testing the technology trendsetter's ability to popularize new-fangled devices after others have failed to capture the public's imagination.

After years of speculation, the stage is set for the widely anticipated announcement to be made Monday at Apple's annual developers conference in a Cupertino, California, theater named after the company's late co-founder Steve Jobs. Apple is also likely to use the event to show off its latest Mac computer, preview the next operating system for the iPhone and discuss its strategy for artificial intelligence.

But the star of the show is expected to be a pair of goggles — perhaps called “Reality Pro,” according to media leaks — that could become another milestone in Apple's lore of releasing game-changing technology, even though the company hasn't always been the first to try its hand at making a particular device. Apple's lineage of breakthroughs dates back to a bow-tied Jobs peddling the first Mac in 1984 — a tradition that continued with the iPod in 2001, the iPhone in 2007, the iPad in 2010, the Apple Watch in 2014 and AirPods in 2016.

But with a hefty price tag that could be in the $3,000 range, Apple's new headset may be greeted with a lukewarm reception by all but affluent technophiles. If the new device turns out to be a niche product, it would leave Apple in the same bind as other major tech companies and startups that have tried selling headsets or glasses equipped with technology that either thrusts people into artificial worlds or projects digital images onto the scenery and objects actually in front of them — a format known as “augmented reality.” Apple's goggles are expected to be sleekly designed and capable of toggling between fully virtual and augmented options, a blend sometimes known as “mixed reality.”
That flexibility is also sometimes called extended reality, or XR for short. Facebook founder Mark Zuckerberg has been describing these alternate three-dimensional realities as the “metaverse.” It's a geeky concept that he tried to push into the mainstream by changing the name of his social networking company to Meta Platforms in 2021 and then pouring billions of dollars into improving the virtual technology. But the metaverse largely remains a digital ghost town, although Meta's virtual reality headset, the Quest, remains the top-selling device in a category that so far has mostly appealed to video game players looking for even more immersive experiences.

Apple executives seem likely to avoid referring to the metaverse, given the skepticism that has quickly developed around that term, when they discuss the potential of the company's new headset. In recent years, Apple CEO Tim Cook has periodically touted augmented reality as technology's next quantum leap, while not setting a specific timeline for when it will gain mass appeal. “If you look back in a point in time, you know, zoom out to the future and look back, you’ll wonder how you led your life without augmented reality,” Cook, who is 62, said last September while speaking to an audience of students in Italy. “Just like today you wonder how did people like me grow up without the internet. You know, so I think it could be that profound. And it’s not going to be profound overnight.”

The response to virtual, augmented and mixed reality has been decidedly ho-hum so far. Some of the gadgets deploying the technology have even been derisively mocked, the most notable example being Google's internet-connected glasses released more than a decade ago.
After Google co-founder Sergey Brin initially drummed up excitement about the device by demonstrating an early model's potential “wow factor” with a skydiving stunt staged during a San Francisco tech conference, consumers were quickly turned off by a product that allowed its users to surreptitiously take pictures and video. The backlash became so intense that people who wore the gear became known as “Glassholes,” leading Google to withdraw the product a few years after its debut. Microsoft also has had limited success with HoloLens, a mixed-reality headset released in 2016, although the software maker insisted earlier this year that it remains committed to the technology.

Magic Leap, a startup that stirred excitement with previews of a mixed-reality technology that could conjure the spectacle of a whale breaching through a gymnasium floor, had so much trouble marketing its first headset to consumers in 2018 that it has since shifted its focus to industrial, healthcare and emergency uses. Daniel Diez, Magic Leap's chief transformation officer, said there are four major questions Apple's goggles will have to answer: “What can people do with it? What does this thing look and feel like? Is it comfortable to wear? And how much is it going to cost?”

The anticipation that Apple's goggles will sell for several thousand dollars has already dampened expectations for the product. Although he expects them to boast “jaw-dropping” technology, Wedbush Securities analyst Dan Ives said he expects Apple to sell just 150,000 units during the device's first year on the market — a mere speck in the company's portfolio. By comparison, Apple sells more than 200 million iPhones, its marquee product, a year. But the iPhone wasn't an immediate sensation either, with sales of fewer than 12 million units in its first full year on the market.
In a move apparently aimed at highlighting the expected price of Apple's goggles, Zuckerberg made a point of saying last week that the next Quest headset will sell for $500, an announcement made four months before Meta Platforms plans to showcase the latest device at its own tech conference.

Annual shipments of virtual- and augmented-reality devices have averaged 8.6 million units since 2016, according to the research firm CCS Insight. The firm expects sales to remain sluggish this year, projecting about 11 million devices sold before shipments gradually climb to 67 million in 2026. But those forecasts were made before it was known whether Apple would release a product that alters the landscape.

“I would never count out Apple, especially with the consumer market and especially when it comes to finding those killer applications and solutions,” Magic Leap's Diez said. “If someone is going to crack the consumer market early, I wouldn’t be surprised it would be Apple.”
OPPO launches MR Glass developer edition
OPPO released its latest breakthrough in the XR field, the OPPO MR Glass Developer Edition, during the Augmented World Expo (AWE) 2023. This state-of-the-art mixed reality (MR) device is designed to offer an optimal environment for advanced developers to create and present exciting MR experiences. OPPO anticipates a surge in XR technology adoption in the near future, with MR as one of the most viable modalities. To drive innovation in MR applications, the OPPO MR Glass will be made available as an official Snapdragon Spaces developer kit in China, helping attract more developers to the field and push the boundaries of XR technology.

During his keynote speech at AWE 2023, Yi Xu, Director of XR Technology at OPPO, said, “OPPO MR Glass represents our latest breakthrough in this exploration, equipped with the advanced capabilities of Snapdragon Spaces to empower developers.” Xu highlighted the device as a platform that empowers developers to unlock boundless possibilities for XR innovation. OPPO and Qualcomm Technologies share a long-standing relationship and a common vision to establish an open ecosystem that empowers developers and unlocks the potential for XR innovation.

Said Bakadir, Senior Director, XR Product Management at Qualcomm Technologies, Inc., said, “We recognize OPPO’s long-standing efforts in exploring technologies, products, content, and services for XR, which make OPPO an ideal partner in this field.
Through potential solutions improving productivity, creativity, and gaming experiences on OPPO MR Glass, we are glad to see growing vitality among developer groups and hope to find more MR content to enliven the platform, which is significant for creating innovative experience and bringing breakthroughs for the industry. In the future, we look forward to deepening our collaboration with OPPO to stimulate more innovations in the MR ecosystem.” OPPO MR Glass is built to provide developers with the best platform to create and test the latest MR experiences. Powered by the Snapdragon XR2+ platform, the MR Glass features OPPO’s proprietary SUPERVOOC fast charging and heart rate detection function, enabling a wide range of new applications. The device is crafted with skin-friendly material and incorporates Binocular VPT (Video Pass Through) technologies, dual front RGB cameras, pancake lenses, and a 120Hz high refresh rate.
Brazil: UN regional group has endorsed Amazon city to host 2025 climate conference
Brazil’s government announced Friday that a U.N. Latin America regional group has endorsed a Brazilian city in the Amazon region to host the 2025 U.N. climate change conference, though the world body has not yet publicly confirmed the venue. President Luiz Inácio Lula da Silva initially said Brazil will hold the conference, known as COP 30, in the city of Belem, in the state of Para, in the heart of the Brazilian rainforest, reflecting his intention to bring attention to the Amazon. A statement from the Brazilian government later clarified that the region's support was merely a step in the selection process. The “support for the Brazilian candidacy demonstrates the region’s confidence in Brazil’s capacity to advance the agenda in the fight against climate change,” the statement read.

The latest U.N. climate conference was hosted by Egypt in Sharm el-Sheikh, and this year’s will take place in Dubai. The U.N. has not yet announced the 2024 venue, let alone the 2025 one, but the locations tend to rotate among regions, and the Brazilian government statement Friday indicated that a Latin American working group was choosing the 2025 venue and had endorsed Belem. The final decision won't be made until COP 29 next year.

“It will be an honor for Brazil to welcome representatives from all over the world in a state in our Amazon,” Lula said in a video posted on his social media channels. “I went to COPs in Egypt, in Paris, in Copenhagen, and all people talk about is the Amazon. So I said, ‘Why don’t we go there so you see what the Amazon is like?’” Brazil's foreign minister, Mauro Vieira, said in the video that the decision was made at the U.N. on May 18. Brazil's announcement comes in a week in which the environmental governance of Lula's administration has faced headwinds from Brazil's Congress.
Lawmakers approved, by a large majority, a measure that eroded the environment ministry's authority over construction in forested and coastal areas, as well as other development. Also this week, Congress is debating whether the state-run oil giant should be allowed to drill off the coast of the Amazon states of Amapa and Para.
EU official says Twitter abandons bloc's voluntary pact against disinformation
Twitter has dropped out of a voluntary European Union agreement to combat online disinformation, a top EU official said Friday. European Commissioner Thierry Breton tweeted that Twitter had pulled out of the EU's disinformation “code of practice” that other major social media platforms have pledged to support. But he added that Twitter's “obligation” remained, referring to the EU's tough new digital rules taking effect in August. “You can run but you can’t hide,” Breton said. San Francisco-based Twitter responded with an automated reply, as it does to most press inquiries, and did not comment. The decision to abandon the commitment to fighting false information appears to be the latest move by billionaire owner Elon Musk to loosen the reins on the social media company after he bought it last year. He has rolled back previous anti-misinformation rules, and has thrown its verification system and content-moderation policies into chaos as he pursues his goal of turning Twitter into a digital town square. Google, TikTok, Microsoft and Facebook and Instagram parent Meta are among those that have signed up to the EU code, which requires companies to measure their work on combating disinformation and issue regular reports on their progress. There were already signs Twitter wasn't prepared to live up to its commitments. The European Commission, the 27-nation bloc's executive arm, blasted Twitter earlier this year for failing to provide a full first report under the code, saying it provided little specific information and no targeted data. Breton said that under the new digital rules that incorporate the code of practice, fighting disinformation will become a “legal obligation.” “Our teams will be ready for enforcement,” he said.
ChatGPT-4: All you need to know
OpenAI’s ChatGPT-4 is the latest iteration of the groundbreaking Generative Pre-trained Transformer (GPT) series. Building on the success of its predecessors, GPT-4 offers enhanced capabilities, improved performance, and a more user-friendly experience. GPT-4 was publicly released on March 14, 2023, making it accessible to users worldwide. Let’s explore how to use ChatGPT-4, its new features, and more.

New Features of OpenAI's ChatGPT-4

OpenAI highlights three significant advancements in this next-generation language model: creativity, visual input, and longer context. According to OpenAI, GPT-4 demonstrates substantial improvements in creativity, excelling both in generating creative work and in collaborating with users on creative endeavors. Let’s look at some of the top new features of ChatGPT-4.

Can Understand More Advanced Inputs

One of the major breakthroughs of GPT-4 lies in its enhanced capacity to comprehend intricate and nuanced prompts. OpenAI reports that GPT-4 exhibits human-level performance on a variety of professional and academic benchmarks. This was demonstrated by subjecting GPT-4 to numerous human-level exams and standardized tests, such as the SAT, the bar exam, and the GRE, without any specific training. Remarkably, GPT-4 not only grasped and successfully tackled these tests, but it also consistently outperformed its predecessor, GPT-3.5, across all assessments.

GPT-4 also boasts support for more than 26 languages, including less widely spoken ones like Latvian, Welsh, and Swahili. When assessed on three-shot accuracy using the MMLU benchmark, GPT-4 surpassed the English-language performance of not only GPT-3.5 but also other prominent LLMs such as PaLM and Chinchilla in 24 of those languages.

Multimodal Functionality

In contrast to the previous version, ChatGPT, GPT-4 introduces a remarkable advancement in its range of multimodal capabilities.
This latest model can now process not only text prompts but also image prompts. This groundbreaking feature enables the AI to accept an image as input, interpret it, and explain it as effectively as a text prompt. The model seamlessly handles images of varying sizes and types, including documents that combine text and images, hand-drawn sketches, and even screenshots.

Enhanced Steerability

OpenAI further claims that GPT-4 exhibits a remarkable level of steerability. Notably, it has become better at staying true to its assigned character, reducing the likelihood of deviations when deployed in character-based applications. Developers can now prescribe the AI’s style and task by providing specific instructions within the system message. These messages enable API users to customize the user experience extensively while operating within defined parameters. To ensure model integrity, OpenAI is also actively working on enhancing the security of these messages, as they represent the most common method for potential misuse.
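The system-message mechanism behind this steerability can be sketched in a few lines. This is a minimal illustration of how a Chat Completions request is shaped, with the system role pinning the assistant's persona and the user role carrying the task; the persona and task strings are hypothetical examples, and actually sending the payload would require the OpenAI client library and an API key.

```python
import json

def build_request(persona: str, task: str, model: str = "gpt-4") -> dict:
    """Assemble a chat-completion style payload in which the system
    message fixes the assistant's character and the user message
    carries the actual request."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": persona},  # steers style/character
            {"role": "user", "content": task},       # the task itself
        ],
    }

# Hypothetical persona and task, purely for illustration.
payload = build_request(
    persona="You are a Socratic tutor: reply only with guiding questions.",
    task="Explain why the sky is blue.",
)
print(json.dumps(payload, indent=2))
```

Swapping the system message for a different persona changes the model's tone and constraints without touching the user prompt, which is what makes the approach useful for character-based applications.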
ChatGPT's chief to testify before US Congress as concerns grow about artificial intelligence's risks
The head of the artificial intelligence company that makes ChatGPT is set to testify before the US Congress as lawmakers call for new rules to guide the rapid development of AI technology. OpenAI CEO Sam Altman is scheduled to speak at a Senate hearing Tuesday. His San Francisco-based startup rocketed to public attention after its release late last year of ChatGPT, a free chatbot tool that answers questions with convincingly human-like responses.

What started out as a panic among educators about ChatGPT's use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs. And while there's no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

“Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said a prepared statement from Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law. Founded in 2015, OpenAI is also known for other AI products, including the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.
Also testifying will be IBM's chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI's latest model, GPT-4, described as more powerful than ChatGPT.

“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said the panel's ranking Republican, Sen. Josh Hawley of Missouri. “This hearing marks a critical first step towards understanding what Congress should do.”

Altman and other tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. In a copy of her prepared remarks, IBM's Montgomery asks Congress to take a “precision regulation” approach. “This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself,” Montgomery said.
Musk, new Twitter CEO Linda Yaccarino spar over content moderation during on-stage interview
On Friday, Elon Musk announced that NBC Universal's Linda Yaccarino will serve as the new CEO of Twitter. Yaccarino is a longtime advertising executive credited with integrating and digitizing ad sales at NBCU. Her challenge now will be to woo back advertisers that have fled Twitter since Musk acquired it last year for $44 billion. Since taking ownership, Musk has fired thousands of Twitter employees, largely scrapped the trust-and-safety team responsible for keeping the site free of hate speech, harassment and misinformation, and blamed others — particularly mainstream media organizations, which he views as untrustworthy “competitors” to Twitter for ad dollars — for exaggerating Twitter's problems.

In April, the two met for an on-stage conversation at a marketing convention in Miami Beach, Florida. Here are some highlights of their conversation:

MUSK AND YACCARINO SPAR OVER CONTENT MODERATION

The Miami discussion was cordial, although both participants drew some distinct lines in the sand. On a few occasions, Yaccarino steered the conversation toward issues of content moderation and the apparent proliferation of hate speech and extremism since Musk took over the platform. She couched her questions in the context of whether Musk could help advertisers feel more welcome on the platform. At one point, she asked if Musk was willing to let advertisers “influence” his vision for Twitter, explaining that it would help them get more excited about investing more money — “product development, ad safety, content moderation — that's what the influence is.” Musk shut her down. “It’s totally cool to say that you want to have your advertising appear in certain places in Twitter and not in other places, but it is not cool to try to say what Twitter will do,” he said. “And if that means losing advertising dollars, we lose it.
But freedom of speech is paramount.”

MUSK REPEATS: NO SPECIAL INFLUENCE FOR ADVERTISERS

Yaccarino returned to the issue a few moments later when she asked Musk if he planned to reinstate the company's “influence council,” a once-regular meeting with marketing executives from several of Twitter's major advertisers. Musk again demurred. “I would be worried about creating a backlash among the public,” he said. “Because if the public thinks that their views are being determined by, you know, a small number of (marketing executives) in America, they will be, I think, upset about that.” Musk went on to acknowledge that feedback is important, and suggested Twitter should aim for a “sensible middle ground” that ensures the public “has a voice” while advertisers focus on the ordinary work of improving sales and the perception of their brands.

PRESSING ELON ON HIS OWN TWEETS

Musk didn't pass up the opportunity to sell the assembled marketers a new plan to solve Twitter's problems with objectionable tweets, which the company had announced the day before. Musk called the policy “freedom of speech but not freedom of reach,” describing it as a way to limit the visibility of hate speech and similar problems without actually removing rule-breaking tweets. Yaccarino took a swing. “Does it apply to your tweets?” Musk has a history of posting misinformation and occasionally offensive tweets, often in the early morning hours. Musk acknowledged that it does, adding that his tweets can also be tagged with “community notes” that provide additional context. He added that his tweets receive no special boosts from Twitter. “Will you agree to be more specific and not tweet after 3 a.m.?” Yaccarino asked. “I will aspire to tweet less after 3 a.m.,” Musk replied.
How Europe is building artificial intelligence guardrails
Authorities around the world are racing to draw up rules for artificial intelligence, including in the European Union, where draft legislation faces a pivotal moment on Thursday. A European Parliament committee is set to vote on the proposed rules, part of a yearslong effort to draw up guardrails for artificial intelligence. Those efforts have taken on more urgency as the rapid advance of ChatGPT highlights the benefits the emerging technology can bring — and the new perils it poses. Here's a look at the EU's Artificial Intelligence Act:

HOW DO THE RULES WORK?

The AI Act, first proposed in 2021, will govern any product or service that uses an artificial intelligence system. The act will classify AI systems according to four levels of risk, from minimal to unacceptable. Riskier applications will face tougher requirements, including being more transparent and using accurate data. Think of it as a “risk management system for AI,” said Johann Laux, an expert at the Oxford Internet Institute.

WHAT ARE THE RISKS?

One of the EU's main goals is to guard against any AI threats to health and safety and to protect fundamental rights and values. That means some AI uses are an absolute no-no, such as “social scoring” systems that judge people based on their behavior or interactive talking toys that encourage dangerous behavior. Predictive policing tools, which crunch data to forecast where crimes will happen and who will commit them, are expected to be banned. So is remote facial recognition, apart from some narrow exceptions like preventing a specific terrorist threat; the technology scans passers-by and uses AI to match their faces to a database. Thursday's vote is set to decide how extensive the prohibition will be. The aim is “to avoid a controlled society based on AI,” Brando Benifei, the Italian lawmaker helping lead the European Parliament's AI efforts, told reporters Wednesday.
“We think that these technologies could be used instead of the good also for the bad, and we consider the risks to be too high.” AI systems used in high-risk categories like employment and education, which would affect the course of a person's life, face tough requirements such as being transparent with users and putting in place risk assessment and mitigation measures. The EU's executive arm says most AI systems, such as video games or spam filters, fall into the low- or no-risk category.

WHAT ABOUT CHATGPT?

The original 108-page proposal barely mentioned chatbots, merely requiring them to be labeled so users know they’re interacting with a machine. Negotiators later added provisions to cover general-purpose AI like ChatGPT, subjecting it to some of the same requirements as high-risk systems. One key addition is a requirement to thoroughly document any copyrighted material used to teach AI systems how to generate text, images, video or music that resembles human work. That would let content creators know if their blog posts, digital books, scientific articles or pop songs have been used to train the algorithms that power systems like ChatGPT. Then they could decide whether their work has been copied and seek redress.

WHY ARE THE EU RULES SO IMPORTANT?

The European Union isn't a big player in cutting-edge AI development; that role belongs to the U.S. and China. But Brussels often plays a trendsetting role with regulations that tend to become de facto global standards. “Europeans are, globally speaking, fairly wealthy and there’s a lot of them,” so companies and organizations often decide that the sheer size of the bloc’s single market, with 450 million consumers, makes it easier to comply than to develop different products for different regions, Laux said. But it's not just a matter of cracking down. By laying down common rules for AI, Brussels is also trying to develop the market by instilling confidence among users, Laux said.
“The thinking behind it is if you can induce people to place trust in AI and in applications, they will also use it more,” Laux said. “And when they use it more, they will unlock the economic and social potential of AI.”

WHAT IF YOU BREAK THE RULES?

Violations will draw fines of up to 30 million euros ($33 million) or 6% of a company's annual global revenue, which in the case of tech companies like Google and Microsoft could amount to billions.

WHAT’S NEXT?

It could be years before the rules fully take effect. The flagship legislative proposal faces a joint European Parliament committee vote on Thursday. The draft legislation then moves into three-way negotiations involving the bloc’s 27 member states, the Parliament and the executive Commission, where it faces further wrangling over the details. Final approval is expected by the end of the year, or early 2024 at the latest, followed by a grace period for companies and organizations to adapt, often around two years.
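The penalty ceiling described above is simple enough to compute directly. A minimal sketch using only the two figures given in the article (30 million euros or 6% of annual global revenue, whichever is greater); the revenue figure in the example is hypothetical.

```python
def max_fine_eur(annual_global_revenue_eur: float) -> float:
    """Upper bound on an AI Act fine: the greater of EUR 30 million
    or 6% of the company's annual global revenue."""
    return max(30_000_000.0, 0.06 * annual_global_revenue_eur)

# A smaller firm is capped by the flat 30-million-euro floor...
print(max_fine_eur(100_000_000))
# ...while a hypothetical company with 280 billion euros in revenue
# faces a ceiling of roughly 16.8 billion euros.
print(max_fine_eur(280_000_000_000))
```

This is why the article notes the cap "could amount to billions" for companies the size of Google or Microsoft: for any revenue above 500 million euros, the 6% term dominates the flat floor.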
New Twitter rules expose election offices to spoof accounts
Tracking down accurate information about Philadelphia's elections on Twitter used to be easy. The account for the city commissioners who run elections, @phillyvotes, was the only one carrying a blue check mark, a sign of authenticity. But ever since the social media platform overhauled its verification service last month, the check mark has disappeared. That’s made it harder to distinguish @phillyvotes from a list of random accounts not run by the elections office but with very similar names.

The election commission applied weeks ago for a gray check mark — Twitter’s new symbol to help users identify official government accounts — but has yet to hear back from Twitter, commission spokesman Nick Custodio said. It’s unclear whether @phillyvotes is even an eligible government account under Twitter’s new rules. That’s troubling, Custodio said, because Pennsylvania has a primary election May 16 and the commission uses its account to share important information with voters in real time. If the account remains unverified, it will be easier to impersonate — and harder for voters to trust — heading into Election Day.

Impostor accounts on social media are among many concerns election security experts have heading into next year's presidential election. Experts have warned that foreign adversaries or others may try to influence the election, either through online disinformation campaigns or by hacking into election infrastructure. Election administrators across the country have struggled to figure out the best way to respond after Twitter owner Elon Musk threw the platform’s verification service into disarray, given that Twitter has been among their most effective tools for communicating with the public. Some are taking other steps allowed by Twitter, such as buying check marks for their profiles or applying for a special label reserved for government entities, but success has been mixed.
Election and security experts say the inconsistency of Twitter’s new verification system is a misinformation disaster waiting to happen. “The lack of clear, at-a-glance verification on Twitter is a ticking time bomb for disinformation,” said Rachel Tobac, CEO of the cybersecurity company SocialProof Security. “That will confuse users – especially on important days like election days.”

The blue check marks that Twitter once doled out to notable celebrities, public figures, government entities and journalists began disappearing from the platform in April. To replace them, Musk told users that anyone could pay $8 a month for an individual blue check mark or $1,000 a month for a gold check mark as a “verified organization.” The policy change quickly opened the door for pranksters to pose convincingly as celebrities, politicians and government entities, which could no longer be identified as authentic.

While some impostor accounts were clear jokes, others created confusion. Fake accounts posing as Chicago Mayor Lori Lightfoot, the city’s Department of Transportation and the Illinois Department of Transportation falsely claimed the city was closing one of its main thoroughfares to private traffic. The fake accounts used the same photos, biographical text and home page links as the real ones. Their posts amassed hundreds of thousands of views before being taken down.

Twitter’s new policy invites government agencies and certain affiliated organizations to apply to be labeled as official with a gray check. But at the state and local level, qualifying agencies are limited to “main executive office accounts and main agency accounts overseeing crisis response, public safety, law enforcement, and regulatory issues,” the policy says. The rules do not mention agencies that run elections. So while the main Philadelphia city government account quickly received its gray check mark last month, the local election commission has not heard back.
Election offices in four of the country's five most populous counties — Cook County in Illinois, Harris County in Texas, Maricopa County in Arizona and San Diego County — remain unverified, a Twitter search shows. Maricopa County, which includes Phoenix and is the most populous and consequential county in one of the most closely divided political battleground states, has been repeatedly targeted by election conspiracy theorists. Some counties contacted by The Associated Press said they have minimal concerns about impersonation or plan to apply for a gray check later, but others said they already have applied and have not heard back from Twitter. Even some state election offices are waiting for government labels. Among them is the office of Maine Secretary of State Shenna Bellows. In an April 24 email to Bellows' communications director reviewed by The Associated Press, a Twitter representative wrote that there was "nothing to do as we continue to manually process applications from around the world." The representative added in a later email that Twitter stands "ready to swiftly enforce any impersonation, so please don't hesitate to flag any problematic accounts." An email sent to Twitter's press office and a company safety officer requesting comment was answered only with an auto-reply of a poop emoji. "Our job is to reinforce public confidence," Bellows told the AP. "Even a minor setback, like no longer being able to ensure that our information on Twitter is verified, contributes to an environment that is less predictable and less safe." Some government accounts, including the one representing Pennsylvania's second-largest county, have purchased blue checks because they were told it was required to continue advertising on the platform. Allegheny County posts ads for elections and jobs on Twitter, so the blue check mark "was necessary," said Amie Downs, the county's communications director.
When anyone can buy verification and when government accounts are not consistently labeled, the check mark loses its meaning, Colorado Secretary of State Jena Griswold said. Griswold’s office received a gray check mark to maintain trust with voters, but she told the AP she would not buy verification for her personal Twitter account because “it doesn’t carry the same weight” it once did. Custodio, at the Philadelphia elections commission, said his office would not buy verification either, even if it gets denied a gray check. “The blue or gold check mark just verifies you as a paid subscriber and does not verify identity,” he said. Experts and advocates tracking election discourse on social media say Twitter's changes do not just incentivize bad actors to run disinformation campaigns — they also make it harder for well-meaning users to know what’s safe to share.
IT Competition for Youths with Disabilities ends
The seventh National IT Competition for Youths with Disabilities has ended successfully. The day-long contest was organised by the Bangladesh Computer Council (BCC) of the ICT Division on Saturday at the BUBT University campus in the capital's Mirpur, in collaboration with the Center for Services and Information on Disability (CSID) and the Bangladesh University of Business and Technology (BUBT). The Executive Director (Grade-1) of BCC, Ranajit Kumar, was present as the chief guest, while Md. Shamsul Huda FCA, chairman of the Board of Trustees of BUBT, and Khandaker Jahurul Alam, executive director of CSID, were present as special guests at the closing and award ceremony of the event. The session was chaired by Pro-Vice Chancellor of BUBT Professor Dr. Md. Ali Noor, and BCC Director (Training & Development) Engr. Md. Golam Sarwar delivered the welcome address at the programme. Ranajit Kumar said the national IT competition is crucial for youths with disabilities in the country, as it provides a remarkable platform for them to showcase their skills and talents in the field of information technology, thereby creating an opportunity for the massive expansion of ICT practice. "Such arrangements contribute to encouraging youth with disabilities as well as reducing barriers and challenges faced by persons with disabilities in accessing education and employment opportunities in the ICT field," he added.