Tech-News
UK to host global AI Summit to assess 'most significant risks'
The United Kingdom will hold a global artificial intelligence (AI) summit this autumn to assess the technology's "most significant risks."
A number of alarming warnings have been issued about the potentially existential threat that AI poses to humans, the BBC reports.
Regulators throughout the world are trying to create new laws to mitigate that danger.
Prime Minister Rishi Sunak stated that he wants the United Kingdom to lead efforts to guarantee that the advantages of artificial intelligence are "harnessed for the good of humanity."
Also read: Regulation a must to control AI for surveillance, disinformation: rights experts
"AI has an incredible potential to transform our lives for the better, but we need to make sure it is developed and used in a way that is safe and secure," he said.
The summit's attendees are currently unknown, but the UK government stated that it will "bring together key countries, leading tech companies, and researchers to agree on safety measures to evaluate and monitor the most significant risks from AI."
Speaking to reporters in Washington, DC, where Sunak is meeting with President Biden on the matter, the prime minister stated that the UK was the "natural place" to lead the discourse on AI.
Downing Street pointed to the prime minister's recent talks with the CEOs of key AI businesses as proof of this. It also mentioned the 50,000 individuals engaged in the sector, which is worth £3.7 billion to the UK.
Also read: UNESCO reveals new AI roadmap for classrooms
'Too ambitious'
Some have questioned the UK's ability to lead in this sector.
According to Yasmin Afina, a research fellow at Chatham House's Digital Society Initiative, the UK "could realistically be too ambitious."
She stated that the EU and US had "stark differences in governance and regulatory approaches" that the UK would struggle to reconcile, as well as a number of existing global efforts, such as the UN's Global Digital Compact, that had "stronger foundational bases already."
Afina went on to say that the UK was home to none of the world's most innovative AI startups.
Also read: How to Use AI Tools to Get Your Dream Job
"Instead of trying to play a role that would be too ambitious for the UK and risks alienating it, the UK should perhaps focus on promoting responsible behaviour in the research, development and deployment of these technologies," she told the BBC.
Deep unease
Since the chatbot ChatGPT first came on the scene in November, astounding people with its ability to answer complicated queries in a human-sounding manner, interest in AI has skyrocketed.
That ability rests on the enormous processing power of modern AI systems, which has also sparked widespread concern, the report said.
Geoffrey Hinton and Prof Yoshua Bengio, two of the three so-called godfathers of AI, have been among those voicing concerns that the technology they helped design carries a high potential for disaster.
Read more: China warns of artificial intelligence risks, calls for beefed-up national security measures
These concerns have fueled calls for effective AI legislation, while many uncertainties remain about what that would include and how it would be implemented.
Regulatory race
The European Union is drafting an Artificial Intelligence Act, but even in the best-case scenario, it will take two and a half years to become law.
Last month, EU technology head Margrethe Vestager said it would be "way too late" and that the EU was working on a voluntary code for the industry with the US, which they anticipated would be completed within weeks.
China has also taken the lead in developing AI rules, including proposals that would require companies to notify users whenever an AI algorithm is used, the report added.
Read more: ChatGPT's chief to testify before US Congress as concerns grow about artificial intelligence's risks
The UK government published its proposals in a White Paper in March, which was criticized for having "significant gaps."
However, Marc Warner, a member of the government's AI Council, has suggested a stricter approach, telling the BBC that some of the most powerful kinds of AI may eventually have to be outlawed.
According to Matt O'Shaughnessy, visiting fellow at the Carnegie Endowment for International Peace, there was nothing the UK could do about the fact that others were leading the charge on AI legislation - but it could still play an essential role.
"The EU and China are both large markets that have proposed consequential regulatory schemes for AI - without either of those factors, the UK will struggle to be as influential," he said.
Read more: AI & Future of Jobs: Will Artificial Intelligence or Robots Take Your Job?
But he added the UK was an "academic and commercial hub", with institutions that were "well-known for their work on responsible AI".
"Those all make it a serious player in the global discussion about AI," he told the BBC.
A ‘vast paedophile network’ connected by Instagram's algorithms, says WSJ report
Instagram's recommendation algorithms linked and encouraged a "vast network of paedophiles" seeking illicit underage sexual content and conduct, according to the Wall Street Journal (WSJ).
These algorithms also marketed the sale of unlawful "child-sex material" on the network, it said.
The report is based on a joint investigation by the Wall Street Journal and researchers from Stanford University and the University of Massachusetts Amherst looking into child pornography on Meta's platform. Buyers might even "commission specific acts" or organize "meet ups" on some accounts.
Also read: Instagram adds new tools to help content creators earn money
"Pedophiles have long used the internet, but unlike the forums and file-transfer services that cater to people who have interest in illicit content, Instagram doesn't merely host these activities. Its algorithms promote them," the WSJ report said. "Instagram connects pedophiles and guides them to content sellers via recommendation systems that excel at linking those who share niche interests."
According to the investigation, Instagram allowed users to search for hashtags associated with child-sex abuse.
According to the researchers, these hashtags directed users to accounts that offered to sell paedophilic items and even included footage of minors harming themselves.
Also read: Meta brings Facebook Reels to Bangladesh
Anti-paedophile campaigners alerted the corporation to accounts purporting to belong to a girl selling underage sex content.
The activists got automated answers that stated, "Because of the high volume of reports we receive, our team hasn't been able to review this post." In another situation, the message advised the user to conceal the account in order to avoid viewing its material, the report said.
A Meta spokesperson confirmed that the company had received the reports but failed to act on them, attributing the failure to a technical glitch, the report also said.
The company told the WSJ that it has repaired the flaw in its reporting system and is offering fresh training to its content moderators.
"Child exploitation is a horrific crime. We're continuously investigating ways to actively defend against this behaviour," the spokesperson said.
Meta claims to have shut down 27 paedophile networks in the last two years and is planning further removals. It also stated that hundreds of hashtags that sexualize minors, some with millions of posts, had been banned, the report concluded.
Read more: Wish you could tweak that text? WhatsApp is letting users edit messages
Microsoft will pay $20M to settle U.S. charges of illegally collecting children's data
Microsoft will pay a fine of $20 million to settle Federal Trade Commission charges that it illegally collected and retained the data of children who signed up to use its Xbox video game console.
The agency charged that Microsoft gathered the data without notifying parents or obtaining their consent, and that it also illegally held onto the data. Those actions violated the Children’s Online Privacy Protection Act, the FTC stated.
Read: Twitter accuses Microsoft of misusing its data, foreshadowing a possible fight over AI
In a blog post, Microsoft corporate vice president for Xbox Dave McCarthy outlined additional steps the company is now taking, chiefly improving its age-verification technology, ensuring that parents are involved in the creation of child accounts for the service, and educating children and parents about privacy issues.
McCarthy also said the company had identified and fixed a technical glitch that failed to delete child accounts in cases where the account creation process never finished. Microsoft policy was to hold that data no longer than 14 days in order to allow players to pick up account creation where they left off if they were interrupted.
Read: Microsoft reports boost in profits, revenue, as it pushes AI
The settlement must be approved by a federal court before it can go into effect, the FTC said.
Apple is expected to unveil sleek headset aimed at thrusting the masses into alternate realities
Apple appears poised to unveil a long-rumored headset that will place its users between the virtual and real world, while also testing the technology trendsetter's ability to popularize new-fangled devices after others failed to capture the public's imagination.
After years of speculation, the stage is set for the widely anticipated announcement to be made Monday at Apple's annual developers conference in a Cupertino, California, theater named after the company's late co-founder Steve Jobs. Apple is also likely to use the event to show off its latest Mac computer, preview the next operating system for the iPhone and discuss its strategy for artificial intelligence.
But the star of the show is expected to be a pair of goggles — perhaps called “Reality Pro,” according to media leaks — that could become another milestone in Apple's lore of releasing game-changing technology, even though the company hasn't always been the first to try its hand at making a particular device.
Apple's lineage of breakthroughs dates back to a bow-tied Jobs peddling the first Mac in 1984, a tradition that continued with the iPod in 2001, the iPhone in 2007, the iPad in 2010, the Apple Watch in 2014 and its AirPods in 2016.
But with a hefty price tag that could be in the $3,000 range, Apple's new headset may also be greeted with a lukewarm reception from all but affluent technophiles.
If the new device turns out to be a niche product, it would leave Apple in the same bind as other major tech companies and startups that have tried selling headsets or glasses equipped with technology that either thrusts people into artificial worlds or projects digital images onto the scenery and objects actually in front of them — a format known as “augmented reality.”
Apple's goggles are expected to be sleekly designed and capable of toggling between fully virtual and augmented options, a blend sometimes known as “mixed reality.” That flexibility is also sometimes called extended reality, or XR for short.
Facebook founder Mark Zuckerberg has been describing these alternate three-dimensional realities as the “metaverse.” It's a geeky concept that he tried to push into the mainstream by changing the name of his social networking company to Meta Platforms in 2021 and then pouring billions of dollars into improving the virtual technology.
But the metaverse largely remains a digital ghost town, although Meta's virtual reality headset, the Quest, remains the top-selling device in a category that so far has mostly appealed to video game players looking for even more immersive experiences.
Apple executives seem likely to avoid referring to the metaverse, given the skepticism that has quickly developed around that term, when they discuss the potential of the company's new headset.
In recent years, Apple CEO Tim Cook has periodically touted augmented reality as technology's next quantum leap, while not setting a specific timeline for when it will gain mass appeal.
“If you look back in a point in time, you know, zoom out to the future and look back, you’ll wonder how you led your life without augmented reality,” Cook, who is 62, said last September while speaking to an audience of students in Italy. “Just like today you wonder how did people like me grow up without the internet. You know, so I think it could be that profound. And it’s not going to be profound overnight.”
The response to virtual, augmented and mixed reality has been decidedly ho-hum so far. Some of the gadgets deploying the technology have even been derisively mocked, with the most notable example being Google's internet-connected glasses released more than a decade ago.
After Google co-founder Sergey Brin initially drummed up excitement about the device by demonstrating an early model's potential “wow factor” with a skydiving stunt staged during a San Francisco tech conference, consumers quickly became turned off to a product that allowed its users to surreptitiously take pictures and video. The backlash became so intense that people who wore the gear became known as “Glassholes,” leading Google to withdraw the product a few years after its debut.
Microsoft also has had limited success with HoloLens, a mixed-reality headset released in 2016, although the software maker earlier this year insisted it remains committed to the technology.
Magic Leap, a startup that stirred excitement with previews of a mixed-reality technology that could conjure the spectacle of a whale breaching through a gymnasium floor, had so much trouble marketing its first headset to consumers in 2018 that it has since shifted its focus to industrial, healthcare and emergency uses.
Daniel Diez, Magic Leap's chief transformation officer, said there are four major questions Apple's goggles will have to answer: “What can people do with it? What does this thing look and feel like? Is it comfortable to wear? And how much is it going to cost?”
The anticipation that Apple's goggles are going to sell for several thousand dollars already has dampened expectations for the product. Although he expects Apple's goggles to boast “jaw dropping” technology, Wedbush Securities analyst Dan Ives said he expects the company to sell just 150,000 units during the device's first year on the market — a mere speck in the company's portfolio. By comparison, Apple sells more than 200 million iPhones, its marquee product, a year. But the iPhone wasn't an immediate sensation, with fewer than 12 million units sold in its first full year on the market.
In a move apparently aimed at underscoring the expected high price of Apple's goggles, Zuckerberg made a point of saying last week that the next Quest headset will sell for $500, an announcement made four months before Meta Platforms plans to showcase the latest device at its tech conference.
Since 2016, annual shipments of virtual- and augmented-reality devices have averaged 8.6 million units, according to the research firm CCS Insight. The firm expects sales to remain sluggish this year, projecting about 11 million devices, before shipments gradually climb to 67 million in 2026.
But those forecasts were made before it was known whether Apple would release a product that alters the landscape.
“I would never count out Apple, especially with the consumer market and especially when it comes to finding those killer applications and solutions,” Magic Leap's Diez said. “If someone is going to crack the consumer market early, I wouldn’t be surprised it would be Apple.”
OPPO launches MR Glass developer edition
OPPO released its latest breakthrough in the XR field, the OPPO MR Glass Developer Edition, during the Augmented World Expo (AWE) 2023.
This state-of-the-art mixed reality (MR) device is designed to offer an optimal environment for advanced developers to create and present exciting MR experiences.
OPPO anticipates a surge in XR technology adoption in the near future, with MR as one of the most viable modalities. To drive innovation in MR applications, the OPPO MR Glass will be made available as an official Snapdragon Spaces developer kit in China to help attract more developers to the field and push the boundaries of XR technology.
During the keynote speech at AWE 2023, Yi Xu, Director of XR Technology at OPPO, said, “OPPO MR Glass represents our latest breakthrough in this exploration, equipped with the advanced capabilities of Snapdragon Spaces to empower developers.”
Xu highlighted the OPPO MR Glass as a breakthrough product, equipped with the advanced capabilities of Snapdragon Spaces, which empower developers to unlock boundless possibilities for XR innovation.
OPPO and Qualcomm Technologies share a long-standing relationship and a common vision to establish an open ecosystem that empowers developers and unlocks the potential for XR innovation.
Said Bakadir, Senior Director, XR Product Management, at Qualcomm Technologies, Inc. said, “We recognize OPPO’s long-standing efforts in exploring technologies, products, content, and services for XR, which make OPPO an ideal partner in this field. Through potential solutions improving productivity, creativity, and gaming experiences on OPPO MR Glass, we are glad to see growing vitality among developer groups and hope to find more MR content to enliven the platform, which is significant for creating innovative experience and bringing breakthroughs for the industry. In the future, we look forward to deepening our collaboration with OPPO to stimulate more innovations in the MR ecosystem.”
OPPO MR Glass is built to provide developers with the best platform to create and test the latest MR experiences. Powered by the Snapdragon XR2+ platform, the MR Glass features OPPO’s proprietary SUPERVOOC fast charging and heart rate detection function, enabling a wide range of new applications.
The device is crafted with skin-friendly material and incorporates Binocular VPT (Video Pass Through) technologies, dual front RGB cameras, pancake lenses, and a 120Hz high refresh rate.
Brazil: UN regional group has endorsed Amazon city to host 2025 climate conference
Brazil’s government announced Friday that a U.N. Latin America regional group has endorsed a Brazilian city in the Amazon region to host the 2025 U.N. climate change conference, though the world body has not yet publicly confirmed the venue.
President Luiz Inácio Lula da Silva initially said Brazil will hold the conference, known as COP 30, in the city of Belem, state of Para, in the heart of the Brazilian rainforest, reflecting his intention to bring attention to the Amazon.
A statement from the Brazilian government later clarified that the region's support was merely a step in the selection process. The “support for the Brazilian candidacy demonstrates the region’s confidence in Brazil’s capacity to advance the agenda in the fight against climate change,” the statement read.
The latest U.N. climate conference was hosted by Egypt in Sharm el-Sheikh, and this year’s will take place in Dubai.
The U.N. has not yet announced the 2024 venue, let alone the 2025 one, but the locations tend to rotate among regions, and the Brazilian government statement Friday indicated that a Latin American working group was choosing the 2025 venue, and had endorsed Belem. The final decision won't be made until COP 29 next year.
“It will be an honor for Brazil to welcome representatives from all over the world in a state in our Amazon,” Lula said in a video posted on his social media channels. “I went to COPs in Egypt, in Paris, in Copenhagen, and all people talk about is the Amazon. So I said, ‘Why don’t we go there so you see what the Amazon is like?'”
Brazil's foreign minister, Mauro Vieira, says in the video that the decision was made at the U.N. on May 18. The U.N. has yet to confirm the venue.
Brazil's announcement comes in a week in which Lula's administration faced headwinds on environmental governance from Brazil's congress. Lawmakers approved, by a large majority, a measure that eroded the environment ministry's authority over construction in forested and coastal areas, as well as other development.
Also this week, the congress is debating whether the state-run oil giant should be allowed to drill off the coast in the Amazon states of Amapa and Para.
EU official says Twitter abandons bloc's voluntary pact against disinformation
Twitter has dropped out of a voluntary European Union agreement to combat online disinformation, a top EU official said Friday.
European Commissioner Thierry Breton tweeted that Twitter had pulled out of the EU's disinformation “code of practice” that other major social media platforms have pledged to support. But he added that Twitter's “obligation” remained, referring to the EU's tough new digital rules taking effect in August.
“You can run but you can’t hide,” Breton said.
San Francisco-based Twitter responded with an automated reply, as it does to most press inquiries, and did not comment.
The decision to abandon the commitment to fighting false information appears to be the latest move by billionaire owner Elon Musk to loosen the reins on the social media company after he bought it last year. He has rolled back previous anti-misinformation rules, and has thrown its verification system and content-moderation policies into chaos as he pursues his goal of turning Twitter into a digital town square.
Google, TikTok, Microsoft and Meta, the parent of Facebook and Instagram, are among those that have signed up to the EU code, which requires companies to measure their work on combating disinformation and issue regular reports on their progress.
There were already signs Twitter wasn't prepared to live up to its commitments. The European Commission, the 27-nation bloc's executive arm, blasted Twitter earlier this year for failing to provide a full first report under the code, saying it provided little specific information and no targeted data.
Breton said that under the new digital rules that incorporate the code of practice, fighting disinformation will become a “legal obligation.”
“Our teams will be ready for enforcement,” he said.
ChatGPT-4: All you need to know
OpenAI’s ChatGPT-4 is the latest iteration of the groundbreaking Generative Pre-trained Transformer (GPT) series. Building on the success of its predecessors, GPT-4 offers enhanced capabilities, improved performance, and a more user-friendly experience. GPT-4 was publicly released on March 14, 2023, making it accessible to users worldwide. Let’s explore how to use ChatGPT-4, its new features, and more.
New Features of OpenAI's ChatGPT-4
OpenAI highlights three significant advancements in this next-generation language model: creativity, visual input, and longer context. According to OpenAI, GPT-4 demonstrates substantial improvements in creativity, excelling in both generating and collaborating with users on creative endeavors. Let’s see some of the top new features of ChatGPT-4.
Can Understand More Advanced Inputs
One of the major breakthroughs of GPT-4 lies in its enhanced capacity to comprehend intricate and nuanced prompts. OpenAI reports that GPT-4 exhibits human-level performance on diverse professional and academic benchmarks.
Read more: 7 Ways to Earn Money with ChatGPT
This achievement was demonstrated by subjecting GPT-4 to numerous human-level exams and standardized tests, including the SAT, the bar exam, and the GRE, without any specific training. Remarkably, GPT-4 not only grasped and successfully tackled these tests but also consistently outperformed its predecessor, GPT-3.5, across all assessments.
GPT-4 boasts support for more than 26 languages, including less widely spoken ones like Latvian, Welsh, and Swahili. When assessed using three-shot accuracy on the MMLU benchmark, GPT-4 surpassed the English-language performance of not only GPT-3.5 but also other prominent LLMs such as PaLM and Chinchilla in 24 of the languages tested.
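The "three-shot" setting mentioned above simply means the model is shown three solved examples before the real question, so it can infer the task format without any fine-tuning. A minimal sketch of how such a prompt is assembled (the questions below are invented placeholders, not actual MMLU items):

```python
# Sketch of few-shot prompt construction: solved (question, answer)
# pairs are concatenated ahead of the unsolved question, one block
# per example. The examples here are illustrative placeholders.

def build_few_shot_prompt(examples, question):
    """Join solved Q/A pairs, then append the unsolved question
    with an empty answer slot for the model to complete."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in examples]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)

# Three "shots" precede the real question, hence "three-shot".
shots = [
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 2?", "4"),
    ("Which planet is closest to the Sun?", "Mercury"),
]
prompt = build_few_shot_prompt(shots, "What gas do plants absorb?")
```

Higher shot counts give the model more context about the expected answer style, at the cost of a longer prompt.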
Multimodal Functionality
In contrast to its predecessor, GPT-3.5, GPT-4 introduces a remarkable advancement in its range of multimodal capabilities. This latest model can process not only text prompts but also image prompts.
Read more: How to Use AI Tools to Get Your Dream Job
This groundbreaking feature enables the AI to accept an image as input, interpret it, and explain it as effectively as a text prompt. The model seamlessly handles images of varying sizes and types, including documents that combine text and images, hand-drawn sketches, and even screenshots.
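As a rough illustration, a mixed text-and-image prompt can be expressed as a single user message whose content is a list of parts. The sketch below follows the multi-part message shape OpenAI documented for image input; the model name and URL are placeholders, and the request is only constructed, never sent:

```python
# Sketch of a multimodal chat message: one user turn carrying both
# a text instruction and an image reference. The URL and model name
# are illustrative placeholders; nothing is sent over the network.

def build_image_request(question, image_url, model="gpt-4"):
    """Pack a text question and an image into a single user message
    using a multi-part content list."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_image_request(
    "What is unusual about this picture?",
    "https://example.com/sketch.png",
)
```

The same structure extends to several images per message, which is how documents mixing text, screenshots, and sketches can be handled in one prompt.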
Enhanced Steerability
OpenAI further claims that GPT-4 exhibits a remarkable level of steerability. Notably, it has become stronger in staying true to its assigned character, reducing the likelihood of deviations when deployed in character-based applications.
Developers now have the ability to prescribe the AI’s style and task by providing specific instructions within the system message. These messages enable API users to customize the user experience extensively while operating within defined parameters. To ensure model integrity, OpenAI is also actively working on enhancing the security of these messages, as they represent the most common method for potential misuse.
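The system-message mechanism described above can be sketched as follows. The request shape matches OpenAI's chat-completions format, but the persona and prompts are invented for illustration, and the request is only assembled, not sent:

```python
# Sketch of steering a model's style and task via a system message.
# The first message pins down the assistant's persona and constraints
# before any user input arrives; the persona here is illustrative.

def build_chat_request(system_instruction, user_prompt, model="gpt-4"):
    """Assemble a chat request whose system message defines the
    character the model should stay in for the conversation."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_instruction},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request(
    "You are a Socratic tutor. Never give answers directly; "
    "respond only with guiding questions.",
    "What is the derivative of x**2?",
)
```

Because the system message sits outside the user's turn, an application can enforce tone, scope, or safety constraints without exposing them to end users — which is also why hardening it against injection matters.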
Read more: ChatGPT ‘passed’ BCS exam, according to Science Bee’s experiment
ChatGPT's chief to testify before US Congress as concerns grow about artificial intelligence's risks
The head of the artificial intelligence company that makes ChatGPT is set to testify before US Congress as lawmakers call for new rules to guide the rapid development of AI technology.
OpenAI CEO Sam Altman is scheduled to speak at a Senate hearing Tuesday.
His San Francisco-based startup rocketed to public attention after its release late last year of ChatGPT, a free chatbot tool that answers questions with convincingly human-like responses.
Also Read: How Europe is building artificial intelligence guardrails
What started out as a panic among educators about ChatGPT's use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.
And while there's no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.
“Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said a prepared statement from Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law.
Founded in 2015, OpenAI is also known for other AI products including the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.
Also Read: Future of AI and humanity: 4 dangers that most worry the 'Godfather of AI'
Also testifying will be IBM's chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI's latest model, GPT-4, described as more powerful than ChatGPT.
“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said the panel's ranking Republican, Sen. Josh Hawley of Missouri. “This hearing marks a critical first step towards understanding what Congress should do.”
Altman and other tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. In a copy of her prepared remarks, IBM's Montgomery asks Congress to take a “precision regulation" approach.
"This means establishing rules to govern the deployment of AI in specific use-cases, not regulating the technology itself,” Montgomery said.
Musk, new Twitter CEO Linda Yaccarino spar over content moderation during on-stage interview
On Friday, Elon Musk announced that NBC Universal's Linda Yaccarino will serve as the new CEO of Twitter. Yaccarino is a longtime advertising executive credited with integrating and digitizing ad sales at NBCU. Her challenge now will be to woo back advertisers that have fled Twitter since Musk acquired it last year for $44 billion.
Since taking ownership, Musk has fired thousands of Twitter employees, largely scrapped the trust-and-safety team responsible for keeping the site free of hate speech, harassment and misinformation, and blamed others — particularly mainstream media organizations, which he views as untrustworthy “competitors” to Twitter for ad dollars — for exaggerating Twitter's problems.
In April, the two met for an on-stage conversation at a marketing convention in Miami Beach, Florida. Here are some highlights of their conversation:
MUSK AND YACCARINO SPAR OVER CONTENT MODERATION
The Miami discussion was cordial, although both participants drew some distinct lines in the sand. On a few occasions, Yaccarino steered the conversation toward issues of content moderation and the apparent proliferation of hate speech and extremism since Musk took over the platform. She couched her questions in the context of whether Musk could help advertisers feel more welcome on the platform.
At one point, she asked if Musk was willing to let advertisers “influence” his vision for Twitter, explaining that it would help them get more excited about investing more money — "product development, ad safety, content moderation — that's what the influence is."
Musk shut her down. “It’s totally cool to say that you want to have your advertising appear in certain places in Twitter and not in other places, but it is not cool to try to say what Twitter will do," he said. “And if that means losing advertising dollars, we lose it. But freedom of speech is paramount.”
MUSK REPEATS: NO SPECIAL INFLUENCE FOR ADVERTISERS
Yaccarino returned to the issue a few moments later when she asked Musk if he planned to reinstate the company's “influence council,” a once-regular meeting with marketing executives from several of Twitter's major advertisers. Musk again demurred.
“I would be worried about creating a backlash among the public,” he said. “Because if the public thinks that their views are being determined by, you know, a small number of (marketing executives) in America, they will be, I think, upset about that."
Musk went on to acknowledge that feedback is important, and suggested Twitter should aim for a “sensible middle ground” that ensures the public “has a voice” while advertisers focus on the ordinary work of improving sales and the perception of their brands.
PRESSING ELON ON HIS OWN TWEETS
Musk didn't pass up the opportunity to sell the assembled marketers a new plan to solve Twitter's problems with objectionable tweets, which the company had announced the day before. Musk called the policy “freedom of speech but not freedom of reach," describing it as a way to limit the visibility of hate speech and similar problems without actually removing rule-breaking tweets.
Yaccarino took a swing. “Does it apply to your tweets?” Musk has a history of posting misinformation and occasionally offensive tweets, often in the early morning hours.
Musk acknowledged that it does, adding that his tweets can also be tagged with “community notes” that provide additional context to tweets. He added that his tweets receive no special boosts from Twitter.
“Will you agree to be more specific and not tweet after 3 a.m.?" Yaccarino asked.
“I will aspire to tweet less after 3 a.m.,” Musk replied.