China says its diplomats and government officials will fully exploit foreign social media platforms such as Facebook and Twitter that are blocked off to its own citizens.
Foreign ministry spokesman Geng Shuang said Monday that the government was following the example of "diplomatic agencies and diplomats of other countries" in embracing such platforms to provide "better communication with the people outside and to better introduce China's situation and policies."
Facebook, Twitter and other social media platforms have tried for years without success to be allowed into the lucrative Chinese market, where Beijing has helped create politically reliable analogues such as WeChat and Weibo. Their content is carefully monitored by the companies and by government censors.
Despite that, Geng said China is "willing to strengthen communication with the outside world through social media such as Twitter to enhance mutual understanding." He also insisted that the Chinese internet remained open and said the country has the largest number of users of any nation, adding, "we have always managed the internet in accordance with laws and regulations."
The canny use of social media by pro-democracy protesters in Hong Kong has further deepened China's concern over the use of such platforms, prompting further crackdowns on the mainland, including on the use of virtual private networks.
Facebook Inc. said Thursday that it will continue to allow political ads on its platforms, including Instagram, even though ads run by politicians may contain false information.
Facebook Director of Product Management Rob Leathern reaffirmed the policy, writing that "people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public."
Leathern said Facebook does not intend to follow Twitter, which bans political ads outright, or Google, which limits how political ads can be targeted.
Facebook acknowledged that its policy on political ads has drawn heavy criticism, but it maintained that decisions on such matters should not be left to private companies like Facebook.
Leathern said Facebook will give users more control over how they encounter political ads, including a setting that lets them stop seeing ads run by political figures.
He noted that the U.S. tech giant will add more features to its Ad Library, a public tool launched in May 2018 that lets Facebook users view all political ads run by politicians and their campaigns on its platform.
Facebook has faced widespread scrutiny over its role in politics since the 2016 U.S. elections, and it has been criticized for giving politicians too much freedom to post misinformation in advertisements that appeared to violate its own community standards.
Facebook says it is banning "deepfake" videos, the false but realistic clips created with artificial intelligence and sophisticated tools, as it steps up efforts to fight online manipulation.
The social network said late Monday that it's beefing up its policies to remove videos edited or synthesized in ways that aren't apparent to the average person, and which could dupe someone into thinking the video's subject said something he or she didn't actually say.
Created with artificial intelligence or machine learning, deepfakes combine or replace content to produce images that can be almost impossible to distinguish from the real thing.
"While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases," Facebook's vice president of global policy management, Monika Bickert, said in a blog post.
However, she said the new rules won't include parody or satire, or clips edited just to change the order of words. The exceptions underscore the balancing act Facebook and other social media services face in their struggle to stop the spread of online misinformation and "fake news" while also respecting free speech and fending off allegations of censorship.
The U.S. tech company has been grappling with how to handle the rise of deepfakes after facing criticism last year for refusing to remove a doctored video of House Speaker Nancy Pelosi slurring her words, which was viewed more than 3 million times. Experts said the crudely edited clip was more of a "cheap fake" than a deepfake.
Then, a pair of artists posted fake footage of Facebook CEO Mark Zuckerberg showing him gloating over his one-man domination of the world. Facebook also left that clip online. The company said at the time that neither video violated its policies.
The problem of altered videos is taking on increasing urgency as experts and lawmakers try to figure out how to prevent deepfakes from being used to interfere with U.S. presidential elections in November.
Facebook said any videos that don't meet existing standards for removal can still be reviewed by independent third-party fact-checkers. Those deemed false will be flagged as such to anyone trying to share or view them, which Bickert said was a better approach than just taking them down.
"If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem," Bickert said. "By leaving them up and labelling them as false, we're providing people with important information and context."
A Ukrainian security researcher reported finding a database with the names, phone numbers and unique user IDs of more than 267 million Facebook users — nearly all U.S.-based — on the open internet. That data was likely harvested by criminals, said researcher Bob Diachenko, an independent security consultant in Kyiv.
The database, which Diachenko discovered with a search engine, was freely accessible online for at least 10 days beginning Dec. 4, he said. He notified the internet provider where it was hosted when he found it on Dec. 14; five days later it was no longer available.
Diachenko said someone had uploaded the database to a hacker forum two days before he discovered it, so it may already have been shared among online thieves.
He first reported the finding Thursday in partnership with the U.K. tech news website Comparitech, whose editor, Paul Bischoff, said the site has been helping write up Diachenko's discoveries of unsecured databases for about a year.
The researcher provided the AP with a 10-record sample from the database and the IDs — and two phone numbers that were answered — checked out against real Facebook users.
The evidence suggests the data was collected illegally, most likely by criminals in Vietnam, who may have "scraped" it from public Facebook pages or somehow obtained privileged access to the service. Scraping is automated data-harvesting done by bots. A small fraction of the database includes details on Vietnam-based users.
Diachenko said he did not share the database with Facebook, which did not directly confirm the finding. In a statement, the social network said it was investigating the issue and that the finding "likely" involved information obtained before Facebook took unspecified data-protection measures in recent years.
In 2018, the social media giant disabled a feature that allowed users to search for one another via phone number following revelations that the political firm Cambridge Analytica had accessed information on up to 87 million Facebook users without their knowledge or consent.
Diachenko said he had not determined when the data was collected. He said all the records had time stamps from January to June 2019 but that it was unclear who generated them.
Security experts say the affected Facebook users are at higher risk of being targeted by spam, password-stealing phishing attacks and identity theft attempts. The information can be cross-referenced with physical and email addresses and other data obtained in other data breaches. Facebook user IDs are unique numbers associated with individual accounts.
In September, the news site TechCrunch reported that Facebook IDs and phone numbers for more than 400 million users were similarly found exposed online by a researcher.
In March, Facebook disclosed that it had left hundreds of millions of user passwords readable by its employees on internal servers for years, a disclosure that came after a security researcher exposed the lapse.
Twitter has identified and removed nearly 6,000 accounts that it said were part of a coordinated effort by Saudi government agencies and individuals to advance the country's geopolitical interests.
Separately, Facebook said it removed hundreds of Facebook accounts, groups and pages linked to inauthentic behavior from two separate groups, one originating in the country of Georgia and one in Vietnam, which targeted people both in Vietnam and in the U.S.
Facebook said some of the accounts used profile photos generated by artificial intelligence and masqueraded as Americans, making it one of the first known misinformation efforts to use AI-generated material.
Tech companies have stepped up efforts to tackle misinformation on their services ahead of next year's U.S. presidential elections. The efforts followed revelations that Russians bankrolled thousands of fake political ads during the 2016 elections to sow dissent among Americans.
Twitter's and Facebook's announcements underscore the fact that misinformation concerns aren't limited to the U.S. and Russia.
In a blog post Friday, Twitter said the removed Saudi accounts were amplifying messages favorable to Saudi authorities, mainly through "aggressive liking, retweeting and replying." While the majority of the content was in Arabic, Twitter said the tweets also amplified discussions about sanctions in Iran and appearances by Saudi government officials in Western media.
"Governments have started to launch influence campaigns the same ways commercial enterprises launch campaigns to sell detergent or cars," said James Ludes, a national defense expert who teaches international relations and public policy at Salve Regina University in Rhode Island.
He said the Russian efforts in 2016 showed it was possible to "actually change public attitudes through the targeted use of social media."
While the attempts to root out the campaigns may seem like a game of whack-a-mole, he said companies have at least shown progress in taking steps to identify and root out manipulation campaigns run by foreign powers.
Twitter began archiving tweets and media it deems to be associated with known state-backed information operations in 2018. In August, it shut down 200,000 Chinese accounts that targeted the Hong Kong protests.
The 5,929 accounts removed and added to the archives are part of a larger group of 88,000 accounts engaged in "spammy behavior" across a wide range of topics. But Twitter isn't disclosing all of them because some might be legitimate accounts taken over through hacking.
The Twitter accounts were linked to Smaat, a social media marketing firm in Saudi Arabia that managed accounts for many Saudi government departments. The accounts used third-party automated tools to post large volumes of non-political content, activity Twitter said was used to mask the same accounts' political maneuvering.
Samuel Woolley, a professor at the University of Texas at Austin who studies disinformation, said that while the Saudi campaign used basic manipulation techniques, including the use of likes and retweets to give the illusion of popularity, the campaign's size and scale were unusual. The existence of a thousands-strong army of Saudi accounts also shows that social media companies still don't have a good solution, he said, despite the progress they have made at identifying state-backed accounts.
"It's really clear we have to do something about it," he said. "It can't just be after the fact. We have to get better about detecting in real time."
Messages left with Saudi officials in Riyadh, Saudi Arabia, and the country's embassy in Washington were not immediately returned.
The Saudi government has used different tactics to control speech and keep reformers and others from organizing, including employing troll armies to harass and intimidate users online. It has also arrested and imprisoned Twitter users.
In September, Twitter suspended the account of the crown prince's former top adviser, Saud al-Qahtani, who also served as a director of the Saudi Federation for Cybersecurity. As with Friday's announcement, Twitter said that account had violated the company's platform manipulation policy.
Last month, two former Twitter employees were charged with acting as agents of Saudi Arabia without registering with the U.S. government. The complaint details a coordinated effort by Saudi government officials to recruit employees at the social media giant to look up the private data of Twitter accounts, including email addresses linked to the accounts and internet protocol addresses that can give up a user's location.
Facebook said the Georgia group targeted domestic audiences, while the Vietnam group focused mainly on the U.S., as well as Vietnamese-, Spanish- and Chinese-speaking audiences around the world.
The company said the groups created networks of accounts to mislead others about who they were and what they were doing. To evade detection, the operators used a combination of fake and real accounts of people in the U.S. to manage pages and groups, the company said.
"We are making progress rooting out this abuse, but as we've said before, it's an ongoing challenge," Nathaniel Gleicher, Facebook's head of security policy, said in a blog post.