San Francisco, Feb 23 (AP/UNB) — Twitter co-founder Evan Williams will leave the social media company's board of directors at the end of the month.
Twitter announced his departure in a securities filing Friday, and Williams tweeted confirmation of the news.
Williams served as Twitter's CEO from 2008 to 2010 and is now chief executive of the publishing site Medium.com. He tweeted that stepping down will let him focus on other projects and that he will still be rooting for the Twitter team.
He was the second-largest Twitter shareholder on the company's board behind CEO and co-founder Jack Dorsey.
Dorsey tweeted his appreciation for Williams, who served on the board for 12 years.
London, Feb 18 (AP/UNB) — British lawmakers issued a scathing report Monday that accused Facebook of intentionally violating privacy and anti-competition laws in the U.K., and called for greater oversight of social media companies.
The report on fake news and disinformation on social media sites followed an 18-month investigation by Parliament's influential media committee. The committee recommended that social media sites should have to follow a mandatory code of ethics overseen by an independent regulator to better control harmful or illegal content.
The report called out Facebook in particular, saying that the site's structure seems to be designed to "conceal knowledge of and responsibility for specific decisions."
"It is evident that Facebook intentionally and knowingly violated both data privacy and anti-competition laws," the report states. It also accuses CEO Mark Zuckerberg of showing contempt for the U.K. Parliament by declining numerous invitations to appear before the committee.
"Companies like Facebook should not be allowed to behave like 'digital gangsters' in the online world, considering themselves to be ahead of and beyond the law," the report added.
Parliamentary committee reports are intended to influence government policy, but are not binding.
Facebook said it shared "the committee's concerns about false news and election integrity" and was open to "meaningful regulation."
"While we still have more to do, we are not the same company we were a year ago," said Facebook's U.K. public policy manager, Karim Palant.
"We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse."
Facebook and other internet companies have been facing increased scrutiny over how they handle user data and have come under fire for not doing enough to stop misuse of their platforms by groups trying to sway elections.
The report echoes and expands upon an interim report with similar findings issued by the committee in July. And in December, a trove of documents released by the committee offered evidence that the social network had used its enormous cache of user data as a competitive weapon, often in ways designed to keep its users in the dark.
Facebook faced its biggest privacy scandal last year when Cambridge Analytica, a now-defunct British political data-mining firm that worked for the 2016 Donald Trump campaign, accessed the private information of up to 87 million users.
New York, Feb 15 (AP/UNB) — A report says Facebook and the Federal Trade Commission are negotiating a "multibillion dollar" fine for the social network's privacy lapses.
The Washington Post said Thursday that the fine would be the largest ever imposed on a tech company. Citing unnamed sources, it also said the two sides have not yet agreed on an exact amount.
Facebook has had several high-profile privacy lapses in the past couple of years. The FTC has been looking into the Cambridge Analytica scandal since last March. The data-mining firm accessed the data of some 87 million Facebook users without their consent.
At issue is whether Facebook is in violation of a 2011 agreement with the FTC promising to protect user privacy. Facebook and the FTC declined to comment.
London, Feb 9 (AP/UNB) — Instagram has agreed to ban graphic images of self-harm after objections were raised in Britain following the suicide of a teen whose father said the photo-sharing platform had contributed to her decision to take her own life.
Instagram chief Adam Mosseri said Thursday evening the platform is making a series of changes to its content rules.
He said: "We are not where we need to be on self-harm and suicide, and we need to do more to protect the most vulnerable in our community."
Mosseri said further changes will be made.
"I have a responsibility to get this right," he said. "We will get better and we are committed to finding and removing this content at scale, and working with experts and the wider industry to find ways to support people when they're most in need."
The call for changes was backed by the British government after the family of 14-year-old Molly Russell found material related to depression and suicide on her Instagram account after her death in 2017.
Her father, Ian Russell, said he believes the content Molly viewed on Instagram played a contributing role in her death, a charge that received wide attention in the British press.
The changes were announced after Instagram and other tech firms, including Facebook, Snapchat and Twitter, met with British Health Secretary Matt Hancock and representatives from the Samaritans, a mental health charity that works to prevent suicide.
Instagram is also removing non-graphic images of self-harm from searches.
Facebook, which owns Instagram, said in a statement that independent experts advise that Facebook should "allow people to share admissions of self-harm and suicidal thoughts but should not allow people to share content promoting it."