San Francisco, Mar 22 (AP/UNB) — Facebook left hundreds of millions of user passwords readable by its employees for years, the company acknowledged Thursday after a security researcher exposed the lapse.
By storing passwords in readable plain text, Facebook violated fundamental computer-security practices. Those call for organizations and websites to save passwords in a scrambled form that makes it almost impossible to recover the original text.
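The scrambled form these practices call for is a salted, one-way hash: the site stores only the hash, so even someone with database access cannot read the original password. A minimal sketch of the general technique using Python's standard library (this illustrates the practice the experts describe, not Facebook's actual implementation):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt ensures identical passwords
    # do not produce identical stored hashes.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Re-derive the hash from the submitted password; the original
    # plain text is never stored and cannot be recovered.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

Under this scheme, an internal log or crash dump that captured only the salt and digest would reveal nothing useful; Facebook's lapse was that the raw text itself was written out before any such hashing took place.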
"There is no valid reason why anyone in an organization, especially the size of Facebook, needs to have access to users' passwords in plain text," said cybersecurity expert Andrei Barysevich of Recorded Future.
Facebook said there is no evidence its employees abused access to this data, though thousands of employees could have searched it. The company said the passwords were stored on internal company servers, where no outsiders could access them. Even so, some privacy experts suggested that users change their Facebook passwords.
The incident reveals yet another huge and basic oversight at a company that insists it is a responsible guardian for the personal data of its 2.3 billion users worldwide.
The security blog KrebsOnSecurity said Facebook may have left the passwords of some 600 million Facebook users vulnerable. In a blog post, Facebook said it will likely notify "hundreds of millions" of Facebook Lite users, millions of Facebook users and tens of thousands of Instagram users that their passwords were stored in plain text.
Facebook Lite is a version designed for people with older phones or low-speed internet connections. It is used primarily in developing countries.
Last week, Facebook CEO Mark Zuckerberg touted a new "privacy-focused vision" for the social network that would emphasize private communication over public sharing. The company wants to encourage small groups of people to carry on encrypted conversations that neither Facebook nor any other outsider can read.
The fact that the company couldn't manage to do something as simple as encrypting passwords, however, raises questions about its ability to manage more complex encryption issues — such as in messaging — flawlessly.
Facebook said it discovered the problem in January. But security researcher Brian Krebs wrote that in some cases the passwords had been stored in plain text since 2012. Facebook Lite launched in 2015 and Facebook bought Instagram in 2012.
The problem, according to Facebook, wasn't due to a single bug. During a routine review in January, the company says, it found that the plain text passwords were unintentionally captured and stored in its internal storage systems. This happened in a variety of circumstances — for example, when an app crashed and the resulting crash log included a captured password.
But Alex Holden, the founder of Hold Security, said Facebook's explanation is not an excuse for sloppy security practices that allowed so many passwords to be exposed internally.
Recorded Future's Barysevich said he could not recall any major company caught leaving so many passwords exposed. He said he's seen a number of instances where much smaller organizations made such information readily available — not just to programmers but also to customer support teams.
Security analyst Troy Hunt, who runs the "haveibeenpwned.com" data breach website, said the situation may be embarrassing for Facebook but not dangerous unless an adversary gained access to the passwords. Facebook has had major breaches, most recently in September when attackers accessed some 29 million accounts.
Jake Williams, president of Rendition Infosec, said storing passwords in plain text is "unfortunately more common than most of the industry talks about" and tends to happen when developers are trying to rid a system of bugs.
He said the Facebook blog post suggests storing passwords in plain text may have been "a sanctioned practice," although he said it's also possible a "rogue development team" was to blame.
Hunt and Krebs both likened Facebook's failure to similar stumbles last year on a far smaller scale at Twitter and GitHub; the latter is a site where developers store code and track projects. In those cases, software bugs were blamed for accidentally storing plaintext passwords in internal logs.
Facebook's normal procedure for passwords is to store them in a scrambled, unreadable form, the company noted Thursday in its blog post.
That's good to know, although Facebook engineers apparently added code that defeated the safeguard, said security researcher Rob Graham. "They have all the proper locks on the doors, but somebody left the window open," he said.
New York, Mar 18 (AP/UNB) — Facebook's effort to establish a service that provides its users with local news and information is being hindered by the lack of outlets where the company's technicians can find original reporting.
The service, launched last year, is currently available in some 400 cities in the United States. But the social media giant said it has found that 40 percent of Americans live in places where there weren't enough local news stories to support it.
Facebook announced Monday it would share its research with academics at Duke, Harvard, Minnesota and North Carolina who are studying the extent of news deserts created by newspaper closures and staff downsizing.
Some 1,800 newspapers have closed in the United States over the last 15 years, according to the University of North Carolina. Newsroom employment has declined by 45 percent as the industry struggles with a broken business model partly caused by the success of companies on the Internet, including Facebook.
The Facebook service, called "Today In," collects news stories from various local outlets, along with government and community groups. The company deems a community unsuitable for "Today In" if it cannot find a single day in a month with at least five news items available to share.
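The eligibility rule Facebook describes reduces to a simple check over a month of daily story counts. A sketch of that criterion (the function name and data shape are illustrative assumptions, not Facebook's actual code):

```python
def eligible_for_today_in(daily_item_counts: list[int]) -> bool:
    # A community qualifies only if at least one day in the month
    # had five or more local news items available to share.
    return any(count >= 5 for count in daily_item_counts)
```

By this standard, a town whose best day all month produced only four shareable items would count as a news desert for the service.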
The geographical disparity is not wide: the percentage of news deserts is somewhat higher in the Northeast and Midwest, at 43 percent, Facebook said, while in the South and West the figure is 38 percent.
"It affirms the fact that we have a real lack of original local reporting," said Penelope Muse Abernathy, a University of North Carolina professor who studies the topic. She said she hopes the data helps pinpoint areas where the need is greatest, eventually leading to some ideas for solutions.
Facebook doesn't necessarily have the answers. "Everyone can learn from working together," said Ann Kornblut, director of news initiatives at the company.
The company plans to award some 100 grants, ranging from $5,000 to $25,000, to people with ideas for making more news available, said Jimmy O'Keefe, product marketing manager for "Today In."
That comes on top of $300 million in grants Facebook announced in January to help programs and partnerships designed to boost local news.
The company doesn't plan to launch newsgathering efforts of its own, Kornblut said.
"Our history has been — and we will probably stick to it — to let journalists do what they do well and let us support them and let them do their work," she said.
London, Mar 16 (AP/UNB) — Internet companies scrambled Friday to remove graphic video filmed by a gunman in the New Zealand mosque shootings that was widely available on social media for hours after the horrific attack.
Facebook said it took down a livestream of the shootings and removed the shooter's Facebook and Instagram accounts after being alerted by police. At least 49 people were killed at two mosques in Christchurch, New Zealand's third-largest city.
Using what appeared to be a helmet-mounted camera, the gunman livestreamed in horrifying detail 17 minutes of the attack on worshippers at the Al Noor Mosque, where at least 41 people died. Several more worshippers were killed at a second mosque a short time later.
The shooter also left a 74-page manifesto that he posted on social media under the name Brenton Tarrant, identifying himself as a 28-year-old Australian and white nationalist who was out to avenge attacks in Europe perpetrated by Muslims.
"Our hearts go out to the victims, their families and the community affected by this horrendous act," Facebook New Zealand spokeswoman Mia Garlick said in a statement.
Facebook is "removing any praise or support for the crime and the shooter or shooters as soon as we're aware," she said. "We will continue working directly with New Zealand Police as their response and investigation continues."
Twitter, YouTube owner Google and Reddit also were working to remove the footage from their sites.
The furor highlights once again the speed at which graphic and disturbing content from a tragedy can spread around the world and how Silicon Valley tech giants are still grappling with how to prevent that from happening.
British tabloid newspapers such as The Daily Mail and The Sun posted screenshots and video snippets on their websites.
One journalist tweeted that several people sent her the video via the Facebook-owned WhatsApp messaging app.
New Zealand police urged people not to share the footage, and many internet users called for tech companies and news sites to take the material down.
Some people expressed outrage on Twitter that the videos were still circulating hours after the attack.
"Google is actively inciting violence," tweeted British journalist Carole Cadwalladr with a screen grab of search results of the video.
The video's spread underscores the challenge for Facebook even after stepping up efforts to keep inappropriate and violent content off its platform. In 2017 it said it would hire 3,000 people to review videos and other posts, on top of the 4,500 people Facebook already tasks with identifying criminal and other questionable material for removal.
But that's just a drop in the bucket of what is needed to police the social media platform, said Siva Vaidhyanathan, author of "Antisocial Media: How Facebook Disconnects Us and Undermines Democracy."
If Facebook wanted to monitor every livestream to prevent disturbing content from making it out in the first place, "they would have to hire millions of people," something it's not willing to do, said Vaidhyanathan, who teaches media studies at the University of Virginia.
"We have certain companies that have built systems that have inadvertently served the cause of violent hatred around the world," Vaidhyanathan said.
Facebook and YouTube were designed to share pictures of babies, puppies and other wholesome things, he said, "but they were expanded at such a scale and built with no safeguards such that they were easy to hijack by the worst elements of humanity."
With billions of users, Facebook and YouTube are "ungovernable" at this point, said Vaidhyanathan, who called Facebook's livestreaming service a "profoundly stupid idea."
In footage that at times resembled scenes from a first-person shooter video game, the mosque shooter was seen spraying terrified worshippers with bullets, sometimes re-firing at people he had already cut down.
He then walked outside, shooting at people on a sidewalk. Children's screams could be heard in the distance as he strode to his car to get another rifle, then returned to the mosque, where at least two dozen people could be seen lying in pools of blood.
He walked back outside, shot a woman, got back in his car, and drove away.
The livestream video was reminiscent of violent first-person shooter video games such as "Counter-Strike" or "Doom" as the gunman went around corners and calmly entered rooms firing at helpless victims. Many shooting games allow players to toggle between close-range and long-range weapons, and the gunman switched from a shotgun to a rifle during the video, reloading as he moved around.
At one point, the shooter even paused to give a shout-out to one of YouTube's top personalities, known as PewDiePie, with tens of millions of followers, who has made jokes criticized as anti-Semitic and posted Nazi imagery in his videos.
"Remember, lads, subscribe to PewDiePie," the gunman said.
The seemingly incongruous reference to the Swedish vlogger known for his video game commentaries as well as his racist references was instantly recognizable to many of his 86 million followers.
The YouTube sensation has been engaged in an online battle over which channel is the most subscribed to, and his followers have taken to posting messages encouraging others to "subscribe to PewDiePie."
PewDiePie, whose real name is Felix Kjellberg, said on Twitter he felt "absolutely sickened" that the alleged gunman referred to him during the livestream. "My heart and thoughts go out to the victims, families and everyone affected," he said.
The hours it took to take the violent video and manifesto down are "another major black eye" for social media platforms, said Dan Ives, managing director of Wedbush Securities.
The rampage's broadcast "highlights the urgent need for media platforms such as Facebook and Twitter to use more artificial intelligence as well as security teams to spot these events before it's too late," Ives said.
Hours after the shooting, Reddit took down two subreddits known for sharing video and pictures of people being killed or injured — R/WatchPeopleDie and R/Gore — apparently because users were sharing the mosque attack video.
"We are very clear in our site terms of service that posting content that incites or glorifies violence will get users and communities banned from Reddit," it said in a statement. "Subreddits that fail to adhere to those site-wide rules will be banned."
Videos and posts that glorify violence are against Facebook's rules, but Facebook has drawn criticism for responding slowly to such items, including video of a slaying in Cleveland and a live-streamed killing of a baby in Thailand. The latter was up for 24 hours before it was removed.
In most cases, such material gets reviewed for possible removal only if users complain. News reports and posts that condemn violence are allowed. This makes for a tricky balancing act for the company. Facebook says it does not want to act as a censor, as videos of violence, such as those documenting police brutality or the horrors of war, can serve an important purpose.
San Francisco, Mar 14 (AP/UNB) — The New York Times reports that federal prosecutors are conducting a criminal investigation into Facebook's data deals with major electronics manufacturers.
The newspaper says a grand jury in New York has subpoenaed information from at least two companies known for making smartphones and other devices, citing two unnamed people familiar with the request. It reports that both companies had data partnerships with Facebook that gave them access to the personal information of hundreds of millions of users.
Facebook describes those data deals as innocuous efforts to help smartphone makers provide Facebook features to users before the social network had its own app.
The Times reports that it is not clear when the inquiry began or exactly what it is focusing on. Facebook did not respond to a request for comment.
New York, Mar 14 (AP/UNB) — Facebook says it is aware of outages on its platforms including Facebook, Messenger and Instagram and is working to resolve the issue.
According to Facebook's status page, the outages started around 11 a.m. EDT on Wednesday. That page, which calls the problem a "partial outage," states that Facebook has experienced "increased error rates" since that time.
Downdetector.com, a site that monitors site outages, said the Facebook problem affected parts of the U.S., including the East and West Coasts, as well as parts of Europe and elsewhere. Both Facebook's desktop site and app appeared to be affected. Some users saw a message that said Facebook was down for "required maintenance."
Facebook did not say what was causing the outages.
Via its Twitter account, Facebook said the outage was not due to a "distributed denial of service" or DDoS attack, a type of attack that hackers use to interrupt service to a site.