How to Increase YouTube Subscribers for Free?
YouTube is one of the most popular online video platforms in the world, with millions of videos that people can watch for free, so gaining YouTube subscribers is a great way to build a presence as an internet personality. But with so many channels competing for attention, it can be difficult to get noticed, and you may not know where to start. That is why we have done the work for you and rounded up 12 ways you can gain more YouTube subscribers for free.
Ways to increase your YouTube subscribers free of cost
Make videos frequently
As obvious as this may seem, it's by far the best way to get more subscribers. Make more videos and your audience will grow quicker than you can imagine! As long as you are putting out content of some sort, be it a video review or an informative guide, people will start to take notice and subscribe to see what else you do. If you can't think of anything to make a video about, try looking at topics that other people are successful at covering in their videos. For example, if you're a beauty expert then make a video on how to apply foundation like a pro.
Use hashtags
YouTube has a great feature that lets you put a hashtag such as #videoslikes in your video title, which can get the video listed in more places on YouTube and in search results. You can also use hashtags like #vlog or #videoblog, or whatever else fits your content. By using these tags, your video will be found by more people, giving it a better chance of being viewed.
Optimize your title
Make it catchy and relevant to your content. Include keywords, save the best information for your first paragraph, use keywords in subtitles, and mention the social media accounts where you share your videos. That is how your videos will get discovered by more people, which will eventually help you gain more subscribers.
Promote on social media
Share links and images on your social media channels (Facebook, Twitter). Add a link back to your YouTube page and use video thumbnails when promoting videos. Also invite fans to watch or share links (tweets, hashtags) from your channel.
Use keywords as much as possible
When creating new videos, be sure to use keywords related to your channel content. But don't spam them either. Use a mixture of words that fit your style and interests that are going to get users who are interested in those topics. You can use free keyword tools to find the best keywords to use.
Add annotations
You can bring your videos to life with annotations, which are captions or notes that flash on the screen throughout a video. If you are a beauty vlogger who uses lots of makeup products in your videos, you could add annotations with the product names and brands throughout your video. You can also include helpful tips for viewers who want to know more about what the video covers. This is a great way to get more YouTube subscribers, as it makes your channel relevant and full of information!
Create videos following trending topics
Want more YouTube subscribers? Just make a video on a hot topic, whether it's popular news or something that fans have been keeping an eye on. As long as you make your video different enough, you can use this method for any type of content. Plus, if your video is about what other people like then it will be even more likely that people will subscribe to watch your future videos!
Make your video unique
Be sure to make your videos as original as possible. Whether you are recording a new song in the music industry or reviewing a new book on the literary scene, you need to be sure that people will want to watch your content and not just go elsewhere. Without this, it's easy for people to forget about your channel. If they like you enough in the beginning, they will most likely stick around to see what other content you have created!
Create a viral video
Comedy, drama, fun facts—videos that make viewers laugh or cry are much more likely to attract attention than those that don’t. If you can make your audience laugh or gasp at your creativity, you will have a much better chance at getting more YouTube subscribers than if you just shove a product in their faces.
Make videos relevant to search
Just because a video is popular doesn’t always mean it will be found by people searching for the same topic. Make sure you are writing and creating content that YouTube’s algorithms surface in search results, so they can help users find what they want. If your videos show up in relevant searches, you are likely to get more views as well as subscribers. So you may need to research how the YouTube algorithm works.
SEO
Search Engine Optimization (SEO) is a great way to get your videos shown in Google search results. YouTube SEO is different from website SEO, so optimize your videos specifically for search engines. As a result, you will get more views and eventually attract more subscribers.
Participate in YouTube events and challenges
Another way of gaining more subscribers is to participate in YouTube events and challenges. There are tons of YouTube challenges that you can take part in. Usually, in contests, everyone has to create a video or make artwork on selected topics. If you are one of the lucky few that can make their video go viral, you could be on your way to getting YouTube subscribers at an incredible rate.
Conclusion
YouTube is highly competitive nowadays, and it’s not always easy to get noticed. However, there are numerous ways to stand out and grow your subscriber count. This article covered some of the top ways to grow YouTube subscribers organically, free of cost. You can also create your own strategy to attract more subscribers.
WhatsApp pushes privacy update to comply with Irish ruling
WhatsApp is adding more details to its privacy policy and flagging that information for European users, after Irish regulators slapped the chat service with a record fine for breaching strict EU data privacy rules.
Starting Monday, WhatsApp's privacy policy will be reorganized to provide more information on the data it collects and how it's used. The company said it's also explaining in more detail how it protects data shared across borders for its global service and the legal foundations for processing the data.
WhatsApp is owned by Facebook, now renamed Meta Platforms. With the update, users in Europe will see a banner notification at the top of their chat list that will take them to the new information.
WhatsApp is taking the action after getting hit with a record 225 million euro ($267 million) fine in September from Ireland’s data privacy watchdog for violating stringent European Union data protection rules on transparency about sharing people’s data with other Facebook companies.
The chat service said it disagreed with the decision, but it has to comply by updating its policy while it appeals. The update doesn’t affect how data is handled, and users won’t have to agree to anything new or take any other action.
Ireland's Data Privacy Commission is the lead privacy regulator for WhatsApp under European Union rules because its regional headquarters is in Dublin.
WhatsApp was embroiled in a separate privacy controversy earlier this year when it botched a different update to its privacy policy that raised concerns users were being forced to agree to share more of their data with Facebook. That update sparked a backlash from users who switched to rival services like Telegram and Signal, an investigation by Turkey's competition watchdog, a temporary German ban on gathering data, and a complaint by EU consumer groups.
A six-hour outage of Facebook services last month highlighted how vital WhatsApp has become for its more than 2 billion users worldwide.
Short video platforms are new way of building communities: Head of Likee Operations in Bangladesh
Gibson Yuen, Head of Likee Operations in Bangladesh, has said that short video platforms like Likee are a new way of building communities, acting as convenient-to-use social media platforms.
Gibson Yuen said, “It is not only comforting to watch short videos but is less time-consuming. Because of time constraints, people leave the unnecessary parts and jump straight to the point of the videos. On the other hand, long videos often get tedious to watch.”
In these short video platforms, content creators post videos from which viewers may learn a great deal, he said.
Such categories may include life-hack videos that share tips and tricks to make life simpler, educational videos from educators, general knowledge videos from daily newspapers, and much more. The best part is the chance to learn so much without monotony, genuinely enjoying the process instead.
Visual learning is a method preferred by many children as well as adults as an effective way to learn and remember. Many studies have also found that visual learning leads to better memory recall than auditory learning. This means short video platforms can be an amazing way to communicate learnable information.
Through this learning process, communities of learners and educators are created where similar content-makers get a chance to collaborate virtually, and viewers get to discuss the videos with others, creating multiple communication channels, Gibson added.
Covid-19, lockdown impositions, and a general fear of crowds have hindered the natural community-building instinct of humans. During these dire times, social media and short video platforms have acted as rescuers of this society-loving mankind, creating multiple communication channels for people, he said.
The short video industry has especially managed to improve the lives of talented content creators by giving them opportunities to communicate their ideas.
Communities of learners, educators, artists, and many more individuals have been made for them to share their interests and talents with each other, as well as the world, said the head of Likee Operations in Bangladesh.
Additionally, they create communities of people with similar talents. A community of singers, dancers, painters, etc., are generated, often adding colors to the lives of many. They get to discuss their work and interests with people who share similar ones. This not only enhances communication but also improves the mental health of many.
The world is filled with talented individuals who remain undiscovered due to the lack of opportunities. He said, “For them, the convenience of short video platforms is a blessing. It was rather tough for many cover artists to actually upload long videos targeting engaged audiences, as viewers often become reluctant to watch long covers.”
People started posting their videos, which can also be made aesthetically pleasing through filters on short video platforms like Likee. Through these videos, people also get to know about exciting forms of art, such as finger dancing, a form of dance that was little known before short videos became trendy.
Besides music and dance, art and painting videos can also be conveniently showcased through such short videos. This not only promotes exposure of talent but also opens the window for many such talented individuals to get paid work. Talent hunters often spot artists from such apps, exposing them to jobs they would love.
Ohio retirement fund sues Facebook over investment loss
Ohio’s largest public employee pension fund has sued Facebook — now known as Meta — alleging that it broke federal securities law by purposely misleading the public about the negative effects of its social platforms and the algorithms that run them.
The lawsuit by the Ohio Public Employees Retirement System specifically claims that Facebook buried inconvenient findings about how the company has managed those algorithms as well as the steps it said it was taking to protect the public.
The suit also contends that Facebook knew its platform facilitated dissension, illegal activity, and violent extremism, but refused to correct it. The Associated Press and a coalition of other news organizations have reported extensively on Facebook's actions, on internal dissent warning of these problems, and on related issues around the world, based on internal company documents, now known as the Facebook Papers, leaked by the data scientist and former Facebook employee Frances Haugen.
“Facebook said it was looking out for our children and weeding out online trolls, but in reality was creating misery and divisiveness for profit,” Ohio Attorney General Dave Yost said in a statement. “We are not people to Mark Zuckerberg, we are the product and we are being used against each other out of greed.”
The lawsuit, filed last week in federal court in California, says market losses resulting from publicity over Facebook’s actions caused investors, including OPERS, to lose more than $100 billion. A Facebook spokesperson called the lawsuit without merit and said the company would fight it.
Grameenphone launches Text-Only Facebook, Discover
Grameenphone, in partnership with Meta, has launched text-only Facebook and Discover to enable Grameenphone customers to stay connected more consistently, even when they run out of data.
Text-only Facebook will help the telecom operator's customers to stay connected with a text-only version of Facebook and Messenger when they run out of data until they can top up their data balance again.
Discover, a mobile web and Android app, will allow Grameenphone customers to browse the internet using a daily balance of 15MB without data charges. Also, it only supports low-bandwidth features such as text and icons when using free data.
Post and Telecommunication Minister Mustafa Jabbar inaugurated text-only Facebook and Discover Tuesday at Bangladesh Telecommunication Regulatory Commission (BTRC).
"Allowing the use of Facebook without the internet is a great initiative. This shall help reduce the digital divide by ensuring information sharing and connectivity of marginalised people," he said.
"The government has been emphasising bringing maximum people under the umbrella of digital connectivity. But to turn it into a reality, we need the private sectors, especially the mobile network operators to step forward," BTRC Chairman Shyam Sunder Sikder said. "It is a good move by Grameenphone to improve access to social media and other important resources on the internet."
"Today's launch is a testimony of co-creation with Meta and Regulator to best use digital solutions for ensuring access to vital information in need for one of the largest Facebook user bases in the world," Grameenphone CEO Yasir Azman said.
"Helping people stay connected and ensuring they have consistent access to important resources on the internet such as education and health resources is critical. We are grateful to support these programmes to enable better connectivity and access for people in Bangladesh," Paul Kim, director of International Business Development and Operator Partnerships, APAC at Meta, said.
Plenty of pitfalls await Zuckerberg’s ‘metaverse’ plan
When Mark Zuckerberg announced ambitious plans to build the “metaverse” — a virtual reality construct intended to supplant the internet, merge virtual life with real life and create endless new playgrounds for everyone — he promised that “you’re going to be able to do almost anything you can imagine.”
That might not be such a great idea.
Zuckerberg, CEO of the company formerly known as Facebook, even renamed it Meta to underscore the significance of the effort. During his late October presentation, he effused about going to virtual concerts with your friends, fencing with holograms of Olympic athletes and — best of all — joining mixed-reality business meetings where some participants are physically present while others beam in from the metaverse as cartoony avatars.
But it’s just as easy to imagine dystopian downsides. Suppose the metaverse also enables a vastly larger, yet more personal version of the harassment and hate that Facebook has been slow to deal with on today’s internet? Or ends up with the same big tech companies that have tried to control the current internet serving as gatekeepers to its virtual-reality edition? Or evolves into a vast collection of virtual gated communities where every visitor is constantly monitored, analyzed and barraged with advertisements? Or foregoes any attempt to curtail user freedom, allowing scammers, human traffickers and cybergangs to commit crimes with impunity?
Picture an online troll campaign — but one in which the barrage of nasty words you might see on social media is instead a group of angry avatars yelling at you, with your only escape being to switch off the machine, said Amie Stepanovich, executive director of Silicon Flatirons at the University of Colorado.
“We approach that differently — having somebody scream at us than having somebody type at us,” she said. “There is a potential for that harm to be really ramped up.”
That’s one reason Meta might not be the best institution to lead us into the metaverse, said Philip Rosedale, founder of the virtual escape Second Life, which was an internet craze 15 years ago and still attracts hundreds of thousands of online inhabitants.
The danger is creating online public spaces that appeal only to a “polarized, homogenous group of people,” said Rosedale, describing Meta’s flagship VR product, Horizon, as filled with “presumptively male participants” and a bullying tone. In a safety tutorial, Meta has advised Horizon users to treat fellow avatars kindly and offers tips for blocking, muting or reporting those who don’t, but Rosedale said it’s going to take more than a “schoolyard monitor” approach to avoid a situation that rewards the loudest shouters.
“Nobody’s going to come to that party, thank goodness,” he said. “We’re not going to move the human creative engine into that sphere.”
A better goal, he said, would be to create systems that are welcoming and flexible enough to allow people who don’t know each other to get along as well as they might in a real place like New York’s Central Park. Part of that could rely on systems that help someone build a good reputation and network of trusted acquaintances they can carry across different worlds, he said. In the current web environment, such reputation systems have had a mixed record in curbing toxic behavior.
It’s not clear how long it will take Meta, or anyone else investing in the metaverse, to consider such issues. So far, tech giants from Microsoft and Apple to video game makers are still largely focused on debating the metaverse’s plumbing.
To make the metaverse work, some developers say they are going to have to form a set of industry standards similar to those that coalesced around HTML, the open “markup language” that’s been used to structure websites since the 1990s.
“You don’t think about that when you go to a website. You just click on the link,” said Richard Kerris, who leads the Omniverse platform for graphics chipmaker Nvidia. “We’re going to get to the same point in the metaverse where going from one world to another world and experiencing things, you won’t have to think about, ‘Do I have the right setup?’”
Nvidia’s vision for an open standard involves a structure for 3D worlds built by movie-making studio Pixar, which is also used by Apple. Among the basic questions being resolved are how physics will work in the metaverse — will virtual gravity cause someone’s glass to smash into pieces if they drop it? Will those rules change as you move from place to place?
Bigger disagreements will center on questions of privacy and identity, said Timoni West, vice president of augmented and virtual reality at Unity Technologies, which builds an engine for video game worlds.
“Being able to share some things but not share other things” is important when you’re showing off art in a virtual home but don’t want to share the details of your calendar, she said. “There’s a whole set of permission layers for digital spaces that the internet could avoid but you really need to have to make this whole thing work.”
Some metaverse enthusiasts who’ve been working on the concept for years welcome the spotlight that could attract curious newcomers, but they also want to make sure Meta doesn’t ruin their vision for how this new internet gets built.
“The open metaverse is created and owned by all of us,” said Ryan Gill, founder and CEO of metaverse-focused startup Crucible. “The metaverse that Mark Zuckerberg and his company want is created by everybody but owned by them.”
Gill said Meta’s big splash is a reaction to ideas circulating in grassroots developer communities centered around “decentralized” technologies like blockchain and non-fungible tokens, or NFTs, that can help people establish and protect their online identity and credentials.
Central to this tech movement, nicknamed Web 3, for a third wave of internet innovation, is that what people create in these online communities belongs to them, a shift away from the Big Tech model of “accumulating energy and attention and optimizing it for buying behavior,” Gill said.
Evan Greer, an activist with Fight for the Future, said it’s easy to see Facebook’s Meta announcement as a cynical attempt to distance itself from all the scandals the company is facing. But she says Meta’s push is actually even scarier.
“This is Mark Zuckerberg revealing his end game, which is not just to dominate the internet of today but to control and define the internet that we leave to our children and our children’s children,” she said.
The company recently abandoned its use of facial recognition on its Facebook app, but metaverse gadgetry relies on new forms of tracking people’s gaits, body movements and expressions to animate their avatars with real-world emotions. And with both Facebook and Microsoft pitching metaverse apps as important work tools, there’s a potential for even more invasive workplace monitoring and exhaustion.
Activists are calling for the U.S. to pass a national digital privacy act that would apply not just to today’s platforms like Facebook but also those that might exist in the metaverse. Outside of a few such laws in states such as California and Illinois, though, actual online privacy laws remain rare in the U.S.
Facebook to shut down face-recognition system, delete data
Facebook said it will shut down its face-recognition system and delete the faceprints of more than 1 billion people amid growing concerns about the technology and its misuse by governments, police and others.
“This change will represent one of the largest shifts in facial recognition usage in the technology’s history,” Jerome Pesenti, vice president of artificial intelligence for Facebook’s new parent company, Meta, wrote in a blog post on Tuesday.
He said the company was trying to weigh the positive use cases for the technology “against growing societal concerns, especially as regulators have yet to provide clear rules.” The company in the coming weeks will delete “more than a billion people’s individual facial recognition templates,” he said.
Facebook’s about-face follows a busy few weeks. On Thursday it announced its new name Meta for Facebook the company, but not the social network. The change, it said, will help it focus on building technology for what it envisions as the next iteration of the internet -- the “metaverse.”
The company is also facing perhaps its biggest public relations crisis to date after leaked documents from whistleblower Frances Haugen showed that it has known about the harms its products cause and often did little or nothing to mitigate them.
Facebook didn’t immediately respond to questions about how people could verify that their image data was deleted, or what it would be doing with the underlying technology.
More than a third of Facebook’s daily active users have opted in to have their faces recognized by the social network’s system. That’s about 640 million people. Facebook introduced facial recognition more than a decade ago but gradually made it easier to opt out of the feature as it faced scrutiny from courts and regulators.
Facebook in 2019 stopped automatically recognizing people in photos and suggesting people “tag” them, and instead of making that the default, asked users to choose if they wanted to use its facial recognition feature.
Facebook’s decision to shut down its system “is a good example of trying to make product decisions that are good for the user and the company,” said Kristen Martin, a professor of technology ethics at the University of Notre Dame. She added that the move also demonstrates the power of public and regulatory pressure, since the face recognition system has been the subject of harsh criticism for over a decade.
Meta Platforms Inc., Facebook’s parent company, appears to be looking at new forms of identifying people. Pesenti said Tuesday’s announcement involves a “company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication.”
“Facial recognition can be particularly valuable when the technology operates privately on a person’s own devices,” he wrote. “This method of on-device facial recognition, requiring no communication of face data with an external server, is most commonly deployed today in the systems used to unlock smartphones.”
Apple uses this kind of technology to power its Face ID system for unlocking iPhones.
Researchers and privacy activists have spent years raising questions about the tech industry’s use of face-scanning software, citing studies that found it worked unevenly across boundaries of race, gender or age. One concern has been that the technology can incorrectly identify people with darker skin.
Another problem with face recognition is that in order to use it, companies have had to create unique faceprints of huge numbers of people – often without their consent and in ways that can be used to fuel systems that track people, said Nathan Wessler of the American Civil Liberties Union, which has fought Facebook and other companies over their use of the technology.
“This is a tremendously significant recognition that this technology is inherently dangerous,” he said.
Facebook found itself on the other end of the debate last year when it demanded that facial recognition startup ClearviewAI, which works with police, stop harvesting Facebook and Instagram user images to identify the people in them.
Concerns also have grown because of increasing awareness of the Chinese government’s extensive video surveillance system, especially as it’s been employed in a region home to one of China’s largely Muslim ethnic minority populations.
Facebook’s huge repository of images shared by users helped make it a powerhouse for improvements in computer vision, a branch of artificial intelligence. Now many of those research teams have been refocused on Meta’s ambitions for augmented reality technology, in which the company envisions future users strapping on goggles to experience a blend of virtual and physical worlds. Those technologies, in turn, could pose new concerns about how people’s biometric data is collected and tracked.
Meta’s newly wary approach to facial recognition follows decisions by other U.S. tech giants such as Amazon, Microsoft and IBM last year to end or pause their sales of facial recognition software to police, citing concerns about false identifications and amid a broader U.S. reckoning over policing and racial injustice.
At least seven U.S. states and nearly two dozen cities have limited government use of the technology amid fears over civil rights violations, racial bias and invasion of privacy.
President Joe Biden’s science and technology office in October launched a fact-finding mission to look at facial recognition and other biometric tools used to identify people or assess their emotional or mental states and character. European regulators and lawmakers have also taken steps toward blocking law enforcement from scanning facial features in public spaces.
Facebook’s face-scanning practices also contributed to the $5 billion fine and privacy restrictions the Federal Trade Commission imposed on the company in 2019. Facebook’s settlement with the FTC included a promise to require “clear and conspicuous” notice before people’s photos and videos were subjected to facial recognition technology.
And the company earlier this year agreed to pay $650 million to settle a 2015 lawsuit alleging it violated an Illinois privacy law when it used photo-tagging without users’ permission.
“It is a big deal, it’s a big shift but it’s also far, far too late,” said John Davisson, senior counsel at the Electronic Privacy Information Center. EPIC filed its first complaint with the FTC against Facebook’s facial recognition service in 2011, the year after it was rolled out.
Facebook dithered in curbing divisive user content in India
Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as its own employees cast doubt over the company’s motivations and interests.
From research as recent as March of this year to company memos dating back to 2019, the internal company documents on India highlight Facebook’s constant struggles in quashing abusive content on its platforms in the world’s biggest democracy and the company’s largest growth market. Communal and religious tensions in India have a history of boiling over on social media and stoking violence.
The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address these issues. Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party, or the BJP, are involved.
Across the world, Facebook has become increasingly important in politics, and India is no different.
Modi has been credited for leveraging the platform to his party advantage during elections, and reporting from The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the BJP. Both Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialized by a 2015 image of the two hugging at the Facebook headquarters.
The leaked documents include a trove of internal company reports on hate speech and misinformation in India, much of it intensified, in some cases, by the platform's own "recommended" feature and algorithms. But they also include staffers' concerns over the mishandling of these issues and their discontent about the viral "malcontent" on the platform.
According to the documents, Facebook saw India as one of the most “at risk countries” in the world and identified both Hindi and Bengali languages as priorities for “automation on violating hostile speech.” Yet, Facebook didn’t have enough local language moderators or content-flagging in place to stop misinformation that at times led to real-world violence.
In a statement to the AP, Facebook said it has “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali,” which it said has “reduced the amount of hate speech that people see by half” in 2021.
“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said.
This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions were obtained by a consortium of news organizations, including the AP.
Back in February 2019, ahead of a general election when concerns about misinformation were running high, a Facebook employee wanted to understand what a new user in the country saw on their news feed if all they did was follow pages and groups solely recommended by the platform itself.
Read: Facebook unveils new controls for kids using its platforms
The employee created a test user account and kept it live for three weeks, a period during which an extraordinary event shook India — a militant attack in disputed Kashmir had killed over 40 Indian soldiers, bringing the country to near war with rival Pakistan.
In the note, titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalistic Messages,” the employee whose name is redacted said they were “shocked” by the content flooding the news feed which “has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”
Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant.
The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic.
One included a man holding the bloodied head of another man covered in a Pakistani flag, with an Indian flag in the place of his head. Its “Popular Across Facebook” feature showed a slew of unverified content related to the retaliatory Indian strikes into Pakistan after the bombings, including an image of a napalm bomb from a video game clip debunked by one of Facebook’s fact-check partners.
“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.
Read: Ex-Facebook manager criticizes company, urges more oversight
It sparked deep concerns over what such divisive content could lead to in the real world, where local news outlets at the time were reporting on Kashmiris being attacked in the fallout.
“Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the researcher asked in their conclusion.
The memo, circulated with other employees, did not answer that question. But it did expose how the platform’s own algorithms or default settings played a part in spurring such malcontent. The employee noted that there were clear “blind spots,” particularly in “local language content.” They said they hoped these findings would start conversations on how to avoid such “integrity harms,” especially for those who “differ significantly” from the typical U.S. user.
Even though the research was conducted over three weeks that weren’t an average representation, the researcher acknowledged that it did show how such “unmoderated” and problematic content “could totally take over” during “a major crisis event.”
The Facebook spokesperson said the test study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”
“Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages,” the spokesperson said.
Likee’s #KnowledgeMonth campaign ends; 5,470 videos uploaded by 1,904 users
Likee Bangladesh’s recent initiative #KnowledgeMonth, organized in partnership with 10 Minute School, has ended, drawing wide acclaim from different regions and proving a successful activity for its potential to create an enabling and healthy online atmosphere for knowledge sharing.
Likee launched the campaign on September 3 to encourage its users to create and share videos focusing on various academic and co-curricular skills, which would not only enlighten people but also help them showcase their creative side.
During the campaign, a total of 5,470 videos were uploaded by 1,904 users, and a staggering 35.8 million engagements were recorded. Likee users uploaded videos in different categories such as English learning, football, art & painting and cooking, with a concentrated focus on two streams: #howto and #education.
Read: Likee teams up with 10 Minute School
Many famous figures from different fields such as teachers, researchers, sportspersons, artists, culinary and life-skill enthusiasts and renowned nutritionists have taken part in the campaign and come up with enlightening videos.
Abdullah Al Shihab, 10 Minute School’s PR and Communication Manager, who managed the overall operations of the collaborative campaign between 10 Minute School and Likee, said, “We started this #KnowledgeMonth campaign with the goal of ensuring educational value on a short-video platform like Likee, which people typically use for entertainment purposes.”
The campaign segments were distributed according to platform users’ needs so that each category of users could be reached easily.
Read: Likee launches campaign to promote cyber safety
Tamanna Chowdhury, a clinical dietitian and nutritionist, said about the campaign, “Through my short videos I usually talk about diet tips and nutrition. Of late, I have come across Likee Bangladesh's #KnowledgeMonth campaign, which seems to be quite a gem for me. I have shared many videos portraying different pertinent aspects related to nutritional needs. I hope people will see those and be aware of their health.”
On the other hand, Iffat, another participant, said, “Apart from showcasing my cooking skills, I also got to learn about different life skills including cooking, art, math, spoken English, etc. I am happy because, after joining #KnowledgeMonth, I was able to create a new cooking account, and it got verified as well.”
Inspired by the response this campaign has received, Likee is encouraged to arrange similar campaigns in the future to give video creators a platform for displaying their personal talents and to help them expand their career options. Such campaigns are expected to create an ambiance where content creators can acquire and develop knowledge together and add value to fellow users’ experience.
Read Short video platforms are new way of building communities: Head of Likee Operations in Bangladesh
Facebook unveils new controls for kids using its platforms
Facebook, in the aftermath of damning testimony that its platforms harm children, will introduce several features, including prompting teens to take a break from its photo-sharing app Instagram and “nudging” teens who repeatedly look at the same content that is not conducive to their well-being.
The Menlo Park, California-based Facebook is also planning to introduce new optional controls so that parents or guardians of teens can supervise what their teens are doing online. These initiatives come after Facebook announced late last month that it was pausing work on its Instagram for Kids project. But critics say the plan lacks details, and they are skeptical that the new features will be effective.
The new controls were outlined on Sunday by Nick Clegg, Facebook's vice president for global affairs, who made the rounds on various Sunday news shows including CNN's “State of the Union" and ABC's “This Week with George Stephanopoulos" where he was grilled about Facebook's use of algorithms as well as its role in spreading harmful misinformation ahead of the Jan. 6 Capitol riots.
Read: Could Facebook sue whistleblower Frances Haugen?
“We are constantly iterating in order to improve our products,” Clegg told Dana Bash on “State of the Union" Sunday. “We cannot, with a wave of the wand, make everyone’s life perfect. What we can do is improve our products, so that our products are as safe and as enjoyable to use."
Clegg said that Facebook has invested $13 billion over the past few years in keeping the platform safe and that the company has 40,000 people working on these issues. And while Clegg said that Facebook has done its best to keep harmful content off its platforms, he said he was open to more regulation and oversight.
“We need greater transparency,” he told CNN’s Bash. He noted that the systems that Facebook has in place should be held to account, if necessary, by regulation so that “people can match what our systems say they’re supposed to do from what actually happens.”
The flurry of interviews came after whistleblower Frances Haugen, a former data scientist with Facebook, went before Congress last week to accuse the social media platform of failing to make changes to Instagram after internal research showed apparent harm to some teens and of being dishonest in its public fight against hate and misinformation. Haugen’s accusations were supported by tens of thousands of pages of internal research documents she secretly copied before leaving her job in the company’s civic integrity unit.
Read: Ex-Facebook manager criticizes company, urges more oversight
Josh Golin, executive director of Fairplay, a watchdog for the children and media marketing industry, said that he doesn't think introducing controls to help parents supervise teens would be effective, since many teens set up secret accounts anyway. He was also dubious about how effective nudging teens to take a break or move away from harmful content would be, noting that Facebook needs to show exactly how it would implement these tools and offer research showing they are effective.
“There is tremendous reason to be skeptical," he said. He added that regulators need to restrict what Facebook does with its algorithms.
He said he also believes that Facebook should cancel its Instagram project for kids.
When Clegg was grilled by both Bash and Stephanopoulos in separate interviews about the role of algorithms in amplifying misinformation ahead of the Jan. 6 riots, he responded that if Facebook removed the algorithms, people would see more, not less, hate speech and more, not less, misinformation.
Read: Whistleblower: Facebook chose profit over public safety
Clegg told both hosts that the algorithms serve as “giant spam filters."
Democratic Sen. Amy Klobuchar of Minnesota, who chairs the Senate Commerce Subcommittee on Competition Policy, Antitrust, and Consumer Rights, told Bash in a separate interview Sunday that it's time to update children's privacy laws and offer more transparency in the use of algorithms.
“I appreciate that he is willing to talk about things, but I believe the time for conversation is done," said Klobuchar, referring to Clegg's plan. “The time for action is now.”