Since the start of the 21st century, big tech companies such as Facebook, YouTube, and Twitter have been on the rise, developing algorithms designed to promote trending content and encourage users to spend more time on their platforms. Research on these companies' algorithms, particularly on how they exploit users and breach their privacy for monetary gain, has grown in recent years as people have become skeptical of what information is shared with third-party companies and how they are being monitored and exploited by big tech companies.
These companies, which have become the go-to source of information and social interaction, especially during COVID-19, have benefitted societies, businesses, and governments in many ways by promoting content, sharing important news, and connecting people. However, these countless benefits of online platforms and social media come at a cost. Because these platforms build applications that threaten privacy and use surveillance practices that track users' search histories, governments and policymakers need to investigate the algorithms of tech companies to protect users and stop them from being exploited. Recent technological developments have enabled users to reach new depths in information, socializing, and communication; however, they also raise serious privacy concerns and the threat of misinformation through false news, which may affect users' online experience and divert them to content that has not been fact-checked. The extreme content presented to hold user attention and sell to third parties opens a discussion on the policies governments could implement to create a safe and secure environment in which users decide what they watch and how their data is used. Regulating and moderating online platforms, calling for transparency, encouraging investment in better AI, and educating users about their privacy rights are some of the key steps policymakers should take to counter the drawbacks of online platforms.
Social media and internet algorithms have become important topics since the start of the 21st century, as the internet's role in politics, society, and information transmission has grown. With the increased use of the internet for social and political purposes, a debate persists over the reliability and validity of online platforms in sharing information and protecting users from privacy breaches. Social media websites like YouTube, Facebook, and Twitter play a vital role in gathering information and transmitting it to their millions of users; however, there is certainly a debate over whether the use of these websites results in accurate news or misinformation. These conversations raise questions about what social media companies do with our information, how users may be exploited for monetary and political purposes, how governments regulate website moderation, how online algorithms shape the content users see, and what role users themselves play in seeking out content.
Source of Misinformation
Impact of Social Media
Social media websites and the internet in general have become an important part of our daily lives, where most people explore the vast variety of news sources and information available to learn about different topics. They act as users' go-to source for looking up content and deepening their understanding of subjects they wish to study. Following the many free news sources available may be useful for most users; however, it carries a hidden cost that affects the user's online experience and the content presented to them. Internet algorithms are the hidden mechanism that acts as the scaffolding of most online platforms: invisible at first, but present throughout the user's entire time online. When searching for content, we typically notice trends and patterns in what we search for and in what the platform recommends to us. The content shown is not presented accidentally, but through an online mechanism that tracks user search history and guides the user toward content claimed to be similar to what has already been searched.
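To make this mechanism concrete, the toy sketch below imitates a content-based recommender of the kind described above. The catalog, tags, and function names are all hypothetical and invented for illustration; real platform algorithms are far more complex and proprietary.

```python
from collections import Counter

# Hypothetical catalog: each item carries topic tags (illustrative only).
CATALOG = {
    "election_speech": {"politics", "election"},
    "voter_turnout_stats": {"politics", "data"},
    "cat_video": {"pets", "humor"},
    "conspiracy_clip": {"politics", "sensational"},
}

def recommend(history, catalog=CATALOG, k=2):
    """Rank items the user has not seen by tag overlap with their history."""
    profile = Counter()  # the user's inferred interests, built from history
    for item in history:
        profile.update(catalog[item])
    def score(item):
        return sum(profile[tag] for tag in catalog[item])
    unseen = [item for item in catalog if item not in history]
    return sorted(unseen, key=score, reverse=True)[:k]

# A user who watched one political video is steered toward more politics,
# including the sensational item, ahead of unrelated content.
print(recommend(["election_speech"]))
```

Even this toy version exhibits the feedback loop the essay describes: each click enlarges the interest profile, which in turn narrows what gets recommended next.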
The role of internet algorithms in spreading misinformation and hoaxes is still debated; however, there have certainly been incidents showing a correlation between the information shown online and people's perspectives. In the fast-paced life of the 21st century, online journalism's role in acquiring information and communicating opinions has reached new heights. The US elections have certainly been one of many occasions where social media has proven to be an important element in campaigning and changing perspectives. Widespread online databases of voter statistics and candidate agendas are now accessed by millions of users from the comfort of their homes, and the information shown does tend to correlate with the outcome of certain elections. We watch election speeches, review candidate agendas, explore voter turnout data, and read editorial opinions with only a few clicks. From an economic perspective, research finds "a positive impact of the internet on voter turnout" (Larcinese 2017) and on the voicing of opinion since the start of the century. People now seem far more involved in the political situation, which makes the online battle of campaigning even more important for election outcomes. However, with such widespread participation on online platforms comes a risk to the fact-checking and reliability of the news sources published.
US Elections and Fact-Checking
The online algorithms designed by big tech companies do play a part in guiding users to relevant websites. While the most popular platforms, such as Facebook, YouTube, and Twitter, seem reliable because of their high user traffic, they still host sources published purely for user attention and ad sales. With hundreds of thousands of posts made every hour, there is only so much that online platforms can do to moderate and fact-check the thousands of sites and accounts operating on them, which allows content to circulate that may be extreme, incorrect, and unreliable. These algorithms tend to direct users toward such extreme and sensational sources because readers spend more time on them. Andrew Guess, a professor and researcher in the Department of Politics at Princeton University, asked similar questions about the impact of social media on US elections, the fact-checking of online news sources, and its effects on political misinformation and behavior. He writes that social media contains "extreme and sensational content, filled with fake or untrustworthy news that can divert users towards a bias claim" (Guess et al. 2020). His findings also discuss the role of social media algorithms in promoting sensational content that may encourage a certain idea or perspective. Since users search for information online across hundreds of sources about elections and campaigns, it is hard to distinguish reliable from invalid sources, which can lead to wrong and inaccurate news articles being published and read, influencing users and even helping specific candidates or topics gain support.
Hoaxes and Extreme Content
The publishing and spreading of inaccurate news sources is not limited to election campaigns. The tendency of recommendation algorithms to promote extreme and incorrect content has been seen throughout the previous decade: Jonathan Albright, a researcher at Columbia University, describes how incidents like the Parkland shooting, the 2017 Las Vegas shooting, and even 9/11 have been portrayed as hoaxes on leading platforms, since such videos attract online users and entice them to click and explore further. Further research states that social media websites "create a system that may encourage false and misleading information to float on their online database" (Timberg and Harwell 2018) to maximize traffic on their servers. Such a willingness to let false news exist on online servers raises serious reliability questions, as users may be shown articles that do not depict the true picture and may even manipulate their thought process.
Another aspect of this discussion concerns the spread of hate speech and extremism on such websites. Studies of the correlation between social media and extremism found that overexposure to social media can "generate search results with extremist material, but benign, apolitical, and non-violent language also facilitated access to websites promoting violence and extremist ideologies" (Schmitt et al. 2018). This suggests that while extreme search histories can redirect users to incorrect and untrustworthy websites, normal and orthodox searches can lead users to unreliable websites and news sources as well.
Ad Sales and Recommendation Algorithms
The real issue arises when we discuss the use of online platforms for user exploitation and private benefit. Big tech companies have developed algorithms and incentives that encourage third parties to contest for user attention. Research by professors at the University of Massachusetts examines the role of online journalism in the spread of hoax political news. Investigating the underlying media infrastructure, they conclude that "social media companies created an incentive structure for hoax publishers that increased the spread of fake news" (Braun and Eklund 2019), while the platforms generated revenue from the increased engagement. Examining the chain through which hoax news enters public circulation, they describe how social media websites give publishers incentives to hype up hoaxes. Similarly, other studies show a correlation between users' personal information and advertisements, since big tech companies use "user data…to show personalized advertisements and content suggestions" (Maiga et al. 2009) to increase screen time on their websites.
Such findings reinforce that online media may not always present the most accurate information and that social media algorithms are designed in ways that encourage the spread of false news sources. They also raise questions about content moderation, the lack of fact-checking, and how social media companies drive sales by exploiting users through such content. Researchers at the University of Southern California had similar doubts when they collected data on the 2012 US elections to examine fact-checking. They write that social platforms can act as radicalizers: they may provide information that is inaccurate or biased toward a certain topic while allowing misinformed articles and videos to float on the internet because they raise money off it. Their research also discusses how online platforms may intentionally "share content that is biased…, and the lack of fact-checking may influence the perspective of users" (Shin and Thorson 2017), as such inaccurate articles are not taken down by algorithms mainly because their unorthodox nature engages more users.
To further examine user privacy and internet algorithms, another important question concerns the privacy policies and structures of big tech companies for protecting user information and providing a safe experience. While many companies like Google and Facebook explicitly state that they do 'collect information about user activity in their services, to recommend content that user may like', doubts remain over whether that information is used only to suggest content or is also sold to third parties. In the online market, companies tend to use their users – 'us' – as "livestock for their data stock where we are not the primary customers, but other businesses are the primary customers" (Deibert 2020), which makes the online system about more than just the user's experience. Research in the past decade has increasingly focused on user privacy and exploitation. Online platforms allow users to customize their privacy settings according to their preferences by limiting what information is shared, blocking popup ads, and even controlling how their search history is tracked. However, research by professors at Boston University challenges these claims. Their experiments with different privacy scenarios conclude "that recommendation algorithms typically lead users away from reliable sources over time and, those users who seek higher privacy tend to have more exposure to extreme videos" (Spinelli and Crovella 2020). Hence, despite big companies' claims about user privacy, some still operate without complete transparency, which can exploit users and lead to users' loss of trust in these platforms.
Against these claims that online algorithms drive misinformation and exposure to extreme videos, there is an alternative debate. It revolves around user choice, the human attraction to the unknown, and the profit-seeking motives of private companies. Online social media websites such as YouTube, Facebook, and Twitter, as profit-making organizations, tend to act like any other website that shows content according to our preferences. Arguing that these websites intentionally show users extreme content, and that they are solely responsible for its spread, is one-sided, since users themselves are also drawn to searching for such content. Researchers at the University of Queensland took user preference into account when discussing this topic and concluded that blaming social media for "users getting radicalized is a biased approach since the users themselves have a very high tendency to not only search up extreme content but also to watch such intimidating content for hours" (Matamoros and Gray 2019). Their discussion supports the view that users choose to watch such content and that online websites simply give them what they desire. Similarly, researchers at UC Berkeley say that "online social media and their algorithms actively discourages viewers from visiting radicalizing or extremist content" (Ledwich and Zaitsev 2020), and other studies emphasize the role of users and how their search history and preferences matter when looking up content (Feuer 2019). Such debates open an alternative discussion: perhaps private firms are not the ones to blame, and what is needed instead is educating users and making them more aware of how their searches may lead to unreliable websites.
It remains a user's personal choice to search for extreme content; online platforms largely do what they are asked to do.
Hence, the debate over recommendation algorithms and their effects on users can be difficult to resolve, but one thing is surely and urgently needed: government intervention to moderate these online platforms. The presence of doubtful news sources raises many questions and demands that governments take specific policy actions.
1. The need to monitor and regulate online social media websites and their algorithms grows as we move further into an era that relies on online platforms for business, socialization, and communication. One policy governments could implement is to enforce rules requiring complete transparency from social media platforms. In their terms and conditions, online platforms do not openly state what data they use or how they use it. Instead, they state that user information is protected and that it is up to the user to control how they use the platform. However, if users choose to hide specific information or do not agree to all terms, they cannot use the complete features of the website, and their experience suffers. Facebook and YouTube are examples of such websites: if you do not agree to let the developers share and use your data, you cannot make an account and use the platform. Openly addressing this issue and letting users be the sole decision-makers over how their data is used is a start. Imposing privacy-protection and transparency laws would push online platforms to develop algorithms that do not take user search history, location, and personal information into account when showing content. Effective models include consumer privacy laws such as the California Consumer Privacy Act (CCPA) and international data privacy law, which require private online platforms to show transparency and impose duties on them if they collect users' personal information without approval. Informing data subjects when and how data is collected, and giving them the ability to access, correct, and delete that information, is essential to regaining users' trust in social media websites.
Such a policy is likely to be effective, since similar laws are already in use in several states, which gives this approach a solid foundation.
2. Furthermore, educating the users of these platforms is another key step. Many users skim through the long and tedious terms and conditions when they join an online platform. Some studies even suggest that "T&C are deliberately…long so that you don't read them and just click accept" (Lomas and Dillet 2015). One plausible policy is to require a concise and precise summary of the terms and conditions, presented to users at sign-up, so they can quickly review the key privacy terms while saving time. Simultaneously, educating users about their privacy rights is essential if we wish to reduce the radicalizing impact of online platforms. Letting users know their rights around privacy protection, and the potential harm of sharing personal information with such websites, will help build a much more secure online community. Since online media is the future and a key part of our lives, governments should highlight essential privacy rights to users through seminars or courses. Moreover, educating people in corporate offices and businesses will help firms handling sensitive content understand the right course of action and encourage new web developers to build algorithms that do not compromise privacy. We cannot stop people from searching for extreme content, but through these policies we can limit its harmful impact.
3. Another important policy would be to regulate and limit monetary incentives to third-party companies. As online platforms sell user time and attention to third parties and allow them to use user information for personalized ads, there is a need to limit the extent to which private companies can use and sell user data. Companies such as YouTube allow four to five ads, each around 10 seconds, to run during each monetized video. This creates incentives for third-party firms, which know that every user watching the video will have to see those ads. By restricting the number of ads that can be aired, governments can reduce the incentives for ad-selling firms and help users have a safer, more personal experience online.
4. Similarly, policies on fact-checking content and setting standards for extreme content need to be imposed. Recent updates suggest that social media platforms aim "to increase engagement across all channels" (Mayfield 2020), which, if not done carefully, can create further problems for users. Since online platforms generate millions in revenue, they should be encouraged to invest in improved algorithms and in moderators who review each post and video before publication, removing content that is not fit for the online audience. Informed and educated moderators can also improve the validity of the content being published, producing platforms of fact-checked, examined, and reliable sources where users can get correct information. This discussion also extends to hacking and privacy breaches. Cases of hackers breaching the security of major websites have been common in the past decade; most recently, we saw "data from 500 million LinkedIn users being scraped and for sale online" (Canales 2021), as per cyber news. Investing in better AI and maintaining a standard for algorithms can stop hackers and external companies from breaching and viewing user information, which can improve the online experience of users on these websites.
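As a rough illustration of the pre-screening step such a moderation policy would require, the sketch below flags posts that mention watch-listed phrases so they can be routed to human fact-checkers. The watch-list, names, and sample posts are hypothetical; real moderation systems rely on far more sophisticated classifiers.

```python
# Hypothetical watch-list of phrases that correlate with known hoaxes.
FLAGGED_TERMS = {"crisis actor", "hoax", "miracle cure"}

def needs_review(post: str, flagged=FLAGGED_TERMS) -> bool:
    """Route a post to human fact-checkers if it mentions a flagged phrase."""
    text = post.lower()
    return any(term in text for term in flagged)

posts = [
    "Parkland 'crisis actor' video surfaces",   # mentions a flagged phrase
    "City council approves new bike lanes",     # passes the first screen
]
review_queue = [p for p in posts if needs_review(p)]
print(review_queue)
```

A keyword filter of this kind only triages; as the section argues, the decision to remove content would still rest with informed and educated human moderators.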
Following such policies will help users regain trust in these websites and result in a more accurate spread of information in a safe, personal, and secure environment. Implementing them would encourage a much better online experience without causing private social media companies to lose the incentive to operate on the world wide web. Though further research is needed to develop algorithms that balance the interests of users and online firms, there is an urgent need for governments to step up and stop social media from radicalizing users.
Arham Malik is a sophomore studying Public Policy and Economics with a focus on Data Analytics at the University of Toronto. His research interests include social media politics, labor economics, and public finance.
Canales, Katie. "Hackers Scraped Data from 500 Million LinkedIn Users — about Two-Thirds of the Platform's Userbase — and Have Posted It for Sale Online." Business Insider, 2021. https://www.businessinsider.com/linkedin-data-scraped-500-million-users-for-sale-online-2021-4
Deibert, Ronald. "Surveillance Capitalism." Lecture, Contemporary Challenges to Democracy: Democracy in the Social Media Age, University of Toronto, January 2020.
Guess, Andrew M., Brendan Nyhan, and Jason Reifler. “Exposure to Untrustworthy Websites in the 2016 US Election.” Nature Human Behaviour 4, no. 5 (2020): 472–80. https://doi.org/10.1038/s41562-020-0833-x.
Braun, Joshua A., and Jessica L. Eklund. "Fake News, Real Money: Ad Tech Platforms, Profit-Driven Hoaxes, and the Business of Journalism." Digital Journalism, 2019. https://doi.org/10.1080/21670811.2018.1556314
Larcinese, Valentino. "The Political Impact of the Internet in US Presidential Elections." Economic Organisation and Public Policy Discussion Papers Series, 2017.
Ledwich, Mark, and Zaitsev, Anna. “Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization.” First Monday, 2020. https://doi.org/10.5210/fm.v25i3.10419.
Lomas, Natasha, and Romain Dillet. "Terms and Conditions Are the Biggest Lie of Our Industry." TechCrunch, 2015. https://techcrunch.com/2015/08/21/agree-to-disagree/
Maiga, Abdou, Ai Ho, and Esma Aimeur. “Privacy Protection Issues in Social Networking Sites.” 2009 IEEE/ACS International Conference on Computer Systems and Applications, 2009. https://doi.org/10.1109/aiccsa.2009.5069336
Mayfield, Dayana. "Social Media Algorithms 2021." StoryChief, 2020. https://storychief.io/blog/en/social-media-algorithms-updates-tips
Matamoros, A., and J. Gray. "Don't Just Blame YouTube's Algorithms for 'Radicalization'. Humans Also Play a Part." The Conversation, 2019. https://theconversation.com/dont-just-blame-youtubes-algorithms-for-radicalisation-humans-also-play-a-part-125494
Schmitt, Josephine, Diana Rieger, Olivia Rutkowski, and Julian Ernst. "Counter-messages as Prevention or Promotion of Extremism? The Potential Role of YouTube Recommendation Algorithms." Journal of Communication 68, no. 4 (2018): 780–808. https://doi.org/10.1093/joc/jqy029
Shin, Jieun, and Kjerstin Thorson. “Partisan Selective Sharing: The Biased Diffusion of Fact-Checking Messages on Social Media.” Journal of Communication 67, no. 2 (2017): 233–55. https://doi.org/10.1111/jcom.12284
Spinelli, Larissa, and Mark Crovella. "How YouTube Leads Privacy-Seeking Users Away from Reliable Information." Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization, 2020. https://doi.org/10.1145/3386392.3399566
Timberg, Craig, and Drew Harwell. "Parkland Shooting 'Crisis Actor' Videos Lead Users to a 'Conspiracy Ecosystem' on YouTube, New Research Shows." Washington Post, 2018. https://www.washingtonpost.com/news/the-switch/wp/2018/02/25/parkland-shooting-crisis-actor-videos-lead-users-to-a-conspiracy-ecosystem-on-youtube-new-research-show
Feuer, Will. "YouTube Actually Steers People Away from Radical Videos, Researchers Say." CNBC, 2019. https://www.cnbc.com/2019/12/28/youtube-recommendation-algorithm-discourages-radicalism-researchers.html