The recent data scandal involving the personal information of 87 million people, collected and processed by Cambridge Analytica, one of Facebook's many partners, was a powerful reminder to all of us: Social media is not free. We pay with a self-generated product – our data – and once it has entered the digital market, we cannot control what this product is used for, or how. This perspective on social media stands in stark contrast with the visionary understanding that dominated the early days of the internet, but a significant number of scandals and leaks (like Edward Snowden's revelations about the National Security Agency in the US) prove the fear of our data being misused to be real. Powerful corporations, among them the GAFAs (Google, Apple, Facebook and Amazon), now work hand in hand with governments: Under the banner of "fighting terrorism" and "state security", they are effectively implementing mass surveillance programs. The recent scandals around Russian interference in the Brexit vote and the US elections show how social media corporations are, in fact, ready to hand over data to any public or private entity with sufficient economic resources. Whereas the internet was imagined in its early days as a new utopia – a space full of creative opportunities, grassroots democratic access and free exchange of opinion – it now appears more and more as a violent space breeding sectarianism and division in our societies.
The dark side of the internet: social control and social cooling
A glance at what is going on abroad gives a pretty good picture of what could happen in South Africa if nothing is done to defend our liberties and rights on the internet. Beyond seeing our elections manipulated by powerful interests through substantial influence on our news feeds (as in the US and the UK), the risks extend much further. A video recently went viral portraying how, by 2020, social media will be used all over China to track and rate every single internet user according to how closely his or her activities conform to the dominant definition of "good behavior". The good Samaritans would, for instance, get cheaper loans, whereas dissident or divergent users – those expressing discontent with the government – could be banned from booking plane and train tickets. And this is only the beginning. The risk of extended social control enforced through large-scale data collection is very real, whether it appears soft and invisible, as in democracies, or violent and oppressive, as in dictatorships.
The indirect impact of such multi-faceted control on people's behavior has been described as "social cooling": people who know they are being watched and scrutinized (at least by algorithms) tend to modify their habits in what amounts to self-censorship. We start to internalize the omnipresent power structures by thinking and behaving in increasingly conformist ways. After all, who knows who is spying on us: Our boss? Our ex? Our government? Our political opponents? We are slowly moving towards a society in which your social conformity matters more than what you really do or think. Alongside this, our capacity to take risks and to think outside the box decreases significantly, threatening innovation and social change.
Personal solutions to ensure data privacy…
Even though neither the current situation nor the realistic future gives many reasons to be optimistic, it is important to remember and develop possible alternatives. To start off, people can change their behavior on a personal level – using a false name can be a first protection, but much more can be done. A fair number of browser add-ons and software tools allow you to browse without the risk of being tracked or spied on, and can also confuse social media algorithms by feeding them contradictory information, making your online profile harder to build. Other tools can undo the inherent selection and filter mechanisms of search engines and news feeds, allowing you to see opposing and contradictory opinions as well, instead of remaining stuck in your personal filter bubble.
In addition to such personal security measures, more and more countries are implementing laws and rules that allow individuals to force online platforms and services to erase specific information about them upon request. The right to be forgotten is emerging and will probably become extremely important in a few years, ensuring that today's youth will be able to erase past mistakes.
However, these solutions clearly have their limits: firstly because not everyone has the knowledge, skills, and commitment to implement these security measures, and secondly because not everyone can access the financial and legal resources for pushing powerful online platforms to respect citizens’ rights. A collective answer is therefore urgently needed.
… and the necessity to adapt the collective idea of public spaces
Even though actions like raising awareness among citizens and educating the youth about these issues can be a first step, much more will need to be done to set the operational framework in which social media corporations and online service providers operate. For example, it is clear that the general terms and conditions – the rules to which internet users automatically agree when using a website – are far too lax in terms of the right to privacy. The mere fact that such terms normally run to dozens of pages – and yet most users agree to them within seconds – indicates that no one is actually reading them. Can we, in this case, consider that users gave their free, prior and informed consent? My data can be sold to nearly anyone, without my knowing the extent of what is collected or for which purposes. Therefore, terms and conditions need to change drastically, at the very least by becoming more accessible, understandable and all in all more user-friendly – in short, by granting users an effective "Right to Say No": to share, or not to share, their personal data.
The core of democratic principles at risk?
On a larger scale, the algorithms of social media – and the policies behind them – make it increasingly possible for anyone with sufficient economic means to widely influence people's perceptions, not least during elections. Not only can platforms collect information on how to better target, influence and manipulate a certain type of person, but they can also sell this information to governments or private companies, giving them the chance to present themselves in an individually targeted and personalized way. The core of our democracy was a shared public space, open to pluralistic debate and contradictory opinions. Now social media threatens this core by transforming our public space into a plethora of individual spaces that only interact with each other when they share the same opinions and beliefs. There is hardly any interaction between users with contradictory ideas and politically opposing views anymore. Everyone lives in their own filter bubble and is flooded with posts reflecting their own ideology, leading to the constant reinforcement of an already one-sided view.
Realizing the democratic role of social media – and the state’s responsibility in ensuring it
How can we oppose this dangerous development of self-affirmation? To avoid moving towards societies in which entire swathes of the population live disconnected from each other, it might be useful to understand social media (and other online platforms) as forms of public space. Such a framework would mean that corporations such as Facebook and Twitter could no longer legitimately decide what counts as offensive and, therefore, what should be banned. In the era of "fake news", we cannot give a single private company the final say on what to regard as racist, misogynistic or violent content in our public spaces – particularly when, driven by profit maximization, they use algorithms to determine whether some content of a page, or the entire page, should be censored. For example, when a widely recognized work of art is censored for "nudity", or a Facebook group is shut down because too many political opponents report it as "offensive" or "inappropriate", democratic control urgently needs to be in place to make the final decision. Moreover, it looks like we have reached the point where some social media spaces – those with a strong impact on our public discourse – should be granted specific media rights, just as conventional media are. Russia, for instance, gave a specific status to content producers with a significant reach (such as 10,000 followers on Facebook or YouTube), recognizing them as "official media". Although the Russian government uses this strategy to impose and reinforce control over social media content, thereby significantly restricting freedom of speech and plurality, we should not throw the baby out with the bathwater. Why not instead recognize that social media, like any other media, has a democratic role to play – and that this democratic role should be overseen by democratic entities, like legislative bodies and courts, instead of being transferred completely to private corporations?
Part of the way to regain diversity in our public spaces – a core element of democratic principles – would also be to ensure more neutrality in social media channels. It looks like appealing to the ethics of huge transnational corporations and reminding them of their responsibility to society simply will not work. Therefore, greater involvement of public authorities will become necessary to ensure democratic rights. Publicly employed, specifically skilled and fairly remunerated workers could, for example, not only monitor content's compliance with laws and online regulations, but also ensure that users are exposed to comparable ads and content – content that is not just automatically generated to fit as smoothly as possible with a user's previous behavior, but also includes a significant share of divergent posts, differing opinions and non-personalized ads.
Such actions – which, to an extent, amount to a form of "nationalization" of elements of social media – might sound drastic at first, but they would in fact merely acknowledge the immense role of social media in our digitized society. Recognizing that democratic social media need general rules and control mechanisms could not only prevent mass manipulation during critical democratic moments, but also ensure a public discourse that is shaped not just by self-affirmation, but by a plurality of differing opinions.
Protecting privacy vs. helping new platforms emerge: a real contradiction?
Of course, these ideas have a major drawback: they will be costly. But if we consider both the use of social media as a means of communication and the protection of data privacy as basic rights, we should regard the state as the entity responsible for ensuring those rights – and therefore also for providing sufficient public funds. And if the political will exists, this money can easily be generated. Properly prohibiting and legally prosecuting the tax evasion techniques of transnational online corporations like Facebook, or taxing mainly those online service providers who generate the most value, are just two possible options. The value generated could be measured, for example, by looking at who holds more than 5% of the online advertising market, or who accounts for more than 5% of people's time spent online. Not surprisingly, only a few multinational companies would be targeted – which would also increase the chances of success for smaller local e-companies that do not benefit from tax evasion techniques as much as large global corporations do.
But once the state takes responsibility for (some of) those media services, wouldn't that kill innovation and shrink the space for smaller, newer online companies to emerge? Interestingly, apart from the US, only two other countries have been able to breed and grow e-giants capable of competing with the GAFAs: China and Russia. Why? Because both have managed to keep some sort of sovereignty over their national data. Even if the motivations that pushed these countries to restrict access to certain services were probably not philanthropic, the result is now here: Baidu, Alibaba and Tencent in China, or Telegram in Russia, are now competing with Facebook, Google, Amazon and WhatsApp. If South Africa, and Africa more broadly, does not want to retain its status as an "e-colony", it will have to protect its own e-ecosystem by ensuring that its citizens' data feeds its own artificial intelligence innovation centers rather than greedy multinational companies. Imposing stricter democratic control over those foreign e-monopolies would surely be a first step in that direction.