S. Matthew Liao published an article in yesterday's New York Times asking "Do You Have a Moral Duty to Leave Facebook?". The article builds on the well-known distinction between duties to oneself and duties to others. I'm very interested in this matter as someone who himself had a Facebook account for six years and left in the summer of 2015. Most of the symptoms Liao describes as grounding duties to oneself to leave Facebook were the exact ones that made me leave. So I was glad I didn't have a personal motive to consider other reasons to leave as duties to others. Many of those reasons have emerged since the summer of 2015 (several appear in Professor Liao's article), and they are all connected to the part Facebook has played in promoting hate speech, spreading fake news, invading privacy, and otherwise attacking basic rights and important constitutional interests.
So, although I don't have a personal motive to consider duties to others as reasons to leave Facebook, as a lawyer and a law professor I still find it very important to consider those reasons. Law is a special field of Morality. I don't mean this as an endorsement of any Natural Law position, but in the sense that Law, as an expression of authority (something we can all agree on), filters the minimum that can be morally imposed on a society (before it rebels).
If this is so, it means Law very often finds itself confronted with the need to tackle important moral issues and turn them into rules of another kind. So the question becomes not what moral duties Facebook's users may have, either to themselves or to others, but what duties we can impose on Facebook to level the playing field for a minimum standard of morality. Or, as many other lawyers would prefer to put it, to balance competing rights.
Since I'm currently researching the contributions of consumer law, and especially the European Union concept of services of general economic interest, to the balance of rights on the internet, my question when reading S. Matthew Liao would be "Does Facebook have a legal duty to consider the morality of its users?" or, on a more nuanced approach, does Facebook have to balance the rights of its users among themselves and against its own interests? I think many people would answer yes to this last question, but not so many would do the same for the first one. Exposing the moral side of this problem - as reading Professor Liao's article made me do - and carrying it into the legal domain raises very interesting questions.
The first problem is, of course, whether Facebook, and even other social networks, should be considered services of general economic interest in the legal sense of EU law. But among the everyday problems Facebook has raised in these last few years, the main one is deciding whether Facebook can be left alone to self-regulate its relations with its users. Many will answer that this goes without saying, because Facebook owns the platform and may do as it sees fit. But we wouldn't say that of a shop owner regarding his customers, let alone an airline or a media company regarding theirs. And why not? Consumer law, especially where important services are concerned, is more morally sensitive in the sense in which I'm using morality here. It exposes different moral views on matters that are key to the everyday functioning of a modern society. Maybe social platforms are not there yet in the minds of legislators, compared to postal services or airline companies, but they are going to get there fast.

In other words, the clash of different, sometimes outright competing interests on social platforms, and the sheer asymmetry of power between the platforms and their users, calls for the authority of law in regulating those platforms, and maybe even imposing legal duties on them to consider moralities other than the one the platforms' owners want to use as the standard. Social platforms are also moral platforms, so there is no hiding behind the "social facts" construction: we are also regulating morals. Legally speaking: a duty to have less self-regulation and more regulated self-regulation. Germany went for it on January 1st with a law (the NetzDG) that imposes a duty on social networks to monitor the minimum of minimal moral standards in a legal system: its criminal law. Even so, the German law is giving rise to fierce discussions about its constitutionality, because many say it violates several freedoms of the networks' owners and of the users of those networks.
It's a discussion I cannot go into here (I'm still having it in my own mind first), my point being that, following Germany's lead, the discussion is already well underway, and it is really about making social platforms more morally acceptable regarding the duties of their users to others. In a way, more morally neutral, in the sense in which Law in modern countries wants to relate to morality. Professor Liao ends his article by writing that "For now I'm going to stay on Facebook. But if new information suggests that Facebook has crossed a moral red line, we will all have an obligation to opt out". Law, as in the case of the German approach, may prevent such red lines. Although it may just call that "balancing competing rights". Is it its place to do so? I would argue that this has always been the case in constitutional democracies under the rule of law. In open spaces, competing moral considerations have always been balanced by public moral considerations resulting from those social encounters. Otherwise known as Law.