Anna Aseeva

‘Good AI augments humans, not vice versa,’ they say.

Does it augment human rights?



Disclaimer: This is an updated English-language version of a note for Business and Human Rights, the United Nations Global Compact Network Poland Report 2022, originally published in Polish here: https://ungc.org.pl/wp-content/uploads/2022/12/Raport_Biznes_i_Prawa_Czlowieka_2022.pdf


Introduction

The COVID-19 pandemic has exacerbated the effects of the digital transformation: the now-old global economic space, the extractive economy, is steadily giving way to the new economic space, the digital economy. This transformation shakes the very groundwork of the existence and purpose of law, i.e. the regulation of social relations. For instance, the pandemic has pushed us all further into digital social spaces, putting questions of privacy at the forefront of societal regulation.


What sits in each of Lady Justice's two scales?

Today, the consequences of developing tech without putting people first are clear. Despite the internet's many benefits, it may also erode trust and fuel misinformation, polarization, and inequality. That is partly because the algorithms that shape our economies, society, and public discourse were developed with few legal restrictions or commonly held ethical standards. It is increasingly obvious, and necessary, that the technologies shaping our current socio-economic relations be consistent with our shared values and, indeed, with fundamental rights. Confronting the abuse, exploitation, and manipulation of personal data collected via internet use is one such fundamental rights issue. The legal challenges related to property rights in these harvested data have implications for privacy, personal safety, and the like.


On the other hand, some platforms increase the publicness of selected data. The so-called legal intelligence platforms are said to help legal professionals automate a portion of their work, especially by (i) performing online legal search (collecting legal documents) and (ii) creating comprehensive documents (enriching them). These platforms' artificial intelligence (AI) does this by finding crucial information on the web and automatically highlighting it, which is then enriched with relevant external data such as legislation, court rulings, commercial registries, etc. Most typically, both the original and the enriched documents can be found in the platforms' databases. Among these documents there are, as said, court decisions.
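To give a flavour of what such enrichment can look like in practice, here is a minimal, hypothetical sketch in Python: it detects simple statutory citations with a regular expression and attaches a placeholder link to an external source. The pattern, function name, and link are illustrative assumptions, not any particular platform's method; real platforms rely on far richer patterns and machine-learning models.

```python
import re

# Illustrative pattern for citations such as "article 5(1)(c)" or
# "Article 6(1) GDPR"; real platforms use far richer patterns and models.
CITATION_PATTERN = re.compile(r"[Aa]rticle\s+\d+(?:\(\d+\))?(?:\([a-z]\))?(?:\s+GDPR)?")

def enrich(decision_text: str) -> list[dict]:
    """Find statutory citations in a decision and annotate each with a
    placeholder link to an external legal source."""
    enrichments = []
    for match in CITATION_PATTERN.finditer(decision_text):
        enrichments.append({
            "citation": match.group(0),
            "span": match.span(),
            # Placeholder only: a real platform would resolve the citation,
            # e.g. to the consolidated text on EUR-Lex.
            "link": "https://eur-lex.europa.eu/",
        })
    return enrichments

print(enrich("Pursuant to article 5(1)(c) GDPR, the court held that..."))
```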


With the publication, enrichment, and eventual reuse of court decisions, the main hurdle is the uneasy balance between (i) the personal data and overall privacy of each person mentioned in the decision, on the one hand, and (ii) public policy, specifically the interest of the general public in accessing court decisions, on the other.


In the case of privacy, the fundamental rights at stake are:

- right to privacy;

- right to personal integrity;

- in the context of ‘sensitive’ litigation, such as cases related to terrorism or family matters, the risk of disturbances or reprisals following publication of the judgment, and hence the right to, or interest in, not suffering these risks; and, finally,

- interest in not suffering the inconvenience of seeing one or more sensitive court decisions freely accessible on the internet (as a derivative of a right to digital/information self-determination).


In the opposite scale (public policy/interest), at stake are:

- right to a fair trial;

- principle of publicness of the court decisions;

- right to reuse public information;

- right to information; and

- freedom of speech/expression.


How can (and perhaps, soon, must) legal AI platforms balance the above rights and freedoms lying in each of Lady Justice's two scales?


First, the principle of pseudonymity, or concealment, of personal data and information postulates that, according to the practice and/or regulations applicable to the issuing court, court decisions undergo an initial concealment of information (typically, the surnames and first names of natural persons) before the platforms receive the data. In this case, the platforms only ever receive an already concealed version of the decision.
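A minimal sketch of what such initial concealment might look like follows, assuming the party names are already known from the case file; that assumption is purely illustrative, as courts and platforms typically rely on named-entity recognition models rather than exact string matching.

```python
import re

def conceal(decision_text: str, party_names: list[str]) -> str:
    """Replace each known party name with a neutral placeholder, so that
    downstream platforms only ever receive the concealed version."""
    concealed = decision_text
    for i, name in enumerate(party_names):
        placeholder = f"[Party {chr(ord('A') + i)}]"   # [Party A], [Party B], ...
        concealed = re.sub(re.escape(name), placeholder, concealed,
                           flags=re.IGNORECASE)
    return concealed

print(conceal("Jan Kowalski appealed the decision.", ["Jan Kowalski"]))
# -> "[Party A] appealed the decision."
```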


Note that article 5(1)(c) of the EU General Data Protection Regulation (GDPR) provides for data minimization, including in court proceedings. Additionally, GDPR article 4(5) enshrines pseudonymization (concealment) as a measure. The latter is limited to certain specified categories of personal data and is one of the security measures provided for in GDPR article 32, whose application is to be proportionate to the risks associated with the processing. Unlike the GDPR's anonymization, which is an obligation of result, pseudonymization is hence arguably an obligation of conduct (of the legal intelligence platforms, in the context of this analysis).


If we take GDPR article 5(1)(c) minimization and article 4(5)-style pseudonymization as a common denominator, a sort of minimum threshold for such platforms to operate in compliance with fundamental rights, then, to further mitigate the privacy side of the issue, the platforms should additionally ensure:

1) implementation of a pseudonymization algorithm favouring ‘over-pseudonymization’: i.e. in case of doubt, platforms should hide the terms on which a doubt exists, rather than risk ‘not-enough pseudonymization’ (see the sketch after this list);

2) continuous improvement of their pseudonymization algorithm, which should thus be regularly updated;

3) honouring requests for additional concealment when this proves necessary: further concealment of personal information upon request, carried out manually when the raw data holder demonstrates an impact on their rights;

4) verification by a human of any modification relating to a court decision already published;

5) verification by a human, before its publication, of any court decision conveyed by a platform user;

6) drafting an easily accessible and understandable privacy policy, in particular concerning the procedures for exercising the rights of personal raw data holders; and

7) non-referencing of certain types of decisions: for instance, intentionally excluding from search-engine referencing the judgments on sensitive matters, such as family matters (also covered in the sketch after this list).
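To make measures 1) and 7) concrete, here is a minimal sketch. The confidence scores are assumed to come from a hypothetical entity-recognition model, and both the threshold value and the set of sensitive categories are illustrative assumptions, not a definitive implementation.

```python
# Illustrative constants: the deliberately low threshold implements measure 1's
# "when in doubt, conceal" rule; the category set serves measure 7.
REDACTION_THRESHOLD = 0.3
SENSITIVE_CATEGORIES = {"family", "terrorism"}

def should_redact(entity_confidence: float) -> bool:
    """Measure 1: favour over-pseudonymization. Even low-confidence candidate
    terms are concealed, rather than risking not-enough pseudonymization."""
    return entity_confidence >= REDACTION_THRESHOLD

def robots_directive(case_category: str) -> str:
    """Measure 7: keep sensitive judgments out of search-engine indexes,
    e.g. by serving this value in an X-Robots-Tag response header."""
    return "noindex" if case_category in SENSITIVE_CATEGORIES else "all"

assert should_redact(0.35)                     # borderline term: concealed anyway
assert robots_directive("family") == "noindex"
```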


Summing up

Zero risk on the privacy side of the subject-matter does not exist. But if the legal platforms at least fully implement all the above measures, it is very unlikely that a substantive risk materializes or that the concerned individuals suffer significant harm or adverse impact. The cost-benefit ratio thus tilts in favour of the online dissemination of judgments, even if total pseudonymization cannot be guaranteed.


In the European context, such measures are likely to establish a legal basis within the meaning of GDPR article 6(1), grounded in the dual legitimate interest of (i) the principle of publicness of court decisions and (ii) the right to reuse public information, thus enabling both the general public and legal professionals to access the decisions, and the legal AI platforms to reuse them in their databases. All in all, the public nature of the published judgments reinforces the lawfulness of legal intelligence platforms' operations, since the reuse of public information was established as a right by the Court of Justice of the European Union (CJEU) as early as 2011 (see CJEU, Commission v Poland, C-362/10).


Lastly, and importantly, on 2 March 2023, in the case Norra Stockholm Bygg, C‑268/21, the CJEU ruled that:

'1. Article 6(3) and (4) of [the GDPR] must be interpreted as meaning that that provision applies, in the context of civil court proceedings, to the production as evidence of a staff register containing personal data of third parties collected principally for the purposes of tax inspection.

2. Articles 5 and 6 of [the GDPR] must be interpreted as meaning that when assessing whether the production of a document containing personal data must be ordered, the national court is required to have regard to the interests of the data subjects concerned and to balance them according to the circumstances of each case, the type of proceeding at issue and duly taking into account the requirements arising from the principle of proportionality as well as, in particular, those resulting from the principle of data minimisation referred to in Article 5(1)(c) of that regulation.' (Emphasis added)


In its general lines, this CJEU decision confirms our analysis above, in particular regarding the right to reuse public information.
