Disclaimer: This is a shortened and updated version of Sections 3 and 5 of my article Liable and Sustainable by Design: A Toolbox for a Regulatory Compliant and Sustainable Tech
You can read the article in full (in open access) here:
#digitaleconomy #Web3 #Web4 #websustainability #digitalecosystem #techlaw #innovation #innovationlaw #startups #digital #digitaltrade #digitalservices #DigitalServicesAct #KYC #KYB #cybersecurity #compliance #duediligence #ethics #safety #equality #privacy #profiling #scoring
Now that I have your attention (food is good, old but gold, right?), let us delve into some tenets of regulatory compliance and ethics relevant to tech companies.
(A necessary introduction that you will see in every recipe :)
The pandemic has exacerbated the effects of the digital transformation: the extractive economy is steadily giving way to a new economic space—the digital economy. This transformation shakes the very foundations of the existence and purpose of law, i.e., the regulation of social relations. Today, however, the consequences of developing tech in an unsustainable manner are becoming obvious. Unsustainable tech development contributes to trust erosion, misinformation, and polarization, leading to such legal/ethical issues as irresponsible practices of all sorts, an unsafe and insecure digital market, inequality, lack of transparency, breach of privacy, etc.
These developments occur partly because the algorithms that shape our economies, society, and even public discourse were developed with few legal restrictions or commonly held ethical standards. Consequently, ensuring that technologies align with our shared values and existing legal frameworks is crucial.
This series of blog posts explores existing and prospective legal and regulatory frameworks that make tech not only legal by design but also, and especially, liable and sustainable by design. The key questions include whether new laws are necessary or if existing legal, regulatory, and ethical concepts can be adapted (or both!).
I argue here in favour of adapting pre-existing legal concepts, ethical standards, and policy tools to regulate the digital economy effectively, while paying attention to possible gaps and to ways of filling them. The objective is to synthesize these concepts, analyse their applicability to Web 3.0 and Web 4.0 regulation, and provide a toolkit (see below) for regulatory compliant and sustainable tech. The blog series focuses on organizations involved in tech and innovation, particularly Web 3-4 actors, using systems analysis to examine regulatory constructions both functionally and institutionally.
Figure 1: Toolbox. Source: Anna Aseeva (2023), 'Liable and Sustainable by Design: A Toolbox for a Regulatory Compliant and Sustainable Tech', Sustainability, Vol. 16, https://doi.org/10.3390/su16010228
In Episode 2 of the series, I analyse the existing concepts, most recent practices, and avenues in (i) regulatory compliance and (ii) ethics applicable to tech organizations.
Recipe #2: Compliance & Ethics
1. In this episode
As the previous episode of this series shows, beyond the more 'traditional'—capitalistic—constructs of corporate and contract law, consumer contract law and neighbouring legal concepts raise questions about how to create a sustainable—i.e., safe, transparent, and trustworthy—digital market ecosystem. The discussion of compliance thus naturally follows Episode 1 on corporate and contract law. Additionally, when one speaks about privacy, especially with respect to the advent and ever-growing use of algorithms, questions of ethics are unavoidable. I address both compliance and ethics below.
2. Compliance
Regulatory compliance somewhat complements consumer protection in the digital economy, which I discussed in Recipe #1: Corporate & Contract Law. It often covers sector-specific rules stemming from safety, security, and fundamental rights protections in the areas of anti-terrorism, prevention of various forms of abuse, anti-corruption, anti-money laundering, etc. Compliance is not really, and not necessarily, about tracing and/or controlling every step taken by consumers and the broader group of internet users. Rather, its main objective is to make a market more 'ecological', because a 'clean'—that is, organized and clear—market is a sustainable market, a safe and transparent ecosystem of trust, and therefore a market that can develop.
Compliance rules often come from fields that pre-existed Web 3-4, such as banking and finance law and regulation, with such basics as KYC (Know Your Customer) and KYB (Know Your Business) information. These data feed the procedures for verifying a client's or an investor's identity and/or funds. Knowing one's customers and business partners is key to ensuring that a market is not 'polluted', and only the service provider is able to carry out these checks. I will cover these and similar questions regarding not only the enterprises themselves, but also, and especially, their financial and monetary inputs/outputs in the next blog post—the one on cryptoassets.
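To make the KYC/KYB idea concrete, here is a minimal sketch in Python of what an onboarding gate might look like. Everything in it is an assumption for illustration: the fields, the `SANCTIONS_LIST`, and the `kyc_gate` helper are hypothetical stand-ins for the far richer checks (and official watchlists) a real compliance programme relies on.

```python
from dataclasses import dataclass

# Hypothetical, heavily simplified KYC/KYB record; real programmes
# collect and verify far more than this.
@dataclass
class Applicant:
    legal_name: str
    id_document_verified: bool      # identity verification outcome
    source_of_funds_declared: bool  # fund verification outcome

# Hypothetical watchlist standing in for official sanctions databases.
SANCTIONS_LIST = {"Blocked Trading Ltd"}

def kyc_gate(applicant: Applicant) -> bool:
    """Admit a customer or business partner only if the basic checks pass."""
    if applicant.legal_name in SANCTIONS_LIST:
        return False  # listed party: reject outright
    if not applicant.id_document_verified:
        return False  # identity not verified
    if not applicant.source_of_funds_declared:
        return False  # source of funds unknown
    return True
```

The design point is the one made above: only the service provider sits at this gate, so only the service provider can keep the market 'clean' at the point of entry.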
When we speak about enterprises on the online market, it is important to make sure that their customers and/or business partners are 'ecological' in the sense that they provide safe and original content, goods, or services. For instance, online marketplaces, which are, along with the DAOs discussed in the previous blog post, an increasingly typical example of a Web 3-4 business, will have to monitor and trace their traders in order to ensure a safe, transparent, and trustworthy ecosystem for consumers. Organizing their online interfaces in a way that allows digital businesses to carry out their due diligence and information obligations towards consumers is another sine qua non feature of regulatory compliant online marketplaces and virtually all types of tech companies.
It is important to note that in the EU, public authorities are now able to remove unsafe content, products, or services directly from the online platforms. EU operators of online marketplaces are also required to make reasonable efforts to randomly check whether unsafe content, products, or services have been identified as being illegal in any official database and, if they have, to take the appropriate action.
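As an illustration only, the 'reasonable efforts to randomly check' duty could be operationalized roughly as follows. This is a sketch under stated assumptions: `official_illegal_registry` stands in for whatever official database applies, and the underlying rules prescribe no particular code.

```python
import random

def random_compliance_sweep(live_listings: list[str],
                            official_illegal_registry: set[str],
                            sample_size: int = 100) -> list[str]:
    """Randomly sample live listings and flag those already identified
    as illegal in an official database."""
    sample = random.sample(live_listings,
                           min(sample_size, len(live_listings)))
    # Flagged items would then trigger the 'appropriate action':
    # removal, trader suspension, notification of authorities, etc.
    return [item for item in sample if item in official_illegal_registry]
```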
Under these new EU rules that function under the banner of the Digital Services Act (DSA), entities running online platforms that are in any way accessible to minors are required to put in place appropriate measures to ensure high levels of privacy, safety, and security of minors on their services. Furthermore, the DSA also prohibits advertising that targets minors via profiling based on users’ personal data when it can be established with reasonable certainty that those users are minors.
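The profiling prohibition implies a hard gate in the ad-serving path, sketched below. The `declared_age` signal is a hypothetical stand-in for however a platform establishes, with reasonable certainty, that a user is a minor; the DSA does not fix the signal.

```python
def may_serve_profiled_ad(user_profile: dict) -> bool:
    """Refuse profiling-based ad targeting whenever the user can be
    established with reasonable certainty to be a minor."""
    declared_age = user_profile.get("declared_age")
    if declared_age is not None and declared_age < 18:
        return False  # no profiling-based advertising to minors
    return True       # adults: other DSA/GDPR limits still apply
```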
In addition to the EU DSA, which regulates the obligations of digital companies acting as intermediaries connecting consumers with goods, services, and content, any online operator (including outside the EU) without a compliance programme will likely encounter considerable obstacles in finding a bank, a payment service provider, and the like.
Overall, the implementation of a compliance strategy for a tech organization involves a robust and well-thought-out risk management strategy. Once this strategy is ready, the enterprise must establish the relevant procedures and ensure that its management, operations, and transactions comply with its strategy and promises, including public pledges (most typically, on the company's website). Parts of such data must be made publicly available.
3. Ethics
In tech regulation and, more broadly, in the digital economy, ethics is closely related to the above questions of data governance, privacy, and, specifically, AI-related processing. As said in the introduction, at present, the consequences of developing tech in an unsustainable manner are obvious: the digital economy, while facilitating our access to virtually any content, good, or service through the internet, may also erode trust and fuel misinformation, polarization, and inequality. Indeed, the algorithms that shape our current socio-economic relations were developed not only with few legal restrictions but also, and indeed especially, with even fewer commonly held ethical standards.
In any matter related to data governance, privacy, and, particularly, the deployment of AI, today's tech organizations will have to conduct deeper ethical due diligence of their business partners, as well as screen the content, product, and/or service they offer, its impact on society, and the conditions of its use/deployment, which they will have to define ex ante.
Today, one of the key ethical concerns regarding AI involves so-called deepfakes: manipulations of physical appearance, including human facial appearance, through deep generative methods. Needless to say, deepfakes can adversely affect our socio-economic relations, our everyday life, and society as a whole.
They can be harmful, for example, in education and research (e.g., in plagiarism, transparency, lawful grounds for data processing), banking and finance (fraud, fake identities for bank accounts and financial proceedings, unlawful access to bank accounts, etc.), justice (administration and enforcement thereof), etc.
Other ethical dimensions that a version of Web 3-4 that is liable and sustainable by design should be mindful of include, but are not limited to:
(i) secure and authentic information (so, questions of safety and privacy),
(ii) a safe online space for everyone (questions of equality, and, to some extent, of safety), and
(iii) equal access to education, employment, and the social system, and, eventually, a market of goods, services, and content (questions of equality).
The latter question of equality of access is directly relevant for tech companies and AI today, for several reasons. Algorithms, especially generative AI, bring more context to our online search results, help us write our software and create our applications, and communicate with us and/or instead of us by generating text, images, code, video, audio, etc. (for more examples, see here). AI also creates new search engine architectures, serves as personalized therapy bots, assists developers in their programming tasks, and, through chatbots, assists us in our everyday interactions with virtually all providers of goods or services online.
The above benefits, however, also come with downsides and even dangers. The use of AI may involve abuse, exploitation, and manipulation of personal data collected via internet use. Algorithms are also increasingly used to make important decisions affecting all stages and areas of our life, from school and university admission, to job search, to loan application, to granting insurance and medical treatment, and beyond.
For example, today, dedicated private-law companies provide credit institutions with a score assessing citizens on the basis of their private data collected from the internet. On account of this score (a practice also called AI profiling), even though it is based solely on automated processing, credit institutions may take negative decisions (e.g., refusing a loan application) that significantly affect people's lives. Aside from the socio-economic and legal effects of such a decision, this system obviously raises questions of access to the market—in this example, the credit market—and hence also ethical questions of equality and of the moral soundness of making a fully automated decision on such subjects.
Note that discrimination occurs in the same way not only in access to the credit market, but also in the other areas of our lives in which decision-making is based on, or informed by, AI processing. Today, it ranges, as said above, from our school years to retirement and, eventually, death (because AI can also inform—or even completely form—decision-making on such types of insurance as life or funeral insurance, for example). Specifically, AI creates groups based on patterns and correlations in the attributes, behaviours, and preferences of users (see the illustrative sketch after this list), e.g., our:
- web history and web traffic;
- choice of browser (Explorer and Safari are ‘preferred’ by the algorithms over Chrome, Firefox, or Opera);
- social media and selling platforms’ data;
- lowercase use or all-caps use;
- speed of page scrolling;
- clicking behaviour;
- picture pixels;
etc.
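To see how such incidental signals can quietly harden into a score, consider the deliberately crude sketch below. The feature names and weights are invented for illustration and are not drawn from any real scoring system.

```python
# Invented weights: the point is that arbitrary signals (browser,
# casing, photo quality) can silently shape a 'creditworthiness' score.
FEATURE_WEIGHTS = {
    "uses_safari":        +0.10,  # browser choice rewarded...
    "uses_firefox":       -0.05,  # ...or penalized, for no stated reason
    "types_in_lowercase": -0.15,
    "fast_scrolling":     -0.05,
    "low_res_photos":     -0.10,
}

def opaque_score(user_signals: dict) -> float:
    """Sum weighted behavioural signals into a single score. A person
    refused credit on this basis cannot tell which arbitrary trait
    tipped the decision."""
    return sum(weight for feature, weight in FEATURE_WEIGHTS.items()
               if user_signals.get(feature))
```

A group such as 'lowercase typists with low-resolution photos' is exactly the kind of boundary no human drew and no anti-discrimination statute names.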
Humans are thus classified into new groups facing discrimination that are arbitrary, incomprehensible to humans, and random. Contrary to groups that previously—'traditionally'—suffered from discrimination based on gender, age, ethnicity, etc., these new groups (such as lowercase users or owners of poor-quality pictures or phones) are random because they are defined by incomprehensible and unknown characteristics and boundaries, created and deployed by AI in a random manner.
These ethical issues of privacy, safety, and, increasingly, equality are drawing greater attention from policymakers and lawyers, who manage these questions in a more legalistic manner, especially in the EU and especially in litigation (see the relevant AG Opinions and CJEU rulings in such recent cases as Norra Stockholm Bygg (here and here) and SCHUFA Holding (here and here)).
Hence, every digital organization will have to undertake a kind of 'double screening': making sure that its business partners and its business respect basic common ethical values and standards, while monitoring the myriad legislative and judicial developments on the subject matter. Such monitoring will be necessary above all when the work of a tech firm in any way relates to the EU and its single market, as the EU approaches many aspects of the digital economy in a particular, yet quite stringent, way.
The EU also has its own considerations regarding another important ethical and legal aspect of tech developments such as AI, namely the protection of intellectual property, which I will discuss in the next episodes of this blog series.
4. Summing up
I identified two key developments brought about by the digital transformation. Firstly, this new, rather decentralized and increasingly self-regulating digital economic space creates so-called governance and regulatory gaps. Secondly, and consequently, some of these gaps are being filled rapidly (at times successfully, at times less so) by a burgeoning new legal framework.
The section on regulatory compliance revealed a number of findings stemming from the two strands—i.e., from the governance and regulatory gaps and from the newest legal framework aiming to fill those gaps. Online marketplaces and all other kinds of online platforms offering various digital services, goods, and, today, increasingly, content are a typical example of new Web 3-4 'creatures'. At present, they may be non-compliant due to unfair or simply dangerous dealings by myriad fund providers, traders, sub-contractors, and even third parties operating on their platforms. They may thus fail to guarantee a safe, transparent, and trustworthy sustainable ecosystem for consumers and the whole economy.
Along with the pre-existing AML, KYC, KYB, due diligence, and similar regulatory tools outlined in that section, online platforms should thus organize their interfaces in a way that allows all concerned parties to comply with the corresponding digital due diligence and information obligations. One of the most recent regulatory developments adapted to fully digital services, including fully digital financial services, is the EU Digital Services Act, which offers a general framework on the subject matter.
Overall, regarding both general strands—the gaps and the filling of those gaps—any tech enterprise must take seriously the conception and implementation of a compliance strategy that includes robust and well-thought-out risk management.
On the ethics front, I highlighted the following gaps. Regarding AI, one of the major ethical issues today is deepfakes. As the ethics section above showed, deepfakes can be harmful in myriad areas of our everyday life: education and research, banking and finance, administration of justice, etc.
Other highlighted pain points arising from AI (specifically, classification and generative systems) naturally include equal access to education, employment, the social system, and the market of goods, services, and content, as well as, more generally, secure and authentic information and, eventually, a safe online space for everyone. At the conceptual level, those issues are reflected in the ethical questions of safety, privacy, and, especially, equality.
As the analysis of the relevant ethical issues showed, humans are thus classified by AI into new groups that face discrimination. These groups are arbitrary, incomprehensible to humans, and random because they are defined by incomprehensible and unknown characteristics, created and deployed by AI in a random manner. These gaps of privacy, safety, and, increasingly, equality are drawing growing attention from policymakers and lawyers, who attempt to fill them more formally, especially in the EU and especially in litigation, as shown by various 2022-2023 CJEU decisions.