Following political agreement in December 2023, the European Parliament recently adopted the AI Act, a comprehensive law establishing a framework to regulate the AI ecosystem in the EU.[ii]
The AI Act is part of a broader set of measures the EU is adopting to regulate the digital ecosystem. It takes its place alongside other recent legislative and policy interventions[iii] as a pathway for the EU to achieve its digital strategy of ensuring a “human-centric digital ecosystem”, in which both citizens and businesses prosper from the use of digital technologies.[iv]
The AI Act aims to ensure that AI systems are “safe” and “respect fundamental rights and EU values”, to promote innovation and investment in AI, and to facilitate the development of a single market for AI applications.[v] The law is seen as a “flagship legislative initiative with the potential to foster the development and uptake of safe and trustworthy AI across the EU.”[vi] As with the EU’s General Data Protection Regulation (GDPR) in the context of data protection, the law is also seen as establishing a global standard for AI regulation.[vii]
The law has been in the pipeline for over three years, with an initial version proposed by the EU Commission in 2021, followed by revised versions released by the EU Council in 2022 and the EU Parliament in June 2023.[viii] The European Parliament adopted the AI Act on March 13, 2024. The text of the legislation will now be finalized at the EU level before being formally adopted through the EU’s legislative process (i.e., following approval by the EU Council).
The regulation will come into force twenty days after its publication in the Official Journal and will be fully applicable two years after its entry into force (though some provisions will apply at different times), implying that member states will need to enact laws to enforce the AI Act sooner rather than later.
The AI Act, as initially proposed by the EU Commission, sought to cover a wide range of systems developed using any of the methods listed in Annex I to the Act (including techniques such as machine learning, logic- and knowledge-based approaches, and statistical or Bayesian approaches) to generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. However, this definition was criticized as excessively broad, bringing even simple software systems within its scope. Following negotiations, the final law uses a more restrictive definition, based on that proposed by the OECD.[ix]
Accordingly, the latest text adopted by the EU Parliament defines an AI system as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The law primarily targets providers of AI systems (those who bring an AI system to market, that is, developers, importers, and distributors), though it also imposes obligations on “deployers” of such systems (for instance, an employer using an automated recruitment system or a local government authority implementing an automated traffic management system).
The law will apply to any AI system used in the EU, irrespective of whether it is developed in or operated from a foreign jurisdiction. The law will, however, not apply to four specific areas:
The AI Act adopts a “risk-based approach” to regulation, delineating four categories of AI systems based on their uses and implementing a graded system of obligations proportionate to the perceived risk involved. It also imposes separate obligations with respect to general-purpose AI systems.
AI system developers/providers are to self-assess[x] and categorize their products under one of the following categories:
Prior to deployment, providers of high-risk systems will have to carry out fundamental rights impact assessments and undergo conformity assessments to ensure their adherence to the law.[xii] They will have to adhere to various obligations concerning the implementation of risk mitigation mechanisms, data governance,[xiii] documentation and log/record-keeping, effective human oversight, transparency (including by providing adequate information to deployers), cybersecurity, incident notification, and the maintenance of robustness and accuracy standards. Providers and deployers will also be required to conduct post-deployment monitoring of systems. High-risk systems will also require registration on an EU-wide database.
The law also creates two tiers of obligations for general-purpose AI (GPAI) systems. All such systems must adhere to transparency-related obligations, comply with copyright law, and maintain and provide detailed information on the data used to train the model.[xiv]
If a GPAI system meets certain criteria qualifying it as posing a “systemic risk”, additional obligations kick in, including the need to conduct model evaluations, implement risk mitigation mechanisms, pursue adversarial testing, adhere to reporting requirements, and ensure cybersecurity. Such systems are also expected to adhere to standards laid down at the EU level or, in their absence, to codes of practice developed through multi-stakeholder processes.
The law establishes a new AI Office within the EU Commission, charged with fostering the development of standards, and overseeing implementation of the law across member states. This body will be advised by a panel of independent experts.
A separate European AI Board, composed of representatives of member states and the EU Commission, will provide advice to the EU Commission, develop evaluation methods, advise on designation of AI systems, and monitor possible risks. This body will be advised by a multistakeholder advisory forum.
At the national level, member states will need to designate one or more national authorities to supervise the application and implementation of the law.[xv] Enforcement is to occur through Market Surveillance Authorities (which could be sectoral regulators). These bodies will have the power to inspect and investigate systems, require the withdrawal of products from the market, or inform the national supervisory authority of risks posed by AI systems to fundamental rights.
In addition to the risk-based nature of the obligations imposed under the law, the system of proportionate penalties, and the derogations for microenterprises, the AI Act envisages the extensive use of regulatory sandboxes for controlled testing of AI systems, including in real-world situations. The law also permits real-world testing of high-risk AI systems subject to various conditions, such as approval of a testing plan by the relevant national supervisory authority, exclusion of vulnerable people from testing, testing only for the time period necessary to achieve objectives, and the informed consent of test subjects.
The AI Act enables members of the public to complain to the relevant market surveillance authorities of their member state.[xvi] Breach of the law will therefore be handled by existing (or newly created) authorities responsible for consumer protection and enforcement of public-interest regulation in designated sectors.
Specific consumer rights conferred by the law include the ability of individuals to receive explanations about decisions by high-risk AI systems where such decisions impact their rights.
In keeping with the principle of proportionality, breaches of the law will attract a graded system of fines, based both on the nature of the obligations violated and on the nature of the entity concerned. Notably, small and medium enterprises (SMEs) and start-ups will face lower fines for infringements of the law.
Per Article 99, the maximum fine is either a percentage of the defaulting entity’s global annual turnover or a predetermined amount, whichever is higher. The caps are EUR 35 million or 7% of turnover for violations of the rules on banned AI applications, EUR 15 million or 3% for violations of various other obligations, and EUR 7.5 million or 1.5% for the supply of incorrect information.
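To make the “whichever is higher” mechanics concrete, the following minimal Python sketch computes the fine caps summarized above. It is an illustration only: the tier names and the function are our own constructs, not terms used in the Act.

```python
# Illustrative only: fine caps under Article 99 of the AI Act, where the
# cap is the higher of a fixed amount and a percentage of the defaulting
# entity's global annual turnover. Tier names are our own shorthand.

FINE_TIERS = {
    "banned_ai_applications": (35_000_000, 0.07),   # EUR 35m or 7%
    "other_obligations":      (15_000_000, 0.03),   # EUR 15m or 3%
    "incorrect_information":  (7_500_000, 0.015),   # EUR 7.5m or 1.5%
}

def max_fine_cap(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine cap in EUR for a given tier: the higher
    of the fixed amount and the turnover-based percentage."""
    fixed_amount, turnover_share = FINE_TIERS[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# Example: a firm with EUR 1 billion in global annual turnover that
# violates the ban on prohibited AI applications faces a cap of
# max(EUR 35m, EUR 70m) = EUR 70m.
print(max_fine_cap("banned_ai_applications", 1_000_000_000))
```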
As is the case with the EU, the regulation of the digital economy is high on the Chinese government’s agenda. However, while China has enacted laws and regulations on issues such as data protection and e-commerce,[xvii] it has not yet enacted a holistic or comprehensive law on AI of the kind adopted by the EU.
A roadmap for developing a regulatory framework is laid out in the Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence,[xviii] which recognizes the need for an urgent overhaul of China’s regulatory framework pertaining to AI.
The document lays out three steps towards this end: (a) establishing ethical norms, policies, and regulations for AI in specific fields by 2020, (b) establishing preliminary regulations, norms, and policy frameworks for the AI sector and developing capacity to assess and enhance AI security by 2025, and (c) establishing a comprehensive legal framework for AI by 2030. To this end, the document calls on government departments to carry out research on various issues germane to the AI ecosystem.[xix]
While moving towards a comprehensive framework for AI regulation by 2030, China has already enacted administrative regulations governing the use of certain types of AI, namely generative AI,[xx] deep synthesis technology,[xxi] and algorithmic recommendation systems. Separately, the Supreme People’s Court of China has issued an opinion on the use of AI in the judiciary,[xxii] while various courts have also ruled on rights and obligations arising from the deployment and use of AI.
The Interim Measures for the Management of Generative Artificial Intelligence Services[xxiii] came into effect in mid-2023, soon after the launch of OpenAI’s ChatGPT. The regulation applies to all AI systems that provide services involving the generation of AI content to the public in China. Thus, developers of generative AI systems from foreign jurisdictions must comply with the regulation so long as the generative AI system is available to the general public in China.[xxiv]
The regulation does not apply to services that are not provided to the public or that are not commercial in nature. However, the scope of the term “public” is unclear. For example, it is unclear whether generative AI can be used freely within universities, corporations, courts, governments, or other entities if access is limited to a specific group of people rather than the general public.
Per Article 4, the regulation lays down four core principles for regulating generative AI:
In addition, the regulation largely focuses on assigning AI service providers liability for violations of legal statutes. For instance, per Article 9, “service providers” are recognized as producers of “network information content”, and are therefore subject to Chinese data protection, data security, and other related laws.[xxv]
Service providers are required to scrutinize end-user activity and take action if a user is found to be using the generative AI system for illegal ends, i.e., in violation of any existing criminal or civil law.[xxvi] Upon discovering illegal content (i.e., content that violates any Chinese civil or criminal law), the service provider must report it to the relevant authorities and adopt measures, such as re-training or optimizing its models, to prevent the AI system from re-creating similar content. Notably, the regulation also obliges service providers to take measures to prevent minors from excessively accessing or relying on generative AI services.
This regulation applies to all service providers of deep synthesis-based online services, whether public or private sector, and whether Chinese or foreign.[xxvii]
As with the regulation on generative AI discussed above, this regulation primarily imposes liability, security requirements, and related obligations on service providers, while specifically targeting the problems posed by deepfakes and similar AI-created content that has the propensity to mislead or misinform the public.
Per Article 9, service providers are required to authenticate their users’ real identities and to provide services only to holders of authenticated identification. Article 10 requires service providers to scrutinize user activity (both input data and results).
The regulation also attempts to combat fake AI content (deepfakes, etc.) by requiring service providers whose systems provide the ability to edit biometric information (including human faces and voices) to notify users of the need to seek the consent of any individual whose personal information is being utilized or edited. Should a system create outputs that could lead to “confusion or misunderstanding by the public”, the provider must ensure that users are able to recognize the content as AI generated.[xxviii]
This regulation[xxix] applies to service providers who use algorithm-generated recommendations to provide information to users.[xxx] It seeks to protect users from manipulation by service providers such as e-commerce websites, search engines, and social media feeds (all of which use algorithms to choose what information to display to users), which are prohibited from using algorithms to:
Service providers must inform users of the nature of the algorithmic recommendation service and publish the basic principles, purposes, and mechanics of the system. Where a service exerts a “significant impact” on user rights or interests, they must inform the user of this. They must also provide users with an option to opt out of tailored or algorithmically recommended services. Service providers are further required to tailor their services to account for the different mental and physical capacities of minors, elders, workers, and consumers; to avoid discrimination; and to protect people’s rights and interests.
The unprecedented growth of the AI ecosystem has forced governments around the world to reckon with the various possible harms that unregulated development of this technology could cause. China and the EU are two of the first jurisdictions to implement public-interest regulation of the AI ecosystem. AI regulation is part of a broader attempt to regulate the digital ecosystem in both jurisdictions, though the objective of regulation in each is slightly different.
The EU’s AI Act is a holistic regulation that aims to balance multiple interests, such as promoting innovation and investment, while ensuring respect for human rights and consumer safety. China, on the other hand, has adopted a more piecemeal approach to regulating this space. While consumer protection is certainly an important aim of Chinese regulation, this objective appears secondary to the broader aim of ensuring national security, public order, and social cohesion.
The EU’s AI Act, which applies to the use of AI systems in both the private and public sectors, imposes regulatory obligations based on the perceived risk of particular AI models and use cases. The Act bars certain AI systems as posing “unacceptable risks”, while imposing a series of risk mitigation, security, transparency, and accountability obligations on AI systems designated as posing a “high risk”. Systems posing “limited” or “minimal” risks are relatively unregulated, with transparency obligations imposed on the former. GPAI systems are also regulated under the law, with greater obligations imposed on those posing “systemic risk”. The Act establishes new institutions at the EU level charged with oversight and the development of standards, while existing consumer protection and other market surveillance authorities are tasked with enforcement at the national level.
In comparison to the EU, the Chinese government appears to remain at a “testing” stage insofar as regulation of the AI ecosystem is concerned. While it has issued three administrative regulations governing various types of AI, none of these is yet a statute. Administrative regulations, while binding, do not carry the same legal weight as statutes, and appear to be commonly used to “test” regulatory frameworks before they are finalized in statutory form.[xxxi]
The Chinese framework comprises three regulations that govern specific types of AI systems: generative AI services, deep synthesis services, and algorithmic recommendation systems. Each primarily focuses on assigning and clarifying service provider liability for violations of Chinese civil and criminal law enabled by use of the AI service in question. Systems are also required to be designed to be transparent and trustworthy, and to protect consumers from manipulation and deception.
The Chinese method of implementing specific regulations for each type of AI system creates a patchwork regulatory framework. This could, on the one hand, promote innovation (as regulation is limited to specific use cases) and allow regulations to be tweaked before being concretized in the form of a statute. On the other hand, such a system can also cause uncertainty. Consumer rights protection may also suffer in the absence of specific or overarching safety and grievance redress frameworks. This may be particularly true given that China approaches regulation of the digital ecosystem largely from a national security and public order perspective, as opposed to a private rights-based perspective.
Comparing the two approaches, the Chinese regulation focuses on assigning responsibility and addressing the harms that specific types of AI systems can cause. It is more reactive and limited than the EU’s approach, which aims to be broader and more forward-looking. While the EU approach appears preferable prima facie, each has potential benefits and drawbacks.
The Chinese method allows the government to intervene whenever needed. To date, however, only certain specific user-facing AI services have been the subject of regulation. While the Chinese framework recognizes problems such as AI discrimination and bias, it largely leaves it to service providers and the government to scrutinize and act on problematic content. The European approach leaves more room for interpretation as to its scope, though critically it also recognizes various fundamental rights for users and provides them with more agency. That said, the EU approach has also been criticized as moving away from the “rights-based approach” exemplified by the GDPR to a “risk-based approach”, which foregrounds innovation and commercial development of AI technologies over the protection of fundamental human rights.[xxxii]
In many ways, the EU approach is somewhat less intrusive than the Chinese approach in that it attempts to implement proportionate, light-touch regulation (for instance, by permitting self-assessment of AI systems), although the EU framework does bar the use of certain AI systems outright while the Chinese framework does not. That said, the EU’s AI Act is bureaucratic in nature, with likely high compliance costs connected to documentation, reporting requirements, and the like, and it makes greater demands of regulatory capacity. The Chinese framework appears far more permissive in these respects.
Crucially, the European AI law applies to both the private and public sector, whereas the primary object of regulation in China is the private sector. This could be concerning in the context of rights protection, access to benefits, etc., particularly given the user surveillance mechanisms built into the Chinese regulatory framework.
As more and more jurisdictions around the world seek to regulate the AI ecosystem in the public interest, the models developed in China and, particularly, the EU are likely to be of great interest to governments everywhere. We have already seen how the EU’s privacy regulation, the GDPR, has influenced countries to adopt similar regulatory frameworks.[xxxiii] This has both economic and political benefits: for instance, regulatory compatibility between two jurisdictions can make it easier for companies from one jurisdiction to access markets in the other.[xxxiv] Every country should adopt rules tailored to its own circumstances and public interest priorities. While some may choose to directly emulate the rules now in place in the EU or China, every country should take note of these policies and use them to inform the choices it makes going forward. The EU and China show that it is practicable to adopt and implement AI regulation designed to protect consumers, address bias, and advance safety. Countries that do not proactively regulate AI may risk the welfare of their people.
[i] Rishab Bailey is Research Director at Public Citizen. Jeanne Huang is an Associate Professor at the University of Sydney, Australia.
[ii] EU Parliament, Legislative resolution on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence, March 13, 2024, https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html; Council of the EU, Artificial Intelligence Act: Council and Parliament strike a deal on the first rules for AI in the world, December 9, 2023, https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/; EU Parliament, Artificial Intelligence Act: MEPs adopt landmark law, March 13, 2024, https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law
[iii] Notable amongst these are the General Data Protection Regulation (GDPR), which establishes regulations around fair data processing practices with a view to protecting privacy; the Digital Services Act, which seeks to ensure a safe and accountable online ecosystem; the Digital Markets Act, which attempts to ensure fair and open digital markets; the Data Act, which aims to open up data for re-use and enhance interoperability and porting between cloud service providers; and the proposed Product Liability Rules, which aim to revise liability rules pertaining to software and AI products. Each of these will also apply to various aspects of the AI ecosystem.
[iv] EU Commission, A Europe Fit for the Digital Age, https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age_en; EU Commission, Europe’s Digital Decade, https://digital-strategy.ec.europa.eu/en/policies/europes-digital-decade#tab_3; EU For Digital, EU Digital Strategy, https://eufordigital.eu/discover-eu/eu-digital-strategy/
[v] Council of the EU, Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world, December 9, 2023, https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/
[vi] Id.
[vii] Id.
[viii] EU Commission, Proposal for a regulation of the EU Parliament and of the Council Laying Down Harmonized Rules on AI, April 21, 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206. The EU Council released its version of the law in 2022, with the EU Parliament suggesting a third version in June 2023. Refer Council of the EU, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence, November 25, 2022, https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf and EU Parliament, Artificial Intelligence Act, June 14, 2023, https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf.
[ix] Council of the EU, Artificial Intelligence Act: Council and Parliament strike a deal on the first rules for AI in the world, December 9, 2023, https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/. The OECD defines an AI system as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. It does so by using machine and/or human-based inputs to: i) perceive real and/or virtual environments; ii) abstract such perceptions into models through analysis in an automated manner (e.g., with ML, or manually); and iii) use model inference to formulate options for information or action. AI systems are designed to operate with varying levels of autonomy.” Cameron Kerry et al., Strengthening International Cooperation on AI, Brookings, October 2021, https://www.brookings.edu/wp-content/uploads/2021/10/Strengthening-International-Cooperation-AI_Oct21.pdf, p. 47
[x] A subset of high-risk AI systems will require third parties to audit conformity with the law.
[xi] The use of AI based systems by law enforcement agencies emerged as one of the more contentious issues during negotiations. The compromise reached by the EU permits the use of biometric identification systems in public spaces by law enforcement subject to various safeguards – such as prior judicial authorization and use only for a limited list of serious crimes.
[xii] Subject to emergency use cases for law enforcement purposes.
[xiii] This includes rules about how training data sets must be designed and used, rules about data preparation, and assessments of the assumptions used with a view to reduce bias. Data sets must be relevant, representative, free of errors and complete with respect to the given purpose.
[xiv] Open source and models in the R&D phase are exempt from these requirements.
[xv] This does not imply the need for establishing new institutions. Member states can choose to have existing regulators and regulatory institutions monitor and enforce the law.
[xvi] “Market surveillance authorities” are authorities designated by an EU member state under Article 10 of EU Regulation 2019/1020 to ensure compliance with EU laws pertaining to consumer health, safety, environment, etc. A country may designate more than one authority as a market surveillance authority. For example, in France the main market surveillance authority is the Directorate-General for Competition, Consumer Affairs and the Combating of Fraud. Other sectoral regulators (such as the Directorate for Maritime Affairs, Directorate-General for Labour, and the National Frequencies Agency, amongst others) also contribute to market surveillance. Refer Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32019R1020; France, National Market Surveillance Programme, 2016, https://ec.europa.eu/docsroom/documents/14746/attachments/3/translations/en/renditions/native#:~:text=In%20France%2C%20market%20surveillance%20is,import%20to%20the%20European%20Union%2C
[xvii] Notably, the Communist Party of China Central Committee and the State Council issued “Opinions on Building a More Perfect Factor Market-Based Allocation System and Mechanism” in 2020, which recognised “data” as a major “production factor” alongside land, labour, capital and technology. Government of China, Opinions on Building a More Perfect Factor Market-Based Allocation System and Mechanism, April 9, 2020, https://www.gov.cn/zhengce/2020-04/09/content_5500622.htm. China has also enacted (a) a Personal Data Protection law, which, amongst other things, regulates automated decision making using personal data, (b) a Data Security law, and (c) an E-commerce law. In addition, the Chinese Civil Code and other general legislation, such as the Espionage Act, are also relevant to the digital ecosystem.
[xviii] Issued by the State Council on July 8, 2017, Guo Fa [2017] No. 35
[xix] Including ethical concerns of AI, civil and criminal liability, privacy and property protection, information security in the context of artificial intelligence applications, establishing a system for tracing and accountability, and clarifying legal entities in artificial intelligence, along with their related rights, obligations, and responsibilities.
[xx] This refers to AI models and related technologies with the ability to generate text, images, audio, video, and other content.
[xxi] Deep synthesis technology, also known as deepfake technology, is a type of artificial intelligence that uses machine learning algorithms to generate or manipulate image, audio, or video content that appears realistic but is not authentic. Zhiyi Liu & Yejie Zheng, Virtual World Under AI: Augmented Reality and Deep Synthesis, in AI Ethics and Governance: Black Mirror and Order 95, 2022, https://doi.org/10.1007/978-981-19-2531-3_7.
[xxii] Opinions of the Supreme People’s Court on Regulating and Strengthening the Applications of Artificial Intelligence in the Judicial Fields, No. 33 of the Supreme People’s Court, promulgated on August 12, 2022
[xxiii] Order No. 15 of the Cyberspace Administration of China, et al, promulgated on July 10, 2023, and effective on August 15, 2023
[xxiv] Thus, systems developed for internal use, for instance, research, are exempted from the scope of the regulation.
[xxv] According to Article 22, Paragraph 2 of the “Interim Measures for the Management of Generative Artificial Intelligence Services,” “generative artificial intelligence service providers” refer to organizations or individuals that provide generative artificial intelligence services using generative artificial intelligence technology (including providing generative artificial intelligence services through programmable interfaces, etc.). This includes deployers who have purchased AI systems from third parties to provide relevant services to the public. Refer decision of the Guangzhou Internet Court in (2024) Yue 0192 Min Chu 113, February 2024.
[xxvi] The service provider must warn, restrict access and finally, terminate access to the service. Reports must also be filed with relevant government authorities.
[xxvii] Order No. 13 of the China Cyberspace Administration, the Ministry of Industry and Information Technology, and the Ministry of Public Security, promulgated on November 25, 2022, and effective on January 10, 2023
[xxviii] This applies if the system provides: (a) text generation or editing services conducted through the simulation of natural persons, such as intelligent dialogue and intelligent writing; (b) speech generation or editing services, such as synthetic or imitation voice, or editing services that significantly change personal identity characteristics; (c) character image and video generation or editing services, such as face generation, face replacement, face manipulation, and posture manipulation, or editing services that significantly change personal identity characteristics; (d) generation or editing services for immersive simulated scenes; and others.
[xxix] Order No. 9 of the Cyberspace Administration of China, et al, promulgated on December 31, 2021, and effective on March 1, 2022
[xxx] These services are defined as those that use “generation and synthesis, personalized push, selection sort, search filtering, scheduling decision, and other algorithm technologies”.
[xxxi] Administrative regulations are also used as instruments to implement statutes.
[xxxii] Fanny Hidvégi, Estelle Massé & Daniel Leufer, The EU should regulate AI on the basis of rights, not risks, Access Now, January 13, 2023, https://www.accessnow.org/eu-regulation-ai-risk-based-approach/
[xxxiii] See for example, Saranpaal Calais et al., How has GDPR influenced the evolution of data protection in APAC? A & O Shearman, May 24, 2023, https://www.aoshearman.com/en/insights/how-has-gdpr-influenced-the-evolution-of-data-protection-in-apac; Brian Daigle, Data Protection Laws in Africa: A Pan African Survey and Noted Trends, USITC Journal of International Commerce and Economics, February 2021, https://www.usitc.gov/publications/332/journals/jice_africa_data_protection_laws.pdf
[xxxiv] In the context of the EU, the “Brussels effect” (i.e., the de facto export of EU regulations through market mechanisms) is well recognized, and is a publicly stated reason for implementing forward-looking digital regulation.