UK
New data laws debated in Parliament
Gov.uk: The reforms to UK data laws aim to reduce the number of consent pop-ups people see online, which repeatedly ask users to give permission for websites to collect data about their visits.
Before the changes come into effect, the government will work with industry and the Information Commissioner’s Office to ensure technology to help people set their preferences automatically is effective and readily available. This will help web users to retain choice and control over how their data is used.
The strengthened regime will seek to ensure data adequacy with the European Union’s General Data Protection Regulation (GDPR) and will modernise the Information Commissioner’s Office through the creation of a statutory board with a chair and chief executive to make sure it remains a world-leading, independent data regulator.
The Bill will make it easier and quicker for people to verify their identity digitally, if they want to, by establishing a framework for the use of trusted and secure digital verification services, and will reduce the number of cookie pop-ups people see online.
The legal changes will improve the UK’s ability to strike international data deals and make these partnerships more secure, allowing British businesses to seize billions of pounds of data trade as a reward of Brexit.
Data Minister Julia Lopez is expected to tell the House today:
This Bill will maintain the high standards of data protection that British people rightly expect.
But it will also help the people who are using our data to make our lives healthier, safer, and more prosperous. That’s because we’ve co-designed it with those people, to ensure that our regulation reflects the way real people live their lives and run their businesses.
The Parliamentary debate coincides with the Global Cross-Border Privacy Rules (CBPR) Forum in London. Over four days of workshops (Monday 17 – Thursday 20 April) the UK will lead global discussions between government officials, regulators and privacy experts, exploring how global privacy regimes can be more compatible and improve data transfers.
Singapore
Singapore appointed as Deputy Chair of the Global Cross-Border Privacy Rules (CBPR) Forum’s Global Forum Assembly
Imda: Singapore has been appointed the Deputy Chair of the Global Cross-Border Privacy Rules (CBPR) Forum’s policy-making body, the Global Forum Assembly.
The Global CBPR Forum was co-founded by Singapore and other participants of the APEC CBPR System in April 2022. The Forum seeks to enable the free flow of data and effective data protection and privacy globally through the establishment of the Global CBPR and Privacy Recognition for Processors (PRP) Systems. With the publication of the Global CBPR Framework and the appointment of its leadership team, the Forum is now welcoming the participation of other jurisdictions.
The Global CBPR Framework complements the Global CBPR Declaration (2022) in setting out the principles and objectives of the Forum. The Framework establishes the core requirements on which the Global Cross-Border Privacy Rules (CBPR) and Global Privacy Recognition for Processors (PRP) Systems will be based.
AI
A.I. Is Already Harming Democracy, Competition, Consumers, Workers, Climate, and More
PublicCitizen: Now is the time for journalists to draw attention to the escalating hazards of generative A.I. to make voters and policymakers aware of what is at stake. In that spirit, Public Citizen is hosting a hybrid in-person/Zoom conference in Washington, D.C. on Thursday, April 27, featuring U.S. Rep. Ted Lieu (D-Calif.) in which leading academics, technologists, and public policy advocates will discuss the wide range of threats A.I. already poses. We hope you or a colleague can attend and be part of the conversation about the many dangers and risks:
A.I. is already giving monopolies advantages and encouraging anticompetitive practices. The massive computing power required to train and operate large language models and other generative A.I. gives big corporations with the most resources a huge advantage. Products like ChatGPT have the potential to worsen self-preferencing by search engines – an anticompetitive practice that companies like Amazon, Apple, and Microsoft have already engaged in. Moreover, OpenAI is developing plugins that will allow ChatGPT to carry out actions that can be performed on the web, such as booking flights, ordering groceries, and shopping. By structuring plugins as a kind of app store within ChatGPT, OpenAI is likely to reproduce Big Tech’s tendency to thwart and throttle competition, siphoning money from small and local businesses.
A.I. is already spreading misinformation. Misinformation-spreading spambots aren’t new, but generative A.I. tools easily allow bad actors to mass produce deceptive political content. For example, OpenAI’s newest large language model, GPT-4, is better able to produce misinformation and can do so more persuasively than its predecessors. One study found that text-based generative A.I. can help conspiracy theorists quickly generate polished, credible-looking messages that spread misinformation – and that sometimes cite evidence that doesn’t even exist.
A.I. is already making convincing deepfakes. Increasingly powerful audio and video production A.I. tools are making authentic content harder to distinguish from deepfakes. A.I. has already convincingly mimicked President Joe Biden and former President Donald Trump, as well as other high-profile candidates and media figures. The FBI issued a warning in 2019 that scammers are using deepfakes to create sexually explicit images of teenagers to extort them for money. Even the U.S. military has used deepfakes.
A.I. is already exploiting artists and content creators. Works that artists and writers put online have been used without their consent to train generative A.I. tools, which then produce derivative material. For example, far-right trolls used A.I. to transform cartoonist Sarah Andersen’s work into neo-Nazi memes. Artists have filed a class action lawsuit against Stability AI, as have engineers, who say the company plagiarizes source code they wrote. Voice actors are reportedly being subjected to contract language allowing employers to synthesize their voices using A.I. And Getty Images – whose watermark bleeds through in images purportedly “created” by A.I. – is also suing. No one gave OpenAI, valued at an estimated $29 billion, permission to use any of this work. And there is no definitive way to find out whether an individual’s writing or creative output was used, to request compensation, or to withdraw material from OpenAI’s data set.
A.I. is already exploiting workers. Companies developing A.I. tools use texts and images created by humans to train their models – and typically employ low-wage workers abroad to help filter out disturbing and offensive content. Sama, OpenAI’s outsourcing partner, employs workers in Kenya, Uganda, and India for companies like Google, Facebook, and Microsoft. The workers labeling data for OpenAI reportedly took home an average of less than $2 per hour. Three separate Sama teams in Kenya were assigned to spend nine-hour shifts labeling 150-250 passages of text of up to 1,000 words each for sexual abuse, hate speech, and violence. Workers said it left them mentally scarred.
A.I. is already influencing policymakers. A.I. can be used to lobby policymakers with authentic-sounding but artificial astroturf campaigns from machines masquerading as constituents. An early example of this: In 2017, spambots flooded the Federal Communications Commission with millions of comments opposing net neutrality. In response, the agency decided to ignore non-expert comments entirely and rely solely on legal arguments, thereby excluding nearly all public input from its rulemaking process.
A.I. is already scamming consumers. Scammers are already using ChatGPT and other A.I. tools for increasingly sophisticated rip-off schemes and phishing emails. In 2019, criminals used A.I. tools to impersonate the CEO of a U.K.-based energy company, successfully requesting a fraudulent transfer of nearly a quarter million dollars. And in 2022, thousands of people fell victim to a voice-imitation A.I. deepfake: Scammers used A.I. tools to pose as loved ones in an emergency situation – and ripped people off to the tune of more than $11 million.
A.I. is already fueling racism and sexism. When data shaped by pre-existing societal biases is used to train algorithmic decision-making machines, those machines replicate and exacerbate the biases. OpenAI’s risk assessment report released with GPT-4’s launch was forthright about the model’s tendency to reinforce existing biases, perpetuate stereotypes, and produce hate speech. And Lensa, an A.I.-powered tool for creating images of users based on selfies, has a tendency to produce overtly sexualized images of women, even more so if the woman is of Asian descent.
A.I. is already replacing media with bogus content. The use of A.I. in journalism and the media is accelerating with virtually no guardrails holding back abuse. BuzzFeed laid off 12% of its workforce and then announced plans to use ChatGPT to produce quizzes and listicles, alarming company staff. The A.I.’s seemingly authoritative statements included worrisome errors that could confuse or mislead readers. Subsequent reporting revealed that BuzzFeed published dozens of travel articles written almost entirely by generative A.I. that were comically repetitive. Meanwhile, Arena Group, publisher of Sports Illustrated and Men’s Journal, recently debuted its first A.I.-written story, which was criticized for several medical errors. And CNET, a once-popular consumer electronics publication acquired in 2020 by a private equity firm, has been quietly producing A.I.-generated content for more than a year, apparently to game Google search results and draw dollars from advertisers.
A.I. is already undermining privacy. ChatGPT has given rise to a host of new data security and surveillance concerns. Because A.I. is trained by scraping the internet for writing, it’s likely that sensitive personal information posted online has been scooped up. Once that data is absorbed into ChatGPT, there’s no way to know what, if anything, is done to keep it secure. Therapy chatbots collect data about users’ mental health; A.I. tools that mimic deceased loved ones require training on personal and private interactions; virtual friends and virtual romantic partners encourage levels of intimacy that make divulging sensitive information almost inevitable. Little existing regulation limits how businesses might monetize this sensitive data or how A.I. might wittingly or unwittingly misuse it.
A.I. is already contributing to climate change. Training and maintaining generative A.I. tools requires significant computing power and energy, and the more they need, the bigger their carbon footprint. The energy required to train a single large language model has been compared to the construction and lifetime use of five cars, and to a car driving back and forth between New York City and San Francisco 550 times. Adding generative A.I. to search engines is predicted to require Google and Bing to increase their computing power and energy consumption by four to five times.
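Comparisons like the one above are back-of-envelope conversions from energy use into emissions, then into familiar equivalents. As a minimal illustrative sketch (not drawn from the Public Citizen piece), the arithmetic behind such comparisons can be made explicit; every figure below is a placeholder assumption and should be replaced with sourced estimates.

```python
# Back-of-envelope comparison of model-training emissions with car emissions.
# All inputs are illustrative assumptions, not measured or sourced values.

TRAINING_ENERGY_MWH = 1_300           # assumed energy to train one large model (MWh)
GRID_KG_CO2_PER_MWH = 430             # assumed grid carbon intensity (kg CO2e per MWh)
CAR_LIFETIME_KG_CO2 = 57_000          # assumed lifetime emissions of one car, incl. manufacturing
CAR_KG_CO2_PER_MILE = 0.4             # assumed average per-mile car emissions
NYC_SF_ROUND_TRIP_MILES = 2 * 2_900   # assumed NYC-San Francisco driving distance, both ways

# Convert energy use to emissions, then express as car equivalents.
training_kg_co2 = TRAINING_ENERGY_MWH * GRID_KG_CO2_PER_MWH
car_lifetimes = training_kg_co2 / CAR_LIFETIME_KG_CO2
round_trips = training_kg_co2 / (CAR_KG_CO2_PER_MILE * NYC_SF_ROUND_TRIP_MILES)

print(f"Estimated training emissions: {training_kg_co2:,.0f} kg CO2e")
print(f"Roughly {car_lifetimes:.1f} car-lifetimes of emissions")
print(f"Roughly {round_trips:.0f} NYC-SF round trips by car")
```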
These harms have arisen at the genesis of A.I. Scaling it up now necessarily means exponentially compounding all of them. The speed at which businesses are deploying new A.I. tools practically guarantees that the damage will be devastating and widespread – and that measures to limit that damage will be far less effective after A.I. tools are deployed than before. We need strong safeguards and a broad, agile regulatory regime in place before businesses disseminate A.I. widely. Until then, we need a pause.
Reports
EU Digital Trade Rules: Undermining Attempts to Rein in Big Tech
left.eu: This report shows how Big Tech corporations are working to constrain the ability of European Union (EU) democratic bodies to regulate their activities in the public interest through “trade” agreements, which are binding and permanent.
Digitalization is the defining economic transformation of our time. The benefits to society are well known, but the harms caused by the expansion of Big Tech are still being understood. The EU has started to recognise the urgent need to rein in some of Big Tech’s most pernicious practices. The Digital Services Act (DSA) and the Digital Markets Act (DMA), along with the Data Act, the Data Governance Act (DGA) and the Artificial Intelligence Act (AI Act), are first steps towards ensuring that the digital sector of the economy operates under the same framework of fair play and the public interest as the rest of the economy.
The same EU that is advancing new laws governing the digital economy is simultaneously promoting a digital trade policy that contradicts, and would severely constrain, current and future public interest policymaking in the EU and beyond.
Through a number of bilateral and regional trade agreements, Big Tech is seeking to maintain a policy environment that favours private control of technological resources, practices, and data for supernormal profit. Control over data – in particular, the ability to transfer data across borders – and keeping their algorithms and source code secret are the top goals of Big Tech in any “digital trade” agreement.
The EU has finalized trade agreements with a dedicated digital trade chapter with Canada, Singapore, Vietnam, Japan, the UK, Mexico, Chile, Mercosur, and New Zealand. And it is currently negotiating digital trade chapters with Indonesia, Australia, India, the region of Eastern and Southern Africa (ESA), and plurilaterally in the WTO.
This research analyses the most dangerous clauses included in the EU digital trade agenda (the “free” flow of data, bans on data localisation, and non-disclosure of source code).
Report: https://left.eu/content/uploads/2023/03/Digital-Trade-vFinal.pdf
2023 Landscape: Confronting Tech Power
AINOW: Artificial intelligence is captivating our attention, generating both fear and awe about what’s coming next. As increasingly dire prognoses about AI’s future trajectory take center stage in the headlines about generative AI, it’s time for regulators, and the public, to recognize that there is nothing about artificial intelligence (and the industry that powers it) that we need to accept as given. This watershed moment must also swiftly give way to action: to galvanize the considerable energy that has already accumulated over several years towards developing meaningful checks on the trajectory of AI technologies. This must start with confronting the concentration of power in the tech industry.
The AI Now Institute was founded in 2017, and even within that short span we’ve witnessed similar hype cycles wax and wane: when we wrote the 2018 AI Now report, the proliferation of facial recognition systems already seemed well underway, until pushback from local communities pressured government officials to pass bans in cities across the United States and around the world. Tech firms were associated with the pursuit of broadly beneficial innovation, until worker-led organizing, media investigations, and advocacy groups shed light on the many dimensions of tech-driven harm.
These are only a handful of examples, and what they make clear is that there is nothing about artificial intelligence that is inevitable. Only once we stop seeing AI as synonymous with progress can we establish popular control over the trajectory of these technologies and meaningfully confront their serious social, economic, and political impacts—from exacerbating patterns of inequality in housing, credit, healthcare, and education to inhibiting workers’ ability to organize and incentivizing content production that is deleterious to young people’s mental and physical health.
In 2021, several members of AI Now were asked to join the Federal Trade Commission (FTC) to advise the Chair’s office on artificial intelligence. This was, among other things, a recognition of the growing centrality of AI to digital markets and the need for regulators to pay close attention to potential harms to consumers and competition. Our experience within the US government helped clarify the path for the work ahead.
ChatGPT was unveiled during the last month of our time at the FTC, unleashing a wave of AI hype that shows no signs of letting up. This underscored the importance of addressing AI’s role and impact, not as a philosophical futurist exercise but as something that is being used to shape the world around us here and now. We urgently need to learn from the “move fast and break things” era of Big Tech; we can’t allow companies to use our lives, livelihoods, and institutions as testing grounds for novel technological approaches, experimenting in the wild to our detriment. Happily, we do not need to draft policy from scratch: artificial intelligence, the companies that produce it, and the affordances required to develop these technologies already exist in a regulated space, and companies need to follow the laws already in effect. This provides a foundation, but we’ll need to construct new tools and approaches, built on what we already have.
There is something different about this particular moment: it is primed for action. We have abundant research and reporting that clearly documents the problems with AI and the companies behind it. This means that more than ever before, we are prepared to move from identifying and diagnosing harms to taking action to remediate them. This will not be easy, but now is the moment for this work. This report is written with this task in mind: we are drawing from our experiences inside and outside government to outline an agenda for how we—as a group of individuals, communities, and institutions deeply concerned about the impact of AI unfolding around us—can meaningfully confront the core problem that AI presents, and one of the most difficult challenges of our time: the concentration of economic and political power in the hands of the tech industry—Big Tech in particular.
Report: https://ainowinstitute.org/wp-content/uploads/2023/04/AI-Now-2023-Landscape-Report-FINAL.pdf
International Preemption by “Trade” Agreement: Big Tech’s Ploy to Undermine Privacy, AI Accountability, and Anti-Monopoly Policies
ReThinkTrade: This policy brief uses excerpts from 117th Congress bills and from administration policy documents to show the direct conflicts between prominent U.S. domestic digital governance proposals and the “digital trade” agenda that Big Tech interests seek in current trade negotiations. The Trump administration included a pro-Big-Tech Digital Trade chapter in the U.S.-Mexico-Canada Agreement (USMCA). USMCA Chapter 19 expands on what was viewed as a Big Tech-rigged Electronic Commerce chapter in the Trans-Pacific Partnership (TPP). Many of the restrictions on domestic policy in USMCA Chapter 19 are not found in other nations’ pacts that have digital terms. Big Tech interests have been clear that their goal is, at a minimum, to replicate the USMCA/TPP approach to “digital trade” rules in current trade talks and, with respect to some sensitive issues, to push for broader prerogatives for tech firms and new limits on governments.
Report: https://rethinktrade.org/wp-content/uploads/2023/03/International-Preemption-by-Trade-Agreement.pdf