The Tech Lobbying Is Coming From Inside the House
The American Prospect: Sen. Elizabeth Warren’s report shows the enduring power of Big Tech in trade deals.
In addition, as Li demonstrated in her paper, career staffers at USTR were brought around to Big Tech’s way of thinking about digital trade policy—that any regulations impeding tech platforms are illegal trade barriers, and that countries should not restrict the use of personal data or impose any legal liabilities on digital platforms.
And so the 128 pages of internal emails mainly show communications between USTR career staff, or top deputies like Bianchi and White, and these revolving-door tech lobbyists, going all the way back to the first months of the Biden administration. The frequency of the communications and meetings suggests that the government and these lobbyists operate as something like a team, working together on a digital trade policy they each sustain.
Contact isn’t all initiated from one side, either. While tech lobbyists are of course seeking meetings, offering briefings, getting clarity on rumors and information, and just trying to get in front of the principal decision-makers, often the communications start from the government side. Whenever some development on trade policy transpires, government officials go to the representatives of these giant companies, seeking advice and consultation.
There’s “an upcoming opportunity to touch base with Mexico on the fintech measure and so we wanted to see if you all may have any updates since the measure has been finalized,” writes USTR director Sarah Ellerman to Kibria, the Google rep, in March 2021. Leah Liston, another USTR director, asks Amazon’s Ari Giovenco if his team has “input” for an upcoming meeting with Mexican officials in June 2021. That same month, USTR’s Kenneth Schagrin asks Giovenco for an update on Amazon’s “consultations with the Saudis.”
However narrow these discussions are intended to be, they don’t usually end there. A meeting between Tai and Amazon’s Punke, intended to cover Punke’s time as WTO ambassador and “how you approached the job,” is later referenced in the context of Punke raising the topic of Mexico’s “de minimis” standard, an important exemption from cross-border inspection and fees for goods up to a certain value, which Amazon has been abusing to bring millions of packages per day into the U.S. without paying customs duties.
There’s tons of flattery and offers of support. But eventually, we get to the heart of things. Google’s Karan Bhatia writes Tai on August 5, 2021, to complain about proposed legislation in South Korea preventing dominant app stores from forcing developers to essentially pay tolls for access. Bhatia says this “would have the effect of prohibiting U.S. business models and advantaging local providers,” even though it would affect a Korean app distributor as well. Bhatia just asks Tai to raise concerns with Korean officials. Tai replies that the issue “is on my radar” and that she will provide updates in the future. Another email refers to a discussion between Augerot and White on the “DMA,” an acronym for the Digital Markets Act, an EU regulation on the market power of the tech platforms which industry has been trying to gut.
Feds turn antitrust focus to digital pharma ads
Politico: IQVIA is the leading provider of pharmaceutical sales and reference data, and also sells software for analyzing that information. Drug companies use IQVIA’s trove of information — which includes over 800 million de-identified patient records and petabytes of sales, promotional and prescription data — to gauge the likely demand for the drugs they’re developing and accurately compensate their sales forces. Generic drug companies, for example, can use the data to determine if it is financially feasible to introduce a competitor to a branded drug.
DeepIntent is a privately held advertising technology company that works with pharmaceutical companies to market drugs to doctors and patients. It also helps client companies measure and improve the success of those ad campaigns.
IQVIA made multiple moves in 2022 to build out an advertising business, including the separate purchase of Lasso Marketing, another health care ad tech company.
The FTC is investigating both the combination of the two direct competitors — Lasso and DeepIntent — as well as so-called “vertical” concerns of whether IQVIA would be able to leverage its mountain of pharmaceutical sales data to monopolize the pharmaceutical advertising market, three of the people said.
In its most recent annual report, IQVIA said the scope of its data covers more than 85 percent of the world’s pharmaceuticals. That includes “more than 1.2 billion comprehensive, longitudinal, non-identified patient records spanning sales, prescription and promotional data, medical claims, electronic medical records, genomics, and social media” from around 150,000 data suppliers.
Pharmaceutical advertising is big business. The total U.S. market for pharma ads is at least $11.5 billion, based on data collected by advertising analytics company Standard Media Index. Darrick Li, SMI’s vice president of sales in North America, said anecdotal evidence could put that number as high as $15 billion. Of that, he said, around 53 percent (roughly $8 billion at the high estimate) is digital, a segment that grew at a rapid 17 percent clip in the first quarter of 2023 compared with the year-earlier period.
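As a quick sanity check on those SMI figures, the arithmetic behind the “roughly $8 billion” digital slice works out as follows (the variable names are illustrative, not from the article):

```python
# Sanity check of the digital-share arithmetic from the SMI figures above.
low_estimate = 11.5e9   # reported floor for total U.S. pharma ad spend, in dollars
high_estimate = 15.0e9  # anecdotal upper estimate cited by Darrick Li
digital_share = 0.53    # share of pharma ad spend that is digital

# 53% of the $15B high estimate:
digital_high = digital_share * high_estimate
print(round(digital_high / 1e9, 2))  # 7.95, i.e. "roughly $8 billion"
```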
How trade commitments narrowed EU rules to access AI’s source codes
Euractiv: The capacity of public authorities and external auditors to access the source code of Artificial Intelligence systems under an upcoming EU rulebook was restricted on the basis of a digital trade agreement, according to internal documents from the European Commission.
The internal documents, obtained via a freedom of information request by Kristina Irion, a law professor at the University of Amsterdam, show several requests from the Commission’s trade department to the digital policy department on the draft AI Act.
The AI Act is a landmark legislative proposal to regulate Artificial Intelligence based on its potential to cause harm. The requests concern narrowing down provisions in the regulation related to the disclosure of source code, aligning them to EU trade commitments.
For Irion, the documents point to a concerning reversal of the right approach, because “the EU should use trade policy to promote its legislative agenda, not let previous trade commitments influence its digital policy-making.”
The European Union intends to push for its digital rules to become new international standards at a United Nations convention intended to produce a global vision of the digitalised society.
In particular, trade policy officials requested narrowing down the provisions related to the disclosure of source code to bring them in line with the EU-UK Trade and Cooperation Agreement.
The trade agreement commits London and Brussels to require the transfer of software source code only under specific conditions, notably when a regulatory body requests it to enforce a law meant to protect public safety.
In an internal note dated 9 April 2021, the trade department thanked the digital policy department for having amended the requirements on technical documentation but asked for further changes regarding the conformity assessment of the quality management systems, specifically on the provision related to the external vetting of notified bodies – authorised independent auditors.
The trade department requested that the wording on the provision of source code be narrowed down, removing a reference to ‘full’ access and specifying that the code would only be provided to assess the conformity of a high-risk system, to avoid an excessively broad interpretation.
Similarly, the trade department requested eliminating the reference to granting ‘full’ access to the source code for a market surveillance authority to assess whether an AI system deemed at high risk of causing harm complies with the AI Act’s obligations.
At the same time, the trade policy officers asked that the notified body and public authority be bound by confidentiality obligations when a source code is disclosed.
All the requested changes made it into the final draft the European Commission published later that month.
Following months of intense negotiations, members of the European Parliament (MEPs) have bridged their differences and reached a provisional political deal on the world’s first Artificial Intelligence rulebook.
“Giving a blank protection on any type of source code, despite the fact a lot of it is available via open source, tantamounts to sideloading a new protective regime that undercuts the current copyright and trade secrets rules, where at least you have to prove the information is commercially sensitive,” Irion added.
The EU Council, which adopted its position on the AI regulation in December, further narrowed the scope of the conditions to access AI’s source code, effectively making it a measure of last resort.
Meanwhile, provisional text in the European Parliament shifted the power of national authorities away from accessing source code and toward accessing the AI system’s training model.
According to Irion, the risk is making access to this precious information excessively difficult for regulators, who might face administrative burdens or fear losing litigation brought by companies and having to pay damages.
“With the rise of powerful AI models like ChatGPT, we need a better understanding of how these disruptive technologies actually work. We cannot just rely on the technical document the companies provide,” Irion added.