2 Senators Propose Bipartisan Framework for A.I. Laws
US Today: Sen. Richard Blumenthal, Democrat of Connecticut, and Sen. Josh Hawley, Republican of Missouri, plan to announce a comprehensive framework to regulate artificial intelligence in the latest attempt by Congress to keep pace with technology.
The leaders of the Senate Judiciary Subcommittee on Privacy, Technology and the Law said in interviews Thursday that their framework includes requirements for licensing and auditing of AI, the creation of an independent federal agency to oversee the technology, liability for companies whose products violate privacy and civil rights, and requirements for data transparency and safety standards.
Lawmakers plan to highlight their proposals in an AI hearing on Tuesday that will feature Brad Smith, the president of Microsoft, and William Dally, the chief scientist at AI chipmaker Nvidia. Mr. Blumenthal and Mr. Hawley plan to introduce bills from the framework.
On Wednesday, top tech executives including Elon Musk, Microsoft’s Satya Nadella and OpenAI’s Sam Altman will meet with Senate Majority Leader Chuck Schumer and other lawmakers in a separate closed-door session on AI regulations.
Since the release of the ChatGPT chatbot in November, lawmakers in Washington have been eager to learn about the risks and opportunities of AI in order to create rules for the industry. Mr. Blumenthal said lawmakers didn’t want to repeat the mistakes they made when they couldn’t agree on privacy and security laws for social media companies.
But the framework is likely to face resistance from the technology industry. IBM and Google have opposed the creation of a new regulator for the technology.
UN cybercrime treaty risks becoming a ‘global surveillance pact’
The Register: An international treaty on countering cybercrime is in danger of becoming an "expansive global surveillance pact" that will trample data privacy and human rights, activists warned UN delegates as they meet in New York City this week to hammer out an updated proposal.
The draft United Nations cybercrime treaty, which has been under negotiations for over two years, aims to define what online crime actually is and how member states can better work together to curb the growing global problem.
However, there’s concern among many governments and civil rights advocates that the treaty — originally proposed by Russia, with support from countries including China, North Korea, Iran, Venezuela, and Nicaragua — will pave the way for regimes to legalize surveillance across borders and criminalize online speech, seemingly with the support of the international community.
The treaty’s sixth negotiating session began on Monday at the UN headquarters in Manhattan with delegates reviewing the draft through September 1.
During a press conference on Wednesday, human rights and digital privacy advocates warned that unless the draft’s wording changes significantly, the proposal will give governments the green light to persecute activists, journalists, and marginalized groups — in other words, the usual victims when it comes to authoritarian regimes’ attempts to criminalize speech and privacy.
G-7 countries commit to AI code of conduct
Politico: Officials from the G7 group of leading democratic countries agreed Thursday to create an international code of conduct for artificial intelligence as politicians from Brussels to Washington seek greater control over this emerging technology.
As part of the voluntary guidelines, policymakers said countries would work together on specific principles governing generative AI and other advanced forms of the technology. This attempt at creating a unified, but nonbinding, international rulebook would then be presented to G7 leaders as early as November.
The code of conduct is expected to include commitments from companies to take steps to stop potential societal harm created by their AI systems; to invest in tough cybersecurity controls over how the technology is developed; and to create risk management systems to curb the potential misuse of the technology.
China Finalizes New AI Rules as the Global Race to Regulate AI Intensifies
JD Supra: As China pledges to become the world leader of artificial intelligence (AI) by 2030, it is charging ahead by implementing its own comprehensive AI rules, investing heavily in the technology and receiving sizable foreign funding for its AI efforts. The authoritarian state’s swift progress comes as the European Union (EU) has already approved sweeping draft legislation to regulate AI. Meanwhile, the United States and the United Kingdom attempt to catch up by drafting goals and guidelines for the rapidly developing technology.
Beginning Aug. 15, 2023, China’s first round of generative AI regulations is set to go into effect. The regulations apply to generative AI companies that offer AI services to the public. Public AI providers, such as OpenAI, which developed ChatGPT, must first obtain a license to operate in China. Once a license is secured, providers must perform periodic security assessments of their platforms, register algorithms that can influence public opinion and confirm that user information is secure. They are also mandated to take a strict stance on “illegal” content by stopping its generation, improving the algorithm and reporting the illegality to the relevant state agencies. Illegal content includes content that infringes on the intellectual property rights of others. Additionally, the rules state that AI providers must protect China’s national security by abiding by the country’s “core values of socialism.” How they must do so, however, remains unclear.