
Biden’s New AI Executive Order Is Regulation Run Amok


Signed yesterday, President Biden’s new executive order on artificial intelligence safety is already making waves across the technology industry. While the order’s stated intention is to ensure the responsible development and safe use of AI, its actual effect is likely to be entirely different. The order suffers from a classic “Ready! Fire! Aim!” mentality, jumping the gun with overly prescriptive regulations before assessing the nature and significance of the problem it aims to solve. This may prove to be one of the most dangerous government policies in years.

Coming in at more than 100 pages, the executive order is a directive across the “whole of government” to begin regulating this sweeping new technology, which has the potential to revolutionize entire sectors of the economy and our lives, from education to healthcare to finance. The order directs or makes requests of countless federal agencies, from the Departments of Energy and Homeland Security to the Consumer Financial Protection Bureau and the Federal Housing Finance Agency, to name just a few. These agencies, in turn, have the authority to issue regulations that carry the force of law.

One of the order’s more important mandates requires that companies developing the most advanced AI models report information on model training, parameter weights, and safety testing to the government. Transparency about safety-test results sounds practical, but in reality it could discourage tech companies from doing more testing, since any results must be shared with the federal government. Moreover, the very essence of AI research is iterative experimentation, and this mandate could bog companies down in red tape and reporting when they should be tweaking their models to improve safety. Given these tradeoffs, it’s unclear that all the reporting will improve safety for anyone.

Rather than trying to identify problems and devise targeted solutions, the order simply assumes that factors like computing power and the number of model parameters are the right metrics for assessing risk. No evidence is offered to justify these assumptions. Other components of the order are similarly simplistic. For example, it directs the Office of Management and Budget, the Commerce Department, and the Homeland Security Department to identify steps to watermark AI-generated content. This is a bit like putting a band-aid on a bone fracture: sophisticated bad actors will be able to remove watermarks or produce high-quality deepfake content without them.

Also problematic is that the order’s data-sharing requirements may be illegal. In recent years, progressives have argued that the Defense Production Act (DPA) should be used to advance a variety of fashionable political causes. The DPA is a 1950 law intended to make it easier for the government to influence private production during wartime. Already stretching the intent of the law, former President Trump used DPA authority to ramp up government purchases of ventilators during the Covid-19 pandemic. Now, Biden is using these powers to direct tech companies to turn over proprietary AI data.

The government’s “regulate first, ask questions later” approach is reminiscent of Congress’s creation of the renewable fuels program in 2005, which increased the ethanol content of gasoline. Intended as a way to address global warming, the law drove up global food prices as crops like corn were diverted to fuel. At the same time, the mandate’s supposed benefits in the form of carbon dioxide emission reductions proved minimal. The example highlights the importance of understanding the problem a policy intends to solve before rushing forward with command-and-control regulations.

Some potential unintended consequences of the AI safety order are already apparent. For example, it could inadvertently provide a strategic advantage to bad actors. By imposing stringent regulatory hurdles and reporting mandates, the government will slow down reputable companies that are invested in responsible AI development and have a public image to maintain. These trustworthy entities will naturally prioritize regulatory compliance, leading to delays in innovation and deployment.

Meanwhile, malevolent entities, unconcerned with regulations or public perception, can exploit this slowdown to accelerate unsanctioned activities. In essence, the order could tilt the playing field in favor of those it aims to deter, while those committed to ethical and transparent practices bear the brunt of its bureaucratic impediments.

Similarly, some U.S. cloud infrastructure providers will be required to report on their interactions with foreign entities, verifying their identities and even revealing them to the government. The devil will be in the details, but this could create an immense burden on providers and may deter foreign entities from engaging with U.S. businesses. Much as E.U. regulation is driving technology companies off the continent, the U.S. could push AI innovation offshore.

The order also mandates the creation of new AI Governance Boards and Chief AI Officer positions across federal agencies, laying possible groundwork for a new centralized AI agency. While this might sound like a move toward more effective governance, a blanket, all-encompassing AI regulator is likely to be even less agile and less dynamic than the dozens of already slow and inefficient federal regulatory agencies with more focused missions.

A new AI regulator will inevitably spur bureaucratic turf wars. For example, the executive order directs the Federal Trade Commission to consider using its rulemaking authority to ensure “fair competition” in the AI marketplace, and it mandates the Department of Labor draft principles and best practices for employer use of AI. Will these agencies easily cede the authority they have amassed over decades to a new AI super regulator?

With respect to competition, the administration’s scattershot approach will put smaller AI firms and open-source technologies at a competitive disadvantage. With fewer resources to devote to compliance and less sway in the corridors of power, these smaller players could easily retreat from the market, relaxing competitive pressure on the Big Tech companies. Meanwhile, foreign adversaries like China and North Korea will continue their AI programs unabated, as will bad actors who feel no need to comply with government mandates at all.

While the administration is correct that AI will require an upheaval of modern administrative governance, its “regulate everything but the kitchen sink” approach is not the right strategy. Regulating AI will require speed, agility, and flexibility to match the dynamism of the AI landscape. Our existing regulatory institutions, built on 76-year-old laws and processes, are not up to the task of governing 21st-century technology.

Biden’s AI safety order could well be the biggest policy mistake of my lifetime. The new order may have been crafted as a guide toward a safer AI future, but it was plotted using an outdated map.
