The technology firms of Silicon Valley want people to make way for AI by giving up control. That couldn’t be further from the European view
In his speech at the White House AI summit on 23rd July, Donald Trump’s message for America’s tech leaders was clear. Many of them had raised the money and used their networks to propel him back to the presidency. As new rumours about his relationship with the convicted sex offender Jeffrey Epstein circulated in the media, Trump wanted his friends in Silicon Valley to know that he is still their guy. It was a significant move for him to make, especially as many of them would be more than happy to see Vice President J.D. Vance, a politician with close ties to tech power brokers like Peter Thiel, step into his place.
Alongside major investments and planning easements for new data centres, semiconductor fabrication plants and energy production facilities, the president committed to rolling back regulation and giving AI developers a free pass. In response to the debate about the unlicensed scraping of data from published works to train AI models, the president was clear:
“You can’t be expected to have a successful AI program when every single article, book, or anything else that you’ve read or studied, you’re supposed to pay for. Gee. I read a book. I’m supposed to pay somebody.” It may be news to Donald Trump that if writers and creators don’t get paid, then that’s the end of the publishing industry as we know it.
The president also recommitted to reversing the AI safety initiatives supported by the previous administration, regulations designed to prevent illegal discrimination and other criminal activity from being promoted online. Here Trump stated: “The American people do not want woke Marxist lunacy in the AI models… That’s why on day one, I very proudly terminated Joe Biden’s order on woke AI effective immediately. You don’t have any of those crazy rules.”
Government regulations shouldn’t require companies to take enforcement measures against legal speech, but here Trump is effectively taking the safety guardrails out of the system altogether. He gave limited recognition that AI “brings the potential for bad as well as for good, for peril, as well as for progress. But the daunting power of AI is… not going to be a reason for retreat from this new frontier.” As for how to navigate that path, the president is happy to leave it to the leaders of American tech.
We know that their preference is for self-regulation, with direct liability only for the most harmful illegal content. In a recent article for The Times, Joel Kaplan, Meta’s new president of global affairs, condemned the European Union’s laws on data protection, fair competition and online safety for “being over-enforced by activist regulators… imposing spurious fines and forcing companies to redesign their business models.”
Well, if a business model involves actively promoting self-harm content to children, price fixing to extort customers, and exploiting personal data without user consent, then the regulator should step in. Meta has also demonstrated its desire to resist any external regulation by announcing that it will refuse to follow the EU’s Code of Practice on AI, a voluntary set of rules giving guidance on transparency and copyright as well as on safety and security issues, when it goes live on 2 August.
Chamath Palihapitiya, a former senior Facebook executive and founder of Social Capital, a Palo Alto-based venture capital firm, was name-checked in President Trump’s speech alongside the White House’s chief AI advisor, David Sacks, himself a major tech investor. Both Palihapitiya and Sacks are regulars on All-In, a popular podcast on politics and technology on which the president himself has been a guest.
In a recent show, Palihapitiya speculated that public concern about AI is rooted in the human desire to be in control. He explained, with reference to data from autonomous vehicles, that “there was a psychological need for humans to believe we were part of the answer. But what this is showing is because of Moore’s law and because of general computation, it’s just not necessary. You have to let go, give up control. And that’s very hard for some people.”
This has become, in many ways, the key argument that separates the American AI industry from regulators in Europe: the extent to which we are prepared to give up control. It is true that there has to be a sufficient level of trust for someone to get into a self-driving car, or to have medical scans analysed by a machine instead of a human. However, the combination of high efficiency and low cost will probably win most people over to these technological innovations.
There must be limits, however, to how much control we are prepared to concede, and on what. Our democracy relies on citizens being able to receive accurate information in order to make informed decisions, and our media and creative industries require a means of being compensated for their craft if they are to continue.
If AI-powered newsfeeds select the information we see, influenced by the vast networks of bad bots that are now responsible for about 37% of global internet traffic, then how do human insight and creativity find their audience?
With every other major innovation in media distribution, from printing to radio and television, we have built systems that create responsibilities for the publisher or broadcaster, and opportunities for the consumer to access content of their choice. Going all in on AI cannot mean leaving all these decisions to the people who profit from the systems they build.
That approach is what has got us to where we are now: online social experiences too often dominated by harm, hate and fraud. If the AI industry wants us to give up some control, it needs to work harder to build the trust required for us to do so.