Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
Trump and his allies woo Silicon Valley with hands-off AI policy
The tech media has for months been reporting on a supposed shift to the right by Silicon Valley founders and investors. The narrative seems driven by recent pledges of support for Trump from the likes of Marc Andreessen, Joe Lonsdale, and Elon Musk. The Trump camp has been wooing tech companies and investors, including those within the AI sector.
The Washington Post reported Tuesday that a group of Trump allies and ex-cabinet members has written a draft executive order that would mark a radical shift away from the Biden administration's current approach to AI regulation. The draft comes from the America First Policy Institute, a right-wing think tank led by Trump's former chief economic adviser Larry Kudlow. The document proposes a regulatory regime that relies heavily on the AI industry to police the safety and security of its own models, and would establish "industry-led" agencies to "evaluate AI models and secure systems from foreign adversaries," the Post reports. It would also create "Manhattan Projects" to develop cutting-edge AI for the military.
By contrast, the Biden administration's current executive order (EO) on AI is chiefly concerned with the security risks that the very largest AI models might pose to Americans and U.S. interests. The administration seems particularly worried that such models, delivered as a service via an API, could be used to wage some kind of cyberwar on the U.S. The order, signed last October, requires makers of such models to regularly report to the Commerce Department on the development, safety testing, and distribution of their products. The EO's reporting requirements apply only to the very largest AI models hosted in very big data centers; right now, only a few well-moneyed AI companies have built such models.
But many in the AI sector fear that model sizes and computing power will increase so rapidly that even smaller AI companies will soon face onerous reporting requirements. "Things that today look very hard and expensive are going to get very cheap," said Andreessen, founding partner of Andreessen Horowitz, on a recent podcast with cofounder Ben Horowitz. The way Andreessen sees it, Biden's regulations would slow the industry's rapid progress and hand the current market leaders a monopoly on big foundation models. That may be why one of those market leaders, OpenAI (an a16z portfolio company), has called for more stringent AI regulation.
However, the Biden EO's "big model" definitions are flexible by design; the current thresholds are merely placeholders. The EO says the Commerce Department will "determine the set of technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity, and revise that determination as necessary and appropriate."
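For a sense of where that placeholder currently sits: the order's reporting trigger for general-purpose models is training compute above 10^26 integer or floating-point operations. The sketch below checks a hypothetical model against that figure using the common rule of thumb that dense transformer training costs roughly 6 × parameters × tokens. The model and token counts are invented for illustration, and the 6ND formula is a community heuristic, not language from the EO.

```python
# Illustrative check against the EO's placeholder reporting threshold.
# The 1e26-operation trigger comes from the October 2023 order; the
# 6 * params * tokens formula is a standard community approximation for
# dense transformer training compute, not part of the EO itself.

EO_REPORTING_THRESHOLD_OPS = 1e26  # placeholder; Commerce can revise it


def estimated_training_ops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute for a dense transformer."""
    return 6 * n_params * n_tokens


def must_report(n_params: float, n_tokens: float) -> bool:
    return estimated_training_ops(n_params, n_tokens) >= EO_REPORTING_THRESHOLD_OPS


# A hypothetical 70B-parameter model trained on 15 trillion tokens:
ops = estimated_training_ops(70e9, 15e12)  # ~6.3e24 operations
print(f"{ops:.1e} ops -> report required? {must_report(70e9, 15e12)}")
# -> 6.3e24 ops -> report required? False (about 16x under the threshold)
```

On these numbers, such a model sits well under the line today; Andreessen's worry is that as compute gets cheaper, runs that cross 10^26 operations stop being the exclusive province of a few giants.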
The rightward shift of Silicon Valley is mostly a social media and podcast phenomenon. Silicon Valley has long been a bastion of liberals and libertarians, and there is no evidence that it has changed much over the past decade. In the 2024 election cycle, the top 20 venture capital firms and their employees have given twice as much money to Democratic candidates and causes as to Republican ones, a Wired review of Federal Election Commission reports shows.
YouTube is a victim of AI’s original sin: web scraping
AI models are trained largely on vast corpora of text scraped from the internet. For years, those training datasets were assembled without the knowledge of the online publishers and creators whose work filled them; that scraped text is how models as far back as GPT-2 began showing hints of real language savvy, and something like intelligence. Now, of course, publishers are wise to the situation, and many have found new revenue by licensing their data to AI companies for training.
Google, whose AI researchers opened the door to LLMs, was also a victim of the web data harvesting practiced by AI developers. A new investigation by the nonprofit news organization Proof finds that Anthropic, Nvidia, Apple, and Salesforce used the subtitles and transcripts of thousands of YouTube videos to train their language models. These included videos by popular creators such as MrBeast and Marques Brownlee, and from the channels of MIT, Harvard, NPR, Stephen Colbert, John Oliver, and others. The Proof investigators found that, overall, the training dataset included text from 173,536 YouTube videos across more than 48,000 channels.
The dataset containing the content wasn't scraped by employees of the big tech companies that used it to train models. Rather, the YouTube text is part of a publicly available dataset called the Pile, a compilation of text datasets created by the nonprofit AI research group EleutherAI, the report says. A research paper published by EleutherAI says YouTube subtitles are especially valuable because they are often available in a variety of languages. YouTube's terms of service prohibit scraping its content without permission.
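For a concrete sense of how caption files become training text, here is a minimal sketch of the kind of post-processing involved (not EleutherAI's actual tooling, which the report doesn't detail): it strips the timing metadata from a WebVTT subtitle file, a caption format YouTube supports, leaving only the spoken words.

```python
# Minimal sketch: reduce a WebVTT (.vtt) caption file to plain text.
# Illustrative only; real corpus pipelines handle many more edge cases.
import re


def vtt_to_text(vtt: str) -> str:
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        if (not line or line == "WEBVTT"
                or "-->" in line            # cue timing lines
                or line.isdigit()):         # numeric cue identifiers
            continue
        lines.append(re.sub(r"<[^>]+>", "", line))  # drop inline markup tags
    return " ".join(lines)


sample = """WEBVTT

1
00:00:01.000 --> 00:00:03.500
Welcome back to the channel.

2
00:00:03.500 --> 00:00:06.000
Today we're reviewing a new phone.
"""
print(vtt_to_text(sample))
# -> Welcome back to the channel. Today we're reviewing a new phone.
```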
CoreWeave CEO Mike Intrator on generative AI’s effect on the power grid
Recent studies suggest that the advance of generative AI models may significantly increase demand on the power grid. A new study released Wednesday by Columbia University estimates that by 2027, the GPUs that run generative AI models will account for about 1.7% of total U.S. electricity use, or 4% of total projected electricity sales. "While this might seem minimal, it constitutes a considerable growth rate over the next six years and a significant amount of energy that will need to be supplied to data centers," the report says.
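To put that 1.7% in perspective, here's a back-of-envelope sketch. Every input is an assumption for illustration (roughly 4,000 TWh of annual U.S. electricity use, a 700-watt H100-class accelerator, a 1.3 power-usage-effectiveness multiplier for data center overhead); the Columbia study's own methodology may differ.

```python
# Rough arithmetic behind the 1.7% figure. All inputs are assumptions
# for illustration, not numbers from the Columbia study.

US_ELECTRICITY_TWH = 4_000  # approximate annual U.S. electricity use (assumed)
SHARE = 0.017               # the study's 2027 GPU share
GPU_WATTS = 700             # H100-class board power (assumed)
PUE = 1.3                   # cooling/overhead multiplier (assumed)
HOURS_PER_YEAR = 8_760

gpu_twh = US_ELECTRICITY_TWH * SHARE                    # ~68 TWh/yr
kwh_per_gpu = GPU_WATTS / 1_000 * PUE * HOURS_PER_YEAR  # ~8,000 kWh/yr each
gpus = gpu_twh * 1e9 / kwh_per_gpu                      # TWh -> kWh

print(f"{gpu_twh:.0f} TWh/yr is roughly {gpus/1e6:.1f} million GPUs running 24/7")
# -> 68 TWh/yr is roughly 8.5 million GPUs running 24/7
```

On those assumptions, the study's figure implies on the order of eight to nine million accelerators drawing power around the clock, which is the scale of load that has AI infrastructure companies worried about the grid.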
People within the AI infrastructure business have been thinking about the problem for a while now. “I think that the U.S. is in a position where the amount of power that’s going to be required and the scale of the power that’s required for these data centers is going to put increasing pressure on the grid,” says Mike Intrator, CEO of CoreWeave, which offers cloud computing designed for AI training and inference. “It’s going to become a bottleneck and a limiting factor for the development of the AI infrastructure that is required.”
Over the past year, CoreWeave increased its data center count from 3 to 14, and expects to grow that number to 28 worldwide. Intrator believes significant investment in the grid will be needed, both to increase its capacity and to improve how it moves power to where it's needed. "I know that that's a challenge because of how those projects are regulated at the state level," he says, "and I know that that's going to require some real thought."
More AI coverage from Fast Company:
- The ‘AI-in-everything’ era is here, and it’s giving us a lot of stuff we don’t need
- AI demand puts more pressure on data centers’ energy use. Here’s how to make it sustainable
- How ‘I, Robot’ eerily predicted the current dangers of AI 20 years ago
- Why AI is still stuck in its dial-up era
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.