Keanu Reeves has fought machines on-screen. Now, his performances are teaching them.
In September, Lionsgate inked a deal with Runway, permitting the AI firm to train a new generative AI model on its extensive film and TV library, including blockbuster franchises like John Wick, Saw, and The Hunger Games. Lionsgate’s vice-chair Michael Burns believes the partnership will save the studio “millions and millions of dollars” by aiding filmmakers in pre-production and post-production processes.
The move marks the latest shift in how major studios view AI’s role in filmmaking. It also comes at an inflection point for Hollywood and AI. Like a number of other AI firms, Runway faces legal challenges and copyright infringement claims over its image-generation system. And while actors and writers have secured temporary restrictions on AI, the Animation Guild and the studios remain deadlocked in contract negotiations, many of which focus on the technology.
While major studios begin to quietly explore AI’s potential, a wave of startups and smaller organizations is developing AI tools intended to enhance, rather than replace, Hollywood’s creativity. These homegrown solutions, created by industry insiders who understand the unique challenges of film and television production, aim to address specific pain points in the creative process while preserving the human touch that defines great storytelling.
The ultimate success of these tools—and their acceptance by skeptical creatives wary of any AI encroachment—remains an open question in an industry grappling with rapid technological change.
One such tool comes from Ryan Turner, chief creative officer at LA-based production company Echobend. Turner has launched a project that uses generative AI technology from the startup ElevenLabs to turn screenplays into audio.
The project was inspired by his own struggle: Turner has anywhere from 15 to 50 scripts he’s trying to get through and can’t find time to read during the workday. He could catch up on more at home, but struggled to keep staring at a screen.
“I’m not gonna get home and then open up a PDF,” he says. “It’s the last thing I want to do.”
Instead, he figured that an audio rendition of scripts, consumed during commutes, gym sessions, or mundane chores, could be a feasible solution. And he knew he wasn’t the only one dealing with this bottleneck; feedback on scripts typically spans weeks, if not longer. He sensed a larger opportunity.
“I don’t think that many people really enjoy the process of opening a PDF and reading a script,” Turner says. “It’s not like a novel—it’s specifically written to be filmed, not read.”
While text-to-speech apps exist, screenplays’ distinct structure makes for an inelegant auditory experience, with quirks like repetitive character name mentions. Echobend’s solution, which draws on more than 30 voices, renders the narrative like a radio drama, albeit with certain expressive constraints. “If there’s a comedic beat, they’re not going to really hit it,” Turner says. “It’s not going to really nail that reading, but you’re gonna understand that was a joke.”
To show it off, Turner took a few pages of a script I wrote, and within 15 minutes of checking that the file didn’t have any weird formatting, selecting voices, and rendering, the script was an MP3.
Different voices make it clear who is speaking and make it easy to track the story, though some voices are more evocative than others. The sluglines—the terse scene headings like EXT. HOUSE - DAY—weren’t distracting or slowing the story down.
Maybe most importantly, it was more of a production than my script would likely ever get (call me if you’re a producer interested in a High Maintenance–type story following books and their impact in prisons), and hearing it gave me new insight into the flow of the narrative.
But the implications go beyond just saving time for producers or writers getting to hear their scripts out loud. For writers, especially those with dyslexia or other reading difficulties, the tool offers a new way to experience their work. Challenges persist: Many voice options predominantly sound like white characters. Turner imagines roping in actors for voice-overs, something the app supports. There are legal risks too: ElevenLabs, the company that makes the AI voices behind Echobend’s project, was sued in August by two actors who claim the company used their voices to train its AI.
While the tool is functional and showcased on Echobend’s website, Turner and his team haven’t yet launched a major marketing push. Instead, they’ve been quietly demonstrating it at film festivals like Cannes and Sundance, gathering feedback and exploring potential markets.
Duncan Crabtree-Ireland, the executive director of SAG-AFTRA, says he has different opinions on the tool depending on the use case. Writers wanting to interact with their work, or producers needing to get through a slush pile, don’t concern him. However, replacing actors with AI voices for a table read would violate SAG-AFTRA contracts if not done in careful compliance with notice and informed-consent requirements.
Under the protections the union got in its negotiations last year, studios must get actors’ consent before making a digital replica of them. A law recently signed by California Governor Gavin Newsom also protects actors from having their work cloned without their consent. (The Writers Guild of America separately reached its own agreement specifying that writing generated by AI cannot be considered “literary material.”)
The bigger questions for Echobend and other AI tools for preproduction and postproduction are about the training that made the models possible. Were actors and artists compensated? Was copyright respected?
“I have the same concern about this one that I have about any AI tool,” Crabtree-Ireland says.
These concerns aren’t merely theoretical. A new industry survey reveals the stark reality: three-quarters of entertainment executives acknowledge using AI to eliminate, reduce, or consolidate jobs. The study projects roughly 204,000 positions will be impacted over the next three years, especially entry-level workers, sound engineers, voice actors, concept artists, and VFX and postproduction teams.
Turning scripts into scenes
Startups are also aiming their AI tools at other parts of the production process. Lore Machine, an AI-powered tool for screenplay visualization, allows writers to upload their scripts and generate a gallery of images with consistent characters and locations. Thobey Campion, Lore Machine’s founder and former head of publishing at Vice Media, envisions a future where writers can create and distribute their own digital media, potentially retaining more control over their work.
“What we’ve seen with Hollywood is the first adopters are, in fact, writers,” Campion says.
At the heart of their system is a clever solution to the “character consistency” problem. Rather than generating characters from scratch for each scene, Lore Machine employs a library of over 3,000 pre-built poses. These poses aren’t just static images, but 3D models mapped to key points like hands, elbows, and shoulders. When a character needs to appear in a new scene, the system selects an appropriate pose and then applies the chosen artistic style over it. This approach helps ensure that characters maintain their appearance and proportions across different images and scenes.
The artistic styles themselves are another technical feat. Instead of relying on broad AI models trained on scraped internet data, Lore Machine has created preset styles using carefully curated, rights-cleared sources. For example, their “1987” style is built from public domain movie screenshots from that year. Users select from these presets rather than describing styles in natural language, which helps avoid accidental copyright infringement.
Text processing is equally crucial to the system. Lore Machine uses a combination of narrow, specialized language models working alongside more general large language models. This teamwork allows the AI to better understand and visualize screenplay elements.
Still, the system has trouble rendering story lines with consistency, as I found when I fed part of my romance novel into Lore. Sometimes the models lose track of the plot midway through—what Campion calls “lost in the middle syndrome”—and fixing that is “an ongoing project.”
Lore Machine and Echobend are hardly alone in trying to apply generative AI to the edges of filmmaking. Startups and tools like OneDoor, Charismatic.ai, and Storyboarder.ai promise to help with storyboarding. Cinelytic has launched a tool to offer script feedback with generative AI. Other apps aim to help automate video tagging and editing tasks and improve visual effects.
Philip Gelatt, a screenwriter who recently worked on an AI-generated manga project for HP OMEN, Hewlett-Packard’s gaming brand, remains skeptical.
“I’m largely anti-AI,” Gelatt says. He never ran Lore himself, only sending his writing away for someone else to feed into it, but his experience with the technology left him uncertain about its role in assisting human creativity.
“One of my favorite things is working with human artists. I just find it a valuable part of the creative process,” he says.
Gelatt’s concerns extend to the quality of AI-generated content. He noted challenges in maintaining visual consistency for characters across panels, a crucial aspect of visual storytelling. Despite his reservations, Gelatt acknowledges potential niche uses, particularly for creators with limited resources. “There’s probably small usages and small things that are worthwhile,” he says.
‘We want to be included’
As AI tools like Turner’s audio screenplay converter and Lore Machine’s visualization technology seek to gain traction, the entertainment industry grapples with their wider implications. This tension is playing out in real time as the Animation Guild, representing more than 5,000 animation workers, continues negotiations with major studios, seeking to secure protections against the unchecked use of AI in their field. In a report published in September, the union’s task force dedicated to AI found that generative AI tools create outputs “that can target most of the job categories of TAG members, spanning from design to production, animation to scriptwriting.”
Jodie Hudson, a veteran animator with 16 years of experience, has mixed feelings about the potential of AI as Hollywood struggles through economic headwinds.
“Right now the industry is kind of in free fall,” he says, citing factors including pandemic-era overspending and concerns about the business model of streaming. He sees AI as companies’ potential solution to profitability issues, but emphasizes that creatives aren’t inherently against AI. “We want to be included in the conversation about how AI is used,” he says.
The stakes for inclusion are sky-high. A recent survey commissioned by the Animation Guild found that 29 percent of animation jobs could potentially be disrupted by AI in the next three years—the term of the union’s next contract. With union negotiators calling this moment “existential,” the guild is prioritizing the regulation of AI along with preventing further outsourcing of LA studio work to foreign countries.
Hudson’s concerns extend beyond animation. He points out that AI could give unfair advantages in publishing and other creative fields, potentially flooding markets with AI-generated content. And as studios and media companies rush to keep pace with the tech giants, they not only risk shortchanging writers, artists, and directors; they may be ceding what little leverage they have over those tech behemoths in the future.
“I don’t know if they can afford the cost of selling out,” he says.