DaVinci Resolve 20 Released in Public Beta – with AI-Powered Features
Just ahead of NAB 2025, Blackmagic Design announced DaVinci Resolve 20, and we outlined many of the changes here. Now, at the show, the company has released the public beta, which brings over 150 new features across the platform. Nino stopped by the Blackmagic Design booth to speak with DaVinci Resolve specialist Simon Hall, who walked through the key updates, many of them centered on AI-assisted workflows aimed at streamlining editing while keeping creative decisions in the user’s hands.
As Simon pointed out, the idea behind these AI features is not to replace the person “in the seat” but to give them tools that simply make editing quicker. Let’s have a look at what stood out!

AI script-based pre-editing – IntelliScript
One of the most exciting new features is IntelliScript, an AI-powered tool that assembles a timeline from a written script or transcript. It doesn’t require any special formatting – a basic text file with paragraphs is enough. Editors can use this to auto-assemble long interviews or scripted scenes, with the system matching the text against the spoken content of the footage.
When multiple takes exist, the tool stacks them in the timeline so you can choose the best version during the edit. It also works with multi-camera setups, allowing the same script to be applied across multiple angles.

AI-assisted multicam switching
Nino asked Simon how Resolve handles multicam edits, especially with two or three cameras recording at once. He pointed to the new AI Multicam Smart Switch, a tool that analyzes mouth movement and audio to identify who’s speaking and automatically switches angles. It’s useful for podcasts or interviews recorded without live switching. If the footage is synced via timecode, the tool can generate a clean first pass. Simon noted it’s not meant to replace live switching but offers a helpful shortcut in post.

AI Voice Conversion tool and music editing
The update also introduces a voice replacement feature. Editors can swap out a voice using built-in models or a voice sample – for example, to change a voice or accent, or to fix a line. Nino asked Simon whether Resolve can actually learn someone’s voice. He confirmed that it can: Resolve includes a learning module that lets users train a voice model by uploading a sample. While the built-in models cover common needs, users who want something more specific can create their own.
Music trimming has also been improved: Resolve can now automatically identify loop points and restructure a track to match the length of your edit. Instead of manually cutting and blending sections, you can simply drag the cue shorter or longer, and the software adjusts it in the background.

More editing tools for DaVinci Resolve 20
Simon said that many of the tools are meant to save editors time. Here are some examples:
- Media Pool stacking, so you can manually order clips before dropping them into a timeline.
- A new Source Timeline view that lets you edit from one timeline into another – handy for doc-style workflows.
- A redesigned keyframe editor with spline support, bringing more of Fusion’s animation power directly into the edit page. It’s especially useful for animating layered PSD files.

Cut page improvements
The Cut page has been updated with features such as dynamic trimming, aimed at speeding up fast-turnaround edits. It now includes a full-height portrait mode, making it easier to work with vertical video for social platforms. Mouse-based trimming has also been added, bringing Speed Editor-style functionality to users without the hardware. In addition, new AI-assisted trimming tools help streamline edits without interrupting the flow of the timeline.
Fusion gets deep image compositing
Although it wasn’t mentioned in the interview, Blackmagic Design’s website notes that Fusion now offers a deep image compositing toolset. Deep compositing works with depth information stored per pixel, allowing advanced layering and depth-based compositing workflows – especially useful in VFX-heavy pipelines.
Privacy and AI training
Simon emphasized that no user content is used to train AI models (except when a user explicitly provides a voice sample for voice conversion). AI tools may perform temporary cloud-based processing, but project files and media stay local.
Price and availability
The public beta is available now, and you can download it here. As in the past, it is free. Some features are still being finalized and could carry a small fee down the line, but nothing has been confirmed yet.
How do you see AI fitting into your editing process? Let us know in the comments below.