
Runway brings precise camera controls to AI videos

Content creators will have more control over the look and feel of their AI-generated videos thanks to a new feature set coming to Runway’s Gen-3 Alpha model.

Advanced Camera Control is rolling out on Gen-3 Alpha Turbo starting today, the company announced via a post on X (formerly Twitter).


Advanced Camera Control is now available for Gen-3 Alpha Turbo. Choose both the direction and intensity of how you move through your scenes for even more intention in every shot.

(1/8) pic.twitter.com/jRE6pC9ULn

— Runway (@runwayml) November 1, 2024

The new Advanced Camera Control feature expands on the model’s existing capabilities. With it, users can “move horizontally while panning to arc around subjects … Or, move horizontally while panning to explore locations,” per the company. They can also customize the direction and intensity of the camera’s movement through a scene “for even more intention in every shot,” and combine “outputs with various camera moves and speed ramps for interesting loops.”

Unfortunately, since the new feature is restricted to Gen-3 Alpha Turbo, you will need to subscribe to the $12-per-month Standard plan to access that model and try out the camera controls for yourself.

Or quickly zoom out to reveal new context and story.

(7/8) pic.twitter.com/dovmMUsGEx

— Runway (@runwayml) November 1, 2024

Runway debuted the Gen-3 Alpha model in June, billing it as a “major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models.” Gen-3 powers all of Runway’s text-to-video, image-to-video, and text-to-image tools. The system is capable of generating photorealistic depictions of humans, as evidenced in the X post, as well as creating outputs in a wide variety of artistic styles.

Advanced Camera Control arrives roughly a month after Runway revealed Gen-3’s video-to-video capability in mid-September, which lets users edit and “reskin” a generated video in another artistic style using only text prompts. When combined with Apple’s Vision Pro AR headset, the results are striking. The company also announced the release of an API so that developers can integrate Gen-3’s abilities into their own apps and products.
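For developers weighing that API, here is a minimal sketch of kicking off a generation against Runway’s hosted Gen-3 Alpha Turbo model using the official runwayml Python SDK. The endpoint and parameter names reflect the public SDK as I understand it, and the image URL and prompt text are placeholders, so treat the details as assumptions rather than a definitive integration:

from runwayml import RunwayML

# The client reads the RUNWAYML_API_SECRET environment variable by default.
client = RunwayML()

# Start an image-to-video task on the Gen-3 Alpha Turbo model.
# prompt_image is a hypothetical placeholder URL; prompt_text steers the motion.
task = client.image_to_video.create(
    model="gen3a_turbo",
    prompt_image="https://example.com/first-frame.png",
    prompt_text="Slow dolly-in on the subject, golden-hour light",
)

# Generation is asynchronous; the call returns a task ID right away.
print(task.id)

A production integration would then poll the task (for example, via the SDK’s tasks endpoint) until its status reports success and download the resulting clip from the completed task object.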

The new camera controls could soon be put to use by film editors at Lionsgate, the studio behind the John Wick and The Hunger Games franchises, which signed a deal with Runway in September to “augment” human efforts with AI-generated video content. The deal reportedly centers on the startup building and training a new generative AI model fine-tuned on Lionsgate’s 20,000-title catalog of films and television series.
