
AI can now place a digital Coca-Cola next to any meal

A glass of Coca-Cola on a table next to some tacos. Credit: The Coca-Cola Company

AI is infiltrating the world of advertising, and Coca-Cola is the latest to find a use for it. The Coca-Cola Company announced Monday, ahead of SIGGRAPH, that it has partnered with the ad agency WPP to incorporate AI from Nvidia into its global ad campaigns.

“With Nvidia, we can personalize and customize Coke and meals imagery across 100-plus markets, delivering on hyperlocal relevance with speed and at global scale,” Samir Bhutada, global VP of StudioX Digital Transformation at Coca-Cola, said in a press statement released Monday.

Coke has been working with WPP to develop Prod X, a custom production studio and a set of digital twin tools that the beverage company can use in its ads. A digital twin is simply a virtual copy of a real-life object that can be manipulated in a 3D environment. You can probably see why that would be helpful for a company like Coca-Cola.

WPP also announced Monday that Coca-Cola will be among the first adopters of Nvidia NIM microservices for Universal Scene Description (OpenUSD), a “3D framework that enables interoperability between software tools and data types for building virtual worlds” originally developed by Pixar Animation Studios. With NIM and OpenUSD, WPP can draw on a large catalog of branded images and digital models and assemble them into localized, culturally relevant scenes, so that Coca-Cola can better target local markets.

This content engine is based on Nvidia’s Omniverse Cloud, an API and SDK platform that connects a variety of 3D tools.

WPP leverages that platform to connect product-design data from software such as Adobe’s Substance 3D with, for example, generative AI systems from Adobe and Getty so that its designers can create photorealistic product models (in this case, bottles of Coca-Cola) using natural language prompts.

Ad makers can generate enormous libraries of visual assets, as well as the Python code needed to build the 3D scenes around those assets.
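Neither Nvidia nor WPP has published the scripts behind this workflow, but the open-source OpenUSD Python API gives a rough sense of what such generated scene code could look like. The sketch below assembles a hypothetical shot from two pre-made assets; the file names, prim paths, and placement values are illustrative assumptions, not anything from Coca-Cola's or WPP's actual pipeline.

```python
# Minimal, hypothetical sketch of an OpenUSD scene-assembly script,
# using the open-source pxr Python bindings. All asset file names and
# placements below are made up for illustration.
from pxr import Usd, UsdGeom, Gf

# Create a new USD stage (scene file) for a localized ad shot.
stage = Usd.Stage.CreateNew("coke_meal_scene.usda")
world = UsdGeom.Xform.Define(stage, "/World")

# Reference a branded "digital twin" asset of the bottle from a library.
bottle = UsdGeom.Xform.Define(stage, "/World/CokeBottle")
bottle.GetPrim().GetReferences().AddReference("assets/coke_bottle.usd")

# Reference a locally relevant meal asset, e.g. tacos for one market.
meal = UsdGeom.Xform.Define(stage, "/World/Tacos")
meal.GetPrim().GetReferences().AddReference("assets/tacos.usd")

# Place the bottle next to the meal.
UsdGeom.XformCommonAPI(bottle.GetPrim()).SetTranslate(Gf.Vec3d(0.3, 0.0, 0.0))

# Write the composed scene to disk for rendering elsewhere.
stage.GetRootLayer().Save()
```

In the pipeline Nvidia and WPP describe, code along these lines would be emitted automatically rather than written by hand by an artist.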

“The beauty of the solution is that it compresses multiple phases of the production process into a single interface and process,” Perry Nightingale, senior vice president of creative AI at WPP, said of the new NIM microservices. “It empowers artists to get more out of the technology and create better work.”

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…