How Apple Built Cinematic Mode for the iPhone 13 [Report]
Posted September 23, 2021 at 5:41pm by iClarified
Apple has shared how it built Cinematic Mode for the iPhone 13 in a recent interview with TechCrunch. Cinematic Mode shoots video with a shallow depth of field and automatically adds focus transitions between subjects.
The site spoke to Kaiann Drance, VP of Worldwide iPhone Product Marketing, and Johnnie Manzari, a designer on Apple’s Human Interface Team, about how the tentpole feature came to be.
“We knew that bringing a high quality depth of field to video would be magnitudes more challenging [than Portrait Mode],” says Drance. “Unlike photos, video is designed to move as the person filming, including hand shake. And that meant we would need even higher quality depth data so Cinematic Mode could work across subjects, people, pets, and objects, and we needed that depth data continuously to keep up with every frame. Rendering these autofocus changes in real time is a heavy computational workload.”
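Drance’s point about continuous per-frame depth plus a real-time bokeh render can be pictured with public Core Image filters. The sketch below is illustrative only, not Apple’s actual pipeline: it blurs each pixel of a single frame in proportion to how far its disparity sits from a chosen focus plane. The function name, `focusDisparity`, and `maxBlurRadius` are hypothetical.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Illustrative sketch, not Apple's implementation: approximate a shallow
// depth-of-field render for one video frame, given that frame's per-pixel
// disparity map. Pixels near the focus plane stay sharp; pixels far from
// it are blurred more strongly.
func renderShallowDepthOfField(frame: CIImage,
                               disparity: CIImage,       // per-frame depth data
                               focusDisparity: CGFloat,  // disparity of the in-focus subject
                               maxBlurRadius: Float = 12 // hypothetical tuning value
) -> CIImage {
    // A constant image holding the disparity of the focus plane.
    let focusPlane = CIImage(color: CIColor(red: focusDisparity,
                                            green: focusDisparity,
                                            blue: focusDisparity))
        .cropped(to: disparity.extent)

    // |disparity - focusDisparity|: near zero at the focus plane, larger
    // elsewhere. (A real pipeline would normalize/remap this difference.)
    let difference = CIFilter.colorAbsoluteDifference()
    difference.inputImage = disparity
    difference.inputImage2 = focusPlane

    // Blur each pixel in proportion to its distance from the focus plane,
    // producing the out-of-focus falloff.
    let blur = CIFilter.maskedVariableBlur()
    blur.inputImage = frame
    blur.mask = difference.outputImage
    blur.radius = maxBlurRadius
    return blur.outputImage ?? frame
}
```

Run per frame, a focus transition then amounts to animating `focusDisparity` from one subject’s disparity to another’s over several frames, which is exactly the continuous depth data and real-time compute load Drance describes.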
Apple said it started by working closely with directors of photography, camera operators, and 1st ACs to understand the importance of pulling focus.
“It was also just really inspiring to be able to talk to cinematographers about why they use shallow depth of field. And what purpose it serves in the storytelling. And the thing that we walked away with is, and this is actually a quite timeless insight: You need to guide the viewer’s attention,” says Manzari.
“Now the problem is that today, this is for skilled professionals,” Manzari notes. “This is not something that a normal person would even attempt to take on, because it is so hard. A single mistake — being off by a few inches…this was something we learned from portrait mode. If you’re on the ear and you’re not on their eyes. It’s throwaway.”
Hit the link below for the full report...
Read More