Pace on “Avatar” 3D


“Avatar,” James Cameron’s most recent epic, opens on December 18, 2009. The camera systems and 3D rigs were supplied by PACE, and I recently spoke with Vince Pace, the company’s CEO. Vince is a long-time colleague, renaissance inventor and filmmaker whose high-tech company supplied the 3D cameras and rigs on “Avatar,” “Journey to the Center of the Earth,” “Aliens of the Deep,” and many other major productions. Vince has a long history of innovation for the motion picture business, having designed and built underwater housings, lights, camera rigs and accessories, and he has a long string of credits as a DP. Here’s what Vince had to say about “Avatar,” 3D, cameras, rigs and much more:

We are in the middle of a 3D revolution. We’re part of that process at PACE—our system is called Fusion 3D.

“Avatar” has been 10 years in the making. I have been working on it for 4 years as DP and “Stereographer.” I was the 2nd Unit DP in New Zealand and the 1st Unit DP for the LA shoot. This was not a documentary like some of my previous work with Jim Cameron. I worked with him on “Ghosts of the Abyss,” the 1st polarized 3D film released on both IMAX and regular movie screens. This time I stepped up to the plate for an entirely different project.

Jim’s style on “Avatar” is intimate. He doesn’t hesitate to go hand-held. He likes scenes to be in a state of movement, and he gets behind the camera’s eyepiece himself, operating right up there with the actors; they are performing right with him.

For this job, the challenge was the flexibility of the systems. We started with up to 7 or 8 cameras, set up for different configurations—so we could choose which configuration was right for a particular shot: hand-held, Steadicam, crane, Technocrane, dolly, etc.

Two weeks before we started shooting, I got a list of this and that, and we discussed it. The most important request was inverting the vertical camera of the 3D rig. Typically it sits on top, pointing down towards the beamsplitter, which makes the rig very top-heavy. What we did was flip the beamsplitter 180 degrees and invert the vertical camera so it aimed straight up instead. One camera aims at the action, the other aims up, and the beamsplitter sits at the intersection. This made hand-holding very well balanced: imagine one camera on your shoulder and the other camera, the one pointing up, behind your shoulder. It was also very well balanced for Steadicam.

We finally settled on 4 systems: 2 rigs ready for Steadicam, hand-held, cranes and dollies. 1 rig for wide-angle shots and Technocrane. And 1 rig with the cameras set up side by side.

We mostly used Sony F950 cameras (3 × 2/3” sensors). We also had Sony HDC-1500 cameras for frame rates up to 60 fps. Remember, we started this project four years ago. In LA, we went to the Sony F23 and F950. Our main lenses were Fujinon 16x 6.3-101mm f/1.8 zooms. We also had some Fujinon 7-35mm short zooms. We didn’t use primes. Jim wanted the flexibility and speed of using zooms.

We recorded to tape on Sony SRW-1 and SRW-5500 recorders. We also used Codex digital recorders. It was all shot at 1920×1080 24p HD.

Although Jim is very knowledgeable about technology, he avoided being too technical; he was not pixel-counting. The driving factor was entertainment and keeping the creative strong. He didn’t worry about clipping or burning out certain highlights. He knows the look and knows what he wants; he focuses on performance and doesn’t get bogged down in the engineering. He also tells me when he needs a Rembrandt and when to get out of his way. The DITs know not to tell him that such and such part of the shot is looking a little hot. We had a very good engineer, Robert Brunelli.

3D is not for the faint of heart. What you think can be fixed can hurt you later on. Both lenses and both cameras have to be the same. It’s not a master-slave situation: it’s a master-master scenario. We have to manage both cameras and both lenses equally, and we developed the software to “marry” them. The way we handle them is kind of an advanced motion control technique: we mechanically control lens movement, optics, tracking, centering, and so on, and then the software takes over.
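
To make the master-master idea concrete, here is a minimal Python sketch of driving one commanded focus value through per-lens calibration curves, so that two physically different copies of the same lens land on the same optical focus distance. This is not PACE’s actual software; the function names and calibration numbers are hypothetical.

```python
# Hypothetical "master-master" lens pairing sketch (not PACE's code).
# One commanded focus distance drives BOTH lenses; each lens has its own
# calibration table mapping optical focus distance -> motor encoder counts,
# because two copies of the same lens never track identically.

from bisect import bisect_left

def interp(table, x):
    """Linear interpolation in a sorted list of (x, y) pairs."""
    xs = [p[0] for p in table]
    i = bisect_left(xs, x)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical calibration tables: focus distance (meters) -> encoder counts.
LENS_A_CAL = [(0.6, 120), (1.0, 410), (2.0, 980), (4.0, 1530), (10.0, 1910)]
LENS_B_CAL = [(0.6, 135), (1.0, 430), (2.0, 1005), (4.0, 1560), (10.0, 1925)]

def command_focus(distance_m):
    """Send the same optical focus distance to both lenses as matched
    motor targets; neither lens follows the other's raw encoder value."""
    return {
        "lens_a_counts": round(interp(LENS_A_CAL, distance_m)),
        "lens_b_counts": round(interp(LENS_B_CAL, distance_m)),
    }

print(command_focus(2.5))  # both lenses driven to 2.5 m via their own curves
```

The design point is that neither lens chases the other’s raw motor position; both chase the same optical target through their own mapping, which is what keeps them “married.”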

Follow focus on our system is kind of like taping focus marks, aided by software. In this vast digital realm, some of the old-school techniques hold very true.

Our next big hurdle on the production was a show-stopper: Steadicam. We found that Steadicam shots were looking hand-held, instead of smooth and gliding. The shot would begin as a floating camera, but by the end, it wasn’t as smooth. It turned out that was because the operator would grab the post to make an adjustment in balance towards the end of the shot. Jim has a perfect eye for framing. We brought in Patrick Campbell, president of PACE, and he suggested a counterbalance at the bottom of the Steadicam. Now it stayed perfectly balanced. We had a true 3D Steadicam rig.

The 3D rig for the Steadicam weighed 20 pounds with cameras and lenses. Fiber cables connected the cameras to the base station. Our camera assistants used monitors hard-wired to the base station.

About 50% of the production was shot traditionally, on a head and dolly or sticks. This rig weighed about 36 pounds with two Sony F950 cameras, and we used OConnor 2575 heads. When we got to LA, we used Sony F23 cameras, the weight went up to 50 pounds, and we used OConnor 120 heads.

At PACE, we have more than 50 rigs. We like to say they are camera-agnostic. The rigs for the Sony F950 are more compact, of course, but we’re putting Sony F35 cameras on some of our rigs now. The rigs have come to the point where we can ask, “What would the shot be if it were in 2D?”

For follow focus, we use the Preston wireless system. Our cameras and rigs work on a Camnet system. Beginning with “Tron,” we opened the protocol to work with Preston FIZ systems. One hand unit controls both lenses. They are treated as if they were one. The commands are transferred to the PACE software, and it goes from there. For convergence, we can do it with a Preston control or use Camnet.

Interocular distance is often derived from focus. Follow focus is a zen-like application that requires great skill, so it’s smart to link convergence to focus, and let the focus puller control it because they usually have a better eye for this than an engineer or stereographer sitting at a monitor.
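
As a rough illustration of slaving convergence to focus (generic stereo geometry, not the Fusion 3D implementation), the toe-in angle that puts the convergence point at the focus plane follows directly from the interaxial separation and the focus distance. The function name and values below are hypothetical.

```python
# Generic stereo-geometry sketch (not PACE's software): if the convergence
# point is slaved to focus, each camera's toe-in angle follows from the
# interaxial separation and the focus distance.
import math

def toe_in_angle_deg(interaxial_mm, focus_distance_m):
    """Angle each camera rotates inward so the optical axes cross at the focus plane."""
    half_interaxial_m = (interaxial_mm / 1000.0) / 2.0
    return math.degrees(math.atan2(half_interaxial_m, focus_distance_m))

# Hypothetical example: 40 mm interaxial, subject focused at 2.5 m.
print(round(toe_in_angle_deg(40.0, 2.5), 3), "degrees per camera")
```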

We developed a system we call the Constant Divergence Algorithm. It means we can manage the stereo so we can cut any shot. Previously, it was sometimes painful to cut from one shot to another if the divergence was off. We learned this from our sports jobs. In sports, it’s all about live switching and coverage from multiple angles, and you want to be able to cut from any angle of any camera. Broadcast is like one performance with many cameras.

For good stereo 3D entertainment at 24 fps, the bar goes up. The convergence point doesn’t have to change. There’s a sweet spot: just as you have to carefully compose a frame in 2D, the same thing is true in stereo. You have to not only carefully compose the frame, you also have to maintain the feeling of stereo. Physiologically, it means our eyes are not bouncing around all over the place, giving us a headache. It’s like audio volume. When you listen to a movie, the audio levels can’t be jumping all over the place. The same thing is true visually.
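
PACE’s Constant Divergence Algorithm itself is proprietary, but the constraint behind the idea can be sketched with textbook stereo geometry: hold the positive parallax of the far background at one fixed budget from shot to shot (and below the viewer’s eye spacing), which ties the interaxial to the convergence distance, the focal length, and the screen size. The numbers and names below are illustrative assumptions, not figures from the production.

```python
# Generic constant-divergence sketch (illustrative assumptions only --
# not PACE's actual Constant Divergence Algorithm).
# For a converged rig, the far background lands on screen with roughly
#     P_far = M * f * t / Zc        (background treated as "at infinity")
# where M = screen width / sensor width, f = focal length,
#       t = interaxial, Zc = convergence distance.
# Holding P_far at one fixed budget from shot to shot means solving for t.

def interaxial_mm(parallax_budget_mm, convergence_m, focal_mm,
                  sensor_width_mm=9.6, screen_width_m=12.0):
    """Interaxial that keeps far-background screen parallax at the budget."""
    magnification = (screen_width_m * 1000.0) / sensor_width_mm
    return parallax_budget_mm * (convergence_m * 1000.0) / (magnification * focal_mm)

# Example: 50 mm parallax budget, 8 mm lens on a 2/3" sensor (9.6 mm wide),
# converged at 5 m, shown on a 12 m screen -> roughly a 25 mm interaxial.
print(round(interaxial_mm(50.0, 5.0, 8.0), 1), "mm")
```

Read the other way, the same relation gives the far-background parallax for a chosen interaxial, which is a quick way to check that two shots will cut together.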

On “Avatar,” as I looked at the images, after a while the process became transparent. I forgot it was 3D. I was immersed in the story and the entertainment. The visuals are epic. It’s something we haven’t seen before.

A lot of technology went into this. One of these developments was Simulcam, which allowed us to feed the CG (computer-generated) image into the camera operator’s viewfinder, marrying the live scene being shot with the effects material. This works much better than seeing an actor against a green screen. In this way, the camera operator becomes a character. I remember one scene where I was shooting a point-of-view shot. I went down on the ground, framing up at the character who was approaching me. My knee was in the frame. I moved my leg down to clear it out of the shot. My knee was still there. And then I realized it wasn’t my knee in the shot: it was the Simulcam CG knee. I had become part of the action without realizing it. It was intuitive, an organic transition into the filmmaking process.

At Weta in New Zealand, we did performance capture. And there, we learned that if a shot’s not doable physically, then it shouldn’t be part of the film. The audience will not suspend their disbelief to accept undoable shots.

Jim had Avids on the set, cutting in 2D; he had access to everything as it happened. For monitoring on set, we had “The Pod,” a full 3D projection system set up with digital cinema projectors. We also used Hyundai 46” monitors with passive polarized glasses.

The Digital Intermediate was done at Modern in LA from the 1920×1080 material. The theatrical release will be on over 7,000 screens: 35mm film, digital, and IMAX. Around 20,000 prints will be made.

We need to bring 3D out of the realm of 3G. 3G is Girls, Gore and Gimmicks. If it’s good in 2D, it will probably be better in 3D, so why not use it? But if it’s a bad movie in 2D, shooting it in 3D isn’t going to help. I learned that 3D should have a bright future if we can get good filmmakers to embrace it. We need to break barriers and get into the audience’s imagination.
