One common theme I get in my In Box these days has to do with “the Nikon Z’s can’t track in autofocus.” This is incorrect. It started with YouTube videos at camera introduction that made similar claims, and it has since morphed and metamorphosed like a caterpillar that spins a cocoon but never quite emerges as a butterfly and gains flight.
We’ve got a big problem with nomenclature here, particularly when different brands use similar terms differently.
So let’s define a term:
Tracking. Focus initializes on an object, then follows as that object moves.
In order to track on a Nikon, you need to set your focus system to AF-A or AF-C (again, nomenclature varies across brands), because focus must continue after the camera detects the object. Face Detect and Eye Detect are not by themselves tracking modes. They’re initializing functions (e.g. “find a face to focus on”). Both Face Detect and Eye Detect work in AF-S (single focus) modes. They only become tracking modes when you set a continuous focus mode (again, AF-A or AF-C, though AF-A starts as a single-focus mode and only eventually figures out that the thing it focused on has moved and ought to be followed; as you can guess from my wording, AF-A tends not to react quickly enough, and I tell people to avoid it).
Face and Eye Detect aren’t the only object recognition algorithms in cameras these days. Virtually no one will talk about what their cameras do when no human is in the scene, but many of them look for “large thing that’s nearby.” Others look for “how much of the scene is at the same distance?”
Which brings us to 3D Tracking. In the Nikon world—which is where much of the confusion and misinformation lies—3D Tracking started with color recognition. That goes all the way back to the F5 film SLR, though back then it was very crude and mostly related to exposure.
In 3D Tracking on the Nikon DSLRs, you put the focus cursor on something, initiate focus—half press the shutter release or press the AF-ON button—and the camera analyzes color shapes in the area the focus cursor is over and builds a data set of the unique color pattern. Originally, this worked best with face shapes and tones. If a face tone was detected at the point you said to focus, the camera identified the shape and size of that, and followed it, even outside the autofocus cursor area! I and others were astonished when we found that a subject so identified could stray away from all the available focus positions, but when it came back to any of them, the camera correctly noted that and shifted focus accordingly.
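To make that idea concrete, here’s a minimal sketch of color-pattern tracking. This is my own illustration, not Nikon’s actual algorithm: the function names, the coarse per-channel histogram, and the histogram-intersection score are all assumptions. The camera’s version runs on a dedicated metering sensor and is far more sophisticated, but the principle is the same: build a color signature from the region under the focus cursor, then on later frames search the image for the region whose colors best match it.

```python
import numpy as np

def color_signature(region, bins=8):
    """Build a coarse per-channel color histogram as the subject's signature."""
    hists = [np.histogram(region[..., c], bins=bins, range=(0, 256))[0]
             for c in range(region.shape[-1])]
    sig = np.concatenate(hists).astype(float)
    return sig / sig.sum()

def match_score(sig_a, sig_b):
    """Histogram intersection: 1.0 means identical color distributions."""
    return float(np.minimum(sig_a, sig_b).sum())

def track(frame, signature, box_size=16, stride=8):
    """Scan the whole frame for the region whose colors best match the signature."""
    best_score, best_xy = -1.0, (0, 0)
    h, w = frame.shape[:2]
    for y in range(0, h - box_size + 1, stride):
        for x in range(0, w - box_size + 1, stride):
            candidate = frame[y:y + box_size, x:x + box_size]
            score = match_score(signature, color_signature(candidate))
            if score > best_score:
                best_score, best_xy = score, (y, x)
    return best_xy, best_score
```

Note that the search loop runs over the entire frame, which is what lets a match turn up well outside the original focus cursor area, just as the DSLRs surprised us by doing.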
As the metering sensor that collected this color information got more pixels, the system became quite robust. In the D5 generation cameras (D500, D850, D5, and D7500), 3D Tracking mode got good enough to use on things other than skin tones. It really does seem to follow most objects well, and on a D500 or D7500 it tends to do that completely across the frame.
So what is the complaint about the Z6 and Z7? Is it that 3D Tracking doesn’t work? No. It’s that we have to tell the camera what to start using it on, and that takes additional steps. Because Nikon implemented it as a mode within a mode, you have to perform extra steps to get 3D Tracking started, tell it when to stop, and then go through the same steps to get it started again.
Frankly, if one of my programmers gave me the Z6/Z7 method (now also on the Z50), I’d be screaming at them until they fixed it. It’s quite fixable, because it’s a UI thing, not a performance thing.
The Z6 and Z7 track motion just fine, particularly human motion, as I think I’ve proven many times now (sports and wildlife photography, as well as events). For most people photographing humans, AF-C with Auto Area and Face/Eye Detect works quite well. Sony still has a small edge, but Nikon rapidly closed the gap (Canon did, too; both with firmware updates).
So the real problem comes only when I want to identify and track a non-human face or object, because I have to jump through hoops to tell the camera what that is. It’s not a performance problem (the 3D Tracking Nikon provides in mirrorless is very similar to Sony’s capability when used for focus-and-reframe); it’s solely an implementation problem on the Nikon: too klutzy and step-driven when it doesn’t need to be.
Simply put, Nikon’s engineering team made an implementation mistake that they need to fix. I suspect they’ll eventually get around to fixing this, but so far we’ve not seen them acknowledge the mistake (in Japanese culture they’ll never admit it was a mistake, they’ll just quietly fix it). I suspect that Nikon is very busy at the moment trying to get more new products to market, including a very critical one in the D6.
If fixing 3D Tracking in mirrorless isn’t on their to-do list, though, they haven’t been paying enough attention to customers. I’m pretty sure that’s not true.
-----------
By the way, the term 3D Tracking came about as a way of pointing out that the focus system wasn’t just watching spots across the horizontal and vertical axes to establish focus. 3D Tracking systems follow objects moving to and from the camera (depth), and they generally use predictive calculations as they do so.
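As a rough sketch of what “predictive” means here (again my illustration, not any camera maker’s actual code): fit the subject’s recent distance samples against time and extrapolate forward, so the lens can be driven to where the subject will be when the shutter actually fires, not where it was at the last measurement.

```python
def predict_distance(times, distances, t_future):
    """Least-squares linear fit of subject distance versus time,
    extrapolated to a future moment (e.g. the actual shutter release)."""
    n = len(times)
    mean_t = sum(times) / n
    mean_d = sum(distances) / n
    # Slope of the fit is the subject's closing speed (negative = approaching).
    num = sum((t - mean_t) * (d - mean_d) for t, d in zip(times, distances))
    den = sum((t - mean_t) ** 2 for t in times)
    speed = num / den
    return mean_d + speed * (t_future - mean_t)
```

For example, a subject measured at 10, 9.5, 9.0, and 8.5 meters over successive tenth-of-a-second samples is closing at 5 m/s, so the system would drive focus toward 8 meters for a shot taken a tenth of a second later. Real systems use more elaborate models than a straight line, but this is the basic idea behind focusing on the depth axis.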