Adaptive Depth of Field based on distance

In Unity 2017 with the new Post-Processing effects, is it possible to have adaptive DoF, that is, focus based on the camera's distance to the object it's pointed at? So if you're looking (pointing the crosshair) at an object, like a note on a lamp post, the note becomes clear so you can read the text, while the city and street in the background become blurred. But if you look to the side of the lamp post, the street becomes clear and the note on the lamp post becomes blurred.

Is this built into the DoF effect in the Post-Processing stack? Or do I have to script it myself, using a raycast? Physically based DoF should work like this by default, shouldn't it?

Would this work in VR as well, or are there limitations for post effects in VR (Oculus, GearVR or Vive)?

It’s been possible for a while. You’d have to raycast and change the target depth on the post-process effect.
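
Here's a minimal sketch of that approach, assuming the Post-Processing Stack v2 API (PostProcessVolume, DepthOfField, focusDistance); the script name and the parameter values are illustrative, not canonical:

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Hypothetical autofocus script: raycasts forward from the camera each
// frame and feeds the hit distance into the DepthOfField override on an
// assigned PostProcessVolume.
[RequireComponent(typeof(Camera))]
public class RaycastAutoFocus : MonoBehaviour
{
    public PostProcessVolume volume;      // volume whose profile has a DepthOfField override
    public float maxFocusDistance = 50f;  // fallback focus when the ray hits nothing
    public float smoothTime = 0.3f;       // damping to soften focus jumps

    DepthOfField dof;
    float velocity;                       // state for SmoothDamp

    void Start()
    {
        // Grab the DepthOfField settings from the volume's profile.
        volume.profile.TryGetSettings(out dof);
    }

    void Update()
    {
        if (dof == null)
            return;

        // Assume the user is looking at whatever the crosshair
        // (screen center) hits.
        RaycastHit hit;
        float target = maxFocusDistance;
        if (Physics.Raycast(transform.position, transform.forward, out hit, maxFocusDistance))
            target = hit.distance;

        // Damp the transition so the focus doesn't snap instantly when
        // the ray slips on or off an object.
        dof.focusDistance.value = Mathf.SmoothDamp(
            dof.focusDistance.value, target, ref velocity, smoothTime);
    }
}
```

The SmoothDamp softens the refocus transitions, but it only eases the problem described below rather than eliminating it.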

The problem is that it looks terrible. You can’t actually know what the user’s eyes are focused on (unless you use eye-tracking, but that’s a whole other can of worms), so you have to assume it’s the center of the screen, which is often inaccurate.

Think of it this way: if you pan the camera right to left over a lamp post, the focus would appear to snap closer as the crosshair passes the lamp post, then immediately blur it again once it's off-center. It's very jarring to the user.