Very strange results from Camera.WorldToScreenPoint()

I am trying to do a very simple thing: draw a bounding box around various objects. To do it, I take each object's mesh, transform its vertices into screen coordinates using Camera.WorldToScreenPoint(), and find the x and y extents in screen space, while storing the vertices that correspond to those extents in world coordinates.
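Roughly what I'm doing, simplified (the method and variable names here are just mine for illustration):

```csharp
using UnityEngine;

// Project every vertex of the mesh, track the screen-space x/y extremes,
// and remember the world-space vertices that produced them.
Vector3[] GetExtremeWorldVertices(Camera cam, MeshFilter meshFilter)
{
    Mesh mesh = meshFilter.sharedMesh;
    Transform t = meshFilter.transform;

    Vector3 left = Vector3.zero, right = Vector3.zero, bottom = Vector3.zero, top = Vector3.zero;
    float minX = float.MaxValue, maxX = float.MinValue;
    float minY = float.MaxValue, maxY = float.MinValue;

    foreach (Vector3 v in mesh.vertices)
    {
        Vector3 world = t.TransformPoint(v);
        Vector3 screen = cam.WorldToScreenPoint(world);

        if (screen.x < minX) { minX = screen.x; left = world; }
        if (screen.x > maxX) { maxX = screen.x; right = world; }
        if (screen.y < minY) { minY = screen.y; bottom = world; }
        if (screen.y > maxY) { maxY = screen.y; top = world; }
    }

    // The four world-space vertices whose projections are the screen-space extremes.
    return new[] { left, right, bottom, top };
}
```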

I then take these four points, cast rays at them to ensure the object is visible, and if it is, transform them into screen space. At this point things start to go wrong: depending on the object's location relative to the camera, the transform produces the wrong result.
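Something like this for the second step (again simplified; the exact raycast setup here is just a sketch of what I mean by "cast rays at them"):

```csharp
// Check that a stored world-space point is actually visible from the camera,
// then project it back into screen space.
bool TryProjectVisiblePoint(Camera cam, Transform target, Vector3 worldPoint, out Vector3 screenPoint)
{
    screenPoint = Vector3.zero;
    Vector3 origin = cam.transform.position;
    Vector3 toPoint = worldPoint - origin;

    // The point counts as visible if the first thing the ray hits is the
    // target object itself (or nothing at all along the way).
    if (Physics.Raycast(origin, toPoint.normalized, out RaycastHit hit, toPoint.magnitude)
        && hit.transform != target)
    {
        return false;
    }

    // This is the call that seems to produce the wrong result for objects
    // near or just outside the frustum.
    screenPoint = cam.WorldToScreenPoint(worldPoint);
    return true;
}
```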

I have attached a couple of screenshots to illustrate. In the scene view, you can see that the boxes around the objects in front of the camera are drawn correctly, but the object just outside the view frustum is not: its box is in the wrong place. Each object in the scene is duplicated in front of and behind the camera; the effect appears as I move the camera backwards to bring the objects close to the edge of the frustum.

The game view shows the result after the points have been transformed back into world space and drawn as a 3D plane. The box at the top right does not enclose anything (the object is off-screen).

Objects that are located near each other in the world are being flung apart by thousands, sometimes tens of thousands, of pixels. I'm beating my brains out over this and making no headway; any help is appreciated.

I'm now convinced this is a Unity bug; I can't see any other reason for the result to change like this.

I'm going to try for a workaround by culling all objects outside the camera frustum (in world space) using GeometryUtility.CalculateFrustumPlanes() and then testing the bounds of each object with GeometryUtility.TestPlanesAABB(). The documentation example for this performs the test on an object with a collider: does anyone know if this means a collider is mandatory?
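This is roughly the workaround I have in mind. I'm using the renderer's bounds here rather than a collider's, purely as a guess; whether that is legitimate is exactly what I'm asking.

```csharp
// Cull anything whose bounds fall entirely outside the camera's frustum,
// so WorldToScreenPoint() is only ever called for objects inside it.
bool IsInsideFrustum(Camera cam, Renderer renderer)
{
    Plane[] planes = GeometryUtility.CalculateFrustumPlanes(cam);
    return GeometryUtility.TestPlanesAABB(planes, renderer.bounds);
}
```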