# Finding an origin for two rays

Let's say I have two rays that share an origin, ray A and ray B, and two world points, point C and point D. How can I find a new shared origin so that ray A intersects point C and ray B intersects point D?

So I just realized that in order for this to work, some rules have to be in place, and I need to explain my situation better. This is all based on a camera. I have two world points that are in front of the camera. The two rays are calculated from two points on the screen using Camera.ScreenPointToRay, and their shared origin is then set to the camera's position. I need these rays to intersect the two world points, but the rays need to stay the same relative to the camera. The camera will obviously need to be able to rotate around its local z axis to accommodate all possible pairs of points in the world. I really just need to find a new camera position such that the two points on the screen create rays that intersect the two defined world points; I will then ignore the rotation necessary to get into this position.

I hope this all makes sense.

Your additional requirements narrow down what you want to do a bit, but something is still unclear. If the two rays are not parallel, they enclose a fixed angle, and there are still infinitely many places where you can put their shared origin while keeping that property. By the inscribed-angle theorem, the set of points from which the segment between your two points subtends a fixed angle is a circular arc through the two points, rotated around the line connecting them; you can pick any point on that surface of revolution (a spindle-torus-like shape, not quite an ellipsoid).

If you additionally state that the camera's y position should be at the same "level" as the points, you still have infinitely many possibilities, as you can choose any point from the pair of circular arcs through your two points from which the segment subtends that angle.

If you additionally state that the camera's distance to each of the two points should be equal, you are down to two possibilities, since you can also view the points from the other side.

In addition, the camera can also be flipped around the z axis, which gives you two further possibilities. If the camera should try to keep its local y axis aligned with the world y axis, these can be ignored.
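For the symmetric case just described (camera equidistant from both points, rays enclosing a fixed angle), the inscribed-angle relation gives a closed form: the camera sits on the perpendicular bisector of the segment, at a distance of (half the chord) / tan(half the angle) from the midpoint. A minimal Python sketch of just the geometry; `symmetric_camera_position` and `back_dir` are hypothetical names, and `back_dir` must be a unit vector perpendicular to the segment that picks which side to stand on:

```python
import math

def symmetric_camera_position(c, d, ray_angle_deg, back_dir):
    """Place the camera on the perpendicular bisector of segment CD so that
    the rays toward C and D enclose ray_angle_deg degrees.
    back_dir: unit vector perpendicular to CD, choosing the viewing side."""
    mid = tuple((a + b) / 2 for a, b in zip(c, d))
    half_chord = math.dist(c, d) / 2
    # tan(angle / 2) = half_chord / distance  =>  solve for distance
    dist = half_chord / math.tan(math.radians(ray_angle_deg) / 2)
    return tuple(m + dist * b for m, b in zip(mid, back_dir))
```

For example, with a 90° angle between the rays and points at (±1, 0, 0), this places the camera one unit behind the midpoint.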

An explanation of the actual goal would help, I guess. You've abstracted your problem too much.

Okay, so I've taken time to reconsider my issue and decide if what I was doing really was the best way of going about it. I've decided to do things a different way.

Basically, this is all part of a scheme to get 1:1 zooming working, so that the fingers always point at the same points on a defined plane. At this point I've decided to adjust the way I want to solve this by doing the following:

When two fingers initially touch the screen, I calculate the distance between the two screen points projected onto my plane. Then, every frame that there's movement in either finger, I calculate a new distance for the camera to be at: the distance at which the two screen points, projected onto the plane, are the same distance apart as recorded when the fingers first touched down.
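Because the two rays keep a fixed direction relative to the camera, the separation of their plane intersections is directly proportional to the camera's height above the plane (similar triangles), even when the camera is tilted. That turns the distance search into a one-line solve. A sketch, assuming the zoom moves the camera along the plane's normal without rotating it (`required_camera_height` is a hypothetical name):

```python
def required_camera_height(current_height, current_sep, target_sep):
    """The on-plane separation of two fixed camera rays scales linearly with
    the camera's height above the plane, so the height that produces the
    recorded separation follows by simple proportion."""
    return current_height * (target_sep / current_sep)
```

For example, if the projected points are 4 units apart at height 10 and the recorded separation was 2, the camera should move to height 5.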

I've been struggling with the trigonometry of tetrahedrons all day to try to solve this. The reason I've been using a tetrahedron is that the camera could be at an angle, so I not only create a triangle between the two points projected on the plane and the camera's position, but I also need a fourth point, directly below the camera at the plane's y value, that the others all connect to. It's probably important to note that this plane has no rotation and keeps the same y height at all times.
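The tetrahedron trigonometry can be sidestepped entirely: since the plane is horizontal and never rotates, each projected point is just a ray-plane intersection, which is one division in vector form (this is essentially what Unity's `Plane.Raycast` computes; the standalone sketch below uses a hypothetical helper name):

```python
def intersect_horizontal_plane(origin, direction, plane_y):
    """Intersect a ray with the horizontal plane y = plane_y.
    Returns the hit point, or None if the ray is parallel to the plane
    or points away from it."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy == 0:
        return None
    t = (plane_y - oy) / dy  # ray parameter at which y reaches plane_y
    if t < 0:
        return None
    return (ox + t * dx, oy + t * dy, oz + t * dz)
```

This handles a tilted camera for free: the direction vector already encodes the angle.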

I'm much more confident about this plan of attack; I just have the same issue of trying to figure out the trig behind it.

I think I know what you mean... I'll see if I can explain my solution:

So when 2 fingers touch the screen you calculate the distance between them, let's say the distance is 20 units. Then you use this distance to calculate 2 new points in screen space (let's call these SP1 & SP2); SP1 is 10 units (half the distance) directly to the *left* of the centre point of the screen (remember this is in screen space too), and SP2 is 10 units to the *right* of the centre.
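That first step is just symmetric offsets from the screen centre; a tiny sketch with hypothetical names:

```python
def anchor_screen_points(finger_sep, screen_center):
    """Place SP1 half the finger separation to the left of the screen
    centre and SP2 the same amount to the right, on the same screen row."""
    cx, cy = screen_center
    half = finger_sep / 2
    return (cx - half, cy), (cx + half, cy)
```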

Then you need 2 points that are on your plane (let's call these PL1 & PL2). PL1 will be SP1 converted to a point on your plane (hopefully that makes sense), and PL2 the same for SP2.

Now, every frame that the fingers stay on the screen you need to re-calculate SP1 & SP2: get the distance, halve it, and move them to the left and right of the screen centre accordingly. I don't know if your camera moves up and down at all, but if it does you need to move PL1 & PL2 with it so that PL1, PL2, SP1, & SP2 always stay on a straight horizontal line in line with the screen centre.

For the zoom: You calculate PL1 & PL2 as screen space positions, and then move the camera or the plane (or whatever does your zoom) in and out to make sure that PL1 converted to screen space is always lined up with SP1, and PL2 the same for SP2.
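One zoom step can also be done in closed form rather than by nudging until things line up: intersect both screen rays with the plane, measure their separation, and rescale the camera's height above the plane by the ratio to the recorded separation (the separation is proportional to that height as long as the ray directions stay fixed). A Python sketch with hypothetical names, assuming the zoom moves the camera along the plane's normal without rotating it:

```python
import math

def intersect_y_plane(origin, direction, plane_y):
    # Ray / horizontal-plane intersection for a plane of constant y.
    t = (plane_y - origin[1]) / direction[1]
    return tuple(o + t * d for o, d in zip(origin, direction))

def zoom_step(cam_pos, dir1, dir2, plane_y, target_sep):
    """Project both screen rays onto the plane, measure their current
    separation, and rescale the camera's height above the plane so the
    projected separation matches target_sep."""
    p1 = intersect_y_plane(cam_pos, dir1, plane_y)
    p2 = intersect_y_plane(cam_pos, dir2, plane_y)
    current_sep = math.dist(p1, p2)
    new_height = (cam_pos[1] - plane_y) * target_sep / current_sep
    return (cam_pos[0], plane_y + new_height, cam_pos[2])
```

Applying the returned position and re-projecting should land the plane points exactly on the recorded separation in a single step.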

I really hope that made some sense.... and if it did I hope it helps!

Do you just mean that you need a mathematical algorithm to move the camera from side to side on the Z axis that keeps the 2 rays leaving the camera at the same angle (but 1 positive and 1 negative angle) like in the attached images?
