# Shader MVP matrices: what does vertex position mean at each step?

I've been scouring the web trying to find an explanation of what the position of a vertex actually means before and after each projection in the standard MVP chain. After reading and trying some things out, I understand it in a general sense:

Initially, position is a multiple of the bounds of the object, in local object space.

After **M**, position is in world coordinates.

After **V**, position is in camera coordinates.

After **P**, position is in screen coordinates.

What I'm trying to figure out for each of these is what exactly the units mean. Here's what I understand so far - if anything is incorrect or you can fill in the gaps, I'd appreciate any help :)

**Initial**

Units are -0.5 to 0.5 along the 3 axes, with -0.5 and 0.5 corresponding to the bounds of the object. No surprises there.

**Post-M**

Units are in Unity's world coordinate system. No surprises there.

**Post-V**

Units are in some kind of camera coordinate system. X and Y appear to be square, and a quad assigned an X and Y range of 1 in MV space doesn't quite fill the screen from bottom to top. No idea what units Z is in.

**Post-P**

X and Y units are in viewport coordinates, with -1 to 1 being the range horizontally and vertically? No idea what units Z is in.

Once you're in post-P space, it appears you can use _ScreenParams to get the number of pixels and convert from viewport to pixel coordinates in XY.
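As a sketch of that conversion (assuming the usual divide by w, which the hardware otherwise performs automatically after the vertex shader):

```
// Convert a post-P (clip-space) position to pixel coordinates.
float4 clipPos = mul(UNITY_MATRIX_MVP, v.vertex);
float2 ndc = clipPos.xy / clipPos.w;                  // divide by w -> [-1, 1]
float2 pixels = (ndc * 0.5 + 0.5) * _ScreenParams.xy; // [0, width] x [0, height]
```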

So in a general sense, I'm looking for information on what the coordinates mean in each space, and what can be done with them.

I'm trying to create some custom billboarding. Here are a couple examples of what I hope to do with this information:

Create a billboard that appears the same size regardless of distance from the camera.

Create a billboard whose size is dependent on distance from the camera, but not linearly. E.g. a billboard that appears to be 0 pixels wide at 1000 game units, and 100 pixels wide at 10 game units, increasing along some kind of curve.

Why do you assume that it is M then V then P? By convention in mathematics, the expression MVP * u would be evaluated by multiplying by P first. I can only assume that the same is done here, so that would be a matrix from projection space to model space.

No, that's not true. The point is that we do a right-side multiplication:

```
transformedPoint = P * V * M * point
```

So yes, we first apply M, then V, and finally P. M (the model matrix) brings local model coordinates into world space. V (the view matrix) brings world-space coordinates into the local space of the camera (essentially the inverse of the camera's transform). P (the projection matrix) finally projects from view space into normalized viewport space.
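In a Unity vertex shader that chain looks roughly like this (a sketch using the built-in matrices; `UNITY_MATRIX_MV` and `UNITY_MATRIX_MVP` are the pre-multiplied combinations Unity provides):

```
// Each mul() steps one space further along the chain.
float4 localPos = v.vertex;                        // object (model) space
float4 viewPos  = mul(UNITY_MATRIX_MV, localPos);  // object -> camera (view) space
float4 clipPos  = mul(UNITY_MATRIX_P, viewPos);    // view -> clip space
// Equivalently, all in one step:
float4 clipPos2 = mul(UNITY_MATRIX_MVP, localPos);
```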

If you want to know more about the projection matrix or matrices in general, have a look at my matrix crash course

And beyond the math, that's the definition - it applies them in that order when used the standard way. If the actual math were P(M/2 + V^2), we'd still call it the MVP matrix.

**Answer** by Owen-Reynolds · May 24, 2013 at 01:45 AM

OpenGL resources (like the graphics "Red Book") do a pretty good job explaining this. It's standard -- nothing to do with Unity.

Initial positions are the raw model coords, straight from the modelling program. They can be anything, but obviously should be touching/surrounding (0,0,0). The tip of an animated orc's nose is always (0,3,0.4) for every orc, every frame.

After MV, positions are in "world units" in the camera's local coordinate system - the same as `Camera.main.transform.InverseTransformPoint(p);`. If the camera is 10 meters away facing you, you will be at (0,0,-10) (at some step, negative Z is in front of the camera).

P accounts for the view angle and converts to the generic viewport coordinates you describe. Depending on the system that's often -1 to 1 (so (0,0) is centered). The hardware then converts to pixels. In the same way, Z is now "normalized" depth.

The standard way to make something the same size at any depth is to set up the matrix so it doesn't shrink with distance. In Unity, it's simpler to just add a second Ortho camera (which builds the matrix you wanted). For depth doing funny stuff, the vert shader should be able to check Z (in meters), or distance(xyz) (also in meters), after the MV step.
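As a sketch of that idea (an assumed approach, not verified Unity code; `_SizeFactor` is a hypothetical material property): transform the billboard's pivot to view space, then add the corner offsets scaled by the pivot's distance, so the later perspective divide cancels the shrink:

```
// v.vertex.xy holds the quad corner offset from the pivot (pivot at object origin).
float4 pivotView = mul(UNITY_MATRIX_MV, float4(0, 0, 0, 1)); // pivot in view space
float dist = -pivotView.z;                    // meters in front of the camera
// Scale the offset by distance: the divide by w (~dist) cancels it,
// so the quad keeps a constant apparent size on screen.
pivotView.xy += v.vertex.xy * dist * _SizeFactor;
o.vertex = mul(UNITY_MATRIX_P, pivotView);
```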

Awesome, thanks for the answer. The MV thing makes perfect sense now that you've pointed it out :)

In terms of keeping billboards the same size, ortho is definitely a good way to go (and we do that for GUI elements), but we need our billboards to actually be at a point in-world, able to be occluded and of course moving relative to the camera as the user moves around.

So a hybrid approach sounds like it will work: save the depth at the MV stage, and then set the size (as pixels or a percentage) at the MVP stage.
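Putting those pieces together, a distance-dependent size curve might be sketched like this (`_NearDist`, `_FarDist`, and the linear falloff are illustrative assumptions, not the exact curve asked about):

```
float4 pivotView = mul(UNITY_MATRIX_MV, float4(0, 0, 0, 1));
float dist = -pivotView.z;                       // depth saved at the MV stage
// Example curve: full size at _NearDist, shrinking to 0 pixels at _FarDist.
float t = saturate((_FarDist - dist) / (_FarDist - _NearDist));
float pixels = 100.0 * t;                        // desired on-screen width
// NDC spans 2 units across the screen, so pixels -> NDC width:
float ndcWidth = pixels * 2.0 / _ScreenParams.x;
float4 clipPos = mul(UNITY_MATRIX_P, pivotView);
// v.vertex.xy are corner offsets (assumed -0.5..0.5); * w so the divide cancels.
clipPos.xy += v.vertex.xy * ndcWidth * clipPos.w;
o.vertex = clipPos;
```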

At the MV stage, I presume that (0,0,0) is at the position of the camera, not something bizarre like the center of the camera's near clip plane or anything - let me know if that isn't correct :)

The view matrix really is just pure local camera coords, so (0,0,0) is the camera. For example this:

http://webglfactory.blogspot.com/2011/06/how-to-create-view-matrix.html is just doing a lookAt (Unity did not invert the LookAt :-))

The near and far planes are part of the depth calc, in the P step. Likewise the camera's view angle (or Ortho bounds) drives the perspective, which is, of course, in the P step.

**Answer** by Boris S. · Nov 18, 2014 at 05:33 PM

I am learning transformations too and have found a strange issue. After applying the MVP transformation in the vertex shader I expect to get vertex coordinates in the [-1, 1] range, but that is only true for an orthographic camera. With a perspective camera it is not. I don't know exactly what values I'm getting, but they are much bigger than 1. Could anyone please explain why this happens?

**Answer** by darktemplar216 · Jul 27, 2015 at 11:05 AM

The accepted answer isn't quite right: after v = P*V*M*v0, v.x is not in [-1, 1] and v.y is not in [-1, 1].

Only the grey area is [-1, 1].

I used `o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);` and output o.vertex as a color to run the experiment.
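What the last two posts are both seeing is that `UNITY_MATRIX_MVP` outputs clip-space coordinates, not normalized device coordinates: the GPU performs the perspective divide by w between the vertex and fragment stages. A sketch of that divide (for an orthographic camera w stays 1, which is why the [-1, 1] range held there):

```
float4 clipPos = mul(UNITY_MATRIX_MVP, v.vertex);
// clipPos.xyz can be far outside [-1, 1]; the range only appears after
// the divide by w (done automatically by the hardware):
float3 ndc = clipPos.xyz / clipPos.w;  // on-screen points now have ndc.xy in [-1, 1]
```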

