Comments and answers for "Shader MVP matrices: what does vertex position mean at each step?"
http://answers.unity.com/questions/462253/shader-mvp-matrices-what-does-vertex-position-mean.html
The latest comments and answers for the question "Shader MVP matrices: what does vertex position mean at each step?"

Comment by Owen-Reynolds on Owen-Reynolds's comment
http://answers.unity.com/comments/1610210/view.html
And beyond the math, that's the definition - it applies them in that order, when used the standard way. If the actual math were P(M/2 + V^2), we'd still call it the MVP matrix.
Fri, 08 Mar 2019 03:12:34 GMT - Owen-Reynolds

Comment by Bunny83 on Bunny83's comment
http://answers.unity.com/comments/1610183/view.html
No, that's not true. The point is that we do a right-side multiplication:
`transformedpoint = P * V * M * point`
So yes, we do first apply M, then V, and finally P. M (the model matrix) brings local model coordinates into world space. V (the view matrix) brings world space coordinates into the local space of the camera (essentially the inverse of the camera object's transform). P (the projection matrix) finally does the projection from view space into normalized viewport space.
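That right-to-left ordering can be checked with plain translation matrices; a minimal pure-Python sketch (all values hypothetical, P left out for brevity so this stops at view space):

```python
def mat_vec(m, v):
    """Multiply a row-major 4x4 matrix by a column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_mat(a, b):
    """Multiply two row-major 4x4 matrices."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)] for r in range(4)]

def translate(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

M = translate(5, 0, 0)    # model sits at x = 5 in world space
V = translate(0, 0, -10)  # camera at (0, 0, 10), so view = inverse of the camera transform

p = [0, 0, 0, 1]          # a vertex at the model's local origin
world = mat_vec(M, p)     # M applies first: [5, 0, 0, 1]
view = mat_vec(V, world)  # then V: [5, 0, -10, 1]

# Pre-composing V*M and applying it once gives the same result (associativity),
# which is why "MVP" can be baked into a single matrix yet still apply M first:
assert mat_vec(mat_mat(V, M), p) == view
```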
If you want to know more about the projection matrix or matrices in general, have a look at [my matrix crash course][1]
[1]: http://answers.unity.com/answers/1359877/view.html
Fri, 08 Mar 2019 00:52:45 GMT - Bunny83

Comment by GarthLaoshi
http://answers.unity.com/comments/1610181/view.html
Why do you assume that it is M, then V, then P? By convention in mathematics, the expression MVP * u would be evaluated by multiplying by P first. I can only assume that the same is done here, so that would be a matrix from projection space to model space.
Fri, 08 Mar 2019 00:29:01 GMT - GarthLaoshi

Answer by darktemplar216
http://answers.unity.com/answers/1015802/view.html
The answer is wrong.
After `v = P*V*M*v0`, v.x is not in [-1, 1] and v.y is not in [-1, 1]. ![alt text][1]
[1]: /storage/temp/51066-qq图片20150727181046.jpg
The grey area is [-1, 1].
I used `o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);` and output o.vertex as a color to do the experiment.
Mon, 27 Jul 2015 11:05:37 GMT - darktemplar216

Answer by Boris S.
http://answers.unity.com/answers/834963/view.html
I am learning transformations too and have found a strange issue. After applying the MVP transformation in the vertex shader I expect to get vertex coordinates in the [-1, 1] range, but that is only true for an orthographic camera. With a perspective camera it is not. I do not know exactly what values I get, but they are much bigger than 1. Could anyone please explain why this happens?
Tue, 18 Nov 2014 17:33:37 GMT - Boris S.

Comment by Owen-Reynolds on Owen-Reynolds's answer
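Both observations above come down to the perspective divide: with a perspective P, the vertex shader's output is in clip space, where visible x/y lie in [-w, w], not [-1, 1]; the GPU divides by w afterwards to get normalized device coordinates. A minimal sketch, assuming a standard OpenGL-style projection (all values hypothetical):

```python
import math

def perspective(fov_deg, aspect, near, far):
    """A standard OpenGL-style perspective projection matrix (row-major)."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0, 0, -1, 0]]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

P = perspective(90, 1.0, 1.0, 100.0)
view_pos = [3.0, 2.0, -10.0, 1.0]       # 10 units in front of the camera
clip = mat_vec(P, view_pos)             # clip.x is about 3.0: outside [-1, 1]
ndc = [c / clip[3] for c in clip[:3]]   # divide by w: about (0.3, 0.2, 0.82)

assert abs(clip[0]) > 1                 # what the shader (and the color experiment) sees
assert all(-1 <= c <= 1 for c in ndc)   # what remains after the hardware's divide
```

An orthographic P has w = 1, which is why the divide changes nothing there and the shader output already lands in [-1, 1].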
http://answers.unity.com/comments/462546/view.html
The view matrix really is just pure local camera coords, so (0,0,0) is the camera. For example this:
http://webglfactory.blogspot.com/2011/06/how-to-create-view-matrix.html is just doing a lookAt (Unity did not invert the LookAt :-))
The near and far planes are part of the depth calc, in the P step. Likewise the camera view angle (or Ortho bounds) is part of the projection, which is, of course, the P step.
Fri, 24 May 2013 12:47:57 GMT - Owen-Reynolds

Comment by Julien-Lynge on Julien-Lynge's answer
http://answers.unity.com/comments/462310/view.html
Awesome, thanks for the answer. The MV thing makes perfect sense now that you've pointed it out :)
In terms of keeping billboards the same size, ortho is definitely a good way to go (and we do that for GUI elements), but we need our billboards to actually be at a point in-world, able to be occluded and of course moving relative to the camera as the user moves around.
So a hybrid approach sounds like it will work: save the depth in the MV stage, and then set the size (as pixels or percent) in the MVP stage.
At the MV stage, I presume that (0,0,0) is at the point of the camera, not something bizarre like the center of the camera's near clip plane or anything - let me know if that isn't correct :)
Fri, 24 May 2013 02:00:14 GMT - Julien-Lynge

Answer by Owen-Reynolds
http://answers.unity.com/answers/462305/view.html
OpenGL material (like the graphics "Red Book") does a pretty good job explaining this. It's standard -- nothing to do with Unity.
The initial values are the raw model coords, straight from the modelling program. They can be anything, but obviously should be touching/surrounding (0,0,0). The tip of an animated orc's nose is always (0, 3, 0.4) for every orc, every frame.
After MV, the vertex is in "world units" in the camera's local coordinate system; the same as `Camera.main.transform.InverseTransformPoint(p)`. If the camera is 10 meters away facing you, it will be (0, 0, -10) (at some step, negative Z is in front of you).
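The "10 meters away gives (0,0,-10)" example can be reproduced by building the view matrix as the inverse of the camera's own transform. A sketch assuming an OpenGL-style camera that looks down its local -Z axis (all numbers hypothetical):

```python
def rigid_inverse(r, t):
    """Invert a rotation+translation transform: view = [R^T | -R^T t]."""
    rt = [[r[j][i] for j in range(3)] for i in range(3)]              # transpose of R
    nt = [-sum(rt[i][j] * t[j] for j in range(3)) for i in range(3)]  # -R^T t
    return [rt[0] + [nt[0]], rt[1] + [nt[1]], rt[2] + [nt[2]], [0, 0, 0, 1]]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Camera at (10, 0, 0), yawed 90 degrees so its local -Z points at the origin:
R = [[0, 0, 1], [0, 1, 0], [-1, 0, 0]]
V = rigid_inverse(R, [10, 0, 0])

# A vertex at the world origin, 10 meters in front of the camera:
assert mat_vec(V, [0, 0, 0, 1]) == [0, 0, -10, 1]
```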
P accounts for the view angle and converts to a generic viewport, as you say. Depending on the system that's often -1 to 1 (so (0,0) is centered). The hardware will convert that to pixels. In the same way, Z is now a "normalized" depth.
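That "convert to pixels" step is just a linear remap of the [-1, 1] range onto the viewport; a sketch (resolution and axis convention are hypothetical, assuming an OpenGL-style origin at the bottom-left):

```python
def ndc_to_pixels(ndc_x, ndc_y, width, height):
    """Viewport transform: map [-1, 1] normalized device coords onto the pixel grid."""
    return ((ndc_x * 0.5 + 0.5) * width, (ndc_y * 0.5 + 0.5) * height)

assert ndc_to_pixels(0.0, 0.0, 1920, 1080) == (960.0, 540.0)  # NDC (0,0) is the screen center
assert ndc_to_pixels(-1.0, -1.0, 1920, 1080) == (0.0, 0.0)    # bottom-left corner
```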
---
The standard way to make something the same size at any depth is to set the matrix not to shrink it. In Unity, it's simpler to just make a 2nd Ortho camera (which makes the matrix you wanted). For depth doing funny stuff, you should be able to have the vert shader check Z (in meters), or distance(xyz), also in meters, after the MV step.
Fri, 24 May 2013 01:45:40 GMT - Owen-Reynolds
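The "check Z after the MV step" trick works because the perspective divide shrinks x/y offsets in proportion to view-space depth, so multiplying the billboard's corner offsets by that depth cancels the shrink. A numeric sketch (FOV and sizes hypothetical; `f` is the projection's focal scale):

```python
import math

f = 1.0 / math.tan(math.radians(60) / 2)  # projection scale for a hypothetical 60-degree FOV

def projected_halfwidth(half_size, view_depth):
    """After the perspective divide, an x offset of half_size at view_depth
    spans f * half_size / view_depth in normalized device coords."""
    return f * half_size / view_depth

def constant_size_offset(desired_ndc, view_depth):
    """Scale the corner offset by the MV-stage depth so the divide cancels out."""
    return desired_ndc * view_depth / f

for depth in (5.0, 50.0):  # near or far, the projected half-width stays 0.1
    offset = constant_size_offset(0.1, depth)
    assert abs(projected_halfwidth(offset, depth) - 0.1) < 1e-9
```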