VCP Mirror Theory

The following is the list of steps needed to implement such a VCP reflection properly:
In the implementation of the VCP mirror, I used the D3DX miscellaneous function D3DXCreateRenderToSurface to get an ID3DXRenderToSurface interface. The ID3DXRenderToSurface interface can be used to attach a texture surface, which can then easily serve as the render target.

The reflection matrix in step 2 can be calculated with the D3DXMatrixReflect function in Direct3D. This matrix is really handy, as you can get the VCP simply by multiplying it with the current camera world-space position. The multiplication has the effect of flipping the point at the current camera position across the mirroring plane to the VCP in world space. The reflection matrix simply inverts the coordinate axis perpendicular to the mirroring plane; it is a variation of the common identity matrix. So if the yz plane were the mirroring plane, the reflection matrix would have a diagonal of {-1, 1, 1, 1}, which inverts the x-axis appropriately. The following is the reflection matrix for a mirroring plane on the yz plane:

    | -1  0  0  0 |
    |  0  1  0  0 |
    |  0  0  1  0 |
    |  0  0  0  1 |

Step 4 computes the perpendicular distance of the camera from the mirroring plane in world space. We need a vector from one of the mirror vertices to the camera position; let's call it a. We also need another vector representing the normal of the mirroring plane; let's call it n. Normal n can be calculated easily as the cross product of two of the mirror's edges. The scalar projection of a onto n then yields the perpendicular distance we need. Figure 3 shows the vector projection used to find the perpendicular distance. In mathematical notation, the projection is:

    d = (a · n) / |n|

This is just the dot product of vector a with the normalized n. Alternatively, the VCP calculated in step 3 could be used in place of the camera position when calculating a, but you would need to be careful with the sign of the resultant scalar distance. If the distance is negative, the camera is behind the mirror, and steps 5 to 11 can be skipped.
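The reflection and the distance computation above can be sketched without the D3DX types. This is a minimal illustration, not the sample's actual code: the Vec3 struct and function names are stand-ins I introduce here, and the reflection is hard-coded for a mirror on the yz plane to match the matrix shown above.

```cpp
#include <cassert>
#include <cmath>

// Minimal 3-component vector; a stand-in for D3DXVECTOR3 so the sketch
// compiles without the DirectX SDK.
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static float length(const Vec3& v) { return std::sqrt(dot(v, v)); }

// Reflect a world-space point across the yz plane (x = 0). This is what
// multiplying by the diag(-1, 1, 1, 1) reflection matrix does, and it is
// how the VCP is obtained from the real camera position (step 3).
static Vec3 reflectAcrossYZ(const Vec3& p) {
    return Vec3{ -p.x, p.y, p.z };
}

// Step 4: perpendicular distance of the camera from the mirroring plane.
// 'a' runs from a mirror vertex to the camera; 'n' is the plane normal.
// The scalar projection of a onto n is (a . n) / |n|.
static float perpendicularDistance(const Vec3& mirrorVertex,
                                   const Vec3& cameraPos,
                                   const Vec3& n) {
    Vec3 a{ cameraPos.x - mirrorVertex.x,
            cameraPos.y - mirrorVertex.y,
            cameraPos.z - mirrorVertex.z };
    return dot(a, n) / length(n);
}
```

For an arbitrary mirroring plane you would use D3DXMatrixReflect (or the general plane-reflection matrix) instead of the hard-coded yz case.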
Steps 5 to 7 are meant for converting the mirror and the virtual camera (at the VCP) from world space to camera space ("eye space" in OpenGL jargon). In camera space, the virtual camera should sit at the origin.

Step 5 involves standard vector dot-product arithmetic and some vertex rotations. We need to rotate the mirror vertices twice: once about the x-axis and once about the y-axis. The reason for doing this is to align the mirror's vertices with the xy plane. The angle to rotate about the x-axis comes from the dot product of the mirror's up vector and the y-axis. The angle to rotate about the y-axis comes from the dot product of the mirror's "LeftToRight" vector with the x-axis. The "LeftToRight" vector can be computed from the lower-left and lower-right mirror vertices. The two rotations are combined into a single rotation matrix so that we can just multiply the mirror vertices with it to apply the combined rotation.

We have to be very careful when computing the rotation about the y-axis. We need to ensure that, after rotation, the virtual camera faces the +ve z direction; if you are using a right-hand coordinate system, as in OpenGL, then it is the –ve z direction. This is a must, because the projection matrix calculation in step 9 will not work if the virtual camera does not face the +ve z direction. In fact, all projection matrix calculations for a left-hand coordinate system assume that the camera is looking down the +ve z direction; the corresponding assumption for a right-hand coordinate system is the –ve z direction. We can make sure that our virtual camera, when rotated, faces the +ve z direction by testing the z coordinate of the mirror's normal after it has been rotated by the rotation matrix calculated earlier. The mirror's normal is the same as the direction of the virtual camera. We multiply it with the computed rotation matrix and check the rotated normal.
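The facing check described above can be sketched as follows. This is an illustrative fragment only: the rotation functions, the Vec3 stand-in, and the sign convention (standard axis rotations; handedness conventions vary between APIs) are my assumptions, not the sample's code.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };  // stand-in for D3DXVECTOR3

// Standard rotation of a vector about the x-axis by 'a' radians.
static Vec3 rotateX(const Vec3& v, float a) {
    float c = std::cos(a), s = std::sin(a);
    return Vec3{ v.x, c * v.y - s * v.z, s * v.y + c * v.z };
}

// Standard rotation of a vector about the y-axis by 'a' radians.
static Vec3 rotateY(const Vec3& v, float a) {
    float c = std::cos(a), s = std::sin(a);
    return Vec3{ c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
}

// Step 5's safety check: rotate the mirror normal by the candidate x/y
// angles; if the rotated normal ends up facing -ve z, negate the y-axis
// angle so the virtual camera will face +ve z (left-hand convention).
static float chooseYAngle(const Vec3& normal, float angX, float angY) {
    Vec3 n = rotateY(rotateX(normal, angX), angY);
    return (n.z >= 0.0f) ? angY : -angY;
}
```

In a right-hand system such as OpenGL you would test for –ve z instead.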
The sign of the z coordinate of the rotated normal tells you whether the virtual camera will be facing the +ve or –ve z direction. If the z coordinate of the rotated normal is –ve (facing the –ve z direction), we need to recompute the rotation matrix by negating the angle of rotation about the y-axis. The following figure sums up this paragraph, I hope.

Step 6 uses the previous rotation matrix to rotate the virtual camera at the VCP by the same amount as the mirror vertices. Remember that the rotated virtual camera must face +ve z for a left-hand coordinate system (DirectX) and –ve z for a right-hand coordinate system (OpenGL).

Step 7 involves negating the x, y and z components of the rotated VCP and then translating the mirror vertices (already parallel with the xy plane) by the negated values. This has the effect of translating the mirror vertices by the same magnitude needed to move the VCP to the origin. The conversion from world space to camera space is now complete, and we can start computing the projection matrix. Figure 5 shows the mirror vertices being rotated into alignment with the xy plane and then translated (into camera space).

Step 8 simply grabs the min/max x and y coordinates from the mirror vertices in camera space (i.e. after the rotation and translation).

Step 9 is where we formulate a matrix to represent the projection that we want in camera space. The values needed are the min/max x and y values, the near clip and the far clip. The near clip is the perpendicular distance calculated in step 4. The far clip depends on how far, or how "deep", you want the reflection to be. Normally this should be a large value, or at least one matching the far clip of any custom view frustum. You can compute the projection matrix from the above ingredients using the standard formulas, or you can simply use the D3DXMatrixPerspectiveOffCenterLH function in DirectX. Figure 6 below shows a graphical depiction of such a projection in camera space.
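If you want to build the step 9 matrix by hand rather than call D3DXMatrixPerspectiveOffCenterLH, the construction looks like this. The sketch follows the matrix layout documented for that D3DX function (row-major, row-vector convention); the Mat4 struct and function name are stand-ins of mine.

```cpp
#include <cassert>
#include <cmath>

// 4x4 row-major matrix, matching the row-vector convention D3DX uses.
struct Mat4 { float m[4][4]; };

// Off-center perspective projection for a left-hand coordinate system,
// per the documented D3DXMatrixPerspectiveOffCenterLH layout.
// l/r/b/t are the min/max x and y of the mirror in camera space (step 8),
// zn is the perpendicular distance from step 4, zf is the chosen far clip.
static Mat4 perspectiveOffCenterLH(float l, float r, float b, float t,
                                   float zn, float zf) {
    Mat4 out{};  // zero-initialise every element
    out.m[0][0] = 2.0f * zn / (r - l);
    out.m[1][1] = 2.0f * zn / (t - b);
    out.m[2][0] = (l + r) / (l - r);
    out.m[2][1] = (t + b) / (b - t);
    out.m[2][2] = zf / (zf - zn);
    out.m[2][3] = 1.0f;
    out.m[3][2] = zn * zf / (zn - zf);
    return out;
}
```

Note that when the mirror is centred on the camera-space z-axis (l = -r, b = -t), the off-center terms m[2][0] and m[2][1] vanish and this degenerates to an ordinary symmetric perspective projection.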
Step 10 computes the view matrix for the virtual camera. The purpose of the view matrix in DirectX is to collate the orientation and position of the viewer; our viewer in this case is the virtual camera at the VCP. To compute the correct view matrix for the virtual camera, simply "point" the virtual camera at the original camera in world space. Take a look at Figure 7 below, which shows the top view of the off-center projection and the view of the virtual camera. Again, you have the option of applying the full formulas to compute the view matrix, but it can be done easily with the D3DXMatrixLookAtLH function, which is again meant for a left-hand coordinate system. Use the D3DXMatrixLookAtRH function if you are using a right-hand coordinate system. A point to note here is that the up vector required by the D3DXMatrixLookAtLH function should be the world up vector, which is usually the y-axis. Do not use the up vector of the camera: when the camera rotates, the reflection in the mirror should not rotate with it; it should stay fixed. Hence the use of the world up vector, which is fixed all the time.

Lastly, set the world and view matrices (e.g. pDevice->SetTransform(D3DTS_WORLD, &mat) in DirectX) and render the scene to the texture surface that we prepared in step 1. You have just completed the first rendering pass! The second pass is just drawing the entire scene, including the mirror, which can now be textured with the nice reflection texture you rendered previously. :)

The following are some screenshots of incorrect and correct reflections generated using the VCP technique. The reasons for the inaccuracies will be explained later. Figure 8 and Figure 9 show two screenshots taken during the implementation of the VCP sample. Figure 8 seems to be a correct reflection but is actually wrong.
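Before looking at the screenshots, here is how the step 10 look-at matrix is built if you do apply the full formulas instead of calling D3DXMatrixLookAtLH. The sketch follows the construction documented for that function; the Vec3/Mat4 structs and helpers are stand-ins of mine.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };      // stand-in for D3DXVECTOR3
struct Mat4 { float m[4][4]; };      // row-major, row-vector convention

static Vec3 sub(Vec3 a, Vec3 b) { return Vec3{a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return Vec3{a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return Vec3{v.x / len, v.y / len, v.z / len};
}

// Left-hand look-at view matrix, per the documented D3DXMatrixLookAtLH
// construction. 'eye' is the VCP, 'at' is the original camera position,
// and 'up' should be the WORLD up vector (usually the y-axis), not the
// camera's own up vector.
static Mat4 lookAtLH(Vec3 eye, Vec3 at, Vec3 up) {
    Vec3 zaxis = normalize(sub(at, eye));     // viewing direction
    Vec3 xaxis = normalize(cross(up, zaxis)); // camera right
    Vec3 yaxis = cross(zaxis, xaxis);         // camera up
    Mat4 v{};
    v.m[0][0] = xaxis.x; v.m[0][1] = yaxis.x; v.m[0][2] = zaxis.x;
    v.m[1][0] = xaxis.y; v.m[1][1] = yaxis.y; v.m[1][2] = zaxis.y;
    v.m[2][0] = xaxis.z; v.m[2][1] = yaxis.z; v.m[2][2] = zaxis.z;
    v.m[3][0] = -dot(xaxis, eye);
    v.m[3][1] = -dot(yaxis, eye);
    v.m[3][2] = -dot(zaxis, eye);
    v.m[3][3] = 1.0f;
    return v;
}
```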
With the mirror standing perpendicular to the floor and its bottom edge touching it, the orientation and mismatch of the floor textures give a good hint that the reflection is wrong. Figure 9 shows a much more obvious screen capture of the same implementation: the floor reflection is totally wrong and the scene appears slightly skewed towards the bottom of the mirror. The errors were traced to an incorrect calculation of the projection matrix. A wrong projection matrix easily causes an incorrect reflection, as shown in Figure 10 below, and such an error is not easy to trace if the projection is only marginally off. Now let's take a look at some screen captures of correct reflections generated by the sample. Notice the correct reflection of the floor immediately in front of the mirror. Since the mirror's bottom edge is touching the floor, the reflection should be as depicted in Figure 12 above instead of the one in Figure 9. The next screen capture shows another accurate reflection generated from an elevated viewing position. OK, that's all for VCP reflections. Fire up your favorite development environment or editor and start coding!

