1. Assignment description
In this part of the course, we focus on rendering images with ray tracing. One of the most important operations in ray tracing is finding the intersection of a ray and an object; once that intersection is found, shading can be performed and the pixel color returned. In this assignment, we need to implement two parts: ray generation and ray-triangle intersection. The workflow of the code framework is:
- Start from the main function. We define the scene parameters, add objects (spheres or triangles) to the scene, set their materials, and then add light sources to the scene.
- Call the Render(scene) function. In the loop over all pixels, the corresponding ray is generated and the returned color is saved in the frame buffer. At the end of the rendering process, the frame buffer is written out as an image.
- After generating the ray for a pixel, we call the castRay function, which calls trace to find the intersection of the ray with the nearest object in the scene.
- Then we perform shading at this intersection. Three different shading cases are set up, and the code for them is provided.
• global.hpp: contains the basic functions and variables used throughout the framework.
• Vector.hpp: since the Eigen library is no longer used, common vector operations such as dotProduct, crossProduct and normalize are provided here.
• Object.hpp: base class of renderable objects. Both the Triangle and Sphere classes inherit from it.
• Scene.hpp: defines the scene to render, including its parameters, objects and lights.
• Renderer.hpp: the renderer class, which implements all ray tracing operations.
The functions you need to modify are:
• Renderer.cpp: Render(): here you need to generate a corresponding ray for each pixel, then call castRay() to get its color, and finally store the color in the corresponding pixel of the frame buffer.
• Triangle.hpp: rayTriangleIntersect(): v0, v1 and v2 are the three vertices of the triangle, orig is the origin of the ray, and dir is the unit direction vector of the ray. tnear, u and v are the parameters you need to update using the Möller-Trumbore algorithm derived in class.
2. Solution
The modifications required by this assignment are still fairly simple.
2.1 Render
According to the assignment, we need to generate a corresponding ray for each pixel in this part. From the known conditions we have the camera's fov and the scene's resolution and aspect ratio, and we need to compute the ray from the camera through each pixel. At first there seems to be something missing: the depth of the image plane is unknown, so we cannot use the fov to compute the height of the image plane in world coordinates, nor its width from the aspect ratio, and the ray appears underdetermined. But looking carefully at the starter code, there is this line:

Vector3f dir = Vector3f(x, y, -1); // Don't forget to normalize this direction!

Besides reminding us to normalize the computed ray direction (i.e., this vector is the unnormalized direction from the camera to the pixel), it implies a hidden condition: the image plane lies at depth -1 (the camera looks in the -z direction). With that, the ray can be computed from the given conditions. The pixels are traversed from top-left to bottom-right, and the pixel coordinate (640, 480), the image center, corresponds to (0, 0) in camera space. During the coordinate conversion, remember that we want the center of each pixel, so add 0.5 to i and j:
void Renderer::Render(const Scene& scene)
{
    std::vector<Vector3f> framebuffer(scene.width * scene.height);

    float scale = std::tan(deg2rad(scene.fov * 0.5f));
    float imageAspectRatio = scene.width / (float)scene.height;

    // Use this variable as the eye position to start your rays.
    Vector3f eye_pos(0);
    int m = 0;
    for (int j = 0; j < scene.height; ++j)
    {
        for (int i = 0; i < scene.width; ++i)
        {
            // generate primary ray direction:
            // map the pixel center (i + 0.5, j + 0.5) to the image plane at
            // z = -1, scaling by *scale* and (for x) by *imageAspectRatio*
            float x = 2 * scale * imageAspectRatio / scene.width * (i + 0.5) - scale * imageAspectRatio;
            float y = -2 * scale / scene.height * (j + 0.5) + scale;

            Vector3f dir = Vector3f(x, y, -1); // Don't forget to normalize this direction!
            dir = normalize(dir);
            framebuffer[m++] = castRay(eye_pos, dir, scene, 0);
        }
        UpdateProgress(j / (float)scene.height);
    }

    // save framebuffer to file
    FILE* fp = fopen("binary.ppm", "wb");
    (void)fprintf(fp, "P6\n%d %d\n255\n", scene.width, scene.height);
    for (auto i = 0; i < scene.height * scene.width; ++i)
    {
        static unsigned char color[3];
        color[0] = (char)(255 * clamp(0, 1, framebuffer[i].x));
        color[1] = (char)(255 * clamp(0, 1, framebuffer[i].y));
        color[2] = (char)(255 * clamp(0, 1, framebuffer[i].z));
        fwrite(color, 1, 3, fp);
    }
    fclose(fp);
}
2.2 rayTriangleIntersect
The code in this part is the concrete implementation of the Möller-Trumbore algorithm covered in Lecture 13.
The detailed derivation of the algorithm can be found in the linked article: Möller Trumbore
Note that after the computation, don't forget to check the validity of the resulting tnear, u and v; return true only if they lie in the valid range (which for the barycentric coordinates also requires u + v <= 1).
bool rayTriangleIntersect(const Vector3f& v0, const Vector3f& v1, const Vector3f& v2,
                          const Vector3f& orig, const Vector3f& dir,
                          float& tnear, float& u, float& v)
{
    // Möller-Trumbore: solve orig + t * dir = (1 - u - v) * v0 + u * v1 + v * v2
    Vector3f E1 = v1 - v0;
    Vector3f E2 = v2 - v0;
    Vector3f S  = orig - v0;
    Vector3f S1 = crossProduct(dir, E2);
    Vector3f S2 = crossProduct(S, E1);

    Vector3f re = Vector3f(dotProduct(S2, E2), dotProduct(S1, S), dotProduct(S2, dir));
    re = re / dotProduct(S1, E1);
    tnear = re.x;
    u = re.y;
    v = re.z;

    // The hit is valid only if t is positive and (u, v) are valid barycentric
    // coordinates, which also requires u + v <= 1 (otherwise the hit point
    // lies outside the triangle even though u and v are each in [0, 1]).
    if (tnear > 0 && u >= 0 && v >= 0 && 1 - u - v >= 0)
        return true;
    return false;
}
3. Effect
The result is basically consistent with the reference image.
4. Framework understanding
In the ray tracing part of the course, the assignment framework expands further and introduces many more involved computations. To get up to speed faster on the later assignments, I have gone through the framework in detail below.
4.1 scene setting
First, the program starts in main.cpp, whose main function creates the whole scene (1280 x 960 pixels):

int main()
{
    Scene scene(1280, 960);
    ...
Then two spheres are created; their centers, materials, colors and other attributes are set, and they are added to the scene:

...
auto sph1 = std::make_unique<Sphere>(Vector3f(-1, 0, -12), 2); // center (-1, 0, -12), radius 2
// DIFFUSE_AND_GLOSSY: the reflection off this object is a superposition of glossy
// reflection (between specular and diffuse) and diffuse reflection
sph1->materialType = DIFFUSE_AND_GLOSSY;
sph1->diffuseColor = Vector3f(0.6, 0.7, 0.8); // diffuse base color (0.6, 0.7, 0.8)

auto sph2 = std::make_unique<Sphere>(Vector3f(0.5, -0.5, -8), 1.5); // center (0.5, -0.5, -8), radius 1.5
sph2->ior = 1.5; // index of refraction 1.5
// REFLECTION_AND_REFRACTION: a superposition of specular reflection and refraction
sph2->materialType = REFLECTION_AND_REFRACTION;

scene.Add(std::move(sph1));
scene.Add(std::move(sph2)); // add both spheres to the scene
...
The next step is to create the rectangular ground mesh, which is treated as two triangles stitched together. vertIndex distinguishes the vertices of the two triangles: the first triangle uses vertices 0, 1, 3 of verts, and the second uses vertices 1, 2, 3:

...
Vector3f verts[4] = {{-5,-3,-6}, {5,-3,-6}, {5,-3,-16}, {-5,-3,-16}}; // four vertices of the mesh
uint32_t vertIndex[6] = {0, 1, 3, 1, 2, 3}; // vertex indices of the two triangles
Vector2f st[4] = {{0, 0}, {1, 0}, {1, 1}, {0, 1}}; // texture coordinates
auto mesh = std::make_unique<MeshTriangle>(verts, vertIndex, 2, st); // 2 = number of triangles
mesh->materialType = DIFFUSE_AND_GLOSSY; // set the mesh material to DIFFUSE_AND_GLOSSY
scene.Add(std::move(mesh)); // add the mesh to the scene
...
Then create two point lights and add them to the scene:
...
scene.Add(std::make_unique<Light>(Vector3f(-20, 70, 20), 0.5)); // point light at (-20, 70, 20) with intensity 0.5
scene.Add(std::make_unique<Light>(Vector3f(30, 50, -12), 0.5));
...
Finally, create a renderer and add the created scene to the renderer:
...
Renderer r;
r.Render(scene);

return 0;
}
4.2 rendering
We arrive at the Render function of Renderer.cpp, one of the parts we modified for this assignment: we have already computed the ray through each pixel of the image plane. Next, castRay computes the intersection of the ray with the scene and its color, and the color is stored in the framebuffer, awaiting output:
void Renderer::Render(const Scene& scene)
{
    std::vector<Vector3f> framebuffer(scene.width * scene.height);

    float scale = std::tan(deg2rad(scene.fov * 0.5f));
    float imageAspectRatio = scene.width / (float)scene.height;

    // Use this variable as the eye position to start your rays.
    Vector3f eye_pos(0);
    int m = 0;
    for (int j = 0; j < scene.height; ++j)
    {
        for (int i = 0; i < scene.width; ++i)
        {
            // generate primary ray direction
            float x = 2 * scale * imageAspectRatio / scene.width * (i + 0.5) - scale * imageAspectRatio;
            float y = -2 * scale / scene.height * (j + 0.5) + scale;

            Vector3f dir = Vector3f(x, y, -1); // Don't forget to normalize this direction!
            dir = normalize(dir);
            framebuffer[m++] = castRay(eye_pos, dir, scene, 0);
            ...
4.3 intersection
After generating the ray, we find its intersection with the scene so that shading can be computed. Here we enter the castRay function. First, some basic setup:
Vector3f castRay(const Vector3f& orig, const Vector3f& dir, const Scene& scene, int depth)
{
    // If the maximum number of reflections/refractions for the scene is
    // exceeded, return black and stop recursing.
    if (depth > scene.maxDepth) {
        return Vector3f(0.0, 0.0, 0.0);
    }
    // Default hitColor is the scene's background color: if the ray hits no
    // object, the background color is returned.
    Vector3f hitColor = scene.backgroundColor;
    ...
Next, the trace function determines whether the ray intersects any object in the scene:
...
if (auto payload = trace(orig, dir, scene.get_objects()); payload)
{
    ...
Entering the trace function: it calls the intersect method of each object with the passed parameters to find the closest intersection:
std::optional<hit_payload> trace(const Vector3f& orig, const Vector3f& dir,
                                 const std::vector<std::unique_ptr<Object>>& objects)
{
    float tNear = kInfinity;
    std::optional<hit_payload> payload;
    for (const auto& object : objects)
    {
        float tNearK = kInfinity;
        uint32_t indexK;
        Vector2f uvK;
        // Keep the intersection if it is valid and closer than the best so far.
        if (object->intersect(orig, dir, tNearK, indexK, uvK) && tNearK < tNear)
        {
            ...
Because Object has two subclasses, Sphere and Triangle, there are naturally two implementations of intersect:
4.3.1 triangle intersection
The intersect method can be seen in Triangle.hpp. It traverses the two triangles of the mesh and runs the rayTriangleIntersect function on each to compute the intersection. That function was given in the solution above, so its code is not repeated here:

bool intersect(const Vector3f& orig, const Vector3f& dir, float& tnear,
               uint32_t& index, Vector2f& uv) const override
{
    bool intersect = false;
    for (uint32_t k = 0; k < numTriangles; ++k)
    {
        const Vector3f& v0 = vertices[vertexIndex[k * 3]];
        const Vector3f& v1 = vertices[vertexIndex[k * 3 + 1]];
        const Vector3f& v2 = vertices[vertexIndex[k * 3 + 2]];
        float t, u, v;
        if (rayTriangleIntersect(v0, v1, v2, orig, dir, t, u, v) && t < tnear)
        {
            ...
4.3.2 sphere intersection
The intersect method in Sphere.hpp sets up the quadratic exactly as derived in class:

bool intersect(const Vector3f& orig, const Vector3f& dir, float& tnear,
               uint32_t&, Vector2f&) const override
{
    // analytic solution
    Vector3f L = orig - center;
    float a = dotProduct(dir, dir);
    float b = 2 * dotProduct(dir, L);
    float c = dotProduct(L, L) - radius2;
    float t0, t1;
    if (!solveQuadratic(a, b, c, t0, t1))
        return false;
    if (t0 < 0) t0 = t1;
    if (t0 < 0) // both solutions are negative: the sphere is behind the ray
        return false;
    tnear = t0;
    return true;
}
Next we enter solveQuadratic to solve the quadratic equation. The solution here is written in a slightly different, numerically more stable form (it computes the larger-magnitude root first and derives the other from c/q, avoiding cancellation). A link for further reading: Solving quadratic equation of one variable
bool solveQuadratic(const float& a, const float& b, const float& c, float& x0, float& x1)
{
    float discr = b * b - 4 * a * c;
    if (discr < 0)
        return false;
    else if (discr == 0)
        x0 = x1 = -0.5 * b / a;
    else
    {
        float q = (b > 0) ? -0.5 * (b + sqrt(discr))
                          : -0.5 * (b - sqrt(discr));
        x0 = q / a;
        x1 = c / q;
    }
    if (x0 > x1)
        std::swap(x0, x1);
    return true;
}
4.4 shading
After finding the intersection, the result is stored in payload and we return to the castRay function in Renderer.cpp.
The next step is to fetch the surface properties of the intersected object, which determine the shading mode and the subsequent path of the ray:
...
if (auto payload = trace(orig, dir, scene.get_objects()); payload)
{
    Vector3f hitPoint = orig + dir * payload->tNear; // intersection point
    Vector3f N;  // normal
    Vector2f st; // st coordinates
    payload->hit_obj->getSurfaceProperties(hitPoint, dir, payload->index, payload->uv, N, st);
    ...
4.4.1 REFLECTION_AND_REFRACTION shading
For this material, shading considers two rays: the reflected ray and the refracted ray. After computing their directions, castRay is called on each to continue tracing from the current point. Finally, the fresnel function computes the proportions of reflection and refraction at this point, which are blended to obtain the final color. The derivation of fresnel is involved; follow the link for details: Fresnel formula
When computing the new ray origin, it is hitPoint ± N * scene.epsilon. Looking it up, the default epsilon = 0.00001, so the new ray origin differs only slightly from the original intersection. For example, when the angle between the new ray and the intersection normal is greater than 90°, the new origin hitPoint - N * scene.epsilon lies just inside the object. The offset therefore serves to place the new ray on the correct side of the surface (inside vs. outside), which also prevents the ray from immediately re-intersecting the surface it started from.
...
switch (payload->hit_obj->materialType)
{
    case REFLECTION_AND_REFRACTION:
    {
        Vector3f reflectionDirection = normalize(reflect(dir, N));
        Vector3f refractionDirection = normalize(refract(dir, N, payload->hit_obj->ior));
        Vector3f reflectionRayOrig = (dotProduct(reflectionDirection, N) < 0)
                                         ? hitPoint - N * scene.epsilon
                                         : hitPoint + N * scene.epsilon;
        Vector3f refractionRayOrig = (dotProduct(refractionDirection, N) < 0)
                                         ? hitPoint - N * scene.epsilon
                                         : hitPoint + N * scene.epsilon;
        Vector3f reflectionColor = castRay(reflectionRayOrig, reflectionDirection, scene, depth + 1);
        Vector3f refractionColor = castRay(refractionRayOrig, refractionDirection, scene, depth + 1);
        float kr = fresnel(dir, N, payload->hit_obj->ior);
        hitColor = reflectionColor * kr + refractionColor * (1 - kr);
        break;
    }
    ...
4.4.2 REFLECTION shading
Surfaces of this material consider only a single specular reflection ray; the shading process was already covered in the previous section:
...
case REFLECTION:
{
    float kr = fresnel(dir, N, payload->hit_obj->ior);
    Vector3f reflectionDirection = reflect(dir, N);
    Vector3f reflectionRayOrig = (dotProduct(reflectionDirection, N) < 0)
                                     ? hitPoint + N * scene.epsilon
                                     : hitPoint - N * scene.epsilon;
    hitColor = castRay(reflectionRayOrig, reflectionDirection, scene, depth + 1) * kr;
    break;
}
...
4.4.3 default shading (diffuse + specular: the Phong illumination model)
For the default material, simply shade with the Phong model from the earlier Assignment 3, considering only the specular and diffuse terms. Note that occlusion of the intersection must be considered here: if the light source is visible from the intersection, compute the lighting color with the Phong model; otherwise the point is in shadow and goes black. In addition, once a ray hits a rough object it is not reflected further, which is why the shadows we render are hard shadows:
...
default:
{
    Vector3f lightAmt = 0, specularColor = 0;
    Vector3f shadowPointOrig = (dotProduct(dir, N) < 0)
                                   ? hitPoint + N * scene.epsilon
                                   : hitPoint - N * scene.epsilon;
    for (auto& light : scene.get_lights()) // accumulate the contribution of each light
    {
        Vector3f lightDir = light->position - hitPoint;
        float lightDistance2 = dotProduct(lightDir, lightDir);
        lightDir = normalize(lightDir);
        float LdotN = std::max(0.f, dotProduct(lightDir, N));
        // The point is in shadow if the shadow ray hits a blocker whose
        // distance is less than the distance to the light source.
        auto shadow_res = trace(shadowPointOrig, lightDir, scene.get_objects());
        bool inShadow = shadow_res && (shadow_res->tNear * shadow_res->tNear < lightDistance2);
        lightAmt += inShadow ? 0 : light->intensity * LdotN;
        Vector3f reflectionDirection = reflect(-lightDir, N);
        specularColor += powf(std::max(0.f, -dotProduct(reflectionDirection, dir)),
                              payload->hit_obj->specularExponent) * light->intensity;
    }
    hitColor = lightAmt * payload->hit_obj->evalDiffuseColor(st) * payload->hit_obj->Kd
               + specularColor * payload->hit_obj->Ks;
    break;
}
...
4.5 saving the result
After shading is computed, we return to the Render function, save the color in the framebuffer, and update the progress bar. Once all colors are computed, the data is read back from the framebuffer, converted to 8-bit color and written to a .ppm image file. The process then ends:
...
            framebuffer[m++] = castRay(eye_pos, dir, scene, 0);
        }
        UpdateProgress(j / (float)scene.height); // update the progress bar
    }

    // save framebuffer to file
    FILE* fp = fopen("binary.ppm", "wb");
    (void)fprintf(fp, "P6\n%d %d\n255\n", scene.width, scene.height);
    for (auto i = 0; i < scene.height * scene.width; ++i)
    {
        static unsigned char color[3];
        color[0] = (char)(255 * clamp(0, 1, framebuffer[i].x));
        color[1] = (char)(255 * clamp(0, 1, framebuffer[i].y));
        color[2] = (char)(255 * clamp(0, 1, framebuffer[i].z));
        fwrite(color, 1, 3, fp);
    }
    fclose(fp);
}
Attachment
The source code is attached; interested readers can try out the result themselves:
[GAMES101] Assignment 5