This is a Java implementation of the raytracer from the book Ray Tracing in One Weekend by Peter Shirley. I added texture mapping for the spheres and light emitters.
Why in Java and not C++? Why go with something slower and more memory-hungry? When I followed the first book I was trying to get better at Java and didn't want to copy-paste (I knew C++ better back then too, though by now I'm probably bad at C++). It turns out the lack of operator overloading and pointers also gets really gross.
The initial raytracer with my modifications is found in the `weekend` folder. My continuation of the raytracer from the next book, Ray Tracing: The Next Week, is found in the `week` folder. I've added triangle objects and STL file loading to render more complicated objects. The bounding volume hierarchies are really nice and are necessary for high-polygon-count STLs. I still need to find ways to make this raytracer more efficient, and am reading up on other raytracers and lighting algorithms.
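For context, binary STL is a simple format: an 80-byte header, a little-endian 32-bit triangle count, then 50 bytes per triangle (a stored normal, three vertices, and a 2-byte attribute field). A minimal reader along those lines looks like this (a sketch of the idea, not the exact code in `week`):

```java
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class StlSketch {
    // Returns count x 9 floats: three xyz vertices per triangle.
    public static float[][] readBinaryStl(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            in.readFully(new byte[80]); // the header carries no geometry
            byte[] raw = new byte[4];
            in.readFully(raw);
            int count = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).getInt();

            float[][] tris = new float[count][9];
            byte[] record = new byte[50]; // 12 floats + 2 attribute bytes
            for (int t = 0; t < count; t++) {
                in.readFully(record);
                ByteBuffer buf = ByteBuffer.wrap(record).order(ByteOrder.LITTLE_ENDIAN);
                buf.position(12); // skip the stored normal; it can be recomputed
                for (int i = 0; i < 9; i++) {
                    tris[t][i] = buf.getFloat();
                }
            }
            return tris;
        }
    }
}
```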
I've just gotten the third book in the series. Time to read that along with the other articles I've been trawling.
Right now I am trying to implement the Cook-Torrance BSDF. I'm not sure how correct mine is, but it doesn't seem too wrong. However, there are little speckles of pure white on some edges that increase with roughness, and I'm trying to track down their source. Appearances are very deceiving when working with renderers; just looking "close enough" can be very wrong.
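For reference, here is a rough sketch of the textbook GGX formulation of the Cook-Torrance specular term; my implementation may differ, and the helper names are just illustrative. One common culprit for white fireflies is the 4(n·l)(n·v) denominator (or a tiny importance-sampling PDF) approaching zero, which this sketch bluntly clamps:

```java
// Textbook GGX Cook-Torrance specular term in scalar form.
// All dot products are assumed clamped to [0, 1], vectors normalized.
static double dGGX(double nDotH, double alpha) {
    double a2 = alpha * alpha;
    double d = nDotH * nDotH * (a2 - 1.0) + 1.0;
    return a2 / (Math.PI * d * d); // GGX normal distribution function
}

static double g1SchlickGGX(double nDotX, double k) {
    return nDotX / (nDotX * (1.0 - k) + k); // Schlick-GGX masking term
}

static double gSmith(double nDotV, double nDotL, double alpha) {
    double k = alpha / 2.0; // one common remapping choice
    return g1SchlickGGX(nDotV, k) * g1SchlickGGX(nDotL, k);
}

static double fSchlick(double vDotH, double f0) {
    return f0 + (1.0 - f0) * Math.pow(1.0 - vDotH, 5.0); // Fresnel approximation
}

static double cookTorrance(double nDotL, double nDotV, double nDotH,
                           double vDotH, double alpha, double f0) {
    // Clamping the denominator is a blunt but common firefly guard.
    double denom = Math.max(4.0 * nDotL * nDotV, 1e-4);
    return dGGX(nDotH, alpha) * gSmith(nDotV, nDotL, alpha)
            * fSchlick(vDotH, f0) / denom;
}
```

The clamp hides the symptom rather than fixing the underlying sampling, so it's more of a debugging aid than a cure.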
The cube, magnolia, sphere, and teapot models in the `objects` folder are from this site. The Pokeball and Turner's Cube are from GrabCAD. The other teapot is from here. The earth and moon textures in `textures` were the first ones that showed up when I Googled.
The file `Tracer.java` has the `main` for this raytracer. You can set the x and y resolution of the output with `nx` and `ny`, and the number of samples per pixel with `ns`. Set `world` to a `HittableList` of the objects you want to render and `cam` to the camera you want to render the scene from. The `MAX_DEPTH` value sets the max recursion depth for the `color` method. After about 5-10 bounces most images don't change very much, so don't set this too high (especially with many mirror surfaces). When working with light sources and a black background (the background emits no light), the number of samples per pixel needed to get a fairly clean image is usually around 10,000. However, this is very computationally intense and can take quite a bit of time to fully render.
During the rendering process, the intermediate image is displayed in a `DrawingPanel` (which was borrowed from APCS) one row at a time. You can save the output image from there, or just use `ImageIO.write` to save the `BufferedImage` from within the program.
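Saving with `javax.imageio` is a one-liner, for example:

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

class SaveUtil {
    // Writes the rendered BufferedImage to disk as a PNG.
    static void savePng(BufferedImage image, String path) throws IOException {
        ImageIO.write(image, "png", new File(path));
    }
}
```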
The file `AccelTester.java` can be used to test whether two scenes have the same intersection properties. This is useful for checking that an acceleration structure like the BVH actually works. It generates random rays inside the bounding box of the scene and verifies that both scenes report the same intersections for each ray.
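Roughly, the check looks like this; `Hittable`, `HitRecord`, `Ray`, `AABB`, and the random helpers are assumptions about this repo's API (e.g. that `hit` returns a record or `null`):

```java
import java.util.Random;

class AccelCheckSketch {
    // Fires random rays at a flat list and at the BVH built from the
    // same objects, and requires identical hit results from both.
    static boolean sameIntersections(Hittable flat, Hittable accel,
                                     AABB bounds, int trials) {
        Random rng = new Random(42); // fixed seed keeps failures reproducible
        for (int i = 0; i < trials; i++) {
            // randomPointIn / randomUnitVector are assumed helpers
            Ray r = new Ray(randomPointIn(bounds, rng), randomUnitVector(rng));
            HitRecord a = flat.hit(r, 1e-4, Double.MAX_VALUE);
            HitRecord b = accel.hit(r, 1e-4, Double.MAX_VALUE);
            boolean bothMiss = a == null && b == null;
            boolean sameHit = a != null && b != null
                    && Math.abs(a.t - b.t) < 1e-9;
            if (!bothMiss && !sameHit) return false; // structures disagree
        }
        return true;
    }
}
```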
`TracerV.java` was used to render a video using the first version of the raytracer. The camera was moved as a function of time, and each rendered frame was then stitched into a final video using ImageMagick.
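Roughly, a loop like this produces the numbered frames (the `render` helper and `Camera` constructor are stand-ins, and the orbit is just an example path):

```java
for (int frame = 0; frame < 240; frame++) {
    double t = frame / 240.0;             // normalized time in [0, 1)
    double angle = 2.0 * Math.PI * t;     // one full orbit of the scene
    Vec3 lookFrom = new Vec3(10 * Math.cos(angle), 2.0, 10 * Math.sin(angle));
    Camera cam = new Camera(lookFrom /*, lookAt, vup, ... */);
    BufferedImage img = render(world, cam, nx, ny, ns); // assumed helper
    ImageIO.write(img, "png",
            new File(String.format("frames/frame_%04d.png", frame)));
}
// Zero-padded names keep the frames in order for ImageMagick, e.g.:
//   convert -delay 4 frames/frame_*.png video.gif
```

There is an example below.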