SciVis 2015 Project 3: "rendr"
Assigned Tue Feb 17; Due Thu Feb 26 at 11:59pm
In this project you will build a direct volume renderer. This involves material covered in class from Jan 29 through Feb 5, the assigned readings on volume rendering, and FSV Chapter 5. You will combine 3D convolution with elements of graphics (a simple camera model, the over operator, and Blinn-Phong shading) to complete a tool that can make high-quality volume renderings of real-world 3D datasets.
Logistics
Your CNETID-scivis-2015 directory should now be populated with a p3rendr directory which contains all the files for this project. As long as you have the environment variable $SCIVIS set correctly, and you are logged into one of the CSIL Macs, you should be able to type make in p3rendr to build a rendr executable. You should also have a reference executable rrendr which you should use for comparison and debugging. You can work individually or in pairs for this project; see the information in the Logistics section of the Project 1 page for details; you should see a "Project 3: rendr (p3rendr)" assignment for which you can create a work group. If you pair up with the same partner as in a previous assignment, you will be re-using the previously created repository. Because of the amount of work in this assignment, working with a partner is encouraged.
What to do
As with other projects, the best description of what to do lies mainly in the given source files, and the reference executable rrendr. Run "./rrendr" to review the available commands, and run "./rrendr about" to see what will need to be implemented. Reading through and understanding the header file rnd.h is essential. The rndMath.h macro collection is much like mprMath.h from the previous project, but expanded somewhat. The handling of these lines
/* v.v.v.v.v.v.v.v.v.v.v.v.v.v.v.v.v.v.v begin student code */
/* ^'^'^'^'^'^'^'^'^'^'^'^'^'^'^'^'^'^'^ end student code */
is the same as in the previous assignment. You will be editing the header file (rnd.h) to augment the definition of three structs: rndContext, rndConvo (for storing the output of convolution computation), and rndRay (for storing the state associated with one ray marching through space). Thus, even more than in the previous project, your work will involve making multiple coordinated changes to multiple files. In some source files you may also add new functions, but these should be static functions (only for use within that source file). As before, you may not create new source files, or edit the Makefile.
At the start of the functions that you have to implement, there is a comment block that defines what the inputs and outputs to the function should be; this provides the technical definition of what a correct implementation should be. The reference implementation rrendr should conform to this. For this project there is no additional work for 33710 students to do relative to 23710 students.
Example rendr/rrendr commands to try
The commands here use "./rrendr"; you should make sure that you get exactly the same results by running "./rendr". There is new data in the scivis-2015 repository (referred to below as "$SCIVIS"), in the data/3d subdirectory. Read the 00-info.txt file there to see a description of the datasets useful for this project.
- Always make sure your code builds cleanly with "make"
- To learn about the geometry of any of the given volume datasets, you can use rrendr info. There is no code in here for you to write. Try "./rrendr info $SCIVIS/data/3d/parab.nrrd". As always in this class, axis ordering is fast to slow. The information printed about "spacings between samples" is also per-axis, and could inform how densely you sample rays in volume rendering (if the ray step size is much larger than a voxel, you will likely see artifacts).
- For rndConvoEval(), note that you will have to handle both (32-bit) floating point and (16-bit) signed short volumes.
- Make sure you can correctly reconstruct from the different samplings of f(x,y,z)=-z in $SCIVIS/data/3d/zramp?.nrrd. Your convolution evaluation should always return -z, and your gradient ("-g true") should always be (0,0,-1), up to numerical precision, regardless of the "-w x y z" position you specify (as long as the point is inside the volume). Try all the kernels (as listed by "rendr klist"); except for "-k box", all should correctly reconstruct f(x,y,z)=-z and its gradient.
- The opacity from either the univariate lookup table, or the Levoy opacity functions, may be outside the range [0,1] (depending on how those opacity functions have been set up). Be sure to clamp the opacity to the range [0,1] prior to doing opacity correction based on step size, and prior to blending.
- The little $SCIVIS/data/3d/cube.nrrd dataset (added Feb 19) offers one way to work through some of the steps of building the volume renderer. There is an associated $SCIVIS/data/3d/cube-cam.txt file that specifies camera and image information. One of the first things to figure out is how to compute the initial ray position in rndRayStart(). The results of this can be exposed by setting rndContext->planeSep (rendr go -s) to zero: that means the ray stops after a single sample. We can probe rndProbePosView (-p pv) to get the view-space position of the ray start, which generates a 3x300x300 array, saved here to pv.nrrd. We set a variable $CUBE for these demos to shorten the command lines.
Note that specifying a kernel (with -k box) is needed for rendr go, but the rndProbePosView result should not depend on the kernel. We can query the values at the corners of the output pv.nrrd array with:
CUBE="@$SCIVIS/data/3d/cube-cam.txt -i $SCIVIS/data/3d/cube.nrrd"
./rrendr go $CUBE -s 0 -k box -p pv -b sum -o pv.nrrd
unu slice -i pv.nrrd -a 1 2 -p 0 0 | unu save -f text
unu slice -i pv.nrrd -a 1 2 -p M 0 | unu save -f text
unu slice -i pv.nrrd -a 1 2 -p 0 M | unu save -f text
unu slice -i pv.nrrd -a 1 2 -p M M | unu save -f text
- Converting from view-space to world-space is done via rndCamera->VtoW, which is set by rndCameraUpdate() and tested with rendr cam:
./rrendr cam @$SCIVIS/data/3d/cube-cam.txt
Note the "11.7548 = ncv" output from the above; this is the negative of the third view-space coordinate of every ray's starting position:
unu slice -i pv.nrrd -a 0 -p 2 | unu minmax -
- Make sure that ray positions can be correctly converted to world-space:
./rrendr go $CUBE -s 0 -k box -p pw -b sum -o pw.nrrd
and then probe pw.nrrd with unu slice -i pw.nrrd commands like those above.
- Converting from world-space to index-space is needed for the convolution; you should compute the inverse of rndVolume->ItoW (the volume is passed to rndContextNew()) via rnd_4m_affine_inv, which you can test with rendr 4mi. You can also probe the index-space position of the ray starts:
./rrendr go $CUBE -s 0 -k box -p pi -b sum -o pi.nrrd
and then probe pi.nrrd with unu slice -i pi.nrrd commands like those above.
- To see what the actual volume values are at the ray start (on the near clipping plane), we probe rndProbeValue (rendr go -p v). rndRayStart() calls rndRaySample(), and your code should be calling rndConvoEval() for this probe. rndRayStart() next calls rndRayBlendInit(), which needs to keep track of whether the last sample actually generated values that can be blended (see the comment above rndRayBlend about this). rndRayFinish() should use rndContext->outsideValue for the ray result buffer rndRay->result[] when there are no values to blend; rendr go -ov sets rndContext->outsideValue. Here we set this to zero, and probe with a few kernels (you should test them all):
for K in box tent bspln3; do
  ./rrendr go $CUBE -s 0 -k $K -p v -b sum -o v-$K.nrrd
done
and then join these side-by-side into a picture; the triangles are smaller for kernels with higher support:
unu join -i v-{box,tent,bspln3}.nrrd -a 1 | unu quantize -b 8 -o v-test.png
- Knowing whether the convolution location was "inside" the volume is important for rndRaySample and rndRayBlend. As explained in rnd.h, the mean, max, and sum of some probes (like rndProbePosView) is defined over all ray sample positions, but for other probes (like rndProbeValue) it is defined only over those ray sample positions inside the volume (i.e. inside the region where we have data values where needed for the kernel support). rndProbeInside is a probe that should just be a copy (set by rndRaySample) of rndConvo->inside. We can probe this on the near clipping plane, for different kernels, and make another side-by-side picture:
for K in box tent bspln3; do
  ./rrendr go $CUBE -s 0 -k $K -p in -b sum -o in-$K.nrrd
done
unu join -i in-{box,tent,bspln3}.nrrd -a 1 | unu quantize -b 8 -o in-test.png
- A different plane, further into the volume, is a slightly more interesting place to test getting first derivatives by convolution (though isolated tests with rendr ceval also do this), and whether rndRaySample is correctly copying this information from the rndConvo struct.
for P in v gv gm; do
  ./rrendr go $CUBE -nc -1 -s 0 -k tent -p $P -b sum -o $P-1.nrrd
  unu quantize -b 8 -i $P-1.nrrd -o $P-1.png
done
which makes v-1.png, gv-1.png, and gm-1.png. Try this with different kernels (except -k box, which only has a zero derivative).
- Besides the ray starting point, the increment vector between ray samples (call it the "ray step") needs to be computed. The length of the ray step depends on rndContext->planeSep (rendr go -s), but with perspective projection it is not the same for every ray; it is the same with orthographic projection. There isn't a direct way to query the ray step (there is no rndProbe for it), but we can recover it (in view-space) for planeSep=1 with some unu hacking:
./rrendr go $CUBE -s 0 -nc -2 -fc 3 -k box -p pv -b sum -o pv0.nrrd
./rrendr go $CUBE -s 1 -nc -2 -fc -1.5 -k box -p pv -b sum -o pv1.nrrd
./rrendr go $CUBE -s 1 -nc -2 -fc -0.5 -k box -p pv -b sum -o pv2.nrrd
Actually, pv0.nrrd and pv1.nrrd should be the same: pv0.nrrd is the result of setting planeSep to zero (and rndCamera->fc to 3), whereas pv1.nrrd is the result of setting rndCamera->fc to -1.5, which is so close to rndCamera->nc=-2 that the ray only gets one sample before it goes past the far clipping plane (this will work if the ray step is correct, and if your handling of this condition is correct in rndRayStep()):
unu diff pv0.nrrd pv1.nrrd
Then to recover the ray step vectors, knowing that pv2.nrrd holds the sum of two view-space sample positions and pv1.nrrd is the first, pv2.nrrd minus two times pv1.nrrd is the difference:
unu 2op x pv1.nrrd 2 | unu 2op - pv2.nrrd - -o step.nrrd
unu slice -i step.nrrd -a 1 2 -p 0 0 | unu save -f text
unu slice -i step.nrrd -a 1 2 -p M 0 | unu save -f text
unu slice -i step.nrrd -a 1 2 -p 0 M | unu save -f text
unu slice -i step.nrrd -a 1 2 -p M M | unu save -f text
This shows that in view-space, the third coordinate of the step is always -1 (because planeSep is 1, and the n basis vector of view space points towards the viewer). The first and second coordinates are slightly positive or negative, according to how the rays diverge due to perspective. With orthographic projection all ray steps are the same:
./rrendr go $CUBE -s 1 -nc -2 -fc -1.5 -k box -p pv -b sum -ortho -o pv1o.nrrd
./rrendr go $CUBE -s 1 -nc -2 -fc -0.5 -k box -p pv -b sum -ortho -o pv2o.nrrd
unu 2op x pv1o.nrrd 2 | unu 2op - pv2o.nrrd - -o stepo.nrrd
unu slice -i stepo.nrrd -a 1 2 -p 0 0 | unu save -f text
unu slice -i stepo.nrrd -a 1 2 -p M 0 | unu save -f text
unu slice -i stepo.nrrd -a 1 2 -p 0 M | unu save -f text
unu slice -i stepo.nrrd -a 1 2 -p M M | unu save -f text
You should use printf to confirm that your rays are stepping in the right direction for each pixel; the steps above provide a way for you to find the correct answer.
- If ray stepping is working, you can test the different blends, taking multiple steps along the ray:
for B in max sum mean; do
  ./rrendr go $CUBE -s 0.1 -k box -p v -b $B -o p-$B.nrrd
  unu quantize -b 8 -i p-$B.nrrd -o p-$B.png
done
which makes p-max.png, p-sum.png, p-mean.png.
- (more incremental steps towards volume rendering posted soon)
- To make a shaded fuzzy isosurface at around value 0.85 (look at cube-atxf1.txt), with red/green/blue light coming from X/Y/Z in world-space:
./rrendr lgen -i x $SCIVIS/cmap/cube-atxf1.txt -mm 0 1 -o lut.nrrd
./rrendr go $CUBE -us 0.03 -s 0.03 -k ctmr -p rgbalit -b over -lut lut.nrrd -lit $SCIVIS/lit/rgb.txt -o cube-rgb.nrrd
overrgb -i cube-rgb.nrrd -b 0 0 0 -o cube-rgb.png
- To show off depth-cueing, using a single white light (reusing lut.nrrd from above):
./rrendr go $CUBE -us 0.03 -s 0.03 -k ctmr -p rgbalit -b over -lut lut.nrrd -lit $SCIVIS/lit/1.txt -dcn 1.2 1.1 0.9 -dcf 0.2 0.3 0.4 -o cube-dc.nrrd
overrgb -i cube-dc.nrrd -b 0 0 0 -o cube-dc.png
- The script cube-rot.sh made the frames for this movie.
- (Feb 20) The Levoy transfer functions simplify how fuzzy isosurfaces are rendered because they need fewer degrees of freedom to produce an image similar to what a univariate opacity function (with more degrees of freedom) produces. Here is a comparison:
./rendr lgen -i $SCIVIS/cmap/cube-{ctxf,atxf3}.txt -n 256 -mm 0 1 -o luta.nrrd
./drawlut.sh luta.nrrd luta.png
./rendr lgen -i $SCIVIS/cmap/cube-ctxf.txt x -n 256 -mm 0 1 -o lut1.nrrd
./drawlut.sh lut1.nrrd lut1.png
PARMS="-us 0.03 -s 0.01 -k ctmr -p rgbalit -b over -lit $SCIVIS/lit/1.txt -dcn 1.1 1.1 1.1 -dcf 0.4 0.4 0.4"
./rendr go $CUBE $PARMS -lut luta.nrrd -o cube-lut.nrrd
overrgb -i cube-lut.nrrd -o cube-lut.png
./rendr go $CUBE $PARMS -lut lut1.nrrd -lev $SCIVIS/cmap/cube-levoy3.txt -o cube-lev.nrrd
overrgb -i cube-lev.nrrd -o cube-lev.png
To note: the drawlut.sh script is being used here to make a picture of the RGBA lookup table (LUT), in which a gray/white texture is overlaid with the colors and opacities assigned over the domain of the LUT. This offers a way to visually inspect the contents of the LUT. The subsequent rendering commands are set up to emphasize that the only difference between the two renderings is in the -lut and -lev arguments. rnd.h specifies that the Levoy opacity function multiplies the opacity assigned by the LUT; the second rendering (using the Levoy opacity function) uses a LUT in which the opacity is always 1.0.
Grading
The grade will be based on correctness (75%), debugging process (15%), and style (10%). If your code does not compile cleanly, with the Makefile provided, you will get a ZERO for correctness. Correctness will be evaluated with examples such as those above, and perhaps some additional ones to test corner cases. Rendering with and without multiple threads should generate identical results.
To get full credit for debugging process, you will prepare a write-up, submitted as p3rendr.pdf in your repository, which documents how you proved to yourself that your code is correct, ignoring the reference implementation. Supposing you had no reference implementation (as in the real world), what kind of datasets would you create, and how would you render them, to see if your code was correct? Below are four major areas of concern for code correctness, which include the kinds of things you should be worrying about when writing and testing your code. You should choose three things from the list, and in your write-up describe what experiment(s) you performed to test the correctness of your code. An experiment is a procedure that produces a targeted comparison between your mental model of how things work and how things actually do work. To document an experiment, you need to describe what the input data is (what new dataset(s) did you create by adding new capability to rendr svs?), how you set up the rendering (how rendr go is being run), what you expected to see in the output, and what you actually got in the output.
The correctness of convolution is not part of the debugging process credit; you can use the zramp datasets as noted above to test your convolution.
- Camera and ray geometry: the look-at point appears in the middle of the rendered image, the given up vector is upwards in the rendered image, things appear with the correct aspect ratio, the near and far clipping planes work to clip out parts of the world, the field-of-view correctly controls how much is visible, and perspective versus orthographic projection works correctly.
- Transfer functions: Given the color and opacity functions specified as a sequence of control points, the right colors and opacities are computed for the lookup-table, and the lookup-table is being correctly indexed during rendering. Also, the opacities computed by rndTxfLevoy correctly achieve a fuzzy isosurface effect, independent of the gradient magnitude.
- Probing and Blending: All the blendings work correctly for all the probes for which the blending is meaningful (rndBlendOver only works for rndProbeRgba and rndProbeRgbaLit). Is rndBlendOver correctly handling the opacity correction as a function of ray step size (HW4 #2)?
- Lighting with Blinn-Phong and depth-cueing: The Blinn-Phong reflection model is working correctly (the ambient, diffuse, and specular terms are each correct), specular highlights appear where they ought to (you can specify a function and a camera so that you can predict where specular highlights should appear), and depth cueing works.
For the style points, keep in mind:
- You should try to avoid needlessly recomputing things with every call to rndConvoEval. This is why you're allowed to add variables and state to the rndContext and the rndConvo.
- Your handling of the different rndProbes should be as orthogonal as possible to your handling of the different rndBlends. Probes should be handled by rndRaySample, and blendings should be handled by rndRayBlend.
- If your implementation of rndRender (or rather the rndRay functions that it depends on) runs markedly slower than the reference implementation, you may lose some points. You can see how long each pixel takes to render by adding the "-t" option to rendr go, and then slicing out and inspecting the last component of the output out.nrrd via:
unu slice -i out.nrrd -a 0 -p M | unu quantize -b 8 -o time.png
- (More style considerations to be posted)