SciVis 2017 Project 3: "rendr"
Assigned Mon Jan 30; Due Thu Feb 14 at 11:59pm
In this project you will build a direct volume renderer. You will combine 3D convolution with elements of graphics (a simple camera model, the over operator, and Blinn-Phong shading) to complete a tool that can make high-quality renderings of volume (3D image) datasets.
Logistics
Same as in previous projects. The new directory in your CNETID-scivis-2017 repository, with the source files for this project, is called p3rendr. The library is called rnd, the executable is called rendr, and the reference executable is rrendr. Because of the amount of work in this assignment, working with a partner is encouraged.

Unlike with previous projects, this same code base (the p3rendr directory) will be largely re-used for Project 4, to be assigned around Feb 9. Project 4 will involve some new vector visualization code (a p4vectr directory that hasn't been distributed yet), and it will involve accelerating and optimizing this p3rendr code. That is, Project 4 will involve two different directories (breaking the strict mapping between project number and directory name). If you partner with someone now for Project 3, you will also be partnering with them for the portion of Project 4 involving the p3rendr code. That partnership (carrying from Project 3 over to Project 4) does not count against the maximum of 3 times that you can partner with the same person. You will be able to partner with someone different for working on the p4vectr code (and that will count against the 3 times you can partner with the same person).
For Project 3, your renderer will be single-threaded. For Project 4, you will have to make it multi-threaded, and measure the speed-up from being multi-threaded. To make your life easier for Project 4, you should be thinking now about how to organize code so that it will be thread-safe, when the time comes to execute in a multi-threaded way.
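One common organization for this (a hypothetical sketch; sharedCtx, rayState, and rayInit are made-up names, not the rnd API): everything fixed during rendering lives in a context that threads only read, and all mutable per-ray state lives in a struct that each thread owns.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of thread-safe organization (made-up names).
   Shared, read-only during rendering: safe for all threads to consult. */
typedef struct {
    const float *volume;  /* input data: never written while rendering */
    int size[3];
    double ItoW[16];      /* index-to-world transform */
} sharedCtx;

/* Mutable per-ray state: each thread gets its own instance, so no
   locking is needed as long as sharedCtx is never written. */
typedef struct {
    double pos[3];        /* current sample position */
    double result[4];     /* blended RGBA so far */
    int numSamples;
} rayState;

static void rayInit(const sharedCtx *ctx, rayState *ray) {
    (void)ctx;            /* a real version would read (only read) ctx */
    memset(ray, 0, sizeof(*ray));
}
```

If all writes during rendering go through a per-thread rayState, making the renderer multi-threaded later is just a matter of giving each thread its own instance.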
What to do
As with other projects, the best description of what to do lies mainly in the given source files, and the reference executable rrendr. Run "./rrendr" to review the commands available, and run "./rrendr about" to see what will need to be implemented. Reading through and understanding the header file rnd.h is essential. The rndMath.h macro collection is much like mprMath.h from the previous project, but expanded somewhat.

The handling of these lines

/* v.v.v.v.v.v.v.v.v.v.v.v.v.v.v.v.v.v.v begin student code */
/* ^'^'^'^'^'^'^'^'^'^'^'^'^'^'^'^'^'^'^ end student code */

is the same as in the previous assignment. You will be editing the header file (rnd.h) to augment the definition of three structs: rndCtx, rndConvo (for storing the output of convolution computation), and rndRay (for storing the state associated with one ray marching through space). Thus, even more than in the previous project, your work will involve making multiple coordinated changes to multiple files. In some source files you may also add new functions, but these should be static functions (only for use within that source file). As before, you may not create new source files, or edit the Makefile.
At the start of the functions that you have to implement, there is a comment block that defines what the inputs and outputs of the function should be; this provides the technical definition of what a correct implementation should be. The reference implementation rrendr should conform to this.
The inter-relationship of functions in ray.c is one of the more complicated things that you have to understand for this project; this map of how these functions call each other may be helpful.
New in p3rendr is a mechanism that may help for enabling and disabling diagnostic print statements. Note that rnd.h includes this line:

extern int rndVerbose; /* global flag for debugging messages */

The rndVerbose global variable is defined in misc.c, and set to 0. You shouldn't change that, but you can set the value of rndVerbose via an environment variable, with each run of rendr. Note this line in rendr.c:

rndVerbose = getenvInt("RND_VERBOSE");

which uses a getenvInt utility function, based on getenv, defined earlier in rendr.c. So if you, on the command-line:

export RND_VERBOSE=5
./rendr

then when rendr runs it will immediately set global rndVerbose to 5. If you want, you can condition all your debugging messages with something like:

if (rndVerbose) { fprintf(stderr, "%s: hello\n", me); ... }

or

if (rndVerbose > 3) { fprintf(stderr, "%s: hello ok we're really going to print a lot now...\n", me); ... }

Then you can set different thresholds for what kind of debugging messages get printed when (how exactly is up to you), but all the debugging messages will go away once you either (on the shell) "export RND_VERBOSE=0" or "unset RND_VERBOSE". You may also want to add additional variables to the rndConvo or rndCtx structs to control debugging messages in a more fine-grained way (e.g., only for certain pixels of the rendered image). If you ensure that all such debugging is conditioned on rndVerbose being non-zero, it will be easy to turn it all off.

For Project 3 there is no additional work for 33710 students to do relative to 23710 students.
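The getenvInt helper is already defined for you in rendr.c; just to illustrate the idea, a helper like it might look something like this (getenvIntSketch is a made-up name, not the real function):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of a getenvInt-style helper (the real one is in rendr.c):
   returns the integer value of environment variable envName, or 0 if
   the variable is unset or does not parse as a number. */
static int getenvIntSketch(const char *envName) {
    const char *str = getenv(envName);
    if (!str) {
        return 0;        /* unset variable: debugging stays off */
    }
    return atoi(str);    /* "5" -> 5; non-numeric text -> 0 */
}
```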
What to do and rendr commands to try
The commands here use "./rrendr"; you should make sure that you get exactly the same results by running "./rendr". There is new data in the scivis-2017 repository (referred to below as "$SCIVIS"), in the data/3d subdirectory. Read the 00-info.txt file there for a description of the datasets useful for this project.
- Conversions between view, world, and the data's index space will require inverting 4x4 matrices to transform homogeneous coordinates. The bottom row will always be "0 0 0 1". Use rendr 4mi to test your implementation of rnd_4m_affine_inv, which should be based on your understanding of FSV 1.15, and your reading of rndMath.h for macros that might be useful. To test this:
echo 0 | unu pad -min 0 0 -max 3 2 | unu 1op nrand -o mat.txt
echo 0 0 0 1 >> mat.txt
./rrendr 4mi mat.txt

(Every time you run unu 1op nrand it will generate different random numbers; see the usage information about -s from unu 1op to change this if you want.)

- Determining view-space coordinates also depends on a camera specification, and the calculations done with that. This is performed by rndCameraUpdate, tested by rendr cam. So examples to try:
The @ prefix effectively inserts the contents of the file onto the command-line, offering a way of storing a collection of command-line options in one place.

./rrendr cam @$SCIVIS/data/3d/rhand-cam.txt
./rrendr cam @$SCIVIS/data/3d/teddy-cam.txt

- Like the last project, this project uses colormaps defined as a sequence of control points. However, rather than having the colormap evaluation (and possible color-space conversion to RGB) happening in the inner loop of the volume renderer, we want to use something simpler: a look-up table ("lut"). A look-up table approximates a function by storing the evaluation of that function, uniformly sampled over some interval. Using a look-up table involves finding the closest index for a given value, and returning the values stored at that index (with no further lerping or other interpolation). For this project you should use a cell-centered sampling of the interval, though in practice (with LUTs having many entries per control point) the difference will be very slight.
rndTxfLutGenerate takes one colormap (to assign color as a function of reconstructed data value) and another similar map to assign (scalar) opacity, and stores their evaluation in a 4-by-N 2D array of RGBA values. You can debug your implementation of rndTxfLutGenerate with rendr lgen; you also use rendr lgen to make the LUTs that you use for volume rendering. You can use the given script drawlut.sh to make a picture of a LUT to visualize its contents. The simplest tests involve very simple ramps (bash's own echo -e is used to express multiple lines in a single command):
echo -e "#range:=alpha\n0 0\n1 1" > atxf.txt
echo -e "#range:=rgb\n0 0 1 0\n1 1 0 1" > ctxf.txt
./rrendr lgen -i ctxf.txt atxf.txt -mm 0 1 -rs false false -o lut-0.nrrd
./rrendr lgen -i ctxf.txt atxf.txt -mm -2 3 -rs true true -o lut-1.nrrd
./rrendr lgen -i ctxf.txt atxf.txt -mm -2 3 -rs false true -o lut-2.nrrd
./rrendr lgen -i ctxf.txt atxf.txt -mm -2 3 -rs true false -o lut-3.nrrd
./rrendr lgen -i ctxf.txt atxf.txt -mm -2 3 -rs false false -o lut-4.nrrd
rm -f lut-?.png
for I in 0 1 2 3 4; do
  ./drawlut.sh lut-$I.nrrd lut-$I.png
done
open lut-?.png

In all the images produced by drawlut.sh, the domain of the map increases from left to right. There is a background texture of horizontal bands: the extent to which you can see this texture indicates transparency (one minus opacity). When that texture is covered, it is covered by the color at that point in the LUT. One subtlety of rndTxfLutGenerate is handling whether to rescale the domain over which the LUT is sampled to the interval over which the control points are defined. Whether to do this for color and opacity is indicated by the first and second booleans given to rendr lgen -rs, respectively. The color and opacity maps defined here have just two control points, at 0 and at 1.

Comparing the lut-?.png images, we see that lut-0.png and lut-1.png are the same: even though the second was defined over a different domain ([-2,3] instead of [0,1]), rescaling for both color and opacity was enabled, so the LUT contents are the same (though you can see from unu head lut-{0,1}.nrrd that the meta-data is different; this difference will be reflected in how rndCtx->txf.vmin and rndCtx->txf.vmax are set by rndCtxSetTxf, which is called for you in rendr_go.c). Comparing lut-2.png to lut-1.png, the opacities are the same, but the colors are different. Comparing lut-3.png to lut-1.png, it's the opposite: the colors are the same, but the opacities are different.
The variation of colors and opacities stretching across lut-1.png is compressed to a middle portion of lut-4.png.
- Here are analogous rendr lgen examples to test handling of HSV colormaps (this will over-write all the files created by the previous commands).
echo -e "#range:=alpha\n0 0\n1 1" > atxf.txt
echo -e "#range:=hsv\n0 0 1 1\n0.33 0.33 1 1\n0.66 0.66 1 1\n1 1 1 1" > ctxf.txt
./rrendr lgen -i ctxf.txt atxf.txt -mm 0 1 -rs false false -o lut-0.nrrd
./rrendr lgen -i ctxf.txt atxf.txt -mm -2 3 -rs true true -o lut-1.nrrd
./rrendr lgen -i ctxf.txt atxf.txt -mm -2 3 -rs false true -o lut-2.nrrd
./rrendr lgen -i ctxf.txt atxf.txt -mm -2 3 -rs true false -o lut-3.nrrd
./rrendr lgen -i ctxf.txt atxf.txt -mm -2 3 -rs false false -o lut-4.nrrd
rm -f lut-?.png
for I in 0 1 2 3 4; do
  ./drawlut.sh lut-$I.nrrd lut-$I.png
done
open lut-?.png

- Besides univariate opacity maps based on reconstructed scalar data values, this project also involves a simple kind of bivariate opacity map of value and gradient magnitude, first described in Levoy's 1988 paper, which produces an isosurface of roughly constant apparent thickness. The three parameters determining this are: the isovalue, the apparent thickness, and the maximum opacity. These values are also stored in text files, which can be passed to rendr go -lev. Your code handles these in rndTxfLevoy, which you can test with rendr levoy.
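One way such an opacity function might be sketched (levoyAlpha and its parameter names are assumptions, based on a reading of Levoy 1988; the comments in rnd.h are the authoritative definition for rndTxfLevoy):

```c
#include <assert.h>
#include <math.h>

/* Sketch of a Levoy-style bivariate opacity (assumed names): opacity is
   maxAlpha at val == isoval, falling linearly to zero for values farther
   than thick*gradmag away.  Scaling the falloff by gradient magnitude is
   what makes the fuzzy isosurface roughly "thick" units wide in space,
   regardless of how steep the data is. */
static double levoyAlpha(double val, double gradmag,
                         double isoval, double thick, double maxAlpha) {
    if (gradmag > 0) {
        double d = fabs(val - isoval) / (thick * gradmag);
        return (d < 1) ? maxAlpha * (1 - d) : 0;
    }
    /* flat region: opaque only exactly at the isovalue */
    return (val == isoval) ? maxAlpha : 0;
}
```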
echo -e "0.3 1 0.5\n0.7 0.3 1.3" | ./rrendr levoy -i - -mm 0 1 -gm 1 -o lev.nrrd
unu quantize -b 8 -i lev.nrrd -min 0 -max 1 -o lev.png
open lev.png

- To learn about the geometry of any of the given volume datasets, you can use rrendr info. There is no code in here you have to write, but you can expand on what this does if you want. Try "./rrendr info $SCIVIS/data/3d/parab.nrrd". As always in this class, axis ordering is fast to slow. The information printed about "spacings between samples" is also per-axis. This spacing can inform how densely you sample rays in volume rendering (if the ray step size is much larger than a voxel, you will likely see artifacts), or how thick you make Levoy fuzzy isosurfaces (more on these below).
- To make some different samplings of the same underlying function f(x,y,z)=-z:
./rrendr sdg -w 0 -l 5 5 5 -sz 25 25 25 -o zramp0.nrrd
./rrendr sdg -w 0 -l 5 5 5 -sz 25 25 25 -o zramp0-us.nrrd -t short -vs 4 10000
./rrendr sdg -w 0 -l 5 5 5 -sz 20 25 30 -o zramp1.nrrd
./rrendr sdg -w 0 -l 5 5 5 -sz 20 25 30 -r 0.5 0.6 -0.3 -o zramp2.nrrd
./rrendr sdg -w 0 -l 5 5 5 -sz 20 25 30 -r 0.5 0.6 -0.3 -sh 0.1 -0.2 0.3 -c 0.1 -0.1 0.3 -o zramp3.nrrd
./rrendr sdg -w 0 -l 5 5 5 -sz 20 25 30 -r 0.5 0.6 -0.3 -sh 0.1 -0.2 0.3 -c 0.1 -0.1 0.3 -o zramp3-us.nrrd -t short -vs 4 10000

The zramp?.nrrd datasets are all of type 32-bit float. The zramp?-us.nrrd datasets are 16-bit (signed) short, and the values stored are the result of scaling (according to the -vs option to rendr sdg) and casting. Your rndConvoEval() will have to handle both types of volume data. You can see where the function being sampled is defined in rendr_sdg.c.

- Try 3D convolution on the float datasets:
for K in tent bspln2 bspln3 c4hexic; do
  for X in 0 1 2 3; do
    echo "========== zramp${X}.nrrd with $K"
    ./rrendr ceval -i zramp${X}.nrrd -k $K -w 0.2 0.4 0.8 -g true
  done
done

Aside from floating-point round-off error, the convolution evaluation should always return -z, and your gradient ("-g true") should always be (0,0,-1), regardless of the "-w x y z" position you specify (as long as the point is inside the volume). Try other kernels (as listed by "rendr klist", except for "-k box"); all should correctly reconstruct f(x,y,z)=-z and its gradient. We can also try with the 16-bit datasets:

for K in tent bspln2 bspln3 c4hexic; do
  for X in 0 3; do
    echo "========== zramp${X}-us.nrrd with $K"
    ./rrendr ceval -i zramp${X}-us.nrrd -k $K -w 0.2 0.4 0.8 -g true
  done
done

Here, because of the -vs 4 10000 passed to rendr sdg, the values and the gradients should be 10000/4=2500 times bigger than the values and gradients from the float data.

- We can make a cube-like dataset with rendr sdg -w 2:
./rrendr sdg -w 2 -l 4 4 4 -o cube.nrrd

This offers one way to work through some of the steps of building the volume renderer. We can make a file to store command-line options associated with setting up the camera:

echo "-fr 6 12 5 -at 0 0 0" > cube-cam.txt
echo "-up 0 0 1" >> cube-cam.txt
echo "-nc -2.56304 -fc 2.56304 -fov 16" >> cube-cam.txt
echo "-sz 300 300" >> cube-cam.txt

One of the first things to figure out is how to compute the initial ray position in rndRayStart(). The results of this can be exposed by setting rndCtx->planeSep (via rendr go -s) to zero: that means the ray will stop after a single sample. We can probe rndProbePosView (-p pv) to get the view-space position of the ray start, which generates a 3x300x300 array, saved here to pv.nrrd. We set a variable $CUBE for these demos to shorten the command lines. Note that specifying a kernel (with -k box) is needed for rendr go, but the rndProbePosView result should not depend on the kernel.

CUBE="@cube-cam.txt -i cube.nrrd"
./rrendr go $CUBE -s 0 -k box -p pv -b sum -o pv.nrrd

We can query the values at the corners of the output pv.nrrd array with:

unu slice -i pv.nrrd -a 1 2 -p 0 0 | unu save -f text
unu slice -i pv.nrrd -a 1 2 -p M 0 | unu save -f text
unu slice -i pv.nrrd -a 1 2 -p 0 M | unu save -f text
unu slice -i pv.nrrd -a 1 2 -p M M | unu save -f text

- Converting from view-space to world-space is done via rndCamera->VtoW, which is set by rndCameraUpdate(), which is tested with rendr cam:
./rrendr cam @cube-cam.txt

Note the "11.7548 = ncv" output from the above; this is the negative of the third view-space coordinate of every ray's starting position:

unu slice -i pv.nrrd -a 0 -p 2 | unu minmax -

- Make sure that ray positions can be correctly converted to world-space:
./rrendr go $CUBE -s 0 -k box -p pw -b sum -o pw.nrrd

and then probe pw.nrrd with unu slice:

unu slice -i pw.nrrd -a 1 2 -p 0 0 | unu save -f text
unu slice -i pw.nrrd -a 1 2 -p M 0 | unu save -f text
unu slice -i pw.nrrd -a 1 2 -p 0 M | unu save -f text
unu slice -i pw.nrrd -a 1 2 -p M M | unu save -f text

- Converting from world-space to index-space is needed for the convolution; you should compute the inverse of rndVolume->ItoW (the volume is passed to rndCtxNew()) via rnd_4m_affine_inv, which you can test with rendr 4mi. You can also probe the index-space position of the ray starts:
./rrendr go $CUBE -s 0 -k box -p pi -b sum -o pi.nrrd

and then probe pi.nrrd with unu slice -i pi.nrrd commands like those above.

- To see what the actual volume values are at the ray start (on the near clipping plane), we probe rndProbeValue (rendr go -p v). rndRayStart() calls rndRaySample(), and your code for this should be calling rndConvoEval() for this probe. rndRayStart() next calls rndRayBlendInit(), which needs to keep track of whether the last sample actually generated values that can be blended (see the comment above rndRayBlend about this). rndRayFinish should use rndCtx->outsideValue for the ray result buffer rndRay->result[] when there are no values to blend; rendr go -ov sets rndCtx->outsideValue. Here we set this to zero, and probe with a few kernels (you should test them all):
for K in box tent bspln3; do
  ./rrendr go $CUBE -s 0 -k $K -p v -b sum -o v-$K.nrrd
done

and then join these side-by-side into a picture; the triangles are smaller for kernels with higher support:

unu join -i v-{box,tent,bspln3}.nrrd -a 1 | unu quantize -b 8 -o v-test.png

- Knowing whether the convolution location was "inside" the volume is important for rndRaySample and rndRayBlend. As explained in rnd.h, the mean, max, and sum of some probes (like rndProbePosView) is defined over all ray sample positions, but for other probes (like rndProbeValue) it is defined only over those ray sample positions inside the volume (i.e. inside the region where we have data values where needed for the kernel support). rndProbeInside is a probe that should just be a copy (set by rndRaySample) of rndConvo->inside. We can probe this on the near clipping plane, for different kernels, and make another side-by-side picture:
for K in box tent bspln3; do
  ./rrendr go $CUBE -s 0 -k $K -p in -b sum -o in-$K.nrrd
done
unu join -i in-{box,tent,bspln3}.nrrd -a 1 | unu quantize -b 8 -o in-test.png

- A different plane, further into the volume, is a slightly more interesting place to test getting first derivatives by convolution (though isolated tests with rendr ceval also do this), and whether rndRaySample is correctly copying this information from the rndConvo struct:
for P in v gv gm; do
  ./rrendr go $CUBE -nc -1 -s 0 -k tent -p $P -b sum -o $P-1.nrrd
  unu quantize -b 8 -i $P-1.nrrd -o $P-1.png
done
open {v,gv,gm}-1.png

which makes: v-1.png, gv-1.png, gm-1.png. Try this with different kernels (except -k box, which only has a zero derivative).

- Besides the ray starting point, the increment vector between ray samples (call it the "ray step") needs to be computed. The length of the ray step depends on rndCtx->planeSep (rendr go -s), but with perspective projection it is not the same for every ray; it is the same with orthographic projection. There isn't a direct way to query the ray step (there isn't an rndProbe for it), but we can recover it (in view-space) for planeSep=1 with some unu hacking:
./rrendr go $CUBE -s 0 -nc -2 -fc 3 -k box -p pv -b sum -o pv0.nrrd
./rrendr go $CUBE -s 1 -nc -2 -fc -1.5 -k box -p pv -b sum -o pv1.nrrd
./rrendr go $CUBE -s 1 -nc -2 -fc -0.5 -k box -p pv -b sum -o pv2.nrrd

Actually, pv0.nrrd and pv1.nrrd should be the same: pv0.nrrd is the result of setting planeSep to zero (and rndCamera->fc to 3), whereas pv1.nrrd is the result of setting rndCamera->fc to -1.5, which is so close to rndCamera->nc=-2 that the ray only gets one sample before it goes past the far clipping plane (this will work if the ray step is correct, and if your handling of this condition is correct in rndRayStep()):

unu diff pv0.nrrd pv1.nrrd

Then to recover the ray step vectors, knowing that pv2.nrrd holds the sum of the first two view-space sample positions, and pv1.nrrd is the first, pv2.nrrd minus two times pv1.nrrd is the difference:

unu 2op x pv1.nrrd 2 | unu 2op - pv2.nrrd - -o step.nrrd
unu slice -i step.nrrd -a 1 2 -p 0 0 | unu save -f text
unu slice -i step.nrrd -a 1 2 -p M 0 | unu save -f text
unu slice -i step.nrrd -a 1 2 -p 0 M | unu save -f text
unu slice -i step.nrrd -a 1 2 -p M M | unu save -f text

This shows that in view-space, the third coordinate of the step is always -1 (because planeSep is 1, and the n basis vector of view space points towards the viewer). The first and second coordinates are slightly positive or negative, according to how the rays diverge due to perspective. With orthographic projection all ray steps are the same:

./rrendr go $CUBE -s 1 -nc -2 -fc -1.5 -k box -p pv -b sum -ortho -o pv1o.nrrd
./rrendr go $CUBE -s 1 -nc -2 -fc -0.5 -k box -p pv -b sum -ortho -o pv2o.nrrd
unu 2op x pv1o.nrrd 2 | unu 2op - pv2o.nrrd - -o stepo.nrrd
unu slice -i stepo.nrrd -a 1 2 -p 0 0 | unu save -f text
unu slice -i stepo.nrrd -a 1 2 -p M 0 | unu save -f text
unu slice -i stepo.nrrd -a 1 2 -p 0 M | unu save -f text
unu slice -i stepo.nrrd -a 1 2 -p M M | unu save -f text

You should use printf to confirm that your rays are stepping in the right direction for each pixel; the steps above provide a way for you to find the correct answer.
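The step vectors recovered above are consistent with a sketch like this (rayStepView and its parameter names are assumptions, not the rnd API; u and v are the view-space coordinates of the pixel on the near clipping plane, and ncv the near clipping distance, as in the "11.7548 = ncv" output):

```c
#include <assert.h>

/* Sketch (assumed names) of the view-space ray step for one pixel.
   With perspective, the ray from the eye (the view-space origin) through
   near-plane point (u, v, -ncv) diverges from the view axis, so the
   first two components of the step scale with u and v; with orthographic
   projection all rays are parallel and every pixel gets the same step. */
static void rayStepView(double u, double v, double ncv,
                        double planeSep, int ortho, double step[3]) {
    if (ortho) {
        step[0] = 0;
        step[1] = 0;
    } else {
        /* stay on the eye ray while advancing planeSep along -n */
        step[0] = planeSep * u / ncv;
        step[1] = planeSep * v / ncv;
    }
    step[2] = -planeSep;  /* n points toward the viewer, so rays step -n */
}
```

For planeSep=1 this reproduces what the unu hacking showed: third coordinate always -1, and first two coordinates proportional to the pixel's offset from the view axis (zero with -ortho).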
for B in max sum mean; do
  ./rrendr go $CUBE -s 0.1 -k box -p v -b $B -o p-$B.nrrd
  unu quantize -b 8 -i p-$B.nrrd -o p-$B.png
done
open p-{max,sum,mean}.png

which makes p-max.png, p-sum.png, p-mean.png.

- To make a shaded fuzzy isosurface at around value 0.85 (look at cube-atxf1.txt), with red/green/blue light coming from X/Y/Z in world-space:
./rrendr lgen -i x $SCIVIS/cmap/cube-atxf1.txt -mm 0 1 -o lut.nrrd
./rrendr go $CUBE -us 0.03 -s 0.03 -k ctmr -p rgbalit -b over -lut lut.nrrd -lit $SCIVIS/lit/rgb.txt -o cube-rgb.nrrd
overrgb -i cube-rgb.nrrd -b 0 0 0 -o cube-rgb.png

- To show off depth-cueing, using a single white light (reusing lut.nrrd from immediately above):
./rrendr go $CUBE -us 0.03 -s 0.03 -k ctmr -p rgbalit -b over -lut lut.nrrd -lit $SCIVIS/lit/1.txt -dcn 1.2 1.1 0.9 -dcf 0.2 0.3 0.4 -o cube-dc.nrrd
overrgb -i cube-dc.nrrd -b 0 0 0 -o cube-dc.png

- Looking at the teddy-bear CT scan:
TEDDY="-i $SCIVIS/data/3d/teddy-tiny.nrrd @$SCIVIS/data/3d/teddy-cam.txt"
TPARM="-s 1 -sz 350 370"
for K in box tent bspln2; do
  for P in val gm; do
    for B in sum mean max; do
      echo "========= -k $K -p $P -b $B"
      ./rrendr go $TEDDY $TPARM -k $K -p $P -b $B -o trend-$B-$P-$K.nrrd
    done
  done
done
function tjoin {
  echo "making trend-$1-$2.png"
  unu join -i trend-$1-$2-{box,tent,bspln2}.nrrd -a 1 | unu axdelete -a 0 | unu quantize -b 8 -min $3 -max $4 -o trend-$1-$2.png
}
tjoin sum val 0 57000
tjoin mean val 24 106
tjoin max val 1 850
tjoin sum gm 0 5000
tjoin mean gm 0 9.9
tjoin max gm 0 140
unu join -i trend-{sum,mean,max}-val.png -a 1 -o trend-val.png
unu join -i trend-{sum,mean,max}-gm.png -a 1 -o trend-gm.png

The two final images trend-val.png and trend-gm.png are each a grid of nine images, of the value and gradient magnitude, respectively. Each one is built up, from left to right, from kernels box, tent, and bspln2, and, from top to bottom, from blends sum, mean, and max. Quantization ranges were found by hand to make informative images.

Grading
The grade will be based on correctness (75%), debugging process (15%), and style (10%). If your code does not compile cleanly, with the provided Makefile, .strip.sh, and .integrity.sh, you will get a ZERO for correctness. Correctness will be evaluated with examples such as those above, and maybe some additional ones to test corner cases. Code that does not compile (if there are compile errors that halt the generation of rendr) cannot be graded.

To get full credit for debugging process, you will prepare a write-up, submitted as p3rendr.pdf in your repository, which documents how you proved to yourself that your code is correct, ignoring the reference implementation. Supposing you had no reference implementation (as in the real world), what kind of datasets would you create, and how would you render them (with which parameters), to see if your code was correct? Below are the four major areas of concern for code correctness, which include the kinds of things you should be worrying about when writing and testing your code. For each of the four things, describe in your write-up what experiment(s) you performed to test the correctness of your code. An experiment is a procedure that produces a targeted comparison between your mental model of how things work and how things actually do work. To document an experiment, you need to describe what the input data is (what new dataset(s) did you create by adding new capability to rendr sdg?), how you set up the rendering (how rendr go is being run), what you expected to see in the output, and what you actually got in the output. Include the generated images in the write-up, so it can be read and understood without executing any commands. Each of these four things should get about one page of text and images in the write-up.
The correctness of convolution is not part of the debugging process credit; you can use the zramp datasets as noted above to test your convolution.
- Camera and ray geometry: the look-at point appears in the middle of the rendered image, the given up vector is upwards in the rendered image, things appear with the correct aspect ratio, the near and far clipping planes work to clip out parts of the world, the field-of-view correctly controls how much is visible, and perspective versus orthographic projection works correctly.
- Transfer functions: Given the color and opacity functions specified as a sequence of control points, the right colors and opacities are computed for the lookup-table, and the lookup-table is being correctly indexed during rendering. Also, the opacities computed by rndTxfLevoy correctly achieve a fuzzy isosurface effect, independent of the gradient magnitude.
- Probing and Blending: All the blendings work correctly for all the probes for which the blending is meaningful (rndBlendOver only works for rndProbeRgba and rndProbeRgbaLit). Is rndBlendOver correctly handling the opacity correction as a function of ray step size (HW4 #2)?
- Lighting with Blinn-Phong and depth-cueing: The Blinn-Phong reflection model is working correctly (the ambient, diffuse, and specular terms are each correct), specular highlights appear where they ought to (you can specify a function and a camera so that you can predict where specular highlights should appear), and depth cueing works.
For the style points, keep in mind:
- You should try to avoid needlessly recomputing things with every call to rndConvoEval. This is why you're allowed to add variables and state to the rndCtx and the rndConvo.
- Your handling of the different rndProbes should be as orthogonal as possible to your handling of the different rndBlends. Probes should be handled by rndRaySample, and blendings should be handled by rndRayBlend.
- If your implementation of rndRender (or rather the rndRay functions that it depends on) runs markedly slower than the reference implementation, you may lose some points. You can see how long each pixel takes to render by adding the "-t" option to rendr go, and then slicing out and inspecting the last component of the output out.nrrd via:
unu slice -i out.nrrd -a 0 -p M | unu quantize -b 8 -o time.png

- Any new functions you add to ctx.c, go.c, ray.c, rendr_sdg.c, or any other files in which it is possible to add functions, should be qualified as static, so that they do not produce externally visible symbols in librnd.a.
- Your rendr should not print (to the terminal, via stdout or stderr) any more than the reference rrendr. Clean up all your debugging messages by the deadline, or use rndVerbose as described above to control all debugging printing. Grading will be done in a shell in which the environment variable RND_VERBOSE is not set (so rndVerbose will be set to 0 by rendr upon startup).
- Your code should compile with zero warnings, using the provided Makefile, .integrity.sh, and .strip.sh files.