F. Rößler, R. P. Botchen, and T. Ertl. Dynamic Shader Generation for Flexible Multi-Volume Visualization.
In Proceedings of IEEE Pacific Visualization Symposium 2008 (PacificVis '08), pages 17-24, 2008.
Volume rendering of multiple intersecting volumetric objects is a
difficult visualization task, especially if different rendering
styles need to be applied to the components in order to achieve the
desired illustration effect. Real-time performance, even for complex
scenarios, is obtained by exploiting the speed and flexibility of
modern GPUs, but at the same time programming the necessary shaders
has become a task for GPU experts only. We foresee the demand for
an intermediate level of programming abstraction where visualization
specialists can realize advanced applications without the need to
deal with shader programming intricacies.
In this paper, we describe a generic technique for multi-volume
rendering, which generates shader code dynamically from an abstract
render graph. By combining pre-defined nodes, complex volume
operations can be realized. Our system efficiently creates GPU-based
fragment shader and vertex shader programs "on-the-fly" to achieve
the desired visual results. We demonstrate the flexibility of our
technique by applying several dynamically generated volume rendering
styles to multi-modal medical datasets.
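To make the idea of generating shader code from a render graph concrete, here is a minimal sketch of such a generator. It is not the paper's implementation: the node classes, the graph traversal, and the emitted GLSL snippets (a per-volume sample-plus-transfer-function node and a simple alpha-blend combination node) are illustrative assumptions only.

```python
# Hypothetical sketch: each render-graph node emits a GLSL snippet, and
# traversing the graph concatenates the snippets into a fragment shader
# string. All node types and GLSL fragments below are assumptions for
# illustration, not the authors' actual node set.

class Node:
    def emit(self):
        raise NotImplementedError

class VolumeSampleNode(Node):
    """Samples one volume and classifies it via its transfer function."""
    def __init__(self, name):
        self.name = name  # distinguishes multiple volumes (e.g. "cta", "mri")
    def emit(self):
        return (f"    float {self.name}_val = "
                f"texture3D({self.name}_tex, texCoord).r;\n"
                f"    vec4 {self.name}_col = "
                f"texture1D({self.name}_tf, {self.name}_val);\n")

class BlendNode(Node):
    """Combines the colors of two child nodes (a simple 50/50 mix here)."""
    def __init__(self, left, right):
        self.left, self.right = left, right
    def emit(self):
        return (self.left.emit() + self.right.emit() +
                f"    vec4 color = mix({self.left.name}_col, "
                f"{self.right.name}_col, 0.5);\n")

def generate_fragment_shader(root):
    """Wrap the code emitted by the graph in a GLSL main() function."""
    return "void main() {\n" + root.emit() + "    gl_FragColor = color;\n}\n"

# Combining two pre-defined nodes yields a complete multi-volume shader:
graph = BlendNode(VolumeSampleNode("cta"), VolumeSampleNode("mri"))
shader = generate_fragment_shader(graph)
print(shader)
```

Because the shader string is rebuilt whenever the graph changes, reconfiguring the visualization amounts to swapping or rewiring nodes and recompiling the generated program, which is the "on-the-fly" aspect described above.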
This video demonstrates the interactive configuration of a complex render graph, which is detailed in the paper.
Setup I: Combination of a CTA (Computed Tomography
Angiography) dataset and a related MRI (Magnetic Resonance Imaging)
dataset of a human head. The MRI dataset provides the skin and brain
tissue. It is vertically cut and the two halves are moved apart
to reveal the inner structures. The CTA dataset
contains the skull and the vessels, which are rendered with different
transfer functions. The upper images show three stages of an interactive
multi-volume visualization session; the lower images show the
corresponding render graphs.
Setup II: This setup shows a DVR-shaded functional MRI (fMRI) dataset combined with a
corresponding brain MRI dataset, rendered as an illuminated semi-transparent isosurface,
and a single 2D slice of the whole head as context information.
Setup III: This setup shows an illuminated DVR-shaded MRI head with
a ghosting method applied to reveal the inside. The interior brain is
rendered as an illuminated isosurface with a 3D LIC computation applied to
emphasize the curvature.
Setup IV: For this setup the upper half of
the head is cut away to expose the brain, which is segmented and
colored according to a functional brain atlas.