Content-adaptive generation and parallel compositing of volumetric depth images for responsive visualization of large volume data


Gupta, A.; Incardona, P.; Hunt, P.; Reina, G.; Frey, S.; Gumhold, S.; Günther, U.; Sbalzarini, I. F.

Abstract

We present a content-adaptive generation and parallel compositing algorithm for view-dependent explorable representations of large three-dimensional volume data. Large distributed volume data are routinely produced in both numerical simulations and experiments, yet it remains challenging to visualize them at smooth, interactive frame rates. Volumetric Depth Images (VDIs), view-dependent piecewise-constant representations of volume data, offer a potential solution: they are more compact and less expensive to render than the original data. So far, however, there has been no method to generate such representations on distributed data or to automatically adapt the representation to the contents of the data. We propose an approach that addresses both issues by enabling sort-last parallel generation of VDIs with content-adaptive parameters. The resulting VDIs can be streamed for display, providing responsive visualization of large, potentially distributed, volume data.

Keywords: Visualization; Volume rendering; Parallel computing; Volumetric depth images

Permalink: https://www.hzdr.de/publications/Publ-36114