Content-adaptive generation and parallel compositing of volumetric depth images for responsive visualization of large volume data
Gupta, A.; Incardona, P.; Hunt, P.; Reina, G.; Frey, S.; Gumhold, S.; Günther, U.; Sbalzarini, I. F.
Abstract
We present a content-adaptive generation and parallel compositing algorithm for view-dependent explorable representations of large three-dimensional volume data. Large distributed volume data are routinely produced in both numerical simulations and experiments, yet it remains challenging to visualize them at smooth, interactive frame rates. Volumetric Depth Images (VDIs), view-dependent piecewise-constant representations of volume data, offer a potential solution: they are more compact and less expensive to render than the original data. So far, however, there is no method to generate such representations on distributed data and to automatically adapt the representation to the contents of the data. We propose an approach that addresses both issues by enabling sort-last parallel generation of VDIs with content-adaptive parameters. The resulting VDIs can be streamed for display, providing responsive visualization of large, potentially distributed, volume data.
Keywords: Visualization; Volume rendering; Parallel computing; Volumetric depth images
Web publication
arXiv: https://arxiv.org/abs/2206.14503
DOI: 10.48550/arXiv.2206.14503
arXiv: 2206.14503
Proceedings contribution
EGPGV23: Eurographics Symposium on Parallel Graphics and Visualization, 12 June 2023, Leipzig, Germany
Parallel Compositing of Volumetric Depth Images for Interactive Visualization of Distributed Volumes at High Frame Rates
DOI: 10.2312/pgv.20231082
Permalink: https://www.hzdr.de/publications/Publ-36114