Speaker
Description
Analysis and visualization of big data, such as 3D reconstructions of human brain models from high-resolution histological sections, require a large amount of time and HPC resources to compute and to store the results. This becomes prohibitively expensive when performed on the whole dataset during iterative development workflows, where multiple versions need to coexist so that they can be visualized and compared.
To address this problem, we are developing an advanced image service that performs on-demand, piecewise live reconstruction of 3D data. This makes it possible to browse through the whole 3D reconstructed brain at any level of detail, even though the resulting 2 PB dataset does not physically exist on disk. The idea is to maintain manipulations of raw data - especially image analysis and image registration - as software modules in declaratively defined, lazily executed pipelines, without storing full copies of the manipulated data. This rests on the assumption that operations like object detection and 3D reconstruction typically produce many versions of derived image data, which are not handled efficiently by writing each modified copy to disk.
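The sketch below illustrates the idea of such a declaratively defined, lazily executed pipeline that serves individual tiles on demand. It is a minimal conceptual sketch, not the service's actual API; the names VirtualVolume, LazyStep, get_tile, and load_raw_tile are hypothetical.

```python
# Minimal sketch of the concept (not the actual service API): pipeline steps are
# declared up front, but nothing is computed until a viewer requests a tile.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

import numpy as np

Tile = Tuple[int, int, int]  # (section, row, column) index into the virtual volume


@dataclass
class LazyStep:
    """A declaratively defined operation; executed only when a tile is requested."""
    name: str
    func: Callable[[np.ndarray], np.ndarray]


class VirtualVolume:
    """Serves reconstructed tiles on demand instead of materializing the full dataset."""

    def __init__(self, load_raw_tile: Callable[[Tile], np.ndarray]):
        self.load_raw_tile = load_raw_tile       # reads only the raw histological data
        self.steps: List[LazyStep] = []          # e.g. denoising, registration, masking
        self.cache: Dict[Tile, np.ndarray] = {}  # small cache of recently viewed tiles

    def add_step(self, name: str, func: Callable[[np.ndarray], np.ndarray]) -> "VirtualVolume":
        self.steps.append(LazyStep(name, func))
        self.cache.clear()                       # a new pipeline version invalidates cached tiles
        return self

    def get_tile(self, tile: Tile) -> np.ndarray:
        """Reconstruct one tile live; the full derived volume never exists on disk."""
        if tile not in self.cache:
            data = self.load_raw_tile(tile)
            for step in self.steps:
                data = step.func(data)
            self.cache[tile] = data
        return self.cache[tile]


# Example use with placeholder operations:
volume = VirtualVolume(load_raw_tile=lambda t: np.zeros((256, 256), dtype=np.uint8))
volume.add_step("denoise", lambda img: img)   # stand-in for real image analysis
volume.add_step("register", lambda img: img)  # stand-in for real image registration
patch = volume.get_tile((1200, 4, 7))         # computed only when the viewer asks for it
```

Because only the requested tiles pass through the pipeline, each new version of a step amounts to a new pipeline definition rather than a new copy of the data on disk.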
The system allows for full interactivity: the user can, for example, move a landmark on the screen and see its impact on the reconstruction in real time.
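Continuing the sketch above, such an interactive update could be modeled as swapping out the registration step and recomputing only the tiles currently on screen; on_landmark_moved and make_registration are hypothetical names used purely for illustration.

```python
# Hypothetical interaction handler for the sketch above: moving a landmark replaces
# the registration step and recomputes only the region the user is looking at.
def on_landmark_moved(volume, landmarks, visible_tiles, make_registration):
    volume.steps = [s for s in volume.steps if s.name != "register"]
    volume.add_step("register", make_registration(landmarks))  # clears the tile cache
    return {tile: volume.get_tile(tile) for tile in visible_tiles}
```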