SHAPR: Capturing Shape Information with Multi-Scale Topological Loss Terms for 3D Reconstruction

This is the official website of SHAPR, a method for reconstructing 3D images from 2D inputs using a multi-scale topological loss term.

Motivation

Reconstructing 3D objects from 2D images is challenging for both human brains and machine learning algorithms. Contextual information about the overall shape of an object is critical for this spatial reasoning task, yet established loss terms (e.g., the Dice loss) do not capture it. We therefore propose to complement geometrical shape information by including multi-scale topological features, such as connected components, cycles, and voids, in the reconstruction loss.
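To make the idea concrete, here is a minimal sketch (not SHAPR's actual training loss) that combines a geometric Dice term with a topological term computed from persistence diagrams of cubical complexes via GUDHI. The helper names, the weight `lambda_topo`, and the choice of the bottleneck distance are illustrative assumptions:

```python
# Illustrative sketch only: a geometric (Dice) term plus a topological term
# that compares persistence diagrams of the predicted and reference volumes.
# Helper names, `lambda_topo`, and the bottleneck distance are assumptions.
import numpy as np
import gudhi


def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss between two volumes with values in [0, 1]."""
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)


def persistence_diagram(volume, dim):
    """Finite persistence intervals of dimension `dim` (0: connected
    components, 1: cycles, 2: voids) of the cubical complex of `volume`."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=volume)
    cc.persistence()
    intervals = cc.persistence_intervals_in_dimension(dim)
    if len(intervals) == 0:
        return intervals
    # Drop essential classes (infinite death) to keep the example simple.
    return intervals[np.isfinite(intervals).all(axis=1)]


def topological_loss(pred, target, dims=(0, 1, 2)):
    """Compare multi-scale topological features across dimensions."""
    return sum(
        gudhi.bottleneck_distance(
            persistence_diagram(pred, d), persistence_diagram(target, d)
        )
        for d in dims
    )


def combined_loss(pred, target, lambda_topo=0.1):
    """Geometric term plus a weighted topological term."""
    return dice_loss(pred, target) + lambda_topo * topological_loss(pred, target)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prediction = rng.random((16, 16, 16))
    reference = (rng.random((16, 16, 16)) > 0.5).astype(float)
    print(combined_loss(prediction, reference))
```

In SHAPR's actual training, the topological term is computed in a differentiable way so that gradients can flow back into the network; the plain GUDHI calls above only illustrate which quantities are being compared.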

An overview of SHAPR and our novel loss term. The components shown with dotted lines are meant to be generic and can be adjusted depending on the application.

We demonstrate the utility of SHAPR by predicting the 3D shape of red blood cells from 2D microscopy images, but SHAPR can handle other modalities and tasks as well—we are very excited about SHAPR’s future!

Materials

  • Check out our paper, which was presented at MICCAI 2022. We also have a preprint of this publication.
  • Check out our code on GitHub. We are always happy about new contributors!

Examples

Here are a few randomly selected examples of how a topological loss term improves the output of SHAPR. Select a model from the dropdown below to get started (by default, the original shape is shown).

Team

SHAPR is developed by members of Marr Lab and AIDOS Lab.