VONet: A deep learning network for 3D reconstruction of organoid structures with a minimal number of confocal images

Euijeong Song, Minsuh Kim, Siyoung Lee, Hui Wen Liu, Jihyun Kim, Dong Hee Choi, Roger Kamm, Seok Chung, Ji Hun Yang, Tae Hwan Kwak

Research output: Contribution to journal › Article › peer-review

Abstract

Organoids and 3D imaging techniques are crucial for studying human tissue structure and function, but traditional 3D reconstruction methods are expensive and time-consuming, relying on complete z-stack confocal microscopy data. This paper introduces VONet, a deep learning-based system for 3D organoid rendering that uses a fully convolutional neural network to reconstruct entire 3D structures from a minimal number of z-stack images. VONet was trained on a library of over 39,000 virtual organoids (VOs) with diverse structural features and achieved an average intersection over union (IoU) of 0.82 in performance validation. Remarkably, VONet can predict the structure of deeper focal-plane regions that conventional confocal microscopy cannot image. This approach and the accompanying VO dataset offer a significant advance in 3D bioimaging technology.
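The sketch below is a minimal, hypothetical illustration of the two ideas named in the abstract: a fully convolutional network that maps a small number of confocal z-slices to a dense 3D volume, and the volumetric intersection-over-union (IoU) metric used to score the reconstruction. The module name, layer layout, and tensor shapes are assumptions for illustration only and do not reflect the authors' actual VONet implementation.

```python
# Hypothetical sketch, not the published VONet architecture.
import torch
import torch.nn as nn

class SliceToVolumeFCN(nn.Module):
    """Predict a dense (depth, H, W) volume from k confocal z-slices."""
    def __init__(self, num_slices: int = 3, out_depth: int = 64):
        super().__init__()
        # Treat the k input slices as channels of a 2D image and emit
        # out_depth output channels, one per reconstructed focal plane.
        self.body = nn.Sequential(
            nn.Conv2d(num_slices, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, out_depth, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_slices, H, W) -> logits: (batch, out_depth, H, W)
        return self.body(x)

def volumetric_iou(pred_logits: torch.Tensor, target: torch.Tensor,
                   thresh: float = 0.5) -> float:
    """IoU between a thresholded prediction and a reference binary volume."""
    p = torch.sigmoid(pred_logits) > thresh
    t = target.bool()
    inter = (p & t).sum().item()
    union = (p | t).sum().item()
    return inter / union if union else 1.0

if __name__ == "__main__":
    model = SliceToVolumeFCN(num_slices=3, out_depth=64)
    slices = torch.rand(1, 3, 128, 128)                  # 3 confocal planes
    logits = model(slices)                               # (1, 64, 128, 128) volume
    target = (torch.rand(1, 64, 128, 128) > 0.5).float() # placeholder ground truth
    print("volumetric IoU:", volumetric_iou(logits, target))
```

In this reading, the reported average IoU of 0.82 would correspond to the voxel-wise overlap between the predicted and ground-truth organoid volumes, averaged over the validation set.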

Original language: English
Article number: 101063
Journal: Patterns
Volume: 5
Issue number: 10
DOIs
Publication status: Published - 2024 Oct 11

Bibliographical note

Publisher Copyright:
© 2024 The Author(s)

Keywords

  • 3D imaging techniques (3D bioimaging)
  • 3D organoid rendering system
  • VOs
  • deep neural network
  • organoids
  • virtual organoids

ASJC Scopus subject areas

  • General Decision Sciences

