Medical Imaging and Data Integration Lab

The MIDI Lab is a multidisciplinary team committed to advancing the analysis and integration of medical imaging information and biomedical data for targeted, precise healthcare. In collaboration with clinical, academic, and industrial partners, the MIDI Lab addresses all levels of this challenge through the following aims: to develop innovative instrumentation and image generation methods, to construct analysis and evaluation tools, to serve as a training facility for new imaging and data scientists, and to translate developments to clinical applications for improved healthcare.


Cardiac Computed Tomography
Goal: Develop low-cost, non-invasive strategies using cardiac computed tomography to determine the root cause of myocardial ischemia.


Figure 1: Example stress state (row 1) and rest state (row 2) CT images acquired during multi-frame dynamic contrast enhancement for a single patient (color window = [-200, 300] HU). The plot shows the myocardial time-attenuation curve (TAC) generated from the stress and rest states, along with the model fit (dotted line).

Figure 2: Scatter plot comparing novel CT myocardial blood flow estimates to conventional PET blood flow estimates. Resting state estimates are shown as circle markers and stress state estimates as square markers; each patient has a unique color. (SEE = standard error of the estimate.)
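As background for the SEE metric reported in Figure 2: agreement between paired CT and PET flow estimates can be summarized by a least-squares line and the standard error of the estimate about that line. A minimal sketch in pure Python, using made-up flow values (the data and variable names are illustrative, not the lab's actual measurements):

```python
import math

# Illustrative paired measurements (mL/min/g): PET is the reference,
# CT is the novel estimate. Values are made up for demonstration.
pet = [0.8, 1.1, 2.6, 3.0, 0.9, 2.2]
ct = [0.9, 1.0, 2.4, 3.3, 1.1, 2.0]

n = len(pet)
mean_x = sum(pet) / n
mean_y = sum(ct) / n

# Least-squares slope and intercept for ct ≈ a * pet + b
sxx = sum((x - mean_x) ** 2 for x in pet)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(pet, ct))
a = sxy / sxx
b = mean_y - a * mean_x

# Standard error of the estimate: RMS residual about the fitted line,
# with n - 2 degrees of freedom for the two fitted parameters.
residuals = [y - (a * x + b) for x, y in zip(pet, ct)]
see = math.sqrt(sum(r ** 2 for r in residuals) / (n - 2))

print(f"slope = {a:.3f}, intercept = {b:.3f}, SEE = {see:.3f} mL/min/g")
```

A slope near 1 with a small SEE indicates the CT estimates track the PET reference closely.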



University of Washington: Drs. Branch, Caldwell, and Feigl

R01HL109327, Low-dose strategies for Myocardial Blood Flow Estimation from Dynamic CT

R56HL109327, Quantitative CT Evaluation of Focal Epicardial vs Diffuse Contributions to Coronary Artery Disease

PET Image Reconstruction
Goal: Develop image generation algorithms for PET/CT and PET/MR to provide accurate and precise quantitation of radiopharmaceutical concentrations.


University of Washington: Paul Kinahan and Darrin Byrd
GE Healthcare: Charles Stearns, Scott Wollenweber, Steven Ross

K25-HL086713, Quantitative Cardiac PET/CT Imaging

Figure 1: Contrast recovery coefficient plotted against “true” noise for a 35 cm background ROI. Error bars denote the standard deviation in CRC across 50 realizations. The circles mark the end of 4 and 8 iterations.
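For readers unfamiliar with the CRC metric in Figure 1: the contrast recovery coefficient is commonly defined as the measured ROI contrast divided by the true object contrast, with its spread reported across noise realizations. A hedged sketch of that bookkeeping, using simulated stand-in numbers rather than the lab's actual protocol:

```python
# Contrast recovery coefficient (CRC) across noise realizations.
# For a hot feature in a warm background, one common definition is
#   CRC = (S/B - 1) / (S_true/B_true - 1)
# where S and B are the measured feature and background ROI means.
# The measurements below are simulated stand-ins, not real scan data.
import random
import statistics

random.seed(0)

TRUE_FEATURE = 4.0  # true activity in the feature
TRUE_BKG = 1.0      # true activity in the background
TRUE_CONTRAST = TRUE_FEATURE / TRUE_BKG - 1.0

def one_realization():
    """Simulate one noisy reconstruction: partial-volume loss plus noise."""
    feature_roi = 0.7 * TRUE_FEATURE + random.gauss(0.0, 0.15)
    bkg_roi = TRUE_BKG + random.gauss(0.0, 0.05)
    return (feature_roi / bkg_roi - 1.0) / TRUE_CONTRAST

crcs = [one_realization() for _ in range(50)]
mean_crc = statistics.mean(crcs)
sd_crc = statistics.stdev(crcs)  # the error bars in Figure 1

print(f"CRC = {mean_crc:.3f} ± {sd_crc:.3f} over {len(crcs)} realizations")
```

More iterations of an iterative reconstruction typically raise the CRC (better contrast recovery) while also raising the noise, which is the trade-off the plotted curves trace out.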

Figure 2: Two transaxial slices from OSEM+LOR (row 1), OSEM+LOR+3 mm postfilter (row 2), and OSEM+LOR+PSF (row 3). All images have matched color scales, and rows 2 and 3 have matched pixel-to-pixel variability in central white matter.

Pediatric Rib Fracture Detection
Goal: Help identify non-accidental trauma using AI methods that automatically detect rib fractures on pediatric radiographs.

Figure 1: A chest radiograph with arrows indicating locations of potential rib fractures (left). These fractures are challenging to detect. An automated model analyzes these types of images to detect fractures; an example case is shown on the right.


Seattle Children’s Hospital: Drs. Otjen and Bindschadler

R21HD097609, Automatic Rib Fracture Detection in Pediatric Radiography to Identify Non-Accidental Trauma


Coronary Plaque Imaging
Goal: Develop processing techniques to enable quantification of high-risk plaques in the coronary arteries with PET imaging of F18-FDG, F18-NaF, or other novel probes.


Figure 1: Isocontour plot of the SUVmax values for different contrast-to-background levels (y-axis) and different feature sizes (x-axis). The solid X markers highlight the measured values in this 2D space, which were interpolated with cubic splines. The red curve marks the values at the background plus two standard deviations, highlighting the isocontour of detectability for features. For example, a 2 mm feature would require a contrast of 14:1 to be visible with this scanning protocol.


Figure 2: Illustration of the dual respiratory/cardiac motion correction (MC) technique, consisting of decoupled respiratory then cardiac motion correction.

Figure 3: 3D rendering of a one-bin image (25% of PET counts) from a custom F18-NaF PET/CT scan (left) and the motion-corrected image (right), superimposed on the rendered CCTA volume. Increased uptake is seen in the right coronary artery, left anterior descending, and left circumflex coronary arteries in the high-noise, one-bin image and remains clear in the motion-corrected image. (Rubeaux et al. Motion correction of 18F-NaF PET for imaging coronary atherosclerotic plaques. J Nucl Med 2016; 57(1): 54-9. © by the Society of Nuclear Medicine and Molecular Imaging, Inc.)


Cedars-Sinai: Drs. Piotr Slomka and Martin Lassen

1R01HL135557, Integrated Analysis of Coronary Anatomy and Biology using 18F-fluoride PET and CT Angiography


Multimodal Learning for Breast Cancer Classification
Goal: Risk stratify breast cancer based on dynamic contrast MR exams and additional patient factors.

Figure 1: Example dynamic contrast-enhanced MRI images obtained by injecting contrast into the patient’s bloodstream and capturing a series of MRI volumes. The image volumes before and after contrast are subtracted.

Figure 2: Diagram of fusion architectures that learn jointly from breast imaging and tabular non-image features. Probability Fusion fuses information at the output level; Feature Fusion fuses learned image features with non-image features; Learned Feature Fusion fuses learned image features with learned non-image features. Dashed boxes represent feature vectors, with the number inside representing the size of that vector. “FC-n” represents a fully connected layer with n hidden units. The symbol ŷ is the malignancy probability in the next 12 mo.


Seattle Cancer Care Alliance: Drs. Savannah Partridge, Habib Rahbar, and Christoph Lee

G. Holste, S. C. Partridge, H. Rahbar, D. Biswas, C. I. Lee and A. M. Alessio, "End-to-End Learning of Fused Image and Non-Image Features for Improved Breast Cancer Classification from MRI," 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021.
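The three fusion strategies differ only in where image and non-image information meet. A schematic NumPy sketch of that distinction, with random weights standing in for trained networks (all shapes, sizes, and names here are illustrative, not the published model):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-ins for learned components: an image branch producing a
# 128-d feature vector, and 8 tabular non-image (patient) features.
img_features = rng.standard_normal(128)  # e.g., output of a CNN backbone
tab_features = rng.standard_normal(8)    # e.g., clinical patient factors

# 1) Probability Fusion: each branch predicts malignancy on its own,
#    and the output-level probabilities are combined (here, averaged).
p_img = sigmoid(img_features @ rng.standard_normal(128))
p_tab = sigmoid(tab_features @ rng.standard_normal(8))
p_prob_fusion = 0.5 * (p_img + p_tab)

# 2) Feature Fusion: raw tabular features are concatenated with the
#    learned image features before a shared classifier head.
fused = np.concatenate([img_features, tab_features])  # 136-d
p_feat_fusion = sigmoid(fused @ rng.standard_normal(136))

# 3) Learned Feature Fusion: tabular features first pass through their
#    own small network (an "FC-32"-style layer), then are concatenated.
tab_learned = np.tanh(tab_features @ rng.standard_normal((8, 32)))
fused2 = np.concatenate([img_features, tab_learned])  # 160-d
p_learned_fusion = sigmoid(fused2 @ rng.standard_normal(160))

for name, p in [("probability fusion", p_prob_fusion),
                ("feature fusion", p_feat_fusion),
                ("learned feature fusion", p_learned_fusion)]:
    print(f"{name}: p(malignant in 12 mo) = {p:.3f}")
```

The design trade-off: probability fusion keeps the branches independent, while the two feature-level variants let the classifier learn interactions between imaging and patient factors, at the cost of a larger joint head.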





Additional ongoing projects will be updated soon...


© 2018 MIDI Lab