Harnessing Machine Learning to Eliminate Tuberculosis (HaMLET)


Overview

This repository holds code for the HaMLET project, an internal CDC effort to use computer vision models to improve the quality of overseas health screenings for immigrants and refugees seeking entry to the U.S. The project is the result of a collaboration between the Immigrant, Migrant, and Refugee Health Branch (IMRHB) in the Division of Global Migration and Quarantine (DGMQ) and the Office of the Director in the Center for Surveillance, Epidemiology, and Laboratory Services (CSELS).

Data

We had about 200,000 x-rays to work with for the entire project. For model training, we used about 110,000 x-rays, with labels coming from the primary reads by radiologists at the original screening sites. For validation and testing, we used about 16,000 x-rays, with labels coming from (often secondary) reads by radiologists from a small number of panel sites with large screening programs. All x-rays were collected as part of routine medical screenings for immigrants and refugees seeking entry to the U.S., which CDC helps to administer in collaboration with the Department of State; more information about the screening program is available on CDC's website.

Our main goal was to pilot models for internal quality control, but as a sanity check, we also evaluated our model on existing openly available chest x-ray datasets, including the Shenzhen and Montgomery County TB datasets, made available by the National Library of Medicine, and the NIH chest x-ray dataset. For the latter, we used the additional expert labels for the test data provided by Google as part of their research efforts.

Methods

Tasks

The project centers on three main tasks:

  1. Determining whether an x-ray is normal or abnormal (binary classification);
  2. Determining whether an x-ray contains abnormalities suggestive of TB (multilabel classification); and
  3. Localizing abnormalities in x-rays when they exist (object detection/image segmentation).

The second task is the most important for our particular use case, which focuses on ensuring that TB is caught as often as possible during the overseas health screenings. This focus is the source of the backronym we chose for our project name (TB or not TB--that is the question...get it?).

Architecture

For feature extraction, we used the EfficientNetV2M architecture, pretrained on ImageNet. For each model, we changed the dimensionality and loss function of the final dense layer to match its particular classification task, and we added an optional custom image augmentation layer to make augmentation tunable with KerasTuner. The default augmentation values come from a paper by researchers at Google, Stanford, and the Palo Alto VA, though, and should work well without tuning.
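As a rough sketch of that setup (the augmentation rates, loss, and head below are illustrative stand-ins, not the project's tuned configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(num_outputs=1, img_size=600):
    """A minimal EfficientNetV2M classifier sketch, not the project's code."""
    inputs = layers.Input(shape=(img_size, img_size, 3))
    # Optional augmentation layers; these rates are placeholders for the
    # values that would be tuned with KerasTuner.
    x = layers.RandomRotation(0.05)(inputs)
    x = layers.RandomContrast(0.1)(x)
    # ImageNet-pretrained backbone with the classification head removed.
    base = tf.keras.applications.EfficientNetV2M(
        include_top=False, weights='imagenet', pooling='avg')
    x = base(x)
    # Final dense layer sized to the task: 1 unit for binary classification,
    # or one sigmoid unit per finding for the multilabel task.
    outputs = layers.Dense(num_outputs, activation='sigmoid')(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=[tf.keras.metrics.AUC(name='auc')])
    return model
```

Binary cross-entropy with sigmoid activations covers both the binary and multilabel cases; only num_outputs changes between them.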

Hardware

We trained our models on a scientific workstation with 24 logical cores, 128 GB of RAM, and 3 NVIDIA RTX A6000 GPUs. This was just enough memory to fine-tune all layers of EfficientNetV2M on images 600 by 600 pixels in size in minibatches of 12 (4 per card); if your GPU has more or less memory, you may want to change the image size, minibatch size, and/or flavor of EfficientNet to suit your needs, especially if you plan to do extensive hyperparameter tuning or other kinds of compute-heavy experimentation.
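If you do rebalance batch size against GPU memory, the usual TensorFlow pattern is a MirroredStrategy scope with the global minibatch split evenly across cards. A generic sketch, not the project's training script:

```python
import tensorflow as tf

# Generic multi-GPU setup; the numbers mirror the workstation above:
# 3 GPUs, 4 images per card, 12 images per step.
strategy = tf.distribute.MirroredStrategy()
per_gpu_batch = 4
global_batch = per_gpu_batch * strategy.num_replicas_in_sync

with strategy.scope():
    # Any model built here has its variables mirrored across the GPUs;
    # build_model() is the sketch from the Architecture section.
    model = build_model()

# model.fit() then consumes batches of size global_batch, and the
# strategy splits each one evenly across the cards.
```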

Code

Our code falls into two categories: core Python modules with functions and classes for extracting, preprocessing, and modeling images in DICOM files; and command-line scripts that use those modules to implement various stages of our project. Both should be reusable for other projects, but just to keep things tidy, we put the core modules in their own package, keeping the command-line scripts here. Read on for more information about the latter, and see the package README for info about the former.

Image preprocessing

For our project, all of the x-rays came in as DICOM files, and they were often a bit messy, e.g., with burned-in text on them or with large solid black or white borders. We used the following scripts to clean them up a bit and get them ready for modeling.

  1. image_extraction.py scans a source directory for DICOM files, extracts the files' image arrays, trims any solid borders from the images, and then saves them as .png files to an output directory. If your x-rays are in DICOM files, this is the place to start. (A rough sketch of this step and the next follows the list.)
  2. text_removal.py uses Tesseract to find images with burned-in metadata and move them from their source directory to a new directory. By altering the code for the check_text() function in hamlet.tools.image, the script could also be modified to obscure the text in the images rather than removing the images entirely.
  3. inversion_detection.py uses a lightweight model (EfficientNet B0) to tell whether the colors in an image have been inverted and, if so, inverts them back to normal. This is mainly useful for x-rays that have already been exported from their original DICOM files, since the DICOM files themselves carry a PhotometricInterpretation parameter that specifies whether the image is inverted.
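As a rough illustration of the first two steps, here's a simplified sketch of what the scripts and hamlet.tools.image do; the file paths and thresholds are placeholders, not the scripts' actual defaults:

```python
import numpy as np
import pydicom
import pytesseract
from PIL import Image

def trim_borders(img, tol=10):
    """Drop rows/columns that are almost entirely solid black or white."""
    solid = (img < tol) | (img > 255 - tol)
    rows = ~solid.all(axis=1)
    cols = ~solid.all(axis=0)
    return img[np.ix_(rows, cols)]

# Pull the pixel array out of a DICOM file and scale it to 8-bit grayscale.
ds = pydicom.dcmread('example.dcm')                    # placeholder path
arr = ds.pixel_array.astype(float)
if getattr(ds, 'PhotometricInterpretation', '') == 'MONOCHROME1':
    arr = arr.max() - arr                              # undo inversion
arr = (255 * (arr - arr.min()) / max(np.ptp(arr), 1)).astype(np.uint8)
arr = trim_borders(arr)

# Roughly what a Tesseract-based text check does: OCR the image and flag
# it if anything legible comes back. The length cutoff is a placeholder.
if len(pytesseract.image_to_string(Image.fromarray(arr)).strip()) > 3:
    print('possible burned-in text; route for review')
else:
    Image.fromarray(arr).save('example.png')
```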

Modeling

  1. dataset_splitting.py combines the images from our three main data sources and splits them into training, validation, and test sets. It also selects which of the columns from the DS-3030 form to keep in the final structured dataset. This is less likely to be reusable for other projects, but we're including it here for the sake of transparency.
  2. train.py and test.py train and test the models. To specify which target to use for prediction, use the --task argument (options are abnormal, abnormal_tb, and findings).

Note: data loading for the modeling scripts is handled by tf.data generators. When specifying directories (e.g., --img_dir), the full path should be provided unless otherwise noted, and the images themselves should be in subfolders named img/ in those directories. Please see the command-line arguments in the scripts for more information.
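To make that layout concrete, here's a stand-in loader using a stock Keras utility rather than the scripts' actual generators; the root path is hypothetical:

```python
import tensorflow as tf

# Expected layout: /data/hamlet/train/img/*.png -- you pass the parent
# directory, and the images sit in its img/ subfolder.
ds = tf.keras.utils.image_dataset_from_directory(
    '/data/hamlet/train',   # hypothetical full path containing img/
    labels=None,            # labels come from the structured dataset
    image_size=(600, 600),
    batch_size=12,
    shuffle=False)
```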

Inference

  1. generate_predictions.py runs inference for both model types on a new set of images and writes an image-level CSV to the same directory with the predicted probabilities for each outcome. (See the sketch after this list.)
  2. generate_heatmaps.py generates different kinds of saliency maps for a variety of images. Right now the only method supported is Grad-CAM, but we hope to add others soon.
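In spirit, the prediction step does something like the following; the model file, column names, and paths are assumptions rather than the script's actual interface:

```python
from pathlib import Path
import pandas as pd
import tensorflow as tf

img_dir = Path('/data/new_xrays')                  # hypothetical directory
paths = sorted(img_dir.glob('img/*.png'))
model = tf.keras.models.load_model('binary_model')  # hypothetical file

def load(path, size=600):
    img = tf.io.decode_png(tf.io.read_file(str(path)), channels=3)
    return tf.image.resize(img, (size, size))

# Predict on the whole batch and write one probability per image.
probs = model.predict(tf.stack([load(p) for p in paths]),
                      batch_size=12).ravel()
pd.DataFrame({'file': [p.name for p in paths],
              'abnormal_prob': probs}).to_csv(img_dir / 'predictions.csv',
                                              index=False)
```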

Requirements

The package was written in Python 3.8. For required dependencies, please see requirements.txt.

Visualization

We used the saliency package from the People+AI Research (PAIR) group at Google to make heatmaps that show where the models think there are abnormalities in the x-rays. Here's an example of a Grad-CAM heatmap for our binary model's predictions on a single abnormal x-ray.

[Image: Grad-CAM heatmap]
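For orientation, here's a minimal sketch of how a Grad-CAM mask like this can be produced with the saliency package. The model file, layer name, and target index are assumptions (and our attribution module wraps this differently), so treat it as a starting point rather than the project's method:

```python
import saliency.core as saliency
import tensorflow as tf

# All names below are assumptions: swap in your own model file, the actual
# name of the last convolutional layer (check model.summary()), and the
# output index for the class you care about.
model = tf.keras.models.load_model('binary_model')
conv_layer = model.get_layer('top_conv')
grad_model = tf.keras.Model(model.inputs,
                            [conv_layer.output, model.output])

def call_model_function(images, call_model_args=None, expected_keys=None):
    """Bridges the Keras model to the saliency API; handles both the
    input-gradient methods (e.g., XRAI) and Grad-CAM's conv-layer keys."""
    images = tf.convert_to_tensor(images, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(images)
        conv_out, preds = grad_model(images)
        target = preds[:, 0]  # probability of the 'abnormal' class
    if expected_keys == [saliency.base.INPUT_OUTPUT_GRADIENTS]:
        return {saliency.base.INPUT_OUTPUT_GRADIENTS:
                tape.gradient(target, images).numpy()}
    return {saliency.base.CONVOLUTION_LAYER_VALUES: conv_out.numpy(),
            saliency.base.CONVOLUTION_OUTPUT_GRADIENTS:
                tape.gradient(target, conv_out).numpy()}

# Load one preprocessed x-ray (path and size are placeholders).
img = tf.io.decode_png(tf.io.read_file('xray.png'), channels=3)
img = tf.image.resize(img, (600, 600)).numpy()

mask = saliency.GradCam().GetMask(img, call_model_function)
heatmap = saliency.VisualizeImageGrayscale(mask)  # 2D array in [0, 1]
```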

Grad-CAM often highlights parts of the image that wouldn't be important for making a diagnosis--that is, it's not very specific as an abnormality localizer--but it does tend to capture the abnormalities when they're there. Trying another method, in this case XRAI, gives a different look at the same image.

[Image: XRAI saliency maps]

XRAI chops the image into segments based on their relevance to the final classification (first saliency map), which also lets you black out parts of the original image that don't meet a certain level of relevance (second saliency map, which cuts out regions below the 70th percentile of relevance values). Pretty neat!
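The call_model_function from the Grad-CAM sketch above can also drive XRAI, and the relevance cutoff is just a percentile mask (the 70 mirrors the example above, but it's tunable):

```python
import numpy as np

# Reuses img and call_model_function from the Grad-CAM sketch above.
xrai_mask = saliency.XRAI().GetMask(img, call_model_function)

cutoff = np.percentile(xrai_mask, 70)   # keep the top 30% of relevance
masked_img = img.copy()
masked_img[xrai_mask < cutoff] = 0      # black out low-relevance regions
```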

If you're using our code for your own project, try experimenting with other saliency algorithms, like Integrated Gradients and Blur Integrated Gradients, to see which one works best. See the functions in the attribution module for more info on how to run each method.

Results

Classification metrics

Coming soon.

Embedding explorer

Coming soon.

Related documents

Public Domain Standard Notice

This repository constitutes a work of the United States Government and is not subject to domestic copyright protection under 17 USC § 105. This repository is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the CC0 1.0 Universal public domain dedication. All contributions to this repository will be released under the CC0 dedication. By submitting a pull request you are agreeing to comply with this waiver of copyright interest.

License Standard Notice

The repository utilizes code licensed under the terms of the Apache Software License and therefore is licensed under ASL v2 or later.

The source code in this repository is free: you can redistribute it and/or modify it under the terms of the Apache Software License version 2, or (at your option) any later version.

The source code in this repository is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the Apache Software License for more details.

You should have received a copy of the Apache Software License along with this program. If not, see http://www.apache.org/licenses/LICENSE-2.0.html

Source code forked from other open-source projects inherits the license of the original project.

Privacy Standard Notice

This repository contains only non-sensitive, publicly available data and information. All material and community participation is covered by the Disclaimer and Code of Conduct. For more information about CDC's privacy policy, please visit http://www.cdc.gov/privacy.html.

Contributing Standard Notice

Anyone is encouraged to contribute to the repository by forking and submitting a pull request. (If you are new to GitHub, you might start with a basic tutorial.) By contributing to this project, you grant a world-wide, royalty-free, perpetual, irrevocable, non-exclusive, transferable license to all users under the terms of the Apache Software License v2 or later.

All comments, messages, pull requests, and other submissions received through CDC, including this GitHub page, are subject to the Presidential Records Act and may be archived. Learn more at http://www.cdc.gov/other/privacy.html.

Records Management Standard Notice

This repository is not a source of government records, but is a copy to increase collaboration and collaborative potential. All government records will be published through the CDC web site.

Additional Standard Notices

Please refer to CDC's Template Repository for more information about contributing to this repository, public domain notices and disclaimers, and code of conduct.

General disclaimer

This repository was created for use by CDC programs to collaborate on public health related projects in support of the CDC mission. GitHub is not hosted by the CDC, but is a third-party website used by CDC and its partners to share information and collaborate on software.
