DICOM Processing and Segmentation in Python
DICOM is a pain in the neck. It also happens to be very helpful. As clinical radiologists, we expect post-processing, even taking it for granted. However, the magic that occurs behind the scenes is no easy feat, so let's explore some of that magic.
In this quest, we will be starting from raw DICOM images. We will extract voxel data from DICOM into numpy
arrays, and then perform some low-level operations to normalize and resample the data, made possible using information in the DICOM headers.
The remainder of the quest is dedicated to visualizing the data in 1D (by histogram), 2D, and 3D. Finally, we will create segmentation masks that remove all voxels except for the lungs.
Processing raw DICOM with Python is a little like excavating a dinosaur – you'll want to have a jackhammer to dig, but also a pickaxe and even a toothbrush for the right situations. Python has all the tools, from pre-packaged image processing libraries that handle gigabytes of data at once to byte-level operations on a single voxel.
Update 1/5/2019:
The Kaggle data science bowl 2017 dataset is no longer available. However, for learning and testing purposes you can use the National Lung Screening Trial chest CT dataset.
Hi Howard,
Thanks for the tutorial. I have been looking at the images, and I think I understand most of the preprocessing. However, after the resampling (taking into account that we have different pixel spacing and slice thickness), we obtain volumes of different dimensions for each patient. How would you recommend resizing in order to get, for all patients, a volume of dimensions X x Y x Z with a spacing of 1x1x1? To me this makes more sense for being able to compare them.
Thanks in advance,
Diana
Hey Diana!
If I understand the question correctly, you have a set of DICOM images, each with a different real-life size (L * W * H mm), all of which you want to resample to the same pixel dimensions (X * Y * Z) while maintaining 1 x 1 x 1 mm voxel sizes. The mismatch is actually a natural side effect: resampling isotropically means all voxels are the same size, but each exam ends up with different dimensions because patients are different sizes (this is a common approach). This is because CT scans are commonly obtained at a constant 512 x 512 matrix. Since each patient is a different size, what changes is the "zoom" (field of view), so each voxel represents a different number of mm in real life.
You can imagine that if we scanned an 85-pound patient at the same "zoom" as a 190-pound patient, you wouldn't want the scan to occupy only the middle 250 voxels with a wide rim of air – you'd want to zoom in at the time of acquisition so that it makes full use of the 512 x 512. This means that each CT scan actually represents different dimensions in real life even though they are all 512 x 512 x Z slices.
This is why when we resample to isotropic 1 mm voxels, they all end up being different sizes.
Think of each examination as having a fixed millimeter-per-voxel conversion factor which is based on patient size and differs from exam to exam. If we made the X x Y x Z the same for all exams, then the voxel size could no longer be 1 x 1 x 1 mm, and vice versa.
Hi Howard,
Thanks for your fast response; it's much clearer now what is happening. My aim is to feed a 3D neural network with the volumes, and this requires that all of them have the same shape. I understand the trade-off you mention in the last paragraph; however, is there a transformation you could suggest to get the images into the shape we want?
Thanks in advance,
Diana
I see – you may find success in using the resampling method from the blog post and simply hardcoding the new_shape variable (instead of calculating it). This should force all the resampled data into the same dimensions.
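For instance, a minimal sketch of that idea (resample_to_shape and target_shape are hypothetical names; pick whatever dimensions your network expects, keeping in mind the voxel sizes will then differ per patient):

import numpy as np
import scipy.ndimage

def resample_to_shape(image, target_shape=(128, 256, 256)):
    # Zoom factors chosen so the output lands exactly on target_shape;
    # unlike isotropic resampling, voxel sizes now vary between patients.
    resize_factor = np.array(target_shape) / np.array(image.shape, dtype=np.float64)
    return scipy.ndimage.zoom(image, resize_factor)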
Hi Howard, I’m Minwoo.
Thank you for this tutorial. I am trying to follow your segmentation tutorial, but I have some problems with it.
The problem is that some CT slices don't produce a final mask, or produce a mask for only one lung. The erosion and dilation process is OK. The color labeling process is also OK. But some CT slices still don't show a final mask.
I'll be waiting for your response.
Thank you.
Hi Minwoo – hope you are making good strides in your DICOM manipulation work!
The prop.bbox code (starting at for prop in regions:) in the make_lungmask function is the place you want to take a closer look at. Essentially, the code draws boxes around each of the labeled regions (B = prop.bbox). Then, because boxes that represent lungs are more likely to be shaped a certain way than boxes that represent other labels, you can do the math to determine which label is most likely the lung. In the code, B = (min_row, min_col, max_row, max_col) in absolute pixel locations. Therefore, B[2]-B[0] represents the height of the box that has been drawn.
Try this reference to understand how bbox works: http://scikit-image.org/docs/dev/api/skimage.measure.html
The precise numbers are determined empirically, so to get the right masks you may have to try different numbers.
For instance, if your patients tend to have smaller lungs, then you would adjust the code to get closer to the center of the DICOM image.
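To make that concrete, here is a minimal sketch of that kind of bbox filtering (find_lung_labels is a hypothetical helper; the cutoffs mirror the empirical divide-by-10-multiply-by-9 and divide-by-5-multiply-by-4 numbers discussed later in this thread):

from skimage import measure

def find_lung_labels(label_image, row_size, col_size):
    # Lungs tend to be mid-sized regions away from the image border,
    # so keep only labels whose bounding boxes fit those expectations.
    good_labels = []
    for prop in measure.regionprops(label_image):
        B = prop.bbox  # (min_row, min_col, max_row, max_col)
        height, width = B[2] - B[0], B[3] - B[1]
        if (height < row_size / 10 * 9 and width < col_size / 10 * 9
                and B[0] > row_size / 5 and B[2] < col_size / 5 * 4):
            good_labels.append(prop.label)
    return good_labels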
Thank you Howard.
I have solved my problem.
If I have another problem, I'll be back. See you soon.
Hi,
If we loop through all of the images and process them, what is the best way to load them all into a format for analyzing the saved .npy files with PCA, a neural network, etc.? I.e., I'm not sure how to put all the .npy files together for analysis.
Hey Eric, .npy is a good choice for this, and I would go with a numpy.ndarray so you can have a 3D array. Popular packages like sklearn, TensorFlow, and Keras all support numpy variables.
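A minimal sketch of gathering the saved files into one array (assuming the maskedimages_*.npy naming and output_path from the post, and that all saved volumes share one shape):

import numpy as np
from glob import glob

# Stack every per-patient volume into a 4D ndarray:
# (n_patients, depth, height, width), ready for sklearn/Keras/TensorFlow.
files = sorted(glob(output_path + "maskedimages_*.npy"))
dataset = np.stack([np.load(f) for f in files])
print(dataset.shape)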
Can you show 3D plotting using VTK?
Thanks so much.
Hi khiem, as you mentioned, VTK does support 3D plotting, and does a very good job at it.
However, there is no easy way for me to show it on this blog because Jupyter does not directly support VTK, making it difficult to share the outputs.
Hi Howard Chen,
Thanks for the reply. Can you share the code here? I will run it.
Hey khiem – it’s a little out of scope for this blog post, so maybe in the future I’ll write up something in full about VTK. It’s on my list of things to explore for web-based outputs.
You can use 3D Slicer; it supports VTK and Python, and its 3D visualization is simpler than doing your own 3D plotting.
Hi,
thank you for the tutorials. Since I am new to deep neural networks, I got a clear idea about preprocessing and segmentation. But does it mean we can feed the segmented lung region to any deep learning model (deep CNN, DBN, SVM, and so on), or only to a deep CNN?
Hey Wogayehu,
It should be useful for any number of learning algorithms. The concept of doing segmentation and preprocessing in radiology is to standardize and focus on only the portion of the images you really care about (not always the right thing to do, but it often is). Any learning algorithm can potentially benefit from this.
If you have a background in other learning algorithms like SVM and have used them for statistical learning with standard datasets, you may recall that data preprocessing, normalization, and filtering are often good things to do beforehand. Conceptually, this may be thought of as the imaging equivalent of that.
Thank you very much, Howard, for the reply.
Could you point me to Python pseudo-code resource sites for DBNs in image recognition, especially for medical images?
Dear Howard,
Greetings!! Thank you for sharing such a nice tutorial. I would be happy if I could get a PDF version.
Kassahun
Hi,
I’m glad you’ve found it helpful! We don’t typically publish PDF versions of blog posts. Maybe you can try printing the page from your web browser to a PDF file.
Best,
Howard
Hey, could you please show where the cancer is on slices 97 and 112? I can't see it.
Hi Youssef,
First let's take a look at the right-sided lung (that's actually the patient's LEFT lung, but it's just the way CT is displayed in America by convention).
The cancer is not just on slices 97 and 112; it's on all the slices from 97 through 112. Remember, lung cancer is a 3D object, so you should expect to see it on multiple slices. It is best seen on slice 100 as a cloud-looking round thing in the lung. Hope this helps!
Hi Chen,
This is indeed a very useful tutorial. Could you please let me know whether the images ready to hit the CNN are the data saved by
np.save(output_path + "maskedimages_%d.npy" % (id), imgs)
Hi Hiroshi, that’s correct. You would be saving image data as .npy files.
Hello Chen,
I believe imgs are not the masked images but still the original images? Am I wrong? How can I save the masked images?
Hi Howard and thank you for this great tutorial,
I'm working on an automated segmentation and 3D surface reconstruction script for cancellous and cortical bone. My problem is that, as this is for potential total hip replacement cases, it usually involves patients with severe osteoarthritis. Segmentation using thresholding results in incomplete bone contours, which in turn result in holes in my reconstructed 3D model. Do you have any suggestions on how I should go about tackling this issue?
Many thanks!
Hi Howard,
Thank you very much for your great tutorial. I am new to Python and I do not know how to implement the “Import Packages” section.
Do I have to run the code as a script?
Many thanks for your help
import is Python syntax that includes packages. In short, it’s similar to import statements in Java and other languages. When you’re in Jupyter, the notebook will automatically execute your Python code without your having to save it separately as a script. Hope this helps.
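For example, a cell like this at the top of a Jupyter notebook is all the "Import Packages" section needs, assuming the packages are installed (shown here with the newer pydicom package name):

import pydicom                    # reads DICOM files
import numpy as np                # voxel arrays
import matplotlib.pyplot as plt   # 2D plotting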
Amazing tutorial! Thank you for this!
Interesting
Hi Howard,
I'm studying segmentation of MR brain tissue (segmenting white matter, gray matter, and cerebrospinal fluid from brain tissue), and I want to use a support vector machine for the segmentation. I have the feature vector for each pixel, but I don't know how to get the labels, because SVM needs labels to train. In other words, I don't know how to extract the labels of each tissue from DICOM files. Many thanks for your help!!!
Hi Jeffery – do you mean the ground truth labels for white matter, gray matter, and CSF for each pixel? (FYI – it's technically called a voxel because the pixels have 3 dimensions.)
Unfortunately, radiology tissue type labels don't normally come pre-labeled in the DICOM, which is why segmentation AI is valuable. Establishing ground truth typically requires a human expert who hand-draws regions on each slice. This information is then compiled into a data format the script can read. Then, with the ground truth labels for every image, you can separate your input files into training, validation, and testing sets.
As you can imagine, creating annotated ground truth is laborious, making annotated datasets very valuable. If you are open to using publicly available datasets, take a look at MRBrainS by searching for it on Google.
Best of luck,
Howard
Thanks, Howard. Good luck!!!
Hi Jeffery, I'm working on the same project (brain tumor detection using MRI and machine learning techniques). Could you please send me the code for the project if you have completed it successfully? It would be very helpful for finishing my project.
Thanks in advance!
Mail ID: sai4vanam@gmail.com
Hi Sir,
Great tutorial; it helped a lot. Can you please also explain how to use a convolutional neural network to classify stages of lung cancer and increase accuracy?
Please mail a similar kind of tutorial on training the data and classifying stages.
kazisajid023@gmail.com
Hi,
I am trying to run this code using Python 3.6.4 (installed using Anaconda), but there is no compatible version of pydicom for 3.6.4.
I have also tried it with Python 2.7, but then I run into errors while installing scikit.
What should I do?
Hi Areeb,
Anaconda allows you to install a different version of Python. See here. In short, you would just create a new conda environment, like this:
conda create --name raddq python=3.5
Then, depending on your operating system, you can activate it via either "activate" (Windows) or "source activate" (Linux/Mac) commands.
I have a folder that contains 5 DICOM images:
Shape before resampling (5, 512, 512)
Shape after resampling (175, 340, 340)
I know that before resampling the number of images is 5 and each image is 512×512 (height × width), but after resampling it shows 175. Does this mean the number of images is now 175 and each image is 340×340 (height × width)? Or something else? I am confused by that.
Yes, you probably have 175 resampled slices. If you did isotropic resampling, it probably means the distance between your slices was pretty big, so the algorithm tried to fill in the distance in between (for example, if the recorded slice thickness were 35 mm, 5 slices would be interpolated into 5 × 35 = 175 slices at 1 mm spacing). Unfortunately, since it only had 5 source slices, my guess is your resampled images might have some quality issues.
thank you for your replay Mr.Howard, in your replay,
you said 'the distance between your first and last slice was pretty big'. Is the slice thickness the distance between the first and last slice, or the distance between two slices? Also, you said 'the algorithm tried to fill in the distance in between'. What do you mean by this?
I use this 5-image folder for testing only because I have a PC with low computing power. I have the complete image folder with 133 slices (from LIDC-IDRI). When displaying the slice thickness with the 5-image folder it shows 30 mm, but when I use the 133-image folder it shows 2.5 mm. Please, can you explain this to me? My resampling code is the same as yours.
Sorry, when displaying the slice thickness with the 5-image folder it shows 35 mm, not 30 mm.
Why does my 3D graph show everything in white?
Thanks for the detailed tutorial. I can no longer access the image set from the Kaggle website. Is there an alternate location from where I can download the data?
Dear Howard Chen,
Thank you for your tutorial. Can you help me to remove noise from a big image? In your tutorial you pick out lung cancers, but I need solidification for a big image.
Kaggle blocked access to the data. Can I get data (even fake data) in the same format so the code will run properly?
Meaning: this post looks great and I want to follow it and later adapt it to my use. I am running it on other DICOM data that I have, but a lot of the attributes/fields don't exist, which makes it impossible to understand and use the code.
for example:
slices.sort(key=lambda x: int(x.InstanceNumber))
produces this error:
AttributeError: 'FileDataset' object has no attribute 'InstanceNumber'
please if anyone can send me data to:
assidahan@gmail.com
Hi Assi,
Consider using the National Lung Screening Trial dataset. I've added an update to the blog post to reflect this dataset's availability.
Some research datasets have been scrubbed for patient privacy reasons, and sometimes the scrubbing ends up deleting non-PHI DICOM tags like instance number as well. You can use Python code to test for it:
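A minimal sketch of such a check, assuming pydicom (dicom_file is a placeholder path):

import pydicom

ds = pydicom.dcmread(dicom_file)  # dicom_file: path to one slice (placeholder)
# pydicom exposes DICOM tags as attributes, so hasattr() flags scrubbed tags
if hasattr(ds, "InstanceNumber"):
    print("InstanceNumber:", ds.InstanceNumber)
else:
    print("InstanceNumber was stripped from this file")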
Hi Howard,
I am using this tutorial to try to understand the pipeline, and I wish to follow it and modify it later for my use, because I have never worked with medical data before.
I'm working with the LUNA16 dataset, which comes in a different format.
I replaced the load_scan function with the function below, but I cannot match the two.
def load_itk(filename):
    # Reads the image using SimpleITK
    itkimage = sitk.ReadImage(filename)
    # Convert to a numpy array (axis order comes back as z, y, x)
    ct_scan = sitk.GetArrayFromImage(itkimage)
    return ct_scan
If you can help me, thank you in advance.
Unfortunately I don't have a lot of experience with SimpleITK.
One thing that might help is to understand that your ct_scan variable in the code actually contains just one image, not the entire scan. So you will still need to wrap load_itk() in a loop and process every DICOM file in the CT scan. Each file will contain a DICOM header that you can extract (unfortunately I'm not familiar with the exact syntax), and since DICOM is very standardized, you might be able to find ways to extract tags like SliceThickness or PixelSpacing.
The other way to do it is to go by the actual official DICOM tag numbers using the official standard: http://dicom.nema.org/medical/dicom/2018a/output/chtml/part03/sect_C.7.6.2.html. Once you get a general sense of how DICOM headers and pixel data work, the SimpleITK documentation might be of value: https://simpleitk.readthedocs.io/en/master/Examples/DicomImagePrintTags/Documentation.html.
Thank you for your reply
Hi Howard,
Following up on the question about dimensions: since the spacing in mm differs for each patient, if we resample to a fixed spacing, say 1x1x1 mm, the output dimensions of the resampling will be different for each patient (e.g. shape before resampling (145, 512, 512), shape after resampling (362, 370, 370)). For a convolutional neural network, we usually need a fixed data size. Can you explain a little bit more how to get to a fixed size without resizing/downsampling the resampled output?
Any kind of response will be appreciated, thank you.
Hi Howard Chen Sir, thanks for the tutorial, which helped me understand how to deal with DICOM files. In the tutorial you used CT scan images of lung cancer. I'm currently working on my project, brain tumor detection using MRI and machine learning techniques, where I use MRI images of the brain.
I don't understand how to preprocess them and then do segmentation. Can you please help me with how to do it, if you have any tutorial related to my problem? It would be very helpful if you could provide me with code in Python (Spyder).
Anyone who has worked on MRI brain tumor DICOM, please help me out.
Mail ID: sai4vanam@gmail.com
Any kind of response will be appreciated, Thank you !
Hi,
Could you explain the concept of the mask to me in a bit more detail?
thank you in advance
Hi Nour,
The mask is a two-dimensional array of zeroes and ones. When you do an element-wise multiplication with an image of the same dimensions, the pixels corresponding to 0's are zeroed out, while the pixels corresponding to 1's take on the pixel value of the image. It is a way to "crop out" and discard areas of an image that you don't need, or to keep only the area that you do need. Hope this helps!
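A tiny worked example of that multiplication:

import numpy as np

img = np.array([[12, 34],
                [56, 78]])
mask = np.array([[0, 1],
                 [1, 0]])   # keep only where the mask is 1
print(mask * img)           # [[ 0 34]
                            #  [56  0]]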
Hello Howard,
Yes, thank you, that helped me a lot.
So the resized and segmented images will be saved by this line (np.save(output_path + "maskedimages_%d.npy" % (id), imgs)), and they are what we will feed to the CNN algorithm?
Thank you for your answers
I believe imgs here should be the numpy array from masked_lung, and then the saved images go to CNN. Is that correct, Howard?
Hi, Howard,
We tried to follow your steps to get the 3D image, and after checking the DICOM files we almost gave up XD.
An 11.8 TB dataset is too big for us to download and operate on.
Do you have any smaller files with similar features?
Cheers!
Hello Howard,
First of all, thanks for your tutorial. I would like to know how to save the images that have undergone the masking process and recreate a 3D volume rendering from these masked images with plotly.
I tried:
np.save(output_path + "maskedimages_%d.npy" % (id), np.asarray(masked_lung)) with masked_lung being the list of imgs_after_resamp that has been masked.
And I'm trying to create the plotly dynamic graph with:
imgs_to_process = np.load(output_path + "maskedimages_%d.npy" % (id))
v, f = make_mesh(imgs_to_process)
plotly_3d(v, f)
But I get the error:
ValueError                                Traceback (most recent call last)
in
      1 imgs_to_process = np.load(output_path + "maskedimages_%d.npy" % (id))
----> 2 v, f = make_mesh(imgs_to_process)
      3 plotly_3d(v, f)
in make_mesh(image, threshold, step_size)
      4     from skimage import measure
      5
----> 6     verts, faces, norms, vals = measure.marching_cubes_lewiner(p, threshold, step_size=step_size, allow_degenerate=True)
      7     return verts, faces
      8
~\AppData\Local\Continuum\anaconda3\lib\site-packages\skimage\measure\_marching_cubes_lewiner.py in marching_cubes_lewiner(volume, level, spacing, gradient_direction, step_size, allow_degenerate, use_classic)
    133     level = float(level)
    134     if level < volume.min() or level > volume.max():
--> 135         raise ValueError("Surface level must be within volume data range.")
    136     # spacing
    137     if len(spacing) != 3:
ValueError: Surface level must be within volume data range.
Hi Celia, sorry, I am not sure how to fix your error because I don't see the content that caused it here. It looks like you put a different array through make_mesh, so it would depend on the content of that array. As a first step, I would recommend paying attention to the dimensions of your mask array and making sure they are the same as the actual images. Hope this helps.
@Howard, thanks for a great write-up!
This comment is a little over a year old, but I would still like to add to it for anyone who might come across the same problem.
@Celia, I encountered the same error. I checked, and as Howard mentioned, it is due to a different array than expected – not because of its shape, but because of the image array's elements.
In short, marching_cubes() takes an "image array" and a "surface level" as your inputs. If the "surface level" to be extracted is not within the range of your "image array", you will get the above error.
Check out this link for more details:
https://programtalk.com/vs2/python/11620/scikit-image/skimage/measure/_marching_cubes_classic.py/
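Building on that, a small guard in code (a sketch; make_mesh is the helper from the post):

def checked_make_mesh(image, threshold=-300, step_size=1):
    # marching_cubes raises ValueError when the surface level is outside
    # the volume's intensity range -- fail early with a clearer message.
    if not (image.min() < threshold < image.max()):
        raise ValueError("threshold %r outside volume range [%s, %s]"
                         % (threshold, image.min(), image.max()))
    return make_mesh(image, threshold, step_size)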
Hi Howard Chen!
I need your help with how to load an image, like in this code:
import numpy
from PIL import Image

def median_filter(data, filter_size):
    indexer = filter_size // 2
    data_final = numpy.zeros((len(data), len(data[0])))
    for i in range(len(data)):
        for j in range(len(data[0])):
            # collect the filter window around (i, j), padding edges with 0
            temp = []
            for zi in range(filter_size):
                for zj in range(filter_size):
                    ri, rj = i + zi - indexer, j + zj - indexer
                    if 0 <= ri < len(data) and 0 <= rj < len(data[0]):
                        temp.append(data[ri][rj])
                    else:
                        temp.append(0)
            temp.sort()
            data_final[i][j] = temp[len(temp) // 2]  # median of the window
    return data_final

def main():
    img = Image.open("File name.extension").convert("L")
    arr = numpy.array(Image.open("File name.extension"))
    removed_noise = median_filter(arr, 4)
    img = Image.fromarray(removed_noise)

main()
I would love to know how I can do this with DICOM images – save DICOM images, use pydicom to load an image and then save it, for example.
Thanks for your attention!
Rômulo, early in this tutorial there is a section on how to read in DICOM images with pydicom. Take a look. Once you get to the pixel data, you can convert to numpy arrays as described.
Greetings Howard, thanks for your answer, but I really need a median filter for DICOM images. Do you have any tips for me, or any code you know of that is close to a median filter that I can use as a mold for my science project? I would really appreciate your help. I'm from Brazil and I have a strong passion for Python programming.
Once you have a numpy array, you can easily apply a median filter to it using scipy.signal.medfilt or scipy.ndimage.median_filter.
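For example, a sketch along these lines (filenames are placeholders; this assumes uncompressed pixel data, since writing raw bytes back into a compressed file would break it):

import pydicom
import scipy.ndimage

ds = pydicom.dcmread("slice001.dcm")                  # placeholder filename
arr = ds.pixel_array                                  # raw pixels as numpy
denoised = scipy.ndimage.median_filter(arr, size=3)   # 3x3 median window
ds.PixelData = denoised.astype(arr.dtype).tobytes()   # write pixels back
ds.save_as("slice001_filtered.dcm")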
Hi, Howard
I am trying your 'DICOM Processing and Segmentation in Python' tutorial,
but part of the Helper Functions section printed an error message.
The error message is: AttributeError: 'FileDataset' object has no attribute 'RescaleIntercept'
I searched Google for this error, but I could not solve it.
Can you help me?
Hello Howard,
Amazing tutorial, thank you !
I have a question about the mask.
If I wanted to extract the heart instead of the lungs, what would be the differences?
I tried to change the mask by shrinking the focus (the middle array in the code), but it didn't change anything.
Am I modifying the wrong element?
Or is this method only for lungs and not applicable to soft tissues?
Thank you in advance
Hi
thank you for this tutorial – very interesting.
I have issues when I run the helper functions:
StopIteration                             Traceback (most recent call last)
~/anaconda3/lib/python3.7/site-packages/dicom/filereader.py in data_element_generator(fp, is_implicit_VR, is_little_endian, stop_when, defer_size, encoding)
    206                     fp.seek(value_tell - rewind_length)
--> 207                     raise StopIteration
    208
StopIteration:

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
in
      1 id=0
----> 2 patient = load_scan(data_path)
      3 imgs = get_pixels_hu(patient)
in load_scan(path)
      4
      5 def load_scan(path):
----> 6     slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
      7     slices.sort(key = lambda x: int(x.InstanceNumber))
      8     try:
in (.0)
      4
      5 def load_scan(path):
----> 6     slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
      7     slices.sort(key = lambda x: int(x.InstanceNumber))
      8     try:
~/anaconda3/lib/python3.7/site-packages/dicom/filereader.py in read_file(fp, defer_size, stop_before_pixels, force)
    612     try:
    613         dataset = read_partial(fp, stop_when, defer_size=defer_size,
--> 614                                force=force)
    615     finally:
    616         if not caller_owns_file:
~/anaconda3/lib/python3.7/site-packages/dicom/filereader.py in read_partial(fileobj, stop_when, defer_size, force)
    519         is_little_endian = True
    520     if preamble:
--> 521         file_meta_dataset = _read_file_meta_info(fileobj)
    522         transfer_syntax = file_meta_dataset.TransferSyntaxUID
    523         if transfer_syntax == dicom.UID.ImplicitVRLittleEndian:
~/anaconda3/lib/python3.7/site-packages/dicom/filereader.py in _read_file_meta_info(fp)
    445     fp.seek(fp_save)
    446     file_meta = read_dataset(fp, is_implicit_VR=False,
--> 447                              is_little_endian=True, stop_when=not_group2)
    448     fp_now = fp.tell()
    449     if expected_ds_start and fp_now != expected_ds_start:
~/anaconda3/lib/python3.7/site-packages/dicom/filereader.py in read_dataset(fp, is_implicit_VR, is_little_endian, bytelength, stop_when, defer_size, parent_encoding)
    303     try:
    304         while (bytelength is None) or (fp.tell() - fpStart < bytelength):
--> 305             raw_data_element = next(de_gen)
    306             # Read data elements. Stop on some errors, but return what was read
    307             tag = raw_data_element.tag
RuntimeError: generator raised StopIteration
Hello Howard,
I have some puzzles about the following code, which converts pixel values to HU:
intercept = scans[0].RescaleIntercept
slope = scans[0].RescaleSlope
if slope != 1:
    image = slope * image.astype(np.float64)
    image = image.astype(np.int16)
I tried to print the values of intercept and slope, and found that their values are not plain numbers but objects of type 'pydicom.valuerep.DSfloat'. In this case, the if-statement will never be executed.
Could you give me some explanations? Thanks!
Thanks so much for this, it has been super useful.
Here are some of the minor modifications I have made to the code to get it to run in June 2019 with Python 3:
Hello, Luke!
Thanks for your modifications; I fixed all the bugs. But I have a non-bug problem and would appreciate it if you could give me a hand.
I tried to save the resampled 3D image as DCM files using the following code:
img_s, img_r, img_c = image.shape #image is the 3D image after resampling
image = image.astype(np.uint16)
slice_tmp = slice # slice=slices[0]
slice_tmp.SliceThickness = 1.0
slice_tmp.SpacingBetweenSlices = 1.0
slice_tmp.Rows = img_r
slice_tmp.Columns = img_c
slice_tmp.PixelData = image[1].tobytes()
slice_tmp.save_as(filepath)
Then I used the MicroDicom viewer to display the saved .dcm file, but found it is just a binary image (even though the pixel values of image[1] are not binary), and I cannot adjust the window width and center. Do you have any idea?
Hi Howard,
Amazing insight into 3D visualization. Could you please help me with the line 'make_mesh(image, threshold=-300, step_size=1):'? Why are you setting the threshold to -300? If I want to visualize soft tissue (organs in my CT image of the abdomen), how do I change this part of the code accordingly?
# Updated code working on Python 3.7
# Hopefully useful to someone

import pydicom
import numpy as np
import os
import matplotlib.pyplot as plt
from glob import glob
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import scipy.ndimage
from skimage import morphology
from skimage import measure
from skimage.transform import resize
from sklearn.cluster import KMeans
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
from plotly.tools import FigureFactory as FF
from plotly.graph_objs import *

data_path = r"C:\Users\Luis\Desktop\VH DICOM"
output_path = working_path = r"C:\Users\Luis\Desktop\VH DICOM\segmented"
g = glob(data_path + '/*.dcm')

# Print out the first 5 file names to verify we're in the right folder.
print("Total of %d DICOM images.\nFirst 5 filenames:" % len(g))
print(','.join(g[:5]))

# Loop over the image files and store everything into a list.
def load_scan(path):
    slices = [pydicom.read_file(path + '/' + s, force=True) for s in os.listdir(path)]
    slices.sort(key=lambda x: int(x.InstanceNumber))
    try:
        slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2])
    except:
        slice_thickness = np.abs(slices[0].SliceLocation - slices[1].SliceLocation)
    for s in slices:
        s.SliceThickness = slice_thickness
    return slices

def get_pixels_hu(scans):
    image = np.stack([s.pixel_array for s in scans])
    # Convert to int16 (from sometimes int16),
    # should be possible as values should always be low enough (<32k)
    image = image.astype(np.int16)
    # Set outside-of-scan pixels to 0 (the padding value is often -2000)
    image[image == -2000] = 0
    # Convert to Hounsfield units (HU)
    intercept = scans[0].RescaleIntercept
    slope = scans[0].RescaleSlope
    if slope != 1:
        image = slope * image.astype(np.float64)
        image = image.astype(np.int16)
    image += np.int16(intercept)
    return np.array(image, dtype=np.int16)

id = 0
patient = load_scan(data_path)
imgs = get_pixels_hu(patient)
np.save(output_path + 'fullimages_%d.npy' % id, imgs)

id = 0
imgs_to_process = np.load(output_path + 'fullimages_{}.npy'.format(id))

def sample_stack(stack, rows=6, cols=6, start_with=10, show_every=3):
    fig, ax = plt.subplots(rows, cols, figsize=[12, 12])
    for i in range(rows * cols):
        ind = start_with + i * show_every
        ax[int(i / rows), int(i % rows)].set_title('slice %d' % ind)
        ax[int(i / rows), int(i % rows)].imshow(stack[ind], cmap='gray')
        ax[int(i / rows), int(i % rows)].axis('off')
    plt.show()

sample_stack(imgs_to_process)

print("Slice Thickness: %f" % patient[0].SliceThickness)
print("Pixel Spacing (row, col): (%f, %f) " % (patient[0].PixelSpacing[0], patient[0].PixelSpacing[1]))

id = 0
imgs_to_process = np.load(output_path + 'fullimages_{}.npy'.format(id))

def resample(image, scan, new_spacing=[1, 1, 1]):
    # Determine current pixel spacing
    spacing = map(float, ([scan[0].SliceThickness] + list(scan[0].PixelSpacing)))
    spacing = np.array(list(spacing))
    resize_factor = spacing / new_spacing
    new_real_shape = image.shape * resize_factor
    new_shape = np.round(new_real_shape)
    real_resize_factor = new_shape / image.shape
    new_spacing = spacing / real_resize_factor
    image = scipy.ndimage.zoom(image, real_resize_factor)
    return image, new_spacing

print("Shape before resampling\t", imgs_to_process.shape)
imgs_after_resamp, spacing = resample(imgs_to_process, patient, [1, 1, 1])
print("Shape after resampling\t", imgs_after_resamp.shape)

def make_mesh(image, threshold=-300, step_size=1):
    print("Transposing surface")
    p = image.transpose(2, 1, 0)
    print("Calculating surface")
    verts, faces, norms, vals = measure.marching_cubes_lewiner(p, threshold, step_size=step_size, allow_degenerate=True)
    return verts, faces

def plotly_3d(verts, faces):
    x, y, z = zip(*verts)
    print("Drawing")
    colormap = ['rgb(255,105,180)', 'rgb(255,255,51)', 'rgb(0,191,255)']
    fig = FF.create_trisurf(x=x, y=y, z=z,
                            plot_edges=False,
                            colormap=colormap,
                            simplices=faces,
                            backgroundcolor='rgb(64, 64, 64)',
                            title="Interactive Visualization")
    iplot(fig)

def plt_3d(verts, faces):
    print("Drawing")
    x, y, z = zip(*verts)
    fig = plt.figure(figsize=(10, 10))
    ax = fig.add_subplot(111, projection='3d')
    # Fancy indexing: verts[faces] generates a collection of triangles
    mesh = Poly3DCollection(verts[faces], linewidths=0.05, alpha=1)
    mesh.set_facecolor([1, 1, 0.9])
    ax.add_collection3d(mesh)
    ax.set_xlim(0, max(x))
    ax.set_ylim(0, max(y))
    ax.set_zlim(0, max(z))
    ax.set_facecolor((0.7, 0.7, 0.7))
    plt.show()

v, f = make_mesh(imgs_after_resamp, 350)
plt_3d(v, f)

v, f = make_mesh(imgs_after_resamp, 350, 2)
plotly_3d(v, f)
Thanks for sharing your code. I am using Python 3.7.3. I get an attribute error when I run
patient = load_scan(data_path), as shown below:
AttributeError                            Traceback (most recent call last)
in
     39
     40 id=0
---> 41 patient = load_scan(data_path)
     42 imgs = get_pixels_hu(patient)
in load_scan(path)
      5 def load_scan(path):
      6     slices = [pydicom.read_file(path + '/' + s, force=True) for s in os.listdir(path)]
----> 7     slices.sort(key = lambda x: int(x.InstanceNumber))
      8     try:
      9         slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2])
in (x)
      5 def load_scan(path):
      6     slices = [pydicom.read_file(path + '/' + s, force=True) for s in os.listdir(path)]
----> 7     slices.sort(key = lambda x: int(x.InstanceNumber))
      8     try:
      9         slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2])
~\Python\anaconda3\lib\site-packages\pydicom\dataset.py in __getattr__(self, name)
    743         if tag not in self._dict:  # DICOM DataElement not in the Dataset
    744             # Try the base class attribute getter (fix for issue 332)
--> 745             return object.__getattribute__(self, name)
    746         else:
    747             data_elem = self[tag]
AttributeError: 'FileDataset' object has no attribute 'InstanceNumber'
Any ideas?
You are lacking some dataset info. DICOM files have a lot of data that needs to be captured.
Dear Luis
I'm working on the LIDC dataset for lung cancer detection, so I downloaded the complete dataset (120 GB). It contains patient-wise folders, and I am unable to understand how to categorize them and apply segmentation.
The dataset also contains an Excel file with a lot of information, but I'm confused about how to categorize the data.
So please suggest how to categorize lung cancer patients' data in the LIDC dataset.
Hello Bhaskar,
I have never run the code because I do not have a DICOM dataset. I know I updated it correctly because it compiles until it needs the dataset info. You can run the code with your dataset, and it will tell you one by one which info is missing. You will need to add them one by one, so it is tremendous work. Other than that, you might have files that are not DICOM inside that folder.
Sorry for the late reply,
Luis
Have you solved this issue? I’m experiencing the same problem.
I solved this problem. In my case, there were files other than DICOM images in the .dcm directory.
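For anyone hitting the same AttributeError, a defensive loader that skips non-DICOM files might look like this (a sketch based on the post's load_scan; assumes the image files carry a .dcm extension):

import os
import pydicom

def load_scan_safe(path):
    slices = []
    for fname in os.listdir(path):
        if not fname.lower().endswith(".dcm"):
            continue  # skip spreadsheets, thumbnails, subfolders, etc.
        slices.append(pydicom.dcmread(os.path.join(path, fname)))
    slices.sort(key=lambda x: int(x.InstanceNumber))
    return slices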
Hi,
after implementing the above, I need the 3D output to be viewable in the front end (programmed in PHP).
I don't know how to do that.
Any help please – it's urgent.
Hi Howard,
thanks for your tutorial. I have an issue when running the get_pixels_hu function:
OSError                                   Traceback (most recent call last)
~\Anaconda3\lib\site-packages\pydicom\pixel_data_handlers\pillow_handler.py in get_pixeldata(dicom_dataset)
    196                 fio = io.BytesIO(pixel_data)
--> 197                 decompressed_image = Image.open(fio)
    198             except IOError as e:
~\Anaconda3\lib\site-packages\PIL\Image.py in open(fp, mode)
   2817         warnings.warn(message)
-> 2818     raise IOError("cannot identify image file %r" % (filename if filename else fp))
   2819
OSError: cannot identify image file <_io.BytesIO object at 0x00000258713FEFA8>

During handling of the above exception, another exception occurred:

NotImplementedError                       Traceback (most recent call last)
in
     20 id=0
     21 patient=load_scan(data_path)
---> 22 imgs=get_pixels_hu(patient)
in get_pixels_hu(scans)
      1 # helper function no. 2: converts voxels (raw data) to HU (Hounsfield units)
      2 def get_pixels_hu(scans):
----> 3     image = np.stack([s.pixel_array for s in scans])
      4     #ds=patient[1]
      5     #im = pydicom.pixel_data_handlers.rle_handler.get_pixeldata(ds, rle_segment_order = ">")
in (.0)
      1 # helper function no. 2: converts voxels (raw data) to HU (Hounsfield units)
      2 def get_pixels_hu(scans):
----> 3     image = np.stack([s.pixel_array for s in scans])
      4 #ds=patient[1]
      5 #im = pydicom.pixel_data_handlers.rle_handler.get_pixeldata(ds, rle_segment_order = ">")
~\Anaconda3\lib\site-packages\pydicom\dataset.py in pixel_array(self)
   1360         The Pixel Data (7FE0,0010) as a NumPy ndarray.
   1361         """
-> 1362         self.convert_pixel_data()
   1363         return self._pixel_array
   1364
~\Anaconda3\lib\site-packages\pydicom\dataset.py in convert_pixel_data(self)
   1306             )
   1307
-> 1308             raise last_exception
   1309
   1310     def decompress(self):
~\Anaconda3\lib\site-packages\pydicom\dataset.py in convert_pixel_data(self)
   1274             try:
   1275                 # Use the handler to get a 1D numpy array of the pixel data
-> 1276                 arr = handler.get_pixeldata(self)
   1277                 self._pixel_array = reshape_pixel_array(self, arr)
   1278
~\Anaconda3\lib\site-packages\pydicom\pixel_data_handlers\pillow_handler.py in get_pixeldata(dicom_dataset)
    197                 decompressed_image = Image.open(fio)
    198             except IOError as e:
--> 199                 raise NotImplementedError(e.strerror)
    200             UncompressedPixelData.extend(decompressed_image.tobytes())
    201         except Exception:
NotImplementedError: None
I think the problem is the conversion of the pixel data (7FE0,0010). Could you help me please?
thank you in advance!
My guess is the code here was written for an older version of the various Python packages. Specifically, it looks like pydicom has undergone some major revisions since the original post.
Admittedly, I haven't had time to update the post for 2020, which might be one reason these errors are popping up.
Alternatively, you might be using image data which contains some sort of compressed format the pydicom package doesn't support.
Dear Howard
Thanks for your tutorial. I'm working on the LIDC dataset, which contains a large volume of data. When I applied segmentation, it showed a memory error.
So please give me guidance on how to handle the LIDC dataset.
I'm using the LIDC dataset for lung cancer detection; the dataset contains .dcm images for 1080 patients (folders). Should I apply segmentation patient-wise, or is there another mechanism?
If I apply it patient-wise, I will get many .npy files (images and masks). How can I train a deep learning model on all of these files?
If anyone is working on this, please advise me so my work can progress.
Hello,
Thank you for your tutorial; it is very helpful.
I have one question regarding the preprocessing step.
If I want to set the offset of my CT dataset to positive values and scale the pixel intensity by 2000 Hounsfield units,
how can I do that?
hello sir, I am a BE student working through this tutorial, but I get an error in the segmentation code about the 'img' argument:
Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\tkinter\__init__.py", line 1705, in __call__
    return self.func(*args)
  File "C:/Users/User/PycharmProjects/python/Project_lung_cancer/GUI6.py", line 148, in make_lungmask
    row_size = img.shape[0]
UnboundLocalError: local variable 'img' referenced before assignment
This is the error from the segmentation code. If you know the answer to this error, please mail me or reply to this post.
My mail: saurabhshinde778@gmail.com
You might also want to consider making DICOM enhanced multi-frame images out of your individual slices before you import them into Python (even though this format is not produced by many of the scanner vendors, there are third-party tools like my com.pixelmed.dicom.MultiFrameImageFactory).
For persistence (serialization) and re-import of segmentations, consider the DICOM SEG object (see, for example, the recently released highdicom implementation for Python, http://github.com/MGHComputationalPathology/highdicom). This also handles export of quantitative results.
Your helper functions don’t work – I’m running get_pixels_hu() and I’m getting: RuntimeError: The following handlers are available to decode the pixel data however they are missing required dependencies: GDCM (req. GDCM)
As the error states, you have to install the GDCM module (for example, via conda install -c conda-forge gdcm).
Hi, Howard,
Loved your blog. I am new to working with CT scans, and therefore there are a few things I do not understand logically. Maybe I need a further understanding of CT scans, but here are the queries that I have:
Why img[img == max_] = mean? In the documentation, you said you did it to improve threshold finding.
In the section calculating "middle", why did you specifically divide row_size and col_size by 5 and multiply by 4?
I have a similar question about the bounding-box part: why divide by 10 and multiply by 9?
For all of these questions, could you give me an intuitive sense of why you did them?
Thank You,
Kanishk
Hi Kanishk,
These are good questions. They are not specific to CT processing at all; they are an empiric approach to finding the right area of the image to work on.
The code defining "middle" attempts to find the approximate average pixel intensity in the scan. It's a chicken-and-egg problem: you can only identify the true mean if you have a mask for the thorax, but the whole point of the calculation is to create a mask for the lungs. So, empirically, the code calculates the coordinates of the "middle" portion of the scan.
Think of "divide by 5, multiply by 4" as "multiply by 0.8." Likewise, you'll see another part of the same line of code that divides by 5 (i.e. "multiply by 0.2"). It's just an empiric way to take the center 60% of the image, between 0.2 and 0.8 of the image in that dimension.
For the bounding box, similarly, we are looking for the center 80% of the image in that dimension (with 10% of the pixels on either side).
How did we come up with the 80% and 90% cutoffs? Well, they just seemed to work well for this particular dataset through trial and error. Depending on the body part, how "zoomed in" the patient is, or the organ of interest for segmentation, these numbers would change.
Over time, you'll develop your own algorithm to dynamically determine these cutoffs, or – with enough annotated data – build an ML model to create the mask.
But since everyone has to start somewhere and not everyone has access to well-annotated data, trying a few numbers and picking one that works is a reasonable first step.
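In code terms, the "middle" crop is roughly what make_lungmask does (a sketch; img is one 2D slice):

import numpy as np

row_size, col_size = img.shape
# central 60% of the slice, i.e. 0.2 to 0.8 in each dimension
middle = img[int(row_size * 0.2):int(row_size * 0.8),
             int(col_size * 0.2):int(col_size * 0.8)]
mean = np.mean(middle)  # stand-in for the true thorax mean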
Hope this helps.
I forgot to answer the part about setting img[img==max] and img[img==min] to mean values.
This has to do with eliminating imaging artifacts (i.e. values that don't represent actual data). See the section on the histogram: even though HU should only go down to -1000, the CT images contain a lot of -2000 values. When you look at actual image examples, you'll realize that CT images actually come in circles (not surprising, because the machine is donut-shaped!).
However, DICOMs, like most image formats, are rectangles, so something has to be done to fit a circle into a square picture. Usually the scanner software gives those extra/fake voxels very high or very low values. These values can mess up our threshold calculations, so the code you see is just one way to deal with these extreme numbers.
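In code, that cleanup is just a couple of lines (a sketch mirroring make_lungmask; img and mean are as in the sketch above):

import numpy as np

# Replace the scanner's padding values (e.g. the -2000s outside the
# circular field of view) with the mean so they don't skew thresholding.
img[img == np.max(img)] = mean
img[img == np.min(img)] = mean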
Hi,
what is the need for calculating slice thickness?
And what is the meaning of slices[0].ImagePositionPatient[2]?
Hello!
Thank you for the great tutorial.
Is it possible to add an upper and lower threshold (i.e. 0-20 HU)?
Hi Howard,
how can I adapt the code to detect fat or muscle tissue?
Thanks
Josef
When I ran the code, it worked on one series, but on another it didn't and showed: "InvalidDicomError: File is missing DICOM File Meta Information header or the 'DICM' prefix is missing from the header. Use force=True to force reading." Does anyone know why?
Is there a way to use pydicom to generate DRRs (digitally reconstructed radiographs)?
Hi Howard,
can I apply the code to liver segmentation? And if so, can you suggest the HU values to use?
Thanks
linh
Liver HU is around 55 on unenhanced CT; post-contrast values vary widely because there are a variety of phases (arterial, standard venous, portal venous, delayed), depending on what kind of scan you want to segment the liver on. And depending on whether the patient has issues like heart failure, the timing can be off. In all of these cases, though, you will run into trouble using the method described here, because liver overlaps so much with other soft-tissue organs in the abdomen. You will probably need to consider deep-learning-based contouring methods trained on manually segmented data. It's a bummer, but the liver, and liver tumors, are rather notorious in this regard. Here's one example: https://github.com/ztzhao6/Liver-Segmentation-with-AttentionUNet
Hello,
is it possible to use this for reconstructing mechanical parts?