
Sunday 29 June 2014

dcm4che-tool-dcmvalidate

usage: dcmvalidate --iod <iod-file> [<dicom-file>..][<directory>..]

Utility to validate DICOM objects according to a specified Information Object
Definition.
Options:
 -h,--help             display this help and exit
    --iod <iod-file>   path to xml file with Information Object Definition
 -V,--version          output version information and exit
Example:
$ dcmvalidate --iod etc/dcmvalidate/dicomdir-iod.xml DICOMDIR
 Validate DICOMDIR against the IOD specified in etc/dcmvalidate/dicomdir-iod.xml
 
Download Code : dcm4che-tool-dcmvalidate 
For Your reference : dicomdir-iod.xml

Monday 16 June 2014

OTech: How Dirty Is Your DICOM Data?


This looks like a nice image, but the metadata could be totally incorrect or corrupted.
  • If you were to take a snapshot of any DICOM archive and check the image headers for correctness, I would argue that there are quite a few hidden problems that you might not know about.

  • Errors in a DICOM header can cause images to be incorrectly displayed, incorrectly added to the database, or rejected outright by the PACS. By DICOM errors, I don't mean an incorrect Accession Number or patient name, or a duplicate ID, but rather a violation of the rules defined in the DICOM standard for a particular field entry (see the sketch after this list).
  • For more details: OTech: How Dirty Is Your DICOM Data?
  • The following video shows how it is done.
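
As a rough illustration of what such a rule violation looks like in code, here is a minimal sketch using the dcm4che2 API that appears in the tag-editing post later on this blog. It flags a Patient's Birth Date (0010,0030) that does not follow the DA value representation (YYYYMMDD). The file path and class name are placeholders.

    import java.io.File;
    import org.dcm4che2.data.BasicDicomObject;
    import org.dcm4che2.data.DicomObject;
    import org.dcm4che2.data.Tag;
    import org.dcm4che2.io.DicomInputStream;

    public class HeaderSanityCheck {
        public static void main(String[] args) throws Exception {
            DicomObject dcmObj = new BasicDicomObject();
            // Placeholder path: point this at any DICOM file you want to check
            DicomInputStream din = new DicomInputStream(new File("D:\\SAMPLE\\image1.dcm"));
            try {
                din.readDicomObject(dcmObj, -1);
            } finally {
                din.close();
            }
            // DA values must be exactly eight digits: YYYYMMDD
            String birthDate = dcmObj.getString(Tag.PatientBirthDate);
            if (birthDate != null && !birthDate.matches("\\d{8}")) {
                System.out.println("Dirty header: PatientBirthDate '" + birthDate
                        + "' is not a valid DA (YYYYMMDD) value");
            }
        }
    }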

Friday 25 April 2014

Rotate Cross Lines With help Of Circle


                             

The snippet below (from a JavaFX-style mouse handler) computes the angle between the circle's centre and the current drag position, then applies that rotation to the circle and both cross lines:

    // Angle (in radians) from the circle's centre to the mouse position
    double radians = Math.atan2(event.getY() - osPoint.getCenterY(),
                                event.getX() - osPoint.getCenterX());

    // Convert to degrees and add a 90-degree offset
    double degrees = Math.round(radians * 180 / Math.PI);
    double rotation = degrees + 90;

    // Rotate the circle and both cross lines by the same amount
    osPoint.setRotate(osPoint.getRotate() + rotation);
    osHline.setRotate(osHline.getRotate() + rotation);
    osVline.setRotate(osVline.getRotate() + rotation);

Thursday 13 March 2014

Regarding calibration in the dicom image using ImageJ

1. I assume you have these in DICOM format. In ImageJ it is simple to import a single DICOM image, and the calibration is done for you (the DICOM header contains fields for slope and intercept, usually 1 and -1024). Since you have a stack, this is probably not much good. For importing tomographic studies I use the "Import Dicom sequence" plugin available here: http://www.iftm.de/telemedizin/dcmimex.htm

From what I remember it is not the most straightforward plugin to install, but it will nicely import a sequence of DICOM images as a stack. It does not, however, seem to calibrate the gray levels into Hounsfield units. To do this choose Analyse->Calibrate. Choose "Straight line" as the function, type -1024 in the left box and 0 in the right box. When you press OK you get a straight-line graph of the calibration and a label with the straight-line formula y = a + bx. a should be -1024 and b should be 1. If they are, then we have a calibration to HU.

To demonstrate the HU calibration, move the cursor around the image and observe the "value" in the IJ status bar. The value is in HU and the gray level appears in brackets.

2. Images typically only contain 256 gray levels when displayed, even though the image may contain values over a much wider range (e.g. CT from -1024 to ~32k). So gray levels have to be "binned" in an image, just like in a histogram. The column labelled "level" is the gray level displayed in the image and the "bins" are shown in the second column. The size of the bin is dictated by the min and max pixel levels.

I hope I pitched that at the right level. Enjoy ImageJ ;-)

Converting CT Data to Hounsfield Units

According to Wikipedia, the Hounsfield scale was invented in 1972 by Godfrey Newbold Hounsfield. His scale is a quantitative measure of radiodensity and is used to evaluate CAT scans. Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. 

The pixel value is displayed according to the mean attenuation of the tissue that it corresponds to on a scale from -1024 to +3071 on the Hounsfield scale. Water has an attenuation of 0 Hounsfield units (HU) while air is -1000 HU, bone is typically +400 HU or greater and metallic implants are usually +1000 HU. 

To convert from the normal units found in CT data (a typical data set ranges from 0 to 4000 or so) you have to apply a linear transformation of the data. The equation is:
   hu = pixel_value * slope + intercept
 
The real question is where do you find the slope and intercept used in the conversion?
Normally, these values are stored in the DICOM file itself. The tags are generally called the Rescale Slope and Rescale Intercept, and typically have values of 1 and -1024, respectively.
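
If you were working in Java rather than IDL, a minimal sketch using the dcm4che2 API from the tag-editing post on this blog might look like the following. The stored value 4080 is taken from the pixel-data dump further down; everything else reads the same two tags discussed above.

    import java.io.File;
    import org.dcm4che2.data.BasicDicomObject;
    import org.dcm4che2.data.DicomObject;
    import org.dcm4che2.io.DicomInputStream;

    public class RescaleToHU {
        public static void main(String[] args) throws Exception {
            DicomObject dcmObj = new BasicDicomObject();
            DicomInputStream din = new DicomInputStream(new File("CT-MONO2-16-ankle.dcm"));
            try {
                din.readDicomObject(dcmObj, -1);
            } finally {
                din.close();
            }
            // (0028,1053) Rescale Slope and (0028,1052) Rescale Intercept are DS (string) values
            double slope = Double.parseDouble(dcmObj.getString(0x00281053).trim());
            double intercept = Double.parseDouble(dcmObj.getString(0x00281052).trim());

            int storedValue = 4080;                       // a raw pixel value from the dump below
            double hu = storedValue * slope + intercept;  // hu = pixel_value * slope + intercept
            System.out.println("Stored " + storedValue + " -> " + hu + " HU");
        }
    }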

To show you how to obtain these values, I downloaded a sample CT data set, named CT-MONO2-16-ankle.dcm. This file was created on a GE Medical Systems scanner. After unpacking the compressed file, and adding a dcm file extension to the name (a convenience), I opened the file and dumped the elements to the display.
   IDL> dicomObj = Obj_New('IDLffDICOM', 'CT-MONO2-16-ankle.dcm')
   IDL> dicomObj -> DumpElements
      0 : (0002,0000) : UL : META Group Length : 4 : 188 
      1 : (0002,0001) : OB : META File Meta Information Version : 2 : 0 1 
      2 : (0002,0002) : UI : META Media Stored SOP Class UID : 26 : 1.2.840.10008.5.1.4.1.1.7
     ...
     ...
     50 : (0028,1052) : DS : IMG Rescale Intercept : 6 : -1024 
     51 : (0028,1053) : DS : IMG Rescale Slope : 2 : 1 
     52 : (0028,1054) : LO : IMG Rescale Type : 2 : US
     53 : (7FE0,0000) : UL : PXL Group Length : 4 : 524296 
     54 : (7FE0,0010) : OW : PXL Pixel Data : 524288 : 4080 4080 4080 4080 4080 ...
I found the Rescale Slope and Rescale Intercept as elements 51 and 50. As expected, they had values of 1 and -1024.
Next, I read the data from the DICOM file, and applied the transformation.
 
   IDL> imagePtr = (dicomObj -> GetValue('7FE0'x, '0010'x))[0]
   IDL> MinMax, *imagePtr 
        32   4080
   IDL> image_hu = *imagePtr * 1 + (-1024)
   IDL> MinMax, image_hu
        -992   3056
This image will appear upside down on my display, so I want to reverse the Y direction.
   IDL> image_hu = Reverse(image_hu, 2)
If I just want to see the bone structure (probably a good idea with this ankle image), I can display it like this.
   TV, BytScl(image_hu, Min=600, Max=3000)
The CT image displayed in Hounsfield units.
The bone structure of the CT ankle image, displayed in Hounsfield units.
 
Be sure to clean up your pointers and objects.
   IDL> Ptr_Free, imagePtr
   IDL> Obj_Destroy, dicomObj

Saturday 25 January 2014

DicomMultiframePlayer in JAVA

package packageTestDcm4che3;

import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Iterator;
import java.util.Vector;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;

import javax.imageio.ImageReader;
import javax.imageio.spi.ImageReaderSpi;
import javax.imageio.stream.FileImageInputStream;
import javax.imageio.stream.ImageInputStream;
import javax.swing.JButton;
import javax.swing.JFileChooser;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTextField;
import org.dcm4che.imageio.plugins.dcm.DicomImageReadParam;
import org.dcm4che.imageio.plugins.dcm.DicomImageReader;

/**
 * Plays a multiframe DICOM instance.
 *
 * @author dimitri PIANETA
 *
 * <p>The code for this came from <a href="http://samucs.blogspot.com" target="_blank">http://samucs.blogspot.com</a>
 *    and was dated 6-January-2010.</p>
 *
 * <p>Code modified 14 January 2014 for dcm4che3.</p>
 */   
public class DicomMultiframePlayer extends JFrame implements ActionListener, Runnable {
   
    private static final long serialVersionUID = 1L;
    private JLabel fileLabel;
    /**
     * Will contain name of file to be read.
     */
    private JTextField fileField;
    /**
     * Triggers process for selecting file to be read.
     * @see #actionPerformed(ActionEvent)
     */
    private JButton btnChoose;
    /**
     * Starts playing of images.
     * @see #actionPerformed(ActionEvent)
     */
    private JButton btnPlay;
    /**
     * Pauses playing of images.
     * @see #actionPerformed(ActionEvent)
     */
    private JButton btnPause;
    /**
     * Halts playing of images.
     * @see #actionPerformed(ActionEvent)
     */
    private JButton btnStop;
    private JButton btnExit;   
    private Vector<BufferedImage> images;
    private ImagePanel imagePanel;   
    private boolean stop;
    private int currentFrame;
   
       
    /** Frame counter (not currently used). */
    private int frame = 1;

    /** A shared DICOM ImageReader (not used directly; the helper methods obtain their own). */
    private final ImageReader imageReader =
            ImageIO.getImageReadersByFormatName("DICOM").next();
       
       
    public DicomMultiframePlayer() {
        super("DICOM Multiframe Player using dcm4che - by samucs-dev");
        this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        this.getContentPane().setLayout(new BorderLayout());
       
        images = new Vector<BufferedImage>();
        imagePanel = new ImagePanel();
       
        fileLabel = new JLabel("File:");
        fileField = new JTextField(30);
        btnChoose = this.createJButton(25, 25, "...");
       
        btnPlay = this.createJButton(80,25,"Play");
        btnPause = this.createJButton(80,25,"Pause");
        btnStop = this.createJButton(80,25,"Stop");       
        btnExit = this.createJButton(80,25,"Exit");
        btnPause.setEnabled(false);
        btnStop.setEnabled(false);
       
        JPanel panelNorth = new JPanel();
        panelNorth.add(fileLabel);
        panelNorth.add(fileField);
        panelNorth.add(btnChoose);
       
        JPanel panelSouth = new JPanel();
        panelSouth.add(btnPlay);
        panelSouth.add(btnPause);
        panelSouth.add(btnStop);
        panelSouth.add(btnExit);
       
        this.getContentPane().add(panelNorth, BorderLayout.NORTH);
        this.getContentPane().add(imagePanel, BorderLayout.CENTER);
        this.getContentPane().add(panelSouth, BorderLayout.SOUTH);
       
        this.setSize(new Dimension(500,500));
        this.setLocationRelativeTo(null);
        this.setVisible(true);
    }
    /**
     * Plays the frames in order.
     *
     * <p>I removed the Override annotation.</p>
     */
    // @Override
    public void run() {
        while(true) {
            if (!btnPlay.isEnabled()) {               
                if (stop) break;               
                currentFrame++;
                if (currentFrame == images.size())
                    currentFrame = 0;
                imagePanel.setImage(images.get(currentFrame));               
                try {
                    Thread.sleep(70);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }
    /**
     * Detects clicking of button and carries out appropriate action.
     *
     * <p>I removed the Override annotation.</p>
     */
    // @Override
    public void actionPerformed(ActionEvent e) {       
        if (e.getSource().equals(btnChoose)) {
            JFileChooser chooser = new JFileChooser();
            int action = chooser.showOpenDialog(this);
            switch(action) {
                case JFileChooser.APPROVE_OPTION:
                    this.openFile(chooser.getSelectedFile());
                    break;
                case JFileChooser.CANCEL_OPTION:
                    return;
            }
        }       
        if (e.getSource().equals(btnPlay)) {
            btnPlay.setEnabled(false);
            btnPause.setEnabled(true);
            btnStop.setEnabled(true);
            stop = false;
            new Thread(this).start();           
        }
        if (e.getSource().equals(btnPause)) {
            btnPlay.setEnabled(true);
            btnPause.setEnabled(false);
            btnStop.setEnabled(true);
            stop = false;
        }
        if (e.getSource().equals(btnStop)) {
            btnPlay.setEnabled(true);
            btnPause.setEnabled(false);
            btnStop.setEnabled(false);
            stop = true;
            currentFrame = 0;
            imagePanel.setImage(images.get(0));           
        }
        if (e.getSource().equals(btnExit)) {
            System.exit(0);
        }
    }
    /**
     * Creates JButton objects on window.
     * @param width width of button in pixels.
     * @param height height of button in pixels
     * @param text text to appear in button
     * @return JButton object
     */
    private JButton createJButton(int width, int height, String text) {
        JButton b = new JButton(text);
        b.setMinimumSize(new Dimension(width, height));
        b.setMaximumSize(new Dimension(width, height));
        b.setPreferredSize(new Dimension(width, height));
        b.addActionListener(this);
        return b;
    }
    /**
     * Reads the contents of the dicom file
     * @param file file to be opened
     * @see org.dcm4che2.imageioimpl.plugins.dcm.DicomImageReaderSpi
     * @see org.dcm4che2.imageioimpl.plugins.dcm.DicomImageReader
     */
    private void openFile(File file) {
        images.clear();
        try {
           
                   
                       
            int numFrames = setNumber(file);
            //System.out.println("DICOM image has "+ numFrames +" frames...");
            System.out.println("Extracting frames...");
            for (int i = 0; i < numFrames; i++) {
                BufferedImage img = chargeImageDicomBufferise(file, i);
                images.add(img);
                System.out.println(" > Frame " + (i + 1));
            }
            System.out.println("Finished.");
        } catch(Exception e) {
            e.printStackTrace();
            imagePanel.setImage(null);
            return;
        }
        stop = false;
        currentFrame = 0;
        imagePanel.setImage(images.get(0));
    }

     
       
    /**
     * Builds a BufferedImage for one frame of the DICOM file.
     * @param file  input DICOM file
     * @param value frame index to read
     * @return the decoded frame
     * @throws IOException
     */
    public BufferedImage chargeImageDicomBufferise(File file, int value) throws IOException {
        // Obtain the DICOM image reader registered by dcm4che
        Iterator<ImageReader> iter = ImageIO.getImageReadersByFormatName("DICOM");
        ImageReader reader = iter.next();

        // Default DICOM read parameters (window/level etc.)
        DicomImageReadParam param = (DicomImageReadParam) reader.getDefaultReadParam();

        // Attach the file as the reader's input source
        ImageInputStream iis = ImageIO.createImageInputStream(file);
        reader.setInput(iis, false);

        // Read the requested frame (value is the imageIndex)
        BufferedImage image = reader.read(value, param);
        System.out.println(image);

        // Release the reader's resources and close the stream
        reader.dispose();
        iis.close();
        return image;
    }
   
    /**
     * Finds the number of frames in the DICOM file.
     * @param file input DICOM file
     * @return number of frames in the DICOM object
     * @throws IOException
     */
    public int setNumber(File file) throws IOException {
        // Obtain the DICOM image reader registered by dcm4che
        Iterator<ImageReader> iter = ImageIO.getImageReadersByFormatName("DICOM");
        ImageReader reader = iter.next();

        // Attach the file as the reader's input source
        ImageInputStream iis = ImageIO.createImageInputStream(file);
        reader.setInput(iis, false);

        // Number of frames, i.e. NumberOfFrames (0028,0008)
        int number = reader.getNumImages(true);
        System.out.println(number);

        reader.dispose();
        iis.close();
        return number;
    }
   
   
   
    private class ImagePanel extends JPanel {
        private static final long serialVersionUID = 1L;
        private BufferedImage image;
        private int frame;
        public ImagePanel() {
            super();
            this.setPreferredSize(new Dimension(1024,1024));
            this.setBackground(Color.black);           
        }
        public void setImage(BufferedImage image) {
            this.image = image;
            this.updateUI();
        }
        @Override
        public void paint(Graphics g) {
            super.paint(g); // clear the panel background before drawing
            if (this.image != null) {
                g.drawImage(this.image, 0, 0, image.getWidth(), image.getHeight(), null);
            }
        }
    }

    public static void main(String[] args) {
        new DicomMultiframePlayer();
    }

}

Saturday 14 December 2013

Private tag numbers available for use in DICOM

Private tags are typically documented by a device manufacturer in the DICOM Conformance Statement for the product adding the private tags. The method for adding private tags was designed to prevent conflicts between manufacturers, and when adding tags you should work in such a way as to prevent conflicts. To give an example, a typical DICOM tag is composed of a 2-byte group and a 2-byte element:
(gggg,eeee)
The group needs to be an odd number greater than 0008. Your private attributes are typically in a private block that have an associated private creator data element. The private creator is encoded as such:
(gggg,00bb)
where bb in the tag is an open private block in the DICOM object and has a value in the range of 10-FF. This private block is where conflicts between vendors are dealt with. You must assign your private tags in the object to one of these blocks.
Finally, the private elements themselves are within the block:
(gggg,bbxx)
Where the block is encoded in the tag, and then the elements themselves are defined by xx. Your conformance statement should list your private identification code, the DICOM VR of the tag, the element number (xx) of the tag, along with a description of the tag so that other vendors can use the tag, if necessary.
If you want a more detailed explanation, it can be found in Part 5 of the DICOM Standard, starting at page 45.
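
For illustration only, here is a minimal sketch of the scheme above using the dcm4che2 API (the same API used in the tag-editing post further down). The group 0x0009, block 0x10, creator string, and value are all made-up examples.

    import org.dcm4che2.data.BasicDicomObject;
    import org.dcm4che2.data.DicomObject;
    import org.dcm4che2.data.VR;

    public class PrivateTagExample {
        public static void main(String[] args) {
            DicomObject dcmObj = new BasicDicomObject();
            // Private creator element (gggg,00bb): reserve block 0x10 of the odd group 0x0009
            dcmObj.putString(0x00090010, VR.LO, "ACME_PRIVATE_CREATOR");
            // Private element (gggg,bbxx): element xx = 01 inside the reserved block 0x10
            dcmObj.putString(0x00091001, VR.LO, "some private value");
            System.out.println(dcmObj.getString(0x00091001));
        }
    }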

Tuesday 1 October 2013

ImageJ: Create ImageStack for dicom 2d slice

Hello friends, here is how to build an ImagePlus from a given path; feel free to modify the code to suit your own needs. Enjoy! A sketch of the idea follows.
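
A minimal sketch, assuming ImageJ (ij.jar) on the classpath and a folder of single-frame DICOM slices; the folder path, class name, and the simple name-based sort are placeholders.

    import java.io.File;
    import ij.ImagePlus;
    import ij.ImageStack;
    import ij.io.Opener;

    public class DicomFolderToStack {

        public static ImagePlus buildStack(String folderPath) {
            File[] files = new File(folderPath).listFiles();
            java.util.Arrays.sort(files);   // simple name order; real code should sort on slice location
            Opener opener = new Opener();
            ImageStack stack = null;
            for (File f : files) {
                // ImageJ can open a single DICOM slice natively; returns null for non-image files
                ImagePlus slice = opener.openImage(f.getAbsolutePath());
                if (slice == null) continue;
                if (stack == null) {
                    stack = new ImageStack(slice.getWidth(), slice.getHeight());
                }
                stack.addSlice(f.getName(), slice.getProcessor());
            }
            return new ImagePlus("DICOM series", stack);
        }

        public static void main(String[] args) {
            buildStack("D:\\DICOMSERIES").show();   // placeholder folder path
        }
    }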

Friday 27 September 2013

3D Dicom Testing Dataset: Patient Position And Orientation

PATIENT COORDINATE SYSTEM & INTERPRETATION OF DICOM CT/MR IMAGE SETS


============================================

This document contains some notes on the patient coordinate system conventions which are used in the ACR/NEMA DICOM 3.0 standard. Information on how to interpret the location of CT and MR images within a patient coordinate system is also included.

********************************************************************




What does the patient coordinate system look like? 



        HEAD                     +Z(H)
              POSTERIOR           |  / +Y(P)
         ()                       | /
 RIGHT  -/\-  LEFT                |/
         ||           -X(R) ------+------ +X(L)
ANTERIOR                         /|
        FEET                    / |
                               /  |
                          -Y(A)  -Z(F)


In this diagram the patient is facing forward.  Anterior refers to the  front of the patient.  Posterior refers to the back of the patient.  

The origin of the patient coordinate system is arbitrary, and selected by the imaging modality.  I assume that the modality is free to choose a different origin for each series of images, but the origin is fixed for a given series. 



If you wish to annotate images when displayed with notations such as (L) or (R) to indicate which side of the 2D image is considered the patient's left or right side, you need the Patient Position (0018, 5100) Attribute from the General Series module. 



This attribute contains the position of the patient relative to the imaging equipment space. Valid values are:

 
    1. Head First-Prone
    2. Head First-Supine
    3. Head First-Decubitus Right
    4. Head First-Decubitus Left
    5. Feet First-Decubitus Left
    6. Feet First-Decubitus Right
    7. Feet First-Prone
    8. Feet First-Supine


Definitions:

Head First means the patient was lying on the imaging couch with the head facing the imaging device first.

Feet First means the patient was lying on the imaging couch with the feet facing the imaging device first.

Prone means the patient is lying on his/her stomach (the patient's face being positioned in a downwards, gravity, direction).

Supine means the patient is lying on his/her back (the patient's face being in an upwards direction).

Decubitus Right means the patient is lying with his/her right side in a downwards direction.

Decubitus Left means the patient is lying with his/her left side in a downwards direction.

NOTE -- Patient Position (0018, 5100) is a type 1 (required) attribute for both the CT and MR modalities. This attribute is VERY IMPORTANT for accurately interpreting the patient's orientation.
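
As a small illustration, reading this attribute with the dcm4che2 API used elsewhere on this blog might look like the sketch below. The file path and class name are placeholders; note that in the file the value is stored as a coded string such as "HFS" for Head First-Supine.

    import java.io.File;
    import org.dcm4che2.data.BasicDicomObject;
    import org.dcm4che2.data.DicomObject;
    import org.dcm4che2.io.DicomInputStream;

    public class PatientPositionReader {
        public static void main(String[] args) throws Exception {
            DicomObject dcmObj = new BasicDicomObject();
            DicomInputStream din = new DicomInputStream(new File("D:\\CTSERIES\\slice1.dcm"));
            try {
                din.readDicomObject(dcmObj, -1);
            } finally {
                din.close();
            }
            // (0018,5100) Patient Position, e.g. "HFS" for Head First-Supine
            String position = dcmObj.getString(0x00185100);
            System.out.println("Patient Position: " + position);
        }
    }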


*************************************************************************************** 
What is a direction cosine? 

Basically, a direction cosine is the cosine of the angle which is created between a unit vector which starts at the origin with some direction, and a unit vector coincident with one of the axis of the coordinate system. 



Fully defining the point in space at the end of a vector requires nine direction cosines. Since in a right-handed Cartesian coordinate system the cross product of two unit vectors defines the third, you can fully describe the location using only six direction cosines; the other three may be calculated.


******************************************************************** 
Where can I learn about direction cosines? 

The following references may be of help in understanding what a direction cosine is.

Mathematical Methods for Physicists. Arfken. Academic Press, 1996. pp 3-5, 143-160.

Computer Graphics: Principles and Practice, 2nd Edition. Foley, van Dam, Feiner, Hughes. Addison Wesley, 1987. pp 213-222, 1103-1105.

Vector Mechanics for Engineers: Statics and Dynamics, 3rd Edition. Beer, Johnston Jr. McGraw Hill, 1977. pp 43-46.

********************************************

How do I orient a single image in the patient coordinate system? 

In order to orient a single image plane in the patient coordinate system a 4x4 transform matrix must be defined.  This matrix is derived as follows: 


Given:
        xr = x row direction cosine
        yr = y row direction cosine
        zr = z row direction cosine
        xc = x column direction cosine
        yc = y column direction cosine
        zc = z column direction cosine

First calculate the slice direction cosine values for x, y, and z.  This is the cross-product of the row and column direction cosines. 



        xs = (yr * zc) - (zr * yc)
        ys = (zr * xc) - (xr * zc)
        zs = (xr * yc) - (yr * xc)


Then, the 4x4 matrix is as follows: 

       [ xr yr zr 1 ]
   M = [ xc yc zc 1 ]
       [ xs ys zs 1 ]
       [ 0  0  0  1 ]


The image plane is then transformed from the origin to the proper location in the patient coordinate system as follows: 


Given:
        (c,r)   = column and row of the pixel in the 2D image plane
        (x,y,z) = x, y, and z coordinates of the pixel in the patient coordinate system


Through matrix multiplication, the transformed coordinates are: 


        (c, r, 0, 1) * M = (x, y, z, 1)
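
A direct Java transcription of the transform above, following the note's simplified matrix (a full DICOM implementation would also scale by Pixel Spacing and add Image Position (Patient), which this note leaves out). The values in main are made-up examples.

    public class ImagePlaneTransform {

        /** Slice direction cosines: cross product of the row and column direction cosines. */
        static double[] sliceCosines(double xr, double yr, double zr,
                                     double xc, double yc, double zc) {
            return new double[] {
                yr * zc - zr * yc,   // xs
                zr * xc - xr * zc,   // ys
                xr * yc - yr * xc    // zs
            };
        }

        /** (c, r, 0, 1) * M from the note above, keeping only the x, y, z components. */
        static double[] toPatient(double c, double r,
                                  double xr, double yr, double zr,   // row direction cosines
                                  double xc, double yc, double zc) { // column direction cosines
            return new double[] {
                c * xr + r * xc,   // x
                c * yr + r * yc,   // y
                c * zr + r * zc    // z
            };
        }

        public static void main(String[] args) {
            // Hypothetical axial slice: rows run towards patient left (+X), columns towards posterior (+Y)
            System.out.println(java.util.Arrays.toString(toPatient(100, 50, 1, 0, 0, 0, 1, 0)));
            System.out.println(java.util.Arrays.toString(sliceCosines(1, 0, 0, 0, 1, 0)));
        }
    }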


David Clunie wrote: 


It really is a piece of cake to derive what you want ... for example in the following code assume "vector" is either a Row vector or Column vector from Image Orientation (Patient)  and "orientation" becomes what you want by picking out the "most significant" axis sequentially, and noting the sign. The DICOM convention is Left Posterior Head +ve.


  char *orientation = new char[4];
  char *optr = orientation;
  *optr = '\0';

  char orientationX = vector.getX() < 0 ? 'R' : 'L';
  char orientationY = vector.getY() < 0 ? 'A' : 'P';
  char orientationZ = vector.getZ() < 0 ? 'F' : 'H';

  double absX = fabs(vector.getX());
  double absY = fabs(vector.getY());
  double absZ = fabs(vector.getZ());

  // Emit up to three letters, most significant axis first
  for (int i = 0; i < 3; ++i) {
      if (absX > .0001 && absX > absY && absX > absZ) {
          *optr++ = orientationX;
          absX = 0;
      }
      else if (absY > .0001 && absY > absX && absY > absZ) {
          *optr++ = orientationY;
          absY = 0;
      }
      else if (absZ > .0001 && absZ > absX && absZ > absY) {
          *optr++ = orientationZ;
          absZ = 0;
      }
      else break;
  }
  *optr = '\0';

Thursday 26 September 2013

How to improve DICOM

While DICOM (Digital Imaging and Communications in Medicine) provides a necessary format for handling device connectivity, it could be improved by addressing the following concerns.

DICOM is based on legacy serial communications

DICOM originated as a way to standardize streaming imaging data, such as a series of ultrasound images. As such, the structure of the format assumes that the length of tags, and of the file as a whole, cannot always be determined. This is of course untrue, since the length of a block, tag, or file can certainly be computed somewhere along the line. For example, the device acquiring the data no doubt buffers data as it is collected, in which case it can compute the length. I do concede that the total number of "frames" collected during a live session can be unknown, for example, the total number of frames collected by an ultrasound device while live-streaming. But the size of each frame can still be determined, in much the same way that the number of slices collected by a CT scanner is known.

DICOM is NOT object oriented

DICOM was created at a time when object-oriented programming (OOP) was itself still being developed. As such, DICOM does not adhere to now well-established OOP structures, making it difficult to implement the DICOM specification in object-oriented programming languages such as C++. The developer must bridge the gap between an old-style file format specification, based on a complex header-body-footer layout, and something as well-structured and easily parsable as XML, for example.

DICOM splits a single 3D volume scan into hundreds of 2D files

High-resolution CT and MRI scans are split into several hundred files, because each scan is stored as a stack of 2D DICOM image files; DICOM doesn't support a true 3D volume in a single file. This makes file management cumbersome for volumetric data. As 3D data has become more and more important over the last few decades, file formats designed for 3D data, such as NIfTI, ANALYZE, and others, have become more prominent. These formats allow all image slices to be stored in a single file, reducing the number of files significantly.

DICOM uses lossy compression on medical images

Image data often requires substantial storage and, when transmitted over a network, can require significant time as well. As such, DICOM incorporated various compression schemes in an effort to reduce the storage and transmission time required for image data. The problem, however, is that DICOM included "lossy" compression algorithms that in effect reduce image quality in order to save space and time. This is a problem when you consider that DICOM manages a person's medical images. It is entirely possible that the compression scheme may obscure a tumor within a CT scan, causing the radiologist to miss a diagnosis. Medical image data should never be compressed with a scheme that "loses" data. Lossy compression has since been removed from DICOM, but it makes me wonder how it even made it into the specification.

DICOM duplicates patient info for each image file

There is a significant amount of redundancy within DICOM. Each 2D slice file in a 3D volume duplicates all of the patient information. If all of the slices were stored in a single file, a single instance of the patient data would suffice.

DICOM changes compression and decompression mid-stream

The DICOM specification supports the notion of changing compression schemes mid-stream. This is due to DICOM's origins in streaming devices. However, a file whose compression changes arbitrarily makes for difficult software implementations and compromises efficiency.

DICOM uses variable sized block headers

Again, as a result of being based on legacy streaming devices, DICOM data block headers support variable sizes. This makes software implementations complicated and again compromises efficiency.

DICOM uses interpret-as-you-go methodology

The DICOM specification requires an "interpret as you go" methodology in that you must constantly read and decode each block of data sequentially. This is again due to its origins in a streaming architecture. In many cases, this methodology requires the file pointer to backtrack in order to read properly. Backtracking reduces efficiency on hard disks, since that technology is designed to read and write data efficiently in large blocks. Moving the file pointer back and forth to read small amounts of data is very inefficient.

A format that lets you put anything into a block of data is not a standard

DICOM allows a data-producer to insert any kind of data into private tags. This is analogous to inserting any kind of file into a ZIP file. Once you open up the ZIP file, you still need to know how to interpret the data. As such, even though DICOM is a “standard”, it is still possible that two “DICOM compliant” devices may in fact not be capable of communicating with each other.

DICOM supports every data type under the sun, but only a few are used in practice

DICOM wanted to support all kinds of medical image data, even kinds that had not yet been invented. As such, it supports the storage of all data types. The problem is that, in practice, it is extremely complicated to write software that can read all types of data. Further, considering that most image data is limited to only a handful of data types (such as shorts or unsigned shorts), it makes no sense to support everything else. Supporting all data types makes DICOM reader implementations complex and inefficient.

DICOM reports are crammed into images

In practice, reports (i.e., radiology reports) are crammed into DICOM-compatible image data. DICOM was envisioned at a time before formats such as PDF and DOC were robust and popular. This makes encoding, decoding, and long-term preservation of reports complicated and inefficient.

DICOM is very difficult to implement in software

Taking all of these problems into consideration, DICOM is a beast to implement in software. There are so many pitfalls that a robust implementation is almost a pipe dream. Any implementation must include substantial quality assurance testing to ensure robustness.

Summary

Despite many “problems,” DICOM is widely used and has provided substantial benefit to patients, physicians, and healthcare organizations. However, I wonder how much better healthcare could be if these problems were eliminated from DICOM?

Thursday 5 September 2013

DICOM Tag editing using dcm4che

dcm4che is a collection of open source applications and utilities for the healthcare enterprise. These applications have been developed in the Java programming language for performance and portability, supporting deployment on JDK 1.4 and up.
At the core of the dcm4che project is a robust implementation of the DICOM standard. The dcm4che-1.x DICOM toolkit is used in many production applications across the world, while the current (2.x) version of the toolkit has been re-architected for high performance and flexibility.
Also contained within the dcm4che project is dcm4chee (the extra ‘e’ stands for ‘enterprise’). dcm4chee is an Image Manager/Image Archive (in IHE terms). The application contains the DICOM and HL7 services and interfaces that are required to provide storage, retrieval, and workflow to a healthcare environment. dcm4chee is pre-packaged and deployed within the JBoss application server. By taking advantage of many JBoss features (JMS, EJB, Servlet Engine, etc.), and assuming the role of several IHE actors for the sake of interoperability, the application provides many robust and scalable services.
The following jar files are needed for this activity:
    dcm4che.jar
    dcm4che-core-2.0.21.jar
    log4j-1.2.13.jar
    slf4j-api-1.5.0.jar
    slf4j-log4j12-1.5.0.jar
    package dicomeditor;
    import java.io.BufferedOutputStream;
    import java.io.File;
    import java.io.FileNotFoundException;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import org.dcm4che2.data.BasicDicomObject;
    import org.dcm4che2.data.DicomObject;
    import org.dcm4che2.data.Tag;
    import org.dcm4che2.data.VR;
    import org.dcm4che2.io.DicomInputStream;
    import org.dcm4che2.io.DicomOutputStream;
    /**
     *
     * @author awaiswaheed
     */
    public class DicomEditor {
        public void UpdateTags() {
            String sourceFolder = "D:\\WRONGTAGS";
            String destinationFolder = "D:\\CRRTAGS";
            //First Delete Temp Folder Files before copy images there
            File[] tempImages = new File(destinationFolder).listFiles();
            for (int j = 0; j < tempImages.length; j++) {
                tempImages[j].delete();
            }
            //now get all files in tag folder
            File[] allFiles = new File(sourceFolder).listFiles();
            DicomInputStream din = null;
            for (int i = 0; i < allFiles.length; i++) {
                System.out.println("Current Image in Progress = " + (i + 1)
                        + " out of = " + allFiles.length);
                try {
                    // Use a fresh DicomObject for each file so attributes from
                    // the previous file are not carried over
                    DicomObject dcmObj = new BasicDicomObject();
                    din = new DicomInputStream(allFiles[i]);
                    din.readDicomObject(dcmObj, -1);
                    //System.out.println(" Tag.PatientID  *******************   " + Tag.PatientID);
                    //dcmObj.putString(1048608, VR.LO, "1234");
                    dcmObj.putString(Tag.PatientName, VR.LO, "XYZPatientName");
                    dcmObj.putString(Tag.PatientBirthDate, VR.DA, "19690101");
                    dcmObj.putString(Tag.PatientAge, VR.AS, "060Y");
                    dcmObj.putString(Tag.Modality, VR.CS, "NM");
                    dcmObj.putString(Tag.AccessionNumber, VR.SH, "1213456");
                    this.writeFile(dcmObj, destinationFolder, "\\" + allFiles[i].getName());
                    din.close();
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            } //Loop
            //get all files from tag file and delete them
        }
        public void writeFile(DicomObject obj, String copyServer, String fileName) {
            File f = new File(copyServer + fileName);
            FileOutputStream fos;
            try {
                fos = new FileOutputStream(f);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
                return;
            }
            BufferedOutputStream bos = new BufferedOutputStream(fos);
            DicomOutputStream dos = new DicomOutputStream(bos);
            try {
                dos.writeDicomFile(obj);
            } catch (IOException e) {
                e.printStackTrace();
                return;
            } finally {
                try {
                    dos.close();
                } catch (IOException ignore) {
                }
            }
        }
        public static void main(String[] args) {
            DicomEditor dcmEdit = new DicomEditor();
            dcmEdit.UpdateTags();
        }
    }