Available Datasets

The datasets (24 FA videos of size 768 × 768 × x and late FA images in DME eyes), together with the manual and automated markings used in the following paper, can be downloaded from HERE.

Dataset for Fluorescein Angiography (Video & Late Image) in DME eyes




Hossein Rabbani, Michael J. Allingham, Priyatham S. Mettu, Scott W. Cousins, Sina Farsiu, "Fully automatic segmentation of fluorescein leakage in subjects with diabetic macular edema", Investigative Ophthalmology & Visual Science, vol. 56, no. 3, pp. 1482-1492, Mar. 2015.

Please reference the above paper if you would like to use any part of this dataset or method.

**OCT data & Color Fundus Images of Left & Right Eyes of 50 healthy persons:
This dataset contains OCT data (in .mat format) and color fundus images (in .jpg format) of the left and right eyes of 50 healthy persons. Click here to download the data.

OCT data & Color Fundus Images of Left & Right Eyes of 50 healthy persons

Each volunteer's folder includes the color fundus images (.jpg) and OCT data (.mat) of the right and left eyes. The password for each RAR file is 545dfds$Dfd46456as .
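For readers working in Python, the .mat volumes can be opened with SciPy. The sketch below is self-contained (it writes a small synthetic volume first); the variable name "oct_volume" and the volume shape are assumptions, so check the keys the snippet prints against the real files.

```python
from scipy.io import loadmat, savemat
import numpy as np

# Self-contained demo: write a small synthetic volume first. With the real
# data, point loadmat at a .mat file inside a volunteer's folder instead.
# The variable name "oct_volume" and the shape below are assumptions.
savemat("demo_oct.mat", {"oct_volume": np.zeros((496, 512, 19), dtype=np.uint8)})

data = loadmat("demo_oct.mat")
keys = [k for k in data if not k.startswith("__")]  # skip MATLAB header fields
print(keys)                          # check which variables the file holds
volume = data[keys[0]]
print(volume.shape)                  # (rows, columns, number of B-scans)
```

With the real dataset, the same two `loadmat` lines apply once the RAR archives are extracted with the password above.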

Please reference the following paper if you would like to use any part of this dataset or method:

* T. Mahmudi, R. Kafieh, H. Rabbani, "Comparison of macular OCTs in right and left eyes of normal people", in Proc. SPIE 9038, Medical Imaging 2014: Biomedical Applications in Molecular, Structural, and Functional Imaging, 90381K, San Diego, California, United States, Feb. 15-20, 2014. doi: 10.1117/12.2044046

Individual download folders: patient#1 through patient#50.


**Database of 22 retinal images for the purpose of vessel-based registration of Fundus and OCT projection images of retina:
A set of 22 image pairs (17 macular and 5 peripapillary) from random patients, each pair acquired from eyes with a variety of retinal diseases. Each pair includes a colour fundus image and one OCT volume acquired with a Topcon 3D OCT-1000 instrument. Each OCT volume has a size of 650 × 512 × 128 voxels and a voxel resolution of 3.125 µm × 3.125 µm × 7 µm. Click here to download the data.

Database of 22 retinal images for the purpose of vessel-based registration of Fundus and OCT projection images of retina


Please reference the following paper if you would like to use any part of this dataset or method.

*Golabbakhsh, M., & Rabbani, H. (2013). Vessel-based registration of fundus and optical coherence tomography projection images of retina using a quadratic registration model. IET Image Processing, 7(8), 768-776.

Please contact Mrs Marzieh Golabbakhsh - misp@mui.ac.ir - if you have questions about this dataset.
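As an illustration of the kind of quadratic registration model the paper above describes, the sketch below fits a 12-parameter quadratic mapping to matched vessel landmarks by least squares. All landmark coordinates and coefficients are synthetic stand-ins, not values from this dataset.

```python
import numpy as np

# Matched control points: vessel landmarks in the fundus image (x, y) and
# their positions in the OCT projection image (xp, yp). Coordinates are
# normalised to the unit square, which also keeps the fit well conditioned.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(10, 2))

def quad_basis(x, y):
    # Six quadratic basis terms per coordinate -> 12 model parameters in total.
    return np.stack([x**2, y**2, x * y, x, y, np.ones_like(x)], axis=1)

# Coefficients used only to fabricate the matched "OCT" landmark positions.
cx = np.array([0.10, -0.05, 0.02, 1.02, 0.01, 0.003])
cy = np.array([-0.02, 0.08, 0.01, -0.02, 0.98, -0.005])
A = quad_basis(xy[:, 0], xy[:, 1])
xp, yp = A @ cx, A @ cy

# Least-squares estimate of the quadratic mapping from the matched points.
cx_hat, *_ = np.linalg.lstsq(A, xp, rcond=None)
cy_hat, *_ = np.linalg.lstsq(A, yp, rcond=None)
print(np.allclose(cx_hat, cx), np.allclose(cy_hat, cy))  # True True
```

In practice the matched points would come from detected vessel bifurcations in the two modalities, and the fitted mapping would be applied to resample one image onto the other.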


A set of corneal OCT images of 15 subjects in .mat format. For example, subject#1 includes 41 B-scans of size 240 × 748 pixels, taken with a Heidelberg OCT imaging device. Click here to download the data.

Database of corneal OCT taken from Heidelberg OCT imaging system (3D .mat data of 15 subjects)


DATA

Please reference the following paper if you would like to use any part of this dataset or method.

*Jahromi MK, Kafieh R, Rabbani H, et al. An Automatic Algorithm for Segmentation of the Boundaries of Corneal Layers in Optical Coherence Tomography Images using Gaussian Mixture Model. Journal of Medical Signals and Sensors. 2014;4(3):171-180.

Please contact Dr Hossein Rabbani - h_rabbani@med.mui.ac.ir - if you have questions about this dataset.


** Bone Marrow Microscopic Data (plasma cell lineage images)
This folder contains bone marrow microscopic images. These images are categorized into two groups: Normal Plasma Cells and Myeloma Cells. Click here to download the data.

Bone Marrow Microscopic Data



This folder contains bone marrow microscopic images, categorized into two groups: normal plasma cells and myeloma cells. For data acquisition, a digital camera (Sony DSC-H9) coupled to an optical microscope (Olympus CH40RF200) was used, and 50 images were captured for the extraction and recognition of myeloma cells in microscopic bone marrow aspiration images.

If you use these data, please cite the following paper:
Saeedizadeh Z, Talebi A, Mehri-Dehnavi A, Rabbani H, Sarrafzadeh O. Extraction and Recognition of Myeloma Cell in Microscopic Bone Marrow Aspiration Images. J Isfahan Med Sch. 32(310), 3rd week, Jan. 2015.

Abstract

Background: Plasma cells develop from B lymphocytes, a type of white blood cell generated in the bone marrow. Plasma cells produce antibodies to fight bacteria and viruses and to stop infection and disease. In multiple myeloma, a cancer of plasma cells, collections of abnormal plasma cells (myeloma cells) accumulate in the bone marrow. Sometimes an infection in the body causes an increase in plasma cells, which can be wrongly diagnosed as multiple myeloma.
Diagnosis of myeloma cells is mainly based on the nucleus-to-cytoplasm ratio, the compression of chromatin in the nucleus, the perinuclear zone in the cytoplasm, etc.; because the final decision depends on the human eye and opinion, there is a risk of error. This study presents an automatic method, based on image processing techniques, for diagnosing myeloma cells in bone marrow smears.
Methods: In this study, the nucleus and cytoplasm of cells are first extracted from bone marrow images using a contrast enhancement algorithm and k-means clustering. Then, to split connected nuclei and clumped cells, two algorithms based on the bottleneck and watershed methods are applied. Finally, by extracting features from the nucleus and cytoplasm, myeloma cells are separated from normal plasma cells.
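The clustering step can be sketched in isolation. The snippet below is an illustrative stand-in, not the paper's implementation: plain Lloyd iterations with a deterministic farthest-point initialisation, applied to a synthetic three-colour "cell" image instead of a contrast-enhanced smear.

```python
import numpy as np

def kmeans_segment(image, k=3, iters=10):
    # Cluster pixel colours into k groups (e.g. background / cytoplasm / nucleus).
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    # Deterministic farthest-point initialisation of the k colour centres.
    centers = [pixels[0]]
    for _ in range(k - 1):
        dist = np.min([((pixels - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(pixels[dist.argmax()])
    centers = np.array(centers)
    for _ in range(iters):  # plain Lloyd iterations
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(image.shape[:2]), centers

# Synthetic stand-in for a stained field: light background, a cytoplasm-like
# patch, and a darker nucleus-like region inside it.
img = np.full((60, 60, 3), 230, np.uint8)
img[10:40, 10:40] = (150, 120, 180)
img[20:30, 20:30] = (60, 30, 90)
labels, centers = kmeans_segment(img)
nucleus = labels == centers.sum(axis=1).argmin()  # darkest cluster = nucleus
print(int(nucleus.sum()))  # 100 nucleus pixels in this synthetic field
```

On real smears the cluster count and the rule for picking the nucleus cluster would need tuning, and the splitting of touching nuclei is the separate bottleneck/watershed step described above.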
Findings: The algorithm was applied to 30 digital images containing 64 normal plasma cells and 73 myeloma cells. Applying the automatic identification of myeloma cells to the provided database showed an accuracy of 99.27%.
Conclusion: In this study, an automatic method for detecting plasma cells and distinguishing them from myeloma cells in bone marrow aspiration images is proposed.

Keywords: B-Cells, Plasma cell myeloma, Image analysis

Full Text

File1
File2

This dataset contains 260 CT and 202 MR images in DICOM format. Click here to download the data.

CT & MR Volumes Used for Watermarking of DICOM Images

This dataset contains 260 CT and 202 MR images in DICOM format used for dual and blind watermarking of medical images in the contourlet domain.

Medical images in digital form must be stored in a secure environment to preserve patient privacy, and it is also important to detect modifications to the image. These objectives are achieved by watermarking medical images. For this reason, we propose an adaptive dual watermarking scheme with different embedding strengths in the ROI and RONI. We embed watermark bits in the singular value vectors of the embedded blocks within the lowpass subband in the contourlet domain.
The proposed contourlet-based watermarking algorithm automatically selects the ROI and embeds the watermark in the singular values of contourlet subbands, which makes the algorithm more efficient and more robust against noise attacks than other transform domains. The embedded watermark bits can be extracted without the original image; the proposed method has high PSNR and SSIM, and the watermarked image has high transparency and still conforms to the DICOM format.
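The singular-value embedding idea can be illustrated in isolation. The sketch below is not the paper's algorithm: it skips the contourlet transform and the ROI/RONI analysis entirely and applies quantisation-index modulation to the largest singular value of plain 8 × 8 pixel blocks, just to show how bits can be embedded and blindly extracted via SVD. The step size q is an arbitrary choice.

```python
import numpy as np

def embed_bit(block, bit, q=24.0):
    # Quantise the largest singular value to an even (bit 0) or odd (bit 1)
    # multiple of q, then rebuild the block (quantisation-index modulation).
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    m = int(np.round(s[0] / q))
    if m % 2 != bit:
        m += 1
    s[0] = m * q
    return U @ np.diag(s) @ Vt

def extract_bit(block, q=24.0):
    # Blind extraction: only the watermarked block and q are needed.
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.round(s[0] / q)) % 2

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in "subband"
bits = [1, 0, 1, 1, 0, 0, 1, 0]

# One bit per non-overlapping 8 x 8 block (here only the first row of blocks).
blocks = [image[0:8, j:j + 8] for j in range(0, 64, 8)]
watermarked = [embed_bit(b, bit) for b, bit in zip(blocks, bits)]
recovered = [extract_bit(b) for b in watermarked]
print(recovered)  # matches the embedded bits
```

Quantising the singular value rather than adding a fixed offset is what makes the extraction blind, at the cost of a small, bounded distortion per block.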

Please reference the following paper if you would like to use any part of this dataset or method:
*** Rahimi, F., and Rabbani, H., A dual adaptive watermarking scheme in contourlet domain for DICOM images, Biomed. Eng. Online 10(53):1–18, 2011.

CT Images
MRI Images

 ** Fundus Fluorescein Angiogram Photographs of Diabetic Patients

We have collected retinal images of 70 patients at different diabetic retinopathy stages, including 30 normal and 40 abnormal cases. Click here to download the data.


This page includes fundus fluorescein angiogram photographs of diabetic patients who were enrolled in a study at the Isfahan University of Medical Sciences Persian Eye Clinic (Feiz Hospital). The size of the fundus images is 576 × 720 pixels (JPEG format with 8-bit depth). We collected retinal images of 70 patients at different diabetic retinopathy stages: 30 normal cases and 40 abnormal cases (mild non-proliferative diabetic retinopathy (NPDR), moderate NPDR, severe NPDR and proliferative diabetic retinopathy (PDR)). At the screening visit, a comprehensive ophthalmic evaluation was performed, including a medical history, applanation tonometry, slit-lamp examination, dilated fundus biomicroscopy, and ophthalmoscopy. The severity of diabetic retinopathy was diagnosed at the preoperative visit using slit-lamp funduscopic biomicroscopy and classified as no retinopathy, mild NPDR, moderate NPDR, severe NPDR or PDR based on the International Clinical Diabetic Retinopathy Disease Severity Scale [*].

[*] Wilkinson CP, Ferris FL, Klein RE, Lee PP, Agardh CD, Davis M, Dills D, Kampik A, Pararajasegaram R, Verdaguer JT (2003) Proposed International Clinical Diabetic Retinopathy and Diabetic Macular Edema Disease Severity Scales. Ophthalmology 110:1677–1682.

Please reference the following paper if you would like to use any part of this dataset or method:

This folder includes 25 colour fundus images of healthy persons and 35 colour fundus images of patients with diabetic retinopathy used for automatic curvelet-based detection of Foveal Avascular Zone (FAZ).  Click here to download the data.

Colour Fundus Images of Healthy Persons & Patients with Diabetic Retinopathy


The shape and size of the FAZ, which is responsible for central vision, can become abnormal and contribute to loss of vision in DR. In this study, appropriate features are extracted from the FAZ by means of the Digital Curvelet Transform (DCUT) and used to grade retinal images into normal and abnormal classes. To this end, the DCUT is applied to enhanced color fundus images and its coefficients are modified to highlight the vessels and the optic disc (OD). Using information about the anatomical location of the FAZ relative to the OD and the detected end points of the segmented vessels, the FAZ is extracted. Then, the area and regularity of the extracted FAZ are determined and used for DR grading. This technique showed high reproducibility in characterizing the size and contour of the FAZ in diabetic maculopathy, so it has the potential to serve as a powerful tool in the automated assessment and grading of images in a routine clinical setting.

Please reference the following paper if you would like to use any part of this dataset or method:
*** S. H. Hajeb, H. Rabbani, M. R. Akhlaghi, S. H. Haghjoo, and A. R. Mehri, "Analysis of foveal avascular zone for grading of diabetic retinopathy severity based on curvelet transform," Graefe's Archive for Clinical and Experimental Ophthalmology, vol. 250, no. 11, pp. 1607-1614, July 2012.

Normal

 

Abnormal



Publicly available database of both fundus fluorescein angiogram photographs and corresponding color fundus images of 30 healthy persons and 30 patients with diabetic retinopathy. Click here to download the data.

Fundus Fluorescein Angiogram Photographs & Colour Fundus Images of Diabetic Patients


As manual analysis and diagnosis of large amounts of images are time-consuming, automatic detection and grading of diabetic retinopathy are desired. In this study, we use fundus fluorescein angiography and color fundus images simultaneously, extract 6 features employing the curvelet transform, and feed them to a support vector machine in order to determine diabetic retinopathy severity stages. These features are the area of blood vessels; the area and regularity of the foveal avascular zone (FAZ) and the number of micro-aneurysms therein; the total number of micro-aneurysms; and the area of exudates. In order to extract exudates and vessels, we respectively modify the curvelet coefficients of color fundus images and angiograms. The end points of the extracted vessels in a predefined region of interest based on the optic disk are connected together to segment the FAZ, which can be further improved by applying several morphological operators. To extract micro-aneurysms from the angiogram, the extracted vessels are first subtracted from the original image; then, after removing the detected background with morphological operators and enhancing small bright pixels, the micro-aneurysms are detected. Our simulations show that the proposed system has a sensitivity and specificity of 100% for grading diabetic retinopathy into 3 groups: 1) no diabetic retinopathy, 2) mild/moderate non-proliferative diabetic retinopathy, 3) severe non-proliferative/proliferative diabetic retinopathy.
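The final grading step (six scalar features fed to an SVM) can be sketched as below. Everything here is a stand-in: the feature values are synthetic rather than curvelet-derived, and the classifier is a minimal sub-gradient linear SVM, not the implementation used in the paper.

```python
import numpy as np

# Six features per eye, in the spirit of the paper's feature set: vessel area,
# FAZ area, FAZ regularity, micro-aneurysm count inside the FAZ, total
# micro-aneurysm count, exudate area. All values below are synthetic.
rng = np.random.default_rng(42)
normal   = rng.normal(0.2, 0.03, size=(20, 6))   # label -1
abnormal = rng.normal(0.8, 0.03, size=(20, 6))   # label +1
X = np.vstack([normal, abnormal])
y = np.concatenate([-np.ones(20), np.ones(20)])

def train_linear_svm(X, y, lam=0.01, lr=0.05, epochs=300):
    # Minimal sub-gradient descent on the regularised hinge loss.
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1           # margin violators (hinge active)
        w -= lr * (lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n)
        b -= lr * (-y[viol].sum() / n)
    return w, b

w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print((pred == y).mean())  # training accuracy on this well-separated toy data
```

Multi-class grading into the paper's 3 groups would combine several such binary decisions (e.g. one-versus-rest), and the real features would of course come from the curvelet-based segmentation described above.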

Please reference the following paper if you would like to use any part of this dataset or method:
*** SH. Hajeb, H. Rabbani, MR. Akhlaghi, "Diabetic Retinopathy Grading by Digital Curvelet Transform", Computational and Mathematical Methods in Medicine, vol. 2012, Article ID 761901, 11 pages, 2012.

Download Normal Data

Download Abnormal Data


35 colour retinal images (720 × 576 pixels) with signs of diabetic retinopathy (microaneurysms and exudates). Click here to download the data.

Fundus Images with Exudates

This page contains 35 colour retinal images (720 × 576 pixels) with signs of diabetic retinopathy (microaneurysms and exudates), used in a study that presents a curvelet-based algorithm for the detection of the optic disk (OD) and exudates in low-contrast images. This algorithm, which is composed of three main stages, does not require user initialization and is robust to changes in the appearance of retinal fundus images. First, bright candidate lesions in the image are extracted by employing the DCUT and modifying the curvelet coefficients of the enhanced retinal image. For this purpose, the authors apply a new bright-lesion enhancement to the green plane of the retinal image to obtain adequate illumination normalisation in the regions near the OD and to increase the brightness of lesions in dark areas such as the fovea. Following this step, a new OD detection and boundary extraction method based on the DCUT and the level set method is introduced. Finally, a bright lesions map (BLM) image is generated, and to distinguish between exudates and the OD (i.e., to avoid a false detection in the final exudate detection), the extracted candidate pixels in the BLM that are not in the OD region (detected in the previous step) are considered actual bright lesions.

Please reference the following paper if you would like to use any part of this dataset or method:
***M. Esmaeili, H. Rabbani, A. M. Dehnavi, A. Dehghani, “Automatic Detection of Exudates and Optic Disk in Retinal Images Using Curvelet Transform", IET Image Processing, vol. 6, no. 7, pp. 1005-1013, Oct. 2012.

Data

**Dataset of Leishmania Parasite in Microscopic Images
45 microscopic images (24-bit, 3264 × 2448 pixels) taken from bone marrow samples containing Leishman bodies. Click here to download the data.


Visceral leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to a World Health Organization report, definitive diagnosis is possible only by direct observation of Leishman bodies in microscopic images taken from bone marrow samples.

In this study, Leishman bodies present in microscopic images taken from bone marrow samples of patients with visceral leishmaniasis underwent automatic segmentation using the Otsu and Sauvola thresholding methods as well as the k-means clustering method. For data acquisition, a digital camera (Sony DSC-H9) coupled to an optical microscope (Olympus CH40RF200) was used, and 45 images were captured for automatic boundary extraction of Leishman bodies.
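The Otsu step can be sketched independently of the data. The snippet below is a plain NumPy implementation of Otsu's between-class-variance search, applied to a synthetic bimodal image as a stand-in for a stained smear; the Sauvola and k-means stages of the paper are not shown.

```python
import numpy as np

def otsu_threshold(gray):
    # Exhaustive search for the threshold maximising between-class variance.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (levels[:t] * p[:t]).sum() / w0   # mean of the dark class
        m1 = (levels[t:] * p[t:]).sum() / w1   # mean of the bright class
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal image: dark "bodies" (grey level 50) on a bright
# background (grey level 200).
img = np.full((64, 64), 200, np.uint8)
img[16:32, 16:32] = 50
t = otsu_threshold(img)
mask = img < t             # foreground = pixels darker than the threshold
print(t, int(mask.sum()))  # threshold falls between the two modes; 256 pixels
```

Sauvola thresholding differs by computing a local threshold from each window's mean and standard deviation, which is what makes it less sensitive to uneven illumination than the single global Otsu threshold.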

 

Individual download links: Data Set1 through Data Set45.

 

Please cite the following paper if you download the dataset:

 

M. Farahi, H. Rabbani, A. Mehri, "Automatic Boundary Extraction of Leishman Bodies in Bone Marrow Samples from Patients with Visceral Leishmaniasis", Journal of Isfahan Medical School, vol. 32, no. 286, 3rd week, July 2014.

 

 

Abstract

Background: With the progress of microscopic imaging technology and suitable image processing techniques over the past decade, there is a tendency to use computers for the automatic diagnosis of microscopic diseases. Automatic border detection is one of the most important steps in computer-aided diagnosis, as the accuracy and specificity of the subsequent steps crucially depend on it. Microscopic images are stained so that they can be seen more accurately and easily; after staining, image artifacts increase, so the boundary detection of objects is very important for exact feature extraction.

Methods: In this study, Leishman bodies present in microscopic images taken from bone marrow samples of patients with visceral leishmaniasis underwent automatic segmentation using the Otsu and Sauvola thresholding methods as well as the k-means clustering method. For data acquisition, a digital camera (Sony DSC-H9) coupled to an optical microscope (Olympus CH40RF200) was used. The proposed method was tested on 20 images. For automatic diagnosis of Leishman bodies among all found objects, geometric features such as eccentricity, area ratio, roundness and solidity, and texture features such as mean, variance, smoothness, third moment, uniformity and entropy were extracted. Found objects were classified into healthy and non-healthy groups using a feed-forward neural network classifier.

Findings: To find the best mode for each method, a comparison was made; it was determined that using stage 5 for Otsu, a threshold of 0.1 for Sauvola, and 5 clusters for k-means yielded the minimum automatic boundary extraction error.

Conclusion: Comparing the obtained results with the specialist's markings, we found that the Sauvola method had the minimum border detection error, and the Otsu method was more accurate for automatic detection of Leishman bodies.

 

Keywords: Automatic disease diagnosis, Visceral leishmaniasis, Leishman body, Segmentation, Border detection

 

Full Text In Persian