This section first introduces the various entropy measures and then surveys their applications in biomedicine:
- Cornforth et al. (2013) used Rényi entropy to detect cardiac autonomic neuropathy (CAN) in diabetes patients. Multi-scale entropy was applied to the ECG recording. Their results suggest that the measure readily distinguishes people with early CAN from controls.
- Graff et al. (2012) investigated the usefulness of entropy measures, namely approximate entropy, sample entropy, fuzzy entropy, and permutation entropy (a minimal sample entropy sketch follows this list). They found that even when the data set was shortened to 250 RR intervals, entropy values remained significantly different in patients with congestive heart failure (CHF) compared to healthy individuals.
- Akareddy et al. (2013) improved approximate entropy and used it in EEG signal classification for epileptic seizure detection.
- Sharma et al. (2015) used different entropy measures, namely average Shannon entropy, average Rényi entropy, average approximate entropy, average sample entropy, and average phase entropies, computed on intrinsic mode functions, for the automated identification of focal EEG signals.
- Ferlazzo et al. (2014) evaluated permutation entropy extracted from EEG recordings during both interictal and ictal phases in patients with typical absences (TAs) and in healthy subjects (a permutation entropy sketch also follows this list). In patients with TAs, fronto-temporal areas showed higher randomness, reflected in high permutation entropy levels, while posterior areas showed lower randomness, reflected in low permutation entropy levels. Based on this, they concluded that permutation entropy is a useful tool for disclosing abnormalities of cerebral electrical activity.
- Avilov et al. (2012) also used permutation entropy on EEG recordings, but to detect epileptic seizures. They showed that the permutation entropy (PE) of epileptic patients is about twice that of healthy subjects, and that PE increases over time ranges where epileptiform activity appears.
- Liang et al. (2015) compared twelve entropy indices, namely response entropy, state entropy, three wavelet entropy measures (Shannon, Tsallis, and Rényi), Hilbert-Huang spectral entropy, approximate entropy, sample entropy, fuzzy entropy, and three permutation entropy measures (Shannon, Tsallis, and Rényi), in monitoring depth of anesthesia and detecting burst suppression. They found that Rényi permutation entropy performed best in tracking EEG changes associated with different anesthetic states, and that approximate entropy and sample entropy performed best in detecting burst suppression.
- Zhang et al. (2012) demonstrated the possibility of using sample entropy for muscle activity onset detection by processing surface EMG contaminated by ECG artifacts. A threshold on sample entropy was used to detect muscle activity and performed significantly better than the comparison methods.
- Diab et al. (2014) investigated the performance of four non-linearity detection methods, drawn from statistics (time reversibility), predictability (sample entropy, delay vector variance), and chaos theory (Lyapunov exponents). Applied to real uterine EMG signals, the methods were used to distinguish between pregnancy and labor contraction bursts. Their results confirmed the sensitivity of sample entropy.
- Alamedine et al. (2014) proposed several selection methods for choosing the best parameters to classify contractions in uterine EHG signals for the detection of preterm labor. One of the selected parameters was sample entropy, which was potentially the most useful for discriminating between pregnancy and labor contractions.
- Garcia-Gonzalez et al. (2013) also investigated the differences in the contractions generated by women who chose a vaginal delivery as opposed to those who elected a caesarean section. They used sample entropy to quantify the irregularity of manually selected contractions in the EHG time series and confirmed that sample entropy could provide an index for evaluating the quality of the active phase of labor at term.
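Several of the studies above rest on sample entropy. As a point of reference, the following is a minimal sketch of the standard SampEn(m, r) computation in Python with NumPy; it is a generic implementation, not the exact code used in any of the cited papers, and the defaults m = 2 and r = 0.2 are merely common choices:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D signal.

    m : embedding dimension (template length), commonly 2
    r : tolerance as a fraction of the signal's standard deviation,
        commonly 0.1-0.25
    """
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(dim):
        # All overlapping templates of length `dim`.
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to every later template;
            # self-matches are excluded by construction.
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count

    b = count_matches(m)       # template pairs matching for m points
    a = count_matches(m + 1)   # pairs that still match for m + 1 points
    return -np.log(a / b)      # SampEn = -ln(A / B); inf if a == 0
```

A quick sanity check matches the intuition behind the studies above: a regular signal yields low sample entropy, an irregular one high entropy.

```python
rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))
irregular = rng.normal(size=1000)
print(sample_entropy(regular))    # low: the sine is highly predictable
print(sample_entropy(irregular))  # high: white noise is not
```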
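Permutation entropy, used in the EEG studies by Ferlazzo et al., Avilov et al., and Liang et al., is simpler still: it is the Shannon entropy of the distribution of ordinal patterns (rank orderings) of short windows of the signal. Below is a minimal sketch of the standard Bandt-Pompe formulation, again a generic illustration rather than any paper's code; order = 3 and delay = 1 are just common defaults:

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D signal.

    order : length of the ordinal patterns (embedding dimension)
    delay : time delay between the points of a pattern
    """
    x = np.asarray(x, dtype=float)
    n_windows = len(x) - (order - 1) * delay
    # Reduce each window to the permutation that sorts it.
    counts = Counter(
        tuple(np.argsort(x[i:i + order * delay:delay]))
        for i in range(n_windows)
    )
    probs = np.array(list(counts.values()), dtype=float) / n_windows
    pe = -np.sum(probs * np.log2(probs))
    if normalize:
        pe /= math.log2(math.factorial(order))  # scale to [0, 1]
    return pe
```

With normalization, values near 1 indicate high randomness (as Ferlazzo et al. reported for fronto-temporal areas) and values near 0 indicate highly ordered activity.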
Conclusion: entropy measures have many advantages, but shortcomings remain. One of them is how to use entropy measures for the classification of pathological and non-pathological data. In studies on biomedical data, a common aim is to distinguish between two states of a system, yet little is known about how to solve the classification problem and how to select appropriate data ranges. This is directly related to the selection of the threshold (tolerance) value r, which influences the value of entropy, as illustrated below. Different parameter sets have been used, but not all possible combinations have been tested and no consensus has been reached; the appropriate value of r depends on the data and its type.
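To make the dependence on r concrete, the sketch below (building on the sample_entropy function defined above; the signal is synthetic, not data from any of the cited studies) sweeps the tolerance over the range typically quoted in the literature and prints the resulting entropy values:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for a biomedical series: a slow oscillation plus noise.
signal = np.sin(np.linspace(0, 8 * np.pi, 500)) + 0.3 * rng.normal(size=500)

for r in (0.10, 0.15, 0.20, 0.25, 0.30):
    s = sample_entropy(signal, m=2, r=r)
    print(f"r = {r:.2f} * std  ->  SampEn = {s:.3f}")
```

Smaller tolerances make template matches rarer and typically drive the entropy up, so entropy values are only comparable across studies when m, r, and the series length N are held fixed; this is precisely why the lack of consensus on parameter choice matters.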