The International Society for Clinical Biostatistics in the Czech Republic (ISCB ČR) is a voluntary, independent civic association of professionals with the legal form of a registered association, which has the character of a professional society.
The mission of the association is, in cooperation with the International Society for Clinical Biostatistics (ISCB), to contribute to the development of biostatistics in the Czech Republic.
The professional activities of the association focus on the application of statistical methods in biomedicine.
The association is a legal entity.
Current Announcements

An approach based on pseudoresiduals for selecting a multiplicative or an additive hazards regression model
François Lefebvre^{1,2}, Roch Giorgi^{3}
^{1}Aix Marseille Univ, INSERM, IRD, SESSTIM, Marseille, France. ^{2}Groupe méthode en recherche clinique, service de santé publique, Hôpitaux universitaires de Strasbourg, Strasbourg, France. ^{3}Aix Marseille Univ, APHM, INSERM, IRD, SESSTIM, Hop Timone, BioSTIC, Marseille, France.
In survival analysis, data can be modelled with a multiplicative hazards model, such as the Cox model, or with an additive hazards model, such as Lin's or Aalen's model. Covariates act multiplicatively on the baseline hazard in the first type of model, while they act additively in the second. To model a covariate correctly, knowledge of its effect on the baseline hazard is required, but this is rarely known a priori. Diagnostic tools have therefore been proposed to check goodness-of-fit. For the multiplicative model, Schoenfeld residuals are studied to test the proportional hazards assumption, and martingale residuals are plotted to identify the most appropriate functional form of the effect of continuous covariates (e.g., polynomial functions or cubic splines). For the additive model, the form of the effect of continuous covariates (e.g., polynomial functions or cubic splines) is estimated by analyzing martingale residual processes. Pseudo-observations have also been used to assess the effect of a covariate on survival and to check the assumptions of the Cox (proportional hazards, log-linearity), the Lin (constant effect and linearity), or the Aalen (linearity) model. While these diagnostic tools make it possible to specify a correct multiplicative or additive model, they do not indicate which of the two is more appropriate for a particular dataset. We therefore propose to use pseudo-residuals as a measure, for each individual, of the difference between a non-parametric survival estimator and the survival estimates obtained from a regression model. For each type of regression model, multiplicative and additive, pseudo-residuals can be computed and compared with each other. Because the pseudo-residuals are analogous to the residuals in a general linear model, the best model is the one that minimizes the sum of squared pseudo-residuals.
In this presentation, we will first show a strategy for fitting optimal additive and multiplicative models, and then show how pseudo-residuals can be used to select between a multiplicative and an additive hazards regression model.
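The pseudo-observation idea underlying the talk can be illustrated with a small self-contained sketch (a minimal Python illustration written by us, not the authors' implementation): jackknife pseudo-observations of the Kaplan-Meier survival estimate at a fixed time point t0, theta_i = n * theta_hat - (n - 1) * theta_hat_without_i. With completely uncensored data these reduce to the indicators 1{T_i > t0}.

```python
import numpy as np

def km_survival_at(t0, time, event):
    """Kaplan-Meier estimate of S(t0) from right-censored data.
    event[i] = 1 if a death was observed, 0 if censored."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    s = 1.0
    for i in range(n):
        if time[i] > t0:
            break
        if event[i]:
            s *= 1.0 - 1.0 / (n - i)  # n - i subjects still at risk
    return s

def pseudo_observations(t0, time, event):
    """Jackknife pseudo-observations of S(t0):
    theta_i = n * S_hat - (n - 1) * S_hat_without_i."""
    n = len(time)
    full = km_survival_at(t0, time, event)
    pseudo = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        pseudo[i] = n * full - (n - 1) * km_survival_at(t0, time[mask], event[mask])
    return pseudo
```

The pseudo-residuals proposed in the talk would then contrast such individual-level estimates with model-based survival predictions; that step is model-specific and omitted here.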
Keywords: multiplicative hazards models, additive hazards models, pseudo-observations, pseudo-residuals
Location:
Institute of Computer Science, Czech Academy of Sciences, Lecture Room 222, Pod Vodárenskou věží 2, 182 07 Prague
Date: Friday 22 November 2019
Time: 13:30 CET

Posted on 23 October 2019 at 7:11 by Zdeněk Valenta

Comparison of regression curves for detection of differential item functioning
Authors: Adéla Hladká^{1, 2} and Patrícia Martinková^{1, 3}
^{1}Department of Statistical Modelling, Institute of Computer Science of the Czech Academy of Sciences; ^{2}Department of Probability and Mathematical Statistics, Faculty of Mathematics and Physics, Charles University; ^{3}Institute for Research and Development of Education, Faculty of Education, Charles University
Abstract:
Differential item functioning (DIF) is a phenomenon in which two respondents with the same underlying latent trait (such as quality of life, stress, pain, attitudes, knowledge, or ability) but from different social groups have different probabilities of endorsing an item in a multi-item measurement. Many methods for DIF detection are derived from a comparison of the regression curves of the reference and focal groups; however, most of them are limited to detecting DIF caused by a difference in either the difficulty or the discrimination parameter (the inflection point and slope of the probability curve). This methodological gap can be filled by non-linear regression models for DIF detection (Drabinová & Martinková, 2017) implemented within the difNLR R package (Hladká & Martinková, 2019). These models offer the possibility of accounting for guessing or inattention (or guilt or shame in health-related outcome measures) when answering, and moreover of testing whether these item characteristics differ between groups. Another approach that can detect such differences is a newly proposed non-parametric comparison of regression curves. This method can be useful, for example, with small sample sizes or when no specific true model is expected. A special type of DIF is differential distractor functioning (DDF), a situation in which two respondents with the same underlying trait but from different groups have different probabilities of selecting given distractors. DDF can be modelled with nominal models for nominal data; similarly, cumulative logit or adjacent-category logit models may be used to describe DIF in ordinal items. Other topics in DIF detection include item purification and multiple-comparison corrections. In this presentation, we introduce several approaches to DIF and DDF detection, show the results of simulation studies, and illustrate the usage and implementation of the methods with real and simulated examples.
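As a rough illustration of regression-based DIF detection (our own Python sketch; the methods discussed in the talk are implemented in the difNLR R package and are more general), the snippet below fits nested logistic models with and without group terms and compares them with a likelihood-ratio test. The function names and the simple two-parameter setting are assumptions for this example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def _neg_loglik(beta, X, y):
    """Negative log-likelihood of a logistic regression (numerically stable)."""
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

def _max_loglik(X, y):
    res = minimize(_neg_loglik, np.zeros(X.shape[1]), args=(X, y), method="BFGS")
    return -res.fun

def dif_lr_test(y, score, group):
    """Likelihood-ratio DIF test: do a group main effect and a
    group x score interaction improve on the score-only model?"""
    n = len(y)
    X0 = np.column_stack([np.ones(n), score])
    X1 = np.column_stack([np.ones(n), score, group, score * group])
    stat = 2.0 * (_max_loglik(X1, y) - _max_loglik(X0, y))
    return stat, chi2.sf(stat, df=2)
```

A significant statistic indicates that the item's response curve differs between the reference and focal groups in intercept (difficulty) and/or slope (discrimination); guessing and inattention asymptotes, which difNLR handles, are not modelled here.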
Keywords: differential item functioning, logistic regression, binary outcome, polytomous outcome
Speaker: Adéla Hladká
Location:
Institute of Computer Science, Czech Academy of Sciences, Lecture Room 222, Pod Vodárenskou věží 2, 182 07 Prague
Date: Thursday 12 September 2019
Time: 13:30

Posted on 23 August 2019 at 1:48 by Zdeněk Valenta

On the information efficiency of neuronal coding
Abstract:
Research in computational neuroscience has a tradition of more than 100 years, marked by the now-classical Lapicque, McCulloch-Pitts, and Hodgkin-Huxley neuronal models. During the last three decades the field has grown dramatically, attracting a number of scientists from different disciplines. New topics have emerged alongside the traditional neuronal modeling approaches, and the long-standing problem of neuronal coding has recently been receiving substantial attention. The approach to the problem relies on applying information theory, signal detection and estimation theory, and the theory of stochastic processes to different aspects of neuronal information processing, including coding and decoding in individual neurons and populations, and analysis of the beneficial role of noise in the system. In the long run, understanding the principles of information processing in neurons may help to introduce new algorithms or a new generation of hardware. In the talk we discuss the possibility of source-channel matching in Hodgkin-Huxley-type neuronal models with adaptation and compare the theoretical predictions with in vivo experimental recordings of excitatory neurons. Our results imply that the postsynaptic firing-rate histograms of real neurons match the theoretical prediction when the model balances information transmission and metabolic workload optimally.
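The balance between information transmission and metabolic workload can be illustrated on a toy discrete channel (a deliberately simplified sketch of ours, not the Hodgkin-Huxley model with adaptation discussed in the talk): we search for the input distribution that maximizes mutual information minus a metabolic penalty on the active state.

```python
import numpy as np

def mutual_information(p, W):
    """I(X;Y) in bits for input distribution p and channel matrix W[x, y];
    assumes all entries of W are strictly positive."""
    q = p @ W  # output distribution
    return float(np.sum(p[:, None] * W * np.log2(W / q)))

def optimal_firing_fraction(W, cost, lam, grid=1001):
    """Grid search for the two-state (quiescent/active) input distribution
    maximizing I(X;Y) - lam * E[cost], a crude capacity-cost trade-off."""
    best_p, best_val = None, -np.inf
    for a in np.linspace(0.0, 1.0, grid):
        p = np.array([1.0 - a, a])
        val = mutual_information(p, W) - lam * float(p @ cost)
        if val > best_val:
            best_p, best_val = p, val
    return best_p, best_val
```

Increasing the metabolic weight lam shifts the optimal input distribution toward the cheaper quiescent state, the qualitative effect that the talk's source-channel matching argument formalizes for biophysical neuron models.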
Keywords: computational neuroscience; neuronal coding; information capacity; metabolic cost
Speaker:
Lubomír Košťál, Institute of Physiology, Czech Academy of Sciences, Vídeňská 1083, 142 20 Prague, Czech Republic
Location:
Institute of Computer Science, Czech Academy of Sciences, Lecture Room 222, Pod Vodárenskou věží 2, 182 07 Prague
Date: Thursday 19 September 2019
Time: 14:00
Posted on 16 August 2019 at 3:31 by Zdeněk Valenta

Variable selection – a review and recommendations for the practicing statistician
Georg Heinze, Christine Wallisch, Daniela Dunkler
Section for Clinical Biometrics, Center for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna, Vienna 1090, Austria
Abstract: Statistical models support medical research by facilitating individualized outcome prognostication conditional on independent variables or by estimating effects of risk factors adjusted for covariates. The theory of statistical models is well-established if the set of independent variables to consider is fixed and small. Hence, we can assume that effect estimates are unbiased and the usual methods for confidence interval estimation are valid. In routine work, however, it is not known a priori which covariates should be included in a model, and we are often confronted with 10–30 candidate variables. This number is often too large to be considered in a statistical model. We provide an overview of various available variable selection methods that are based on significance or information criteria, penalized likelihood, the change-in-estimate criterion, background knowledge, or combinations thereof. These methods were usually developed in the context of a linear regression model and then transferred to generalized linear models or models for censored survival data. Variable selection, in particular if used in explanatory modeling where effect estimates are of central interest, can compromise the stability of a final model, the unbiasedness of regression coefficients, and the validity of p-values or confidence intervals. Therefore, we give pragmatic recommendations for the practicing statistician on the application of variable selection methods in general (low-dimensional) modeling problems and on performing stability investigations and inference. We also propose some quantities based on resampling the entire variable selection process to be routinely reported by software packages offering automated variable selection algorithms.
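The proposal to resample the entire selection process can be sketched as follows (a hypothetical Python illustration of ours using ordinary least squares with AIC-based backward elimination; the talk's recommendations are not tied to any particular tool): each bootstrap resample repeats the whole selection from scratch, and the reported quantity is each variable's inclusion frequency.

```python
import numpy as np

def _aic(X, y):
    """AIC of a Gaussian linear model fitted by least squares."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + 2 * X.shape[1]

def backward_select(X, y, names):
    """AIC-based backward elimination (intercept always kept).
    Returns the set of names of the retained columns of X."""
    n = len(y)
    ones = np.ones((n, 1))
    cols = list(range(X.shape[1]))
    current = _aic(np.hstack([ones, X[:, cols]]), y)
    while cols:
        scores = [(_aic(np.hstack([ones, X[:, [c for c in cols if c != j]]]), y), j)
                  for j in cols]
        best_aic, best_j = min(scores)
        if best_aic < current:
            current = best_aic
            cols.remove(best_j)
        else:
            break
    return {names[c] for c in cols}

def inclusion_frequencies(X, y, names, n_boot=100, seed=0):
    """Repeat the entire selection on bootstrap resamples and report how
    often each variable is selected, as suggested for stability reporting."""
    rng = np.random.default_rng(seed)
    n = len(y)
    counts = dict.fromkeys(names, 0)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        for nm in backward_select(X[idx], y[idx], names):
            counts[nm] += 1
    return {nm: c / n_boot for nm, c in counts.items()}
```

Variables with inclusion frequencies near 1 are stably selected; frequencies near the noise level flag unstable choices that a single run of automated selection would hide.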
Keywords: change-in-estimate criterion, penalized likelihood, resampling, statistical model, stepwise selection
Speaker: Georg Heinze
Location:
Institute of Computer Science, Czech Academy of Sciences, Lecture Room 222, Pod Vodárenskou věží 2, 182 07 Prague
Date: Thursday 17 October 2019
Time: 13:30
Posted on 15 October 2019 at 1:14 by Zdeněk Valenta

addhaz: Contribution of chronic diseases to the disability burden using R
Renata T. C. Yokota^{1,2}, Caspar W. N. Looman^{3}, Wilma J. Nusselder^{3}, Herman Van Oyen^{1,4}, Geert Molenberghs^{5,6}
1. Department of Public Health and Surveillance, Scientific Institute of Public Health, Brussels, Belgium; 2. Department of Sociology, Interface Demography, Vrije Universiteit Brussel, Brussels, Belgium; 3. Department of Public Health, Erasmus Medical Center, Rotterdam, The Netherlands; 4. Department of Public Health, Ghent University, Ghent, Belgium; 5. Interuniversity Institute for Biostatistics and statistical Bioinformatics (I-BioStat), Universiteit Hasselt, Diepenbeek, Belgium; 6. Interuniversity Institute for Biostatistics and statistical Bioinformatics (I-BioStat), KU Leuven, Leuven, Belgium
Abstract:
The increase in life expectancy, followed by the growing proportion of older individuals living with chronic diseases, contributes to the burden of disability worldwide. Estimating how much each chronic condition contributes to the disability prevalence can be useful for developing public health strategies to reduce the burden. This presentation introduces the R package addhaz, which is based on the attribution method (Nusselder and Looman, 2004) for partitioning the total disability prevalence into the additive contributions of chronic diseases using cross-sectional data. The package includes tools to fit the binomial and multinomial additive hazard models, the core of the attribution method. The models are fitted by maximizing the binomial and multinomial log-likelihood functions using constrained optimization (the constrOptim function in R). The 95% Wald and bootstrap percentile confidence intervals can be obtained for the parameter estimates. The absolute and relative contributions of each chronic condition to the disability prevalence, and their bootstrap confidence intervals, can also be estimated. An additional feature of addhaz is the possibility of using parallel computing to obtain the bootstrap confidence intervals, which reduces computation time. In this presentation, we will illustrate the use of addhaz for the binomial and multinomial models with data from the 2013 Brazilian National Health Survey.
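To give a flavour of the binomial model at the core of the attribution method, the sketch below (a simplified Python illustration of ours, not the addhaz implementation; bound constraints via L-BFGS-B stand in for R's constrOptim) fits P(disabled | x) = 1 - exp(-(b0 + sum_j b_j x_j)) with nonnegative coefficients and partitions each person's disability probability proportionally to the additive hazard contributions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_binomial_additive_hazard(X, y):
    """Maximum likelihood for P(disabled | x) = 1 - exp(-(b0 + sum_j b_j x_j)),
    with all coefficients constrained to be nonnegative (bounds here stand in
    for the constrained optimization done by constrOptim in R)."""
    n, p = X.shape
    Xd = np.hstack([np.ones((n, 1)), X])  # column 0: background hazard b0

    def nll(b):
        pi = 1.0 - np.exp(-(Xd @ b))
        pi = np.clip(pi, 1e-10, 1.0 - 1e-10)
        return -np.sum(y * np.log(pi) + (1.0 - y) * np.log(1.0 - pi))

    res = minimize(nll, x0=np.full(p + 1, 0.1), method="L-BFGS-B",
                   bounds=[(1e-8, None)] * (p + 1))
    return res.x

def attribution(X, beta):
    """Partition each person's disability probability proportionally to the
    additive hazard contributions, then average; element 0 is the background."""
    n = X.shape[0]
    Xd = np.hstack([np.ones((n, 1)), X])
    haz = Xd @ beta
    pi = 1.0 - np.exp(-haz)
    shares = (Xd * beta) * (pi / haz)[:, None]
    return shares.mean(axis=0)
```

By construction, the per-person shares sum to that person's disability probability, so the averaged contributions sum to the model's estimate of the total disability prevalence.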
Keywords: disability, binomial outcome, multinomial outcome, additive hazard model, crosssectional data
Speaker: Renata Tiene de Carvalho Yokota
Location:
Institute of Computer Science, Czech Academy of Sciences, Lecture Room 222, Pod Vodárenskou věží 2, 182 07 Prague
Date: Thursday 10 October 2019
Time: 14:00
Posted on 16 August 2019 at 3:32 by Zdeněk Valenta
