The McGill quantitative psychology brown bag series is intended as an informal forum for sharing recent developments in quantitative psychology and related disciplines, and for discussing methodological issues that substantive researchers may encounter in their work.
Winter 2020 (Location: 2001 McGill College Room 464, Time: Wednesday 2:00 – 3:00)
January 8: Leonie Cloos (Psychology, Leiden University)
Title: Lost in translation? Accumulating knowledge about construct validity via large-scale replications
Abstract: Replication is regarded as one of the most powerful tools for establishing scientific evidence. Recently, psychological research has made concerted efforts to promote replications that support the veracity and generalizability of findings. Laboratories across the globe have joined large-scale replication projects, collecting data for the same study across geographically distributed locations. Measures include questionnaires that are translated into different languages. When a measure is applied in such a new context, structural validity evidence is required to establish (a) that the replication measures the same construct, and (b) that the translated versions measure this construct equivalently. This issue has been understudied in the context of replication, and it is unclear how measurement differences in translated questionnaires may influence analyses and bias results. I will present a study in which we used data from three translations (English, Dutch, Spanish) of the Moral Foundations Questionnaire (MFQ) applied in a replication. In this study, we analyzed the replicability of the effect, conducted confirmatory factor analyses, tested for measurement equivalence across translations, and explored how non-equivalence influenced replication results. The replication effect differed across translations. A different measurement model was applied in the replication, showing poor model fit. While the number of factors and item loadings were similar across translations, equivalence was not established for item intercepts. Using this example, we discuss how measurement differences on two levels (i.e., original vs. replication, and translations within large-scale studies) may influence replication results and ultimately the interpretation of findings. We propose steps for planning, conducting, and interpreting replications when measures must be translated.
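As a toy illustration of why non-equivalent item intercepts matter (this is not data or code from the talk; all numbers are hypothetical), a small simulation shows how an intercept shift in a translated item can mimic a group difference even when the latent construct means are identical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent construct scores: both language groups have the SAME true mean.
theta_en = rng.normal(0.0, 1.0, n)   # "English" sample
theta_nl = rng.normal(0.0, 1.0, n)   # "Dutch" sample

# Observed item = intercept + loading * latent + noise.
# Loadings are equal (metric equivalence holds), but the translated item
# has a shifted intercept (scalar non-equivalence), e.g. stronger wording.
loading = 0.8
item_en = 2.0 + loading * theta_en + rng.normal(0, 0.5, n)
item_nl = 2.4 + loading * theta_nl + rng.normal(0, 0.5, n)  # +0.4 intercept shift

# A naive comparison of observed means suggests a group difference,
# even though the latent means are identical by construction.
diff = item_nl.mean() - item_en.mean()
print(diff)  # close to the 0.4 intercept shift, not a true latent difference
```

A replication analysis that pools or compares such observed scores across translations would absorb the intercept shift into its effect estimate, which is the mechanism the abstract describes.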
January 22: Dr. Heungsun Hwang (Psychology, McGill)
Title: A gentle introduction to generalized structured component analysis and its recent developments
Abstract: Generalized structured component analysis (GSCA) was developed as a component-based approach to structural equation modeling, in which constructs are represented by components, or weighted composites of observed variables, rather than (common) factors. Unlike partial least squares path modeling, another long-standing component-based approach, GSCA is a full-information method that optimizes a single criterion to estimate all model parameters simultaneously, utilizing all information available in the entire system of equations. Over the past decade, this approach has been refined and extended in various ways to enhance its data-analytic capability. I will briefly discuss the theoretical underpinnings of GSCA and demonstrate the use of an R package for GSCA, gesca. Moreover, I will outline some recent developments in GSCA, including GSCAM for estimating models with factors and integrated GSCA (IGSCA) for estimating models with both factors and components.
January 23 (Thursday): Dr. Todd Woodward (Psychiatry, University of British Columbia), 10:00 – 11:00
Title: Task-state functional brain networks detectable by fMRI using constrained principal component analysis: More than just a pretty picture
Abstract: Characterization of brain networks using functional magnetic resonance imaging (fMRI) has primarily been advanced by resting-state research; however, using task-based research, functional characterizations can be more robustly determined by observing how the timing of network-level evoked hemodynamic responses (HDRs) differs between task conditions. To this end, our laboratory has developed a novel approach that integrates the following principles: (1) as opposed to voxel-by-voxel univariate analyses, use multivariate/multidimensional analysis methods that compute networks based on the dominant pattern of intercorrelations between voxels; (2) as opposed to mixing task-related and task-unrelated variance in brain activity, extract the task-related variance prior to network extraction; (3) as opposed to selecting brain regions of interest (ROIs), compute networks that allow every voxel to participate in every brain network; (4) as opposed to assuming HDR shapes, use data-driven exploration of HDR shapes, allowing a separate HDR shape for every subject, network, and task condition (Finite Impulse Response [FIR] model). Principles (1) and (2) can be achieved by applying constrained principal component analysis (CPCA) to fMRI data (fMRI-CPCA). Principles (3) and (4) are achieved by decisions about the content of the matrices submitted to fMRI-CPCA. This line of research has led to the identification of a set of 10 core task-based fMRI networks, a subset of which is retrieved from all task-based fMRI data, regardless of the specific task. Based on the experimental conditions to which they respond, we have assigned a preliminary cognitive function to each of these interacting networks. Some of them are already familiar to the field from resting-state studies (e.g., default mode network, response network), but others are novel and specific to the task state (e.g., cognitive evaluation, volitional attention to internal representations).
Extended applications of CPCA models to fMRI, MEG, and EEG data are also discussed.
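The two steps behind principles (1) and (2) can be sketched in a few lines of NumPy (a toy illustration with random stand-in data, not the laboratory's fMRI-CPCA software; matrix sizes and the design matrix are hypothetical): first regress the data on the design matrix to isolate task-related variance, then apply PCA to the predicted part.

```python
import numpy as np

rng = np.random.default_rng(1)
T, V = 200, 50                 # time points x voxels (toy sizes)
G = rng.normal(size=(T, 8))    # design matrix, e.g. an FIR basis for task timing
Z = rng.normal(size=(T, V))    # data matrix (standardized fMRI time courses)

# Step 1 (principle 2): keep only the variance in Z predictable from G.
# GC is the task-related part of the data; E is the task-unrelated residual.
C = np.linalg.lstsq(G, Z, rcond=None)[0]   # regression weights
GC = G @ C
E = Z - GC

# Step 2 (principle 1): PCA, via SVD, of the task-related part only.
# The right singular vectors give whole-brain component loadings, so every
# voxel participates in every component (principle 3).
U, s, Vt = np.linalg.svd(GC, full_matrices=False)
var_explained = s**2 / (s**2).sum()
print(var_explained[:3])       # share of task-related variance per component
```

Because GC lies in the column space of G, the extracted components are constrained to variance attributable to the task design, which is the sense in which the PCA is "constrained."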
February 5: Sunmee Kim (Psychology, McGill)
Title: Interpretable data reduction in prediction models: Extended redundancy analysis and its extensions and applications
Abstract: Extended redundancy analysis (ERA) is a statistical tool that performs data reduction and regression analysis simultaneously. When investigating a complex social and behavioral phenomenon that involves multiple different sets of predictors, ERA can be useful as it provides a simpler description of predictor-response relationships by summarizing each set of predictors into its low-dimensional representation—a component. I have recently extended ERA to address issues of substantive importance in health psychology and human genetics. In this talk, I will present the theoretical underpinnings of the proposed methods and illustrate their empirical usefulness using data from the National Survey on Drug Use and Health in the US. I will also outline the ongoing development of predictive indices for ERA that seeks to measure how well a model generalizes to new data.
February 19: Dr. Sneha Shankar (Psychology, McGill)
Title: Beyond psychometrics: Comprehensive construct validation on the ground using examples from motivation and character skills assessments
Abstract: Measuring and validating unobservable constructs is a common task in psychology. Although psychometric measurement models can be applied to examine factor structure and item characteristics, these models alone do not determine the validity of a construct. This talk presents work from systematic reviews of validation practices for motivation and goal measures, with discussion of what it means to have a valid measure. In the second part of my talk, I will relate this validity process to the evaluation of a character skills assessment used in admissions for thousands of students around the world. Moving from modelling to application, I present numerous considerations in validation research, including the issues of fairness and accuracy needed to establish that a scale effectively measures what is intended.
February 26: Dr. Geneviève Lefebvre (Math, UQAM) (1:30 – 2:30)
Title: Comparing logistic and log-binomial models for causal mediation analyses of binary mediators and rare binary outcomes: moving towards exact regression-based approaches
Abstract: In the binary outcome framework for causal mediation, standard expressions for the natural direct and indirect effect odds ratios (ORs) are established from a logistic model by invoking several approximations that hold under the rare-disease assumption. Such ORs are expected to be close to the corresponding effects on the risk ratio (RR) scale based on a log-binomial model, but the robustness of interpretation to this assumption merits investigation. The objective is to report on mediation results from logistic and log-binomial models when the marginal probability of the outcome is <10%. Standard (approximate) ORs and RRs were estimated using data on pregnant asthmatic women from Québec. Prematurity and low birthweight were the mediator and outcome variables, respectively, and two binary exposure variables were considered: treatment with inhaled corticosteroids and placental abruption. Exact ORs were also derived and estimated using a contributed SAS macro. Simulations that mimicked our data were subsequently performed to replicate the findings. Many approximate ORs and RRs estimated from our data did not agree closely. Approximate ORs were systematically farther from the RRs than the exact ORs were. Exact OR estimates were very close to RR estimates for exposure to inhaled corticosteroids, but less so for placental abruption. Approximate OR estimators also exhibited important bias and undercoverage in simulated scenarios that featured a strong mediator-outcome relationship. These results pave the way for exact estimators that do not rely on the rare-disease assumption.
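The rare-disease assumption at the heart of this abstract can be illustrated with a short sketch (illustrative probabilities only, not estimates from the study): when the outcome is rare in both exposure groups, the OR closely approximates the RR, and the approximation degrades as the outcome becomes common.

```python
def risk_ratio(p1, p0):
    """Risk ratio comparing outcome probability p1 (exposed) to p0 (unexposed)."""
    return p1 / p0

def odds_ratio(p1, p0):
    """Odds ratio for the same comparison: odds(p1) / odds(p0)."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Rare outcome (well under 10% in both groups): OR ~ RR.
rr_rare = risk_ratio(0.04, 0.02)    # 2.0
or_rare = odds_ratio(0.04, 0.02)    # ~2.04, close to the RR

# Common outcome: the rare-disease approximation breaks down.
rr_common = risk_ratio(0.40, 0.20)  # still 2.0
or_common = odds_ratio(0.40, 0.20)  # ~2.67, far from the RR

print(rr_rare, or_rare, rr_common, or_common)
```

This is why logistic-model ORs are usually read as RRs only when the marginal outcome probability is small, and why the talk's comparison of approximate and exact estimators focuses on the <10% setting.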
March 18: Gyeongcheol Cho (Psychology, McGill)
Title:
Abstract:
April 1: Raymond Luong & Mairead Shaw (Psychology, McGill)
Title:
Abstract:
April 29: Josh Starr (Psychology, McGill)
Title:
Abstract: