ZOOM link:
https://us06web.zoom.us/j/2183883077?pwd=1OtoTZ4mobWrHaq6aigMi3brQUtwKm.1
Meeting ID: 218 388 3077
Passcode: 594408
Note: Keynote (45 minutes), other presentations (30 minutes each)
WELCOME AND OPENING REMARKS (Minh Nguyen, FAU) --- 8AM - 8.15AM (Eastern US time, ET)
Keynote Speaker: Quan Le (Harvard University) --- 8.15AM - 9AM ET
Title: The Technological Origins of Politically Independent Newspapers
Speakers:
Krishna Pothugunta (University of Notre Dame) --- 9AM - 9.30AM ET
Title: Digital Representations of Cross-Cultural Value Structures: Evaluating Consensus Consistency across Inference Settings
Abstract: Large language models are increasingly used to represent culturally diverse populations, yet existing evaluations often rely on aggregate distributional similarity rather than shared cultural structure. Our study examines how inference configuration shapes the cultural representativeness of LLM outputs. Using Cultural Consensus Theory, we evaluate whether LLM-generated responses reflect the consensus patterns observed in human populations. We use Wave 7 of the World Values Survey to construct country-level persona profiles and prompt Llama3.1:70B and Qwen2.5VL:72B across three cultural domains and three temperature settings. Our findings reveal that temperature does shape cultural heterogeneity, but its effects are domain- and model-specific. Both models often produce high internal coherence while remaining weakly aligned with human cultural consensus, suggesting culturally displaced agreement. Our study contributes to information systems research by positioning inference configuration as a domain-aware governance mechanism for evaluating culturally situated AI systems.
Tri Minh Phan (University of Basel) --- 9.30AM - 10AM ET
Title: When Risk Is Selectively Observed: Endogenous Disclosure and the Measurement of Firm Cyber Risk
Abstract: Current firm-level cyber risk measures rely on textual disclosures or focus primarily on breach likelihood, overlooking the economic impact of cyber incidents: breach costs. However, breach cost data suffer from severe selection bias in reporting. This paper develops a novel firm-level cyber risk measure defined as the expected cost of a cyber breach. We document strong evidence of selection bias: unobservables that increase the propensity of reporting breach costs are associated with lower realized costs. We propose a bias-corrected estimation approach to recover consistent estimates of breach costs and construct a cyber risk measure based on these corrected expectations. We find that this measure commands a positive risk premium: firms with higher cyber risk earn higher average stock returns. A trading strategy that longs the top decile and shorts the bottom decile generates abnormal returns of 50 basis points per month (value-weighted) and 58 basis points per month (equally weighted) relative to the Fama-French six-factor model.
Nahian Fariha (Independent Researcher, Bangladesh) --- 10AM - 10.30AM ET
Title: ShopHallu: A Benchmark for Hallucination Detection in LLM-Generated Influencer Reviews
Abstract: Large Language Models (LLMs) are extensively used by influencers to generate product reviews and promotional content, but their tendency to produce hallucinated product claims that are not supported by real user experience or product information can threaten consumer trust and mislead users in sensitive or necessity-driven purchase decisions. Existing work primarily focuses on domains such as question answering, summarization, and factual text generation, with limited attention to influencer-style marketing content where persuasive language may amplify these hallucinations. We introduce ShopHallu, a benchmark pipeline and dataset consisting of 1,725 influencer-style paragraphs generated using GPT-4o mini under three prompting regimes: strict, neutral, and persuasive. These paragraphs are grounded in and paired with human-written Amazon reviews, enabling human annotation for precise hallucination identification. The dataset covers three sensitivity-varied product categories: Beauty, Accessories, and Grocery, with approximately 575 samples per group to ensure balanced representation. Each paragraph is assigned a multi-level coarse-grained factual support label: supported (1), hallucinated (0), or neutral (2), followed by a fine-grained taxonomy of four hallucination types: fabricated features, exaggerated performance, contradictions, and non-verifiable emotional claims. For rigorous evaluation, we design a structured benchmark pipeline to quantify hallucination behavior across different prompting strategies. Hallucination rate is computed as the proportion of hallucinated claims among all verifiable instances, excluding neutral cases, while classification performance is evaluated using precision, recall, and F1-score. Annotation consistency is assessed through inter-annotator agreement using Cohen’s Kappa.
Our results show that approximately 22% of generated paragraphs are hallucinated, with persuasive prompting significantly increasing hallucination rates as the model prioritizes marketing appeal over factual grounding. Contradiction-based hallucinations are most prevalent, and annotation achieves substantial agreement (κ = 0.77). ShopHallu establishes a benchmark for systematic evaluation, providing an evidence-grounded dataset, controlled prompting framework, and fine-grained annotations to support reproducible analysis and development of reliable detection methods in commercial content generation.
Sajjad Ahmad (University of Agriculture, Faisalabad, Pakistan) --- 10.30AM - 11AM ET
Title: VisionCare AI
Abstract: VisionCare AI is an intelligent medical image analysis system designed for early detection and monitoring of conjunctiva-related eye conditions. The system integrates deep learning-based computer vision models (YOLO for detection and segmentation) with multimodal reasoning using large language models to provide structured clinical insights. It supports region-of-interest-based analysis, visual explanations, and automated reporting, aiming to assist in clinical decision-making and patient monitoring. The system is built as an end-to-end pipeline, combining image processing, model inference, and AI interpretation.
BREAK --- 11AM - 11.15AM ET
Diu Tran (Independent Researcher, Vietnam) --- 11.15AM - 11.45AM ET
Title: Factor Investing and Alpha Research in the Age of AI
Abstract: This talk provides an overview of factor investing and alpha research in the context of modern quantitative models, with a focus on their integration into portfolio construction and risk management. It explains how alpha factors are used as predictive signals for asset returns and how risk factor models capture systematic sources of market risk. The presentation highlights how machine learning and artificial intelligence techniques enhance these modeling approaches by improving factor discovery and capturing complex, nonlinear relationships in financial data. It also describes how these models are combined within a portfolio optimization framework that balances expected returns against risk exposures and constraints. Overall, the talk emphasizes the shift from traditional linear approaches toward more flexible, data-driven modeling frameworks in quantitative finance for building robust, return-seeking, and risk-aware portfolios.
Harshal Sanghvi (Florida Atlantic University) --- 11.45AM - 12.15PM ET
Title: Ophthalmology in the Age of Artificial Intelligence
Abstract: Artificial Intelligence (AI) is rapidly transforming ophthalmology by enhancing diagnostic accuracy, improving clinical workflow efficiency, and enabling predictive and personalized eye care. Ophthalmology is uniquely positioned for AI integration due to its reliance on high-resolution imaging modalities, including fundus photography, optical coherence tomography (OCT), visual fields, and angiographic imaging. Recent advancements in machine learning and deep learning have enabled automated detection and classification of numerous ocular diseases, including diabetic retinopathy, glaucoma, age-related macular degeneration, retinal vascular disorders, and corneal pathologies. Beyond diagnostics, AI is increasingly being utilized for disease progression prediction, treatment response monitoring, surgical planning, teleophthalmology, and large-scale population screening. AI-driven systems also have the potential to optimize administrative and clinical workflows through automated documentation, image interpretation, referral triage, and clinical decision support systems. Furthermore, the integration of multimodal datasets, including imaging, electronic health records, genomics, and wearable device data, may facilitate the development of personalized ophthalmic care models. Despite its transformative potential, several challenges remain, including algorithmic bias, data privacy concerns, limited generalizability across diverse populations, ethical considerations, regulatory barriers, and the need for physician oversight. The successful implementation of AI in ophthalmology requires collaboration among clinicians, researchers, engineers, and policymakers to ensure safe, equitable, and clinically meaningful adoption. 
This presentation explores the evolving role of AI in ophthalmology, highlighting current applications, emerging technologies, research opportunities, workflow integration, and future directions that may redefine the delivery of eye care in the coming decades.
Luan Pham (RMIT and Microsoft) --- 12.15PM - 12.45PM ET
Title: EventADL: Open-Box Anomaly Detection and Localization Framework for Events in Cloud-Based Service Systems
Abstract: Anomaly detection and localization (ADL) is critical for maintaining high reliability and availability in cloud-based systems. Recent ADL developments focus on metric and log data, leaving event data relatively unexplored. To address this gap, we propose EventADL, the first open-box ADL framework for events in cloud-based service systems. To motivate the design of our framework, we conduct a systematic analysis of 520 real-world incidents, and provide several important insights into how anomalies and their root causes manifest through event data. The EventADL framework has three phases: offline training, online anomaly detection, and anomaly localization. During the training phase, EventADL learns Event Semantic Patterns (ESPs) for pointwise anomaly detection and Event Frequency Patterns (EFPs) for frequency-based anomaly detection using unlabelled historical data. In the online anomaly detection phase, any data in the event stream that deviates significantly from these patterns is identified as anomalous. In the localization phase, EventADL constructs an Intervention Graph that models the relationships between recent interventions (i.e., system changes visible through events) and the detected anomalies for automatic root cause localization. The framework is designed to operate efficiently without labeled data, and to produce interpretable results. Our evaluation on three real cloud-based service systems and two real-world incidents, compared against ten state-of-the-art baselines, demonstrates that EventADL outperforms existing methods, achieving F1-scores of at least 90% for anomaly detection and 100% top-3 accuracy in anomaly localization.
Trinh-Nguyen Phan (University of British Columbia) --- 12.45PM - 1.15PM ET
Title: From Identity to Trust: A Review and Research Agenda for Self-sovereign Identity and Verifiable Credentials for the Digital Economy
Abstract: Self-sovereign identity (SSI) and verifiable credential (VC) technologies are increasingly drawing interest from research and business communities. However, confusion remains around key terms—SSI, VC, and decentralized identity—and how they connect. This paper offers a critical literature review of SSI research and highlights three key insights. First, it traces the history of identity and credential systems to explain why VCs are gaining importance. Second, it clarifies core terminology. Third, it identifies three main gaps: knowledge (inconsistent definitions), implementation (few real-world examples versus broad proposals and potential applications), and inspiration (technical goals versus social and institutional challenges). These gaps present significant opportunities for future research and innovation in business.
Muhammad Talal Khan (University of Hawaii at Manoa) --- 1.15PM - 1.45PM ET
Title: Artificial Intelligence and the Cost of Debt
Abstract: We study how artificial intelligence adoption affects corporate borrowing costs using firm-level AI intensity measures and syndicated loan data from 2000 to 2022. AI-intensive firms pay loan spreads that are 3.5 to 21 percent higher than comparable non-AI firms. These higher spreads are accompanied by contract terms consistent with greater informational opacity, particularly heightened monitoring costs post-origination. The premium is attenuated when firms borrow from AI-specialized lenders and is concentrated among firms with weaker pre-existing information environments. Our results are robust to matching-based reweighting and an instrumental-variable approach using citations to a foundational machine learning text.
CLOSING REMARKS (Minh Nguyen, FAU) --- 1.45PM - 2PM ET