Welcome to SCAI 2025
14th International Conference on Soft Computing, Artificial Intelligence and Applications (SCAI 2025)
November 22 ~ 23, 2025, London, United Kingdom
Privacy-preserving Multimodal News Recommendation Through Federated Learning
Mehdi Khalaj, Shahrzad Golestani Najafabadi, and Julita Vassileva, Department of Computer Science, University of Saskatchewan, Canada
ABSTRACT
Personalized news recommendation systems (PNR) have emerged as a solution to information overload by predicting and suggesting news items tailored to individual user interests. However, traditional PNR systems face several challenges, including an overreliance on textual content, common neglect of short-term user interests, and significant privacy concerns due to centralized data storage. This paper addresses these issues by introducing a novel multimodal federated learning-based approach for news recommendation. First, it integrates both textual and visual features of news items using a multimodal model, enabling a more comprehensive representation of content. Second, it employs a time-aware model that balances users’ long-term and short-term interests through multi-head self-attention networks, improving recommendation accuracy. Finally, to enhance privacy, a federated learning framework is implemented, enabling collaborative model training without sharing user data. The framework divides the recommendation model into a large server-maintained news model and a lightweight user model shared between the server and clients. Each client requests news representations (vectors) and a user model from the central server, computes gradients on its local data, and sends the locally computed gradients to the server for aggregation. The central server aggregates gradients to update the global user model and news model. The server then uses the updated news model to infer news representations. To further safeguard user privacy, a secure aggregation algorithm based on Shamir’s secret sharing is employed. Experiments on a real-world news dataset demonstrate strong performance compared to existing systems, representing a significant advancement in privacy-preserving personalized news recommendation.
Keywords
Personalized News Recommendations, Federated Learning, Privacy Protection, Multimodal Learning, Secure Multi-Party Computation.
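The secure aggregation step can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the field modulus, threshold, and scaled-integer gradient values are hypothetical. It shows the property the abstract relies on, namely that Shamir shares are additively homomorphic, so shareholders can sum shares locally and the server reconstructs only the aggregate gradient, never an individual client's value.

```python
import random

PRIME = 2**31 - 1  # illustrative field modulus, large enough for the summed gradients

def make_shares(secret, k, n):
    """Split `secret` into n Shamir shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

# Three clients each hold one scaled-integer gradient coordinate.
gradients = [120, 345, 678]
k, n = 2, 3
all_shares = [make_shares(g, k, n) for g in gradients]

# Each shareholder sums the shares it received; reconstruction then yields
# only the aggregate gradient, as in secure federated averaging.
summed = [(x + 1, sum(s[x][1] for s in all_shares) % PRIME) for x in range(n)]
agg = reconstruct(summed[:k])
print(agg)  # 1143 == 120 + 345 + 678
```

Because each share is a point on a random polynomial, any k-1 shares reveal nothing about an individual gradient, yet sums of shares are shares of the sum.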
Designing And Demonstrating The Individuation Wave Analysis (Iwa) Framework: A Transdisciplinary Approach Augmented By Generative Ai
Jorge Amílcar Vizcaíno, People and Change Consultant, BSc Psychology, Buenos Aires, Argentina
ABSTRACT
A crisis of meaning in the modern era is marked by constant change, existential anxiety, identity confusion, social isolation, and the fragmentation of traditional cultural myths and life narratives. This situation challenges the innate process of individuation towards self-realization, as Carl Jung defined it. The present study introduces an innovative solution to support individuals navigating their lifelong journey: Individuation Wave Analysis (IWA). Designed from scratch with DeepSeek AI, IWA is a transdisciplinary framework that operationalizes the integration of two distinct theoretical approaches: the Elliott Wave Principle (financial market analysis) and Jungian Archetypal Theory (depth psychology). IWA thus provides a hermeneutic methodology based on a 5-3 wave model and its respective dominant archetype/shadow patterns. The empirical evidence is based on a set of historical and modern figures (Carl Jung, Warren Buffett, etc.), processed and demonstrated by DeepSeek AI. The paper addresses the benefits of the IWA framework in enabling individuals to make more conscious and meaningful forward-looking decisions in their lifelong individuation journey, a critical analysis of its limitations, potential criticisms of subjective bias, and the next steps required for a research agenda and scientific validation.
Keywords
Individuation, Elliott Wave Principle, Jungian Archetypal Theory, Psychological Education, Generative AI, DeepSeek AI.
Minimal Variance Allocation of Rights and Applications to Blockchain Consensus
Maxime Reynouard, Paris Dauphine University - PSL and Nomadic Labs, Paris, France
ABSTRACT
Randomness plays a critical role in distributed systems and blockchain technology, facilitating tasks such as load balancing, leader election, and fault tolerance, which enhance system scalability and resilience. However, while randomness is fundamental to certain systems, it can also pose liabilities, including unpredictability and security risks. In this paper, we focus on the allocation of rights, a shared random process in many blockchains, particularly in the allocation of block proposal or validation rights. We propose two new allocation protocols to reduce the randomness of this process and discuss the challenges of achieving minimal variance in this allocation. Beyond improving allocation itself, these protocols offer an alternative approach to secure random seed generation by reducing its criticality. In fact, this serves as a complementary solution to existing measures that focus on preventing random seed manipulation. In addition, we evaluate the effectiveness of our proposal and analyze potential countermeasures that attackers might employ. Our approach improves the resilience and security of blockchain networks, addressing concerns associated with randomness in consensus protocols.
Keywords
Blockchain, Consensus, Randomness, Security, Apportionment, Sybil Attacks, Allocation.
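A standard way to cut the variance of proportional allocation, sketched below, is largest-remainder (Hamilton) apportionment: every party receives either the floor or the ceiling of its exact quota, so per-party deviation from the ideal share is strictly below one slot. This is a textbook method offered for intuition about the keyword "Apportionment", not necessarily either of the paper's two protocols; the stakes and slot count are made up.

```python
def largest_remainder(stakes, rights):
    """Deterministically apportion `rights` slots proportional to stake.

    Each party gets floor(quota) or floor(quota)+1, so the deviation from
    the ideal fractional share is always below one slot.
    """
    total = sum(stakes)
    quotas = [s * rights / total for s in stakes]
    alloc = [int(q) for q in quotas]            # guaranteed floor of each quota
    leftover = rights - sum(alloc)
    # hand out the remaining slots by largest fractional remainder
    order = sorted(range(len(stakes)), key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc

stakes = [50, 30, 15, 5]  # hypothetical validator stakes
alloc = largest_remainder(stakes, 10)
print(alloc)  # [5, 3, 2, 0]
```

Compared with sampling each of the 10 slots independently by stake, the deterministic apportionment removes all variance beyond the unavoidable rounding.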
Mathematical Modelling of Social Engineering Attacks in Multilingual Digital Communities: An Educational Framework for Cybersecurity Awareness
Haleema Azra and Iffath Zeeshan, American College of Education, Indianapolis, USA
ABSTRACT
Social engineering attacks exploit human psychology rather than technical vulnerabilities, making them particularly effective across diverse digital communities. This mixed-methods research investigates how language barriers and cultural differences affect susceptibility to cyber threats in multilingual populations through mathematical modelling and statistical analysis. Using surveys, controlled experiments, and longitudinal data collection across three diverse metropolitan areas, this study develops predictive models to understand vulnerability patterns and creates culturally responsive educational frameworks for cybersecurity awareness. The research combines probability theory, statistical modelling, and behavioural analysis to quantify risk factors and protective mechanisms within multilingual digital communities. Findings contribute to both cybersecurity defence strategies and mathematics education by demonstrating real-world applications of statistical analysis in protecting vulnerable populations.
Keywords
social engineering, multilingual communities, mathematical modelling, cybersecurity education, cultural responsiveness, statistical analysis.
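As a toy illustration of the kind of probabilistic vulnerability model the abstract describes, the sketch below scores susceptibility with a logistic function of two predictors. The coefficients and the predictors themselves (language-proficiency gap, prior awareness training) are hypothetical placeholders, not fitted values from the study.

```python
import math

def susceptibility(lang_gap, trained, b0=-1.0, b1=0.8, b2=-1.5):
    """Hypothetical logistic risk model: P(user falls for a social
    engineering attack) given a language-proficiency gap score and
    whether the user received awareness training."""
    z = b0 + b1 * lang_gap + b2 * (1 if trained else 0)
    return 1 / (1 + math.exp(-z))

p_untrained = susceptibility(lang_gap=2.0, trained=False)
p_trained = susceptibility(lang_gap=2.0, trained=True)
print(round(p_untrained, 3), round(p_trained, 3))
```

In a real study the coefficients would be estimated from survey and experiment data; the point here is only the functional form that lets risk factors and protective mechanisms be quantified on one scale.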
The Role of Pediatric Registered Nurses in Artificial Intelligence-Assisted Surgery: Pediatric Nursing Care and Artificial Intelligence: Innovations, Implications, and Integrations
Beisa Ramulic Vallejo1, Eduardo Enrique Vallejo2, Lea Zabkar3, Adlah Mohammad4, Daniel Norris5, 1University Diploma in Organizational Work, El Clavelito, Los Angeles, USA, 2AXJ, Los Angeles, USA, 3FRRE 6 LLC, Investigative AI, Koper, Slovenia, 4AXJ International, Los Angeles, USA, 5Software Developer, Phoenix, Arizona, USA
ABSTRACT
The integration of Artificial Intelligence (AI) into pediatric nursing care is transforming clinical practice, decision-making, and patient outcomes. This article explores the role of AI technologies in pediatric nursing, examining their benefits, limitations, and ethical implications. Special attention is given to clinical decision support systems (CDSS), robotic surgery, remote patient monitoring, and simulation-based education. As AI systems evolve, pediatric nurses must adapt by acquiring new competencies, ensuring that care remains holistic, empathetic, and ethically sound. Artificial Intelligence offers opportunities to reduce patients' suffering during operations, and it provides more information about health care, surgical procedures, robotics, and quality of life. In studying Topol E., I found that AI is something we are all studying; Guerin G. said that we explore AI for humanity; McBride S. has drawn very interesting conclusions from a nursing perspective; the American PeriOperative Registered Nurses (AOARN) said that we are all in very good Emanti; and Sandhu S. argues for very fast-growing education with AI. The study methodology was a systematic literature review, with comprehensive searches of the scientific literature using databases such as PubMed, CINAHL, Cochrane Library, and EMBASE.
Keywords
Artificial Intelligence (AI), Pediatric Nursing Care, Children, Health.
Ontoepistemological Limitations of Computational Intelligence Arising from Electronic Hardware: Binary Substrate Problem, Possible Solutions and Theories
Adem Bilgin, Chair of Association of Digital Ecosystem Governance and Development Research, Turkiye
ABSTRACT
Contemporary advances in computational intelligence (CI) remain constrained by a fundamental ontological limitation: the binary substrate of modern electronic hardware. This paper formulates and investigates the Binary Substrate Problem, hypothesizing that intelligence—defined as semantically emergent, ethically self-organizing, and ontologically plastic cognition—cannot arise fully from binary, voltage-threshold-based architectures. Through a comparative simulation framework involving digital neural networks, spiking neuromorphic systems, and quantum neural models, we demonstrate that only non-binary substrates support the emergence of semantic coherence, moral ambiguity resolution, and category redefinition. To address this limitation, we propose a shift from binary logic gates to substrate-sensitive computational architectures capable of continuous, field-based, and non-discrete operations. We outline two foundational theories to guide this transition: the Special Theory of Non-Binary Electronics (STNBE), which models cognition as field-based integration of charge, frequency, and semantic phase; and the General Theory of Non-Binary Electronics (GTNBE), which introduces semantic permittivity into charge flow equations, enabling topological and axiological computation. A hypothetical experimental design—the Dual Pulse Differentiation Test—is introduced to quantify semantic fidelity using a new substrate-based constant γ*, analogous to the Lorentz factor in relativity. Our findings and theoretical framework suggest a future in which intelligence is co-designed with its physical substrate, moving beyond the dichotomy of zero and one toward a post-binary ontology of machine cognition.
Keywords
Computational Intelligence, Ontoepistemological Limitation in Neuromorphic Engineering, Non-Binary Electronic Engineering, Neuromorphic and Quantum Intelligence, Ontological Plasticity, Semantic-Ethical Computation
On Secret-key Agreement Capacity Using the Linear Deterministic Model for 6G
Mustafa El-Halabi, Community College of Qatar
ABSTRACT
The open architecture of 6G Wireless Networks and Internet-of-Everything (IoE) systems makes them susceptible to security breaches and attacks [1],[2]. Communications are usually protected with cryptographic protocols that depend on secret-key agreements among a large number of nodes. Accordingly, a significant body of research focuses on simplifying secret-key management and lowering the energy required to support it. Providing security at the physical layer has been recently suggested as a recourse to leverage cryptography in such node-prolific environments. In particular, exploiting the noisiness and the channel state information (CSI) of the transmission medium, coding schemes are devised to generate a secret-key between the transmitter and the designated legitimate receiver. Considering the general model of a state-dependent wiretap channel, where the channel state sequence is completely known to the transmitter as side information ahead of transmission, it is shown that a near-optimal secure coding scheme can be developed to extract a common bit sequence that can be used as a secret key. This scheme is based on a combination of Wiretap codes [3] and dirty-paper codes [4], and uses the linear deterministic model (LDM) [5] as an approximation tool to suggest a way of encoding that achieves the upper limit on the size of that bit sequence, the secret-key capacity, to within 1/2 bit.
Keywords
6G, Internet of Everything, Physical-layer Security, Wiretap Channel, Dirty Paper Coding, Channel State Information, Secret-key Agreement.
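The linear deterministic model mentioned above abstracts a noisy channel as delivering only the top few bits of the transmitted signal. The toy sketch below (not the paper's coding scheme; the bit pattern and levels are made up) shows the basic LDM accounting: bits that reach the legitimate receiver but lie below the eavesdropper's observation level are candidate secret-key material, so roughly max(n_B - n_E, 0) bits per use are extractable.

```python
def ldm_secret_bits(x_bits, n_b, n_e):
    """In the LDM, the legitimate receiver observes the top n_b bit levels
    of the input and the eavesdropper the top n_e. The bits seen by the
    receiver but not the eavesdropper can serve as shared key material."""
    received = x_bits[:n_b]    # Bob's observation
    overheard = x_bits[:n_e]   # Eve's observation
    return received[len(overheard):]

x = [1, 0, 1, 1, 0, 1, 0, 0]          # one channel use, MSB first
key = ldm_secret_bits(x, n_b=5, n_e=2)
print(key)  # [1, 1, 0]: the 3 = 5 - 2 levels Bob sees but Eve does not
```

The full result in the abstract is subtler (channel state known at the transmitter, dirty-paper coding), but this level-counting picture is why the LDM makes capacity bounds tractable to within a constant gap.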
Solar Panel Efficiency Modelling: Mathematical Optimization of Solar Panel Angles and Positioning Based on Geographic Location and Seasonal Change
Haleema Azra and Iffath Zeeshan, American College of Education, USA
ABSTRACT
Solar energy optimization through strategic panel positioning represents a critical pathway for maximizing renewable energy efficiency. This comprehensive study presents mathematical models and optimization frameworks for determining optimal solar panel tilt and azimuth angles based on geographic location and seasonal variations. Through analysis of solar geometry, irradiance modeling, and dynamic positioning strategies, this research demonstrates that seasonal adjustment of panel angles can yield up to 9.91% increased energy output compared to fixed installations. The study employs both isotropic and anisotropic radiation models to calculate total solar irradiance on tilted surfaces, incorporating direct beam, diffuse, and reflected radiation components. A mathematical optimization framework is developed to maximize energy capture over the tilt (β) and azimuth (γ) angles, subject to practical constraints on both. Case studies across multiple geographic regions, including Pakistan, Turkey, and global datasets, validate the effectiveness of dynamic optimization approaches. Machine learning models utilizing PVGIS data achieved 99.27% accuracy in predicting optimal angles, while seasonal adjustment strategies demonstrated significant improvements over fixed-angle installations, particularly in temperate regions. The research concludes that mathematical optimization of solar panel positioning based on geographic and seasonal data substantially enhances energy efficiency, with seasonal adjustment models offering an optimal balance between performance gains and implementation complexity. These findings contribute to the advancement of sustainable energy systems and provide practical guidance for solar installation optimization.
Keywords
Solar panel optimization, Photovoltaic efficiency, Tilt angle optimization, Azimuth angle positioning, Solar irradiance modeling, Renewable energy systems, Solar geometry, Panel positioning strategies
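A drastically simplified version of the tilt optimization can be sketched with Cooper's declination formula and a beam-only, solar-noon model; this is a toy stand-in for the study's full isotropic/anisotropic irradiance models, and the latitude is an illustrative choice. Even this crude model recovers the classic rule of thumb that the best fixed tilt is close to the site latitude.

```python
import math

def declination(day):
    """Solar declination in degrees for a day of year (Cooper's formula)."""
    return 23.45 * math.sin(math.radians(360 * (284 + day) / 365))

def annual_noon_irradiance(lat, tilt):
    """Relative clear-sky beam energy collected at solar noon over a year
    by an equator-facing panel: cosine of the noon incidence angle,
    clipped at zero (sun behind the panel contributes nothing)."""
    total = 0.0
    for day in range(1, 366):
        incidence = abs(lat - declination(day) - tilt)
        total += max(0.0, math.cos(math.radians(incidence)))
    return total

lat = 31.5  # illustrative latitude, roughly that of Lahore, Pakistan
best_tilt = max(range(0, 91), key=lambda b: annual_noon_irradiance(lat, b))
print(best_tilt)  # close to the latitude, as the rule of thumb predicts
```

The study's seasonal-adjustment result corresponds to re-running the search per season instead of per year, which shifts the optimum tilt lower in summer and higher in winter.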
The Impact of Wireless Channel Errors on the Visual Quality of Learning-based Image Coding
Ablah AlAmri, Charith Abhayaratne, The University of Sheffield, United Kingdom.
ABSTRACT
Recently, learning-based image codecs have improved compression, leading to excellent performance in bitrate reduction. However, their performance when transmitted over lossy channels has not been well studied. This paper investigates how these learning-based image codecs perform under lossy channel transmission conditions. For this, we set up an experimental model comprising an encoder, a channel coding module, a channel simulation module, and the decoder to evaluate visual quality under various channel conditions. We compare the performance of several AI models with standard JPEG under various channel conditions and across various bitrates. According to the experimental results, under clean conditions the learning-based codecs (LBCs) used in the experiments outperform JPEG in terms of PSNR and MS-SSIM. However, in a noisy channel these codecs show significant degradation in PSNR and MS-SSIM under low-SNR conditions (especially below 12 dB SNR), whereas JPEG is more robust to channel errors and shows a more gradual degradation in quality as the SNR decreases.
Keywords
Channel errors, Error robustness, JPEG AI, Visual Quality, Learning-Based Image Coding, AI-Based Image Coding.
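The channel-simulation step can be sketched as a binary symmetric channel plus a PSNR measurement. For simplicity the sketch below corrupts a random stand-in pixel buffer rather than an actual codec bitstream (where, as the abstract reports, bit errors are far more damaging for learned codecs); the image size and flip probabilities are arbitrary.

```python
import math, random

def bsc(payload, flip_p, rng):
    """Binary symmetric channel: flip each bit independently with prob flip_p."""
    out = bytearray()
    for byte in payload:
        for bit in range(8):
            if rng.random() < flip_p:
                byte ^= 1 << bit
        out.append(byte)
    return bytes(out)

def psnr(a, b):
    """Peak signal-to-noise ratio for 8-bit samples."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)

rng = random.Random(0)
image = bytes(rng.randrange(256) for _ in range(4096))  # stand-in 64x64 image
mild = psnr(image, bsc(image, 1e-3, rng))    # high-SNR channel
harsh = psnr(image, bsc(image, 1e-1, rng))   # low-SNR channel
print(round(mild, 1), round(harsh, 1))
```

The same harness, with the raw buffer replaced by an encoded bitstream and the decoder applied after the channel, is how robustness curves like those in the paper are produced.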
Physics-informed Neural Networks for Biomedical Engineering Applications
David Isaac Nyirenda, Department of Biomedical Engineering, Malawi University of Science and Technology
ABSTRACT
Physics-Informed Neural Networks (PINNs) have emerged as a robust framework for solving partial differential equations (PDEs) by integrating physical laws directly into the training process of neural networks. This paper explores the application of PINNs in Biomedical Engineering, with emphasis on modeling heat conduction in biological tissues—a critical problem in hyperthermia cancer treatment. By incorporating Fourier feature embeddings and adaptive loss reweighting strategies, the proposed model accurately learns the spatio-temporal temperature distribution within a tumor domain. Experimental results demonstrate that the PINN achieves high prediction accuracy and smooth generalization across space-time slices, validating its effectiveness for solving biologically relevant PDEs. Quantitative metrics such as MAE, RMSE, and relative L2 error support the model’s reliability. This work highlights the potential of PINNs for biomedical simulations where traditional numerical methods face challenges due to irregular geometries and limited data.
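The physics-informed loss at the heart of a PINN can be illustrated without a neural network: evaluate the PDE residual of a candidate solution at collocation points. The sketch below uses the 1D heat equation u_t = α u_xx with finite differences standing in for automatic differentiation; the diffusivity and grid are illustrative, and a real PINN would replace the closed-form u with a trained network.

```python
import math

ALPHA = 0.1  # illustrative thermal diffusivity

def u(x, t):
    """Candidate solution; exp(-ALPHA*pi^2*t)*sin(pi*x) solves u_t = ALPHA*u_xx."""
    return math.exp(-ALPHA * math.pi ** 2 * t) * math.sin(math.pi * x)

def pde_residual(x, t, h=1e-4):
    """Physics-informed residual r = u_t - ALPHA * u_xx via central differences.
    In a PINN, u is a neural network, derivatives come from autodiff, and
    the mean squared residual is added to the training loss."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return u_t - ALPHA * u_xx

# Mean squared residual over a grid of collocation points:
# essentially zero, since u is an exact solution of the PDE.
pts = [(i / 10, j / 10) for i in range(1, 10) for j in range(1, 10)]
loss = sum(pde_residual(x, t) ** 2 for x, t in pts) / len(pts)
print(loss)
```

Boundary- and initial-condition terms are added to this residual loss in the same way, which is how the tumor-domain temperature field in the paper is fitted without labeled interior data.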
Approaches to Determining the Features of Data to Evaluate the Emotional Condition of Humans
Abdurakhmon Kurbanov, Doctoral student of the Jizzakh branch of the National University of Uzbekistan named after Mirzo Ulugbek
ABSTRACT
The assessment of human emotional states is one of the important areas of affective computing and human-computer interaction (HCI), which combines psychology, neuroscience, and artificial intelligence. Emotions are an integral part of human life, and their automatic detection is widely used in areas such as medicine (e.g., depression diagnosis), education (evaluating students' attention), marketing (consumer reactions), and robotics (social robots). Identifying data features for emotional state assessment is a central step in the process, as it transforms raw signals (EEG, ECG, facial expressions, voice, etc.) into a representation that can be processed by machine learning models. These methods help to identify emotions (joy, sadness, anger, fear) or their dimensions (valence: positive/negative; arousal: high/low).
Keywords
Affective computing, feature extraction, spatial domain, early fusion.
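The feature-extraction step described above can be sketched with a few common time-domain features computed over one signal window. The synthetic 5 Hz tone is a stand-in for a real EEG/ECG/voice segment; the feature set is a minimal illustrative choice, not the paper's.

```python
import math, statistics

def time_features(signal):
    """Basic time-domain features often computed before emotion classification:
    mean level, variability, and zero-crossing rate (a crude frequency cue)."""
    zero_crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return {
        "mean": statistics.fmean(signal),
        "std": statistics.stdev(signal),
        "zcr": zero_crossings / (len(signal) - 1),
    }

# Synthetic stand-in for one one-second signal window: a 5 Hz tone at 100 Hz.
fs, f = 100, 5
window = [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]
feats = time_features(window)
print(feats["zcr"])  # about 0.1: a 5 Hz tone crosses zero ~10 times per second
```

Vectors of such features, possibly fused across modalities (the "early fusion" of the keywords), are what a classifier maps onto valence/arousal or discrete emotion labels.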
A Cognitively Inspired Framework for Automated Football Commentary
Gloria Virginia, Jeffrey Susilo, and Aditya Wikan Mahastama, Informatics Department, Universitas Kristen Duta Wacana, Yogyakarta, Indonesia
ABSTRACT
This study develops an automated football commentary system by integrating computer vision, rule-based reasoning, and natural language generation within a cognitively inspired architecture. The system design follows a modular approach. First, a YOLOv8-based object detection model identifies players, goalkeepers, referees, and the ball in real time. Team classification is then performed using clustering of dominant jersey colours, while event recognition is achieved through a rule-based reasoning module incorporating spatial, temporal, and contextual conditions such as ball possession and dead-ball states. Finally, detected events are converted into expressive, sportscaster-style commentary through a generative language model. Experimental evaluation was conducted using real football match videos, and subjective testing involved 30 football enthusiasts who rated the system's accuracy and realism using a five-point Likert scale. Results demonstrated high object detection precision, reliable team classification, and accurate identification of key match events such as kick-offs, penalties, and goal kicks.
Keywords
Cognitive AI, Computer Vision, Rule-Based Reasoning, Natural Language Generation, Automated Football Commentary
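The jersey-colour team classification step amounts to clustering dominant RGB colours with k = 2. The sketch below is a minimal stdlib k-means on made-up colour samples (the paper presumably works on pixels cropped from detected players); the colours and the deterministic initialisation are illustrative assumptions.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means for clustering dominant jersey colours (RGB tuples).
    Deterministic init: first and last points, assumed far apart."""
    centers = [points[0], points[-1]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# Hypothetical dominant colours sampled from player crops: reds vs blues.
reds = [(200 + i, 30, 40) for i in range(10)]
blues = [(25, 40, 180 + i) for i in range(10)]
centers = kmeans(reds + blues, k=2)
print(sorted(c[0] for c in centers))  # one red-dominant centre, one blue-dominant
```

At inference time, each newly detected player is assigned to the team whose cluster centre is nearest to the player's dominant colour.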
Measuring the Theoretical Limit of Branch Prediction: An Information-Theoretic Ceiling
Ximing Zhang1, Wenmao Zhou2, Rongqing Hu, 1,2School of Information Science & Engineering, Lanzhou University, Lanzhou, China
ABSTRACT
Branch prediction remains critical to high-performance processors, yet persistent mispredictions continue to cap efficiency. This work introduces a formal framework to quantify the fundamental ceiling of predictability in program control flow. We decompose the gap between real and perfect predictors into two parts: the information gap, capturing the irreducible algorithmic randomness of control flow, and the model gap, reflecting the limitations of a given predictor. Our framework unifies two perspectives, a Kolmogorov-complexity-based algorithmic bound and a statistical bound from Fano’s inequality, and resolves the uncomputability of Kolmogorov complexity through a statistically rigorous ensemble-bootstrap method that yields confidence intervals for the information gap. We extend this framework with phase-aware analysis to address program non-stationarity and validate it empirically across predictor classes. Cycle-accurate simulations on SPEC CPU2006 using TAGE-SC-L and perceptron predictors reveal a substantial remaining model gap for many workloads, defining a concrete performance ceiling and offering data-driven guidance for future branch-prediction research.
Keywords
Branch Prediction, Computer Architecture, Algorithmic Information Theory, Kolmogorov Complexity, Fano’s Inequality, Performance Analysis, Program Phase Analysis
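The Fano side of the framework admits a compact illustration. For a binary branch, Fano's inequality says H(outcome | predictor state) <= H_b(P_e), where H_b is the binary entropy function, so the smallest achievable misprediction rate is the inverse binary entropy of the residual conditional entropy. The sketch below solves that inverse by bisection; the 0.5-bit entropy value is an arbitrary example, not a measurement from the paper.

```python
import math

def binary_entropy(p):
    """Binary entropy H_b(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def fano_min_error(cond_entropy):
    """Smallest misprediction rate consistent with Fano's inequality for a
    binary branch: find P_e in [0, 1/2] with H_b(P_e) = cond_entropy.
    H_b is increasing on [0, 1/2], so bisection suffices."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if binary_entropy(mid) < cond_entropy:
            lo = mid
        else:
            hi = mid
    return lo

# A branch whose outcome retains 0.5 bits of conditional entropy can never
# be predicted with error below about 11%, regardless of predictor design.
floor = fano_min_error(0.5)
print(round(floor, 3))
```

This is exactly the "information gap" intuition: no amount of predictor engineering closes the portion of the misprediction rate dictated by residual entropy.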
Semcom-synth: Differentially Private Synthetic Semiconductor Operations Text with Hybrid Generation and Open Audits
Youssef Alothman, Lalit Maurya & Mohamed Bader-El-Den, Computer science /University of Portsmouth, Portsmouth, United Kingdom
ABSTRACT
We introduce SemCom-Synth, a privacy-preserving synthetic corpus of semiconductor operations text produced by a dual-track pipeline: a language model trained with DP-SGD and a privacy-aware paraphraser. A risk-aware hybrid selector enforces k-syntheticity (string/edit and embedding distances), layered PII/domain-jargon redaction, and canary audits before release. Utility is assessed with Train-on-Synthetic, Test-on-Real (TSTR) across five tasks—root-cause taxonomy, actionability, role/shift, severity, and fault-span extraction—reporting macro-F1, macro-recall, AUROC, calibration (ECE), and asymmetric cost curves. We chart the privacy–utility frontier across ε∈{1,2,4,8} (δ≈10⁻⁵) and show that ≤10% few-shot fine-tuning on real text closes most of the remaining gap to full-real baselines. We release dataset shards, prompts, cards (Data/Privacy/Model), and an open audit harness (membership inference, nearest-neighbor, canary) to support reproducible assessment and safe reuse.
Keywords
Differential Privacy; Synthetic Text; Semiconductor Manufacturing; Root Cause Analysis; Benchmarking; Membership Inference.
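The string/edit-distance half of the k-syntheticity check can be sketched as follows: release a synthetic candidate only if it is at least a minimum edit distance from every real record. This is a simplified stand-in for the paper's selector (which also uses embedding distances, redaction, and canary audits), and the example sentences are invented.

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic two-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def k_synthetic_filter(candidates, real_corpus, min_dist):
    """Keep only candidates at least `min_dist` edits from every real record:
    a string-level stand-in for the k-syntheticity release gate."""
    return [c for c in candidates
            if all(edit_distance(c, r) >= min_dist for r in real_corpus)]

real = ["etch chamber 3 pressure fault on lot 42"]
cands = ["etch chamber 3 pressure fault on lot 47",    # near-copy: rejected
         "cvd tool reported abnormal rf reflected power"]
kept = k_synthetic_filter(cands, real, min_dist=5)
print(kept)
```

In the full pipeline this gate runs alongside embedding-space nearest-neighbour checks, so paraphrases that evade edit distance are still caught.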
Challenges and Applications of Large Language Models: A Comparison of GPT and DeepSeek Families of Models
ABSTRACT
Large Language Models (LLMs) are transforming AI across industries, but their development and deployment remain complex. This survey reviews 16 key challenges in building and using LLMs and examines how these challenges are addressed by two state-of-the-art models with unique approaches: OpenAI’s closed-source GPT-4o (May 2024 update) and DeepSeek-V3-0324 (March 2025), a large open-source Mixture-of-Experts model. Through this comparison, we showcase the trade-offs between closed-source models (robust safety, fine-tuned reliability) and open-source models (efficiency, adaptability). We also explore LLM applications across different domains (from chatbots and coding tools to healthcare and education), highlighting which model attributes are best suited for each use case. This article aims to guide AI researchers, developers, and decision-makers in understanding current LLM capabilities, limitations, and best practices.
Applying Statistical and Knowledge-based Approaches to Enhance the Adoption of Records Retention and Disposal Policies in Ugandan Higher Education Institutions
Betty Kyakuwa, Olivia Nambobi & Opar Ronald Oker Lakwit, Uganda Institute of Information and Communication Technology (UICT), Uganda
ABSTRACT
This study examines the application of statistical and knowledge-based approaches to enhance the adoption and implementation of records retention and disposal policies in higher education institutions in Uganda. Although these policies are essential for effective information governance, regulatory compliance, and institutional accountability, their implementation remains uneven due to limited use of analytical and knowledge management tools. The study adopted a mixed-methods design, collecting data from 12 purposively selected universities through questionnaires, interviews, and document analysis. It explores how statistical techniques can support evidence-based policy decisions and how knowledge-based approaches including AI and decision-support systems can improve staff understanding, training, and compliance. Findings indicate that integrating these approaches strengthens data-driven decision-making, promotes adherence to policies, and enhances overall records management efficiency. The study offers actionable recommendations for higher education institutions, policymakers, and records management professionals to modernize and optimize retention and disposal practices using analytical and knowledge-based interventions.
Keywords
Statistical Approaches, Knowledge-based Approaches, Records Retention, Disposal Policies, Higher Education, Uganda, Information Governance, Data-driven Decision-making
Constructive Methods for Ultraparallel Computation: Bridging the Gap from Polynomial Time to Circuit Complexity
Juan Manuel Dato Ruiz , Spain
ABSTRACT
Nowadays, computational processes are increasingly parallelizable, with particular relevance to problems in the complexity class P, those solvable efficiently by sequential algorithms. Traditionally, many problems in P were considered inherently sequential and resistant to effective parallelization. This work clarifies the roles of the classes P, NC (Nick's Class, representing highly parallelizable problems), and CC (comparator circuit class) in computational complexity, presenting constructive methods that systematically transform canonical sequential algorithms into ultra-efficient parallel solutions. Moreover, the author critically examines existing proofs in the complexity field, highlighting the importance of analyzing their mutual compatibility to advance theoretical foundations. Through novel ultra-parallelization techniques, we demonstrate that any problem efficiently solvable in P can be systematically mapped to a form analogous to NC or CC, drastically reducing computation time and overcoming the established divide between sequential and parallel computation. These transformations enable recasting virtually any efficiently solvable problem as an ultra-efficient parallel algorithm, opening new perspectives in algorithmic design and performance. Finally, the author includes a concise workflow at the end of the essay, which visually outlines the key steps required to transform a problem from class P into class CC, providing a practical roadmap for applying the theoretical methods discussed.
Keywords
Comparator Circuit Class (CC), Highly Parallelizable Problems (Nick’s Class), Polynomial Time Problems (P), Parallel Random Access Machine Model (PRAM), Nondeterministic Polynomial Problems (NP), Nondeterministic Logarithmic Space Problems (NLOG)
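The textbook example of turning a seemingly sequential P computation into an NC one is prefix sum: a naive running total takes n sequential steps, but the Hillis-Steele scan finishes in O(log n) rounds of independent updates. The sketch below simulates those rounds sequentially (on a PRAM, every position in a round would update simultaneously); it illustrates the flavour of the parallelization the essay discusses, not the essay's own P-to-CC transformation.

```python
def parallel_prefix_sum(xs):
    """Hillis-Steele inclusive scan: O(log n) rounds, each a set of
    independent updates that a PRAM could perform in one parallel step."""
    xs = list(xs)
    n = len(xs)
    step, rounds = 1, 0
    while step < n:
        # every position in this round reads old values and writes once,
        # so all n updates are data-independent within the round
        xs = [xs[i] + (xs[i - step] if i >= step else 0) for i in range(n)]
        step *= 2
        rounds += 1
    return xs, rounds

vals = list(range(1, 9))  # 1..8
scan, rounds = parallel_prefix_sum(vals)
print(scan, rounds)  # [1, 3, 6, 10, 15, 21, 28, 36] in 3 rounds
```

Eight elements need only 3 = log2(8) rounds instead of 8 sequential additions, the depth reduction that separates NC-style circuits from naive sequential evaluation.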