Accepted Papers
A Permissioned Blockchain Solution for the Decentralized Management and Governance of IoT Data Federations
Evangelos Athanasakis, Zisis Sakellariou, George Darzanos, Sofia Polymeni, Georgios Spanos, Thanasis G. Papaioannou, Konstantinos Votis, and Dimitrios Tzovaras, Centre for Research and Technology Hellas, Information Technologies Institute, Thessaloniki, Greece; Athens University of Economics and Business, Department of Informatics, Athens, Greece; National and Kapodistrian University of Athens, Department of Digital Industry Technologies, Athens, Greece
ABSTRACT
The Internet of Things (IoT) paradigm has gained attention in recent years, with multiple platforms supporting diverse application domains. However, the vertical focus and isolated development of these platforms pose challenges in offering cross-domain smart applications to meet societal needs. Interoperability frameworks have emerged to allow collaboration and the formation of federations among IoT entities. Since the decentralized nature of IoT data federations presents challenges in ensuring fairness, security, trustworthiness, and transparency, blockchain-based solutions could offer potential in addressing these issues. Our proposed solution, built on Hyperledger Fabric, is a management and governance system for IoT federations, leveraging Decentralized Autonomous Organizations. It includes smart contracts for federation creation and management through a set of configurable and extensible rules, along with a voting application. Our solution demonstrates secure, trustworthy, and transparent formation and membership control of IoT federations, capable of extending into data marketplaces with reputation and tokenization mechanisms.
KEYWORDS
Permissioned Blockchain, IoT, Federations, DAO, Hyperledger Fabric.
Implementation of a Computer System Through Deep Learning to Optimize the Selection of Fruits: A Systematic Review of the Literature
Jaison Keird Rubin Echavarria1, Igor Aguilar-Alonso2, 1Faculty of Systems Engineering and Informatics, National University of San Marcos, Lima, Peru, 2Professional School of Systems Engineering, National Technological University of Lima Sur, Lima, Peru
ABSTRACT
Currently, the manual fruit selection process poses a major problem due to high labor requirements and inefficiency, resulting in significant fruit loss. A viable solution is computer-based fruit selection, which offers numerous advantages. Notably, it brings economic benefits by reducing food waste and production costs, creating new opportunities in the food industry. This work reviewed 77 articles and selected 20 relevant ones, focusing on the implementation of Deep Learning. Based on the research findings, implementing a computer system utilizing a neural network architecture for data processing is highly advantageous.
KEYWORDS
Deep learning, CNN, fruit classification.
An IoT Adaptive Control Scheme for Robot-Assisted Rehabilitation in Healthcare
Ali Talasaz and Omar De Los Santos, Florida Atlantic University, Boca Raton, USA
ABSTRACT
The fusion of the Internet of Medical Things (IoMT) and robotics has the potential to revolutionize healthcare by enhancing patient care, reducing costs, and preventing medical relapse. In this paper, an adaptive control approach is proposed in the context of IoMT to enhance robot-assisted rehabilitation. With this approach, the patient's functional strength is measured in real time and shared with the remote doctor/therapist through a Rehab Cloud. To increase the efficacy of robot-assisted rehabilitation, the robot resistance and activity level are controlled in a closed-loop fashion, either autonomously through Artificial Intelligence (AI) or manually by the doctor/therapist. AI can also be utilized to autonomously decide the right mode of operation based on the patient's real-time functional strength and the database stored in the cloud. This paper also presents the patient outcomes, challenges, and considerations toward the proposed IoMT robot-assisted architecture for rehabilitation.
KEYWORDS
Internet of Medical Things, Robot-Assisted Rehabilitation, Healthcare, Collaborative Robots (COBOTs), Artificial Intelligence.
A Goldmine or a Minefield: The Puzzle of Predictive HR Analytics in the Private Sector
Kindness Tshuma, Midlands State University (M.S.U.), Zimbabwe
ABSTRACT
The sheer complexity of the modern business environment has compelled many organizations to re-think their operational models. Accelerated by the advent of artificial intelligence, there has been widespread adoption of human resources analytics by organizations. The ever-changing business world puts pressure on various organizational functions to deliver an evidence-based value proposition that promotes business growth. As a result, HR has radically repositioned its practices, such as recruitment and selection, culture management, performance management, employee relations, and gender balance, to positively influence the attainment of business strategies based on HR data insights. It has gone a step further to emphasise predicting the future production and employee behavioural needs desirable in the ever-changing business world. This has helped in the effective calibration of management strategies and operational systems in the midst of business ambiguity. On the contrary, however, the utilization of predictive analytics in people management poses numerous challenges and negative business implications, and there is a wide gap between evidence-based HR actions and the value delivered to the organization. This paper examined the business risks and challenges associated with the use of predictive human capital analytics using a mixed-methods approach. A sample of 40 participants comprising managerial and non-managerial employees from three Zimbabwean private-sector entities was drawn using stratified random sampling. By unveiling the hidden assumptions, the study seeks to identify human challenges and their implications for business continuity. The researcher highlighted how these challenges may be amplified by complexity in employee personalities and environmental factors, and further proposed future research studies to expand the frontiers of knowledge on predictive human capital analytics.
The chief aim of this paper was to enrich academia by identifying the associated business risks and the human-factor challenges that influence the utilization of predictive human capital analytics.
KEYWORDS
Human capital analytics, artificial intelligence, 4th industrial revolution.
Multimodal Analysis of Google Bard: Experiments in Visual Reasoning
David Noever and Samantha Elizabeth Miller Noever, PeopleTec, Inc., Huntsville, AL, USA
ABSTRACT
In a challenge-response study, we subjected Google Bard to 64 visual challenges designed to probe multimodal Large Language Models (LLMs). The challenges spanned diverse categories, including "Visual Situational Reasoning," "Visual Text Reasoning," and "Next Scene Prediction," among others, to discern Bard's competence in melding visual and linguistic analyses. Our findings indicate that Bard tends to rely on making educated guesses about visuals, especially when determining cues from images. Unlike other models such as GPT-4, Bard does not appear to rely on optical character recognition libraries like Tesseract, but recognizes text in complex images as deep learning models such as Google Lens and the Visual API do. Significantly, Bard can visually solve CAPTCHAs that ChatGPT fails to understand, instead recommending Tesseract solutions. Moreover, while the Bard model proposes solutions based on visual input, it cannot recreate or modify the original visual objects to support its conclusions: Bard fails to redraw ASCII art that its text can describe, or to capture a simple Tic-Tac-Toe grid it claims to analyze for the next moves. This study provides experimental insights into the current capacities and areas for improvement in multimodal LLMs.
KEYWORDS
Transformers, Text Generation, Image Analysis, Generative Pre-trained Transformers, GPT.
IoT-Based Real-Time River Monitoring in Relation to Flooding in Brunei Darussalam
Rasyidah Ismail, School of Computing and Informatics, Universiti Teknologi Brunei, Brunei Darussalam
ABSTRACT
Climate change has been one of the global threats experienced on the ground by people, property, and nature. Over the years, floods have often occurred in areas adjacent to riverbanks and in low-lying areas due to high stream discharge overflowing the banks. Local water experts believe that affected water-quality resources are often caused by flash floods. This paper addresses the pressing problem of affected water-supply quality in Brunei Darussalam, focusing on water-quality parameters and early detection of river water levels using real-time monitoring, and on detection of potential floods by prompting an alert to the Brunei Fire and Rescue Department (BFRD). The water-quality monitoring will alert the Department of Water Services (DWS) to any anomalies in the water-quality parameters. The system is developed in four parts, namely water level, water quality, water flow, and a weather station.
KEYWORDS
Climate Change, Flood, Real-Time Monitoring System, Internet of Things (IoT), Alert.
A Comprehensive Data-driven Analysis of Healthcare Disparities in the United States
Yusuf, Y. Babatunde, Durojaiye, M. Olalekan, Yussuph, T. Toyyibat, Unuriode, O. Austine, Akinwande, J. Mayowa, Yusuf, K. Tobi, Afolabi, T. Osariemen
ABSTRACT
Health disparities are not uniform, manifesting across a spectrum of dimensions, encompassing not only race and ethnicity but also gender, age, disability status, and socioeconomic factors. The project highlights significant disparities in healthcare access, the quality of care, and health outcomes. Racial and ethnic disparities are particularly pronounced, with disparities in health insurance coverage, prenatal care, and maternal morbidity. Gender disparities are evident, reflecting both biological and socioeconomic influences. Addressing these disparities necessitates a multi-faceted approach encompassing social determinants of health, equitable healthcare policies, and cultural competence. Equitable access to healthcare services, high-quality care, and improved data collection and monitoring are fundamental steps toward eliminating disparities. Furthermore, this project recommends initiatives to support underserved communities, enhance the quality of healthcare services, and foster culturally competent care. It emphasizes the importance of research and evidence-based approaches in policymaking and resource allocation. Policy reforms at federal, state, and local levels, including anti-discrimination laws, Medicaid expansion, and increased funding for public health initiatives, are vital to eliminating health disparities. To carry out successful interventions, collaboration between healthcare organizations, community organizations, governmental organizations, and advocacy groups can maximize resources and knowledge.
KEYWORDS
Healthcare disparities, Race, Ethnicity, Socioeconomic status, Gender, Socially disadvantaged group, Cultural competence, Access to healthcare.
NER Few-Shot Learning Approach for Heavily Formalized Evaluation Texts From Sensor Data
Tobias Dorrn and Almuth Muller, Fraunhofer IOSB, Fraunhoferstr. 1, 76131 Karlsruhe, Germany
ABSTRACT
Extracting information from images plays a vital role in the medical, security, and defense domains. Such findings are typically documented as formal reports. Presently, AI models for natural language processing (NLP) are primarily pretrained on natural prose texts, so highly formalized texts can be problematic for NLP models. NLP models use not only the information in the words of a text but also its grammatical structure. For highly formalized texts, the grammatical structure is usually very limited and very specific terms are used, which can reduce the recognition performance of NLP models. A possible approach to improving recognition performance for highly formalized texts using NLP is presented in this paper. In the following, NER models from the field of NLP are created and investigated for the analysis of highly formalized texts. Subsequently, a suitable approach for improving their recognition performance is presented.
KEYWORDS
few-shot learning, named entity recognition, natural language processing.
Evaluation of LIME for Sentiment Analysis of Sentences With Different Lengths
Tobias Dorrn and Almuth Muller, Fraunhofer IOSB, Fraunhoferstr. 1, 76131 Karlsruhe, Germany
ABSTRACT
Local Interpretable Model-agnostic Explanations (LIME) is a widely adopted technique for explainable Artificial Intelligence (XAI) that provides insights into AI models. However, the performance of LIME on short sentences can be a challenge, potentially undermining its utility in various natural language processing (NLP) applications. This study presents an evaluation of the impact of sentence length on LIME's interpretability accuracy for sentiment analysis. A set of similar sentences of different lengths is used, and the performance of LIME is evaluated using a state-of-the-art sentiment model. The findings reveal that, while semantically similar, shorter sentences result in less reliable explanations due to limited input perturbation for LIME's sampler. This research aims to raise awareness about the limitations of LIME on short sentences and initiate further investigations into improving the interpretability of AI models for short text analysis. Ultimately, this study contributes to enhancing transparency and trustworthiness in AI systems across various domains.
KEYWORDS
Explainable AI, Model Interpretability, Sentiment Analysis, Natural Language Processing, LIME.
The Evolution of VANET Networks: A Review of Emerging Trends in Artificial Intelligence and Software-Defined Networks
Lewys Correa Sanchez and Octavio Jose Salcedo Parra
ABSTRACT
The use of vehicular ad hoc networks (VANETs) has become increasingly important in today's world due to their ability to enhance driving safety and vehicular traffic efficiency. This article discusses artificial intelligence techniques used in VANETs, including machine learning, deep learning, and swarm intelligence techniques. Additionally, the challenges of routing in VANETs, such as communication link disruption, the presence of obstacles, and vehicle speed, are analyzed. Finally, software-defined networks (SDN) and their application in VANETs, including SDN protocols and SDN architectures, are described.
KEYWORDS
Vehicular communications, routing, artificial intelligence, software-defined networks & VANET.
Do You Speak Basquenglish? Assessing Low-Resource Multilingual Proficiency of Pretrained Language Models
Inigo Parra, University of Alabama, United States of America
ABSTRACT
Multilingual language models have democratized access to information and artificial intelligence (AI). Still, low-resource languages (LRLs) remain underrepresented. This study compares the performance of GPT-4, LLaMA (7B), and PaLM 2 when asked to reproduce English-Basque code-switched outputs. The study uses code-switching as a test to argue for the multilingual capabilities of each model and compares and studies their cross-lingual understanding. All models were tested using 84 prompts (N = 252), with their responses subjected to qualitative and quantitative analysis. This study compares the naturalness of the outputs, code-switching competence (CSness), and the frequency of hallucinations. Results of pairwise comparisons show statistically significant differences in naturalness and in the ability to produce grammatical code-switched output across models. This study underscores the critical role of linguistic representation in large language models (LLMs) and the necessity for improvement in handling LRLs.
KEYWORDS
Basque, code-switching, low-resource languages, multilingual models.
Early Identification of Conflicts in the Chilean Fisheries and Aquaculture Sector Using Text Mining and Machine Learning Techniques
Mauricio Figueroa Colarte
ABSTRACT
The problem addressed by this project relates to the Chilean fishing and aquaculture sector, which is very sensitive in economic and social terms, since aquaculture entrepreneurs and industrial and artisanal fishermen base their income on the exploitation of the country's hydrobiological resources. It is there that conflicts arise, mostly related to regulatory restrictions that materialize as management measures to maintain the biological and ecosystemic balance of the species. In this context, the main objective of this work is to evaluate different machine learning algorithms, applying text mining techniques, to build an Artificial Intelligence model that supports the managers of the Undersecretariat of Fisheries and Aquaculture in anticipating possible conflicts via early warnings. The CRISP-DM methodology was used, performing a series of experiments with a model based on Neural Networks (Multilayer Perceptron) capable of processing unstructured digital information flows, coming from electronic media and from Twitter, to predict conflicts with an average accuracy of 81.50%, exceeding the initial hypothesis.
KEYWORDS
Conflict, Fisheries, Aquaculture, Text Mining, Machine Learning.
A Knowledge Graph Completion Model Based on Weighted Fusion Description Information and Transform of the Dimension and the Scale
Panfei Yin, Erping Zhao, BIANBADROMA, and NGODRUP
ABSTRACT
Existing knowledge graph completion models represent entity and description information by uniform fusion; moreover, the convolutional kernel has few sliding steps on the triplet matrix composed of entities and relations and does not capture characteristics of entities and relations at different scales. In this paper, a knowledge graph completion model based on weighted fusion of description information and transformation of dimension and scale, EDMSConvKE, is proposed. First, the entity description information is obtained using the SimCSE model of contrastive learning and then combined with the entity according to a certain weight to obtain an entity vector with stronger expressive ability. Second, the head entity, relation, and tail entity vectors are combined into a three-column matrix, and a new matrix is generated by a dimensional transformation strategy to increase the number of sliding steps of the convolution kernel and enhance the information interaction ability of entities and relations in more dimensions. Third, the multiscale semantic features of triples are further extracted using convolution kernels of different sizes. Finally, the model is evaluated in a link prediction experiment and shows significant improvements in the Hits@10 and MR metrics.
KEYWORDS
Knowledge graph completion, SimCSE model, Weighted fusion, Multiscale CNN, Link prediction.
A Human-Centered Approach to Abusive Language Detection: Efficacy of LLMs
Zaur Gouliev and Rajesh Jaiswal, Technological University Dublin, Ireland
ABSTRACT
In the evolving digital world, the prevalence and repercussions of abusive language on online platforms have become a significant concern. This study is driven by the goal to effectively identify and mitigate instances of abusive language, fostering a more respectful and safer online environment. Leveraging language models such as LSTM, GPT, and BERT, we aimed to discern the capability and limitations of transfer learning in the context of abusive language detection. Our methods incorporated comprehensive steps: data preprocessing, model fine-tuning, and evaluation, addressing challenges such as class imbalances through techniques like SMOTE to ensure robust model training. Results from analysing the Davidson et al. and ConvAbuse datasets, two well-known datasets in the field of ALD, indicated that while all models exhibit considerable proficiency in detecting abusive language, the GPT model achieved the highest accuracy: 88% on dataset 1 and 95% on dataset 2. The study underscores the potential of LLMs in addressing online abusive language and presents a blueprint for employing such technologies ethically and effectively. It signals the importance of continual refinement in model development and deeper consideration of the complexities of digital communication, censorship, and freedom of speech. This exploration serves as a stepping stone toward deploying more nuanced, ethically constructed, and effective AI-driven solutions to detect online abuse, urging further research and development in this domain.
KEYWORDS
Large Language Models, Natural Language Processing, Abusive Language Detection, Transfer Learning, Explainable Artificial Intelligence, Human-Centered NLP.
An Analysis and Research of Growth Factors of Internet Celebrity Boba Milk Tea Stores Using Machine Learning and Artificial Intelligence
Fuyi Xie and Zihao Luo, California State Polytechnic University, USA
ABSTRACT
The boba milk tea industry has emerged as a dynamic and competitive sector within the broader landscape of the food and beverage industry [1]. Characterized by its unique combination of tea, milk, and chewy tapioca pearls, boba milk tea has garnered a dedicated following of enthusiasts. To secure and expand their presence in this market, boba milk tea stores aspire to achieve the status of internet celebrities, attracting a widespread and loyal customer base [2]. This research paper delves into the intricate realm of boba milk tea store growth factors using Machine Learning and Artificial Intelligence [3]. The study is motivated by the recognition that the industry's success depends on understanding and harnessing a diverse array of factors, including customer sentiment, store location attributes, and effective marketing strategies [4]. Our methodology entails data collection and preprocessing from a variety of sources, encompassing customer reviews, sales records, geospatial data, and marketing data [5]. Through rigorous feature engineering and the application of advanced Machine Learning algorithms, including sentiment analysis, geospatial analysis, and personalized marketing models, we aim to uncover the key determinants of boba milk tea store success. The results of our research offer actionable insights for both existing and aspiring boba milk tea store owners. Customer sentiment analysis reveals that customer reviews play a critical role in influencing store performance. Store location attributes, explored through geospatial analysis, indicate that proximity to target demographics, competitors, and high-traffic areas significantly impacts growth [6]. Furthermore, the effective deployment of personalized marketing strategies using Machine Learning techniques has been shown to enhance customer engagement and drive growth.
While our research provides valuable insights, it is essential to acknowledge certain limitations, such as data availability and the complexity of Machine Learning models. However, we are confident that this research contributes to the broader understanding of growth factors in the boba milk tea industry and can inspire further studies and practical applications. As businesses in the boba milk tea industry navigate a landscape shaped by evolving consumer preferences, this research underscores the transformative potential of Machine Learning and Artificial Intelligence in achieving and maintaining internet celebrity status. Beyond its immediate application, the study provides a blueprint for leveraging technology, data, and industry expertise to thrive in the competitive landscape of modern retail. This paper invites stakeholders within the boba milk tea industry and the broader retail and food and beverage sectors to embrace the power of data-driven decision-making, facilitating sustainable growth and success in the ever-evolving marketplace.
KEYWORDS
Flutter, Web Scraping, Firebase Storage, Boba Milk Tea.
An Empowering Mobile Aid Application to Help Visually Impaired People Navigate and Explore Places Around Them Using Location Tracking and Text Detection
Yiyao Zhang, Anne Yunsheng Zhang and Jenny Wong, California State Polytechnic University, USA
ABSTRACT
In Section 1, we present research on the population of blind people and what they most need help with in daily life; it turns out that many visually impaired individuals have problems navigating and reading. In Section 2, we discuss the challenges encountered while making the app and our solutions to them. Section 3 describes the app's three main systems and the sources/packages used to build each of them [1]. In Section 4, we experiment on the blind spots we suspected were problematic, analyze the graphs/data we have summarized, and explain the percentage/accuracy of the outcomes. In Section 5, we survey other solutions that address the same problem [3] and explain how they differ from our app. In Section 6, we summarize the limitations of our app, possible improvements, and how we will improve the app in the future to address those limitations.
KEYWORDS
Blindness, Firebase, iOS, Android.
Lightweight Encryption in the Post-Quantum Computing Era
Peter Hillmann, University of the Bundeswehr Munich, Department of Computer Science, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany
ABSTRACT
Confidentiality in our digital world is based on the security of cryptographic algorithms. These are usually executed transparently in the background, with people often relying on them without further knowledge. In the course of technological progress with quantum computers, the protective function of common encryption algorithms is threatened. This particularly affects public-key methods such as RSA and DH, which are based on discrete logarithms and prime factorization. Our concept describes the transformation of a classical asymmetric encryption method to a modern complexity class. Thereby, the approach of Cramer-Shoup is put on the new basis of elliptic curves. The system is provably cryptographically strong, especially against adaptive chosen-ciphertext attacks. In addition, the new method features small key lengths, making it suitable for the Internet of Things. It represents an intermediate step towards an encryption scheme based on elliptic-curve isogenies. This approach shows a way to a secure encryption scheme for the post-quantum computing era.
KEYWORDS
Cryptography, Public-Key Encryption, Post-Quantum Cryptography, Elliptic Curve, Isogeny Curve.
ScoredKNN: An Efficient KNN Recommender Based on Dimensionality Reduction for Big Data
Seda Polat Erdeniz1,2, Ilhan Adiyaman1, Tevfik Ince1, Ata Gur1, and Alexander Felfernig2, 1Frizbit Technology S.L., Barcelona, Spain, 2Graz University of Technology, Graz, Austria
ABSTRACT
E-commerce companies have an inevitable need to employ recommender systems in order to enhance the user experience, increase customer satisfaction, and drive sales. One of the most popular, intuitive, and explainable recommender algorithms is the K-nearest neighbors (KNN) algorithm, a well-known non-parametric collaborative filtering (CF) method. However, when dealing with big data, applying KNN poses computational challenges in terms of both time and space consumption. Several solutions have been proposed, but none of them has become a standard solution so far. To address this issue, we propose a dimension-reduction-based approach with scoring functions that is applicable to all neighboring methods. With the help of this approach, the similarity calculation is reduced to one dimension instead of two. The proposed approach reduces the KNN complexity from O(n²) to O(n), and it has been evaluated on both publicly available datasets and real-world e-commerce datasets of the e-commerce services provider Frizbit S.L. We have compared our method with state-of-the-art recommender systems algorithms and evaluated it based on three criteria: time consumption, space consumption, and accuracy. According to the experimental results, our proposed approach ScoredKNN achieves good accuracy (in terms of MAE) with lower time/space costs.
Welcome to Ultaki: Exploring the Relevance of Large Language Models for Accurate Behavioral Simulation in Energy Transition
Mehdi Mounsif, Benjamin Jauvion, and Fabien Medard, Akkodis Research, Clermont-Ferrand, France
ABSTRACT
The global focus on greenhouse gas reduction places a major emphasis on the electrification of systems. While replacing fossil fuels with clean electricity is extremely appealing, the non-negligible costs associated with extracting and transforming mineral resources into renewable energy production systems, as well as their worldwide deployment, must be considered. As such, this study presents a novel approach to integrating Large Language Models (LLMs) into energy demand simulation, addressing the complexities and variability of human behavior as well as its profound impact on energy systems. By leveraging LLMs to impersonate diverse characters with distinct psychological traits, we explore the plausibility of reactions, prompt sensitivity, and second-order dynamics through individual agent experiments. Furthermore, we introduce a framework for multiagent scenario investigation, where a shared limited volume of energy triggers a traumatic event if the average environmental sensitivity drops below a specified threshold. A thorough result analysis and discussion concludes this work and sheds light on the relevance and current limitations of integrating modern language models in complex systems and decision-making processes, as well as in more specific energy demand estimation and the formulation of sustainable energy strategies.
KEYWORDS
Large Language Models, Population Dynamics, Behavioral Simulation, Energy Transition.
Machine Learning Algorithms in the Judiciary: An Extrajudicial Monitoring Application
Harly Carreiro Varo1, Marcelo Lisboa Rocha2, Gentil Veloso Barbosa2 and David Nadler Prata2, 1Developer at Tocantins Court of Justice (TJTO), Tocantins, Brazil, 2Graduate Program in Governance and Digital Transformation, UFT, Tocantins, Brazil
ABSTRACT
In the judicial and extrajudicial spheres, the Tocantins Court of Justice (TJTO), Brazil, has achieved high levels of computerization. In this scenario, the conduct of such procedures requires transparency. The analysis of data resulting from on-site inspections of extrajudicial services is an aspect that needs particular attention. To analyze these data, a data mining technique based on association rules was proposed. Because the number of association rules is generally large, a second step was taken to optimize and reduce the number of rules. The proposed method performed better than other classic techniques from the literature, such as decision trees, Support Vector Machines, and Naive Bayes.
KEYWORDS
Association Rules, Data Mining, Extrajudicial Inspection & Optimization.
A Study on Improving Multilingual Neural Machine Translation for Low-Resource Languages
Tran Hong-Viet, Tran Lam-Quan, Mai Van-Thuy, Center of Digital Health, Hanoi University of Public Health, 1A Duc-Thang Street, North Tu-Liem District, Hanoi, Vietnam
ABSTRACT
In this paper, we present a method for building multilingual neural machine translation for low-resource languages, including Vietnamese, Lao, and Khmer, to improve the quality of multilingual machine translation in these areas. Our corpora include 500,000 Vietnamese-Chinese bilingual sentence pairs, 150,000 Vietnamese-Lao bilingual sentence pairs, and 150,000 Vietnamese-Khmer bilingual sentence pairs. Experiments on multilingual neural machine translation for the low-resource language pairs Vietnamese-Lao, Vietnamese-Khmer, and Vietnamese-Chinese show that our approach yields a statistically significant improvement over the baseline NMT system.
KEYWORDS
Machine Translation, Neural Machine Translation, Multilingual Neural Machine Translation.
From U to U+, What Innovations Have We Made for the Treatment and Discovery of Breast Cancer
Minuo Qing1, 2, Emily X. Ding1, Robert J. Hou1, 1Vineyards AI Lab, Auckland, New Zealand, 2Westlake Girls High School, Auckland, New Zealand
ABSTRACT
In recent years, with the development of cutting-edge technology, AI has gradually entered the public's vision and is now active in fields such as education, finance, and health. In this paper, we present how an AI model can be applied to breast cancer diagnosis and the potential benefits of this process. As work oriented toward high school students, we display more details on the investigated background, the model's learning, and the inspiration behind our solutions. Specifically, we used U-Net, a popular model in the current field of medical image segmentation, as our basic model. The U-shaped architecture, encoder vs. decoder, and convolution vs. transpose convolution are explained in a simple way. In addition, we hope to improve the algorithm to efficiently draw multiple conclusions simultaneously after inputting images. Classifiers are introduced in the bottleneck layer. As shown in the experimental results, our changes have indeed improved the original model. We believe that this breakthrough might be applied in future cancer cell detection systems.
KEYWORDS
Breast Cancer, Automatic Diagnosis, Classification, Semantic Segmentation, U-Net.
Axiomatic Methodology of Formalization as a Way to Intellectual Analysis in Computer Science
Viktoria Kondratenko1 and Leonid Slovianov2, 1School of Informatics, University of Edinburgh, UK, 2Centre of Innovative Medical Technologies of the National Academy of Sciences of Ukraine
ABSTRACT
This report discusses the possibility of creating a universal formalization tool for data mining. A methodology for semantic data analysis and intellectual problem solving is proposed.
KEYWORDS
Formalization, Axiomatic Modelling, Intellectual Analysis, Computer Science, Intellectual Task.
The Transformation Risk Model for Artificial Intelligence: Solutions Where Benefits Outweigh Risks
Richard Fulton1, Diane Fulton2, Nate Haynes3 and Susan Kaplan3, 1Department of Computer Science, Troy University, Troy, Alabama, USA, 2Department of Management, Clayton State University, Morrow, Georgia, USA, 3Modal Technology, Minneapolis, Minnesota, USA
ABSTRACT
This paper summarizes the most cogent and recently cited advantages and risks associated with Artificial Intelligence, drawn from an in-depth review of the literature. The authors then synthesize the salient risk-related models currently being used in AI, technology, and business-related scenarios. Lastly, in view of the most pressing issues and updated context of AI, along with the theories and models reviewed, the writers propose a new framework called "The Transformation Risk Model for Artificial Intelligence" to address the increasing fears and levels of risk. Using the model characteristics as a backdrop, the article emphasizes innovative solutions where benefits outweigh risks.
KEYWORDS
Artificial Intelligence, Risk Benefit Models, AI Challenges, AI Advantages, Generative AI.
An AI-Driven Bookmark Manager to Assist in Classifying Webpages Using Machine Learning
Yuhui Li1, Ang Li2, 1California State University Fullerton, 800 N State College Blvd, Fullerton, CA 92831, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
ABSTRACT
This paper introduces an innovative solution to the challenges associated with manual web page bookmarking and organization [3]. Our approach leverages the capabilities of machine learning to streamline and automate the bookmarking process, ultimately enhancing the web browsing experience [4]. The effectiveness of our solution lies in its ability to reduce the cognitive load on users. By automating the bookmarking process, it allows users to focus more on their browsing experience and less on manual organization. This approach distinguishes itself from traditional methods, which often involve time-consuming manual categorization that can lead to errors and misplacements. Moreover, some existing bookmark management tools lack intuitive interfaces and comprehensive automation. The extension had two key experiments to evaluate its accuracy. First, the text classifier's accuracy was tested across five distinct categories: "atheism", "religion.christian", "comp.graphics", "sci.med", and "sci.space". The classifier demonstrated an overall mean accuracy of 90%. The second experiment explored the classifier's performance on untrained categories, specifically sports articles. The results showed that the extension could not accurately categorize these articles, often mislabeling them under incorrect categories. This experiment highlighted the difficulties that arise when the classifier encounters topics outside its training data, or articles containing more than one training topic.
KEYWORDS
Sklearn, AI, Chrome Extension, Web Browsing.
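For readers unfamiliar with the kind of text classification the experiments above describe, here is a minimal bag-of-words Naive Bayes sketch on hypothetical training sentences. The extension's actual classifier and training data are not specified in the abstract beyond its use of scikit-learn and newsgroup-style categories, so everything below is illustrative:

```python
import math
from collections import Counter, defaultdict

# Hypothetical training texts standing in for newsgroup-style categories.
train = [
    ("sci.space", "nasa launched a rocket into orbit"),
    ("sci.space", "the telescope observed a distant galaxy"),
    ("comp.graphics", "rendering shaders on the gpu"),
    ("comp.graphics", "polygon mesh texture rendering"),
]

def fit(data):
    """Count words per class and documents per class."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    for label, text in data:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, class_counts, vocab

def predict(text, model):
    """Pick the class with the highest log-probability score."""
    word_counts, class_counts, vocab = model
    n_docs = sum(class_counts.values())
    best, best_score = None, -math.inf
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / n_docs)
        for w in text.split():
            # Laplace smoothing keeps unseen words from zeroing the score.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

model = fit(train)
print(predict("rocket orbit telescope", model))  # sci.space
```

A sketch like this also exhibits the failure mode the second experiment found: text from a category absent from `train` is still forced into one of the known labels.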
AI-Based Security Enhancement and Personal Information Protection Techniques Inherited From 6G Networks
Kwang-Man Ko, Sung-Won Kim, Chul-Long Kim, Jun-Seop Lim, and Byung-Suk Seo, Department of Computer Engineering, Sangji University, Won-ju, Republic of Korea
ABSTRACT
As the realization of the digital transformation expected with 5G networks has already begun and will continue to evolve over this decade, the 6G communication era envisions how humans will interact with digital virtual worlds beyond 2030. The security mechanisms designed for 5G using the concepts of SDN and NFV should be further improved to cater to the security demands of 6G networks. When 6G networks harmonize the concepts of SDN, NFV, and AI in an integrated environment to provide the necessary services, the system-level differences between 5G and 6G will introduce new security threats and privacy concerns. In this paper, we aim to identify and systematically analyze the new security threats and privacy concerns that 5G technologies pass on to 6G networks, and to investigate the corresponding 6G-specific defences.
KEYWORDS
5G Networks, 6G Networks, Security and Privacy, Machine Learning.
Security-aware Coding Patterns
Roberto Andrade1, Jenny Torres1 and Johanna Molina2, 1Software Security Anti-patterns Research Group (SSA-RG), Facultad de Ingeniería de Sistemas, Escuela Politécnica Nacional, Quito 170525, Ecuador, 2Computer Science Department, Universidad de las Fuerzas Armadas ESPE, Sangolquí 171103, Ecuador
ABSTRACT
In the Software Development Life Cycle (SDLC), constant testing and good practices are essential. For instance, different errors and bugs can appear in the coding process, and even the design of a pattern can generate an antipattern. Based on the main SDLC phases of design, analysis, and implementation, and applying a cognitive scheme, this proposal presents a model for identifying coding patterns that can affect the security of the final software product. This framework improves the cognitive process of developers in tasks such as debugging, optimization, and software maintenance, since it uses several cognitive mental models to mitigate problems in the software by observing, orienting, deciding, and acting on the changes made during code creation, obtaining the best possible results.
KEYWORDS
Antipatterns, mental model, software patterns, cybersecurity.
A Hierarchical Vision Approach for Enhanced Medical Diagnostics of Lung Tuberculosis Using Swin Transformer
Syed Amir Hamza1 and Alexander Jesser2, 1Institute for Intelligent Cyber-Physical Systems (ICPS) Heilbronn University of Applied Sciences, Max-Planck-Street, Heilbronn, Germany, 2Heilbronn University of Applied Sciences, Max-Planck-Street, Heilbronn, Germany
ABSTRACT
Lung tuberculosis remains a significant global health concern, and accurate detection of the disease from chest X-ray images is essential for early diagnosis and treatment. The primary objective is to introduce a cutting-edge approach utilizing the Swin Transformer, designed to aid physicians in making more precise diagnostic decisions in a time-efficient manner. Additionally, the focus is to reduce the cost of the testing process by expediting the detection process. The Swin Transformer is a state-of-the-art vision transformer that employs a hierarchical feature representation and shifted window mechanism to enhance image understanding. We employ the NIH Chest X-ray dataset, which consists of 1,557 images labeled as not having tuberculosis and 3,498 images depicting the disease. The dataset is randomly split into training, validation, and testing sets using a 64%, 16%, and 20% ratio, respectively. Our methodology involves preprocessing the images using random resized crop, horizontal flip, and normalization before converting them into tensors. The Swin Transformer model is trained for 50 epochs with a batch size of 8, using the Adam optimizer and a learning rate of 1e-5. We monitor the model's accuracy and loss during training and calculate the F1-score, precision, and recall to evaluate its performance. The results of our study reveal a peak training dataset accuracy of 0.88 at the 43rd epoch, while the validation dataset achieves its highest accuracy of 0.88 after 20 epochs. The testing phase yields a precision of 0.7928 and 0.9008, recall of 0.7749 and 0.9099, and F1-score of 0.7837 and 0.905 for the "Negative" and "Positive" classes, respectively. The Swin Transformer exhibits encouraging performance, and we anticipate that this architecture will be easily adaptable and possess considerable potential for enhancing the speed and efficiency of diagnostic decisions made by physicians in the future.
KEYWORDS
Lung tuberculosis, Medical diagnostics, Swin Transformer, Vision transformer, Hierarchical feature representation, Shifted window mechanism, Deep learning, Computer vision, Medical image analysis, NIH Chest X-ray dataset, Early diagnosis.
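The per-class scores quoted in the abstract obey the usual F1 definition (the harmonic mean of precision and recall), which can be checked directly from the reported numbers:

```python
def f1(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Per-class precision and recall reported in the abstract.
reported = {"Negative": (0.7928, 0.7749), "Positive": (0.9008, 0.9099)}

for label, (p, r) in reported.items():
    print(f"{label}: F1 = {f1(p, r):.4f}")
```

This reproduces 0.7837 for the "Negative" class and about 0.9053 for the "Positive" class, consistent with the 0.905 the abstract reports to three decimal places.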
ChatGPT for Generating Questions and Assessments Based on Accreditations
Rania A Aboalela, Department of Information Systems, King Abdulaziz University, Jeddah, Kingdom Of Saudi Arabia
ABSTRACT
This research aims to take advantage of artificial intelligence techniques in producing student assessments that are compatible with the different academic accreditations of the same program. The possibility of using ChatGPT was studied to produce tests compliant with the NCAAA and ABET academic accreditations. A novel method was introduced to map the verbs used to create the questions introduced in the tests. The method allows ChatGPT to be used to produce and check the validity of questions that measure educational outcomes. A questionnaire was distributed to ensure that the use of ChatGPT to create exam questions is acceptable to faculty members, as well as to ask about the acceptance of assistance in validating questions submitted by faculty members and amending them in accordance with academic accreditations. The questionnaire was distributed to faculty members of different majors in the universities of the Kingdom of Saudi Arabia. 120 responses were obtained, with an 85% approval rate for generating complete exam questions with ChatGPT, and a 98% approval rate for editing and improving already existing questions.
KEYWORDS
ChatGPT AI, NCAAA, ABET, IS Program, Kingdom of Saudi Arabia, Bloom Taxonomy.
An Entertaining Application to Mix Exercise With Fun Using Pose Estimate and Unity
Alexander Wu1, Amelia Wu2, Moddwyn Andaya3, 1California Connections Academy Central Valley, 580 N. Wilma Ave., Ste. G, Ripon, CA 95366, 2Human Biology Department, UC San Diego, 9500 Gilman Drive, La Jolla, CA 92093, 3Computer Science Department, California State Polytechnic University, Pomona, CA 91768
ABSTRACT
We address the rising issue of increasing weight and declining health due to sedentary lifestyles and unhealthy diets. Our solution involves a camera tracking system that monitors users' movements and offers six engaging exercise games, bridging the gap between fitness and enjoyment. The project comprises three interconnected components: pose estimation, game information, and mini-games, with C# code utilized for pose estimation and individualized coding for seamless integration of each game. Throughout the development process, three notable challenges emerged: occasional sensory issues with the pose estimate, animation complexities, and the absence of an effective scoring system. To enhance the efficiency of the pose estimate, we conducted three rounds of trials, each consisting of ten arm circles, revealing that proximity to sensors was a common cause of issues. The animation challenge was overcome by incorporating free, simple, and readily available animations from online sources into human models for games requiring user interaction with models. To address the scoring dilemma, we clarified game goals by providing text instructions, guiding users on how to achieve success. This application caters to a younger audience, offering affordability, visual appeal, intense exercise, and swift results. Balancing fitness and fun, it presents an ideal solution for those seeking an engaging and effective exercise regimen.
KEYWORDS
Exercise, AI, Unity, Mini Games.
Assessing the Viability and Socio-economic Impact of Solar Photovoltaic Systems in Off-grid Rural Communities: A Case Study of a Developing Region
Tianmu Li1, Jonathan Sahagun2, 1University of California Irvine, Irvine, CA 92697, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
ABSTRACT
This study investigates the performance and economic feasibility of solar photovoltaic (PV) systems for powering a remote rural community in a developing region [1]. The research assesses the PV system's reliability and energy generation capacity, considering factors like varying weather conditions and daily energy demand patterns [2]. It evaluates the system's effectiveness in providing electricity to households, schools, and local businesses, addressing the critical need for sustainable energy sources in underserved areas [3]. Additionally, a cost-benefit analysis is conducted, considering the initial installation costs, maintenance, and potential environmental benefits. The findings reveal that the solar PV system demonstrates promise as a reliable and environmentally friendly energy source, especially in regions with abundant sunlight. It offers a viable solution to alleviate energy poverty and improve the quality of life in off-grid communities. The results also emphasize the importance of affordable and efficient solar technology, providing valuable insights for policymakers and stakeholders seeking to promote sustainable energy solutions in remote and underserved areas.
KEYWORDS
Solar PV Feasibility, Rural Electrification, Sustainable Energy, Cost-Benefit Analysis.
On-premise File Server vs. Cloud Storage With Incident Management: A Comparative Study
Jaymie Rae Medina1 and Jennalyn Mindoro2, 1Graduate Studies, Technological Institute of the Philippines, Manila, Philippines, 2Computer Engineering Department, Technological Institute of the Philippines, Manila, Philippines
ABSTRACT
Many organizations are already shifting their infrastructure and applications to the cloud. Cloud technology provides access to products and services over the internet, and its maintenance is often managed by a cloud service provider. A comparison has been made between an existing technology, on-premise file servers, and virtualized file servers in the form of cloud storage to determine the advantages and disadvantages of each file-sharing system's performance. An architectural framework and a simulation of the new cloud storage architecture have been conducted to serve as file storage. Finally, several users that are currently employing the existing on-premise file server technology have participated in user acceptance testing to try the cloud-based storage as a replacement for the file server. The test outcomes proved that end-users were able to execute their regular duties using cloud storage and that they favored it over their current file storage.
KEYWORDS
Storage, Cloud, On-premise, Incident Management, Server, AWS.
Hardware Design and Implementation of Elliptic Curve Cryptography Algorithm
Aravinth M and Sasivarnam J, Department of Electronics and Communication Engineering, Bannari Amman Institute of Technology, Sathyamangalam
ABSTRACT
Elliptic Curve Cryptography (ECC), a reliable and effective technique for protecting digital communication and data, has grown in popularity. Its rising popularity can be attributed to its capacity to offer robust security with comparatively small key sizes, making it especially suitable for situations with limited resources. The ECC algorithm's hardware design and implementation are thoroughly examined in this study. The research starts with a thorough analysis of the foundations of ECC, highlighting its mathematical basis and the important characteristics that make it a potent cryptographic tool. The study then explores hardware design concerns for ECC, highlighting key elements including elliptic curve point operations, finite field arithmetic, and modular reduction strategies. ECC must be hardware accelerated to achieve high-speed encryption and decryption operations; hence, effective implementation is of utmost importance. To carry out ECC procedures quickly, the architecture combines specialized hardware components and improved algorithms. The design decisions for the ECC hardware accelerator are discussed in the study, taking into consideration variables like area, power use, and performance. The design's appropriateness for various hardware platforms and applications is demonstrated by a thorough study of its benefits and trade-offs. The ECC hardware accelerator implementation is also described in depth, together with the results of simulations and the hardware description language that was utilized. To demonstrate the efficiency advantages brought about by hardware acceleration, performance measures like throughput, latency, and resource use are carefully examined. The study also discusses potential improvements to the hardware architecture and future prospects for research in order to fulfil changing security needs.
KEYWORDS
Elliptic Curve Cryptography (ECC), Hardware Design, Cryptography Acceleration, Finite Field Arithmetic, Security Implementation, Point Operations.
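As background to the point operations and finite-field arithmetic the study discusses, here is a minimal software sketch of affine point addition and double-and-add scalar multiplication over a toy prime field. The curve parameters are illustrative, not a standard curve, and a hardware accelerator would implement these same operations with dedicated modular-arithmetic datapaths:

```python
# Affine point arithmetic on y^2 = x^3 + A*x + B over GF(P_MOD) -- the
# group operations an ECC hardware accelerator realizes with finite-field
# multipliers and modular reduction. Toy parameters, not a standard curve.
P_MOD, A, B = 97, 2, 3

def inv(x):
    """Modular inverse via Fermat's little theorem (P_MOD is prime)."""
    return pow(x, P_MOD - 2, P_MOD)

def add(p1, p2):
    """Add two curve points; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # P + (-P) = infinity
    if p1 == p2:
        s = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD   # tangent slope
    else:
        s = (y2 - y1) * inv(x2 - x1) % P_MOD          # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def mul(k, p):
    """Scalar multiplication by double-and-add, the core ECC primitive."""
    acc = None
    while k:
        if k & 1:
            acc = add(acc, p)
        p, k = add(p, p), k >> 1
    return acc

G = (0, 10)  # on the curve: 10^2 = 100 ≡ 3 = 0^3 + 2*0 + 3 (mod 97)
print(mul(5, G))  # (88, 56)
```

In hardware, the `inv` call is the expensive step, which is why accelerator designs often switch to projective coordinates to defer inversion; the sketch keeps affine form for clarity.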
VLSI Implementation of RNS Convolutional Neural Networks for Deep Learning Applications
Jagadheswaran M and Dharshnaa K, Department of Electronics and Communication, Bannari Amman Institute of Technology, Sathyamangalam, Tamil Nadu, India
ABSTRACT
As a result of Convolutional Neural Networks' (CNNs') unparalleled advancement in core areas like Deep Learning and Machine Learning, there is an urgent need for computations to be completed more quickly while carefully preserving hardware resources. The Residue Number System (RNS) has emerged as a front-runner as a result of this pressing necessity, which has driven researchers and developers to examine unconventional number systems. RNS, which is parallel by nature, provides greater efficiency by significantly reducing the overflow range issues that plague conventional systems. As the technological landscape develops, traditional arithmetic systems encounter overhead problems that are becoming more and more problematic, particularly in terms of fast arithmetic implementation and effective bandwidth management. These issues are even more challenging considering the demanding computing specifications of sectors like sophisticated robotics, medical imaging, and aerospace. RNS stands out in this setting not only for its parallelism but also for its potential for resource optimization. The field of convolution arithmetic, which is at the heart of CNN operations, could experience a significant shift by making use of this potential. As companies seek precision, speed, and efficiency, the limitations of the conventional number system become more evident. The ultimate objective is to optimize Deep Neural Network topologies and streamline them to attain efficiencies previously thought to be impractical with conventional systems. By contrasting RNS-enhanced CNNs with their conventional counterparts, we aim to show the true benefits of these techniques and to establish a new course for the development of deep learning hardware.
KEYWORDS
Convolutional Neural Network (CNN), Residue Number System (RNS), Very Large Scale Integration (VLSI), Deep Neural Network, Efficiency, Accuracy.
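The parallelism the abstract attributes to RNS comes from representing an integer by its residues modulo pairwise-coprime moduli, so additions and multiplications proceed channel-by-channel with no carry propagation between channels. A minimal sketch with toy moduli (a real RNS CNN design would size the moduli set to cover the convolution accumulator's dynamic range):

```python
from math import prod
from operator import add, mul

# Toy pairwise-coprime moduli; their product bounds the representable range.
MODULI = (3, 5, 7)            # dynamic range M = 3 * 5 * 7 = 105

def to_rns(x):
    """Represent x by its residues; each channel is independent."""
    return tuple(x % m for m in MODULI)

def rns_op(a, b, op):
    """Apply add/mul channel-wise -- no carries cross channel boundaries."""
    return tuple(op(ra, rb) % m for ra, rb, m in zip(a, b, MODULI))

def from_rns(residues):
    """Chinese Remainder Theorem reconstruction back to an integer."""
    M = prod(MODULI)
    total = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return total % M

a, b = 17, 5
print(from_rns(rns_op(to_rns(a), to_rns(b), add)))  # 22
print(from_rns(rns_op(to_rns(a), to_rns(b), mul)))  # 85
```

In VLSI terms, each residue channel maps to a small independent arithmetic unit, and only the final CRT conversion (or a comparison/overflow check) needs cross-channel logic, which is where the hardware cost of RNS concentrates.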
Advancing Web Accessibility: A Guide to Transitioning Design Systems From WCAG 2.0 to WCAG 2.1
Hardik Shah, Department of Information Technology, Rochester Institute of Technology, Rochester, New York, USA
ABSTRACT
This research focuses on the critical process of upgrading a Design System from Web Content Accessibility Guidelines (WCAG) 2.0 to WCAG 2.1, which is an essential step in enhancing web accessibility. It emphasizes the importance of staying up to date with evolving accessibility requirements, as well as the critical role of Design Systems in supporting inclusion in digital environments. The article lays out a complete strategy for meeting WCAG 2.1 compliance, comprising assessment, strategic planning, implementation, and testing. The need for collaboration and user involvement is emphasized among the critical strategies and best practices for a successful migration journey. In addition, the article digs into migration barriers and discusses significant lessons learned, offering a realistic view of the intricacies of this transformative journey. Finally, it is a practical guide and a necessary resource for organizations committed to accessible and user-centered design, providing them with the knowledge and resources they need to navigate the changing world of web accessibility properly.
KEYWORDS
Web accessibility, WCAG 2.0, WCAG 2.1, Design Systems, Web accessibility tools.
Harnessing Mobile Technology for Sustainable Engagement: An Innovative iOS and Android Flutter Application for United Nations SDG Alignment and User Empowerment
Megan Wei1, Bobby Nguyen2, 1University of California Irvine, Irvine, CA 92697, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
ABSTRACT
In a world facing pressing challenges related to sustainability and the United Nations Sustainable Development Goals (SDGs), harnessing technology for global betterment has become a paramount goal [1]. This research paper introduces an innovative iOS and Android Flutter application designed to engage users of all ages in daily tasks aligned with the SDGs [2]. The paper explores the imperative of mobilizing a diverse user base to contribute to sustainability and illustrates how technology can be a catalyst for positive change. It discusses the application's structure, core components, and unique features, such as daily prompts, interactive content, and real-world action. The study also delves into a user survey experiment, revealing significant findings that demonstrate the application's positive impact on awareness and the adoption of sustainable behaviors [3]. Through this research, we aim to emphasize the potential of mobile applications to empower individuals and communities in the pursuit of sustainability, fostering a collective drive toward a more harmonious and responsible global future.
KEYWORDS
Social/Interactive, Climate Engagement, Photos/Videos, Carbon Footprint.
A Smart Fishing Rod and Software Using Flex Sensor, Vibration Sensor and Bluetooth
Yanbo Wang1, Jonathan Sahagun2, 1Chapman University, 1 University Dr, Orange, CA 92866, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
ABSTRACT
Fishing rod vibration sensors represent an innovative tool in the realm of angling technology, designed to revolutionize the fishing experience [1]. These sensors leverage piezoelectric technology to detect subtle vibrations in the fishing rod, alerting anglers to fish bites in real-time. This abstract explores the key features and applications of these sensors. These sensors are adept at distinguishing between external disturbances and genuine fish bites, reducing the likelihood of false alarms [2]. Their sensitivity and detection range can be calibrated, optimizing their performance for diverse fishing scenarios. By enhancing the angler's ability to detect bites promptly, fishing rod vibration sensors significantly improve catch rates and the overall fishing experience. They are particularly valuable for novice and experienced anglers alike, facilitating a deeper connection with the sport. As technology continues to advance, fishing rod vibration sensors offer a seamless blend of tradition and innovation, ensuring that the age-old practice of angling remains as thrilling and rewarding as ever.
KEYWORDS
Fishing rod, Flex Sensor, Vibration Sensor, Bluetooth.
Unravelling DNS Performance: A Historical Examination of F-root in Southeast Asia
Jiajia Zhu, Chao Qi, China Academy of Information and Communications Technology, Beijing, China
ABSTRACT
The DNS root service system uses Anycast technology to provide resolution through widely distributed root nodes. In recent years, the F-root node has seen astonishing growth and now boasts the largest number of nodes among the 13 root servers. Based on Ripe Atlas measurement data, we examined the availability and query latency of the F-root within the Southeast Asian region historically. The collected data illustrates how latency varies with changes in the number of root nodes, how the geographic distribution of responding root nodes changes in different periods, and examines the most recent differences between countries in terms of latency distribution. This study sheds light on the evolving landscape of DNS infrastructure in Southeast Asia.
KEYWORDS
DNS, Southeast Asia, Ripe Atlas, Latency.
Harnessing Customized Built-in Elements: Empowering Component-based Software Engineering and Design Systems With HTML5 Web Components
Hardik Shah, Department of Information Technology, Rochester Institute of Technology, Rochester, New York, USA
ABSTRACT
Customized built-in elements in HTML5 significantly transform web development. These elements enable developers to create unique HTML components tailored with specific design and purpose, addressing the unique needs of web applications more quickly and supporting consistent user interfaces and experiences across diverse digital platforms. This study investigates the role of these features in Component-Based Software Engineering (CBSE) and Design Systems, emphasizing the benefits of code modularity, reusability, and scalability in web development. The paper also discusses the difficulties and concerns that must be addressed when creating customized built-in elements, such as browser compatibility, performance optimization, accessibility, security, styling, and interoperability. It emphasizes the importance of standardization, developer tooling, and community interaction in order to fully realize the potential of these features. Looking ahead, customized built-in elements have potential in a variety of applications, including the Internet of Things (IoT), e-commerce, and educational technologies. Their incorporation into Progressive Web Apps (PWAs) is expected to further improve web experiences. While obstacles remain, the article concludes that HTML5 customized built-in elements are a driver for web development innovation, allowing the production of efficient, adaptive, and user-centric web applications in an ever-changing digital context.
KEYWORDS
Customized built-in elements, HTML5 Web Components, Component Based Software Engineering, Design Systems, Web UI development.
Comparison of Analog Circuit Sizing Networks and Number of Steps
Kazuya Yamamoto and Nobukazu Takai, Department of VLSI, Kyoto Institute of Technology, Kyoto, Japan
ABSTRACT
This paper addresses the challenges in analog circuit design by introducing automation via deep reinforcement learning (DRL). It investigates how different neural networks (NNs) and numbers of steps per episode affect performance in the sizing of analog circuits, optimizing the figure of merit (FoM) in pre-layout design. We specifically investigate various graph neural networks (GNNs) and their process portability. Our experiments, conducted with a two-stage amplifier circuit, suggest that the topology adaptive graph (TAG) network demonstrated the most effective transfer of topological graph information, and that the graph convolutional network (GCN) exhibited suitability for TSMC 180 nm to 65 nm process portability. Results also indicate that the number of steps per episode was proportional to the frequency of maximum FoM updates but inversely proportional to the FoM growth rate in the early learning period. Additionally, the GCN showed unique process portability, confirming that GNNs can overlearn with respect to process portability. The study provides significant insights into automated design space exploration and process portability evaluation for analog circuits.
KEYWORDS
Analog Integrated Circuits Design, Auto Design, Machine Learning, Deep Reinforcement Learning.