Welcome to AIAPP 2023

10th International Conference on Artificial Intelligence and Applications (AIAPP 2023)

May 20 ~ 21, 2023, Zurich, Switzerland

Accepted Papers

Altruistic ASD (Autism Spectrum Disorder) Virtual Reality Game Assisting Neurotypicals' Understanding of Autistic People

Taraf Alshalan, Ghala Alamri and Dr. Maali Alabullhafith, College of Computer Sciences and Information at Princess Norah University, Riyadh, Saudi Arabia.

ABSTRACT

Autism Spectrum Disorder (ASD) is a developmental disability that can cause significant social, communication and behavioural challenges. Parents of children on the spectrum find it difficult for their kids to communicate with them and with other people, which makes social interaction challenging. Researchers have introduced different solutions, such as a therapy robot that teaches social skills to children with autism, and virtual reality systems that train emotional and social skills in children with ASD. However, these solutions focus only on the person on the spectrum, and it might take years to see their impact. In this study, we introduce a solution that focuses on the other perspective: an advanced, interactive, intelligent technology that educates neurotypical people on how to communicate with people on the spectrum in different scenarios and environments, while letting them see the consequences of that interaction from the point of view of a person on the spectrum and remain aware of their actions and fully engaged through virtual reality (VR). Virtual reality is a technology that simulates experiences similar to the real world. We achieved this work's objective by implementing a VR-based storyline game.

KEYWORDS

Neurodivergent, Neurotypical, Virtual Reality, Communicating on the Spectrum.


A Machine Learning Model That Analyzes Surrounding Road Signs to Help Drivers Reduce the Dangers Caused by Human Error

Annie Wu1, Yu Sun2, 1Barrington High School, 616 W Main St, Barrington, IL 60010, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620

ABSTRACT

Road signs provide essential information and precautions to drivers, and are crucial to the safety of both drivers and pedestrians [1]. Red signs, such as stop signs, yield signs, and do-not-enter signs, are regulatory signs that organize traffic [2]. Yellow signs serve as precautions to prevent accidents. Traffic lights dictate whether to go or stop at an intersection [3]. Although road signs are intended to attract drivers' attention and help them operate their vehicles safely, drivers can still misread road signs, resulting in car accidents and serious injuries.

KEYWORDS

Machine Learning, Image classification, Autonomous vehicle.


A Study on User Authentication Based on DID and CP-ABE for Self-Sovereign Identity in Smart Vehicles

Taehoon Kim and Im-Yeong Lee, Department of Software Convergence, Soonchunhyang Univ., Asan-si 31538, Republic of Korea

ABSTRACT

In existing smart vehicles, vehicle owners must secure self-sovereignty for authentication. To this end, Holders in DID (Decentralized Identifier) systems do not rely on traditional IdM (Identity Management), but control their identity data and authenticate their credentials with Verifiers. However, for the Verifier to authenticate the Holder, additional data beyond the VP is sometimes required, and because DID-based data is transmitted under a general encryption scheme, fine-grained access control problems and inefficiency problems arise. Research on CP-ABE (Ciphertext Policy Attribute-Based Encryption)-based data access control schemes for DID is being actively conducted to solve this problem. However, existing DID-based CP-ABE schemes give rise to various security threats. This paper proposes a user authentication scheme based on DID and CP-ABE for self-sovereign identity in smart vehicles.

KEYWORDS

Smart Vehicle, Decentralized Identifier, Self-Sovereign Identity, Ciphertext Policy Attribute-Based Encryption.


Sentiment Analysis of Social Media Data on COVID-19

Adwita Arora1, Krish Chopra1, Divya Chaudhary2, Ian Gorton2 and Bijendra Kumar1, 1Netaji Subhas University of Technology, India, 2Northeastern University, USA

ABSTRACT

The COVID-19 pandemic has forced people to resort to social media to express their thoughts and opinions, many of which are indicative of their feelings and can be analysed further. In this paper, we aim to analyse the impact of the COVID-19 pandemic on social media users through sentiment analysis of data collected from two popular platforms, Twitter and Reddit. The textual data is preprocessed and made fit for sentiment analysis using two unsupervised methods, VADER and TextBlob. Special care is taken to translate tweets and comments not in English to ensure their proper classification. We perform a comprehensive analysis of users' emotions specific to the COVID pandemic, along with a time-based analysis of the trends and a comparison of the performance of both tools. A geographical distribution of the sentiments is also produced to see how they vary across regional boundaries.
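As an illustration of the lexicon-based scoring that tools like VADER and TextBlob perform, the following stdlib-only sketch sums per-word polarity scores from a tiny hand-made lexicon; the word scores and negation handling are invented for the example and are not the real VADER rules.

```python
# Minimal lexicon-based polarity scorer (illustrative lexicon, not VADER's).
LEXICON = {"good": 1.0, "great": 2.0, "happy": 1.5,
           "bad": -1.0, "terrible": -2.0, "sad": -1.5}
NEGATIONS = {"not", "no", "never"}

def polarity(text: str) -> float:
    """Sum word scores, flipping the sign of the word after a negation."""
    score, flip = 0.0, 1.0
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in NEGATIONS:
            flip = -1.0
            continue
        score += flip * LEXICON.get(word, 0.0)
        flip = 1.0
    return score

def label(text: str) -> str:
    """Threshold the summed score into a sentiment class."""
    s = polarity(text)
    return "positive" if s > 0 else "negative" if s < 0 else "neutral"
```

Classification then reduces to thresholding the score; for instance "not happy about the lockdown" comes out negative because the negation flips the score of "happy".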

KEYWORDS

Sentiment Analysis, Social Media Analysis, Natural Language Processing, COVID-19.


Study of Factors Affecting the Success and Failure of Government ICT Projects

T.D.H.Jayathma, Ministry of Education, Isurupaya, Sri Lanka.

ABSTRACT

Government ICT projects play a major role in the economic development of a country, strategically empowering digitally capable citizens. Digital infrastructure is necessary for a country, and the implementation of ICT projects is aimed at providing convenient, efficient and effective service delivery by creating a digitally inclusive community. Throughout the world, many countries have initiated the setup of ICT infrastructure and the installation of ICT projects. Although many government organizations have started to implement ICT projects, more than 60% of government ICT projects have failed. According to the research findings, numerous factors affect the failure of government ICT projects, and among these, ineffective project management is the most prominent reason. Successful ICT projects have shown a good level of public participation, enabling better delivery of e-services to citizens through ICT applications. Countries which have initiated e-government and m-government deliver services to the public as a one-stop service, using successful ICT projects within a dynamic ecosystem. Notable successes are seen in countries like Denmark, Korea, Estonia, Malaysia and Singapore, where the E-Government Development Index (EGDI) shows a high value. In developing countries, although e-government initiatives have been taken through various ICT projects, some have succeeded while most have failed. A successful government ICT project follows the phases of the project management lifecycle: project initiation, project planning, project execution, monitoring and controlling, and project closure. Adoption of best practices, use of agile methodologies, continuous follow-ups, proper documentation, and planning for deliverables with a clear vision and strategy are significant features of successful government ICT projects.

KEYWORDS

Government, ICT Projects, IT Project Management, E-government.


Integration of IoT Heterogeneous Networks With Smart Contracts in Blockchain

Yuan-Cheng Lai, Yi-An Chen, Chuan-Kai Yang, Department of Information Management, National Taiwan University of Science and Technology, Taipei, Taiwan.

ABSTRACT

As the Internet of Things (IoT) continues to flourish, integrating IoT applications across networks has become an important research topic. Previous work in this area focused on achieving resource management and quality assurance with proposed job assignment and resource allocation algorithms. In this paper, we propose a mechanism, called Smart-Contract Integration (SCI), which integrates IoT heterogeneous networks with smart contracts in blockchain. SCI records every IoT process generated by humans, machines, and data in the blockchain, with the intent to tackle the challenges of resource management, trust, and security. In addition, with designed smart contracts and applications, we address the issues of extensive involvement, privacy, and incentive frameworks through encryption, signatures, transactions, and data sharing. Finally, to prove the concept of our proposal, we implemented a control and management application that can adapt to the existing IoT infrastructure when integrating IoT heterogeneous networks. The experimental results show that our smart contracts can handle 64 management and transaction requests within 200 milliseconds in heterogeneous networks.

KEYWORDS

Internet of Things, Heterogeneous Network, Smart Contract, Blockchain.


Edge Computing: Data Sharing and Intelligence

Yeghisabet Alaverdyan1, 2, Suren Poghosyan2 and Vahagn Poghosyan2, 3, 1EKENG CJSC, Yerevan, Armenia, 2Institute for Informatics and Automation Problems of NAS RA, Yerevan, Armenia, 3Synopsys Armenia

ABSTRACT

The paper introduces approaches for timely and secure computing that affect data intelligence, concerning methods and tools for real-time information processing. Timely solutions are achieved by utilizing local premises rather than relying on centralized servers or clouds. Computing within the network partly takes place near the physical endpoints, which is where the edge computing paradigm comes in to help. The proposed method of cloud optimization suggests splitting and sharing data between network data centers and local computing power. Provision of distinct paths between the edge and the main cloud for each smart device is achieved using a separate blockchain, which registers and stores the logical links of hierarchical data. Methods for strong authentication ensuring the confidentiality, integrity, availability, and consistency of the shared data are also given.

KEYWORDS

Edge computing, timely solution, blockchain, data intelligence, authentication.


An Overview of Different Deep Learning Techniques Used in Road Accident Detection

Sherimon P.C.1, Vinu Sherimon2, Alaa Ismaeel1, Alex Babu3, Sajina Rose Wilson3, Sarin Abraham3 and Johnsymol Joy3, 1Faculty of Computer Studies, Arab Open University, Muscat, Sultanate of Oman, 2College of Computing and Information Sciences, University of Technology and Applied Sciences, Muscat, Sultanate of Oman, 3PG Department of Computer Applications and Artificial Intelligence, Saintgits College of Applied Sciences, Kerala, India

ABSTRACT

Every year, many lives are cut short because of road traffic crashes. The most common reason is the delayed response from emergency services. Effective traffic accident detection and information communication systems are essential for rescuing injured people. Several such systems have been proposed and are currently being used for better identification of accidents using deep learning techniques. This paper presents an overview of existing research in this area. It also identifies commonalities between different systems, where these systems fall short, and how to overcome those shortcomings. The study is part of an Omani-funded research project investigating a deep learning-based road accident detection system.

KEYWORDS

deep learning, road traffic, road accident detection, convolutional neural networks, machine learning.


Implementation of Wireless Charging for Electronic Gadgets

Divik Joshi, Arjun Chaudhary and Lala Bhaskar, Bachelor of Technology in Electronics and Communication Engineering, Amity University Uttar Pradesh, India

ABSTRACT

The use of coils for wireless charging could relieve people from cumbersome wires. A variety of studies are being conducted to improve the effectiveness of wireless charging; the goal of this research is to increase the power transfer efficiency while also extending the distance between transmitter and receiver. The fundamental idea behind this project is to build a gadget that implements wireless charging in order to replace traditional copper cables and current-carrying wires. The circuit used in this project transforms 220 V, 50 Hz mains power into a 12 V high-frequency (HF) signal. The output is routed to the tuned coil of an air-core transformer acting as the primary; the secondary coil then generates an HF 12 V voltage.
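The tuned air-core coil described above forms an LC resonator, whose operating frequency follows f = 1/(2π√(LC)). A short sketch; the component values are illustrative, not taken from the paper:

```python
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Example (hypothetical values): a 24 uH air-core coil tuned with a 100 nF
# capacitor resonates at roughly 103 kHz -- the kind of high-frequency
# operating point a wireless-charging driver would target.
f_operating = resonant_frequency_hz(24e-6, 100e-9)
```

Matching the driver frequency to this resonance is what maximizes power transfer between the primary and secondary coils.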

KEYWORDS

wireless power transfer, coils, air core transformer, HF voltage


Analyzing Emotional Contagion in Commit Messages of Open-source Software Repositories

Rashmi Dhakad1 and Dr. Luigi Benedicenti2, 1Faculty of Computer Science, University of New Brunswick, Fredericton, Canada, 2Dean, Faculty of Computer Science, University of New Brunswick, Fredericton, Canada

ABSTRACT

For more than a decade, scientists have focused on the emotions of software developers in order to understand emotion's impact on their productivity, creativity, and quality of work. In recent times, there has been a sharp increase in open-source software collaborations and globally distributed software development models. A crucial aspect of these collaborations is the effect of emotional contagion, a phenomenon in which one person's affective state transfers to another. In this research study, we follow previously established research and build on it to examine how emotional contagion happens in large open-source software development. We further establish how emotional contagion occurs at different times and how it affects the overall development process.

KEYWORDS

Emotional Contagion, Software Development Process, Open-Source Repositories, OSS, Sentiment Analysis, Commits


Comparative Study of Sentiment Analysis for Multi-sourced Social Media Platforms

Keshav Kapur and Rajitha Harikrishnan, Department of Information and Communication Technology, Manipal Institute of Technology, Manipal, India

ABSTRACT

A vast amount of data is generated every second due to the rapidly growing technology in the current world. This area of research attempts to determine the feelings or opinions of people from social media posts. The dataset we used was a multi-source dataset drawn from the comment sections of various social networking sites such as Twitter and Reddit. Natural language processing techniques were employed to perform sentiment analysis on the obtained dataset. In this paper, we provide a comparative analysis of lexicon-based, machine learning, and deep learning approaches. The machine learning algorithm used in this work is Naive Bayes, the lexicon-based approach is TextBlob, and the deep learning algorithm is LSTM.
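A minimal version of the Naive Bayes baseline this kind of comparison uses can be written with the standard library alone; the tokenization and Laplace smoothing choices here are generic, not necessarily the paper's.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace smoothing (illustrative sketch)."""

    def fit(self, texts, labels):
        # Per-class word counts and class frequencies from the training set.
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for text, y in zip(texts, labels):
            self.word_counts[y].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        # Pick the class maximizing log P(class) + sum log P(word | class).
        def log_prob(y):
            counts = self.word_counts[y]
            total = sum(counts.values())
            lp = math.log(self.class_counts[y] / sum(self.class_counts.values()))
            for w in text.lower().split():
                lp += math.log((counts[w] + 1) / (total + len(self.vocab)))
            return lp
        return max(self.class_counts, key=log_prob)
```

Lexicon-based (TextBlob-style) and LSTM approaches would plug into the same fit/predict interface, which is what makes the head-to-head comparison straightforward.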

KEYWORDS

Natural Language Processing, Naive Bayes, TextBlob, LSTM, Deep Learning.


A Utilization and Evaluation of an Entity-level Semantic Analysis Approach Towards Enhanced Policy Making

George Manias1, María Angeles Sanguino2, Sergio Salmeron2, Argyro Mavrogiorgou1, Athanasios Kiourtis1, Dimosthenis Kyriazis1, 1Department of Digital Systems, University of Piraeus, Piraeus, Greece, 2ATOS Research and Innovation, Madrid, Spain

ABSTRACT

The tremendous growth and usage of social media in modern societies have led to the production of an enormous real-time volume of social texts and posts, including tweets, produced by users. These collections of social data can be potentially useful, and the extent of meaningful data in them remains of high research and business interest. One of the main elements in several application domains, such as policy making, is the analysis of public opinion. The latter is nowadays realized through sentiment analysis and Natural Language Processing (NLP), which identify and extract subjective information from raw texts. An additional challenge is the exploitation and correlation of the sentiments that can be derived for different entities in the same text, or even the same sentence, in order to analyze the sentiments expressed about specific products, services, and topics while considering, in a holistic way, all the information depicted within a text. To this end, this paper investigates the utilization of an Entity-Level Sentiment Analysis (ELSA) mechanism to enhance the task of sentiment analysis on tweets, with the main objective of improving the policy-making procedures of modern organizations and businesses.

KEYWORDS

Twitter Sentiment Analysis, Entity-Level Sentiment Analysis, Named Entity Recognition, Policy Making.


NeuralFakeDetNet - Detection and Classification of AI-Generated Fake News

Poorva Sawant, Parag Rane, Mumbai, India

ABSTRACT

Unreliable and deceiving information is spreading at great speed across the world through social media. Fake news is a growing problem in modern society, and it has become increasingly difficult to distinguish between real and fake news due to advances in technology. False information about the recent COVID-19 pandemic wreaked havoc: studies conducted during the pandemic suggest that false news may have broadly menaced public health. Investigations in association with the WHO found that nearly 6,000 people worldwide were hospitalized due to COVID-19-related false news, which also resulted in the deaths of at least 800 people, all within the first three months of the pandemic. There were trails of fake news spreading false preventative measures or symptoms across the media and among users. Numerous countries have put strict measures in place to contain the spread of viral fake news and deceptive communications that can endanger human life. Identifying and guarding against propaganda has been an ongoing exercise since before the arrival of the Internet. Detecting and averting the spread of unreliable media content is a difficult problem, especially given the rate at which news can spread online. With the increasing use of social media platforms, a leading cause of such spread is that fake news can be published and propagated online faster and more cheaply than through traditional news media such as newspapers and television. Online fake news, deliberately designed to deceive readers, is most commonly written manually; but with recent progress in natural language generation techniques, models have been built to generate realistic-looking 'fake news'.
This creates a greater need to handle the fake news identification problem differently: not just to classify fake versus real news, but also to distinguish human-generated from machine-generated (neural) fake news. New advances in identifying false information and detecting machine-generated text using AI will help curb the spread of false information at the source, if we can prepare those in a position of influence to fight against it. Governments and news agencies now look to artificial intelligence as a means to separate the good from the bad in the news field, because artificial intelligence makes it easy to understand behaviours through techniques like pattern recognition. This study approaches machine-generated fake news classification as a comparative analysis of human- versus machine-generated fake news, identifying the differences and similarities in their patterns. With the explosion of large language models, fake news can be created easily and with proper grammar and sentence structure.

KEYWORDS

NLP, Fake News, Generative AI.


Formality Style Transfer Using Deep Neural Networks for the Persian Language

Seyedeh Fatemeh Ebrahimi1 and Hossein Sameti2, 1Languages and Linguistics Center, Sharif University of Technology, Tehran, Iran, 2Department of Computer Science, Sharif University of Technology, Tehran, Iran

ABSTRACT

Deep neural networks have made significant progress in formality style transfer in natural language processing tasks for various languages. However, the lack of a parallel dataset in the Persian language has posed significant obstacles in developing similar techniques for this language. Therefore, the goal of our research is to deploy an automatic system that can transfer the style of informal texts to formal texts in the Persian language. We have employed a semi-supervised approach using a consistency regularizer technique, which has helped us achieve significant performance compared to the baseline models. Additionally, we utilized data perturbation techniques to further improve our approach. Our proposed approach has obtained an 85.3% Bleu score, 90.66% classification accuracy, and 3.83 human evaluation score, showing a considerable improvement of about 18.3 to 21% compared to other baseline models.

KEYWORDS

Formality Style Transfer, Informal to Formal Persian Language, Semi-Supervised Learning, Natural Language Processing, Deep Neural Networks.


Multimodal Transformer for Risk Classification: Analyzing the Impact of Different Data Modalities

Niklas Holtz1 and Jorge Marx Gómez2, 1Future Research, Volkswagen, Wolfsburg, Germany, 2Very Large Business Applications, Carl von Ossietzky Universität, Oldenburg, Germany

ABSTRACT

Risk classification plays a critical role in domains such as finance, insurance, and healthcare. However, identifying risks can be a challenging task when dealing with different types of data. In this paper, we present a novel approach using the Multimodal Transformer for risk classification, and we investigate the use of data augmentation for risk data through automated retrieval of news articles. We achieved this through keyword extraction based on the title and descriptions of risks and using various selection metrics. We evaluate our approach using a real-world dataset containing numerical, categorical, and textual data. Our results demonstrate that the use of the Multimodal Transformer for risk classification outperforms other models that only utilize textual data. We show that the inclusion of numerical and categorical data improves the performance of the model, particularly for risks that are difficult to classify based on textual data alone. Moreover, our study shows that by using data augmentation, better performance of the models can be achieved. This methodology provides an opportunity for businesses to better manage risks and arrive at informed decisions.

KEYWORDS

Risk classification, Multimodal Transformer, Data augmentation.


Analyzing Online Media Articles on Diabetes Using Natural Language Processing: A Comparative Study of the Indian Ocean Region and France

Mohammud Shaad Ally Toofanee1,2, Nabeelah Zainab Ally Pooloo2, Sabeena Dowlut2, Karim Tamine1 and Damien Sauveron1, 1XLIM, UMR CNRS 7252, University of Limoges, 123 Avenue Albert Thomas, 87060 Limoges, France, 2Université des Mascareignes, Concorde Avenue, Roches Brunes, Rose Hill, Mauritius

ABSTRACT

Background: Diabetes is a global health concern affecting millions of people worldwide. However, knowledge, attitudes, and practices related to this disease vary widely across regions. This article investigates media-influenced perceptions of diabetes in France and the Indian Ocean countries using natural language processing (NLP) techniques applied to online news articles. Method: Word2Vec was applied for word embedding, LDA for topic identification, and transformer-based classification models (e.g., BERT and its variants) for sentiment analysis. The findings provide insights into the characteristics of digital press articles in different regions, contributing to the field of NLP. Results: A dataset of online media articles was collected and the NLP techniques were successfully applied. Differences were found in word associations and in the topics identified by LDA, and sentiment analysis revealed more negative discussions about diabetes in the Indian Ocean region (48%) compared to France (32%), with neutral articles dominating in France (42%). These findings highlight varying perceptions and discussions about diabetes in the two regions, with implications for public health interventions and communication strategies. Discussion: The findings indicate that perceptions and discussions about diabetes differ between the two regions, suggesting the need for targeted communication for diabetes education and prevention. However, the study is limited by the initial amount of information captured for analysis.
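As a rough, stdlib-only stand-in for the topic and word-association analysis the paper performs with LDA and Word2Vec, a TF-IDF extractor shows how region-specific vocabulary can be surfaced from article collections; the scoring details are generic and not the paper's pipeline.

```python
import math
from collections import Counter

def tfidf_keywords(docs, k=3):
    """Top-k TF-IDF terms per document: terms frequent in one document but
    rare across the collection surface as that document's characteristic
    vocabulary."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(w for doc in tokenized for w in set(doc))  # document freq.
    n = len(tokenized)
    out = []
    for doc in tokenized:
        tf = Counter(doc)
        score = {w: tf[w] / len(doc) * math.log(n / df[w]) for w in tf}
        out.append([w for w, _ in sorted(score.items(), key=lambda x: -x[1])[:k]])
    return out
```

Applied to two regional corpora, the extracted terms hint at which diabetes-related themes dominate each region, the question LDA answers with full probabilistic topics.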

KEYWORDS

Artificial Intelligence, Natural Language Processing, Mass Media, Diabetes, LDA, Transformers, BERT, Sentiment Analysis, Word Associations.


A Post-quantum Privacy-enhancing Blockchain-based Transaction Framework With Access Control

Lingyun Li1,2,3*, Xianhui Lu1,3 and Kunpeng Wang1,3, 1State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, China, 2School of Computer Science, Liaocheng University, China, 3School of Cyber Security, University of Chinese Academy of Sciences, China

ABSTRACT

Protecting transactions from address-based tracking is one of the core issues in blockchain privacy preservation. In this paper, we propose a transaction framework through which a trader in a transaction organization transacts on the public blockchain with enhanced privacy, while the manager gains access to the trader's transactions through cryptography-based access control. In the proposed framework, a hash-based one-time address protects transactions from unauthorized tracking; furthermore, a hash-based one-time signature is used twice, to verify and to track transactions safely in the semi-honest model; and through access control, authorized managers can obtain transaction information within their authority. Compared with the standard Bitcoin transaction system, the proposed system achieves enhanced privacy and post-quantum security.
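The hash-based one-time address idea can be sketched in a few lines: hashing the trader's public key together with a fresh per-transaction nonce yields a new, unlinkable address each time. This is a simplified illustration, not the paper's exact construction.

```python
import hashlib
import os

def one_time_address(public_key: bytes, nonce: bytes) -> str:
    """Derive a fresh transaction address by hashing the trader's public key
    with a per-transaction nonce; distinct nonces give unlinkable addresses,
    so an observer cannot tie two transactions to the same trader."""
    return hashlib.sha256(public_key + nonce).hexdigest()

pk = b"trader-public-key"          # hypothetical key material
addr1 = one_time_address(pk, os.urandom(16))
addr2 = one_time_address(pk, os.urandom(16))
```

Because SHA-256 is post-quantum-resistant in this preimage-based usage, the unlinkability does not rest on number-theoretic assumptions broken by quantum algorithms, which is the framework's motivation for a hash-based design.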

KEYWORDS

Post-quantum, Privacy-enhancing, Blockchain, Hash-based Signature, Security.


Design of Home Automation System Using FPGA Controller

Rama Rao Chekuri1, Anil Kumar Bandani2 and Nagababu Chekuri3, 1,2Assistant Professor, Dept. of ECE, B V Raju Institute of Technology, Narsapur, Telangana, 3Asst. Professor, Dept. of ECE, MLRITM, Hyderabad, Telangana

ABSTRACT

A home, often referred to as a "sweet home", is indeed sweet if we introduce a home automation system. With this thought in mind, a home automation system is designed to ensure the security and comfort of a home. The security system includes detection of fire and of intruders through doors and windows, along with garage protection. The comfort system is designed to control temperature and luminosity only. In this paper we present an efficient design of a home automation system in Verilog HDL, a solution in which the user controls devices through a central Field Programmable Gate Array (FPGA) controller to which the devices and sensors are interfaced. This project is an application of digital system design to achieve our goal. We simulated the design in Verilog HDL using Xilinx and ModelSim. The results agree with our expected output, which is readily visible in the waveforms.

KEYWORDS

Home Automation System, Simulation, Synthesis and Verilog HDL.


Enhance Calling Definition Security for Android Custom Permission

Lanlan Pan, Ruonan Qiu, Zhenming Chen, Gen Li, Dian Wen, and Minghui Yang, Guangdong OPPO Mobile Telecommunications Corp. Ltd., Guangdong 518000, China

ABSTRACT

An Android custom permission can be defined by any app, and the system fully trusts the app that first defined the custom permission. Other apps call the custom permission without any further permission-source validation, so malicious apps can potentially mount permission squatting attacks. In this paper, we propose a scheme that provides permission-source validation for resource-provider apps, which enhances the calling-context security of Android custom permissions, resists permission squatting attacks, and is suitable for an app's self-protection.

KEYWORDS

Android, App, Custom, Permission, Squatting, Security.


Research on Key Protection Method of AES-I Based on Extended Mean Square Variance SDSoC Based on Hash Function Cryptographic Information Security Level Protection

Wang Xu, Department of Network Engineering, School of Computer Science, Neusoft Institute, Foshan 528225, Guangdong, China

ABSTRACT

Aiming at the problems of poor local search ability and premature convergence of the fuzzy SDSoC key-extension genetic algorithm (AES-I), a new fuzzy SDSoC key-extension genetic algorithm based on Bayesian function adaptation search (TS) is proposed by incorporating the idea of Bayesian function adaptation search into the fuzzy SDSoC key-extension genetic algorithm. The new algorithm combines the advantages of AES-I and TS. In the early stage of optimization, the fuzzy SDSoC key-extension genetic algorithm is used to obtain a good initial value, and the individual extreme value pbest is put into the Bayesian function adaptation table. In the late stage of optimization, when the searching ability of the fuzzy SDSoC key-extension genetic algorithm weakens, the short-term memory function of the Bayesian function adaptation table is utilized to make the search jump out of the local optimal solution, while bad solutions are allowed to be accepted during the search. The improved algorithm is applied to function optimization, and simulation results show that the calculation accuracy and stability of the algorithm are improved, verifying the effectiveness of the improvement.

KEYWORDS

Fuzzy hash function password protection, recursive genetic algorithm, Bayesian function adaptation search, function optimization.


A Multi-factor Certificateless Authenticated Key Agreement Protocol for Multi-server Environment on ECC

Jin Tang and Xiaofeng Wang, Cyberspace Security Department, National University of Defense Technology, Changsha, China

ABSTRACT

Key negotiation can establish a shared key between two or more parties in a public network environment, ensuring communication confidentiality and integrity. Certificateless public key cryptography (CL-PKC) aims to achieve succinct public key management without certificates, while avoiding the key escrow property of identity-based cryptography. As an important part of CL-PKC, certificateless authenticated key agreement (CLAKA) has also received widespread attention. Most CLAKA protocols are constructed from bilinear mappings on elliptic curves, which require costly operations. To improve performance, some pairing-free CLAKA protocols have been proposed. In this paper, we propose a multi-factor authentication CLAKA protocol in which local authentication factors jointly unlock the key agreement. The protocol transmits messages that include public keys and temporary information, without the need for bilinear pairing operations. Finally, based on the CDH and DCDH hardness assumptions, provable security under the mBR model is achieved.
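The flavor of factor-bound key agreement can be illustrated with a toy pairing-free Diffie-Hellman exchange in which local authentication factors are folded into the final key derivation. Everything here is illustrative: the paper's protocol works over ECC with certificateless keys, whereas this sketch uses a plain multiplicative group and made-up factor digests.

```python
import hashlib
import secrets

# Toy group: a Mersenne prime modulus stands in for the paper's ECC group.
# These parameters are for illustration only, not for real cryptography.
P = 2**127 - 1
G = 3

def keypair():
    """Generate an ephemeral private/public pair x, G^x mod P."""
    x = secrets.randbelow(P - 3) + 2
    return x, pow(G, x, P)

def shared_key(own_secret: int, peer_public: int, *factors: bytes) -> bytes:
    """Derive the session key from the DH shared secret, folding in local
    authentication factors (e.g. a PIN hash and a biometric digest) so that
    every factor must be present to reproduce the same key."""
    s = pow(peer_public, own_secret, P)           # shared G^(ab) mod P
    h = hashlib.sha256(s.to_bytes(16, "big"))
    for f in factors:
        h.update(hashlib.sha256(f).digest())
    return h.digest()

a, A = keypair()   # client
b, B = keypair()   # server
k_client = shared_key(a, B, b"pin-hash", b"biometric-digest")
k_server = shared_key(b, A, b"pin-hash", b"biometric-digest")
```

Both sides derive the same key only when every local factor matches, which is the "joint unlocking" property the abstract describes.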

KEYWORDS

Certificateless Public Cryptography, Multi-factor, CLAKA, Provable Security, Non-bilinear, Elliptic Curve Cryptography(ECC).


Lightweight American Sign Language Recognition Using a Deep Learning Approach

Yohanes Satria Nugroho, Chuan-Kai Yang and Yuan-Cheng La, Department of Information Management, National Taiwan University of Science and Technology, Taipei, Taiwan

ABSTRACT

Sign Language Recognition is a variant of Action Recognition that involves more fine-grained features, such as hand shapes and movements. Researchers have been trying to apply computer-based methods to this task for years. However, the proposed methods are constrained by hardware limitations, which keeps them from being applied in real-life situations. In this research, we explore the possibility of creating a lightweight Sign Language Recognition model that can be applied in real-life situations. We explore two approaches. First, we extract keypoints and use a simple LSTM model for recognition, achieving 75% Top-1 validation accuracy. Second, we use the lightweight MoViNet-A0 model and achieve 71% Top-1 test accuracy. Although both models score lower than the state-of-the-art I3D, their complexity in terms of FLOPs is far lower.
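The keypoint-plus-LSTM approach boils down to running an LSTM recurrence over one keypoint vector per video frame. As a minimal sketch, here is a single LSTM cell written from scratch and stepped over a toy sequence; the dimensions, random weights, and fake "frames" are all assumptions, not the paper's architecture:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: one step of the recurrence used to process a
    sequence of per-frame keypoint vectors."""
    def __init__(self, in_dim, hid_dim):
        random.seed(0)
        self.hid_dim = hid_dim
        # One weight matrix per gate: input (i), forget (f), output (o),
        # and candidate (c), each mapping [x; h] -> hid_dim values.
        self.w = {g: [[random.gauss(0, 0.1) for _ in range(in_dim + hid_dim)]
                      for _ in range(hid_dim)] for g in "ifoc"}

    def step(self, x, h, c):
        z = x + h  # concatenated frame input and previous hidden state
        gate = lambda g, act: [act(sum(w * v for w, v in zip(row, z)))
                               for row in self.w[g]]
        i, f, o = gate("i", sigmoid), gate("f", sigmoid), gate("o", sigmoid)
        cand = gate("c", math.tanh)
        c_new = [fk * ck + ik * gk for fk, ck, ik, gk in zip(f, c, i, cand)]
        h_new = [ok * math.tanh(ck) for ok, ck in zip(o, c_new)]
        return h_new, c_new

# Run a toy sequence of 4 "frames", each a 6-dim keypoint vector.
cell = LSTMCell(in_dim=6, hid_dim=8)
h, c = [0.0] * 8, [0.0] * 8
for _ in range(4):
    frame = [random.uniform(-1, 1) for _ in range(6)]
    h, c = cell.step(frame, h, c)
# h now summarizes the whole sequence; a classifier head would map it
# to sign labels.
```

Because each step is just a handful of small matrix-vector products, this is also why the keypoint route is so much cheaper in FLOPs than 3D-CNN video models like I3D.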

KEYWORDS

Sign Language Recognition, Lightweight Model, Keypoints Estimation


Crop Recommendation Based on Machine Learning Algorithms

Nilam and Babita Choudhary, Department of Computer Engineering, SKIT Jaipur, Rajasthan, India

ABSTRACT

Agriculture is a critical sector in India that plays a major role in the country's economy and sustainability. India is recognized as a significant producer of various agricultural products, and the cultivation of crops heavily relies on the quality of soil. Traditionally, experienced farmers would decide which crops to cultivate, but they may not always choose the most appropriate crop for the given soil characteristics and climatic conditions. To address this issue, a recommendation system has been proposed which utilizes ML classification algorithms to suggest the most suitable crop for a particular type of soil. The system employs various ML classification algorithms, such as Gaussian Naive Bayes, KNN, Random Forest, Decision Tree, LDA and others, to make crop recommendations, and the algorithms are compared in terms of efficiency and execution time.
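One of the listed classifiers, k-nearest neighbours, is simple enough to sketch end to end: a soil sample is labelled by majority vote among its closest training samples. The feature set (N, P, K, pH) and every value below are made-up illustrations, not the paper's dataset:

```python
import math

# Toy soil samples: (N, P, K, pH) -> crop label. Values are illustrative,
# not real agronomic data.
train = [
    ((90, 42, 43, 6.5), "rice"),
    ((85, 58, 41, 7.0), "maize"),
    ((20, 67, 20, 5.7), "beans"),
    ((25, 70, 22, 5.9), "beans"),
    ((88, 45, 40, 6.4), "rice"),
    ((80, 60, 44, 7.1), "maize"),
]

def knn_predict(sample, k=3):
    """Classify a soil sample by majority vote among its k nearest
    training samples (Euclidean distance)."""
    nearest = sorted(train, key=lambda t: math.dist(sample, t[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

print(knn_predict((23, 68, 21, 5.8)))  # → beans
```

A real comparison like the paper's would additionally normalize features and time each algorithm on a held-out split.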

KEYWORDS

Crop Recommendation, Machine Learning, KNN, Random Forest, Gaussian Naive Bayes, Decision Tree


Integrating Blockchain Technology With IoT Devices for Securing Mobile Applications

Zakariae Dlimi, Said Ben Alla and Abdellah Ezzati, Hassan First University of Settat, Faculté Sciences et Technique, LAVETE, Settat, 26000, Morocco

ABSTRACT

The rapid proliferation of mobile applications and Internet of Things (IoT) devices has led to unprecedented growth in data generation, consumption, and management. However, this growth has been accompanied by a multitude of security and privacy concerns, especially in the context of mobile applications. To address these challenges, we propose a novel approach for integrating blockchain technology with IoT devices for securing mobile applications. Our proposed framework leverages the decentralized and tamper-proof nature of blockchain to establish a secure and transparent data management infrastructure for IoT-based mobile applications. In this paper, we present the architecture, design principles, and key components of our proposed framework, which includes a consensus mechanism tailored for IoT devices, smart contracts for automating processes, and a secure communication layer for data exchange. Furthermore, we discuss potential applications of our framework in smart cities and banking. Our findings indicate that the integration of blockchain and IoT can significantly enhance the security and reliability of mobile applications, providing a promising solution to the growing security concerns in the era of ubiquitous computing.
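The tamper-evidence property the abstract relies on comes from hash-linking blocks: each block commits to the previous block's hash, so changing any record invalidates everything after it. The sketch below shows only that core mechanism over hypothetical IoT sensor readings; it omits the paper's consensus mechanism, smart contracts, and communication layer:

```python
import hashlib
import json
import time

def block_hash(block):
    payload = {k: block[k] for k in ("time", "data", "prev")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    """A block binds its payload to the previous block's hash, so altering
    any earlier block invalidates every hash after it."""
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

def valid(chain):
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != prev["hash"] or cur["hash"] != block_hash(cur):
            return False
    return True

# A chain of IoT sensor readings from a hypothetical mobile-app backend.
chain = [make_block("genesis", "0" * 64)]
for reading in ({"sensor": "temp", "value": 21.5},
                {"sensor": "temp", "value": 21.7}):
    chain.append(make_block(reading, chain[-1]["hash"]))

assert valid(chain)
chain[1]["data"] = {"sensor": "temp", "value": 99.9}  # tampering...
assert not valid(chain)                               # ...is detected
```

In a decentralized deployment, every node can re-run `valid` independently, which is what removes the need to trust any single data store.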

KEYWORDS

Internet of things, blockchain, mobile application, security.


Unsupervised Entity Alignment Based on Multimodal Data Fusion

Mengmeng Jia and Yan Liu, Key Laboratory of Cyberspace Situation Awareness of Henan, Zhengzhou, China

ABSTRACT

Entity alignment aims to find real-world information referenced under different representations, which plays an important role in knowledge graph fusion. However, existing unsupervised entity alignment methods do not make full use of multi-modal data, and the model cannot automatically explore how to match the various types of data during training. To solve these problems, this paper proposes an unsupervised entity alignment model based on multi-modal data fusion. In the case of missing attribute data, the model introduces image visual information to mine pseudo data for training, and uses a low-rank multi-modal fusion model to fuse image visual information, attribute information and entity description information. Finally, the structure embedding vector and the multi-modal fusion vector are combined for iterative training to complete the unsupervised entity alignment task. Experiments on DBP15K show that the proposed model addresses the entity alignment problem from the two aspects of reducing labeled data and multi-modal fusion, and effectively enhances the accuracy of unsupervised entity alignment. Compared with mainstream methods, the proposed model improves the Hit@1 value by 2.5%.
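Once each entity has a fused embedding, alignment reduces to matching entities across the two graphs by embedding similarity. A minimal sketch of that final matching step, using greedy one-to-one cosine matching over toy vectors (the fusion model itself, and all names and values here, are assumptions, not the paper's method):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def align(src, tgt):
    """Greedy one-to-one alignment: each source entity is matched to its
    most similar unmatched target entity by embedding cosine similarity."""
    pairs, used = {}, set()
    for s_name, s_vec in src.items():
        score, t_name = max((cosine(s_vec, t_vec), t_name)
                            for t_name, t_vec in tgt.items()
                            if t_name not in used)
        pairs[s_name] = t_name
        used.add(t_name)
    return pairs

# Toy fused embeddings for entities from two knowledge graphs.
kg1 = {"Paris": (0.9, 0.1, 0.0), "Berlin": (0.1, 0.9, 0.1)}
kg2 = {"Berlin_de": (0.0, 1.0, 0.1), "Paris_fr": (1.0, 0.0, 0.1)}
matches = align(kg1, kg2)
print(matches)  # → {'Paris': 'Paris_fr', 'Berlin': 'Berlin_de'}
```

Metrics such as Hit@1 then simply count how often the top-ranked candidate is the true counterpart.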

KEYWORDS

Entity Alignment; Unsupervised Learning; Multi-modal Fusion; Joint Training.


MSMix: An Interpolation-based Text Data Augmentation Method (Manifold Swap Mixup)

Mao Ye1,2, Haitao Wang1 and Zheqian Chen1, 1AI Laboratory, Yiwise, Hangzhou, China, 2College of Computer Science and Technology, Zhejiang University, Hangzhou, China

ABSTRACT

To address the poor performance of deep neural network models caused by insufficient data, a simple yet effective interpolation-based data augmentation method is proposed: MSMix (Manifold Swap Mixup). The method feeds two different samples to the same deep neural network model, randomly selects a specific layer, and partially replaces the hidden features of one sample at that layer with the corresponding features of the other. The mixed hidden features are then fed through the rest of the network. Two different selection strategies are also proposed to obtain richer hidden representations. Experiments on three Chinese intent recognition datasets show that MSMix achieves better results than other methods in both full-sample and small-sample configurations.
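The swap operation at the heart of the method is easy to show on plain feature vectors: pick a fraction of positions in one sample's hidden representation and overwrite them with the other sample's values at the same positions. The ratio and the uniform choice of positions below are assumptions for illustration, not necessarily the paper's selection strategies:

```python
import random

def manifold_swap(h1, h2, ratio=0.5, rng=random):
    """Swap-style mixing: replace a random subset of one sample's hidden
    features with the other sample's features at the same positions."""
    assert len(h1) == len(h2)
    idx = rng.sample(range(len(h1)), int(len(h1) * ratio))
    mixed = list(h1)
    for i in idx:
        mixed[i] = h2[i]
    return mixed

random.seed(1)
a = [0.0] * 8          # hidden features of sample A at the chosen layer
b = [1.0] * 8          # hidden features of sample B at the same layer
m = manifold_swap(a, b)
print(sum(m))  # → 4.0  (exactly half the positions now come from B)
```

In training, the mixed vector `m` would replace A's activations at the selected layer and continue through the remaining layers, with the loss interpolated accordingly.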

KEYWORDS

Data Augmentation, Mixup, Intent Classification.


CMLM-CSE: Conditional MLM-based Contrastive Learning for Sentence Embeddings

ZHANG Wei1,2 and CHEN Xu1, 1Hangzhou Yizhi Intelligent Technology Co., Ltd., Hangzhou, Zhejiang, China, 2School of Computer Science, Zhejiang University, Hangzhou, Zhejiang, China

ABSTRACT

Traditional contrastive learning for sentence embeddings directly uses the encoder to extract sentence features, which are then passed to the contrastive loss function for learning. However, this approach pays too much attention to the sentence as a whole and ignores the influence of individual words on sentence semantics. To this end, we propose CMLM-CSE, an unsupervised contrastive learning framework based on conditional MLM. On top of traditional contrastive learning, an auxiliary network is added that uses the sentence embedding to perform MLM tasks, forcing the sentence embedding to learn more information about the masked words. With BERT-base as the pretrained language model, we exceed SimCSE by 0.55 percentage points on average on textual similarity tasks, and with RoBERTa-base we exceed SimCSE by 0.3 percentage points on average.
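The contrastive objective that both SimCSE and this framework build on pulls an anchor toward its positive (an augmented view of the same sentence) and away from in-batch negatives. A bare InfoNCE-style sketch over toy 2-d "sentence embeddings" (the vectors and temperature are assumptions; the paper's conditional-MLM auxiliary loss is not shown):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temp=0.05):
    """Contrastive (InfoNCE) loss: low when the anchor is close to the
    positive and far from the negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / temp) for s in sims]
    return -math.log(exps[0] / sum(exps))

anchor = (1.0, 0.0)
close  = (0.9, 0.1)                    # view of the same sentence
far    = [(0.0, 1.0), (-1.0, 0.0)]     # other sentences in the batch
loss_good = info_nce(anchor, close, far)          # small
loss_bad  = info_nce(anchor, far[0], [close])     # large
```

CMLM-CSE adds an auxiliary MLM head on top of this loss so the embedding must also predict masked words, which is where the extra word-level information comes from.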

KEYWORDS

Contrastive Learning, Conditional MLM, Sentence Embedding, Auxiliary Network, SimCSE.


FakeSwarm: Improving Fake News Detection With Swarming Characteristics

Jun Wu1 and Xuesong Ye2, 1Georgia Institute of Technology, USA, 2Trine University, USA

ABSTRACT

The proliferation of fake news poses a serious threat to society, as it can misinform and manipulate the public, erode trust in institutions, and undermine democratic processes. To address this issue, we present FakeSwarm, a fake news identification system that leverages the swarming characteristics of fake news. We propose a novel concept of fake news swarming characteristics and design three types of swarm features, including principal component analysis, metric representation, and position encoding, to extract the swarm behavior. We evaluate our system on a public dataset and demonstrate the effectiveness of incorporating swarm features in fake news identification, achieving an f1-score and accuracy over 97% by combining all three types of swarm features. Furthermore, we design an online learning pipeline based on the hypothesis of the temporal distribution pattern of fake news emergence, which is validated on a topic with early emerging fake news and a shortage of text samples, showing that swarm features can significantly improve recall rates in such cases. Our work provides a new perspective and approach to fake news detection and highlights the importance of considering swarming characteristics in detecting fake news.
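The abstract names position encoding as one of the three swarm features but does not specify its form; one plausible instance is the standard sinusoidal encoding, which maps a scalar position (here, a toy arrival index in the news stream) to a fixed-length vector. The dimension and this particular encoding choice are assumptions for illustration:

```python
import math

def position_encoding(pos, dim=8):
    """Sinusoidal position encoding (as used in Transformers): maps a
    scalar position to a dim-length vector of sines and cosines at
    geometrically spaced frequencies."""
    enc = []
    for i in range(0, dim, 2):
        freq = 1.0 / (10000 ** (i / dim))
        enc.append(math.sin(pos * freq))
        enc.append(math.cos(pos * freq))
    return enc

# Encode the (toy) arrival order of news items in a stream; these vectors
# could be concatenated with text-embedding features for a classifier.
codes = [position_encoding(t) for t in range(5)]
```

Nearby arrival positions get similar vectors, which lets a downstream model pick up the temporal clustering ("swarming") of related items.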

KEYWORDS

Fake News Detection, Text Embedding, Swarming Characteristics, Metric Learning, Clustering, Dimensionality Reduction.

                       Contact Us

            aiapp_secretary@yahoo.com 

Copyright © AIAPP 2023