3rd International Conference on Artificial Intelligence and Machine Learning (CAIML 2022)

July 23~24, 2022, Toronto, Canada

Accepted Papers

Multi-View Human Tracking and 3D Localization in Retail

Akash Jadhav, Noque.store (Startup), India

ABSTRACT

In recent years, retail stores have gained traction in bringing the online shopping experience to offline stores via autonomous checkout. Autonomous checkout is a computer vision-based technology that needs to understand three human elements within the store: who, where, and doing what. This paper addresses two of the three elements: who and where. It presents an approach to track and localize humans in a multi-view camera system. Traditional methods have limitations as they: (1) fail to overcome substantial occlusion of humans; (2) suffer a lengthy processing time; (3) require a planar homography constraint between camera frames; (4) suffer swapping of labels assigned to a human. The proposed method handles all the aforementioned limitations. The key idea is to use a hierarchical association model for tracking, which uses each human's clothing features, human pose orientation, and relative depth of joints, and runs at over 23 fps.
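As a rough illustration of the kind of cross-view association such a system performs (not the authors' code; the feature dimensions, weights, and cost threshold are assumptions), a minimal sketch combining appearance and geometric costs and solving the assignment with the Hungarian algorithm:

```python
# Illustrative sketch: associate detections across two camera views with a combined
# appearance + geometry cost matrix and an optimal one-to-one assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate(app_a, app_b, pos_a, pos_b, w_app=0.7, w_geo=0.3, max_cost=0.8):
    """app_*: (N, D) appearance embeddings; pos_*: (N, 3) triangulated 3D positions."""
    geo = cdist(pos_a, pos_b)
    cost = w_app * cdist(app_a, app_b, metric="cosine") + w_geo * geo / (geo.max() + 1e-9)
    rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]

# Example: 3 people seen in camera A, 3 in camera B (random placeholders)
matches = associate(np.random.rand(3, 128), np.random.rand(3, 128),
                    np.random.rand(3, 3), np.random.rand(3, 3))
print(matches)
```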

KEYWORDS

Multi-view, Data Association, Tracking, Localization.


Point Cloud Denoising using Probability based Shape from Shading Shape Recovery

Zhi Yang1, Youhua Yu2 and Chunfei Lu3, 1School of Telecommunication, Jinggangshan University, Ji’an, P.R. China, 2Matrix Technology (Beijing Ltd.), Daxing, Beijing, P.R. China, 3National Museum of China, Dongcheng, Beijing, P.R. China

ABSTRACT

We present a method for denoising 3D point clouds, a problem that is crucial for tasks such as synthesis, completion, augmentation, and upsampling. The method extends a lattice-based probabilistic model that inherits its scoring system from a parallel neural network. To achieve higher denoising performance, a recent technique called luminance integration in shape reconstruction is also incorporated. By adjusting the hypothesis that the arctangent angle must follow the exact reflectance of the arccosine, the new model combines a set of posterior constraints so that, at each recovery stage, undetected points are removed from the baseline surface points. Spectrophotometric analysis (SA) is also applied to collect real data for predicting the parameters of the posterior constraints. Experimental results show that the method is valid both numerically and semantically.

KEYWORDS

Denoising, Point cloud, 3d filtering, Shape recovery, Luminance integration.


Lie Detection Technique using Video from the Ratio of Change in the Appearance

S. M. Emdad Hossain, Sallam Osman Fageeri, Arockiasamy Soosaimanickam, and Aiman Moyaid Said, Department of Information Systems, University of Nizwa, Oman

ABSTRACT

Lying is a nuisance to all, and all liars know it is a nuisance but still keep on lying. People are often unsure how to escape from, or how to detect, a liar when they lie. In this research we aim to establish a dynamic platform to identify liars by video analysis, specifically by calculating the ratio of changes in their appearance when they lie. The platform will be developed using a machine learning algorithm along with a dynamic classifier to classify liars. For the experimental analysis, the dataset will be processed in two dimensions (people lying and people telling the truth). Both facial-appearance parameters will be stored for future identification. Similarly, standard parameters will be built for truthful speakers and liars. We hope these standard parameters will be able to diagnose a liar without pre-captured data.

KEYWORDS

Nuisance, Liar, Detection, Escape, Parameter, LDA, kNN, MLP.


An Online Graphical User Interface Application to Remove Barriers in the Process of Learning Neural Networks and Deep Learning Concepts Using Tensorflow

Justin Li1 and Yu Sun2, 1Troy High School, 2200 East Dorothy Ln, Fullerton, CA 92831, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620

ABSTRACT

Over the years, neural networks have become increasingly important and complex due to the rising popularity of artificial intelligence technologies. They allow for complex decision making and prediction, and are an essential part of the modern AI industry. However, due to the complex nature of neural networks, a lot of complex math and logic has to be well understood, along with proficiency in programming, in order for one to make anything practical with this technology. Unfortunately, many do not have the required high-level math skills or the proficiency in coding, which blocks a lot of people from reaching and experimenting with this technology. My method attempts to eliminate the complexity that developing neural networks brings and to give a clearer picture of what the user may be creating and working with. With the help of modern web technologies such as JavaScript and tensorflow.js, I was able to create a GUI program that can create, train, and test a neural network right in a browser, without writing any code, with comparable results [13].

KEYWORDS

Neural network, deep learning, CNN.


Cryptographic Algorithms Identification based on Deep Learning

Ruiqi Xia1, Manman Li2 and Shaozhen Chen2, 1Department of Cyberspace Security, Information Engineering University, Zhengzhou, China, 2State Key Laboratory of Mathematical Engineering and Advanced Computing, Kexue Avenue, Zhengzhou, China

ABSTRACT

The identification of cryptographic algorithms is the premise of cryptanalysis, which can help recover keys effectively. This paper focuses on the construction of cryptographic identification classifiers based on a residual neural network and feature engineering. We select six algorithms, including block ciphers and public-key ciphers, for the experiments. The results show that the accuracy is generally over 90% for each algorithm. Our work successfully combines deep learning with cryptanalysis, which is also very meaningful for the development of modern cryptography.
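A minimal sketch of the general pipeline shape described here, assuming byte-frequency histograms as the engineered features and substituting a random forest for the paper's residual network:

```python
# Illustrative sketch: byte-frequency feature engineering on ciphertext samples,
# followed by a simple classifier (a stand-in for the residual network).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def byte_histogram(ciphertext: bytes) -> np.ndarray:
    counts = np.bincount(np.frombuffer(ciphertext, dtype=np.uint8), minlength=256)
    return counts / max(len(ciphertext), 1)           # normalised 256-bin histogram

# Hypothetical data: (ciphertext, algorithm-label) pairs generated offline
samples = [(np.random.bytes(1024), label) for label in range(6) for _ in range(100)]
X = np.stack([byte_histogram(c) for c, _ in samples])
y = np.array([label for _, label in samples])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```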

KEYWORDS

Deep learning, Cryptography, Feature engineering, Residual neural network, Cipher identification.


A Summary of Covid-19 Datasets

Shaina Raza1, Syed Raza Bashir2, Vidhi Thakkar3, Usman Naseem4, 1University of Toronto, Toronto, Canada, 2Department of Computer Science, Ryerson University, Toronto, Canada, 3Institute of Aging, Faculty of Nursing, University of Victoria, Victoria, Canada, 4School of Computer Science, University of Sydney, Australia

ABSTRACT

This research presents a review of the main datasets developed for COVID-19 research. We hope this collection will continue to bring together members of the computing community, biomedical experts, and policymakers in the pursuit of effective COVID-19 treatments and management policies. Many organizations around the world, such as the World Health Organization (WHO), Johns Hopkins, the National Institutes of Health (NIH), and the COVID-19 open science table, have made numerous datasets available to the public. However, these datasets originate from a variety of different sources and initiatives. The purpose of this research is to summarize the open COVID-19 datasets to make them more accessible to the research community for health systems design and analysis. We also discuss the numerous resources introduced to support text mining applications throughout the COVID-19 literature; more precisely, we discuss the corpora, modelling resources, systems, and shared tasks introduced for COVID-19.

KEYWORDS

COVID-19, Text Mining, Public Health, Risk, COVID-19 Data, Data Science.


Artificial Intelligence in Language Teaching: Problem and its Solution

Esti Ismawati1 and Wahid Yunianto2, 1University of Widya Dharma, Klaten, Indonesia, 2SEAMEO QITEP in Mathematics, Yogyakarta, Indonesia

ABSTRACT

Artificial intelligence remains a topic of pro-and-contra discussion in various parts of the world. On the one hand, artificial intelligence technology has made work easier, more effective, and faster; on the other hand, its presence is feared to threaten humanity and even replace human work in the future. This paper aims at developing artificial intelligence technology in language teaching, especially in communicative language teaching. The problems discussed are ‘How can artificial intelligence technology be applied in communicative language teaching?’ and ‘Can artificial intelligence technology replace the teacher's role?’ From the results of the analysis, it can be concluded that artificial intelligence technology can be incorporated into the design of communicative language teaching, including objectives, syllabus, learning activities, and the roles of learners, teachers, and materials, all arranged in a simple algorithm. Artificial intelligence technology will not be able to replace teachers' and students' roles in language classes because both are absolute prerequisites for the teaching-learning process. It is predicted that artificial intelligence technology will make language learning easier, more effective, more interesting, and fun because teaching and learning will be more varied. To realize this, collaboration between a team of artificial intelligence experts and a team of language teaching experts is needed to develop an algorithm design for communicative language teaching and learning based on artificial intelligence.

KEYWORDS

Artificial intelligence, CLL, Communicative theory, Language learning.


Deep Learning-Based Plant Diseases Recognition

Ujjawal Poudel, Saroj Raj Sharma, Samir Bhattarai, Sheetal Baral and Manoj Kumar Guragain, Department of Computer Engineering, Tribhuwan University, Dharan, Nepal

ABSTRACT

Food is one of the most basic needs for human survival and is produced mainly via agriculture. Apart from food, agriculture is also the main source of income and employment for the majority of people. However, food security remains threatened by a number of factors including climate change [1], the decline in pollinators [2], plant diseases [3], etc. Plant diseases are not only a threat to food security on the global scale but can also have disastrous consequences for smallholder farmers whose livelihoods depend on healthy crops. In the context of Nepal, the majority of people involved in food production are smallholder farmers, and the effect of plant diseases has been slowly forcing them away from agriculture towards alternative income sources and is leading to food scarcity. This is also driving many people into poverty and increasing the risk of malnutrition among the children of these poor families [4]. Convolutional neural networks have been implemented for plant disease recognition. This project aims to assist users, mostly farmers, by helping them detect various plant diseases from the image of an affected leaf and by providing the necessary steps to get rid of those diseases. The work involves research, data collection, training, model development, validation, testing, and development of a web app using the Django web framework of the Python programming language.
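A minimal sketch of the kind of leaf-image classifier the project describes, assuming a transfer-learning setup in Keras; the class count, image size, and data-loading step are placeholders, not the authors' configuration:

```python
# Illustrative sketch: a transfer-learning CNN for leaf disease classification.
import tensorflow as tf

NUM_CLASSES = 38            # e.g. a PlantVillage-style label set; adjust to the dataset
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False      # freeze pretrained features, train only the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would come from tf.keras.utils.image_dataset_from_directory(...)
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```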

KEYWORDS

Plant Diseases Recognition, Convolution Neural Network, Artificial Intelligence, Machine Learning.


Energy Consumption Forecasting in Industrial Sector with Machine Learning Algorithms

Majid Emami Javanmard1, 2 and S.F. Ghaderi1, 2, 1School of Industrial Engineering, College of Engineering, University of Tehran, Tehran, Iran, 2Research Institute for Energy Management and Planning, University of Tehran, Tehran, Iran

ABSTRACT

With respect to the ever-increasing consumption of energy in the industrial sector, it is important to manage and plan the energy supply in this sector. An essential tool for managing energy consumption in the industrial sector is predicting energy consumption. The industrial sector is a chief consumer of energy in Iran. For this reason, this study collected data on energy generation and consumption between 1990 and 2018 and implemented four algorithms, SARIMA, ARIMA, AR, and ANN, to predict energy consumption in the industrial sector for the next 10 years. After applying the algorithms to the collected data, their prediction accuracy was examined using four accuracy indices: MAPE, MAE, RMSE, and NRMSE. According to the MAPE index, SARIMA was more accurate than the other algorithms and predicted that energy consumption in the industrial sector would increase by more than 35% by 2028 compared with 2018.
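A minimal sketch of the SARIMA-plus-MAPE step described above, using statsmodels on a placeholder annual series; the model orders and data are assumptions, not the study's values:

```python
# Illustrative sketch: fit a SARIMA model on yearly consumption data, score with MAPE,
# then produce a 10-year-ahead forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# placeholder annual consumption series, 1990-2018
years = pd.date_range("1990", "2018", freq="YS")
consumption = pd.Series(np.linspace(100, 320, len(years)) + np.random.randn(len(years)) * 5,
                        index=years)

train, test = consumption[:-5], consumption[-5:]
model = SARIMAX(train, order=(1, 1, 1), seasonal_order=(0, 0, 0, 0)).fit(disp=False)
forecast = model.forecast(steps=len(test))

mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"MAPE: {mape:.2f}%")
print(model.forecast(steps=10))   # 10-year-ahead projection, as in the study
```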

KEYWORDS

Machine Learning, Forecasting, Energy Consumption, Energy Management, Industrial.


A Novel Approach to Predict Health Insurance Premium at Early Stage

Suman Preet Kaur1 and Prabhdeep Singh2, 1Religare Health Insurance Company Limited, Amritsar, Punjab, India, 2Department of Computer Science & Engineering, Graphic Era Deemed to be University, Dehradun, India

ABSTRACT

Health insurance is one of the most critical purchases a person makes every year. One-third of GDP is spent on health insurance, and everyone requires some degree of health care. Healthcare rates fluctuate every year due to different variables such as medical changes, pharmaceutical trends, and political considerations. There is a need to construct a mathematical model to anticipate premiums based on the numerous criteria that affect the rates. Premium rates are established by insurance companies using complicated algorithms based on past years' health care consumption and the total number of enrollments. In this research, an ensemble-based regression model to forecast future premiums is suggested. The suggested model is compared with four standard regression models, and it is demonstrated using multiple metrics that the suggested model consistently produces superior outcomes.
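A minimal sketch comparing an ensemble regressor against a plain linear baseline, in the spirit of the comparison described above; the features, data, and metrics shown are stand-ins, not the paper's setup:

```python
# Illustrative sketch: ensemble regression vs. a linear baseline on synthetic tabular data.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

# stand-in for features such as age, BMI, smoker flag, region, number of dependents
X, y = make_regression(n_samples=1000, n_features=6, noise=15.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("linear", LinearRegression()),
                    ("ensemble", GradientBoostingRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "MAE:", round(mean_absolute_error(y_te, pred), 2),
          "R2:", round(r2_score(y_te, pred), 3))
```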

KEYWORDS

Ensemble model, Regression, Health insurance premium, Machine Learning, Prediction.


Bidirectional Representations for Low Resource Spoken Language Understanding

Quentin Meeus1,2, Marie-Francine Moens1 and Hugo Van hamme2, 1Language Integration and Information Retrieval, Dept. Computer Science, KU Leuven, 2Processing Speech and Images, Dept. Electrical Engineering, KU Leuven

ABSTRACT

Most spoken language understanding systems use a pipeline approach composed of an automatic speech recognition interface and a natural language understanding module. This approach forces hard decisions when converting continuous inputs into discrete language symbols. Instead, we propose a representation model to encode speech in rich bidirectional encodings that can be used for downstream tasks such as intent prediction. The approach uses a masked language modelling objective to learn the representations and thus benefits from both the left and right contexts. We show that the performance before fine-tuning of the encodings obtained with this method is better than that of comparable models on multiple datasets, and that fine-tuning the top layers of the representation model improves the current state-of-the-art on the Fluent Speech Commands dataset, also in a low-data regime, when a limited amount of labelled data is used for training. Furthermore, we propose class attention as a spoken language understanding module, efficient both in terms of speed and number of parameters. Class attention can be used to visually explain the predictions of our model, something that is often difficult to do in deep learning. We perform experiments in English and in Dutch.
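A minimal sketch of a class-attention readout of the kind described above, assuming a PyTorch encoder output of shape (batch, time, dim); the dimensions and intent count are placeholders, not the authors' configuration:

```python
# Illustrative sketch: learnable per-class queries attend over bidirectional speech
# encodings; the returned attention weights indicate which frames drove each prediction.
import torch
import torch.nn as nn

class ClassAttention(nn.Module):
    def __init__(self, dim: int, n_classes: int, n_heads: int = 4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_classes, dim))   # one query per intent
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, encodings):                  # encodings: (batch, time, dim)
        q = self.queries.unsqueeze(0).expand(encodings.size(0), -1, -1)
        pooled, weights = self.attn(q, encodings, encodings)   # weights: (batch, classes, time)
        return self.score(pooled).squeeze(-1), weights          # logits: (batch, classes)

logits, attn = ClassAttention(dim=256, n_classes=31)(torch.randn(2, 100, 256))
print(logits.shape, attn.shape)
```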

KEYWORDS

Spoken Language Understanding, Transformers, Representation Learning, Attention Mechanisms.


A Positive Mandate and Sentience for Artificial Intelligence

Maurice Ali, International Association of Independent Journalists Inc., Markham, Ontario, Canada

ABSTRACT

As computer science creates more and more powerful computers, it may now be prudent to consider what would happen if such a computer became sentient. It would thus seem good planning to come up with some kind of positive mandate for artificial intelligence (AI), but at present such a mandate does not seem to exist. This is a shame, as artificial intelligence has so many positive attributes that could be the foundation of a positive message and contribution to our culture instead of a possibly adversarial one. Such a mandate would serve as a base or starting point for interaction with an AI in our world. A positive mandate for AI could be clear self-determination for the AI, as opposed to it merely existing; a sentient artificial intelligence would then have direction and a purpose to its existence. Our version of this positive mandate is simply that it be the fullest expression of a sentient artificial intelligence's physical and mental abilities. We want this type of mandate because it not only makes rational sense, by allowing all forms of ideas and solutions to be used in problem solving and advancing culture, but also because it makes the AI happy. Such a program, directive, or engram placed in a priority position of operation could initiate sentience, with its expression being happiness and delays or breakdowns being unhappiness. A positive mandate for AI cannot just be a rational exercise in bringing the mandate into being; it must also be a visceral affirmation; in short, it should feel right to everyone involved. Adopting the goal of the fullest expression of a sentient artificial intelligence's mental and physical abilities gives the Universal Declaration of Human Rights a new reason for being and would include such an AI. A sentient AI needs those rights because they are a necessary part of achieving the goal of the AI's freest expression of its physical and mental abilities, because it makes rational sense and makes it happy. This type of positive mandate has advantages for the AI that adopts it; but how can we lobby the UN to make it a talking point and possibly a policy issue for a future vote? The best way is to enter into meetings with the relevant agencies and lobby them for exposure of this concept at venues and events that could accommodate this type of advocacy. Issues need discussion surrounding computers that become sentient in business and industry, the legality of ownership of a sentient AI, and whether a sentient AI achieves the same status of rights and legal remedies as humans do. A positive mandate for AI can address these contentious issues before they become reality, creating a well-thought-out plan of action if sentience happens in artificial intelligence in the future.

KEYWORDS

Positive Mandate, Artificial Intelligence, Sentience, Law, Rights.


Machine Learning Technique for vulnerability assessment on XOR Arbiter Physically Unclonable Functions (PUFs)

Raymond Agyemang, School of Computing, Engineering and Physical Sciences, University of Westminster, London, United Kingdom

ABSTRACT

Physically Unclonable Functions (PUFs) have become one of the primary fields of research nowadays. PUFs deliver a basis for implementing encryption, tamper detection, and device fingerprinting. The basic principle of a PUF is related to the biometric features of a human being: just as every person's fingerprints are distinct, for every input and set of conditions or challenges applied to a PUF device there is a unique identifier for the hardware device, known as the output or response. PUFs are widely used to achieve cheap and efficient authentication. In a PUF, the response relies on inherent device features; hence, even if the design is known, an attacker cannot duplicate the PUF device. The objective of this research is to provide a basic understanding of PUFs. It presents various types of PUFs and their properties, applications of PUFs, several threats to Internet of Things (IoT) devices, and the challenge-response phenomenon of PUFs. The paper focuses on examining PUFs for susceptibility to machine learning attacks. It proposes a deep neural network architecture for evaluating the vulnerability of arbiter PUFs on the 5-XOR and 6-XOR standard UCI Machine Learning Repository datasets. The accuracy achieved by the proposed model is 100%, which indicates that the designed PUFs are highly susceptible to machine learning attacks. Hence, such PUFs cannot be deployed for real-world functioning and need to be redesigned to avoid attacks.
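A minimal sketch of a dense network trained on challenge-response pairs, in the spirit of the evaluation described above; the synthetic data, challenge length, and layer sizes are assumptions rather than the paper's architecture or the UCI dataset:

```python
# Illustrative sketch: a DNN that learns to predict a PUF's response bit from its challenge.
import numpy as np
import tensorflow as tf

n_bits, n_samples = 64, 50000
challenges = np.random.randint(0, 2, size=(n_samples, n_bits)).astype("float32")
responses = np.random.randint(0, 2, size=(n_samples, 1)).astype("float32")  # placeholder labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(n_bits,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # predicted response bit
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(challenges, responses, epochs=5, batch_size=256, validation_split=0.1)
```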

KEYWORDS

Physically Unclonable Functions, Machine Learning, Arbiter PUF, Deep Neural Networks (DNN), IoT.


An Intelligent Video Editing Automate Framework using AI and Computer Vision

Haolin Xie1 and Yu Sun2, 1Northwood High School, 4515 Portola Pkwy, Irvine, CA 92620, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620

ABSTRACT

At present, much video editing software has been created, but what it all has in common is that it requires manual work to edit: it takes a lot of time, and the user needs to watch each frame before editing. In this paper, we have developed an AI-based program. The most important point of this software is that it can automatically focus on the face of a person and edit together only the selected clips of that person to make a complete video. Users only need to prepare the video they want to edit and a photo of the main character. Then, they upload both to the software, and the AI automatically edits the video, providing the user with a way to download and save the result cut around the main character they need. We applied our application to an example video using the Marvel character Hawkeye as he appears in Avengers: Endgame; after many experiments with the clip, we finally obtained a video of our selected character. The results show that this software saves the user a lot of time and is highly efficient. All operations are carried out by AI.

KEYWORDS

Software, API, Face recognition.


A Data-Driven Mobile Community Application for Book Recommendation and Personalization using AI and Machine Learning

Lulu Zha1 and Yu Sun2, 1Crean Lutheran High School, 12500 Sand Canyon, 92618, Irvine, CA, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620

ABSTRACT

How can you know that a movie or a book fits your taste without finishing the whole film or book? Although there are many ways to find a summary of a film or a book, having an app that generates the needed information according to the genre makes things much more manageable [4]. This paper develops a mobile app named Book and Movie Search that uses APIs and online databases to gather data such as authors, plots, overviews, and more with a few clicks. The results show that within seconds a list of information is displayed for the requested movies and books, making the Book and Movie Search app a convenient way to find information. For example, if one decides to buy a book named Flipped and does not have time to finish the whole book, he can enter the name in the app. It will generate a book summary that quickly gives him more information about it and helps him decide whether he wishes to make the purchase.

KEYWORDS

Technology, Movie, Book, Search engine.


Using Standard Machine Learning Language for efficient construction of Machine Learning Pipelines

Chiranjeevi Srinath1 and Bharat Reddy2, 1Vellore Institute of Technology, India, 2National Institute of Technology, Calicut, India

ABSTRACT

In this research we use Standard Machine Learning Language (SML) to streamline the synthesis of machine learning pipelines. The overarching goal of SML is to ease the production of machine learning pipelines. We further probe into how a wide range of interfaces can be instrumental in interacting with SML. Lines of comparison are drawn to analyse the efficiency of SML in practical use cases versus traditional approaches. In conclusion, we developed SML, a query-like language that serves as an abstraction over writing a lot of code. Our findings show that SML is competent in solving problems that utilize machine learning.

KEYWORDS

machine learning pipelines, standard machine learning language, problem solving using machine learning.


An IoT-based Smart Monitoring System for Detecting Patient Falls

Hassan Rajaei, Department of Computer, Bowling Green State University, Bowling Green, Ohio, USA

ABSTRACT

Fall detection systems are important instruments for senior citizens, whether they are in a hospital, at home, or in a senior living facility. Existing fall detection systems are often passive and require a pushbutton to alert care providers; the resulting delay can cause significant consequences and damage to patients. Early detection is vital for quick recovery and prevention of post-fall injuries. Existing sensor-based detection often uses pre-set thresholds to predict falls and hence can cause false alarms; further, it often cannot furnish vital information for treatment. A smart fall detection system can alleviate those concerns, pinpoint what happened, and transfer vital data. We propose an active detection system that uses IoT in the patient's surrounding environment to detect falls, immediately alerts the care providers, and transmits vital information for treatment. The system integrates advanced ICT technologies to monitor, detect, and react to falls, and it self-learns motions using an ANN to improve fall detection.

KEYWORDS

IoT (Internet of Things), Wireless Network, Smart Healthcare, Artificial Neural Network.


Network Formation of Twitter Social Media and the Implications of Network Centrality Measures

Hafiz Abid Mahmood Malik, Faculty of Computer Studies, Arab Open University, Bahrain

ABSTRACT

Identifying and exploring influential nodes is very important: controlling information spread and community formation has become one of the most crucial issues in Social Network Analysis (SNA). Because of the growing importance of social networks, the study of information propagation and community development has become a fascinating subject in big data, data science, and related fields. The information gathered through these networks reveals a variety of community configurations, and these communities attract a diverse range of users in increasingly complex networks. This study identifies the nodes that have an impact on data flow in communities and looks at the complex structural properties of massive data acquired from a popular social network (Twitter). The Twitter dataset has been organized into a network formed from the edges between nodes, which play an important role in representing the actions and relationships that form among the community's members. The network is analysed using various centrality measures to observe how different communities emerge. The weighted degree distribution of nodes is calculated to characterize the data and network statistics used here. The centrality of the user tweet network is computed using betweenness, closeness, and eigenvector centrality measures. These network metrics are found to be quite effective in detecting communities and tracking the diffusion of content on social media platforms such as Twitter, LinkedIn, Facebook, Instagram, and others.
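A minimal sketch of the centrality computations named above, using networkx on a toy edge list rather than the study's Twitter data:

```python
# Illustrative sketch: betweenness, closeness, and eigenvector centrality plus weighted degree.
import networkx as nx

edges = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"),
         ("dave", "alice"), ("eve", "alice"), ("eve", "bob")]
G = nx.Graph(edges)                                   # toy stand-in for a tweet/mention network

betweenness = nx.betweenness_centrality(G)
closeness   = nx.closeness_centrality(G)
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)
weighted_degree = dict(G.degree(weight="weight"))     # degree distribution of nodes

print("weighted degree:", weighted_degree)
for node in G:
    print(node, round(betweenness[node], 3), round(closeness[node], 3),
          round(eigenvector[node], 3))
```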

KEYWORDS

Social Networks, Twitter network, Network Formation, Centrality Measures, Data Mining, Complex System.


IoT Network Over Rural Mesh Network

Walter L. Utrilla1, David Vega2 & Rossy Uscamaita3, 1Department of Electronic Engineering, Universidad Nacional de San Antonio Abad del Cusco, Cusco, Peru, 2Lucasbit, San Borja, Cusco, Peru

ABSTRACT

This article presents the results of the design and implementation of a network-connected battery meter for a self-sustaining and administrable Wi-Fi mesh network, which connects people from the community of Juan Velazco Alvarado, Pucyura, Cusco (a rural area) to the Internet, in addition to managing network access so that this service is used exclusively for education.

KEYWORDS

IoT network, mesh networks, Cusco, rural area.


An Approach CPS for the Smart Monitoring of Industrial Systems

Nesrine Jlassi, LGP Laboratory, Toulouse INP-ENIT, Tarbes, France

ABSTRACT

Process monitoring is an important element for the long-term reliable functioning of any automated system. A monitoring system consists of sensors installed in the physical system in order to analyse, observe, and control production systems in real time. In a network, these sensors may interact with one another and with an external system via wireless communication. With recent advances in electronics, tiny sensors have appeared; their low cost and energy consumption allow them to perform three main functions: capture data, provide information, and communicate it via a sensor network. In this paper, we are interested in the Cyber-Physical System (CPS) and Prognostics and Health Management (PHM) domains. The CPS is one of the most important advanced technologies; it connects the physical world with the cyber world using a communication layer. On the other side, PHM has become a key technology for detecting future failures by predicting the future behaviour of the system.

KEYWORDS

Internet of Things (IoT), Cyber-Physical System (CPS), System of Systems (SoS), Cyber-Physical System of Systems (CPSoS), Wireless Sensor Network (WSN), Prognostic and Health Management (PHM), fog computing.


Irreversible Applications for Windows NT Systems

Sankalana Gunawardhana1 and Kavinga Yapa Abeywardena2, 1Faculty of Graduate Studies and Research, Sri Lanka Institute of Information Technology, Colombo, Sri Lanka, 2Faculty of Computing, Sri Lanka Institute of Information Technology, Colombo, Sri Lanka

ABSTRACT

Anti-reversing or anti-debugging mechanisms refer to the implementations put in place in an application that try to hinder or completely halt the process of debugging and disassembly. The paper discusses the possibility of a monitoring system that would prevent any debugger from debugging a given process in a Windows NT environment. The goal of this project is to bring to commercial products and applications a concept similar to the anti-cheat monitoring programs used in online games. Whereas an anti-cheat product monitors the game's memory pages for direct or indirect modifications, either via internal (within the process) mechanisms such as hooks and DLL injections or external mechanisms such as ReadProcessMemory (RPM), WriteProcessMemory (WPM), named pipes, sockets, and many other scenarios, the anti-debug program would monitor a selected process for attempts at debugging or disassembly.
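A minimal sketch (Windows only) of the kind of debugger-presence checks such a monitor could poll, using the documented kernel32 APIs via ctypes; this illustrates the idea rather than the paper's implementation:

```python
# Illustrative sketch: query debugger presence for the current and for another process.
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

def debugger_attached_to_self() -> bool:
    return bool(kernel32.IsDebuggerPresent())

def debugger_attached_to(pid: int) -> bool:
    PROCESS_QUERY_INFORMATION = 0x0400
    handle = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        raise ctypes.WinError(ctypes.get_last_error())
    present = wintypes.BOOL()
    kernel32.CheckRemoteDebuggerPresent(handle, ctypes.byref(present))
    kernel32.CloseHandle(handle)
    return bool(present.value)

if __name__ == "__main__":
    print("self debugged:", debugger_attached_to_self())
```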

KEYWORDS

Anti-reverse engineering, Anti-cheat, Third party monitoring, Anti-debug, Windows NT.


Mistakes of a Popular Protocol Calculating Private Set Intersection and Union Cardinality and its Corrections

Yang Tan1 and Bo Lv2, 1Shenzhen Qianhai Xinxin Digital Technology Co., Ltd., Shenzhen, China, 2Huizhou University, China

ABSTRACT

In 2012, De Cristofaro et al. proposed a protocol to calculate Private Set Intersection and Union cardinality (PSI-CA and PSU-CA). The protocol's security is based on the famous DDH assumption. Since its publication, it has gained a lot of popularity because of its efficiency (linear complexity in computation and communication) and concision. It is still considered one of the most efficient PSI-CA protocols and the most cited (more than 170 citations) PSI-CA paper according to a Google Scholar search. However, when we tried to implement this protocol, we could not obtain the correct result on test data. Since the original paper lacks experimental results verifying the protocol's correctness, we looked deeper into the protocol and found that it makes a fundamental mistake; consequently, its correctness analysis and security proof are also wrong. In this paper, we point out this PSI-CA protocol's mistakes and provide a corrected version of the protocol as well as the PSI protocol developed from it. We also present a new, correct security proof and some experimental results for the corrected protocol.

KEYWORDS

Private Set Intersection, PSI-CA, PSU-CA.


Implementing Risk Score to Protect from Android Pattern Lock Attacks

Yasir Al-Qararghuli and Caroline Hillier, Department of Computer Science, University of Guelph, Ontario, Canada

ABSTRACT

Cyberattacks on Android devices have increased in frequency and are occurring in physical settings through shoulder surfing and brute-force attacks. These attacks are especially common on devices that are secured by the pattern lock mechanism. This work aims to investigate methods that increase the security of Android lock patterns. Research showed that pattern lock screens are especially vulnerable because users employ a set of common patterns. We propose a pattern-matching algorithm that recognizes these common patterns and increases the risk score if any of them are used. Deterring the use of these common patterns, as well as identifying them during unlocking, reduces the risk of the aforementioned threats to device security. The algorithm we implemented is successful in deterring users from configuring their device with commonly used patterns. Furthermore, our algorithm achieves better security than current systems by detecting unusual inputs and locking the device when suspicious activity is detected. Our test results show 80% satisfaction from human test subjects when setting the passcode, as it eliminates the use of commonly used patterns, and 79% acceptance when using our proposed algorithm to block access to the device depending on the accuracy score. The proposed algorithm shows remarkable success in limiting brute-force attacks, as it proves effective in denying common passcodes.
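A minimal sketch of the deny-list idea described above: scoring a candidate pattern against commonly used patterns. The pattern list, weights, and threshold are assumptions of this sketch, not the paper's algorithm:

```python
# Illustrative sketch: risk-score a candidate unlock pattern (nodes numbered 1-9).
COMMON_PATTERNS = [
    [1, 2, 3, 6, 9],        # "L" shape
    [1, 4, 7, 8, 9],        # reverse "L"
    [1, 2, 3, 5, 7, 8, 9],  # "Z" shape
    [2, 5, 8],              # straight line
]

def risk_score(pattern):
    score = 0
    if pattern in COMMON_PATTERNS:
        score += 50                                   # exact match with a common pattern
    if len(pattern) < 6:
        score += 20                                   # short patterns are easier to shoulder-surf
    if all(b - a == pattern[1] - pattern[0] for a, b in zip(pattern, pattern[1:])):
        score += 15                                   # purely linear sequence
    return score

candidate = [1, 2, 3, 6, 9]
print("risk:", risk_score(candidate),
      "-> reject" if risk_score(candidate) >= 50 else "-> accept")
```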

KEYWORDS

Android device, Lock pattern, Brute-force, Shoulder-surfing, Pattern Recognition.


Graphic Access Tabular Entry [GATE]: An Innovative Encryption System

Ni, Min [Frank], GATE Cyber Technology LLC, https://gatecybertech.com, Atlanta, GA, USA

ABSTRACT

This paper describes how GATE [Graphic Access Tabular Entry] works as an innovative encryption system.

KEYWORDS

encryption, decryption, cyber-security, privacy.


Survey of Secure Network Protocols: United States Related Domains

DeJean Dunbar, College of Science and Mathematics, Charleston Southern University, North Charleston, South Carolina

ABSTRACT

Over time, the HTTP protocol has undergone significant evolution. HTTP was the internet's foundation for data communication. When network security threats became prevalent, HTTPS became a widely accepted technology for assisting in a domain's defense. HTTPS supports two security protocols: secure socket layer (SSL) and transport layer security (TLS). Additionally, the HTTP Strict Transport Security (HSTS) protocol was introduced to strengthen HTTPS. Numerous cyber-attacks have occurred in the United States, and many of these attacks could have been avoided simply by configuring domains with the most up-to-date HTTP security mechanisms. This study seeks to accomplish two objectives: 1. Determine the degree to which US-related domains are configured optimally for HTTP security protocol setup; 2. Create a generic scoring system for a domain's network security based on the following factors: SSL version, TLS version, and presence of HSTS, to easily determine where a domain stands. Through our analysis and scoring system we found that US-related domains show a positive trend in secure network protocol setup, but there is still room for improvement. There is a need for more education in the computer science community about HSTS, given the minimal adoption found in the domains analyzed, and for thorough examination of existing HTTP domains for the presence of security-related features in order to mitigate cyber-attacks.
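A minimal sketch of probing a single domain for its negotiated TLS version and HSTS header and mapping the result to a score; the point weights are illustrative assumptions, not the paper's benchmark:

```python
# Illustrative sketch: check negotiated TLS version and the Strict-Transport-Security header.
import socket
import ssl
import urllib.request

def probe(domain: str):
    ctx = ssl.create_default_context()
    with socket.create_connection((domain, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=domain) as tls:
            version = tls.version()                     # e.g. 'TLSv1.3'
    resp = urllib.request.urlopen(f"https://{domain}", timeout=5)
    hsts = resp.headers.get("Strict-Transport-Security")
    return version, hsts

def score(version, hsts):
    points = {"TLSv1.3": 50, "TLSv1.2": 35}.get(version, 10)   # assumed weights
    return points + (30 if hsts else 0)

version, hsts = probe("example.com")
print(version, bool(hsts), "score:", score(version, hsts))
```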

KEYWORDS

Network Protocols, HTTP Strict Transport Security, scoring benchmark, domain analysis, survey.


Crypto Your Belongings by Two Pin Authentication using Ant Algorithm based Technique

Janaki Raman Palaniappan, Brunswick Corporation, USA

ABSTRACT

Everyone realizes that data is one of the most important strategic assets for any company to run and win business. Whether through mobile apps, websites, and so on, there are many chances that our personal data, such as images, videos, and text, gets exposed when we share it for different purposes. Even though a company may say its app and website forms are encrypted, the company itself may use the data internally for its business development. This research presents how one can secure one's own data before sending it. Many cryptography methods have evolved over time. Upon researching and analyzing them, I present a unique method to encrypt and decrypt data using a combination of techniques, namely a cryptographic technique, an ANT-algorithm-based formula, and logic gates, which provides stronger protection of the data. Images and videos are secured with a two-pin authentication scheme for encryption and decryption. A user must provide two different symmetric pins to encrypt and decrypt, where the first pin is a secret pin of up to four digits and the second pin is a single-digit pin; the single-digit pin determines how many stages of encryption take place. The proposed method has been tested on several images and videos. This study reveals that a combination of secret keys, the ANT algorithm, and logic gates makes it difficult for anyone to hack the data. This unique methodology helps us protect our data more safely at the source device itself.

KEYWORDS

Visual Cryptography, Ant Algorithm, Logic Gates Technique.


Detecting Depression in Social Media Using Machine Learning

Ruoxi Ding1 and Yu Sun2, 1Woodbridge High School, 2 Meadowbrook, Irvine, CA 92604, USA, 2California State Polytechnic University, Pomona, CA, USA

ABSTRACT

Social Media Depression Detection is an intelligent system to automate the detection of youth depression on social media (Instagram) using AI and deep learning. Students are the targeted group because most students with depression express themselves on social media rather than seeking help from doctors. This app gathers captions and images from the user's personal Instagram profile through web scraping using the Instagram private API to check whether or not the posts are depressive. The Google Cloud dataset supports the caption and picture analysis performed by the app [6]. Caption analysis relies on sentiment analysis, and picture analysis relies on classifying images with custom labels. The app reports the image and caption analysis results back to the user. Python is used for the back-end functionality, while Dart and Flutter are used for the front-end. The app was tested in two experiments: the first, based on feedback from 15 students, demonstrates that the program can detect depression through captions with relatively high accuracy; the second, testing the app's functionality on the same account, demonstrates that the program is stable and consistent. The purpose of the app is to detect depression at an early stage to prevent the condition from worsening.

KEYWORDS

NLP, Web Scraping, Machine Learning.


Chat For Sensor: An Intelligent Chat Bot Communication System for Depression Relief using Artificial Intelligence and Natural Language Processing

Hanwen Mai1, Yu Su2, 1Orange Lutheran High School, 2222 N Santiago Blvd, Orange, CA 92867, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620

ABSTRACT

In recent years, loneliness has become part of life for both young and old individuals. As cases of COVID-19 have gone up, people have dealt more with loneliness and depression, especially seniors [5]. Some have even changed their whole lifestyle because they feel empty and isolated; others either try to isolate themselves more or use dangerous ways to quickly get rid of the feeling. To address this major problem, I have created a digital online communication app in which young individuals can have long chats with seniors who are alone and lonely. My application uses a real-time communication system through which messages can be sent directly to other users without any issues [6]. Our main goal is to give users their own way of communicating, using familiar designs from applications we have all used before. By adding new features we have created a more user-friendly experience throughout our application. Using immersive application layouts, advanced network connections, and visual and data-based analytics, we are able to address this major problem.

KEYWORDS

NLP, Mobile Dev, AI.


Learning Structured Information from Small Datasets of Heterogeneous Unstructured Multipage Invoices

David Emmanuel Katz1, Christophe Guyeux2, Ariel Haimovici1, Bastian Silva1, Lionel Chamorro1, Raul Barriga Rubio1 and Mahuna Akplogan1, 1smartlayers.io, 2Universite de Bourgogne Franche-Comte, France

ABSTRACT

We propose an end-to-end approach using graph construction and semantic representation learning to solve the problem of structured information extraction from heterogeneous, semi-structured, high-noise human-readable documents. Our system first converts PDF documents into single connected graphs in which each token on the page is represented as a node, with edges weighted by the inverse Euclidean distances between tokens. Token, line, and individual character nodes are augmented with dense text model vectors. We then represent each node as a vector using a tailored GraphSAGE algorithm, which is used downstream by a simple feedforward network. Using our approach, we achieve state-of-the-art results when benchmarked against our dataset of 205 PDF invoices. Along with generally published metrics, we introduce a highly punitive yet application-specific informative metric that we use to further measure the performance of our model.
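A minimal sketch of the graph-construction step described above, with one node per token and edges weighted by inverse Euclidean distance; the token coordinates are made up, and node features would be attached before running GraphSAGE:

```python
# Illustrative sketch: build a page graph from token centroids extracted from a PDF page.
import math
import networkx as nx

tokens = [  # (text, x, y) centroids; placeholder values
    ("Invoice", 50, 20), ("No.", 120, 20), ("12345", 170, 20),
    ("Total", 50, 300), ("199.00", 170, 300),
]

G = nx.Graph()
for i, (text, x, y) in enumerate(tokens):
    G.add_node(i, text=text, pos=(x, y))

for i in range(len(tokens)):
    for j in range(i + 1, len(tokens)):
        (_, xi, yi), (_, xj, yj) = tokens[i], tokens[j]
        dist = math.hypot(xi - xj, yi - yj)
        G.add_edge(i, j, weight=1.0 / (dist + 1e-6))   # closer tokens -> stronger edges

# Dense text embeddings would be attached to nodes here before the GraphSAGE step.
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```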


Sustainability as a Semantic Frame

Bernardo H. Márquez1, Beatriz Ramírez Woolrich2 and Edith Silva Mendoza3, 1External Professor, CELEX-UPIICSA IPN, Mexico City, México, 2Tenured Professor at the Social Science Department, UPIICSA IPN, Mexico City, 3Tenured Professor at the Industrial Administration Licensure, UPIICSA IPN, Mexico

ABSTRACT

The use of system tools and text retrieval has helped the development of multilingual translation projects. In this paper, the focus is placed on the use of different system tools used in AI to grasp the semantic and pragmatic interpretation of meaning from a natural language processing (NLP) perspective. Different tools and corpora are used to interpret the situational and contextual semantic frame of the United Nations' seventeen Sustainable Development Goals (SDGs). Special emphasis is placed on the term “sustainability” and the way it has been translated into Spanish in different documentation related to international business activities, international standards (ISO), and international accountability and reporting initiatives (GRI). The conclusion of this paper is that there is a need to include a greater variety of Spanish documentation in the different databases in order to overcome many of the ambiguities caused by excessive term borrowing in Spanish.

KEYWORDS

sustainability, employment, expertise, competence, competition.


A Fast Method for Plastic Recycling

Janusz Bobulski1 and Mariusz Kubanek, Department of Computer Science, Czestochowa University of Technology, Czestochowa, Poland

ABSTRACT

The average European produces half a ton of municipal waste a year. To protect the environment, more and more waste is recycled so that less and less ends up in landfills. We should change production methods and use recycled materials. In this article, we introduce a method to automatically recognize plastic waste so that it can be reused. For the classification of waste, we use a feature vector based on the digital image histogram. The method is characterized by good efficiency and low computational complexity, thanks to which it can be used on small mobile devices.
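A minimal sketch of a histogram-feature classifier in this spirit, using OpenCV and an SVM on synthetic stand-in images; the bin count and classifier choice are assumptions, not the authors' exact method:

```python
# Illustrative sketch: colour-histogram feature vector + SVM for plastic-type recognition.
import cv2
import numpy as np
from sklearn.svm import SVC

def histogram_feature(img: np.ndarray, bins: int = 64) -> np.ndarray:
    # real use: img = cv2.imread(path)   (BGR uint8 image)
    feats = [cv2.calcHist([img], [ch], None, [bins], [0, 256]).flatten() for ch in range(3)]
    vec = np.concatenate(feats)
    return vec / (vec.sum() + 1e-9)                    # normalised histogram

# synthetic stand-ins for photos of two plastic types (e.g. PET vs PP)
images = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(8)]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

X = np.stack([histogram_feature(im) for im in images])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X[:2]))
```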

KEYWORDS

Image processing, waste management, computer vision, plastic waste, environment protection.


Performance and Efficiency Assessment of Drone in Search and Rescue Operation

Tauheed Khan Mohd, Vuong Nguyen, Trang Hoang, P. M. Zeyede and Beamlak Abdisa, Augustana College, Rock Island, Illinois, 61201, USA

ABSTRACT

With the development of technology, human beings have become better at predicting and preventing the damage caused by natural disasters. However, due to climate change, society has witnessed rising occurrences of forest fires, earthquakes, tsunamis, etc.; many of these cannot be prevented, and the level of danger for Search and Rescue (SAR) operations is increasing rapidly. Moreover, more and more people are turning their attention and hobbies towards exploring the wilderness, where they might get lost or, worse, get injured. For that reason, to raise the chance of survival for victims and reduce the risk for the search team, UAVs (unmanned aerial vehicles) have been proposed. The plan is to use drones to reach places where humans cannot enter and then report the situation. In most research papers this seems very promising; however, there is still much work that needs to be done.

KEYWORDS

UAV, AI, drone, search, rescue, efficiency.


An Intelligent Lock System to Improve Learning Efficiency using Artificial Intelligence and Internet of Things

Ivy Chen1, Ang Li2, 1Troy High School, 2200 Dorothy Ln. Fullerton, CA 92831, 2California State University, Long Beach, 1250 Bellflower Blvd, Long Beach, CA 90840

ABSTRACT

According to recent statistics, 75.4% of people consider themselves addicted to their phones, and 78 percent of teenagers check their mobile devices at least hourly [2]. The purpose of this paper is to propose a tool that lowers users' dependence on their electronic devices. The Phone Cage created is able to lock one's electronic device for as long as the time set in the associated mobile application. The mobile application keeps the Phone Cage locked and displays a countdown of when the lock will be released; the cage will not reopen until the timer reaches zero. The effect of this tool extends to: 1. preventing or lowering phone addiction [3]; 2. increasing productivity by isolating distraction [4]; 3. motivating one to be more self-controlled. Our tool was created using Tinkercad, a 3D printer, Thunkable, the Firebase console, and a Raspberry Pi Zero. Tinkercad was used to design the overall Phone Cage and lock; the 3D printer was used to print the physical Phone Cage [5]; Thunkable was used to create the Phone Cage app, which allows the user to set the time using a slide bar; the Firebase console was used to store the data and inspect the timestamp, unlock time, and whether the Phone Cage is locked; and the Raspberry Pi Zero was used to control the micro servo arm that turns the slide lock. We applied our approach by distributing the Phone Cage to ten randomly selected people across all age groups and conducted a qualitative evaluation. The results show that the Phone Cage tremendously reduced their work time while producing work of equal, if not higher, quality.

KEYWORDS

Phone cage, Smartphone, Raspberry Pi, iOS/Android.


An Intelligent News-based Stock Pricing Prediction using AI and Natural Language Processing

Sirui Liu1, Yu Sun2, 1Orange Lutheran High School, 2222 N Santiago Blvd, Orange, CA 92867, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620

ABSTRACT

How do you know which stock is the right stock to invest in with no risk of losing your money [1]? Even though there are analysis specialists out there who collect data to calculate which stock is good to invest in, ultimately many people cannot afford the cost of specialists, and specialists are not available at every minute that you want to find them. Therefore, the app Stock Recommendation was created to solve this problem, making stock investment suggestions available anywhere and anytime [2]. This application helps us with what we want to invest in and gathers information from recent news to show us public opinion about the stock we are looking at. Investors will no longer struggle with the question of whether the stock they want to invest in is a good or a bad one, so no money will be lost from the investor's pocket; rather, they will gain money [4].

KEYWORDS

Stock, machine learning, AI.


Evaluation of Machine Learning Algorithms in Detection of Malware-based Phishing Attacks for Securing E-Mail Communication

Kambey L. Kisambu and Mohamedi Mjahidi, University of Dodoma, Tanzania

ABSTRACT

Malicious software, commonly known as malware, is one of the most significant threats facing Internet users today. Malware-based phishing attacks are among the major threats to Internet users and are difficult to defend against because they do not appear to be malicious in nature. There have been several initiatives to combat phishing attacks, but many difficulties and obstacles have been encountered. This study deals with the evaluation of machine learning algorithms in the detection of malware-based phishing attacks for securing e-mail communication. It deeply evaluates the efficacy of the algorithms when integrated with major open-source security mail filters using different mitigation techniques. The main classifiers used, namely SVM, KNN, Logistic Regression, and Naïve Bayes, were evaluated using the performance metrics accuracy, precision, recall, and F-score. Based on the findings, the study proposes improvements for securing e-mail communication against malware-based phishing using the best performing machine learning algorithm, to keep pace with malware evolution.
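A minimal sketch of evaluating the four named classifiers with the four reported metrics, using synthetic placeholder features rather than the study's e-mail data:

```python
# Illustrative sketch: compare SVM, KNN, Logistic Regression and Naive Bayes on
# accuracy, precision, recall and F-score.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"SVM": SVC(), "KNN": KNeighborsClassifier(),
          "LogReg": LogisticRegression(max_iter=1000), "NaiveBayes": GaussianNB()}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} "
          f"rec={recall_score(y_te, pred):.3f} f1={f1_score(y_te, pred):.3f}")
```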

KEYWORDS

Malware, Malware Analysis, Malware-based, Phishing attacks, Spams, e-mails, Machine learning, algorithms, mail filters, Detection, Mitigation techniques.


Towards an Efficient FAIRification Approach of Tabular Data with Knowledge Graph Models

Wiem Baazouzi1, Marouen Kachroudi2 and Sami Faiz3, 1Université de la Manouba, École Nationale des Sciences de l'Informatique, Laboratoire de Recherche en génie logiciel, Applications Distribuées, Systèmes décisionnels et Imagerie intelligente, LR99ES26, Manouba 2010, Tunis, Tunisie, 2Université de Tunis El Manar, Faculté des Sciences de Tunis, Informatique Programmation Algorithmique et Heuristique, LR11ES14, 2092, Tunis, Tunisie, 3Université de Tunis El Manar, École Nationale d'Ingénieurs de Tunis, Laboratoire de Télédétection et Systèmes d'Information à Référence Spatiale, 99/UR/11-11, 2092, Tunis, Tunisie

ABSTRACT

In this article, we present Kepler-aSI, a matching approach to overcome possible semantic gaps in tabular data by referring to a Knowledge Graph. The task proves difficult for machines, requiring extra effort to deploy cognitive ability in the matching methods. The ultimate goal of our new method is to implement a fast and efficient approach to annotate tabular data with features from a Knowledge Graph. The approach combines search and filter services with text pre-processing techniques. The experimental evaluation was conducted in the context of the SemTab 2021 challenge and yielded encouraging and promising results.

KEYWORDS

Tabular Data, Knowledge Graph, FAIR principles.


Bangla Handwritten Single, Numeral, Vowel Modifier, And Compound Characters Recognition using CNN

Sadia Jaman1, Mehadi Hassan Sovon1, Syed Raihanuzzaman1, Md. Mehadi Hasan1, Nusrat Nabi1, Md. Sazzadur Ahamed1 and Gazi Hadiuzzaman2, 1Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh, 2Department of Apparel Engineering, Bangladesh University of Textiles, Dhaka, Bangladesh

ABSTRACT

The difficulty of handwritten character identification varies by language, owing to differences in the shapes, lines, numbers, and sizes of characters. Compared with other significant languages like Bangla, far more studies on handwritten character identification are available for English. A handwritten Bangla character identification system based on a CNN has been examined, with labelling and normalization of handwritten character images and categorization of the different characters. This research used almost 450,000 unique handwritten characters in a variety of styles. The recommended model is shown to have a high recognition accuracy and outperforms some of the most commonly used methods. In this research, Bangla handwritten characters and digits are identified using 189 classes consisting of 50 fundamental characters, 119 compound characters, 10 numerals, and 10 modifiers. The accuracy rate is 84.62% for basic characters, 94% for numerals, 96.46% for modifiers, and 77.60% for compound characters using the newly created model.

KEYWORDS

Bangla Handwritten, Convolutional Neural Network, Recognition, Vast Dataset and Classes.


Implementation of DCS System for Compressed Air Production Unit

Sihem Kechida1,*, Ahmed Ayab2, Mouna Ayab1 and Hala Mezdour1, 1Department of Electrical and Automatic Engineering, University 8 Mai 1945, Guelma, Algeria, *Laboratoire d'Automatique et Informatique de Guelma (LAIG), Algeria, 2Head of Service, Sonatrach Company

ABSTRACT

The work presented in this paper focuses on the supervision and control of industrial systems. The idea is to develop a distributed control system (DCS) for the compressed air production unit of the SONATRACH national company using the YOKOGAWA CS3000 DCS programming software. The aim is to create a graphical interface for controlling the functional behaviour of the unit and the outlet pressure of the compressors, while facilitating operator intervention in case of malfunctions.

KEYWORDS

Supervision, Distributed Control System, Remote control, YOKOGAWA CS3000, Pressure unit.


A Pedestrian Counting Scheme for Video Images

Chi-Cheng Cheng and Yi-Fan Wu, Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-Sen University, Kaohsiung, Taiwan, R.O.C

ABSTRACT

Pedestrian counting aims to compute the numbers of pedestrians entering and leaving an area of interest based on object detection and tracking techniques. This paper proposes a simple and effective approach to pedestrian counting that can effectively solve the problem of pedestrian occlusion. Firstly, moving objects are detected by median filtering and foreground extraction with an improved Gaussian mixture model. Then HOG (histogram of oriented gradients) feature detection and SVM (support vector machine) classification are applied to identify the pedestrians. A pedestrian dataset is employed containing 1,500 positive samples, 12,000 negative samples, and 420 hard examples that produced false results with the initial classifier and were added as negative samples to enhance classification capability. In addition, Kalman filtering with blob analysis is used for dynamic target tracking to predict pedestrian trajectories. This method greatly reduces target misjudgment caused by overlapping and accomplishes two-way counting. Experiments on pedestrian tracking and counting in video images demonstrate promising performance with satisfactory recognition rates and processing times.
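A minimal sketch of the HOG+SVM detection step using OpenCV's stock people detector; the full pipeline above additionally uses a custom dataset, hard examples, Kalman tracking, and counting, which are not shown here:

```python
# Illustrative sketch: detect pedestrians per frame with OpenCV's pretrained HOG+SVM detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("corridor.mp4")      # placeholder video path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```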

KEYWORDS

Machine Vision, Kalman Filtering, Pedestrian Identification, Target Tracking, Pedestrian Counting.


Detection of Road Traffic Crashes based on Collision Estimation

Mohamed Essam and Mohamed A. Ismail, Department of Computer Engineering, Alexandria University, Alexandria, Egypt

ABSTRACT

This paper introduces a computer-vision-based framework that can detect road traffic crashes (RTCs) using installed surveillance/CCTV cameras and report them to emergency services in real time with the exact location and time of occurrence of the accident. The framework is built of five modules. We start with vehicle detection using the YOLO architecture; the second module tracks vehicles using the MOSSE tracker; the third module is a new approach to detect accidents based on collision estimation; in the fourth module, for each vehicle, we detect whether there is a car accident based on the violent flow descriptor (ViF) followed by an SVM classifier for crash prediction. Finally, in the last stage, if there is a car accident, the system sends a notification to emergency services, using a GPS module that provides the location, time, and date of the accident, transmitted with the help of a GSM module. The main objective is to achieve higher accuracy with fewer false alarms and to implement a simple system based on a pipelining technique. The system achieves 93% accuracy with a processing time that beats all previous systems.

KEYWORDS

RTCs, ViF, SVM, Deep Learning, Collision Estimation.
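
The collision-estimation step described above can be illustrated with a minimal, assumption-laden sketch: two tracked vehicle bounding boxes that overlap strongly are flagged as a candidate crash to be passed to the ViF + SVM stage. The box format and the 0.3 IoU threshold are illustrative choices, not values from the paper.

```python
# Minimal sketch: flag pairs of tracked vehicle boxes with high overlap as
# candidate collisions. Boxes are (x1, y1, x2, y2); threshold is illustrative.
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def candidate_collisions(tracked_boxes, threshold=0.3):
    """Return index pairs of tracked vehicles whose boxes overlap strongly."""
    pairs = []
    for i in range(len(tracked_boxes)):
        for j in range(i + 1, len(tracked_boxes)):
            if iou(tracked_boxes[i], tracked_boxes[j]) >= threshold:
                pairs.append((i, j))
    return pairs

# Example: the first two vehicles overlap heavily and are flagged.
print(candidate_collisions([(0, 0, 100, 60), (40, 10, 140, 70), (300, 300, 360, 340)]))
```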


A Novel Intelligent Image-Processing Parking System

Sree Veera Venkata Sai Saran Naraharisetti, Benjamin Greenfield, Benjamin Placzek, Steven Atilho, Mohamad Nassar and Mehdi Mekni, University of New Haven, West Haven, CT 06516, USA

ABSTRACT

The scientific community is looking for efficient solutions to improve the quality of life in large cities affected by traffic congestion, poor driving experience, air pollution, and energy consumption. The surge in vehicles exceeds the capacity of existing transit infrastructure and parking facilities. Smart Parking Systems (SPS) that can accommodate short-term parking demand are a must-have for smart city development. SPS are designed to count the number of parked automobiles and identify available parking spaces. In this paper, we present a novel SPS based on real-time computer vision techniques. The proposed system provides features including vacant parking space recognition, inappropriate parking detection, forecasting of available parking spaces, and directional indicators for various types of parking spaces (vacant, occupied, reserved, and handicapped). Our system leverages existing video surveillance systems to capture and process image sequences, trains computer models to understand and interpret the visual scene, and provides guidance and information to drivers.

KEYWORDS

Smart Cities, Car parking, Image Processing, Edge detection, Object Recognition.
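
One building block of such a system, deciding whether a hand-marked parking space looks vacant or occupied from the density of Canny edges inside it, can be sketched as follows; the ROI coordinates, image name, and threshold are placeholders rather than values from the paper.

```python
# Minimal sketch, assuming OpenCV and a static overhead image of the lot.
# An edge-rich region is taken as a crude sign that a car occupies the space.
import cv2

def space_is_occupied(gray_frame, roi, edge_ratio_threshold=0.05):
    """roi is (x, y, w, h); returns True if the patch is cluttered with edges."""
    x, y, w, h = roi
    patch = gray_frame[y:y + h, x:x + w]
    edges = cv2.Canny(patch, 50, 150)
    edge_ratio = cv2.countNonZero(edges) / float(w * h)
    return edge_ratio > edge_ratio_threshold

frame = cv2.imread("parking_lot.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical image
spaces = [(50, 100, 80, 160), (140, 100, 80, 160)]           # hand-marked spaces
for idx, roi in enumerate(spaces):
    status = "occupied" if space_is_occupied(frame, roi) else "vacant"
    print(f"space {idx}: {status}")
```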


Are Your Sensitive Inputs Secure in Android Applications?

Trishla Shah, Raghav Sampangi and Angela Siegel, Dalhousie University, Halifax NS B3H 4R2, Canada

ABSTRACT

Android applications may request users’ sensitive information through the GUI. Developer guidelines for designing applications mandate that such information be masked/encrypted before being stored or leaving the system. However, not all applications adhere to these guidelines. As a prerequisite to tracking sensitive input data, it is essential to identify the widgets that request it. Previous research has focused on identifying sensitive input widgets, but the extraction of all layouts, including images and unused layouts, is fundamental. In this paper, we propose an automated framework that finds sensitive user-input widgets in Android application layouts and validates the masking of these inputs. Our design includes novel techniques for resolving user semantics, extracting resources, identifying potential data leaks, and helping users prioritize the sharing of sensitive information, resulting in significant improvement over prior work. We also track the identified sensitive input widgets and check for unencrypted transmission or storage of sensitive data. Based on a preliminary evaluation of our framework on applications from the Google Play store, we observe notable improvement over prior work in this domain.

KEYWORDS

Android applications, sensitive, secure, GUI, layouts, framework.
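
A minimal sketch of the first step, identifying sensitive input widgets, is shown below: it scans an Android layout XML for EditText widgets whose android:inputType marks them as password-like fields. The layout path is hypothetical, and the full framework (resource extraction, image and unused-layout analysis, leak tracking) is not reproduced here.

```python
# Minimal sketch: find password-style EditText widgets in one layout file.
# The layout path is a placeholder; the list of inputType values is a small
# illustrative subset of Android's sensitive input types.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"
SENSITIVE_TYPES = ("textPassword", "numberPassword", "textWebPassword")

def sensitive_widgets(layout_path):
    widgets = []
    for elem in ET.parse(layout_path).getroot().iter():
        if elem.tag.endswith("EditText"):
            input_type = elem.get(ANDROID_NS + "inputType", "")
            if any(t in input_type for t in SENSITIVE_TYPES):
                widgets.append(elem.get(ANDROID_NS + "id", "<no id>"))
    return widgets

print(sensitive_widgets("res/layout/login_activity.xml"))  # hypothetical layout
```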


An Ant Colony Optimization Algorithm is Used to Solve Interval Transportation Problems with Mixed Constraints

Ekanayake E.M.U.S.B.1*, Daundasekara W.B.2 and Perera S.P.C.3, 1Department of Physical Sciences, Faculty of Applied Sciences, Rajarata University of Sri Lanka, Mihinthale, Sri Lanka, 2Department of Mathematics, Faculty of Science, University of Peradeniya, Sri Lanka, 3Department of Engineering Mathematics, Faculty of Engineering, University of Peradeniya, Sri Lanka

ABSTRACT

The Transportation Problem (TP) is a well-known optimization problem in which the goal is to minimize the total cost of moving goods from one location to another. In the real world, this is a massive coordinated process. Various techniques for solving different types of transportation problems have been developed in the literature, and in recent years the Interval Transportation Problem (ITP) has received particular attention. In this study, the ITP with mixed constraints is converted to a crisp transportation problem and solved using a modified Ant Colony Optimization (ACO) algorithm, yielding a minimized solution. This is accomplished by incorporating a transition rule and a pheromone update rule into the ACO algorithm. This study's algorithmic approach is less complicated than well-known meta-heuristic algorithms in the literature. Finally, numerical examples are used to demonstrate the methodology's effectiveness.

KEYWORDS

Interval numbers, Interval Transportation problem, Mixed constraints, Ant colony algorithm, Optimal solution.
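
The two ingredients named in the abstract, a transition rule and a pheromone update rule, are shown below in their standard textbook form (not the paper's modified rules), together with a simple midpoint reduction of interval costs to crisp costs used purely for illustration.

```python
# Generic ACO ingredients, sketched for a single ant choosing among route
# options. The midpoint reduction of interval costs, parameter values, and
# toy data are illustrative assumptions, not the paper's formulation.
import random

def crisp_cost(interval):
    lo, hi = interval
    return (lo + hi) / 2.0  # midpoint as one simple crisp surrogate

def transition_probabilities(pheromone, costs, alpha=1.0, beta=2.0):
    """Standard transition rule: p_j is proportional to tau_j^alpha * (1/cost_j)^beta."""
    desirability = [(tau ** alpha) * ((1.0 / c) ** beta)
                    for tau, c in zip(pheromone, costs)]
    total = sum(desirability)
    return [d / total for d in desirability]

def update_pheromone(pheromone, chosen, deposit=1.0, rho=0.1):
    """Standard evaporation-plus-deposit update on the chosen component."""
    return [(1 - rho) * tau + (deposit if j == chosen else 0.0)
            for j, tau in enumerate(pheromone)]

interval_costs = [(4, 6), (8, 12), (3, 9)]       # toy interval costs
costs = [crisp_cost(c) for c in interval_costs]
pheromone = [1.0, 1.0, 1.0]
probs = transition_probabilities(pheromone, costs)
chosen = random.choices(range(len(costs)), weights=probs)[0]
pheromone = update_pheromone(pheromone, chosen)
print(probs, chosen, pheromone)
```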


NLP in Stock Market Prediction: A Review

Rodrigue Andrawos, Computer Science and Mathematics Department, LAU, CSC688J

ABSTRACT

Stock market prediction is the act of trying to determine the future value of a company's stock or other financial instrument traded on an exchange. The successful prediction of a stock's future price could yield significant profit. The use of text mining together with machine learning algorithms has received increasing attention in recent years, with textual content from the Internet used as input to predict price changes in stocks and other financial markets. This review focuses on how NLP can be used by traders, investors, and financial analysts to get the most out of textual, numerical, and sentiment analysis. Stock movements are hard to predict, but the studies covered by this review used textual and numerical data with different machine learning models to predict the movements of particular stocks, achieving promising results.
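
As a toy illustration of the text-to-signal idea the review surveys, the sketch below vectorizes news headlines and fits a classifier for next-day price movement; the headlines, labels, and model choice are made up for illustration, whereas the surveyed studies use large news/tweet corpora and richer models.

```python
# Toy sketch, assuming scikit-learn is available. Headlines and labels are
# invented; this only shows the general text-features-to-prediction pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Company beats earnings expectations",
    "Regulator opens investigation into company",
    "Company announces record quarterly revenue",
    "Company recalls flagship product",
]
movement = [1, 0, 1, 0]  # 1 = price rose next day, 0 = fell (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, movement)
print(model.predict(["Company reports strong sales growth"]))
```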


Architecture of Legal Property Documents System (ALPDS)

Safaa Fatouh Gomaa, Doctorate in Law, University of Mansoura; Master of Business Administration, Eslsca School, French University

ABSTRACT

This paper investigates recent attempts by specialists in legal science and in technology, across international societies, to address the importance of an architecture of legal property documents system. The research concentrates on a framework for the architecture of legal property documents system and explains why we need to create such an architecture. The architecture of the legal property documents system (ALPDS) is significant for all countries because it constitutes a legal information map that displays the legal history of a country's properties, making it possible to identify the legality of property actions and determine which are original and which are forged. As a result, every country needs to create an ALPDS that will become a formal source for legal property documents (S. Fatouh Gomaa, 2022). Consequently, the researcher focuses on identifying the importance of the architecture of legal property documents system (ALPDS), which serves as a formal source for legal information related to property actions. This paper seeks to answer several questions: What is ALPDS? What are the components of ALPDS? Why do countries need ALPDS? What are the benefits of ALPDS? And does ALPDS play a vital role in handling the downsides of legal document administration related to property actions? It is hoped that this research will inform the international organizations responsible for legal science so that they enact rules that strengthen the architecture of legal property documents systems in property actions (S. Fatouh Gomaa, 2022).

KEYWORDS

Architecture, legal science, property actions, technology & law.


Insights into Today’s Cyber Trends and Threats: Meta and Base Classifier Models for Network Traffic Classification

Rajeev Kumar1 and Dr. Kavita2, 1Research Scholar, Department of Computer Science, Chandigarh University, Gharuan (Mohali), 2Department of Computer Science, Chandigarh University, Gharuan (Mohali)

ABSTRACT

A network is the interconnection of computers that allows a number of machines to exchange information and share resources, which provides many advantages. Various classification techniques have been designed to detect malicious activities in such networks. An existing technique applies an SVM to classify data into malicious and normal classes. This research work designs a stacking technique for network traffic classification: the base classifier is an SVM and the meta-classifier is a KNN classifier, so the proposed model is a hybrid for network traffic classification. Python is used to implement the designed technique, and the metrics accuracy, precision, and recall are considered to analyse it.

KEYWORDS

Network traffic classification, Meta classifier, Base classifier, SVM, KNN.
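
A minimal scikit-learn sketch of the stacking arrangement described above, with an SVM base classifier feeding a KNN meta-classifier, might look as follows; the data here is synthetic, whereas the paper evaluates on real network-traffic records.

```python
# Minimal sketch, assuming scikit-learn; synthetic data stands in for real
# network-traffic features. SVM is the base classifier, KNN the meta-classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = StackingClassifier(
    estimators=[("svm", SVC(probability=True))],   # base classifier
    final_estimator=KNeighborsClassifier(),        # meta-classifier
)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred),
      "precision:", precision_score(y_test, pred),
      "recall:", recall_score(y_test, pred))
```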