Welcome to WiMNET 2021

8th International Conference on Wireless and Mobile Network (WiMNET 2021)

October 29 ~ 30, 2021, Vienna, Austria

Carenvision: A Data-Driven Machine Learning Framework for Automated Car Value Prediction

TianGe (Terence) Chen1, Angel Chang1, Evan Gunnell2, Yu Sun2, 1Rancho Cucamonga High School, Rancho Cucamonga, CA, 91701, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

When people want to buy or sell a personal car, they struggle to know when the timing is best to buy their favorite vehicle at the lowest price or sell it for the most profit. We have developed a program that predicts each car's future value based on experts' opinions and reviews. Our program extracts reviews, which undergo sentiment analysis to become our data in the form of positive and negative sentiment scores. These data are then collected and used to train the machine learning model, which in turn predicts the car's retail price.
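The keywords mention polynomial regression, so the pipeline described in the abstract (sentiment scores in, predicted price out) can be sketched with a simple degree-2 least-squares fit. Everything below, including the data, the degree-2 choice, and the function names, is an invented illustration rather than the authors' actual model.

```python
# Hypothetical sketch of the Carenvision idea: net review-sentiment scores
# over time are fit with a degree-2 polynomial to project a car's price.
# Data and names are illustrative only.

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the normal equations."""
    s = [sum(x**k for x in xs) for k in range(5)]                 # sums of x^0..x^4
    t = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)] # sums of y*x^0..y*x^2
    # Normal equations: [[s4 s3 s2],[s3 s2 s1],[s2 s1 s0]] @ [a,b,c] = [t2,t1,t0]
    A = [[s[4], s[3], s[2]], [s[3], s[2], s[1]], [s[2], s[1], s[0]]]
    b = [t[2], t[1], t[0]]
    # Gaussian elimination (adequate for a 3x3 system).
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef  # [a, b, c]

# Example: months since listing vs. observed resale price (in $1000s).
months = [0, 1, 2, 3, 4]
prices = [30.0, 29.0, 28.5, 28.4, 28.6]
a, b, c = fit_quadratic(months, prices)
predicted_month_5 = a * 25 + b * 5 + c
```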

KEYWORDS

Machine Learning, Polynomial Regression, Artificial Neural Network.


Very deep convolutional neural network for automated image classification

M. Dhilsath Fathima and R. Hariharan, Assistant Professor, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, India

ABSTRACT

Automated image classification is an essential task in the field of computer vision. Image classification refers to the tagging of images into a set of predefined groups, in which a specific image is classified into one of a large number of different categories. Using computer vision to automate image classification is beneficial because manual image evaluation and identification can be time-consuming, particularly when there are many images from different classes. In recent years, deep learning approaches have proven to outperform existing machine learning techniques in a number of fields, and computer vision is one of the most notable examples. Computer vision applies many deep learning techniques to the task of automated image classification. The very deep neural network is a powerful deep learning model for image classification, and this paper examines it using two image datasets: the MNIST handwritten digit dataset and a fashion image dataset, which serve as typical image datasets in the proposed framework to demonstrate the efficacy of very deep neural networks over other deep learning models. The objective of the proposed work is to understand a very deep neural network architecture for an essential image classification task: handwritten digit recognition. This paper analyses a very deep neural network architecture that trains convolutional neural network parameters on the two datasets mentioned above. The feasibility of the proposed model is evaluated using classifier performance metrics such as classification accuracy, standard deviation, and entropy loss. The results of the very deep neural network model are compared to a convolutional neural network and a convolutional neural network with batch normalization. According to the comparison study, very deep neural networks achieve a high accuracy of 98.9% on the handwritten digit dataset and a classification accuracy of 90.84% on the fashion dataset.
The outcome of the proposed work is used to interpret how well a very deep neural network performs when compared to the other two deep neural network models. The proposed architecture may be used to automate handwritten digit classification.

KEYWORDS

Very deep neural network, Convolutional neural network, Batch normalization, Handwritten digit classification.


Political Correctness: The Effect of Gaming and Society

Zhengye Shi, Obridge Academy, NY 11801, USA

ABSTRACT

In this article I approach the controversy over 'political correctness' (PC) in terms of the following sociological questions. 1. Why is creative output apparently focused on achieving social change through the gaming industry? 2. How are we to understand the relationships between the chaos of inequality in the gaming industry and character distortion (gender, race, ethnicity, sexual orientation)? 3. How do we connect globalization and political correctness to video games? The article concludes with a discussion and tactics for contesting critiques.

KEYWORDS

Culture, Discourse, Political Correctness, Video Games.


Best Traffic Signs Recognition based on CNN Model for Self-Driving Cars

Said Gadri, Department of Computer Science, Faculty of Mathematics and Computer Sciences, University of M’sila, M’sila, Algeria, 28000

ABSTRACT

Self-driving cars, or autonomous cars, provide many benefits for humanity, such as reducing deaths and injuries from road accidents, reducing air pollution, and increasing the quality of car control. For this purpose, cameras or sensors are placed on the car, and an efficient control system must be set up. This system receives images from the different cameras and/or sensors in real time, especially those representing traffic signs, and processes them to enable highly autonomous control and driving of the car. Among the most promising algorithms used in this field are convolutional neural networks (CNNs). In the present work, we propose a CNN model composed of several convolutional layers, max-pooling layers, and fully connected layers. As programming tools, we used Python, TensorFlow, and Keras, which are currently the most widely used in the field.
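As a toy illustration of the two layer types the abstract names (not the authors' Keras model), the core convolution and max-pooling operations can be computed by hand on a tiny grayscale "image"; the kernel and input here are invented for demonstration.

```python
# Illustrative only: a valid 2D convolution (cross-correlation, as in most
# deep learning libraries) followed by 2x2 max pooling, on a 4x4 "image".

def conv2d(image, kernel):
    """Valid 2D convolution of a 2D list by a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def maxpool2x2(fmap):
    """2x2 max pooling with stride 2."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[-1, 1]]            # responds to a left-to-right intensity jump
fmap = conv2d(image, edge_kernel)  # 4x3 feature map, peaks at the edge
pooled = maxpool2x2(fmap)          # downsampled to 2x1
```

A real traffic-sign classifier would stack many such layers, learn the kernels during training, and finish with fully connected layers producing class scores.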

KEYWORDS

machine learning, deep learning, traffic signs recognition, Convolutional Neural Networks, autonomous driving, self-driving cars.


Expert Agriculture Prediction System Using Machine Learning

Rashmika Gamage, Hasitha Rajapaksa, Gimhani Hemachandra, Abhiman Sangeeth and Janaka Wijekoon, Department of Software Engineering, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka

ABSTRACT

Agriculture planning plays a dominant role in the economic growth and food security of agriculture-based countries like Sri Lanka. Even though agriculture plays a vital role, there are still several major complications to be addressed, among them a lack of knowledge about yield and price prediction and not knowing how to select the most suitable crops. Machine learning has great potential to solve these complications. We have proposed a novel system consisting of a mobile application, an SMS service, and an API providing yield prediction, price prediction, and crop optimization. Several machine learning algorithms were used for the predictions, while a genetic algorithm was used to optimize crop selection. Yield was predicted considering environmental factors, while price was predicted considering supply and demand, imports and exports, and seasonal effects. The outputs of the yield and price predictions were then used to select the best crops to cultivate. The proposed system can be used to support agricultural decisions.
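To show the flavor of the genetic-algorithm step (the crops, profits, land limits, and GA parameters below are all invented, not the paper's), a toy GA can pick which crops to plant to maximize predicted profit under a land constraint:

```python
# Toy genetic algorithm: a genome is a bit per crop (plant it or not);
# fitness is predicted profit, zero if the plan exceeds the land limit.

import random

crops   = ["rice", "maize", "chili", "onion"]
profit  = [4, 3, 5, 2]     # predicted profit per crop (made-up units)
land    = [3, 2, 4, 1]     # hectares each crop needs (made-up)
MAX_LAND = 6

def fitness(genome):
    used = sum(l for g, l in zip(genome, land) if g)
    if used > MAX_LAND:
        return 0           # infeasible plan
    return sum(p for g, p in zip(genome, profit) if g)

def evolve(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in crops] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(crops))
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.1:              # occasional mutation
                i = rng.randrange(len(crops))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_plan = evolve()
```

In the paper's system the fitness would come from the yield and price predictors rather than fixed numbers.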

KEYWORDS

Machine Learning, Yield Forecasting, Price Forecasting, Genetic Algorithm, Smart Agriculture.


An Intelligent System to Improve Vocabulary and Reading Comprehension using Eye Tracking and Artificial Intelligence

Harrisson Li1, Evan Gunnell2 and Yu Sun2, 1Friends Select School, 1651 Benjamin Franklin Parkway, Philadelphia, PA 19103, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

When reading, many people frequently come across words they struggle with, so they turn to an online dictionary to help them define the word and better comprehend it. However, this conventional method of defining unknown vocabulary tends to be inefficient and ineffective, particularly for individuals who are easily distracted. Therefore, we asked ourselves: how could we develop an application that simultaneously helps define difficult words and improves users' vocabulary while minimizing distraction? In response to that question, this paper goes in depth about an application we created, utilizing an eye-tracking device, to assist users in defining words and enhancing their vocabulary skills. Moreover, it includes supplemental materials such as an image feature, a "search" button, and report generation to better support users' vocabulary.

KEYWORDS

Vocabulary, eye-tracking system, comprehension.


An Adaptive and Interactive 3D Simulation Platform for Physics Education using Machine Learning and Game Engine

Weicheng Wang1 and Yu Sun2, 1Arnold O. Beckman High School, 3588 Bryan Ave, Irvine, CA 92602, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

When undergraduate students first enter the physics field, it might be difficult for them to understand, think about, and imagine what is happening in certain phenomena [6]. For example, when two objects with different masses and velocities collide with each other, how are they going to act? Are they going to stop, bounce away from each other, or stick together? This simulation helps students who do not feel comfortable imagining these scenarios. Currently we have the gravitation lab, the trajectory lab, and the collision lab. The gravitation lab shows a planet orbiting a sun: the user inputs different masses for the sun and planet and the orbital radius (in AU), and the program calculates the gravitational force and orbital period while the planet orbits its sun at a certain speed [7]. The trajectory lab shows an object in projectile motion: the user inputs variables such as initial velocity, angle, height, and acceleration, and the program presents the current position and velocity on the screen as the object moves [8]. In the collision lab, the user inputs the masses and velocities of the two objects and decides whether the collision is elastic; after the user sets up the lab and presses start, the program calculates the total momentum and kinetic energy and displays them on the right side of the screen while the objects collide [9].
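The physics behind the collision lab can be sketched in a few lines; this is a minimal illustration of the standard 1D elastic-collision formulas and the conserved quantities such a lab displays, not the authors' simulation code.

```python
# 1D perfectly elastic collision: final velocities plus the conserved
# quantities (total momentum and kinetic energy) the lab shows on screen.

def elastic_collision(m1, v1, m2, v2):
    """Final velocities after a 1D perfectly elastic collision."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

def momentum(m1, v1, m2, v2):
    return m1 * v1 + m2 * v2

def kinetic_energy(m1, v1, m2, v2):
    return 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2

# Classic special case: equal masses swap velocities.
v1f, v2f = elastic_collision(1.0, 5.0, 1.0, 0.0)  # -> (0.0, 5.0)
```

A perfectly inelastic ("stick together") case would instead share a common final velocity (m1*v1 + m2*v2)/(m1 + m2), conserving momentum but not kinetic energy.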

KEYWORDS

Physics, Simulation, Problem Solving, Animation.


Digital Transformation and its Opportunities for Sustainable Manufacturing

Issam Moghrabi, Abdulrazzaq Alkazemi, Gulf University for Science and Technology, Kuwait

ABSTRACT

This paper explores the impacts of digital technologies on supply chains and coordination, the manufacturing process, energy conservation, efficiency, and environmental conservation. Digital transformation has led to the popularization of sustainable manufacturing, which entails creating sustainable products that promote social, economic, and environmental sustainability. Digital transformation has boosted sustainability in production and manufacturing in a variety of ways. These ways include increasing cross-border communication through the internet, decentralizing supply chains, Internet of Things (IoT) solutions, artificial intelligence, machine learning, big data analytics in predictive analysis, robotics, horizontal and vertical integration of businesses, efficient management, and various other ways. The findings of the paper indicate that digital transformation has changed manufacturing in various ways. Aspects like cloud computing, vertical and horizontal integration, communication, and the internet have contributed to sustainable manufacturing by decentralizing supply chains. In addition, some digital transformation tools such as predictive analysis and big data analytics have helped optimize sustainable manufacturing by reducing overproduction or underproduction through predicting customer demands.

KEYWORDS

Internet of Things, Digital Transformation, Machine Learning, sustainable organization.


AI based E-Learning Solution to Motivate and Assist Primary School Students

Silva P.H.D.D, Sudasinghe S.A.V.D, Hansika P.D.U., Gamage M.P., Gamage M.P.A.W, Faculty of Computing, Sri Lanka Institute of Information Technology (SLIIT), Malabe, Sri Lanka

ABSTRACT

E-learning is a form of providing education using electronic devices. A lack of proper mechanisms to encourage and assist students is a key issue faced by many students in an e-learning environment. "Vidu Mithuru" is a question-based e-learning application that has been developed as a solution to these problems. The mobile application is based on Neural Network, Natural Language Processing, and Machine Learning concepts. The core objective of the proposed solution is to track students' performance levels and assist them in improving their studies while keeping them motivated. The trained Machine Learning models achieved accuracies of 66%, 70.4%, 82%, and 86% for question categorization, speech emotion detection, facial emotion detection, and answer evaluation, respectively. We received favorable responses when we tested the developed "Vidu Mithuru" mobile application among students in grades 3, 4, and 5.

KEYWORDS

Emotion Detection, Generate Questions, Track Performance, Deep Learning & Machine Learning.


Confidentiality and Integrity Mechanisms for Microservices Communication

Lenin Leines, Juan C. Pérez and Héctor X. Limón, School of Statistics and Informatics, Universidad Veracruzana, Xalapa, Veracruz, Mexico

ABSTRACT

The microservice architecture tries to deal with the challenges posed by distributed systems, such as scalability, availability, and deployment, by means of highly cohesive, heterogeneous, and independent microservices. However, this architecture also brings new security challenges related to communication, system design, development, and operation. The literature presents a plethora of security-related solutions for microservices-based systems, but this scattered information makes it difficult for practitioners to adopt novel security solutions. This study focuses on microservices security from a communication standpoint, presenting a catalogue of solutions based on algorithms, protocols, standards, or implementations, each supporting some principle or characteristic of information security [1], and considering the three possible states of data according to the McCumber Cube (storage, transmission, and processing) [2]. The research follows a Systematic Literature Review, synthesizing the results with a meta-aggregation process.

KEYWORDS

Microservices, Software architecture, Secure communication, Information security.


Distributed Automated Software Testing using Automation Framework as a Service

Santanu Ray1 and Pratik Gupta2, 1Ericsson, New Jersey, USA, 2Ericsson, Kolkata, India

ABSTRACT

A conventional test automation framework executes test cases sequentially, which increases execution time. Even when the framework supports running multiple test suites in parallel, system limitations and infrastructure costs often prevent us from doing so. Building and maintaining an automation framework is also time-consuming and costly. This paper presents the design of a scalable test automation framework, offered as a service, which expedites test execution by distributing test suites across multiple services running in parallel without any extra infrastructure.
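As a toy stand-in for the idea of distributing suites across parallel workers (the paper's framework distributes Robot Framework suites across services; the worker pool, suite names, and timings below are invented), a thread pool shows why parallel distribution shortens wall-clock time:

```python
# Illustrative only: run "test suites" in parallel workers instead of
# sequentially. In the paper's setting the workers would be framework
# services; here they are plain threads.

from concurrent.futures import ThreadPoolExecutor
import time

def run_suite(name):
    """Pretend to execute one test suite and report its verdict."""
    time.sleep(0.05)            # stand-in for real test execution
    return (name, "PASS")

suites = [f"suite_{i}" for i in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_suite, suites))
# 8 suites at 0.05 s each would take ~0.4 s sequentially;
# 4 workers finish in roughly two rounds (~0.1 s).
```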

KEYWORDS

Distributed Testing, Robot Framework, Docker, Automation Framework.


Cloud Computing Strategy and Impact in Banking/Financial Services

Prudhvi Parne, Information Technology, Bank of Hope, 1655 E Redondo Beach Blvd, Gardena, CA, USA

ABSTRACT

With recent advances in technology, it is becoming more challenging for banks and financial institutions to safeguard their clients' data, because a wide range of software is launched regularly that enables hackers to access financial information illegally by manipulating figures. When data breaches occur, they are costly to both the bank and its customers. Cloud computing provides a solution to such challenges, making banking a reliable and trustworthy service. This paper examines cloud computing strategy and its impact in banking and financial institutions.

KEYWORDS

Cloud Computing, Technology, Finance, Security.


Intelligent Speed Adaptive System using Image Regression Method for Highway and Urban Roads

Bhavesh Sharma1 and Junaid Ali2, 1Department of Electrical, Electronics and Communication Engineering, Engineering College, Ajmer, India, 2Department of Mechanical Engineering, Indian Institute of Technology, Madras, India

ABSTRACT

The Intelligent Speed Adaptive System (ISAS) is an emerging technology in the field of autonomous vehicles. However, the public acceptance rate of ISAS is drastically low because of several shortcomings, i.e., poor reliability and low accuracy. Various researchers have contributed methodologies to enhance traffic prediction scores and algorithms to improve the overall adaptability of ISAS, but the literature on Image Regression in this range of applications is scarce. Computer vision has proved its worth in object detection for self-driving technology, in which most models are assisted by complex webs of neural nets and live imaging systems. This article discusses some major issues related to present ISAS technology and proposes new methodologies for achieving higher prediction accuracy in controlling vehicle speed through an Image Regression technique, developing a computer vision model that predicts the speed of the vehicle from each frame of live images.

KEYWORDS

Intelligent Systems, Self-Driving Vehicle, Image Processing, Image Regression, Computer Vision, Automotive.


An Algorithm-Adaptive Source Code Converter to Automate the Translation from Python to Java

Eric Jin1 and Yu Sun2, 1Northwood High School, 4515 Portola Parkway, Irvine, CA 92620, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

In the field of computer science, there exist hundreds of different programming languages. They often have different uses and strengths but also a huge number of overlapping abilities [1], especially among the general-purpose languages widely used today, for example Java, Python, and C++ [2]. However, there is a lack of comprehensive methods for converting code from one language to another [3], making the task of converting a program between multiple languages hard and inconvenient. This paper explains in detail how my team designed a tool that converts Python source code into Java code with exactly the same function and features. We applied this converter, or transpiler, to many Python programs and successfully turned them into Java programs. Two qualitative experiments were conducted to test the effectiveness of the converter: 1. converting Python solutions of 5 USA Computing Olympiad (USACO) problems into Java solutions and qualitatively evaluating the correctness of the produced solutions; 2. converting code of various lengths from 10 different users to test the adaptability of the converter on randomized input. The results show that the converter achieves an error rate of less than 10% over the entire code, and the translated code performs exactly the same function as the original code.
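To show the flavor of rule-based source translation (a drastic simplification, not the authors' transpiler, which must handle types, control flow, and scoping), a couple of invented pattern rules can map single Python statements into Java:

```python
# Toy Python-to-Java line translator: each rule is a (pattern, template)
# pair. Real transpilers work on a parsed syntax tree, not regexes.

import re

RULES = [
    (re.compile(r"^print\((.*)\)$"), r"System.out.println(\1);"),
    (re.compile(r"^(\w+)\s*=\s*(\d+)$"), r"int \1 = \2;"),
]

def translate_line(line):
    """Translate one statement if a rule matches; pass it through otherwise."""
    stripped = line.strip()
    for pattern, template in RULES:
        if pattern.match(stripped):
            return pattern.sub(template, stripped)
    return line  # untranslated fallback

java_lines = [translate_line(l) for l in ["x = 5", "print(x)"]]
# -> ["int x = 5;", "System.out.println(x);"]
```

The hard parts the paper tackles, such as inferring Java types and restructuring Python-specific constructs, are exactly what a regex approach like this cannot do.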

KEYWORDS

Algorithm, programming language translation, Python, Java.


Catwalkgrader: A Catwalk Analysis and Correction System using Machine Learning and Computer Vision

Tianjiao Dong1 and Yu Sun2, 1Northwood High School, Irvine, CA 92620, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

In recent years, the modeling industry has attracted many people, causing a drastic increase in the number of modeling training classes. Modeling takes practice, and without professional training, few beginners know whether they are doing it right. In this paper, we present a real-time 2D model-walk grading app based on Mediapipe, a library for real-time, multi-person keypoint detection. After capturing the 2D positions of a person's joints and skeletal wireframe from an uploaded video, our app uses a scoring formula to provide accurate scores and tailored feedback on each user's modeling skills.

KEYWORDS

Runway model, Catwalk Scoring, Flutter, Mediapipe.


A Context-Aware and Immersive Puzzle Game using Machine Learning and Big Data Analysis

Peiyi Li1, John Morris2 and Yu Sun2, 1University of California, Irvine, Irvine, CA 92697, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

In recent years, video games have become one of the main forms of entertainment for people of all ages, and millions of players publicly share screenshots and experiences of playing games [4]. The puzzle game is a popular genre that challenges players to find the correct solution to various logic and conceptual problems. However, designing a good puzzle game is not an easy task [5]. This paper presents the design of a puzzle game for players of all ages, with appropriate difficulty levels, varied puzzle mechanics, and attractive background stories. We had different players test the game and conducted a qualitative evaluation of the approach. The results show that the pacing of a puzzle game strongly affects the play experience, and that the difficulty level of the puzzles affects players' feelings toward the game.

KEYWORDS

Puzzle Game, Game Design, Video Games, Adventure Game.


Using Different Assessment Indicators in Supporting Online Learning

Yew Kee Wong, HuangHuai University, Zhumadian, Henan, China

ABSTRACT

The assessment outcome for many online learning methods is based on the number of correct answers, which is then converted into one final mark or grade. We discovered that online learning allows us to extract more detailed information from the learning process, and this information is useful for the assessor in planning an effective and efficient learning model for the learner. Statistical analysis is an important part of assessing online learning outcomes. The assessment indicators include the difficulty level of each question, the time spent answering, and the variation in chosen answers. In this paper we present our findings on these assessment indicators and how they can improve the way the learner is assessed in an online learning system. We developed a statistical analysis algorithm which can assess online learning outcomes more effectively using quantifiable measurements, and a number of examples of using this algorithm are presented.
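The three indicators the abstract names can be computed from per-response records; the field names, data, and exact definitions below are an invented illustration (for instance, difficulty as the share of wrong answers), not the paper's algorithm.

```python
# Hypothetical per-(learner, question) records and the three indicators:
# question difficulty, time spent, and variation in chosen answers.

from statistics import mean, pstdev

responses = [
    {"question": "Q1", "answer": "A", "correct": True,  "seconds": 12},
    {"question": "Q1", "answer": "B", "correct": False, "seconds": 40},
    {"question": "Q1", "answer": "A", "correct": True,  "seconds": 18},
    {"question": "Q2", "answer": "C", "correct": True,  "seconds": 9},
    {"question": "Q2", "answer": "C", "correct": True,  "seconds": 11},
]

def indicators(responses, question):
    rows = [r for r in responses if r["question"] == question]
    return {
        "difficulty": 1 - mean(r["correct"] for r in rows),  # share answering wrong
        "avg_time": mean(r["seconds"] for r in rows),
        "time_spread": pstdev(r["seconds"] for r in rows),
        "distinct_answers": len({r["answer"] for r in rows}),  # answer variation
    }
```

Aggregating these per-question indicators across a session gives the assessor a richer picture than a single final mark.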

KEYWORDS

Artificial Intelligence, Assessment Indicator, Online Learning, Statistical Analysis Algorithm.


An Intelligent System to Automate Humidity Monitoring and Humidifier Control using Internet-of-Things (IoT) and Artificial Intelligence

Qian Zhang1 and Yu Sun2, 1Jserra High School, San Juan Capistrano, CA 92675, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

Air conditioners are widely used in family homes all over the world. However, the side effects of air conditioning and dehumidification can cause health problems if people remain in low-humidity environments. This paper traces the development of a software application and system for an intelligent humidifier that turns on or off automatically, for convenience or for those who cannot operate manual controls. We applied our application to a humidifier for several days and conducted a qualitative evaluation of the approach. The results affirmed the usability and capacity of our automatic control system.
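The core on/off decision such a controller makes can be sketched as threshold logic with a dead band; the thresholds and function name here are invented for illustration and are not the paper's model.

```python
# Minimal sketch of an automatic humidifier decision: turn on when too dry,
# off when humid enough, and hold inside a dead band so the device does not
# rapidly toggle near the target humidity.

def humidifier_action(humidity_pct, low=35.0, high=55.0):
    """Return the control action for one relative-humidity reading."""
    if humidity_pct < low:
        return "on"      # too dry: start humidifying
    if humidity_pct > high:
        return "off"     # humid enough: stop
    return "hold"        # inside the dead band: keep current state

readings = [30.2, 41.0, 58.7]
actions = [humidifier_action(h) for h in readings]  # ['on', 'hold', 'off']
```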

KEYWORDS

IoT, Machine Learning, Deep Learning, Artificial Intelligence.


An Intelligent Data-Driven Analytics System to Assist Sports Player Training and Improvement using Internet-Of-Things (IoT) and Big Data Analysis

Julius Wu1, Jerry Wang2, Jonathan Sahagun3 and Yu Sun4, 1Irvine High School, Irvine, USA, 2SMIC Private School, Shanghai, China, 3California State University, Los Angeles, CA, 91706, 4California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

Our product is a unique tracking tool that tracks not only the movement of players on a map but also the velocity of each player. We provide an application that coaches can use during a game or a practice. It shows coaches an accurate data sample of where each player is and what they are doing on the field, whether working hard or fooling around. It also helps coaches review accurate gameplay when a recording of the game is not available. When coaches select elite players, they also get a presentation of each player's skills and of how accurately they run different routes.

KEYWORDS

IoT, Machine learning, Data Mining.


Current Security Topics and Evolving Risk Mapping Leveraging LDA Machine Learning Models

Joshua Scarpino, Marymount University, Arlington, Virginia, 22207, USA

ABSTRACT

This is a study of the application of Gensim's LDA model to the identification of critical topics within cybersecurity by leveraging social media user feeds. The research was intended to sharpen focus on trending topics that are critical to security as the threat landscape evolves. It provides an opportunity to expand threat intelligence by leveraging security professionals as a critical source of intel, which can help focus security awareness training materials and assist in the possible early identification of emerging threats.

KEYWORDS

Gensim Latent Dirichlet Allocation, Social Media, Security Awareness.


A Data-Driven Method for Capturing Comorbidity Structure in Mental Disorders

Hojjatollah Farahani1, Parviz Azadfallah2, and Peter Watson3, 1,2Department of Psychology, Tarbiat Modares University, Iran, 3Cognition and Brain Unit, University of Cambridge, UK

ABSTRACT

The concurrent presence of one mental disorder with another is common in clinical practice and in comorbidity structure research. In this study we look at the structure of comorbidity, assessing the degree of overlap among the measured signs and symptoms of two mental disorders. The newly advanced graphical statistical method of network analysis is introduced and described. This data-driven method helps researchers of the mind capture the most important relationships among variables in a complex system. The stages for running the network analysis using R software are explained, and the accuracy and stability of centrality measures are investigated using bootstrapping. As a practical example, the method was applied to data obtained from 254 Multiple Sclerosis (MS) patients to capture the comorbidity structure between depressive and anxiety symptoms. The results are presented and discussed. Network analysis, as a data-driven model, can be of interest to all researchers of the mind, especially those working in clinical, cognitive, and social psychology.
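As a rough illustration of the network idea (the paper itself uses R packages; the symptom names, scores, and edge threshold below are invented), a symptom network can be built from pairwise correlations and nodes ranked by strength centrality:

```python
# Toy symptom network: nodes are symptoms, edges are correlations above a
# made-up threshold, and strength centrality is the sum of |edge weights|.

from itertools import combinations
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Invented symptom scores across 6 patients.
symptoms = {
    "sadness": [3, 4, 2, 5, 4, 1],
    "fatigue": [3, 5, 2, 4, 4, 2],
    "worry":   [1, 2, 4, 2, 3, 5],
}

# Keep an edge when |r| exceeds a (made-up) threshold.
edges = {}
for a, b in combinations(symptoms, 2):
    r = pearson(symptoms[a], symptoms[b])
    if abs(r) > 0.3:
        edges[(a, b)] = r

# Strength centrality: how tightly each symptom is tied into the network.
strength = {s: sum(abs(r) for pair, r in edges.items() if s in pair)
            for s in symptoms}
```

The bootstrapping step the abstract mentions would repeat this construction on resampled patients to check how stable the edges and centralities are.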

KEYWORDS

Clinical, Cognitive, Psychology, Network analysis, Statistics, Multiple sclerosis.


Business Intelligence and Data Warehouse Technologies for Traffic Accident Data Analysis in Botswana

Monkgogi Mudongo, Edwin Thuma, Nkwebi Peace Motlogelwa, Tebo Leburu-Dingalo and Pulafela Majoo, Department of Computer Science, University of Botswana, Gaborone, Botswana

ABSTRACT

Road traffic accidents are a serious problem for the nation of Botswana. A large amount of money is used to compensate those who are affected by road accidents. According to Mphela [1], traffic accidents are the second largest cause of death after HIV/AIDS in Botswana. It is therefore important for relevant organizations to have a reliable source of data for accurate evaluation of traffic accidents. Similarly, data on vehicle registration must be transformed and be readily available to assist managerial decision makers. In this article, we deploy a Business Intelligence (BI) and Data Warehouse (DW) solution in an attempt to assist the relevant departments in their evaluation of road traffic accidents and vehicle registration. In our evaluation of the traffic accidents, our findings suggest that across accident severity, Damage Only accidents had the most interesting recent trend, with an 11.93% decrease over the last 3 years on record. The count of Damage Only accidents dropped from 13,491 to 11,881 between 2018 and 2020, whilst Minor accidents experienced the longest period of growth. Most accidents take place in rural locations, and more accidents take place during the weekend. At 28,439, Sunday had the highest number of accidents, 47.59% higher than Wednesday, which had the lowest count of accidents at 19,269. The results for vehicle registration reveal that the number of vehicle registrations decreased over the last 3 years on record, dropping from 65,535 to 24,457 during the steepest decline between 2019 and 2021.
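The trend figures quoted in the abstract reduce to simple percentage changes; this snippet just recomputes them from the counts given above.

```python
# Recompute the percentage figures from the counts in the abstract.

def pct_change(old, new):
    """Percentage change from old to new (negative means a decrease)."""
    return (new - old) / old * 100

damage_only = pct_change(13491, 11881)    # Damage Only, 2018 -> 2020
sunday_vs_wed = pct_change(19269, 28439)  # Sunday count relative to Wednesday
```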

KEYWORDS

Business Intelligence, Data Warehousing, ETL, Accident and Vehicle registration.


Advanced Deep Learning Model

Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China

ABSTRACT

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the different deep learning algorithms and methods which can be applied to artificial intelligence analysis, as well as the opportunities provided by the application in various decision making domains.

KEYWORDS

Artificial Intelligence, Machine Learning, Deep Learning.


Global Research Decentralized Autonomous Organization (GR-DAO): A DAO of Global Researchers

Kelly L. Page1 and Adel Elmessiry2, 1LWYL Studio, Chicago, IL, USA, 2AlphaFin, Nashville, TN, USA

ABSTRACT

The latest trend in blockchain formation is to utilize decentralized autonomous organizations (DAOs) in many verticals. To date, little attention has been given to the global research domain, due to the difficulty of creating a comprehensive framework that can marry cutting-edge, academic-grade scientific research with a decentralized governance body of researchers. A global research decentralized autonomous organization (GR-DAO) would have a profound impact on the research community academically, commercially, and for the public good. In this paper, we propose the GR-DAO as a global community of researchers committed to collectively creating knowledge and sharing it with the world. Scientific research is the means for knowledge creation and learning. The GR-DAO provides the guidance, community, and technological solutions for the evolution of a global research infrastructure and environment. Through its design, the GR-DAO embraces, enhances, and extends the model of research, research on decentralization, and the DAO as a model for decentralized and autonomous organizing. This design, in turn, improves most of the uses for and applications of research for the greater good of society. The paper examines the core motivation, purpose, and design of the GR-DAO, its strategy to embrace, enhance, and extend the research ecosystem, and the GR-DAO's design uses across the DAO ecosystem.

KEYWORDS

Scientific Research, Researcher, Research, Knowledge, Learning, Cocreated Knowledge, Applied Research, Decentralized Autonomous, Organization, DAO, Research Model, Research Activity, Blockchain, Emerging Technology, Incentive Design, Reputation Staking, Distributed Ledger Technology, Decentralized Infrastructure.


Proof of Renewable (PoR): The ROBe2 Protocol

Tom Davis1 and Adel Elmessiry2, 1Renewable Energy Alliance, USA, 2Crypto Bloc Tech, USA

ABSTRACT

We are at a serious crossroads as it relates to carbon emissions and the condition of our planet. Global conditions are spiraling out of control. Climate change is widespread, occurring extremely fast, and intensifying. The consumption of nonrenewable energy sources is impacting both the environment and the economy in equal proportions. Up to this point, society has tried to solve these problems with local solutions, but we have fallen short. The missing component in solving the global problem is an alignment of individuals and organizations coming together, taking responsibility, and creating global solutions to meet the goal of being carbon negative by 2050. In this paper, we propose the ROBe2 protocol as the global solution that brings everyone together to solve these very important issues. Renewable Obligation Base energy economy (ROBe2) is a protocol that attempts to aggregate local renewable energy solutions into a global impact while providing an economically sound framework and creating an economic incentive for using renewable energy in place of fossil fuels [1].

KEYWORDS

Scientific Research, Researcher, Research, Knowledge, Learning, Applied Research, Decentralized Autonomous Organization (DAO), Research Model, Research Activity, Blockchain, Emerging Technology, Incentive Design, Reputation Staking, Distributed Ledger Technology, Decentralized Infrastructure, Renewable, Renewable Energy.


Fast implementation of elliptic curve cryptographic algorithm on GF(3^m) based on FPGA

Tan Yongliang, He Lesheng, Jin Haonan, Kong Qingyang, Information Institute, Yunnan University, Kunming, China

ABSTRACT

As quantum computing and the theory of bilinear pairings continue to be studied in depth, elliptic curves over GF(3^m) are attracting increasing interest because they provide higher security. Moreover, because hardware encryption is more efficient and secure than software encryption in today's IoT security environment, this paper implements the scalar multiplication algorithm for elliptic curves over GF(3^m) on an FPGA platform. Arithmetic in the finite field is implemented quickly using bit-oriented operations, and the computation speed of point addition and point doubling is then improved with a modified Jacobian projective coordinate system. The final experimental results demonstrate that the design consumes a total of 7518 slices and can compute approximately 3000 scalar multiplications per second at 124 MHz. It has relative advantages in terms of performance and resource consumption and can be applied as an IP core in specific confidential communication scenarios.
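The scalar-multiplication loop at the heart of such a design can be sketched in a few lines. For illustration only: this minimal double-and-add works in affine coordinates over a small prime field GF(p), not the GF(3^m) field or the modified Jacobian projective coordinates the paper uses; the curve, point, and function names below are ours.

```python
# Minimal double-and-add scalar multiplication on y^2 = x^3 + a*x + b over
# GF(p), p prime. Sketch of the control flow only; the paper's FPGA design
# works over GF(3^m) with projective coordinates.

def inv_mod(a, p):
    """Modular inverse via Fermat's little theorem (p prime)."""
    return pow(a, p - 2, p)

def point_add(P, Q, a, p):
    """Add two curve points; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = identity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(k, P, a, p):
    """Left-to-right double-and-add: computes k*P."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R, a, p)      # double every iteration
        if bit == '1':
            R = point_add(R, P, a, p)  # add when the scalar bit is set
    return R
```

On the textbook curve y^2 = x^3 + 2x + 2 over GF(17), the point (5, 1) has order 19, so `scalar_mult(19, (5, 1), 2, 17)` returns the identity.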

KEYWORDS

GF(3^m), Elliptic Curve Cryptography, Scalar Multiplication, FPGA, IoT Security.


A Mobile Platform for Food Donation and Delivery System using AI and Machine Learning

George Zhou1, Marisabel Chang2 and Yu Sun2, 1USA, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

Within the last year, through the turmoil of the Covid-19 pandemic, an increasing number of families and individuals have experienced food insecurity due to job loss, illness, or other financial struggles [4]. Many families in the Orange County area and beyond are turning to free food sources such as community food pantries and food banks. Using targeted surveys of food-insecure families, we discovered a need for a solution that enhances the accessibility and usability of food pantries [5]. Therefore, we created a software application that uses artificial intelligence to locate specific items for users to request, and allows volunteers to see those requests, pick up the resources from food pantries, and deliver them directly to individuals' homes. This paper describes the process by which this idea was created and applied, along with a qualitative evaluation of the approach. The results show that the software application allowed families and individuals to receive quality groceries at a much higher frequency, regardless of multiple constraints.

KEYWORDS

Mobile Platform, machine learning, data mining.


A Partly Coherent Jamming Technology against Synthetic Aperture Radar

Shi Yuhan and Huang HongXu, College of Computer Science and Information Technology, Central South University of Forestry and Technology, Changsha, Hunan, China

ABSTRACT

A new partly coherent jamming technology against Synthetic Aperture Radar (SAR) is proposed in this paper. Its signal has a stepped repeater time delay and a random inter-pulse phase. The research shows that this interference is intra-pulse coherent and inter-pulse non-coherent with the SAR system, and its imaging output resembles a parallelogram. Compared with other inter-pulse non-coherent region jamming technologies, such as random shift-frequency jamming, the proposed jamming obtains more intra-pulse matched-processing gain from the SAR.

KEYWORDS

Synthetic Aperture Radar (SAR), partly coherent jamming, inter-pulse non-coherence, stepped time-delay.


Cascaded Segmentation Network based on Double Branch Boundary Enhancement

Li Zeng1, Hongqiu Wang1, Xin Wang3, Miao Tian1* and Shaozhi Wu1,2*, 1University of Electronic Science and Technology of China, Chengdu, 611731, China, 2Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, Zhejiang 324000, China, 3Department of Abdominal Oncology, Cancer Center, West China Hospital, Sichuan University

ABSTRACT

Cervical cancer is one of the most common causes of cancer death in women. During the treatment of cervical cancer, a radiation plan must be made based on the clinical target volume (CTV) in CT images. At present, the CTV is sketched manually by physicists, which is time-consuming and laborious. With the help of deep learning models, a computer can accurately delineate the CTV contour. In this paper, we propose CDBNet, a cascaded segmentation network based on double-branch boundary enhancement. First, a classification network determines whether a single image contains a region of interest (ROI); then the segmentation network uses DBNet to segment the ROI contour more accurately. CDBNet was verified on the cervical cancer dataset provided by the Department of Radiation Oncology, West China Hospital, Sichuan Province. The average Dice and 95HD of the delineation results are 86.12% and 2.51 mm, respectively. The classification accuracy for whether an image contains an ROI reaches 93.19%, and the average Dice on images containing an ROI reaches 70%.

KEYWORDS

CTV delineation, cascade, segmentation, boundary enhancement.


Fast Convolution Based on Winograd Minimum Filtering: Introduction and Development

Gan Tong and Libo Huang, School of Computer, National University of Defense Technology, Changsha, China

ABSTRACT

Convolutional Neural Networks (CNNs) have been widely used in various fields and play an important role. Convolution operators are the fundamental component of convolutional neural networks and also the most time-consuming part of network training and inference. In recent years, researchers have proposed several fast convolution algorithms, including FFT and Winograd. Among them, Winograd convolution significantly reduces the number of multiplications in convolution and also occupies less memory than FFT convolution. Winograd convolution has therefore quickly become the first choice for fast convolution implementations. At present, there is no systematic survey of Winograd convolution. This article aims to fill this gap and provide a detailed reference for follow-up researchers. It summarizes the development of Winograd convolution from three aspects: algorithm expansion, algorithm optimization, and implementation and application, and finally offers a brief outlook on possible future directions.
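The multiplication saving the abstract refers to can be made concrete with the smallest Winograd instance, F(2,3): two outputs of a 3-tap 1D convolution computed with 4 multiplications instead of the 6 a direct computation needs. A minimal sketch (function names ours):

```python
# Winograd minimal filtering F(2,3): 2 outputs of a 1D valid correlation with
# a 3-tap filter using 4 multiplications (m0..m3) instead of 6.

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 outputs."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Filter transform (precomputable once per filter)
    G0 = g0
    G1 = (g0 + g1 + g2) / 2
    G2 = (g0 - g1 + g2) / 2
    G3 = g2
    # The four multiplications, each pairing a transformed input with a
    # transformed filter value
    m0 = (d0 - d2) * G0
    m1 = (d1 + d2) * G1
    m2 = (d2 - d1) * G2
    m3 = (d1 - d3) * G3
    # Output transform
    return [m0 + m1 + m2, m1 - m2 - m3]

def direct(d, g):
    """Direct 1D valid correlation for comparison (6 multiplications)."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]
```

2D Winograd convolution such as F(2x2, 3x3) nests this transform in both dimensions, which is where the well-known 2.25x multiplication reduction comes from.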

KEYWORDS

Winograd Minimum Filtering, Winograd Convolution, Fast Convolution, Convolution Optimization.


Efficient Implementation of a Digital Chirp Generator

Andreas Falkenberg, Metawave Corporation, Carlsbad, CA, USA

ABSTRACT

This paper presents a novel and efficient implementation of a digital chirp generator. The presented solution is derived from the implementation of the well-known CORDIC algorithm for sine and cosine waveform generation. Chirp generators are used in radar applications, but the solution presented here can be applied to any problem that requires an efficient chirp generator. It can be implemented on a DSP processor as well as on an FPGA or a general-purpose CPU.
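The idea of driving a CORDIC sine/cosine stage from a phase accumulator whose increment grows each sample (a linear FM chirp) can be sketched as follows. This floating-point sketch is ours, not the paper's fixed-point design; iteration count and parameter names are illustrative.

```python
import math

# Rotation-mode CORDIC for cos/sin, fed by a phase accumulator whose
# increment grows linearly each sample -- the essence of a digital linear
# chirp generator.

N_ITER = 24
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITER)]
K = 1.0
for i in range(N_ITER):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))  # pre-scale out CORDIC gain

def cordic_cos_sin(theta):
    """cos/sin via CORDIC; theta is first reduced to [-pi/2, pi/2]."""
    theta = math.remainder(theta, 2 * math.pi)
    sign = 1.0
    if theta > math.pi / 2:
        theta -= math.pi
        sign = -1.0            # cos(t) = -cos(t - pi), sin likewise
    elif theta < -math.pi / 2:
        theta += math.pi
        sign = -1.0
    x, y, z = K, 0.0, theta
    for i in range(N_ITER):
        d = 1.0 if z >= 0 else -1.0        # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return sign * x, sign * y

def chirp(n_samples, f0, df):
    """Linear chirp: phase increment starts at 2*pi*f0, grows by 2*pi*df."""
    phase, inc, out = 0.0, 2 * math.pi * f0, []
    for _ in range(n_samples):
        out.append(cordic_cos_sin(phase)[0])
        phase += inc
        inc += 2 * math.pi * df            # frequency ramps linearly
    return out
```

In hardware the same structure becomes shift-and-add datapaths: the `2.0 ** -i` scalings are bit shifts and the phase/increment accumulators are fixed-point registers.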

KEYWORDS

Chirp, CORDIC, Radar, FMCW.



Carenvision: A Data-Driven Machine Learning Framework for Automated Car Value Prediction

TianGe (Terence) Chen1, Angel Chang1, Evan Gunnell2, Yu Sun2, 1Rancho Cucamonga High School, Rancho Cucamonga, CA, 91701, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

When people want to buy or sell a personal car, they struggle to know when the timing is best to buy their favorite vehicle at the best price or sell it for the most profit. We have developed a program that predicts each car's future value based on experts' opinions and reviews. Our program extracts reviews, which undergo sentiment analysis to become our data in the form of positive and negative sentiment. The data is then collected and used to train a machine learning model, which in turn predicts the car's retail price.
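The sentiment-to-price pipeline the abstract describes can be sketched end to end. Everything below is illustrative: the word lists, review texts, and prices are made up, the sentiment scorer is a crude word-count stand-in for real sentiment analysis, and a plain linear fit stands in for the paper's polynomial regression and neural network.

```python
# Toy pipeline: score reviews with a word-list sentiment model, then fit a
# least-squares line from sentiment score to retail price. All data and word
# lists are hypothetical.

POSITIVE = {"reliable", "comfortable", "efficient", "excellent", "smooth"}
NEGATIVE = {"noisy", "expensive", "unreliable", "cramped", "poor"}

def sentiment_score(review):
    """(#positive - #negative) words: a crude sentiment stand-in."""
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

reviews = ["reliable and comfortable ride", "noisy and expensive to maintain",
           "excellent efficient engine", "poor cramped unreliable interior"]
prices = [24000, 18000, 26000, 15000]   # hypothetical retail prices

xs = [sentiment_score(r) for r in reviews]
slope, intercept = fit_line(xs, prices)

def predict_price(review):
    """Map a new review's sentiment onto the fitted price trend."""
    return slope * sentiment_score(review) + intercept
```

With real data, the single sentiment feature would be replaced by per-model review aggregates over time, and the linear fit by the higher-order regression the paper names.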

KEYWORDS

Machine Learning, Polynomial Regression, Artificial Neural Network.


Very deep convolutional neural network for an automated image classification

M.Dhilsath Fathima and R Hariharan, Assistant Professor, Vel Tech Rangarajan Dr.Sagunthala R&D Institute of Science and Technology, India

ABSTRACT

Automated image classification is an essential task in the computer vision field. The tagging of images into a set of predefined groups is referred to as image classification, whereby a specific image is classified into one of a large number of categories. Using computer vision to automate image classification is beneficial because manual image evaluation and identification can be time-consuming, particularly when there are many images of different classes. In recent years, deep learning approaches have been shown to outperform existing machine learning techniques in a number of fields, and computer vision is one of the most notable examples. Computer vision applies many deep learning techniques to automated image classification tasks. The very deep neural network is a powerful deep learning model for image classification, and this paper examines it using the MNIST handwritten digit and fashion image datasets to demonstrate the efficacy of very deep neural networks over other deep learning models. An objective of this work is to understand a very deep neural network architecture for an essential image classification task: handwritten digit recognition. The paper analyses a very deep neural network architecture for training convolutional neural network parameters on the two datasets mentioned above. The feasibility of the proposed model is evaluated using classifier performance metrics such as accuracy, standard deviation, and entropy loss. The results of the very deep neural network model are compared to a convolutional neural network and a convolutional neural network with batch normalization. According to the comparison study, very deep neural networks achieve a high accuracy of 98.9% on the handwritten digit dataset and 90.84% classification accuracy on the fashion dataset. The outcome of the proposed work is used to interpret how well a very deep neural network performs in comparison to the other two deep neural network models. The proposed architecture may be used to automate the classification of handwritten digits.

KEYWORDS

Very deep neural network, Convolutional neural network, batch normalization, handwritten digit classification.


Political Correctness: The Effect of Gaming and The Society

Zhengye Shi, Obridge Academy, NY 11801, USA

ABSTRACT

In this article I approach the controversy over 'political correctness' (PC) in terms of sociological questions, as follows. 1. Why is creative output apparently focused on achieving social change through the gaming industry? 2. How are we to understand the relationships between inequality in the gaming industry and the distortion of characters (gender, race, ethnicity, sexual orientation)? 3. How do we connect globalization and political correctness to video games? The article concludes with a discussion and tactics for contesting critiques.

KEYWORDS

Culture, Discourse, Political Correctness, Video Games.


Best Traffic Signs Recognition based on CNN Model for Self-Driving Cars

Said Gadri, Department of Computer Science, Faculty of Mathematics and Computer Sciences, University of M’sila, M’sila, Algeria, 28000

ABSTRACT

Self-driving or autonomous cars provide many benefits for humanity, such as reducing deaths and injuries in road accidents, reducing air pollution, and increasing the quality of car control. For this purpose, cameras or sensors are placed on the car, and an efficient control system must be set up. This system receives images from the different cameras and/or sensors in real time, especially those showing traffic signs, and processes them to enable highly autonomous control and driving of the car. Among the most promising algorithms used in this field are convolutional neural networks (CNNs). In the present work, we propose a CNN model composed of several convolutional layers, max-pooling layers, and fully connected layers. As programming tools, we used Python, TensorFlow, and Keras, which are currently the most widely used in the field.
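The two layer types the model stacks before its fully connected layers can be illustrated with a minimal pure-Python forward pass (no training, no Keras); this is only a sketch of the data flow, not the paper's implementation.

```python
# Forward pass of a single-channel convolutional layer and a max-pooling
# layer, the building blocks of the CNN described in the abstract.

def conv2d(image, kernel):
    """Valid 2D cross-correlation of an HxW image with a kxk kernel."""
    H, W, k = len(image), len(image[0]), len(kernel)
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(k) for b in range(k))
             for j in range(W - k + 1)]
            for i in range(H - k + 1)]

def maxpool2d(fmap, size=2):
    """Non-overlapping max pooling with a size x size window."""
    H, W = len(fmap), len(fmap[0])
    return [[max(fmap[i + a][j + b]
                 for a in range(size) for b in range(size))
             for j in range(0, W - size + 1, size)]
            for i in range(0, H - size + 1, size)]
```

In the real model, frameworks like Keras apply many such kernels per layer (learned during training) and flatten the final pooled maps into the fully connected classifier that outputs the traffic-sign class.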

KEYWORDS

machine learning, deep learning, traffic signs recognition, Convolutional Neural Networks, autonomous driving, self-driving cars.


Expert Agriculture Prediction System Using Machine Learning

Rashmika Gamage, Hasitha Rajapaksa, Gimhani Hemachandra, Abhiman Sangeeth and Janaka Wijekoon, Department of Software Engineering, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka

ABSTRACT

Agriculture planning plays a dominant role in the economic growth and food security of agriculture-based countries like Sri Lanka. Even though agriculture plays a vital role, several major complications remain to be addressed, including a lack of knowledge about yield and price prediction and not knowing how to select the most suitable crops. Machine learning has great potential to solve these complications. We propose a novel system consisting of a mobile application, SMS service, and API with yield and price prediction and crop optimization. Several machine learning algorithms were used for the predictions, while a genetic algorithm was used to optimize crop selection. Yield was predicted from environmental factors, while price was predicted from supply and demand, imports and exports, and seasonal effects. The outputs of the yield and price predictions were then used to select the most suitable crops to cultivate. The proposed system can be used to support agricultural decisions.

KEYWORDS

Machine Learning, Yield Forecasting, Price Forecasting, Genetic Algorithm, Smart Agriculture.


An Intelligent System to Improve Vocabulary and Reading Comprehension using Eye Tracking and Artificial Intelligence

Harrisson Li1, Evan Gunnell2 and Yu Sun2, 1Friends Select School, 1651 Benjamin Franklin Parkway, Philadelphia, PA 19103, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

When reading, many people frequently come across words they struggle with, so they turn to an online dictionary to define the word and better comprehend it. However, this conventional method of defining unknown vocabulary is inefficient and ineffective, particularly for individuals who are easily distracted. Therefore, we asked ourselves: how could we develop an application that simultaneously helps define difficult words and improves users' vocabulary while minimizing distraction? In response, this paper goes in depth about an application we created, utilizing an eye-tracking device, to assist users in defining words and enhancing their vocabulary skills. Moreover, it includes supplemental materials such as an image feature, a "search" button, and a generated report to better support users' vocabulary.

KEYWORDS

Vocabulary, eye-tracking system, comprehensive.


An Adaptive and Interactive 3D Simulation Platform for Physics Education using Machine Learning and Game Engine

Weicheng Wang1 and Yu Sun2, 1Arnold O. Beckman High School, 3588 Bryan Ave, Irvine, CA 92602, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

When undergraduate students first enter the physics field, it can be difficult for them to understand, think about, and imagine what is happening in certain phenomena [6]. For example, when two objects with different masses and velocities collide, how will they act? Will they stop, bounce away from each other, or stick together? This simulation helps students who are not comfortable imagining these scenarios. Currently we have the gravitation lab, the trajectory lab, and the collision lab. In the gravitation lab, a planet orbits a sun; the user inputs the masses of the sun and planet and the orbital radius (in AU), and the program calculates the gravitational force and orbital period while the planet orbits its sun at the corresponding speed [7]. In the trajectory lab, an object undergoes projectile motion; the user inputs variables such as initial velocity, angle, height, and acceleration, and the program displays the current position and velocity on the screen as the object moves [8]. In the collision lab, the user inputs the masses and velocities of two objects and chooses whether the collision is elastic; after the lab is set up and started, the program calculates the total momentum and kinetic energy and displays them on the right side of the screen while the objects collide [9].
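The physics behind the collision lab reduces to conservation of momentum and, in the elastic case, kinetic energy. A minimal sketch of the 1D update (function and variable names are ours, not the simulator's):

```python
# 1D collision outcomes from conservation laws: elastic collisions conserve
# both momentum and kinetic energy; perfectly inelastic ones conserve only
# momentum (the objects stick together).

def collide_1d(m1, v1, m2, v2, elastic=True):
    """Return the final velocities (u1, u2) after a 1D collision."""
    if elastic:
        # Standard elastic-collision formulas derived from
        # m1*v1 + m2*v2 = m1*u1 + m2*u2 and equal kinetic energies
        u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
        u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
        return u1, u2
    # Perfectly inelastic: common final velocity of the merged objects
    u = (m1 * v1 + m2 * v2) / (m1 + m2)
    return u, u
```

A familiar sanity check: for equal masses, an elastic collision simply swaps the two velocities.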

KEYWORDS

Physics, Simulation, Problem Solving, Animation.


Digital Transformation and its Opportunities for Sustainable Manufacturing

Issam Moghrabi, Abdulrazzaq Alkazemi, Gulf University for Science and Technology, Kuwait

ABSTRACT

This paper explores the impacts of digital technologies on supply chains and coordination, the manufacturing process, energy conservation, efficiency, and environmental conservation. Digital transformation has led to the popularization of sustainable manufacturing, which entails creating sustainable products that promote social, economic, and environmental sustainability. Digital transformation has boosted sustainability in production and manufacturing in a variety of ways. These ways include increasing cross-border communication through the internet, decentralizing supply chains, Internet of Things (IoT) solutions, artificial intelligence, machine learning, big data analytics in predictive analysis, robotics, horizontal and vertical integration of businesses, efficient management, and various other ways. The findings of the paper indicate that digital transformation has changed manufacturing in various ways. Aspects like cloud computing, vertical and horizontal integration, communication, and the internet have contributed to sustainable manufacturing by decentralizing supply chains. In addition, some digital transformation tools such as predictive analysis and big data analytics have helped optimize sustainable manufacturing by reducing overproduction or underproduction through predicting customer demands.

KEYWORDS

Internet of Things, Digital Transformation, Machine Learning, sustainable organization.


AI based E-Learning Solution to Motivate and Assist Primary School Students

Silva P.H.D.D, Sudasinghe S.A.V.D, Hansika P.D.U., Gamage M.P., Gamage M.P.A.W, Faculty of Computing, Sri Lanka Institute of Information Technology (SLIIT), Malabe, Sri Lanka

ABSTRACT

E-learning is a form of providing education using electronic devices. The lack of proper mechanisms to encourage and assist students is a key issue faced by many students in an e-learning environment. "Vidu Mithuru" is a question-based e-learning application developed as a solution to overcome these problems. The mobile application is based on neural networks, natural language processing, and machine learning concepts. The core objective of the proposed solution is to track the performance level of students and assist them in improving their studies while keeping them motivated. The trained machine learning models achieved accuracies of 66%, 70.4%, 82%, and 86% for question categorization, speech emotion detection, facial emotion detection, and answer evaluation, respectively. We received favorable responses after testing the developed "Vidu Mithuru" mobile application among students in the 3rd, 4th, and 5th grades.

KEYWORDS

Emotion Detection, Generate Questions, Track Performance, Deep Learning & Machine Learning.


Confidentiality and Integrity Mechanisms for Microservices Communication

Lenin Leines, Juan C. Pérez and Héctor X. Limón, School of Statistics and Informatics, Universidad Veracruzana, Xalapa, Veracruz, Mexico

ABSTRACT

The microservice architecture addresses the challenges posed by distributed systems, such as scalability, availability, and deployment, by means of highly cohesive, heterogeneous, and independent microservices. However, this architecture also brings new security challenges related to communication, system design, development, and operation. The literature presents a plethora of security-related solutions for microservices-based systems, but the scattered information makes it difficult for practitioners to adopt novel solutions. This study focuses on microservices security from a communication standpoint, presenting a catalogue of solutions based on algorithms, protocols, standards, or implementations supporting some principle or characteristic of information security [1], and considering the three possible states of data according to the McCumber Cube: storage, transmission, and processing [2]. The research follows a systematic literature review, synthesizing the results with a meta-aggregation process.
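One concrete instance of the "integrity in transmission" class of solutions such a catalogue covers: a message authentication code appended to each message lets a receiving microservice detect tampering. A minimal sketch with Python's standard library; key distribution, replay protection, and encryption (the confidentiality side) are deliberately out of scope, and the function names are ours.

```python
import hashlib
import hmac

# HMAC-SHA256 integrity protection for messages exchanged between services
# that share a secret key. SHA-256 digests are 32 bytes.

def protect(key: bytes, message: bytes) -> bytes:
    """Append an HMAC-SHA256 tag to the message."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message + tag

def verify(key: bytes, packet: bytes) -> bytes:
    """Return the message if its tag checks out, else raise ValueError."""
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):   # constant-time comparison
        raise ValueError("integrity check failed")
    return message
```

In practice this role is usually played by TLS or by signed tokens (e.g. JWTs) rather than hand-rolled tags, which is exactly the kind of trade-off a solutions catalogue helps practitioners navigate.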

KEYWORDS

Microservices, Software architecture, Secure communication, Information security.


Distributed Automated Software Testing using Automation Framework as a Service

Santanu Ray1 and Pratik Gupta2, 1Ericsson, New Jersey, USA, 2Ericsson, Kolkata, India

ABSTRACT

Conventional test automation frameworks execute test cases sequentially, which increases execution time. Even when a framework supports executing multiple test suites in parallel, system limitations and infrastructure cost often prevent doing so. Building and maintaining an automation framework is also time-consuming and costly. This paper describes the implementation of a scalable test automation framework that provides the test framework as a service, expediting test execution by distributing test suites across multiple services running in parallel without any extra infrastructure.

KEYWORDS

Distributed Testing, Robot Framework, Docker, Automation Framework.


Cloud Computing Strategy and Impact in Banking/Financial Services

Prudhvi Parne, Information Technology, Bank of Hope, 1655 E Redondo Brach Blvd, Gardena, CA, USA

ABSTRACT

With recent advances in technology, it is becoming more challenging for banks and financial institutions to safeguard their clients' data, because a wide range of software is launched regularly that enables hackers to access financial information illegally by manipulating figures. When data breaches occur, they are costly to both the bank and its customers. Cloud computing can provide a solution to such challenges, making banking a reliable and trustworthy service. This paper examines cloud computing strategy and its impact in banking and financial institutions.

KEYWORDS

Cloud Computing, Technology, Finance, Security.


Intelligent Speed Adaptive System using Image Regression Method for Highway and Urban Roads

Bhavesh Sharma1 and Junaid Ali2, 1Department of Electrical, Electronics and Communication Engineering, Engineering College, Ajmer, India, 2Department of Mechanical Engineering, Indian Institute of Technology, Madras, India

ABSTRACT

The Intelligent Speed Adaptive System (ISAS) is an emerging technology in the field of autonomous vehicles. However, the public acceptance rate of ISAS is drastically low because of several shortcomings, namely reliability and low accuracy. Various researchers have contributed methodologies to enhance traffic prediction scores and algorithms to improve the overall adaptability of ISAS, but the literature on image regression in this range of applications is scarce. Computer vision has proven itself in object detection for self-driving technology, where most models are assisted by complex neural networks and live imaging systems. This article discusses some major issues with present ISAS technology and presents new methodologies for achieving higher prediction accuracy in controlling vehicle speed, using an image regression technique to develop a computer vision model that predicts the vehicle's speed from each frame of live images.

KEYWORDS

Intelligent Systems, Self-Driving Vehicle, Image Processing, Image Regression, Computer Vision, Automotive.


An Algorithm-Adaptive Source Code Converter to Automate the Translation from Python to Java

Eric Jin1 and Yu Sun2, 1Northwood High School, 4515 Portola Parkway, Irvine, CA 92620, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

In the field of computer science, there exist hundreds of different programming languages. They often have different uses and strengths but also a huge number of overlapping abilities [1], especially the general-purpose languages that are widely used, such as Java, Python, and C++ [2]. However, there is a lack of comprehensive methods for converting code from one language to another [3], making the task of converting a program between multiple languages hard and inconvenient. This paper explains in detail how our team designed a tool that converts Python source code into Java code with exactly the same function and features. We applied this converter, or transpiler, to many Python programs and successfully turned them into Java programs. Two qualitative experiments were conducted to test the effectiveness of the converter: 1. converting Python solutions of 5 USA Computing Olympiad (USACO) problems into Java solutions and qualitatively evaluating the correctness of the produced solutions; 2. converting code of various lengths from 10 different users to test the adaptability of the converter with randomized input. The results show that the converter achieves an error rate of less than 10% over the entire code, and the translated code performs exactly the same function as the original.
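The core source-to-source idea can be sketched in miniature: parse Python into an abstract syntax tree and emit equivalent Java statements. This toy, which we wrote for illustration, handles only integer assignments, basic arithmetic, and print(); the paper's converter covers far more of both languages.

```python
import ast

# Toy Python -> Java transpiler: walk the Python AST and emit Java lines.
# Supports integer assignment, +, -, *, and print() of an expression.

OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}

def expr_to_java(node):
    """Render an expression subtree as Java source."""
    if isinstance(node, ast.Constant):
        return str(node.value)
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.BinOp):
        return "(%s %s %s)" % (expr_to_java(node.left),
                               OPS[type(node.op)], expr_to_java(node.right))
    raise NotImplementedError(ast.dump(node))

def py_to_java(source):
    """Translate a tiny subset of Python statements into Java statements."""
    lines = []
    for stmt in ast.parse(source).body:
        if isinstance(stmt, ast.Assign):
            target = stmt.targets[0].id
            lines.append("int %s = %s;" % (target, expr_to_java(stmt.value)))
        elif (isinstance(stmt, ast.Expr) and isinstance(stmt.value, ast.Call)
              and stmt.value.func.id == "print"):
            lines.append("System.out.println(%s);"
                         % expr_to_java(stmt.value.args[0]))
    return "\n".join(lines)
```

The hard parts a real transpiler must add on top of this skeleton are exactly what make the problem interesting: inferring Java types for dynamically typed Python values, translating library calls, and wrapping the output in a valid class and main method.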

KEYWORDS

Algorithm, programming language translation, Python, Java.


Catwalkgrader: A Catwalk Analysis and Correction System using Machine Learning and Computer Vision

Tianjiao Dong1 and Yu Sun2, 1Northwood High School, Irvine, CA 92620, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

In recent years, the modeling industry has attracted many people, causing a drastic increase in the number of modeling training classes. Modeling takes practice, and without professional training, few beginners know whether they are doing it right. In this paper, we present a real-time 2D model walk grading app based on Mediapipe, a library for real-time, multi-person keypoint detection. After capturing the 2D positions of a person's joints and skeletal wireframe from an uploaded video, our app uses a scoring formula to provide accurate scores and feedback tailored to each user's modeling skills.

KEYWORDS

Runway model, Catwalk Scoring, Flutter, Mediapipe.


A Context-Aware and Immersive Puzzle Game using Machine Learning and Big Data Analysis

Peiyi Li1, John Morris2 and Yu Sun2, 1University of California, Irvine, Irvine, CA 92697, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

In recent years, video games have become one of the main forms of entertainment for people of all ages, and millions of players publicly share screenshots and experiences of playing games [4]. The puzzle game is a popular genre that challenges players to find the correct solution to various logic or conceptual problems. However, designing a good puzzle game is not an easy task [5]. This paper designs a puzzle game for players of all ages with appropriate difficulty levels, various puzzle mechanics, and an attractive background story. We had different players test the game and conducted a qualitative evaluation of the approach. The results show that the pacing of a puzzle game strongly affects the play experience and that the difficulty level of the puzzles affects players' feelings toward the game.

KEYWORDS

Puzzle Game, Game Design, Video Games, Adventure Game.


Using Different Assessment Indicators in Supporting Online Learning

Yew Kee Wong, HuangHuai University, Zhumadian, Henan, China

ABSTRACT

The assessment outcome of many online learning methods is based on the number of correct answers, which is then converted into one final mark or grade. We discovered that with online learning we can extract more detailed information from the learning process, and this information is useful for the assessor in planning an effective and efficient learning model for the learner. Statistical analysis is an important part of assessing the online learning outcome. The assessment indicators include the difficulty level of the question, the time spent answering, and the variation in choosing an answer. In this paper we present our findings on these assessment indicators and how they can improve the way the learner is assessed in an online learning system. We developed a statistical analysis algorithm that can assess online learning outcomes more effectively using quantifiable measurements, and we present a number of examples of using it.
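The three indicators the abstract names can be computed from simple per-question logs. A minimal sketch; the field names ('correct', 'seconds', 'changes') and the exact definitions (difficulty as fraction answered wrongly, variation as answer changes) are our illustrative assumptions, not the paper's algorithm.

```python
# Compute the abstract's three assessment indicators from a list of
# per-question attempt records: question difficulty, time spent answering,
# and variation in choosing an answer.

def assessment_indicators(attempts):
    """attempts: list of dicts with 'correct', 'seconds', 'changes' keys."""
    n = len(attempts)
    return {
        # Fraction of attempts answered incorrectly -> difficulty level
        "difficulty": sum(not a["correct"] for a in attempts) / n,
        # Mean time spent per question, in seconds
        "avg_time": sum(a["seconds"] for a in attempts) / n,
        # Mean number of times the learner changed their chosen answer
        "avg_changes": sum(a["changes"] for a in attempts) / n,
    }
```

An assessor could then weight a correct answer differently depending on whether it came quickly and decisively or slowly after several changes, which is the kind of quantifiable measurement the paper argues for.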

KEYWORDS

Artificial Intelligence, Assessment Indicator, Online Learning, Statistical Analysis Algorithm.


An Intelligent System to Automate Humidity Monitoring and Humidifier Control using Internet-of-Things (IoT) and Artificial Intelligence

Qian Zhang1 and Yu Sun2, 1Jserra High School, San Juan Capistrano, CA 92675, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

Air conditioners are widely used in family homes all over the world. However, the side effects of air conditioning and dehumidification can cause health problems if people remain in low-humidity environments. This paper traces the development of a software application and system for an intelligent humidifier that turns on or off automatically, for convenience or for those who cannot operate manual controls. We applied our application to a humidifier for several days and conducted a qualitative evaluation of the approach. The results affirmed the usability and capacity of our automatic control system.

KEYWORDS

IoT, Machine Learning, Deep Learning, Artificial Intelligence.


An Intelligent Data-Driven Analytics System to Assist Sports Player Training and Improvement using Internet-Of-Things (IoT) and Big Data Analysis

Julius Wu1, Jerry Wang2, Jonathan Sahagun3 and Yu Sun4, 1Irvine High School, Irvine, USA, 2SMIC Private School, Shanghai, China, 3California State University, Los Angeles, CA, 91706, 4California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

Our product is a unique tracking tool that tracks not only the movement of players on a map but also the velocity of each player. The accompanying application, which coaches typically hold during a game or practice, shows them an accurate data sample of where each player is and what they are doing on the field, whether working hard or fooling around. It also gives coaches an accurate view of gameplay when a recording is not available. When coaches select elite players, they also get a presentation of each player's skills and how accurately they run different routes.

KEYWORDS

IoT, Machine learning, Data Mining.


Current Security Topics and Evolving Risk Mapping Leveraging LDA Machine Learning Models

Joshua Scarpino, Marymount University, Arlington, Virginia, 22207, USA

ABSTRACT

This study applies Gensim's LDA model to the identification of critical cybersecurity topics from social media user feeds. The research was intended to sharpen focus on trending topics that are critical within security as the threat landscape evolves. It provides an opportunity to expand threat intelligence by leveraging security professionals as a critical source of intel. This can help focus security awareness training materials and assist in the possible early identification of emerging threats.
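Gensim's LDA model consumes a bag-of-words corpus: each document becomes a list of (token_id, count) pairs built from a shared dictionary. A stdlib-only sketch of that preprocessing step (the sample feeds are placeholders; the actual model call, e.g. `gensim.models.LdaModel`, and real tokenization/stop-word handling are omitted):

```python
from collections import Counter

def build_corpus(docs):
    """Tokenize documents and convert each to sorted (token_id, count)
    pairs -- the bag-of-words format Gensim's LdaModel expects."""
    tokenized = [doc.lower().split() for doc in docs]
    # Deterministic vocabulary: sorted unique tokens -> integer ids.
    vocab = sorted({tok for doc in tokenized for tok in doc})
    token_id = {tok: i for i, tok in enumerate(vocab)}
    corpus = [sorted(Counter(token_id[t] for t in doc).items())
              for doc in tokenized]
    return token_id, corpus
```

With this corpus and dictionary in hand, fitting the topic model is a single Gensim call; the study's preprocessing of user feeds would follow the same shape.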

KEYWORDS

Gensim Latent Dirichlet Allocation, Social Media, Security Awareness.


A Data-Driven Method for Capturing Comorbidity Structure in Mental Disorders

Hojjatollah Farahani1, Parviz Azadfallah2, and Peter Watson3, 1,2Department of Psychology, Tarbiat Modares University, Iran, 3Cognition and Brain Unit, University of Cambridge, UK

ABSTRACT

The concurrent presence of one mental disorder with another is common in clinical practice. In this study we examine comorbidity structure by assessing the degree of overlap among the measured signs and symptoms of two mental disorders. We introduce and describe network analysis, a recently advanced graphical statistical method. This data-driven method helps researchers capture the most important relationships among variables in a complex system. The stages of running a network analysis in the R software are explained, and the accuracy and stability of centrality measures are investigated using bootstrapping. As a practical example, the method was applied to data from 254 multiple sclerosis (MS) patients to capture the comorbidity structure between depressive and anxiety symptoms. The results are presented and discussed. As a data-driven model, network analysis should be of interest to all mind researchers, especially those working in clinical, cognitive and social psychology.
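At its core, a symptom network of this kind connects variables by the strength of their pairwise associations, and "strength centrality" is the sum of a node's absolute edge weights. The paper works in R; the sketch below is a language-neutral illustration in Python with synthetic data, and the 0.3 edge threshold is an illustrative assumption (real analyses typically use regularized estimation rather than a hard cutoff):

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def strength_centrality(variables, threshold=0.3):
    """Edge weight = |Pearson r| when above `threshold`; a node's
    strength is the sum of its edge weights. `variables` maps each
    symptom name to its list of observations (one per patient)."""
    names = list(variables)
    strength = {name: 0.0 for name in names}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = abs(pearson(variables[a], variables[b]))
            if r >= threshold:
                strength[a] += r
                strength[b] += r
    return strength
```

Bootstrapping the stability of these centrality estimates, as the paper describes, amounts to recomputing `strength_centrality` on resampled patient subsets and checking how the node ranking holds up.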

KEYWORDS

Clinical, Cognitive, Psychology, Network analysis, Statistics, Multiple sclerosis.


Business Intelligence and Data Warehouse Technologies for Traffic Accident Data Analysis in Botswana

Monkgogi Mudongo, Edwin Thuma, Nkwebi Peace Motlogelwa, Tebo Leburu-Dingalo and Pulafela Majoo, Department of Computer Science, University of Botswana, Gaborone, Botswana

ABSTRACT

Road traffic accidents are a serious problem for the nation of Botswana. A large amount of money is used to compensate those who are affected by road accidents. According to Mphela [1], traffic accidents are the second largest cause of death after HIV/AIDS in Botswana. It is therefore important for relevant organizations to have a reliable source of data for accurate evaluation of traffic accidents. Similarly, data on vehicle registration must be transformed and be readily available to assist managerial decision makers. In this article, we deploy a Business Intelligence (BI) and Data Warehouse (DW) solution in an attempt to assist the relevant departments in their road traffic accident and vehicle registration evaluation. In our evaluation of the traffic accidents, our findings suggest that across accident severity, Damage Only accidents had the most interesting recent trend, with an 11.93% decrease in the last 3 years on record. The count of Damage Only accidents dropped from 13,491 to 11,881 between 2018 and 2020, whilst Minor accidents experienced the longest period of growth. Most accidents take place in rural locations and more accidents take place during the weekend. At 28,439, Sunday had the highest number of accidents, 47.59% higher than Wednesday, which had the lowest count at 19,269. The results for vehicle registration reveal that the number of vehicle registrations decreased over the last 3 years on record, dropping from 65,535 to 24,457 during its steepest decline between 2019 and 2021.
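The trend figures quoted above are simple percent changes between two period totals. A short sketch reproducing two of them from the counts given in the abstract:

```python
def pct_change(old, new):
    """Percent change from `old` to `new` (negative = decline)."""
    return (new - old) / old * 100.0

# Damage Only accidents, 2018 -> 2020 (counts from the abstract):
drop = pct_change(13491, 11881)   # about -11.93 %
# Sunday vs. Wednesday accident counts:
gap = pct_change(19269, 28439)    # about +47.59 %
```

This is the arithmetic a BI dashboard performs over the warehouse's period aggregates before rendering the trend indicators.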

KEYWORDS

Business Intelligence, Data Warehousing, ETL, Accident and Vehicle registration.


Advanced Deep Learning Model

Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China

ABSTRACT

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the different deep learning algorithms and methods which can be applied to artificial intelligence analysis, as well as the opportunities provided by their application in various decision-making domains.

KEYWORDS

Artificial Intelligence, Machine Learning, Deep Learning.


Global Research Decentralized Autonomous Organization (GR-DAO): A DAO of Global Researchers

Kelly L. Page1 and Adel Elmessiry2, 1LWYL Studio, Chicago, IL, USA, 2AlphaFin, Nashville, TN, USA

ABSTRACT

The latest trend in Blockchain formation is to utilize decentralized autonomous organizations (DAO) in many verticals. To date, little attention has been given to the global research domain, due to the difficulty of creating a comprehensive framework that can marry the cutting edge of academic-grade scientific research with a decentralized governance body of researchers. A global research decentralized autonomous organization (GR-DAO) would have a profound impact on the research community academically, commercially, and for the public good. In this paper, we propose the GR-DAO as a global community of researchers committed to collectively creating knowledge and sharing it with the world. Scientific research is the means for knowledge creation and learning. The GR-DAO provides the guidance, community and technological solutions for the evolution of a global research infrastructure and environment. Through its design, the GR-DAO embraces, enhances and extends the model of research, research on decentralization, and the DAO as a model for decentralized and autonomous organizing. This design, in turn, improves most of the uses for and applications of research for the greater good of society. The paper examines the core motivation, purpose and design of the GR-DAO, its strategy to embrace, enhance and extend the research ecosystem, and the GR-DAO design uses across the DAO ecosystem.

KEYWORDS

Scientific Research, Researcher, Research, Knowledge, Learning, Cocreated Knowledge, Applied Research, Decentralized Autonomous Organization, DAO, Research Model, Research Activity, Blockchain, Emerging Technology, Incentive Design, Reputation Staking, Distributed Ledger Technology, Decentralized Infrastructure.


Proof of Renewable (PoR): The ROBe2 Protocol

Tom Davis1 and Adel Elmessiry2, 1Renewable Energy Alliance, USA, 2Crypto Bloc Tech, USA

ABSTRACT

We are at a serious crossroads as it relates to carbon emissions and the condition of our planet. Global conditions are spiraling out of control. Climate change is widespread, occurring extremely fast, and intensifying. The consumption of nonrenewable energy sources is impacting both the environment and the economy in equal proportions. Up to this point, society has tried to solve these problems with local solutions, but we have fallen short. The missing component for solving the global problem is an alignment of individuals and organizations coming together, taking responsibility, and creating global solutions to meet the goal of being carbon negative by 2050. In this paper, we propose the ROBe2 protocol as the global solution that brings everyone together to solve these very important issues. Renewable Obligation Base energy economy (ROBe2) is a protocol that aggregates local renewable energy solutions into a global impact while providing an economically sound framework and creating an economic incentive for using renewable energy in place of fossil fuels [1].

KEYWORDS

Scientific Research, Researcher, Research, Knowledge, Learning, Applied Research, Decentralized Autonomous Organization, DAO, Research Model, Research Activity, Blockchain, Emerging Technology, Incentive Design, Reputation Staking, Distributed Ledger Technology, Decentralized Infrastructure, Renewable, Renewable Energy.


Fast implementation of an elliptic curve cryptographic algorithm on GF(3^m) based on FPGA

Tan Yongliang, He Lesheng, Jin Haonan, Kong Qingyang, Information Institute, Yunnan University, Kunming, China

ABSTRACT

As quantum computing and the theory of bilinear pairings continue to be studied in depth, elliptic curves over GF(3^m) are attracting increasing interest because they provide higher security. Moreover, because hardware encryption is more efficient and secure than software encryption in today's IoT security environment, this paper implements the scalar multiplication algorithm for elliptic curves over GF(3^m) on an FPGA device platform. Arithmetic in the finite field is implemented quickly through bit-oriented operations, and the computation speed of point addition and point multiplication is then improved by a modified Jacobian projective coordinate system. The final experimental results demonstrate that the design consumes a total of 7518 slices and can compute approximately 3000 scalar multiplications per second at 124 MHz. It has relative advantages in performance and resource consumption, and can be applied as an IP core in specific confidential communication scenarios.
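One property the bit-oriented hardware exploits is that GF(3^m) elements are polynomials with coefficients mod 3, so addition is carry-free: each coefficient is reduced independently. A minimal sketch under that representation (function names are illustrative; the reduction polynomial, multiplication, and the curve point arithmetic are omitted):

```python
def gf3m_add(a, b):
    """Add two GF(3^m) elements given as coefficient lists (low degree
    first). Addition is carry-free: each coefficient mod 3 on its own."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [(x + y) % 3 for x, y in zip(a, b)]

def gf3m_neg(a):
    """Additive inverse: negate each coefficient mod 3, so subtraction
    is gf3m_add(a, gf3m_neg(b))."""
    return [(-x) % 3 for x in a]
```

The absence of carry chains is what makes these operations map so cheaply onto FPGA logic, one reason GF(3^m) designs like this one are attractive in hardware.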

KEYWORDS

GF(3^m), Elliptic Curve Cryptography, Scalar Multiplication, FPGA, IoT Security.


A Mobile Platform for Food Donation and Delivery System using AI and Machine Learning

George Zhou1, Marisabel Chang2 and Yu Sun2, 1USA, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

Within the last year, through the turmoil of the Covid-19 pandemic, an increasing number of families and individuals have experienced food insecurity due to job loss, illness, or other financial struggles [4]. Many families in the Orange County area and beyond are turning to free food sources such as community food pantries and banks. Using targeted surveys of food-insecure families, we discovered the need for a solution that enhances the accessibility and usability of food pantries [5]. Therefore, we created a software application that uses artificial intelligence to locate specific items for users to request, and allows volunteers to see those requests, pick up the resources from food pantries, and deliver them directly to the homes of individuals. This paper describes how this idea was conceived and applied, along with a qualitative evaluation of the approach. The results show that the software application allowed families and individuals to receive quality groceries at a much higher frequency, regardless of multiple constraints.

KEYWORDS

Mobile Platform, Machine Learning, Data Mining.


A Partly Coherent Jamming Technology against Synthetic Aperture Radar

Shi Yuhan and Huang HongXu, College of Computer Science and Information Technology, Central South University of Forestry and Technology, Changsha, Hunan, China

ABSTRACT

A new partly coherent jamming technology against Synthetic Aperture Radar (SAR) is proposed in this paper. Its signal has a stepped repeater time-delay and a random inter-pulse phase. The research shows that this interference is intra-pulse coherent and inter-pulse non-coherent with the SAR system, and its imaging output resembles a parallelogram. Compared with other inter-pulse non-coherent region jamming technologies, such as random shift-frequency jamming, the proposed jamming obtains more intra-pulse matched-processing gain from the SAR.

KEYWORDS

Synthetic Aperture Radar (SAR), Partly Coherent Jamming, Inter-pulse Non-coherence, Stepped Time-delay.


Cascaded Segmentation Network based on Double Branch Boundary Enhancement

Li Zeng1, Hongqiu Wang1, Xin Wang3, Miao Tian1* and Shaozhi Wu1,2*, 1University of Electronic Science and Technology of China, Chengdu, 611731, China, 2Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, Zhejiang 324000, China, 3Department of Abdominal Oncology, Cancer Center, West China Hospital, Sichuan University

ABSTRACT

Cervical cancer is one of the most common causes of cancer death in women. During the treatment of cervical cancer, a radiation plan must be made based on the clinical target volume (CTV) in CT images. At present, the CTV is sketched manually by physicists, which is time-consuming and laborious. With the help of a deep learning model, a computer can accurately delineate the CTV contour. In this paper, we propose CDBNet, a cascaded segmentation network based on double-branch boundary enhancement. First, a classification network determines whether a single image contains a region of interest (ROI); then the segmentation network, DBNet, segments more accurately at the ROI contour. CDBNet was verified on the cervical cancer dataset provided by the Department of Radiation Oncology, West China Hospital, Sichuan Province. The average Dice and 95% Hausdorff distance (95HD) of the delineation results are 86.12% and 2.51 mm. At the same time, the accuracy of classifying whether an image contains an ROI reaches 93.19%, and the average Dice over images containing an ROI reaches 70%.
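The Dice score used to evaluate the delineation is the overlap measure 2|A∩B| / (|A| + |B|) between the predicted and ground-truth masks. A minimal sketch over flattened binary masks (the function name and list representation are illustrative, not the paper's evaluation code):

```python
def dice(pred, truth):
    """Dice similarity of two binary masks given as flat 0/1 lists:
    2 * |intersection| / (|pred| + |truth|)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * inter / total if total else 1.0
```

A Dice of 86.12%, as reported above, means the predicted CTV contour and the physicist's manual contour overlap very substantially.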

KEYWORDS

CTV delineation, cascade, segmentation, boundary enhancement.


Fast Convolution Based on Winograd Minimum Filtering: Introduction and Development

Gan Tong and Libo Huang, School of Computer, National University of Defense Technology, Changsha, China

ABSTRACT

Convolutional Neural Networks (CNNs) have been widely used in various fields and play an important role. Convolution operators are the fundamental component of convolutional neural networks, and they are also the most time-consuming part of network training and inference. In recent years, researchers have proposed several fast convolution algorithms, including FFT and Winograd. Among them, Winograd convolution significantly reduces the number of multiplications in convolution, and it also takes up less memory than FFT convolution. Winograd convolution has therefore quickly become the first choice for fast convolution implementations. At present, there is no systematic summary of this convolution algorithm. This article aims to fill that gap and provide a detailed reference for follow-up researchers. It summarizes the development of Winograd convolution from three aspects: algorithm expansion, algorithm optimization, and implementation and application, and finally offers a brief outlook on possible future directions.
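The core of Winograd convolution is the minimal filtering identity F(2,3), which produces two outputs of a 3-tap filter using 4 multiplications where direct computation needs 6. A minimal sketch, with a direct computation for comparison:

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of the 3-tap filter `g` applied to
    the 4-element input tile `d`, using only 4 multiplications."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_conv(d, g):
    """Reference: the same two outputs by direct multiply-accumulate
    (6 multiplications)."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]
```

In a CNN, the filter-side transform (the terms involving only `g`) is precomputed once per kernel, so the per-tile saving in multiplications is what drives the speedup the survey discusses.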

KEYWORDS

Winograd Minimum Filtering, Winograd Convolution, Fast Convolution, Convolution Optimization.


Efficient Implementation of a Digital Chirp Generator

Andreas Falkenberg, Metawave Corporation, Carlsbad, CA, USA

ABSTRACT

This paper presents a novel and efficient implementation of a digital chirp generator. The presented solution is derived from the implementation of the well-known CORDIC algorithm for sine and cosine waveform generation. Chirp generators are used in radar applications, but the solution presented here can be applied to any problem that requires an efficient chirp generator. It can be implemented on a DSP processor as well as on an FPGA or a general-purpose CPU.
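The abstract does not detail the architecture, but a common structure for a digital linear chirp is a double phase accumulator: the frequency word ramps by a constant each sample and the phase integrates the frequency, with a sine/cosine generator such as CORDIC at the output. A hedged sketch of that structure, using `math.cos` in place of the CORDIC stage and radians-per-sample units as an illustrative convention:

```python
import math

def chirp(n_samples, f0, df):
    """Linear chirp via a double phase accumulator: `freq` ramps by
    `df` each sample, `phase` integrates `freq`. math.cos stands in
    for the paper's CORDIC sine/cosine stage; units are radians per
    sample (an illustrative convention, not the paper's)."""
    out, phase, freq = [], 0.0, f0
    for _ in range(n_samples):
        out.append(math.cos(phase))
        phase += freq   # phase accumulator
        freq += df      # frequency accumulator -> linear sweep
    return out
```

After n steps the accumulated phase equals f0*n + df*n*(n-1)/2, the quadratic phase that characterizes a linear chirp; in hardware both accumulators are fixed-point registers, which is what makes the structure so cheap.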

KEYWORDS

Chirp, CORDIC, Radar, FMCW.