4th International Conference on Blockchain and Internet of Things (BIoT 2023)

March 18 ~ 19, 2023, Vienna, Austria

Accepted Papers

Plot-generation Methods using a Narrative Structure


Takahiro Higasa1, Yoji Kawano1 and Satoshi Kurihara2, 1Pastral 105 Corporation, Kanagawa, Japan, 2Faculty of Science and Technology, Keio University, Kanagawa, Japan

ABSTRACT

The market for content creation is growing due to the expansion of the games market and the rapid growth of video-distribution services. Creativity is essential for content creation, but it requires tremendous effort, and human creativity is limited, so there is a need to build artificial intelligence that supports human creativity. We focused on story creation, which forms the basis of content creation. To construct a system that stimulates human creativity, we attempted to automatically generate plots that stimulate people's creativity. A plot is a synopsis written with a focus on the causal relationships in a story. There are various approaches to plot generation, such as planning and deep learning, but we generate plots using a narrative structure, which can prevent the story from breaking down. In this method, plots are generated by linking sentences with certain functions, called plot units, that make up the plot. We propose two plot-generation methods that use semantic similarity to link plot units. We compared the proposed methods with manually generated plots and found them comparable to the manual process in terms of coherence and interestingness. Furthermore, the plots generated by the proposed methods were found to be more creative than manually generated plots, although issues remain in terms of comprehension.

KEYWORDS

Automatic story generation, Hierarchical structure of stories, Artificial intelligence.


Procedural Generation in 2D Metroidvania Game with Answer Set Programming


John Xu1, John Morris2, 1Harvard-Westlake High School, 3700 Coldwater Canyon Ave, Studio City, CA 91604, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Video game designers often find themselves at a crossroads when designing levels; namely, many have a difficult time balancing the amount of control they want to have over what their levels look like [1]. If too little control is exercised, as in the case of pure Perlin-noise generation, levels can end up with too much variation and undesirable results [2]. Softlock is an example of an undesirable outcome in metroidvania games: if the placement of keys cannot be easily controlled and keys end up placed behind gates, players can get permanently stuck [3]. Developers often hand-make all levels to prevent this from happening; however, they then risk spending too much effort and time on level design, resulting in a general lack of levels. Both methods have their strengths and work well in specific genres of games, but limiting oneself to their boundaries does not achieve both quantity and accuracy. This paper proposes a unique solution to this dilemma, providing automated generation of levels while also giving developers much more control over the output. Our method uses Answer Set Programming (ASP) to verify generation against the restrictions we place, guaranteeing the outcome is what we want [4]. To demonstrate the method, we applied it to a 2D metroidvania game made in the Unity game engine and conducted quantitative tests to assess how well it works as a level generator [5].

KEYWORDS

Procedural Generation, Answer Set Programming, Video Game Application
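The softlock constraint described in the abstract (no key placed behind the gate it opens) is, at its core, a reachability condition that the paper's ASP restrictions would encode. As a language-agnostic illustration only (not the paper's actual ASP rules), the following Python sketch checks a hypothetical key-and-gate level graph for softlocks; all names and the graph encoding are illustrative assumptions:

```python
from collections import deque

def is_completable(edges, keys, start, goal):
    """Return True if the goal room is reachable without a softlock.
    edges: dict mapping (room_a, room_b) -> required key name, or None
           for an open passage (passages are bidirectional).
    keys:  dict mapping key name -> room where that key is placed."""
    held = set()
    while True:
        # BFS over rooms passable with the keys currently held
        seen = {start}
        queue = deque([start])
        while queue:
            room = queue.popleft()
            for (a, b), need in edges.items():
                for src, dst in ((a, b), (b, a)):
                    if src == room and dst not in seen:
                        if need is None or need in held:
                            seen.add(dst)
                            queue.append(dst)
        if goal in seen:
            return True
        # pick up any newly reachable keys and try again
        new = {k for k, room in keys.items() if room in seen} - held
        if not new:
            return False  # softlock: goal unreachable, no new keys left
        held |= new

# A key locked behind its own gate is rejected; a fair layout passes.
edges = {("start", "gate_room"): "red_key", ("gate_room", "boss"): None}
assert is_completable(edges, {"red_key": "start"}, "start", "boss")
assert not is_completable(edges, {"red_key": "gate_room"}, "start", "boss")
```

A generator can call such a check (or the equivalent ASP constraint) to discard candidate levels before they ever reach a player.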


Can Incremental Learning help with KG Completion?


Mayar Osama and Mervat Abu-Elkheir, Faculty of Media Engineering and Technology, German University in Cairo, Egypt

ABSTRACT

Knowledge Graphs (KGs) are a type of knowledge representation that has gained a lot of attention due to its ability to store information in a structured format. This structured representation makes KGs naturally suited for search engines and NLP tasks like question answering (QA) and task-oriented systems; however, KGs are hard to construct. While QA datasets are more available and easier to construct, they lack structural representation. This availability has made QA datasets a rich resource for machine learning models, and these models benefit from the implicit structure in such datasets. We propose a framework to make this structure more pronounced and extract a KG from QA datasets in an end-to-end manner, allowing the system to learn new knowledge through incremental learning with a human in the loop (HITL) when needed. We test our framework using the SQuAD dataset and our incremental learning approach with two datasets, YAGO3-10 and FB15K-237, both of which show promising results.

KEYWORDS

Knowledge Graphs, Question Answering, Incremental Learning, Human in the loop


Multi-Agent Based Information Retrieval System: A Case Study


Reza Shokri Kalan, Digiturk Bein Media Group, Istanbul, Turkiye

ABSTRACT

Due to the volume, variety, and velocity of data, retrieving information suitable for decision-making in a short period of time becomes a concern. A Distributed Information Retrieval (DIR) system can help deal with this huge amount of data spread across different locations. Efficient decision-making is a target of intelligent Multi-Agent Systems (MAS), in which multiple agents communicate and collaborate to solve complex tasks by overcoming individual limitations. To this end, data needs to be partitioned and distributed for parallel processing and higher performance. In this paper, agent technology is employed for distributed information retrieval. We have developed a case study, discussing all development phases using the Tropos methodology. The experiences gained and challenges faced are also reported.

KEYWORDS

Information Retrieval, Multi-Agent, Jade, Tropos.


Towards Scalable EM-based Anomaly Detection for Embedded Devices Through Synthetic Fingerprinting


Kurt A. Vedros1, Georgios Michail Makrakis1, Constantinos Kolias1, Robert C. Ivans2 and Craig Rieger2, 1Department of Computer Science, University of Idaho, 1776 Science Center Dr, Idaho Falls, ID 83402, USA, 2National and Homeland Security, Idaho National Lab, 1955 N Fremont Ave, Idaho Falls, ID 83402, USA.

ABSTRACT

Embedded devices are omnipresent in modern networks, including those operating inside critical environments. However, due to their constrained nature, novel mechanisms are required to provide external, non-intrusive anomaly detection. Among such approaches, one that has gained traction is based on the analysis of the electromagnetic (EM) signals emanated during a device's operation. However, one of the most neglected challenges of this approach is the requirement to manually gather and fingerprint the signals that correspond to each execution path of the software/firmware. Indeed, even simple programs comprise hundreds if not thousands of branches, making the fingerprinting stage an extremely time-consuming process that involves the manual labor of a human specialist. To address this issue, we propose a framework for generating synthetic EM signals directly from the machine code. The synthetic signals can be used to train a Machine Learning (ML) based system for anomaly detection. The main advantage of the proposed approach is that it completely removes the need for an elaborate and error-prone fingerprinting stage, dramatically increasing the scalability of the corresponding protection mechanisms. The experimental evaluations indicate that our method provides high detection accuracy (above 90% AUC score) when employed for the detection of injection attacks. Moreover, the proposed methodology inflicts only a small penalty (-1.3%) in accuracy for the detection of as few as four injected malicious instructions when compared to the same methods using real signals.

KEYWORDS

Side Channel Analysis, Anomaly Detection, Electromagnetic Signals, Synthetic Signals.


Programmable DNN Accelerator Embedded in Instruction Extended RISC-V Core


Hansen Wang, Dongju, and Tsuyoshi Isshiki, Department of Information and Communications Engineering, Tokyo Institute of Technology, Japan

ABSTRACT

Deep neural networks (DNNs) are widely used in many areas, such as speech recognition, face detection, traffic monitoring, and natural language processing. The traditional way of implementing a DNN is on a GPU, which is fast but not efficient enough. In pursuit of lower power consumption, higher efficiency, and flexibility, we turn to application-specific hardware. This paper proposes a run-time programmable DNN accelerator SoC (DNN-AS) architecture embedded in an instruction-extended RISC-V core. The custom extension instruction set is designed to accelerate frequent operations in DNNs. To reduce the circuit scale, we designed an 8-bit dynamic fixed-point (DFP) scheme within the DNN-AS. The accuracy of DFP is listed and compared with TensorFlow. Furthermore, the corresponding software for ResNet and VGG16 is described and simulated with the DNN-AS. Lastly, we compare the simulated results with other non-SoC FPGA designs in efficiency, throughput, and power.

KEYWORDS

RISC-V, Hardware/Software co-design, DNN, Dynamic Fixed-Point, Custom Instruction.


Implementation of Power-Gated ALU for Low Power Processor


K. Prasad Babu1, Dr. K.E. Sreenivasa Murthy2, Dr. M.N. Giri Prasad3, 115PH0426, Department of ECE, JNTUA, Anantapuramu, Andhra Pradesh, India, 2Principal & Professor, Department of ECE, RECW, Kurnool, Andhra Pradesh, India, 3Director of Academics & Audit, Professor, Department of ECE, JNTUA, Anantapuramu, Andhra Pradesh, India

ABSTRACT

Low-power designs are needed not only for portable applications but also to reduce power in high-power systems. Power consumption can be minimized at the system level, architectural level, algorithm level, micro-architectural level, gate level, or circuit level. Power gating of functional units has proven to be a compelling approach to cutting back power consumption. With the requirements for high speed, low power consumption, and high performance continuing to increase year after year, devices must be scaled to smaller dimensions. Current technology sees power consumption as the limiting factor. A high power supply affects the temperature, which in turn affects the cooling cost of the chip. In this work, a power-gated ALU with minimal functions is designed and implemented, and a comparison is made with and without the NMOS sleep transistor. 120 nm technology is employed for the design, using MOS model 3 parameters. 1-bit, 4-bit, and 8-bit ALUs are designed and implemented with power comparisons.

KEYWORDS

Power-Gating, ALU, Low Power, 120nm, MOS model 3, Low Power processor.


Tion Sport: A Mobile Application Designed to Improve a School's Sport Event Scheduling System


Junhong Duan1, Yujia Zhang2, 1Santa Margarita Catholic High School, 22062 Antonio Pkwy, Rancho Santa Margarita, CA 92688, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

In my freshman year, I joined the school's football team. However, the application they used at the time was incredibly confusing and difficult to navigate. The scheduling system that is currently in place has much room for improvement. This paper covers the development of an application that implements a new scheduling system that is hopefully easier for people to manage. To test the effectiveness of the application at creating a better user experience, an experiment was performed in which ten participants were gathered to test the features of the application, then complete a Google Forms survey that asked them to rate the functionality and the convenience of the application on a scale from one to ten [1][2]. The results indicated that the newly developed application would be a suitable replacement for the current school sports application, as many of the participants stated that the application both functioned properly and was very intuitive.

KEYWORDS

Sport, management, School teams


Embedded System of Signal Processing on FPGA: Implementation of OpenMP Architecture


Mhamed Hadji1, Abdelkader Elhanaoui1,4, Rachid Skouri3 and Said Agounad4, 1REPTI Laboratory, Faculty of Sciences and Technology, BP 509, Boutalamine, Errachidia, Moulay Ismail University of Meknes, Morocco, 3High School of Technology, Km 5, Road of Agouray, N6, Moulay Ismail University, Meknès 50040, Morocco, 4Laboratory of Métrologie et Traitement de l'Information, Faculty of Sciences, Ibn Zohr University, 80000 Agadir, Morocco

ABSTRACT

The purpose of this paper is to study the phenomenon of acoustic scattering using a new method. Signal processing (Fast Fourier Transform (FFT), inverse Fast Fourier Transform (iFFT), and Bessel functions) is widely applied to obtain information with high precision. Signal processing is commonly implemented on general-purpose processors, but these are not efficient for such workloads. Our interest was therefore focused on the use of FPGAs (Field-Programmable Gate Arrays) in order to minimize the computational complexity of a single-processor architecture, accelerate the processing on the FPGA, and meet real-time and energy-efficiency requirements. We implemented the acoustic backscattered-signal processing model on the DE1-SoC FPGA and compared it with an Odroid XU4. The computing latencies of the Odroid XU4 and the FPGA are 60 seconds and 20 seconds, respectively: the Altera DE1-SoC FPGA-based system computes acoustic spectra up to 3 times faster than the Odroid XU4 implementation. The FPGA-based implementation of the processing algorithms achieves an absolute error of about 10^-2. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing, where it is possible to obtain information related to the detection and characterization of submerged shells. We have thus achieved good experimental results in real time and energy efficiency.

KEYWORDS

DE1 FPGA, acoustic scattering, Form function, signal processing, Nondestructive testing.
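The processing chain the abstract describes (FFT of the backscattered echo to obtain the resonance spectrum, iFFT back to the time domain) can be sketched on a general-purpose processor with NumPy. The signal below is a synthetic stand-in, not the paper's experimental data, and the sampling parameters are illustrative assumptions:

```python
import numpy as np

# Simulated backscattered echo: a damped sinusoid (stand-in for real data)
fs = 1_000_000                       # sampling rate in Hz (illustrative)
t = np.arange(1024) / fs
echo = np.exp(-t * 2e4) * np.sin(2 * np.pi * 150e3 * t)

spectrum = np.fft.fft(echo)          # FFT: time domain -> frequency domain
magnitude = np.abs(spectrum)         # spectrum magnitude (form-function proxy)
reconstructed = np.fft.ifft(spectrum).real  # iFFT: back to time domain

# Round-trip error is far below the ~1e-2 absolute error reported for the FPGA
assert np.max(np.abs(reconstructed - echo)) < 1e-2
```

On the FPGA, the same FFT/iFFT stages are mapped to hardware pipelines; this reference version is what such an implementation would be validated against.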


An Interactive and Collaborative Gaming Platform to Engage the Autism Spectrum in Art Learning using Artificial Intelligence


Carina Zheng1, Yu Sun2, Yujia Zhang3, 1Orange County School of the Arts, 1010 Main Street N, Santa Ana, CA 92701, 2California State Polytechnic University, Pomona, CA 91768, Irvine, CA 92620, 3University of California, Irvine, Irvine, CA 92697

ABSTRACT

For decades, mental illness has been a popular topic of discussion that still awaits effective treatments [1]. While current therapy for mental disorders can achieve success, it is far from enough to prevent their occurrence and impact on individuals [2]. Because of this, mental illness is an area of study that requires professionals and specialists to take a further step. Additionally, as the use of technology advances in current society, young children and pre-teens gradually become victims of mental disorders as well: a community that often needs careful attention from adults and caregivers [3]. This paper introduces a method of treating mental disorders in young individuals that is not rare, but is often overlooked. The application encourages creativity and interest in its users, motivating them to actively engage their strengths and use them to reflect on their struggles.

KEYWORDS

3D Modeling, Unity, Collaborative.


Business Value Impact of AI-Powered Service Operations (AIServiceOps)


Harsha Vijayakumar, Researcher, S.P. Jain School of Global Management

ABSTRACT

Artificial Intelligence (AI) has been a significant technology of the 21st century. It is changing every aspect of modern enterprise technology tooling, from strategy through selection and implementation to the adoption of digital AI transformation. The rapid development of AI has prompted many changes in the field of Information Technology (IT) Service Operations. IT Service Operations driven by AI is AIServiceOps. AI has brought new vitality and addressed many challenges in IT Service Operations. However, there is a literature gap on the business value impact of AI-powered IT Service Operations. AIServiceOps can help IT build optimized business resilience by creating value in complex and ever-changing environments as product organizations move faster than IT can handle. This research paper therefore examines how AIServiceOps creates business value and sustainability; in particular, how it liberates IT staff from low-level, repetitive work and traditional IT practices in favor of a continuously optimized process. One of the research objectives is to compare traditional IT Service Operations with AIServiceOps. This paper provides a basis for how enterprises can evaluate AIServiceOps and consider it a digital transformation tool.

KEYWORDS

AI-Powered Service Operations (AIServiceOps), Business Value Assessment, IT Service Management, IT Operations Management, and Digital Transformation.


An Efficient Federated Learning Method Based on Optimized-residual and Clustering


Zhengyao Wang and Hongjiao Li, School of Computer Science and Technology, Shanghai University of Electric Power, Shanghai, China

ABSTRACT

An efficient federated learning method based on optimized residuals and clustering is proposed to address the problem of sacrificing model accuracy and convergence speed to reduce communication cost in federated learning. A sparse ternary compression scheme is used to compress the model weight parameters. For the residuals introduced by the compression scheme, an optimized-residual method is proposed for the first time in this paper; the degradation of model accuracy and convergence speed due to parameter compression is resolved by applying this method on the client side together with a server-side K-Means aggregation strategy. Experimental results on the MNIST dataset show that the proposed optimized-residual and clustering-based federated learning method has less than 5% of the communication overhead of the FedAvg scheme with guaranteed accuracy; compared with the state-of-the-art FedZip scheme, convergence speed is improved by 18.9% and communication overhead is only 62.5%.

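For readers unfamiliar with sparse ternary compression, the following Python sketch shows the basic client-side mechanics the abstract builds on: top-k sparsification, ternarization to {-mu, 0, +mu}, and the residual that is carried into the next round. The function name, the top-k fraction, and the residual-feedback loop are illustrative assumptions, not the paper's exact optimized-residual method:

```python
import numpy as np

def ternary_compress(w, frac=0.05):
    """STC-style sketch: keep the largest `frac` of weights by magnitude,
    replace them with +/- their mean magnitude, zero out the rest.
    Returns the compressed update and the residual left behind."""
    k = max(1, int(frac * w.size))
    idx = np.argsort(np.abs(w))[-k:]          # indices of top-k magnitudes
    mu = np.abs(w[idx]).mean()                # shared magnitude level
    compressed = np.zeros_like(w)
    compressed[idx] = mu * np.sign(w[idx])    # ternary values: {-mu, 0, +mu}
    residual = w - compressed                 # carried to the next round
    return compressed, residual

# Client side: add last round's residual back before compressing --
# the residual feedback that an optimized-residual method refines.
rng = np.random.default_rng(0)
update = rng.normal(size=1000)
residual = np.zeros_like(update)
compressed, residual = ternary_compress(update + residual)
```

Only the sparse ternary tensor needs to be communicated, which is where the communication-overhead savings reported in the abstract come from.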


Fun Writter: A Writing Assist Program to Help Students Paraphrase, Summarize, or Find Keywords using Python


Jay Pang1, Marisabel Chang2, 1The Webb Schools, Front Entrance, 1175 W Baseline Rd, Claremont, CA 91711, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Essay writing can be difficult, especially when students struggle with redundancy and comprehension [1]. We decided to create a project that could help solve this problem. This paper covers the development of an online writing tool, its uses, and how it may be improved in the future [2]. The program is designed to evaluate given sentences and paragraphs and to find paraphrases, keywords, and summaries using multiple different libraries. We applied our application to paragraphs in written essays and conducted a qualitative evaluation of the approach. The results show that the program is able to provide users with different versions of paraphrases and summaries to choose from, as well as keywords, allowing the program to be flexible to the user's needs [3]. The program is a website to which the user can upload input text by copying and pasting, and then select whether to paraphrase, find keywords, or summarize it.

KEYWORDS

Writing, application, paraphrase, summary, keyword

Spatial Load Prediction based on Multi-scale LDTW and TCN


Yue Ma and Mi Wen, School of Computer Science and Technology, Shanghai University of Electric Power, Shanghai, China

ABSTRACT

Spatial load forecasting has become an indispensable part of network planning. This paper presents a spatial load prediction method based on multi-scale LDTW spectral clustering and a TCN. The electricity consumption behaviours of different blocks are accurately analysed through spectral clustering, a more detailed classification is derived from the original classification, and the blocks' simultaneity rates are determined at the same time. The selected training samples are input into the TCN to predict the load density index of each block, and the predictions are aggregated based on the simultaneity rate to realize spatial load prediction. Compared with other methods, the relative error of the proposed method is reduced by 2.03% on average and the prediction accuracy is higher.

KEYWORDS

Spatial load forecasting, DTW, Spectral Clustering, Simultaneous rate, TCN

A Spatial-temporal Detection Method for Distribution Network Based on Random Matrix Theory


Suling Qin, Mi Wen and Yanfei Wang, College of Computer Science and Technology, Shanghai University of Electric Power

ABSTRACT

With the increase in grid-connected capacity of renewable energy and the large-scale application of flexible loads, the power grid structure is becoming complex. These factors increase the randomness and volatility of system operation, which brings great uncertainty to the safe and stable operation of the power grid. Therefore, it is increasingly critical and urgent to conduct real-time monitoring, state analysis, and abnormal-area location analysis of the power system. Combined with the phasor measurement units (PMUs) in the power system, a data-driven spatial-temporal detection method for the distribution network state is proposed based on random matrix theory (RMT). In the time dimension, real-time awareness of grid status can be achieved. Spatially, anomalous areas where fluctuations occur can be located and analyzed. The effectiveness of the method was verified on the IEEE 39-bus system. The results show that the method is widely applicable and can detect fluctuations in the grid without knowing detailed grid structure information. It remains applicable when renewable energy is connected, realizing real-time detection of operating conditions and positioning of abnormal areas.

KEYWORDS

Random matrix theory (RMT), Real-time detection, Phasor measurement unit (PMU), Area location


A Systematic Review of the Relationship Between Industry 4.0 Application and Performance Improvement


Ndala Yves Mulongo, Siyabonga Ndinisa and Nita Sukdeo, Faculty of Engineering and the Built Environment, University of Johannesburg, Johannesburg, South Africa

ABSTRACT

The focus of this research is to look at how Industry 4.0 technologies affect the relationship between lean management and operational performance improvement in a developing economy like South Africa. Industry 4.0 is an industry defined by networked machines, intelligent systems and goods, and interconnected solutions. It drives the development of intelligent and dynamic manufacturing systems, as well as the mass production of highly personalized products. Despite its popularity, many businesses are still unsure how to incorporate Industry 4.0's high-tech techniques into their operations. Developing nations face unique obstacles when it comes to investing in Industry 4.0, particularly in manufacturing businesses. Lean management is a widespread practice across a variety of industries and nations, and it includes a continual focus on eliminating unnecessary operations while also enhancing efficiency and quality as viewed by consumers. This study adds to the theoretical fields of advanced manufacturing technology and improved operational performance. It also contributes to a deeper understanding of the linkages between existing lean management techniques and Industry 4.0 technology. The research could also help managers better understand and predict the benefits and challenges of integrating advanced manufacturing technologies into existing lean management systems.

KEYWORDS

Industry 4.0, Systematic Review, Performance Improvement.


A Reliable Pub-Sub Based Framework for Decentralized Financial Strategy Trading using Proof of Behavioral Contribution


Wei Zhang, Qianhui Xu and Yunbo Liu, School of Computer Science and Technology, Hangzhou Dianzi University, China

ABSTRACT

The advent of the digital economy promises to make financial strategies excellent digital assets. However, traditional strategy-sharing platforms face problems such as strategy leakage, data falsification, and the excessive power of third parties. Blockchain technology can improve data-sharing transparency, reliability, security, and efficiency to unlock the potential value of strategies. However, existing blockchain-based data-sharing frameworks cannot be fully applied to the one-to-many strategy transaction scenario under the publish-subscribe model and lack mechanisms to incentivize the sharing entities in a peer-to-peer network. Therefore, we propose DSTS, a decentralized financial strategy trading system that utilizes smart contracts and attribute-based encryption to achieve access control of strategy data. In addition, we construct a comprehensive contribution evaluation model to quantify the on-chain behavior of participants. A behavioral contribution incentive is combined with a cryptographic-sortition-based consensus mechanism to motivate more users to participate in strategy trading and system maintenance. Finally, we implement DSTS and perform functional and performance evaluations. The simulation results show that the proposed strategy trading framework achieves the sharing of signal data and fine-grained access control. The designed incentive and consensus mechanisms improve the fairness and security of the distributed system while ensuring overall efficiency and decentralization.

KEYWORDS

blockchain, smart contract, data sharing, access control, contribution incentive, consensus mechanism.


Use of AI to Diversify and Improve the Performance of RF Sensors Drone Detection Mechanism


Fahad Alsifiany, Department of Information Technology and Communications, King Fahad Security College, Riyadh, Saudi Arabia

ABSTRACT

Drone terrorism may seem elementary, and efforts at its mitigation may seem painless. In fact, security bodies in many countries are still grappling with this growing security concern. The autonomous nature of drones and the unpredictable nature of drone attacks remain some of the unforeseen challenges undermining efforts to combat drone terrorism. The need to upskill our security forces and the general public on the operational practices and security capabilities of the drone world cannot be overemphasized. This paper explores a futuristic solution to the current challenges encountered in the war against drone terrorism. In its design, it delves into the possibility of utilizing Artificial Intelligence (AI) to characterize the features of drones identified in our airspace and determine their authenticity. It further equips security services personnel and the general public with information on combating drone terrorism, drawing on the accumulated experience of the relevant specialized agencies.

KEYWORDS

Drone terrorism, Drone, Radio Frequency (RF), Unmanned Aerial Vehicles (UAV), Artificial Intelligence (AI), Computer Vision (CV), Machine Learning (ML).


Human Consciousness and Conscious Machines


Ines Razec, Doctoral School of Sociology, Faculty of Sociology and Social Work, University of Bucharest, Romania

ABSTRACT

The question of whether artificial intelligence (AI) can become conscious is a topic of ongoing debate in the fields of cognitive science and philosophy. Some argue that it is possible for AI to become conscious because consciousness is a property of complex information-processing systems. Others argue that it is unique to biological organisms and cannot be replicated in AI. Research in AI has made progress in mimicking human cognition, but it lacks the subjective experience and self-awareness of consciousness. This study analyzes the main theories of human consciousness, namely the "Global Workspace Theory" and the "Integrated Information Theory", and uses them as a starting point for a debate on the possibility of an AI becoming conscious, and the consequences this could entail at both the individual and societal level.

KEYWORDS

Consciousness, Artificial Intelligence, Global Workspace Theory, Integrated Information Theory, Imitation Game, Machine Theory of Mind.


A Natural Language Processing Web Application that Interprets and Converts English to Python Code


Sunny Zhao1, Ang Li2, 1St. Margaret Episcopal School, 31641 La Novia Ave, San Juan Capistrano, CA 9267, 2Computer Science Department, California State Polytechnic University, Pomona, CA 9176

ABSTRACT

As the exchange between natural language and program code gradually becomes an industry need, more and more interpreters and translators are required. Such natural language interpreters and converters can benefit society in a variety of fields, such as the service, communication, and engineering industries [6]. Concise and accurate language processors will greatly boost the productivity of low-level repetitive work, provide examples and inspiration for students and industry workers, and become a trend of the future. This paper introduces an application that uses natural language processing and a neural network to effectively interpret and translate English into Python code, and presents the structural flow of the application in detail [7]. The paper also introduces the structure of the neural network, its validity, and how PyTorch was applied and integrated. Furthermore, it demonstrates the application and limitations of this model as well as future improvements. We applied our application to educational needs and conducted a qualitative evaluation of the approach. The results show a beneficial effect with potential applicability to a wider field.

KEYWORDS

Natural Language Processing, English to Python, Web Application


AI DanceFriend: An Intelligent Mobile Application to Automate the Dance Rating using Artificial Intelligence and Computer Vision


Yuanyuan Ding1, Shuyu Wang2, 1Sage Hill School, 20402 Newport Coast Dr, Newport Beach, CA 92657, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

In recent years, dance has become a popular entertainment for many people and also an occupation. As a dancer, it is sometimes hard to check how close your cover is to the choreographer's original, because our eyes are not always accurate when judging the dynamic movement of people; can artificial intelligence do the work for us? This paper develops an application that utilizes artificial intelligence and data analysis for dance scoring [4]. In the application, users upload two videos: their own cover and the original choreography. The application then uses MediaPipe to capture the angles of the dancers' bodies in each frame and stores them in a data abstraction [5]. After all data are collected, the application uses clustering to line up the stored frames and angle information. These steps are applied to both videos. Next, the application uses an algorithm to compare the two videos' data, calculates the percentage error of the cover video against the original choreography, and reports a grade to the user. We applied the application to users who want to check how similar their covers are to the original choreography in order to improve their cover quality [6]. The results show that as users improve the quality of their covers, they also improve their focus on and optimization of details in dance.

KEYWORDS

Artificial Intelligence, Clustering, MediaPipe


Emerging Role of Artificial Intelligence in Addressing the Electricity Crisis


Ndala Yves Mulongo and Nita Sukdeo, Faculty of Engineering and The Built Environment, University of Johannesburg, Johannesburg, South Africa

ABSTRACT

South Africa is in the throes of a severe electricity crisis. The crisis stems from the country's state-owned electricity utility, Eskom, which faces a perfect storm of a growing energy supply shortage compounded by declining generation from increasingly unsustainable, ageing power stations, leading to spiking electricity bills. These factors are wreaking havoc on an economy still recovering from a COVID-induced downturn over the last three years. The harsh truth of the situation is disturbing, with power outages occurring roughly every week. Key questions must be addressed: What has been done to remedy this situation? How can the government speed up the reform of the electricity sector? Against this background, this paper investigates the role of Artificial Intelligence in addressing the factors causing the power outages. The findings demonstrate that Artificial Intelligence has the potential to enhance power efficiency, reliability, and transparency.

KEYWORDS

Artificial Intelligence, Load-Shedding, South Africa


Fibonacci Sequence in Genetic Algorithm


Warayu Intawongs, Wirat Leenavonganan, Tanin Sammanee, Deepscope, Bangkok, Thailand

ABSTRACT

This paper presents an application of the Fibonacci Sequence (FS) to determining the population size of a Genetic Algorithm (GA), called Fibonacci Sequence in Genetic Algorithm (FSGA). The objective is to improve the results the GA can find, so we tested it against a Simple Genetic Algorithm (SGA). We tested on the one-max problem, which is well suited to evaluating such algorithms. We configured the parameters of the GA and FSGA identically for the number of generations, crossover rate, and mutation rate, varying only the population size. Experiments on the one-max problem at different lengths show that the new algorithm outperforms the SGA at all lengths tested and is more stable.
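The abstract does not spell out exactly how the Fibonacci sequence sets the population, so the sketch below encodes one plausible reading — population size per generation follows the Fibonacci numbers — on the one-max problem. Everything here (`ga_one_max`, the selection and rate choices) is a hypothetical illustration, not the authors' FSGA.

```python
import random

def fib(n):
    """First n Fibonacci numbers: 1, 1, 2, 3, 5, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

def one_max(bits):
    return sum(bits)  # one-max fitness = number of 1 bits

def ga_one_max(length=20, generations=12, cx_rate=0.9, mut_rate=0.02, seed=1):
    rng = random.Random(seed)
    # Assumed FSGA reading: population size per generation follows the
    # Fibonacci sequence (floored at 2 so selection always has parents).
    sizes = [max(2, s) for s in fib(generations)]
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(sizes[0])]
    for size in sizes[1:]:
        def tourney():
            return max(rng.sample(pop, min(3, len(pop))), key=one_max)
        nxt = [max(pop, key=one_max)[:]]  # elitism: keep the best individual
        while len(nxt) < size:
            p1, p2 = tourney(), tourney()
            if rng.random() < cx_rate:
                cut = rng.randrange(1, length)   # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [b ^ 1 if rng.random() < mut_rate else b for b in child]
            nxt.append(child)
        pop = nxt
    return max(one_max(ind) for ind in pop)

best = ga_one_max()  # best fitness found, at most `length`
```

Replacing `sizes` with a constant list recovers an SGA with a fixed population, which is the baseline the paper compares against.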

KEYWORDS

Fibonacci Sequence, Genetic Algorithm, size of population


The Impact of Artificial Intelligence and Machine Learning on Workforce Skills and Economic Mobility in Developing Countries: a Case Study of Ghana and Nigeria


Abdulgaffar Muhammad1 and Uwaisu Abubakar Umar2, 1Department of Business Administration, Ahmadu Bello University, Zaria, 2Department of Computer Science, Modibbo Adamawa University

ABSTRACT

This study investigates the impact of Artificial Intelligence (AI) and Machine Learning (ML) technologies on workforce skills and economic mobility in Ghana and Nigeria. Using a qualitative research design, the study involves a literature review and data collection through interviews and focus groups with workers, educators, employers, and policymakers in both countries. The study shows that the adoption of AI and ML technologies is creating a growing demand for workers with complementary skills, leading to a skills gap in the workforce as the education systems in these countries struggle to keep up with the demand. The research highlights the need for policies and strategies to address the skills gap and promote economic mobility. The study's recommendations can inform policymakers, educators, and employers in these countries on the steps necessary to prepare the workforce for the changing demands of the future of work. Overall, this study provides a comprehensive analysis of the qualitative aspects of data collection and analysis and of the impact of AI and ML on workforce skills and economic mobility in Ghana and Nigeria.

KEYWORDS

Artificial Intelligence, Machine Learning, Workforce Skills, Economic Mobility, Ghana & Nigeria.


A Review of Industry 4.0 Application in Tertiary Education


Ndala Yves Mulongo, Octavia Mpanza, and Nita Sukdeo, Faculty of Engineering and The Built Environment, University of Johannesburg, Johannesburg, South Africa

ABSTRACT

The purpose of this study was to analyse different learning methods, namely the traditional method and e-learning. The paper also scans the literature on 4IR principles and the adaptations that tertiary institutions have implemented to incorporate them into the curriculum. The aim was to gauge students' responses globally in the context of 4IR in educational institutions. The method used in this paper investigated different secondary sources on 4IR in teaching and learning, particularly in tertiary institutions, following a literature review of several qualitative studies relevant to comparing the traditional face-to-face method and the online method.

KEYWORDS

Industry 4.0; Higher Education; E-learning.


Towards Comparative Complexity Study of Some Algorithms for Solving Knapsack Problem


Bashar Bin Usman, Dr. Ibrahim Abdullahi and Professor A. E. Okeyinka, Department of Computer Science, Faculty of Natural Science, Ibrahim Badamasi Babangida University Lapai, Niger State, Nigeria.

ABSTRACT

Knapsack problems are paradigmatic problems in combinatorial optimization; the knapsack problem is also known as the rucksack problem. It is a combinatorial optimization problem that determines the number of each item to include in a collection so that the total weight is less than or equal to a given limit while the total value is as large as possible. The goals are to design a procedure for computing the time complexity of the knapsack problem and for comparing the complexity of algorithms that solve it. The proposed methodology will be used in our next study to evaluate the efficiency of the dynamic programming, greedy, and branch-and-bound algorithms on the same knapsack model, with the data serving as input for the implementation and computational performance analysis of the algorithms.
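One of the algorithms the abstract plans to benchmark, dynamic programming, can be stated concretely; the textbook sketch below (not the authors' implementation) also makes the complexity being measured explicit, since it runs in O(n·W) time for n items and capacity W.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming.
    dp[w] = best total value achievable with weight budget w."""
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Iterate the budget downwards so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

The greedy and branch-and-bound alternatives mentioned in the abstract trade this pseudo-polynomial guarantee for speed or pruning, which is exactly what a comparative complexity study would quantify.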

KEYWORDS

Algorithms, knapsack problem, optimization, complexity, computation.


6G Data Analytics Platform for Smart Factory


Maryam Arabshahi1, Hans D. Schotten2, 1Intelligent Networks Research Group, German Research Center for Artificial Intelligence (DFKI GmbH), Kaiserslautern, Germany, 2Institute for Wireless Communication and Navigation, University of Kaiserslautern, D-67663 Kaiserslautern.

ABSTRACT

5G is the new generation of mobile communication, and a great deal of research is being done on it. The Network Data Analytics Function (NWDAF) has been proposed as a core function in 5G, with the intention that NWDAF performs different analysis tasks on network data. Data science is a field that has gained great attention in recent years due to the rising amount of generated data and the need to extract information from it. Condition monitoring is an approach to maintaining machinery in industry, and especially in recent years scientists have achieved a great deal in this field using AI/ML methods. In this paper we first discuss all the above-mentioned topics in more detail, then propose a data analysis platform using NWDAF in a smart-factory scenario. The idea is to apply condition monitoring to the status of machinery connected via a 6G network.

KEYWORDS

6G, Data Science, Condition Monitoring, Smart Factory.


A Novel Ensemble Framework Driven by Diversity and Cooperativity for Non-stationary Data Stream Classification


Kuangyan Zhang, Tuyi Zhang, and Sanmin Liu, School of Computer and Information, Anhui Polytechnic University, Wuhu-241000, China

ABSTRACT

Data stream classification is highly relevant to many real-world scenarios. However, the existing data stream classification methods, influenced by concept drift and label scarcity, are unreliable in non-stationary environments. A novel ensemble learning method is therefore presented in this paper to solve the problem. Intuitively, the key to the success of most ensemble models is increasing diversity among members, and ensembles that improve diversity have been proposed in the literature. Unfortunately, there is no way to verify the positive effect of cooperativity on performance. Motivated by this gap, we developed an ensemble learning framework driven by diversity and cooperativity, called EDDC. First, EDDC dynamically maintains multiple groups of classifiers, and the primary classifier in each group is selected with the goal of improving diversity. Then, cooperativity is used to update groups and replace obsolete members in each group. Finally, EDDC adaptively selects diversity and cooperativity as different strategies for predicting sample labels and establishes a better performance guarantee when the environment changes. In simulation experiments, we evaluate the performance of EDDC and perform hyperparameter sensitivity analysis on multiple datasets in a supervised setting. The evaluation results show that EDDC is effective and stable in most cases. To address label scarcity in non-stationary environments, we developed an online active learning framework using EDDC. Analysis of the experimental results shows that EDDC with the sampling policy diversity entropy (DE) has an advantage over other common active learning methods.

KEYWORDS

Ensemble learning, data stream classification, concept drift, active learning, diversity and cooperativity.


Effective Automatic Feature Engineering on Financial Statements for Bankruptcy Prediction


Xinlin Wang, Mats Brorsson, and Maciej Zurad, Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg, Yoba S.A.

ABSTRACT

Feature engineering on financial records for bankruptcy prediction has traditionally relied heavily on domain knowledge and typically results in a range of financial ratios, but with limited complexity and feature utilization due to manual design. This is not only a time-consuming and error-prone procedure, confined by the domain experts' experience, but it also does not take into account the characteristics of different datasets. In this study we propose an automated feature engineering approach to generate effective, explainable, and extensible model training features. The experiments were conducted using a publicly available record of financial statements submitted to the Luxembourg Business Registers. The experimental results suggest that the proposed approach can provide valuable features for bankruptcy prediction, and in most cases the model's outcomes exceed those of models trained on the traditional, manually derived financial ratios and other feature generation approaches. Researchers lacking the domain knowledge will therefore be able to deal with specific data using this automatic feature engineering approach.
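To make the contrast with hand-designed financial ratios concrete, the toy sketch below generates candidate ratio features automatically from one record. It is purely illustrative — the field names are invented and this is far simpler than the paper's approach, which the abstract does not specify in detail.

```python
from itertools import permutations

def ratio_features(record):
    """Generate all pairwise ratios of the numeric fields in one record.
    A toy version of automatic feature engineering: every ordered pair of
    columns becomes a candidate ratio feature (skipping zero denominators)."""
    feats = {}
    for a, b in permutations(record, 2):
        if record[b] != 0:
            feats[f"{a}_over_{b}"] = record[a] / record[b]
    return feats

# Hypothetical financial-statement fields -- not the actual dataset schema.
rec = {"current_assets": 120.0, "current_liabilities": 80.0, "total_debt": 200.0}
feats = ratio_features(rec)
# feats["current_assets_over_current_liabilities"] recovers the classic
# current ratio (1.5) without anyone having designed it by hand.
```

A learning pipeline would then rank these machine-generated candidates by predictive value, keeping the useful ones, which is what makes such features explainable after the fact.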

KEYWORDS

Automatic feature engineering, domain data mining, bankruptcy prediction.


Framework Architecture Design for ERS


Marco Ruiz Herrera and Juan Sánchez Díaz, Department of Computer Systems and Computation, Universitat Politècnica de València, Valencia, Spain

ABSTRACT

Emergency management is essential to mitigate the effects of unforeseen situations. However, this task is complex due to a large amount of information and complex procedures to be handled. To address these challenges, it is necessary to have technological tools that allow flexible responses to problems classified as knowledge-intensive procedures (KIP). In this sense, we propose the design of a framework for an Emergency Response System (ERS) based on Service Oriented Architecture (SOA) that integrates Adaptive Case Management (ACM) and Business Process Modelling (BPM). This framework is characterised by its interoperability with devices and collaborative systems, which allows the creation and association of content related to emergency management, thus improving usability. In addition, it is designed to be scalable, allowing the incorporation of new modular functionalities. Once the development of the framework has been completed, future lines of research will be opened for its validation and comparison with other traditional response systems through performance and usability metrics. The objective is to present the design of a framework capable of providing fast, flexible, and efficient responses to emergencies, thus improving the safety and security of the population.

KEYWORDS

Adaptive Case Management, Business Process Modelling, Knowledge-Intensive Procedures, Emergency Response System, Framework Architecture.


Reliability Model of Grid Computing Job Scheduling based on Discrete Time Markov Chain


Ali Sarhadi, Javad Akbari Torkestani and Abbas Karimi, Department of Computer Engineering, Arak Branch, Islamic Azad University, Arak, Iran

ABSTRACT

Cloud computing mainly focuses on giving effective access to geographically distributed resources. Job scheduling is one of the most important services in cloud computing: it is used to schedule users' jobs onto suitable resources in the cloud environment. Recently, a number of efficient job scheduling algorithms have been proposed for cloud computing, designed to optimize the time and cost of job execution. On the other hand, a model is needed to assess the reliability of the different job scheduling algorithms; unfortunately, no accurate and complete model has been proposed to evaluate their reliability. Since scheduling failures are inevitable in cloud computing due to the harsh deployment environment, resource migration, non-compliance with time or cost constraints, and so on, providing a reliability model is necessary for successful scheduling algorithms. In this paper, a reliability model based on a discrete time Markov chain is incorporated for well-known scheduling algorithms. We propose an approach to evaluating the reliability of scheduling algorithms based on time and cost parameters. The proposed approach is modelled by a Discrete Time Markov Chain, and the results show the reliability of the scheduling algorithms in different states.
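The basic DTMC reliability calculation the abstract relies on can be illustrated with a toy scheduling chain. The states, transition probabilities, and brute-force propagation below are hypothetical stand-ins, not the paper's model; reliability here is the probability of eventual absorption in the "completed" state.

```python
def absorption_probability(P, start, success, steps=5000):
    """Probability that a discrete-time Markov chain started in `start`
    is eventually absorbed in the `success` state, estimated by
    propagating the state distribution for `steps` transitions."""
    n = len(P)
    dist = [0.0] * n
    dist[start] = 1.0
    for _ in range(steps):
        nxt = [0.0] * n
        for i in range(n):
            for j in range(n):
                nxt[j] += dist[i] * P[i][j]
        dist = nxt
    return dist[success]

# Toy scheduling chain: 0=submitted, 1=running, 2=completed, 3=failed.
P = [
    [0.0, 0.9, 0.0, 0.1],   # submitted -> running, or fail to schedule
    [0.2, 0.0, 0.7, 0.1],   # running -> resubmit, complete, or fail
    [0.0, 0.0, 1.0, 0.0],   # completed (absorbing)
    [0.0, 0.0, 0.0, 1.0],   # failed (absorbing)
]
reliability = absorption_probability(P, start=0, success=2)
```

Solving the linear system for this chain gives reliability 0.9·0.7/0.82 ≈ 0.768; comparing such numbers across scheduling algorithms (each inducing its own transition probabilities from time and cost behaviour) is the kind of evaluation the abstract proposes.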

KEYWORDS

Cloud computing, scheduling, Markov chain, reliability.


A Simplified View Into How KVM Handles Processes of Virtual Machines


Divyanshu Gupta, InIT Labs, ZHAW, Winterthur, Switzerland

ABSTRACT

This paper focuses on the internal workings of the Kernel-based Virtual Machine (KVM) and the Quick Emulator (QEMU) to get an improved perspective on areas of improvement. The main focus areas are CPU and process scheduling. The paper is divided into sections as follows: Section 1 is an introduction to KVM and QEMU; Section 2 focuses on the internal workings of QEMU using KVM and the role of the Linux kernel in vCPU execution; Section 3 looks at process and VM scheduling in KVM.

KEYWORDS

QEMU, KVM, Linux Kernel, Scheduling, hypervisor, VMM, vCPU.


Documenting Critical Cloud Outsourcing Risks and Governance Concerns of Financial Institutions in Europe, US and Canada


Jamelia M. Anderson-Princen, PhD Candidate, Tilburg Law School, Private, Business and Labour Law (PBLL), University of Tilburg, the Netherlands

ABSTRACT

This study investigates how well thirteen financial institutions adapt their governance structures to manage cloud risk exposures and whether their risk and governance challenges are similar or invariant. Using a multigroup path model, this study surveys senior cloud risk and IT experts at large US, EU, and Canadian financial institutions to capture their concerns on 37 different cloud risk and governance issues prevalent in cloud outsourcing. The results show that institutional risk was a primary risk factor, that risk and governance challenges differ (but not significantly), and that there were low to medium inefficiencies with inconsistencies in the control environment. While not all institutions were equally efficient, US-regulated institutions (e > 1.9) were amongst the most inefficient in the governance process, due to higher variation in state policy on data privacy. The results establish that hybrid transactions are difficult to govern due to the incomplete nature of cloud contracts and institutional risk.

KEYWORDS

cloud outsourcing, incomplete contracts, internal governance, financial institutions, transaction cost theory.


Addressing Class Variable Imbalance in Federated Semi-supervised Learning

Zehui Dong, Wenjing Liu, Siyuan Liu and Xingzhi Chen, School of Data Science and Application, Inner Mongolia University of Technology, Hohhot, China

ABSTRACT

Federated Semi-supervised Learning (FSSL) combines techniques from federated and semi-supervised learning to improve the accuracy and performance of models in a distributed environment by using a small fraction of labeled data and a large amount of unlabeled data. Without the need to centralize all data in one place for training, it collects model-training updates after devices train models locally, and thus can protect the privacy of user data. However, during the federated training process, some devices fail to collect enough data for local training, while new devices are included in the group training. This leads to an unbalanced global data distribution and thus affects the performance of global model training. Most current research focuses on class imbalance with a fixed number of classes, while little attention is paid to data imbalance with a variable number of classes. Therefore, in this paper we propose Federated Semi-supervised Learning for Class Variable Imbalance (FCVI) to solve class variable imbalance. A class-variable learning algorithm is used to mitigate the data imbalance due to changes in the number of classes. Our scheme is shown to be significantly better than baseline methods, while maintaining client privacy.

KEYWORDS

Federated semi-supervised learning, Federated learning, Semi-supervised learning, Class variable imbalance.


LSTNETA: A Hybrid Artificial Neural Network Model for Electricity Consumption Prediction - A Comparative Study


Ricardo Augusto Manfredini, Instituto Federal de Educação, Ciências e Tecnologia do Rio Grande do Sul, campus Farroupilha, Brasil

ABSTRACT

We present a comparative study of electricity consumption forecasts using the SARIMAX (Seasonal AutoRegressive Integrated Moving Average with eXogenous variables) method, the HyFis2 (Hybrid Neural Fuzzy Inference System) model, and the LSTNetA (Long and Short Time Series Network Adapted) model, a hybrid neural network containing GRU (Gated Recurrent Unit) layers, CNN (Convolutional Neural Network) layers, and dense layers, specially adapted for this case study. The comparative experimental study showed a superior result for the LSTNetA model, with consumption forecasts much closer to actual consumption. In the case study, the LSTNetA model had a root mean squared error (rmse) of 198.44, the HyFis2 model 602.71, and the SARIMAX method 604.58.
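The three rmse figures quoted above are all computed with the same standard metric, which can be stated in a few lines (the consumption numbers below are illustrative, not the study's data):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between observed and forecast values."""
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

# Toy consumption series: each forecast is off by 10 units.
print(rmse([100.0, 200.0, 300.0], [110.0, 190.0, 310.0]))  # 10.0
```

Because rmse is in the units of the target series, LSTNetA's 198.44 versus roughly 600 for the baselines means its forecasts deviate from actual consumption by about a third as much on average.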

KEYWORDS

Artificial Neural Network, Artificial Intelligence, electricity consumption predictions, time series.


Airbnb Research: An Analysis of the Nexus Between Visual Description and Product Rating


Chun Kit Fu1, Yu Sun2, 1Seven Lakes High School, 9251 S Fry Rd, Katy, TX 77494, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Hosts are often desperate to find ways to rent out their houses; however, most of them do not possess the knowledge of what type of cover image would grasp the attention of their customers. Guided by these needs, I have designed an application that uses machine learning to find the relationship between images and their ratings [1]. I first used JSON to convert the HTML file resource into a format usable in Python for web scraping [2]. This paper designs an application tool that finds all the objects or characters inside images via web scraping and turns them into a model for machine learning [3]. We applied our application to predict ratings and conducted a qualitative evaluation of the approach. To verify our result, I imported an image from Airbnb and found its rating; the predicted rating turned out to be extremely close to the real rating, proving the system's usability.

KEYWORDS

Web scraping, Machine learning, Airbnb

Big Data Management for Peacebuilding and Sustainable Development


Muhammad Abdullahi, University of Maiduguri, Nigeria

ABSTRACT

Big Data Management represents the systematic organisation, administration and governance of large volumes of structured and unstructured data which could be utilized for different purposes as the need arises. The instabilities across the world today, as well as the global advancements in Internet penetration and Information and Communication Technology (ICT), present an opportunity for Big Data to be adequately generated and harnessed for peacebuilding and development purposes. This paper examines the relevance of Big Data and how it is being utilized in enhancing peace and security in vulnerable communities as well as in facilitating effective and efficient development interventions. The paper is a desk research with data drawn from a review of published literature and development intervention reports. The paper reveals how data is re-shaping the world with advanced technologies and systems, such as the move from analogue, face-to-face negotiations to online, asynchronous, web-mediated negotiations in peacebuilding. Big Data equally enhances monitoring and evaluation of peacebuilding, security and development interventions through practices such as systematic data collection, data analysis and the creation of context- and project-specific mapping platforms and intervention analysis. The speed with which Big Data is generated reduces the time lag between the start of an intervention and when authorities are able to respond. It also reduces the knowledge gap about how people respond to the interventions. The utilization of Big Data therefore undoubtedly has great value and significance in peacebuilding and sustainable development, and contributes to informed, more effective, efficient, and inclusive decision-making.

KEYWORDS

Big Data, Peacebuilding, Security, Development

Robust Outdoor Positioning via Raytracing


Vladislav Ryzhov, Department of Multimedia Technologies and Telecommunications, Moscow Institute of Physics and Technology, Moscow, Russia.

ABSTRACT

The positioning problem can be solved with multiple approaches in 5G. This paper introduces ray tracing (RT) techniques for outdoor positioning and tracking in systems with a distributed architecture and multipath processing gain. The proposed approach exploits high-resolution angular spectrum estimation and a known radio propagation environment, and solves the positioning problem robustly even in NLOS cases via CoMP (Cooperative Multi-Point) reception and joint processing at a cloud CPU. The paper intends to offer a new positioning technique for NR based on knowledge of the outdoor environment. The proposed algorithm was compared with real measurements and demonstrated significant gains over typical GNSS accuracy.
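The geometric core of multi-site angle-based positioning — before any ray-traced multipath or NLOS correction — is intersecting angle-of-arrival bearings observed at two receivers. The sketch below is a hypothetical LOS-only illustration in 2-D, not the paper's algorithm.

```python
import math

def intersect_bearings(p1, theta1, p2, theta2):
    """Locate a transmitter from angle-of-arrival bearings (radians,
    measured from the +x axis) observed at receiver positions p1 and p2.
    Solves p1 + t*d1 = p2 + s*d2 for the intersection point."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    det = -d1[0] * d2[1] + d2[0] * d1[1]   # assumes non-parallel bearings
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (-rx * d2[1] + d2[0] * ry) / det   # Cramer's rule for t
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Receivers at (0,0) and (2,0) both see a transmitter at (1,1).
x, y = intersect_bearings((0.0, 0.0), math.pi / 4, (2.0, 0.0), 3 * math.pi / 4)
```

With more than two sites, or with reflected paths mapped back to virtual receiver positions by ray tracing, the same idea becomes an over-determined least-squares problem, which is what makes the approach robust in NLOS cases.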

KEYWORDS

Ray Tracing, positioning, angular domain, CoMP, channel modelling, GPS measurement


An Intelligent and Data-Based Skate Analyzer to Assist in Analyzing Movements of Skate on Ice


Yirina Wang1, Yu Sun2, 1Santa Margarita Catholic High School, 22062 Antonio Pkwy, Rancho Santa Margarita, CA 92688, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

Ever since the start of figure skating, there has been an emphasis on skating technique, especially in the step sequences of a skater's choreography [1]. But figure skaters are often not able to detect the motion, edge, or placement of their blade on the ice without watching themselves skate. The solution to this problem is a skate analyzer: a device that records the movements of a skate on ice so that one can play back the recorded data and view the skate's motion precisely [2]. The three main components that my project links together are the QTPY-ESP32 microcontroller, a sensor that combines an accelerometer, gyroscope, and magnetometer, and an SD card reader. The QTPY-ESP32 acts as the main computer controlling the whole board. The QTPY is connected to the sensor board through the I2C protocol, and to the SD card reader through the SPI protocol. After skaters finish recording, they can insert the SD card into a computer, upload the data into the app, and play it back. There is also a slider at the top of the screen that skaters can slide back and forth to view the skate at specific times in the file. This is a great technology for skaters, as they can play back their movements on ice and improve their technique [3].

KEYWORDS

FMCW Radar, Acceleration Estimate, Range Migration, Fractional Fourier Transform, Linear Frequency Modulated, Chirp Rate


A Frft Based Real-time Estimation of Moving Target Acceleration Method for Fmcw Radar


Qingbo Wang, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China

ABSTRACT

This paper considers the problem of estimating a moving target's acceleration in real time with frequency modulated continuous wave (FMCW) radar. Based on an FMCW radar echo signal model for an accelerating target, and after utilizing the Keystone transform to eliminate the effect of range migration on signal parameter estimation, an improved fractional Fourier transform (FrFT) and an optimized best-matching rotation-angle search strategy are proposed to estimate the chirp rate of the Doppler-dimension echo signals, which is related to the target acceleration. Compared with the traditional FrFT, the approach in this paper requires less computation and significantly reduced processing time while ensuring estimation accuracy. Theoretical analysis and simulation experiments prove the feasibility of the proposed approach. Moreover, the method is implemented on a TI cascade radar system, and real-scenario tests demonstrate its effectiveness.
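The link between chirp rate and acceleration can be illustrated with a toy dechirp-and-search routine: try candidate chirp rates, remove each from the signal, and keep the rate that leaves the most concentrated result. This is only a crude stand-in for the FrFT rotation-angle search the paper improves, using a noiseless baseband LFM signal with invented parameters.

```python
import cmath
import math

def estimate_chirp_rate(signal, fs, candidates):
    """Toy stand-in for an FrFT rotation-angle search: dechirp the signal
    with each candidate chirp rate (Hz/s) and keep the rate whose dechirped
    sum is most concentrated (largest magnitude at DC)."""
    best, best_mag = None, -1.0
    for k in candidates:
        acc = 0j
        for i, s in enumerate(signal):
            t = i / fs
            acc += s * cmath.exp(-1j * math.pi * k * t * t)
        if abs(acc) > best_mag:
            best, best_mag = k, abs(acc)
    return best

# Noiseless baseband LFM with a true chirp rate of 5000 Hz/s (illustrative).
fs, true_k = 1000.0, 5000.0
sig = [cmath.exp(1j * math.pi * true_k * (i / fs) ** 2) for i in range(256)]
est = estimate_chirp_rate(sig, fs, [3000.0, 4000.0, 5000.0, 6000.0])
```

The cost of this brute-force search grows with the number of candidates, which is precisely why an optimized rotation-angle search strategy, as the abstract proposes, reduces processing time.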

KEYWORDS

FMCW Radar, Acceleration Estimate, Range Migration, Fractional Fourier Transform, Linear Frequency Modulated, Chirp Rate


Color Classification by Knn Classifier Using Rgb Histogram and Hsv Color Space


V.R Marasinghe, Department of Mechanical Engineering, The Open University of Sri Lanka, Nawala, Nugegoda, Sri Lanka

ABSTRACT

Color identification in images has become an emerging topic as it is used in many real-world applications. Image processing techniques have the ability to present results in a way that is much closer to human visual perception. This research classifies images using two methods: HSV color-space-based classification and a K-Nearest Neighbor (KNN) machine-learning classifier. The second method employs RGB color histogram data from a set of colors used as training images to train the KNN classifier. Color classification can be divided into two steps: ROI (Region of Interest) detection and color identification. The experimental procedure and the pros and cons of both methods are identified and discussed in this research.
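The second method — RGB histograms as features for a KNN vote — can be sketched end to end on synthetic pixel patches. This is an illustrative minimal sketch, not the paper's implementation: the bin count, distance metric, and training colors are all assumptions.

```python
from collections import Counter

def rgb_histogram(pixels, bins=4):
    """Flattened, normalised 3-D RGB histogram of a list of (r, g, b)
    pixels with channel values in 0..255; `bins` buckets per channel."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    n = len(pixels)
    return [h / n for h in hist]

def knn_predict(train, query_hist, k=3):
    """train: list of (histogram, label) pairs.
    Squared-Euclidean distance, majority vote among the k nearest."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda t: dist(t[0], query_hist))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Tiny synthetic training set: mostly-red vs mostly-blue pixel patches.
red = [(250, 10, 10)] * 9 + [(200, 40, 40)]
blue = [(10, 10, 250)] * 9 + [(40, 40, 200)]
train = [(rgb_histogram(red), "red"), (rgb_histogram(blue), "blue"),
         (rgb_histogram([(230, 20, 20)] * 10), "red"),
         (rgb_histogram([(20, 20, 230)] * 10), "blue")]
print(knn_predict(train, rgb_histogram([(240, 30, 30)] * 10)))  # red
```

In the full pipeline the pixel list would come from the detected ROI rather than a synthetic patch, so ROI quality directly bounds classification accuracy.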

KEYWORDS

ROI, HSV color space, KNN classifier, Machine learning, RGB color histogram.


A Fully Automated Music Equalizer based on Music Genre Detection using Deep Learning and Neural Network


Kevin Hu1, Yu Sun2, Yujia Zhang3, 1Sage Hill School, 20402 Newport Coast Dr, Newport Beach, CA 92657, 2California State Polytechnic University, Pomona, CA 91768, Irvine, CA 92620, 3University of California Irvine, Irvine, CA 92697

ABSTRACT

Recent years have witnessed the dramatic popularity of online music streaming and the use of headphones like AirPods, which millions of people use daily [1]. Melodic EQ was inspired by these users and aims to create the best audio listening experience for listeners with various preferences [2]. Melodic EQ is a project that creates EQs customized to the user's music tastes and filters the audio to fit their favorite settings. To achieve this goal, the process starts with a song file taken from an existing source, for example, Spotify downloads or mp3s. This file is then uploaded to the app. The software runs the song through a genre-detection algorithm and assigns a genre label to it. Inside the app, the user creates or selects EQs for that genre and applies them to their music. The interface is easy to use, and the app aims to make everyone's preferences achievable on the fly. That is why there are presets in each category for users who are unfamiliar with equalizers, and custom settings for advanced users to create their perfect sound for each genre.

KEYWORDS

AI auto genre detection, Automatic genre switching, EQ, Convolution music equalizer network



Power Data Governance Based on Edge Blockchain and Reinforcement Learning


Jiachun Chen1 and Zhaogong Zhang2, 1Department of Computer Science and Technology, Heilongjiang University, HeiLongJiang, China, 2Heilongjiang University, HeiLongJiang, China

ABSTRACT

With the deepening of power grid informatization construction and application, power data has become an important company asset. Addressing professional business collaboration and information sharing, long data-entry cycles, weak data accuracy and timeliness, data extraction, redundant storage, low data quality, and privacy protection, and further managing data holistically and mining the value of data resources, has therefore become one of the important tasks for power enterprises seeking to improve lean management and scientific decision-making. Traditional solutions usually combine blockchain with edge computing for a single layer of security protection, or use reinforcement learning to solve the energy consumption and delay problems in data communication. To address these problems, this paper proposes an edge blockchain for privacy-preserving data aggregation, called EBDA. Through theoretical analysis and simulation, EBDA shows great advantages in resisting network attacks and in reducing system computing cost and communication overhead. We also analyze cross-platform power governance based on edge computing and deep reinforcement learning. The final experimental results show that our proposed scheme achieves lower delay and lower energy consumption.

KEYWORDS

Edge computing, blockchain, smart grid, deep reinforcement learning.



Contact Us

biotconf@yahoo.com