3rd International Conference on Machine Learning Techniques (MLTEC 2022)

November 19 ~ 20, 2022, Zurich, Switzerland

Accepted Papers

Random Zeroing Data Augmentation

Nezhurina Marianna, Kuban State Technological University, Krasnodar, Russia

ABSTRACT

In this paper we present a new data augmentation technique, Random Zeroing. This technique is easy to implement and can be used alongside existing data augmentation techniques. We trained convolutional neural networks on three datasets (CIFAR-10, MNIST, and Fashion-MNIST) and compared the validation accuracies of networks trained with different variations of the Random Zeroing technique.
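
As an illustration only (not the authors' code), a minimal sketch of a Random Zeroing-style augmentation could zero out a randomly sized, randomly placed patch of each image; the parameter ranges below are assumptions:

    import numpy as np

    def random_zeroing(image, p=0.5, min_area=0.02, max_area=0.2, rng=None):
        """Zero out a random rectangular patch of `image` with probability p."""
        if rng is None:
            rng = np.random.default_rng()
        if rng.random() > p:
            return image
        h, w = image.shape[:2]
        area = rng.uniform(min_area, max_area) * h * w
        aspect = rng.uniform(0.3, 1.0 / 0.3)
        ph = min(int(round(np.sqrt(area * aspect))), h)
        pw = min(int(round(np.sqrt(area / aspect))), w)
        top = rng.integers(0, h - ph + 1)
        left = rng.integers(0, w - pw + 1)
        out = image.copy()
        out[top:top + ph, left:left + pw] = 0   # zeros instead of random noise (cf. Random Erasing)
        return out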

KEYWORDS

Data Augmentation, Computer Vision, Convolutional Neural Networks, Random Erasing.


Comparing Hierarchical Approaches to Enhance Supervised Emotive Text Classification

Lowri Williams, Eirini Anthi, Amir Javed, Pete Burnap, School of Computer Science & Informatics, Cardiff University, Cardiff, UK

ABSTRACT

The performance of emotive text classification using affective hierarchical schemes (e.g., WordNet-Affect) is often evaluated using the same traditional measures used to evaluate performance when a finite set of isolated classes is used. However, applying such measures means the full characteristics and structure of the emotive hierarchical scheme are not considered. Thus, the overall performance of emotive text classification using emotion hierarchical schemes is often inaccurately reported and may lead to ineffective information retrieval and decision making. This paper provides a comparative investigation of how four extended evaluation metrics which consider the characteristics of the hierarchical scheme can be applied and subsequently improve the classification of emotive texts. This study investigates the classification performance of three widely used classifiers, Naive Bayes, J48 Decision Tree, and SVM, following the application of the aforementioned methods. The results demonstrate that all methods improved the performance of all classifiers. However, the most notable improvement was recorded when a depth-based method was applied to both the testing and validation data, where the precision, recall, and F1-score were significantly improved, by around 70 percentage points for each classifier.
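
For context, one common family of hierarchy-aware metrics augments the predicted and true labels with their ancestors in the scheme before computing precision and recall; the sketch below illustrates that idea and is not necessarily the exact variant evaluated in the paper (the example hierarchy is invented):

    def hierarchical_prf(true_label, pred_label, ancestors):
        """ancestors: dict mapping a label to the set of its ancestors in the emotion hierarchy."""
        true_set = {true_label} | ancestors.get(true_label, set())
        pred_set = {pred_label} | ancestors.get(pred_label, set())
        overlap = len(true_set & pred_set)
        precision = overlap / len(pred_set)
        recall = overlap / len(true_set)
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1

    # "cheerfulness" and "joy" share ancestors, so a near-miss is rewarded
    # instead of being scored as a plain error.
    ancestors = {"cheerfulness": {"joy", "positive-emotion"}, "joy": {"positive-emotion"}}
    print(hierarchical_prf("cheerfulness", "joy", ancestors))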

KEYWORDS

Sentiment Analysis, Emotion Classification, Supervised Machine Learning, Hierarchical Classification, Natural Language Processing.


Depth-based Region Proposal: Multi-stage Real-time Object Detection

Shehab Eldeen Ayman1, Walid Hussein2, Omar H. Karam3, 1Department of Software Engineering, Faculty of Informatics and Computer Science, The British University in Egypt, Cairo, Egypt, 2Department of Computer Science, Faculty of Informatics and Computer Science, The British University in Egypt, Cairo, Egypt, 3Department of Information Systems, Faculty of Informatics and Computer Science, The British University in Egypt, Cairo, Egypt

ABSTRACT

Many real-time object recognition systems operate on two-dimensional images, neglecting the involved objects' third-dimensional (i.e., depth) information. The depth information of a captured scene provides a thorough understanding of an object in full-dimensional space. During the last decade, several region proposal techniques have been integrated into object detection: the scene's objects are localized and classified, but only in a two-dimensional space. Such techniques exist under the umbrella of two-dimensional object detection models such as YOLO and SSD. However, these techniques cannot guarantee that an object's boundaries are properly specified within the scene. This paper proposes a unique region proposal and object detection strategy based on retrieving depth information for localization and segmentation of the scene's objects in a real-time manner. The results obtained on different datasets show superior accuracy in comparison to commonly implemented techniques, with regard not only to detection but also to pixel-by-pixel localization of objects.

KEYWORDS

Real time object detection, region proposal, computer vision, RGBD object detection, two stage object detection.


Utilizing Deep Machine Learning to Create a Contextually and Environmentally Aware Application to Prevent Spinal Tendonitis

Barry Li1 and Yu Sun2, 1Northwood High School, 4515 Portola Pkwy, Irvine, CA 92620, 2California State Polytechnic University, Pomona, CA 91768, Irvine, CA 92620

ABSTRACT

Recently, we have discovered that when a person is using their computer, they often begin to lean forward toward the screen without noticing. Leaning forward can cause many problems in the body, especially to the backbone in the form of spinal tendonitis, and the problem can spread throughout the entire body [1][2]. I created an app to warn users to sit up straight when they lean toward the screen too much, effectively protecting them from damaging their backbone. This app uses deep learning to estimate body posture and draw an imaginary triangle between the shoulders, hips, and knees [3]. The point at the hips is most vital in calculating the angle of the body. The app takes pictures at a given interval (by default 30 seconds); when the body leans forward, this angle decreases, and when the angle becomes lower than a given threshold (by default 30 degrees), the app sends a warning message asking the user to fix their sitting posture [4].
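
A minimal sketch of the hip-angle computation described above (a hypothetical helper, not the app's actual code) could measure the angle at the hip vertex of the shoulder-hip-knee triangle:

    import math

    def hip_angle(shoulder, hip, knee):
        """Return the angle (degrees) at the hip between the hip->shoulder and hip->knee vectors."""
        v1 = (shoulder[0] - hip[0], shoulder[1] - hip[1])
        v2 = (knee[0] - hip[0], knee[1] - hip[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

    # Example with made-up keypoint coordinates; the threshold is the abstract's 30-degree default.
    angle = hip_angle((0.1, 0.0), (0.0, 1.0), (0.8, 1.2))
    print(f"hip angle: {angle:.1f} degrees")
    if angle < 30:
        print("Please sit up straight")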

KEYWORDS

Machine Learning, Application, Spinal Tendonitis.


It's a Long Way! Layer-Wise Relevance Propagation for Echo State Networks Applied to Earth System Variability

Marco Landt-Hayen1, Peer Kröger2, Martin Claus3 and Willi Rath4, 1GEOMAR Helmholtz Centre for Ocean Research, Kiel, Germany, 2Christian-Albrechts-Universität, Kiel, Germany, 3GEOMAR Helmholtz Centre for Ocean Research, Kiel, Germany, 4GEOMAR Helmholtz Centre for Ocean Research, Kiel, Germany

ABSTRACT

Artificial neural networks (ANNs) are powerful methods for many hard problems (e.g., image classification or time series prediction). However, these models are often difficult to interpret. Layer-wise relevance propagation (LRP) is a widely used technique for understanding how ANN models come to their conclusions and what a model has learned. Here, we focus on Echo State Networks (ESNs), a particular type of recurrent neural network. ESNs are easy to train and only require a small number of trainable parameters. We show how LRP can be applied to ESNs to open the black box. We also show an efficient way in which ESNs can be used for image classification: our ESN model serves as a detector for the El Niño Southern Oscillation (ENSO) from sea surface temperature anomalies. ENSO is a well-known problem, but here we use it to demonstrate how LRP can significantly enhance the explainability of ESNs.
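
For readers unfamiliar with ESNs, the sketch below shows why they are cheap to train: the reservoir weights stay fixed and only a linear readout is fitted. Sizes, scaling factors, and the ridge penalty are illustrative assumptions, not the authors' configuration:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res = 3, 200
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1 (echo state property)

    def reservoir_states(inputs):
        """Run the fixed, untrained reservoir over an input sequence."""
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            x = np.tanh(W_in @ u + W @ x)
            states.append(x.copy())
        return np.array(states)

    # Only the linear readout is trained (ridge regression), hence the small number of parameters.
    states = reservoir_states(rng.normal(size=(100, n_in)))
    targets = rng.normal(size=100)
    ridge = 1e-2
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ targets)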

KEYWORDS

Reservoir Computing, Echo State Networks, Layer-wise Relevance Propagation, Explainable AI.


Stock Price Prediction Model Based On Dual Attention and TCN

Yifeng Fu and He Xiao, Department of Software Engineering, Jiangxi University of Science and Technology, Nanchang, China

ABSTRACT

The stock market is affected by many variables and factors, and current forecasting models for time series often struggle to capture the complex relationships among multiple factors. To address this problem, a stock price prediction model based on a dual attention mechanism and a temporal convolutional network is proposed. First, a convolutional network better suited to time series is used as the feature extraction layer, and feature attention is introduced to dynamically mine the potential correlation between the input factor features and closing prices. Second, on the basis of a Gated Recurrent Unit, a temporal attention mechanism is introduced to improve the model's ability to learn important time points and to obtain importance measures from a temporal perspective. The experimental results show that the proposed model outperforms traditional prediction models on stock price prediction error metrics and achieves interpretability in terms of both feature characteristics and time.
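
As a rough illustration of the feature-attention step (layer sizes and the exact formulation are assumptions, not the paper's architecture), each input factor can be re-weighted by a learned, softmax-normalized score before entering the temporal convolution layers:

    import torch
    import torch.nn as nn

    class FeatureAttention(nn.Module):
        def __init__(self, n_features):
            super().__init__()
            self.score = nn.Linear(n_features, n_features)

        def forward(self, x):                                # x: (batch, time, n_features)
            weights = torch.softmax(self.score(x), dim=-1)   # per-step weight for each factor
            return x * weights, weights

    attn = FeatureAttention(n_features=8)
    weighted, w = attn(torch.randn(4, 30, 8))                # weighted features go to the TCN layers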

KEYWORDS

Temporal convolutional network, GRU, Temporal attention, Feature attention, Interpretability.


Serverless Web Application for the Life Cycle of Software Development Projects using Scrum

Pablo Josué Francia Del Busto, Rodson Vladimir Ayme Tambra and Juan Antonio Flores Moroco, Department of Software Engineering, Universidad Peruana de Ciencias Aplicadas, Lima, Perú

ABSTRACT

Serverless models are one of the latest architecture models provided by cloud vendors such as AWS and Microsoft. We explore serverless applications to develop a progressive web application that will help future developers and project leaders better manage their projects. As seen in the investigation done, Latin America is below the global average in completing projects on time or within the planned budget. In this work we explain the agile life cycle of the web application developed, focusing on the Scrum aspects of the tool design, the serverless architecture of the app, and its subsequent development, while also analyzing current projects to measure the level of effectiveness that a proper Scrum method can have on finishing a project correctly, on time, and within budget.

KEYWORDS

Progressive app, Web application, FaaS, Serverless, Scrum, Agile.


Optimal Short-Time Fourier Transform Parameters for Enhancing Signal Separation

Sameir A. Aziez, Electromechanical Engineering Dept., University of Technology, Baghdad, Iraq; Saad M. Khaleefah, Al-Hikma College University, Baghdad, Iraq; Bassam H. Abed, Electrical Engineering Dept., University of Technology, Iraq; Thamir R. Saeed, Electrical Engineering Dept., University of Technology, Iraq; Shaymaa A. Mohammed, Electrical Engineering Dept., University of Technology, Iraq; Ghufran M. Hatema, Najaf Technical College, Al-Najaf Al-Ashraf, Iraq

ABSTRACT

The observation of dynamic systems is essential in many fields, and many algorithms are used to estimate such systems. The short-time Fourier transform (STFT) is one of these algorithms. This paper presents an optimized STFT to extract the Doppler properties of the observed system. The improvement reached 7-11% over the unoptimized process, and the processing speed was also affected by 35%.
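
For orientation, the sketch below shows the STFT parameters (window length and overlap) that such an optimization would tune; the signal and parameter values are purely illustrative, not the paper's optimum:

    import numpy as np
    from scipy.signal import stft

    fs = 1000                                   # sampling rate, Hz
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * (50 + 30 * t) * t)   # toy chirp standing in for a Doppler return

    f, tau, Zxx = stft(x, fs=fs, window="hann", nperseg=128, noverlap=96)
    print(Zxx.shape)                            # (frequency bins, time frames)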

KEYWORDS

Short-time Fourier transform, optimization, Doppler, Dynamic system.


Enterprise Model Library for Business-IT-Alignment

Peter Hillmann, Diana Schnell, Harald Hagel, and Andreas Karcher, Department of Computer Science, Universität der Bundeswehr, Munich, Germany

ABSTRACT

The knowledge of the world is passed on through libraries. Accordingly, domain expertise and experience should also be transferred within an enterprise via a knowledge base. Models are an established medium for describing good practices for complex systems, processes, and interconnections. However, there is no structured and detailed approach for the design of an enterprise model library. The objective of this work is a reference architecture for a repository of reusable models. It includes the design of the data structure for filing, the processes for administration, and the possibilities for usage. A case study with industry demonstrates the practical benefits of reusing work already done. It provides an organization with systematic access to specifications, standards, and guidelines. Thus, further development is accelerated and supported in a structured way, and the complexity remains controllable. The presented approach details various enterprise architecture frameworks and provides benefits for model-based development.

KEYWORDS

Enterprise Architecture, Model Library, Business-IT-Alignment, Reference Architecture, Enterprise Repository for reusable Models.


Six Memos in Software Development: Insights from Italo Calvino’s “Six Memos for the Next Millennium”

Mohammad Khalil1, Artem Kruglov1, Maxim Ksenofontov1, Vladislav Lamzenkov1 and Giancarlo Succi2, 1Innopolis University, Innopolis, Russia, 2University of Bologna, Bologna, Italy

ABSTRACT

Is there a connection between professional writers and software developers? Can artists contribute to the process of development? Can software engineers use powerful practices without knowing their origin? In this paper, we urge readers to reflect on these questions. Our primary goal was to read the book “Six Memos for the Next Millennium” by the Italian writer and journalist Italo Calvino and to find practical applications of his ideas in modern software development. In addition, we propose reflections for new investigations exploring the influence of principles and concepts coming from literature in general. In our work, we aim to demonstrate that Calvino’s principles underlie many smart and powerful decisions, and that further research can be made.

KEYWORDS

Software engineering, Human behavior, Software development process, Team, Agile.


A Cryptographically Secured Real-Time Peer-to-Peer Multiplayer Framework For WebRTC using Resynchronizing-at-Root, Random Authority Shuffle, and Hash Commitment Scheme

Haochen Han1 and Yu Sun2, 1Troy High School, 2200 Dorothy Ln, Fullerton, CA 92831, 2California State Polytechnic University, Pomona, CA 91768, Irvine, CA 92620

ABSTRACT

P2P (peer-to-peer) multiplayer protocols, such as lockstep and rollback netcode, have historically been the cheaper, direct alternative to the client-server model [1]. Recent advances in WebRTC technology raise interesting prospects for independent developers to build serverless, P2P multiplayer games in the browser. P2P has several advantages over the client-server model in multiplayer games, such as reduced latency and significantly cheaper servers that only handle handshakes. However, as the browser environment does not allow for third-party anti-cheat software, having a secure protocol that catches potential cheaters is crucial. Furthermore, traditional P2P protocols, such as deterministic lockstep, are unusable in the browser environment because different players could be running the game on different browser engines [2]. This paper introduces a framework called Peercraft for P2P WebRTC games with both security and synchronization. We propose two P2P cheat-proofing protocols, Random Authority Shuffle and Speculation-Based State Verification, both built on known secure cryptographic primitives. We also propose a time-based synchronization protocol that does not require determinism, Resynchronizing-at-Root, which tolerates desynchronizations due to browser instability while fixing the entire desynchronization chain with only one re-simulation call, greatly improving the browser game's performance [3].
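
As background on the hash commitment primitive named in the title (this is not Peercraft's actual protocol or message format), a commit-reveal exchange lets a peer bind itself to an input before the other peers reveal theirs:

    import hashlib
    import secrets

    def commit(move):
        nonce = secrets.token_hex(16)
        digest = hashlib.sha256(f"{nonce}:{move}".encode()).hexdigest()
        return digest, nonce          # publish the digest now, keep nonce and move secret

    def verify(digest, nonce, move):
        return hashlib.sha256(f"{nonce}:{move}".encode()).hexdigest() == digest

    # Peer A commits before seeing peer B's input and reveals later;
    # any change to the revealed move makes verification fail.
    d, n = commit("jump")
    assert verify(d, n, "jump")
    assert not verify(d, n, "crouch")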

KEYWORDS

Cyber Security, Anti-Cheat, Peer-to-Peer multiplayer, WebRTC.


The Perspectives of Using Java Server Pages

Ahmed Al-Tameem, King Saud University, Riyadh 11451, Kingdom of Saudi Arabia

ABSTRACT

This paper considers the possibilities of using Java Server Pages (JSP) technology as an alternative to existing PHP-based platforms. Data on the relative productivity, capacity, and main gaps of this system have been collected and processed. Statistical data on trends in support for different programming languages were also analyzed, with a focus on technologies with significant potential. The paper also presents a short overview of the unit chosen as such an alternative, JSP, with an example of the simplest program. An approximate (not exhaustive) list of the capabilities of this platform is given to demonstrate its capacity and potential for the development of powerful web applications. The examples and analysis presented allow a conclusion to be drawn about the applicability of JSP as an alternative to PHP.

KEYWORDS

Servlet, Java Server Pages technologies, PHP, Python.


Research on Wireless Powered Communication Network Sum Rate Maximization based on Time Reversal OFDM

Wei Liu1, Fang Wei Li2, Hai Bo Zhang3, Bo Li4, 1School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China, 2Chongqing Key Laboratory of Public Big Data Security Technology, Chongqing, China, 3School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China, 4Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, China

ABSTRACT

This paper studies a wireless powered communication network (WPCN) based on orthogonal frequency division multiplexing (OFDM) with time reversal (TR). The "Harvest Then Transmit" protocol is adopted, and the transmission time block is divided into three stages: the first stage is for power transmission, the second stage for TR detection, and the third stage for information transmission. The energy-limited access point (AP) and the terminal node obtain energy from the radio frequency signal sent by the power beacon (PB) to assist the terminal data transmission. In the TR phase and the wireless information transmission (WIT) phase, the terminal transmits the TR detection signal to the AP using the collected energy, and the AP uses the collected energy to transmit independent signals to a plurality of terminals through OFDM. In order to maximize the sum rate of the WPCN, the energy collection time and AP power allocation are jointly optimized. Under the energy causality constraint, the subcarrier allocation, power allocation, and time allocation of the whole process are studied. Because of the binary variables involved in the subcarrier allocation, the problem is a mixed-integer non-convex programming problem; it is transformed into a quasiconvex problem, and binary search is then used to obtain the optimal solution. The simulation results verify the effectiveness of this scheme and show that the proposed scheme significantly improves the sum rate of the terminals compared to the reference scheme.

KEYWORDS

Wireless Powered Communication Network, Time Reversal, OFDM.


Screening Viral, Bacterial, and COVID-19 Pneumonia using a Deep Learning Framework from Chest X-ray Images

Muhammad E. H. Chowdhury, Tawsifur Rahman, Amith Khandakar, Sakib Mahmud, Department of Electrical Engineering, Qatar University, Doha, Qatar

ABSTRACT

The novel coronavirus disease (COVID-19) is a highly contagious infectious disease. Even though there is a large pool of articles showing the potential of using chest X-ray images in COVID-19 detection, a detailed study using a wide range of pre-trained convolutional neural network (CNN) encoder-based deep learning frameworks for screening viral, bacterial, and COVID-19 pneumonia is still missing. Deep learning network training is challenging without a properly annotated, large database. Transfer learning is a crucial technique for transferring knowledge from real-world object classification tasks to domain-specific tasks, and it may offer a viable answer. Although COVID-19 infection of the lungs and bacterial and viral pneumonia share many similarities, they are treated differently; therefore, it is crucial to diagnose them appropriately. The authors have compiled a large X-ray dataset (QU-MLG-COV) consisting of 16,712 CXR images with 8,851 normal, 3,616 COVID-19, 1,485 viral, and 2,740 bacterial pneumonia CXR images. We employed image pre-processing methods and 21 deep pre-trained CNN encoders to extract features, which were then dimensionality-reduced using principal component analysis (PCA) and classified into four classes. We trained and evaluated every cutting-edge pre-trained network as a feature extractor to improve performance. CheXNet surpasses the other networks for identifying COVID-19, bacterial, viral, and normal cases, with accuracies of 98.89 percent, 97.87 percent, 97.55 percent, and 99.09 percent, respectively. The deep-layer networks found significant overlaps between viral and bacterial images. The paper validates that the networks learn from the relevant areas of the images via Score-CAM visualization. The performance of the various pre-trained networks is also thoroughly examined in terms of both inference time and well-known performance criteria.
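
The overall pipeline (pre-trained CNN features, PCA, then a classifier) can be sketched as follows; the features, component count, and classifier below are stand-ins, not the paper's choices:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    # Suppose `features` holds deep-CNN encoder outputs for each chest X-ray
    # and `labels` holds the four classes; random placeholders are used here.
    rng = np.random.default_rng(0)
    features = rng.normal(size=(1000, 1024))
    labels = rng.integers(0, 4, size=1000)      # 0=normal, 1=COVID-19, 2=viral, 3=bacterial

    pca = PCA(n_components=128)                 # dimensionality reduction before classification
    reduced = pca.fit_transform(features)
    clf = LogisticRegression(max_iter=1000).fit(reduced, labels)
    print(clf.score(reduced, labels))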

KEYWORDS

Novel Coronavirus disease, COVID-19, viral pneumonia, bacterial pneumonia, deep learning, Convolutional neural network, Principal component analysis.


Deep Learning Technique to Denoise EMG Artifacts from Single-Channel EEG Signals

Muhammad E. H. Chowdhury1, Md Shafayet Hossain2, Sakib Mahmud1, Amith Khandakar1, 1Department of Electrical Engineering, Qatar University, Doha, 2713, Qatar, 2Department of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi, 43600, Selangor, Malaysia

ABSTRACT

The adoption of dependable and robust techniques to remove electromyogram (EMG) artifacts from electroencephalogram (EEG) is essential to enable the exact identification of several neurological diseases. It is still a challenge to design an effective technique to eliminate EMG artifacts from EEG recordings, even though many classical signal processing-based techniques have been used in the past and only a few deep-learning-based models have been proposed very recently. In this work, deep learning (DL) techniques have been used to remove EMG artifacts from single-channel EEG data by employing four popular 1D convolutional neural networks (CNN) models for signal synthesis. To train, validate, and test four CNN models, a semi-synthetic publicly accessible EEG dataset known as EEGdenoiseNet has been used. The performance of 1D CNN models has been assessed by calculating the relative root mean squared error (RRMSE) in both the time and frequency domain and the temporal and spectral percentage reduction in EMG artifacts. Moreover, the average power ratios between five EEG bands to whole spectra are separately calculated. The U-Net model outperformed the other three 1D CNN models in most cases in removing EMG artifacts from EEG. U-Net achieved the highest temporal and spectral percentage reduction in EMG artifacts (90.01% and 95.49%); the closest average power ratio for theta, alpha, beta, and gamma band (0.55701, 0.12904, 0.07516, and 0.01822, respectively) compared to ground truth EEG (0.5429; 0.13225; 0.08214; 0.002146; and 0.02146, respectively). It is expected from the reported results that the proposed framework can be used for real-time EMG artifact reduction from multi-channel EEG data.
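
For reference, the relative root-mean-squared error used to score denoising quality is commonly defined as the RMSE between the denoised and clean signals normalized by the RMS of the clean signal; a short sketch of the metric (not the paper's evaluation code):

    import numpy as np

    def rrmse(denoised, clean):
        """Relative root-mean-squared error between a denoised EEG segment and the ground truth."""
        return np.sqrt(np.mean((denoised - clean) ** 2)) / np.sqrt(np.mean(clean ** 2))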

KEYWORDS

EEG, EMG artifacts, Deep Learning, Single Channel, Denoising, Convolutional neural network.


A Systematic Literature Review on Insect Detection

Lotfi Souifi1, Afef Mdhaffar1,2, Ismael Bouassida Rodriguez1, Mohamed Jmaiel1,2, and Bernd Freisleben3, 1University of Sfax, ENIS, ReDCAD Laboratory, B.P. 1173 Sfax, Tunisia, 2Digital Research Center of Sfax, 3021 Sfax, Tunisia, 3Dept. of Math. & Comp. Sci., Philipps-Universität Marburg, Germany

ABSTRACT

Due to the advancements of deep learning (DL), particularly in the areas of visual object detection and convolutional neural networks (CNN), insect detection in images has received a lot of attention from the research community in the last few years. This paper presents a systematic review of the literature on the topic of insect detection in images. It covers 50 research papers on the subject and responds to three research questions: i) type of dataset used; ii) detection technique used; iii) insect location. The paper also provides a summary of existing methods used for insect detection.

KEYWORDS

Systematic Literature Review (SLR), Deep Learning (DL), Object Detection, Insect Detection.


Optimized Machine Learning Classifiers for the Prediction of COVID-19 from Clinical Inpatient Data

Abbas Jafar1, Rizwan Ali Naqvi2 and Myungho Lee1, 1Department of Computer Engineering, Myongji University, Yongin, Korea, 2Department of Unmanned Vehicle Engineering, Sejong University, Seoul, Korea

ABSTRACT

COVID-19 is a viral pandemic disease that has spread widely all around the world. The only way to control the spread of the virus is the early detection of COVID-19 patients. Different approaches are used for diagnosis, such as RT-PCR, chest X-rays, and CT images. However, these require a specialized lab and expensive medical equipment, and such diagnosis methods are costly and time-consuming. Therefore, there is a need to develop a cost-effective and time-efficient diagnosis method to detect COVID-19 patients. The proposed method predicts the presence of coronavirus based on clinical symptoms. The clinical dataset was collected from the Israeli Ministry of Health and pre-processed. We used different machine learning classifiers (i.e., XGB, DT, RF, and NB) to diagnose COVID-19. XGB is the most accurate classifier, with an accuracy of 94.74%. Each classifier was then optimized with a Bayesian hyperparameter optimization approach to improve performance. The optimized RF outperformed the others and achieved an accuracy of 97.23% on the testing data. This study helps in the early diagnosis of COVID-19 patients, especially in low-income countries where RT-PCR kits are not sufficient.
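
A sketch of the Bayesian-style hyperparameter tuning step for a random forest is shown below; the search space, the toy data, and the use of Optuna's default TPE sampler are assumptions, not the paper's exact setup:

    import optuna
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    def objective(trial):
        params = {
            "n_estimators": trial.suggest_int("n_estimators", 50, 400),
            "max_depth": trial.suggest_int("max_depth", 3, 20),
            "min_samples_split": trial.suggest_int("min_samples_split", 2, 10),
        }
        model = RandomForestClassifier(**params, random_state=0)
        return cross_val_score(model, X, y, cv=3).mean()

    study = optuna.create_study(direction="maximize")   # TPE sampler by default
    study.optimize(objective, n_trials=30)
    print(study.best_params, study.best_value)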

KEYWORDS

Covid-19, Machine Learning, Bayesian Optimization, Hyperparameters, Decision Tree.


Towards a Smart Multi-Modal Image Registration Process

Marwa Chaabane1,2, Bruno Koller2 and Ismael Bouassida Rodriguez3, 1Department of Computer Science, University of Kiel, Germany, 2Scanco Medical AG, 8306 Wangen-Brüttisellen Switzerland, 3ReDCAD Laboratory, ENIS, University of Sfax, Tunisia

ABSTRACT

Multi-modal image registration is a complex task in the medical domain. It usually requires several manual interventions by the user/expert of the domain to adjust the image registration parameters properly to the characteristics of the processed image data. For this, the user needs to extract the relevant information from the image data and their meta-information. In this paper, we propose a novel architecture for a smart, fully automatic multi-modal registration process. This architecture is based on a MAPE-K loop inspired by the architecture of autonomous systems.

KEYWORDS

Image registration, multi-modal image data, MAPE-K loop, medical domain.


A Novel Joint Training Approach for Reducing Task Overheads in Multi-Task Systems

Hamed Tabkhi, Department of Electrical and Computer Engineering, University of North Carolina at Charlotte

ABSTRACT

With the advent of deep learning, there has been an ever growing list of applications that Deep Convolutional Neural Networks (DCNNs) can be applied to. However, the community has still not found a generalizable way to create models that are streamlined for their unique tasks. Because of this, models are frequently over-parameterized for their application, leading to wasted memory and compute resources. While task-specific optimizations can be very powerful, they do not account for larger systems that are trying to accomplish complex goals. In these systems, just optimizing singular tasks can be particularly wasteful for computer vision systems, as early stages of a network tend to build very similar representations of the same image. To this end, we propose both a training methodology and network framework to reduce the overheads for multi-task systems. We accomplish this by reducing computation and parameters for these multiple tasks by simply sharing a common backbone and alternating task datasets at train time. With these changes, we find that our framework is capable of reducing parameters by up to 24.55% and GFLOPS by 31.44% for a small loss in network accuracy.
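
The core idea (one shared backbone, one head per task, alternating task batches at train time) can be sketched as follows; layer sizes, heads, and the alternation schedule are illustrative assumptions:

    import torch
    import torch.nn as nn

    backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten())
    heads = nn.ModuleDict({
        "classification": nn.Linear(32, 10),
        "detection": nn.Linear(32, 4),   # stand-in head; a real detection head would differ
    })
    params = list(backbone.parameters()) + list(heads.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)

    def train_step(task, images, targets, loss_fn):
        optimizer.zero_grad()
        logits = heads[task](backbone(images))   # early features are shared across all tasks
        loss = loss_fn(logits, targets)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Alternate task datasets at train time, e.g. one batch per task per iteration:
    # train_step("classification", imgs, labels, nn.CrossEntropyLoss())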

KEYWORDS

Deep Neural Networks, Computer Vision, Classification, Object Detection, Segmentation.


Improving Robustness of Age and Gender Prediction based on Custom Speech Data

Veera Vignesh Kandasamy and Anup Bera, Accenture Solutions India Pvt Ltd, India

ABSTRACT

With the increased use of human-machine interaction via voice-enabled smart devices over the years, there is a growing demand for better accuracy in speech analytics systems. Several studies show that speech analytics systems exhibit bias towards speaker demographics, such as age, gender, race, and accent. To avoid such bias, speaker demographic information can be used to prepare the training dataset for the speech analytics model. Speaker demographic information can also be used for targeted advertisement, recommendation, and forensic science. In this research we demonstrate algorithms for age and gender prediction from speech data using our custom dataset, which covers speakers from around the world with varying accents. In order to extract speaker age and gender from speech data, we also include a method for determining the appropriate length of audio file to be ingested into the system, which reduces computational time. This study also identifies the most effective padding mechanism for obtaining the best results from the input audio file. We investigated the impact of various parameters on the performance and end-to-end implementation of a real-time speaker age and gender information extraction system. Our best model achieves an RMSE of 4.1 for age prediction and an accuracy of 99.5% for gender prediction on the custom test dataset.

KEYWORDS

Age Gender prediction, Data Bias, Speech Analytics, CNN, LSTM, Wav2Vec.


The Economic Productivity of Water in Agriculture based on Ordered Weighted Average Operators

José Manuel Brotons Martínez, Economic and Financial Department, Miguel Hernández University, Elche, Spain

ABSTRACT

Since water is an essential element for agriculture, it is crucial to measure its productivity. In this regard, regions with a scarcity of water coexist with others that have an abundance of it, and whose cost is practically non-existent. So, to make the results comparable, we need to obtain a correct measurement, which will require setting a market price for water in areas where no price has yet been set. Therefore, the aim of this paper is to propose new productivity indicators based on fuzzy logic, whereby experts’ opinions about the possible price of the use of water as well as the annual variability of agricultural prices can be added. Therefore, the fuzzy willingness to pay (FWTP) and fuzzy willingness to accept (FWTA) methodology will be applied to create an artificial water market. The use of fuzzy logic will allow the uncertainty inherent in the experts’ answers to be collected. Ordered Weighted Averaging (OWA) operators and their different extensions will allow different aggregations based on the sentiment or interests reflected by the experts. These same aggregators, applied to the prices of the products at origin, will make it possible to create new indicators of the economic productivity of water. Finally, through an empirical application for a pepper crop in south-eastern Spain we can visualize the importance of the different indicators and their influence on the final results.
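
For readers new to OWA operators, the aggregation sorts the values in descending order and combines them with a weight vector; the weights and prices below are illustrative only, not figures from the study:

    def owa(values, weights):
        """Ordered Weighted Averaging: weights are applied to the values sorted in descending order."""
        assert len(values) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
        return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

    # Three hypothetical expert estimates of a water price (EUR/m3) aggregated with an
    # optimism-leaning weight vector.
    print(owa([0.30, 0.45, 0.60], [0.5, 0.3, 0.2]))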

KEYWORDS

Water Economic Productivity, OWA, Fuzzy Willingness to Pay, Fuzzy Willingness to Accept.


Information Processing in Automated Control Systems Using an Ultrasonic Sensor

Behruz Saidov and Vladimir Telezhkin, South Ural State University (National Research University), Chelyabinsk, Russia

ABSTRACT

This article explores methods of information processing in automated control systems based on ultrasonic transceivers. Modern automated control systems (ACS), including those for special purposes, widely use information processing methods that rely on digital technologies and network data exchange between various sensors. Recently, ultrasonic sensors have been widely used in various applications, in particular for transmitting and receiving information. The advantage of these systems is, on the one hand, the possibility of communication with both close and remote access, and on the other hand, a high probability of detecting and eliminating leaks of transmitted information. Ultrasonic transducers and ultrasonic sensors are devices that generate or receive ultrasonic energy. They can be divided into three broad categories: transmitters, receivers, and transceivers. Transmitters convert electrical signals to ultrasound, receivers convert ultrasound to electrical signals, and transceivers can both transmit and receive ultrasound. The purpose of this work is to process information in automated control systems using an ultrasonic sensor. To address this problem, an experiment was conducted to study transceivers at different distances. According to the results of the pilot study, it was concluded that an ultrasonic sensor can transmit and receive effectively at a frequency of 20 kHz at a distance of about 10 meters.

KEYWORDS

information processing, ultrasonic signal, automated control systems.


Agent-based Modeling and Simulation of Complex Industrial Systems: Case Study of the Dehydration Process

Noureddine Seddari1,2, Sohaib Hamioud3, Abdelghani Bouras4 Sara Kerraoui1 and Nesrine Menai1, 1LICUS Laboratory, Department of Computer Science, Université 20 Août 1955-Skikda, Skikda 21000, Algeria, 2LIRE Laboratory, Abdelhamid Mehri-Constantine 2 University, Constantine 25000, Algeria, 3LISCO Laboratory, Computer Science Department, Badji-Mokhtar University, Annaba 23000, Algeria, 4Department of Industrial Engineering, College of Engineering, Alfaisal University,Riyadh 11533, Saudi Arabia

ABSTRACT

Agent-based modeling and simulation (ABMS) is a relatively new approach to modeling systems of autonomous, interacting agents. This method is becoming more and more popular for its efficiency and simplicity, and it constitutes an established approach in the field of modeling complex systems. Indeed, contrary to other types of simulation, ABMS offers the possibility of directly representing the simulated entities, their behaviors, and their interactions without having recourse to mathematical equations. This work is a contribution in this sense; the goal is to propose an agent-based model to simulate an industrial system. The latter presents the problem of complexity, which can be mastered with the Multi-Agent Systems (MAS) approach. Our model is validated by a case study of the natural gas dehydration process, consolidated by a simulation in the multi-agent platform JADE (Java Agent DEvelopment Framework).

KEYWORDS

Agent-based modeling and simulation (ABMS); Industrial system; Multi-Agent Systems (MAS); Multi-agent platform JADE.


Design and Development of an Advanced Fire Detection and Alarm System for Integrated Buildings

Mana Saleh Al Reshan1, Zuhaibullah Shaikh2, Abdullah Alghamdi1, Asadullah Shaikh1, 1College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia, 2Kuliyah of Engineering, International Islamic University Malaysia

ABSTRACT

The rapid advancement of infrastructures such as apartments, industrial buildings, and hospitals requires a high-tech fire detection and alarm system to detect, monitor, and control any unfortunate fire threat. A modern fire alarm system should be capable of rectifying such situations in the shortest possible time to minimize damage. The objective of this research project is to propose a framework for a fire alarm and detection system covering multiple buildings situated in the same geographical space. Furthermore, a shortest-path algorithm is proposed for a series of buildings connected by a fiber optic network and controlled by a central control building. This research aims to develop a prototype design model based on a real-life system. In this framework, information is disseminated between distant fire alarm control panels in a workgroup-based network fashion to provide declaration of a system alarm using input from any building in the network, and system operation is distributed to ensure survivability. The information is sequentially transmitted from one building to another. The proposed fire detection and alarm system differs from traditional fire alarm and detection systems in its topology, as it manages a group of buildings in an optimal and efficient manner.

KEYWORDS

Fire Detection System, Fire Alarm, Fire protection System.


Flexible Genetic Algorithm for Complex Optimization Problems

Allaoua Hemmak, Department of Computer Science, Mohamed Boudiaf University, Msila, Algeria

ABSTRACT

In this work we specify, describe, and test a variant of a more powerful and flexible genetic algorithm that is better suited to tackling complex optimization problems, such as those arising in dynamic, stochastic, or robust optimization. Our main goal is to provide a new, strong tool that is more efficient in terms of both solution quality and processing time for complex NP-hard optimization problems, which have gained great importance over the past few decades in economics, management, manufacturing, and many other fields. This algorithm significantly improves the basic genetic algorithm of J. Holland in order to imitate and simulate as closely as possible the natural selection phenomenon established in the theory of C. Darwin. Thus, in the evolution process, the population should not keep a fixed size but should evolve over the generations. On the other hand, the population should contain several breeds of the species under study; therefore, many kinds of crossover can be applied randomly, such as crossovers of pure or hybrid breeds. In addition, many types of mutation are possible, such as substitution, addition, or deletion, which also happen randomly in nature. The main idea is based on the maximal projection of evolution theory onto the optimization field to tackle complex problems. We aim to design a flexible genetic algorithm by looking empirically for a good compromise when adjusting the genetic parameters on sample cases.
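
A toy sketch of the ingredients described above, applied to a small 0/1 knapsack instance, is given below; the variable population size, the three mutation types, and all parameter values are illustrative assumptions, not the algorithm's tuned settings:

    import random

    values  = [60, 100, 120, 80, 30]
    weights = [10, 20, 30, 15, 5]
    capacity = 50

    def fitness(ind):
        w = sum(wi for wi, g in zip(weights, ind) if g)
        return sum(vi for vi, g in zip(values, ind) if g) if w <= capacity else 0

    def mutate(ind):
        kind = random.choice(["substitution", "addition", "deletion"])
        i = random.randrange(len(ind))
        if kind == "substitution":
            ind[i] = 1 - ind[i]
        elif kind == "addition":
            ind[i] = 1
        else:
            ind[i] = 0
        return ind

    population = [[random.randint(0, 1) for _ in weights] for _ in range(20)]
    for gen in range(50):
        parents = sorted(population, key=fitness, reverse=True)[:10]
        children = [mutate(random.choice(parents).copy()) for _ in range(random.randint(8, 15))]
        population = parents + children      # population size drifts across generations
    print(max(map(fitness, population)))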

KEYWORDS

Flexible Genetic Algorithm, Metaheuristics, Combinatorial Optimization, Knapsack Problem.


Relative Investigation of Big Data Analytics among Different Strategies

J. Keziya, M. Sri Lakshmi and D. Sahithi, Department of Computer Science and Engineering, Sri Krishnadevaraya University College of Engineering & Technology, India

ABSTRACT

Big data analytics assesses huge volumes of data to determine hidden patterns, correlations, and other insights. Today's innovative big data analytics helps organizations connect their data and practices to identify new prospects, leading to smarter business changes, more efficient processes, higher gains, and happier customers. In this paper, we present a relative investigation of big data analytics among various strategies: on-premises, cloud, and hybrid. Our investigation focuses on hardware expenses, the implementation phase, customization, and control of data security measures. Finally, our paper concludes that organizations can gain important insight into their inner workings, which helps in decision-making and in predicting future events based on current trends.

KEYWORDS

Cloud Computing, On-Premises, Cloud and a Hybrid Strategies, Big data Analytics.


Streaming Punctuation for Long-Form Dictation with Transformers

Piyush Behre, Sharman Tan, Padma Varadharajan and Shuangyu Chang, Microsoft Corporation, United States of America

ABSTRACT

While speech recognition Word Error Rate (WER) has reached human parity for English, long-form dictation scenarios still suffer from segmentation and punctuation problems resulting from irregular pausing patterns or slow speakers. Transformer sequence tagging models are effective at capturing long bi-directional context, which is crucial for automatic punctuation. A typical Automatic Speech Recognition (ASR) production system, however, is constrained by real-time requirements, making it hard to incorporate right context when making punctuation decisions. In this paper, we propose a streaming approach for punctuation or re-punctuation of ASR output using dynamic decoding windows and measure its impact on punctuation and segmentation accuracy in a variety of scenarios. The new system tackles over-segmentation issues, improving segmentation F0.5-score by 13.9%. Streaming punctuation achieves an average BLEU-score gain of 0.66 for the downstream task of Machine Translation (MT).

KEYWORDS

automatic punctuation, automatic speech recognition, re-punctuation, speech segmentation.


Sentiment Classification of Codeswitched Text using Pre-Trained Multilingual Embeddings and Switch-point Detection

Saurav K. Aryal, Howard Prioleau and Gloria Washington, Department of Electrical Engineering and Computer Science, Howard University, Washington DC, USA

ABSTRACT

With increasing globalization and immigration, various studies have estimated that about half of the world's population is bilingual. Consequently, individuals concurrently use two or more languages or dialects in casual conversational settings. However, most research in natural language processing is focused on monolingual text. To further the work in code-switched sentiment analysis, we propose a multi-step natural language processing algorithm that identifies points of code-switching in mixed text and conducts sentiment analysis around those identified points. The proposed sentiment analysis algorithm uses semantic similarity derived from large pre-trained multilingual models with a handcrafted set of positive and negative words to determine the polarity of code-switched text. The proposed approach outperforms a comparable baseline model by 11.2% in accuracy and 11.64% in F1-score on a Spanish-English dataset. Theoretically, the proposed algorithm can be expanded for sentiment analysis of multiple languages with limited human expertise.
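
A sketch of similarity-based polarity scoring is shown below; the encoder choice and the tiny word lists are illustrative assumptions, not the paper's exact resources:

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    positive = ["good", "bueno", "feliz"]
    negative = ["bad", "malo", "triste"]

    def polarity(segment):
        """Score a text segment by its similarity to positive vs. negative seed words."""
        emb = model.encode(segment, convert_to_tensor=True)
        pos = util.cos_sim(emb, model.encode(positive, convert_to_tensor=True)).mean()
        neg = util.cos_sim(emb, model.encode(negative, convert_to_tensor=True)).mean()
        return float(pos - neg)     # > 0 leans positive, < 0 leans negative

    # Segments on either side of a detected switch point would be scored separately.
    print(polarity("la película was amazing"))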

KEYWORDS

Code-switching, Sentiment Analysis, Multilingual Embeddings, Code-switch points, Semantic Similarity.


A Framework to Protect IoT Devices from Enslavement in a Home Environment

Khalid Al-Begain1, Murad Khan1, Basil Alothman1, Chibli Joumaa1, Abdulrahman Serhan1 and Ibrahim Rashed2, 1Kuwait College of Science and Technology, Kuwait, 2Department of Computer Engineering Kuwait University, Kuwait

ABSTRACT

The Internet of Things (IoT) mainly consists of devices with limited processing capabilities and memory. These devices could be easily infected with malicious code and can be used as botnets. In this regard, we propose a framework to detect and prevent botnet activities in an IoT network. We first describe the working mechanism of how an attacker infects an IoT device and then spreads the infection to the entire network. Secondly, we propose a set of mechanisms consisting of detection, identifying the abnormal traffic generated from IoT devices using filtering and screening mechanisms, and publishing the abnormal traffic patterns to the rest of the home routers on the network. Further, the proposed approach is lightweight and requires fewer computing capabilities for installation on the home routers. In the future, we will test the proposed system on real hardware, and the results will be presented to identify the abnormal traffic generated from malicious IoT devices.

KEYWORDS

Botnet, IoT, Malicious Activities, Abnormal Traffic Detection.


Problems of the Multifunctional Morphemes

Safia Zivingi, Applied Linguistics, Damascus University, Syria

ABSTRACT


KEYWORDS

quantitatively, qualitatively, homonymy, polysemy, synonymy, correspond, equivalent & symmetry.