2nd International Conference on Machine Learning Techniques (MLTEC 2021)

December 23 ~ 24, 2021, Sydney, Australia

Accepted Papers

Arabic Speech Recognition Model based on Deep Learning

Rafik Amari1,2 and Mounir Zrigui1,2, 1University of Monastir, Faculty of Science of Monastir, Tunisia, 2University of Monastir, Research Laboratory in Algebra, Numbers Theory and Intelligent Systems RLANTIS, Tunisia

ABSTRACT

Speech recognition is considered the main task of the speech processing field. In this paper, we study the problem of discontinuous speech recognition (isolated words) for the Arabic language. Two deep learning architectures are compared in this work: the first is based on CNN networks, and the second combines CNN and LSTM networks. The “Arabic Speech Corpus for Isolated Words” (ASD) database is used for all experiments. The results demonstrate the advantage of the CNN-LSTM approach over the CNN approach.
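
The CNN-then-LSTM pipeline the abstract compares can be sketched in miniature: a 1-D convolution extracts local patterns from a sequence of acoustic feature values, and a recurrent step summarises the convolved sequence into a single utterance representation. All sizes and numbers here are illustrative, and a plain tanh cell stands in for the LSTM; this is not the paper's architecture.

```python
import math

def conv1d(frames, kernel):
    """Valid 1-D convolution over a list of scalar feature frames."""
    k = len(kernel)
    return [sum(frames[i + j] * kernel[j] for j in range(k))
            for i in range(len(frames) - k + 1)]

def simple_recurrent(xs, w_in=0.5, w_rec=0.5):
    """Bare-bones recurrent unit (a tanh RNN cell, standing in for an LSTM)."""
    h = 0.0
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)
    return h  # final hidden state summarises the whole sequence

frames = [0.1, 0.4, 0.35, 0.9, 0.2, 0.05]            # toy acoustic features
features = conv1d(frames, kernel=[0.25, 0.5, 0.25])  # CNN stage
utterance_vec = simple_recurrent(features)           # recurrent stage
```

In a real system the final hidden state would feed a softmax over the isolated-word vocabulary; here it is just a bounded scalar.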

KEYWORDS

Deep Learning, CNN, LSTM, Arabic Speech Corpus, Speech Recognition.


Relation Extraction between Biomedical Entities from Literature using Semi-Supervised Learning Approach

Saranya M1, Arockia Xavier Annie R2 and Geetha T V3, 1Computer Science and Engineering, CEG, Anna University, India, 2Assistant Professor, Computer Science and Engineering, CEG, Anna University, Chennai, India, 3UGC-BSR Faculty Fellow, Computer Science and Engineering, former Dean CEG, Anna University, Chennai, India

ABSTRACT

Nowadays, people around the world are infected by many new diseases. Developing or discovering a new drug for a newly discovered disease is an expensive and time-consuming process, and much of this cost could be avoided if already existing resources could be reused. To identify candidates among available drugs, we need to perform text mining of a large-scale literature repository to extract the relations between chemicals, targets, and diseases. Computational approaches for identifying relationships between entities in the biomedical domain are emerging as an active area of research for drug discovery, since manual curation requires considerable manpower. Currently, computational approaches for extracting biomedical relations such as drug-gene and gene-disease relationships are limited, because constructing drug-gene and gene-disease associations from unstructured biomedical documents is very hard. In this work, we propose a pattern-based bootstrapping method, a semi-supervised learning algorithm, to extract direct relations between drugs, genes, and diseases from biomedical documents. These direct relationships are used to infer indirect relationships between entities such as drugs and diseases. The indirect relationships are then used to determine new candidates for drug repositioning, which in turn will reduce development time and patient risk.
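
One round of pattern-based bootstrapping can be illustrated in a few lines: seed (drug, gene) pairs yield textual patterns from sentences that mention both entities, and those patterns then extract new pairs. The sentences and seeds below are invented toy data, not the authors' corpus, and a real system would iterate and score patterns rather than trust them blindly.

```python
sentences = [
    "imatinib inhibits BCR-ABL in leukemia cells",
    "gefitinib inhibits EGFR in lung tissue",
    "aspirin inhibits COX2 in platelets",
]
seeds = {("imatinib", "BCR-ABL")}

def extract_patterns(seed_pairs, corpus):
    """Collect the word context between known entity pairs."""
    patterns = set()
    for drug, gene in seed_pairs:
        for s in corpus:
            words = s.split()
            if drug in words and gene in words:
                i, j = words.index(drug), words.index(gene)
                if i < j:
                    patterns.add(" ".join(words[i + 1:j]))
    return patterns

def apply_patterns(patterns, corpus):
    """Find new (entity, entity) pairs surrounding a known pattern."""
    pairs = set()
    for s in corpus:
        words = s.split()
        for p in patterns:
            plen = len(p.split())
            for i in range(len(words) - plen - 1):
                if " ".join(words[i + 1:i + 1 + plen]) == p:
                    pairs.add((words[i], words[i + 1 + plen]))
    return pairs

patterns = extract_patterns(seeds, sentences)    # learned: {"inhibits"}
new_pairs = apply_patterns(patterns, sentences)  # one bootstrapping round
```

Chaining the extracted drug-gene pairs with gene-disease pairs would give the indirect drug-disease candidates the abstract describes.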

KEYWORDS

Text Mining, Drug Discovery, Drug Repositioning, Bootstrapping, Machine Learning.


Representation Learning and Similarity of Legal Judgements using Citation Networks

Harshit Jain and Naveen Pundir, Department of Computer Science and Engineering, IIT Kanpur, India

ABSTRACT

India and many other countries such as the UK, Australia, and Canada follow the ‘common law system’, which gives substantial importance to prior related cases in determining the outcome of the current case. Better similarity methods can help find earlier similar cases, which helps lawyers searching for precedents. Prior approaches to computing the similarity of legal judgements use a basic representation, either bag-of-words or a dense embedding learned using only the words present in the document. They, however, either neglect or do not emphasize the vital ‘legal’ information in the judgements, e.g. citations to prior cases, act and article numbers or names, etc. In this paper, we propose a novel approach to learn embeddings of legal documents using the citation network of documents. Experimental results demonstrate that the learned embedding is on par with state-of-the-art methods for document similarity on a standard legal dataset.
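
The intuition behind citation-based similarity can be made concrete with a much simpler stand-in for the learned embeddings: represent each judgement by the set of prior cases it cites and score pairs by citation overlap (Jaccard similarity). The case and precedent identifiers below are invented.

```python
# Each judgement is represented only by its outgoing citations.
citations = {
    "case_A": {"prec_1", "prec_2", "prec_3"},
    "case_B": {"prec_2", "prec_3", "prec_4"},
    "case_C": {"prec_9"},
}

def jaccard(a, b):
    """Overlap of two citation sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

sim_ab = jaccard(citations["case_A"], citations["case_B"])  # shared precedents
sim_ac = jaccard(citations["case_A"], citations["case_C"])  # no overlap
```

A graph-embedding approach generalises this: two cases can score as similar even with no directly shared citations, as long as they sit close together in the citation network.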

KEYWORDS

Representation Learning, Similarity, Citation Network, Graph Embedding, Legal Judgements.


An Enhanced Machine Learning Topic Classification Methodology for Cybersecurity

Elijah Pelofske1, Lorie M. Liebrock2, and Vincent Urias3, 1Cybersecurity Centers, New Mexico Institute of Mining and Technology, Socorro, New Mexico, USA, 2Cybersecurity Centers, New Mexico Institute of Mining and Technology, Socorro, New Mexico, USA, 3Sandia National Laboratories, Albuquerque, New Mexico, USA

ABSTRACT

In this research, we use user-defined labels from three internet text sources (Reddit, Stackexchange, Arxiv) to train 21 different machine learning models for the topic classification task of detecting cybersecurity discussions in natural text. We analyze the false positive and false negative rates of each of the 21 models in a cross-validation experiment. We then present a Cybersecurity Topic Classification (CTC) tool, which takes the majority vote of the 21 trained machine learning models as the decision mechanism for detecting cybersecurity-related text. We show that the majority vote mechanism of the CTC tool provides lower false negative and false positive rates on average than any of the 21 individual models, and that the CTC tool scales to hundreds of thousands of documents with a wall clock time on the order of hours.
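
The decision rule itself is simple to state in code: each trained model casts a binary vote (1 = cybersecurity-related) and the majority decides. The vote list below is a stand-in for real classifier outputs.

```python
def majority_vote(votes):
    """Return 1 if strictly more than half the models vote 1, else 0."""
    return 1 if sum(votes) * 2 > len(votes) else 0

# 12 of 21 hypothetical models flag a document as cybersecurity-related.
votes_21_models = [1] * 12 + [0] * 9
decision = majority_vote(votes_21_models)
```

With an odd number of voters (21) a strict majority always exists, so no tie-breaking rule is needed.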

KEYWORDS

cybersecurity, topic modeling, text classification, machine learning, neural networks, natural language processing, Stackexchange, Reddit, Arxiv, social media.


Sign Language Recognition for Sentence Level Continuous Signings

Ishika Godage, Ruvan Weerasignhe and Damitha Sandaruwan, University of Colombo School of Computing, Colombo 07, Sri Lanka

ABSTRACT

There is no doubt that communication plays a vital role in human life. There is, however, a significant population of hearing-impaired people who use non-verbal techniques for communication, which a majority of people cannot understand. The predominant of these techniques is sign language, the main communication protocol among hearing-impaired people. In this research, we propose a method to bridge the communication gap between hearing-impaired people and others by translating signed gestures into text. Most existing solutions, based on technologies such as Kinect, Leap Motion, computer vision, EMG, and IMU, try to recognize and translate individual signs. The few approaches to sentence-level sign language recognition are not user-friendly or even practical owing to the devices they use. The proposed system is designed to give the user full freedom to sign an uninterrupted full sentence at a time. For this purpose, we employ two Myo armbands for gesture capturing. Using signal processing and supervised learning based on a vocabulary of 49 words and 346 sentences for training with a single signer, we achieved 75-80% word-level accuracy and 45-50% sentence-level accuracy using gestural (EMG) and spatial (IMU) features in our signer-dependent experiment.

KEYWORDS

Sign Language, Word-Level Recognition, Sentence-Level Recognition, Myo Armband, EMG, IMU, Supervised Learning.


Para-Social Interaction Analysis of Valentino Virtual Spokesperson based on Weibo Data and Search Popularity

Dandan Yu1, Wuying Liu1, 2, 1Shandong Key Laboratory of Language Resources Development and Application, Ludong University, 264025 Yantai, Shandong, China, 2Laboratory of Language Engineering and Computing, Guangdong University of Foreign Studies, 510420 Guangzhou, Guangdong, China

ABSTRACT

Brand image is an important source of market competitiveness for luxury brands. Luxury brands increasingly tend to adopt virtual spokespersons to reduce the negative impact of celebrity scandals on their economic performance. In this context, the article describes the development trends of virtual spokespersons in recent years and their specific applications in luxury brand advertising, and uses Valentino's Weibo interaction data to explore whether virtual spokespersons affect the para-social interaction relationship between luxury brands and their consumers. Finally, it discusses the problems between luxury brands' virtual spokespersons and para-social interaction, and puts forward corresponding development suggestions.

KEYWORDS

Luxury Brands, Virtual Spokesperson, Para-social Interaction, Weibo Data.


A Deep Learning Approach to Integrate Human-Level Understanding in a Chatbot

Afia Fairoose Abedin, Amirul Islam Al Mamun, Rownak Jahan Nowrin, Amitabha Chakrabarty, Moin Mostakim1 and Sudip Kumar Naskar2, 1Department of Computer Science and Engineering, Brac University, Dhaka, Bangladesh, 2Department of Computer Science and Engineering, Jadavpur University, Kolkata, India

ABSTRACT

Chatbots, today, are used as virtual assistants to reduce human workload. Unlike humans, chatbots can serve multiple customers at a time, are available 24/7, and reply in a fraction of a second. Though chatbots perform well in task-oriented activities, in most cases they fail to understand personalised opinions, statements, or even queries, which later impacts the organization through poor service management. This lack of understanding discourages humans from continuing conversations with bots, and chatbots often give absurd responses when they are unable to interpret a user's text accurately. The gap in understanding between users and chatbots can be reduced if organizations use chatbots more efficiently and improve the quality of their products and services by extracting client reviews from conversations. Thus, in our research we incorporated the key elements that are necessary for a chatbot to analyse and understand an input text precisely and accurately. We performed sentiment analysis, emotion detection, intent classification, and named-entity recognition using deep learning to develop chatbots with humanistic understanding and intelligence. The efficiency of our approach is demonstrated by a detailed analysis.

KEYWORDS

Natural Language Processing, Humanistic, Deep learning, Sentiment analysis, Emotion detection, Intent classification, Named-entity recognition.


My Mind is a Prison: A Boosted Deep Learning Approach to Detect the Rise in Depression Since Covid-19 using a Stacked Bi-LSTM CatBoost Model

Shahana Nandy1 and Vishrut Kumar2, 1Department of Electrical and Electronics Engineering, National Institute of Technology, Warangal, India, 2Department of Information and Communication Technology, Manipal Institute of Technology, India

ABSTRACT

The Covid-19 pandemic has significantly altered our way of life. In-person social interactions are being steadily replaced with virtual connections and remote interactions, and social media platforms such as Facebook, Twitter, and Instagram have become the primary medium of communication. However, being relegated to a solely online presence has had a major impact on the mental health of users since the onset of the pandemic. The present study aims to identify depressed Twitter users by analyzing their tweets. We propose a deep learning model which stacks a bidirectional LSTM layer with a CatBoost layer to classify tweets and detect depression. The results show that the proposed model outperforms standard machine learning approaches to classification and that there has been a definite rise in depression since the beginning of the pandemic. The study's primary contributions are the novel deep learning model and its ability to detect depression.

KEYWORDS

Sentiment analysis, Text analysis, Long Short Term Memory, Catboost, COVID-19.


Extraction of Linguistic Speech Patterns of Japanese Fictional Characters using Subword Units

Mika Kishino1 and Kanako Komiya2, 1Ibaraki University, Ibaraki, Japan, 2Tokyo University of Agriculture and Technology, Tokyo, Japan

ABSTRACT

The linguistic speech patterns that characterize the lines of Japanese anime and game characters were extracted and analyzed in this study. Conventional morphological analyzers, such as MeCab, segment words with high performance, but they are unable to segment the broken expressions or utterance endings, not listed in the dictionary, that often appear in such lines. To overcome this challenge, we propose segmenting the lines of Japanese anime and game characters using subword units, originally proposed mainly for deep learning, and extracting frequently occurring strings to obtain expressions that characterize their utterances. We analyzed the subword units weighted by TF-IDF according to gender, age, and individual anime character, and show that they are linguistic speech patterns specific to each feature. Additionally, a classification experiment shows that a model with subword units outperforms one with the conventional method.
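
The TF-IDF weighting step can be sketched in plain Python: tokens frequent in one character's lines but rare across characters get high weight. The toy "lines" below are invented stand-ins, split on whitespace; in the paper the tokens would come from a subword segmenter.

```python
import math

lines_per_character = {
    "char_A": ["da yo", "da ne", "so da yo"],
    "char_B": ["desu wa", "desu no", "so desu"],
}

def tf_idf(docs):
    """docs: name -> token list. Returns name -> {token: tf-idf weight}."""
    n = len(docs)
    df = {}                                  # document frequency per token
    for toks in docs.values():
        for t in set(toks):
            df[t] = df.get(t, 0) + 1
    return {name: {t: (toks.count(t) / len(toks)) * math.log(n / df[t])
                   for t in set(toks)}
            for name, toks in docs.items()}

docs = {k: " ".join(v).split() for k, v in lines_per_character.items()}
weights = tf_idf(docs)
```

Tokens shared by every character (like "so" here) receive zero weight, so what survives is exactly the character-specific speech pattern.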

KEYWORDS

Pattern extraction, Characterization of fictional characters, Subword units, Linguistic speech patterns, word segmentation.


Morphological Analysis of Japanese Hiragana Sentences using the BI-LSTM CRF Model

Jun Izutsu1 and Kanako Komiya2, 1Ibaraki University, 2Tokyo University of Agriculture and Technology

ABSTRACT

This study proposes a method to develop neural morphological analyzers for Japanese Hiragana sentences using the Bi-LSTM CRF model. Morphological analysis is a technique that divides text data into words and assigns information such as parts of speech. In Japanese natural language processing systems, this technique plays an essential role in downstream applications because the Japanese language does not have delimiters between words. Hiragana is a type of Japanese phonogramic character used in texts for children or people who cannot read Chinese characters. Morphological analysis of Hiragana sentences is more difficult than that of ordinary Japanese sentences because less information is available for segmentation. For morphological analysis of Hiragana sentences, we demonstrate the effectiveness of fine-tuning a model based on ordinary Japanese text and examine the influence of morphological-analysis training data drawn from texts of various genres.

KEYWORDS

Morphological analysis, Hiragana texts, Bi-LSTM CRF model, Fine-tuning, Domain adaptation.


An Adaptive and Interactive Educational Game Platform for English Learning Enhancement using AI and Chatbot Techniques

Yichen Liu1, Jonathan Sahagun2 and Yu Sun3, 1Shen Wai International School, 29, Baishi 3rd Road Nanshan Shenzhen China 518053, 2California State Polytechnic University, Los Angeles, CA, 91748, 3California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

As our world becomes more globalized, learning new languages will be an essential skill for communicating across countries and cultures and a means to create better opportunities for oneself [4]. This holds especially true for the English language [5]. Since the rise of smartphones, many apps such as Babbel and Duolingo have been created to teach new languages, making language learning cheap and approachable by allowing users to practice briefly whenever they have a free moment. This is where we believe those apps fail: they do not capture the interest or attention of users for long enough for them to learn meaningfully. Our approach is to make a video game that immerses players in a world where they practice English verbally with NPCs and engage with them in scenarios they may encounter in the real world [6]. Our approach uses chatbot AI to engage users in realistic natural conversation, together with speech-to-text technology so that users practice speaking English [7].

KEYWORDS

Machine Learning, NLP, Data Mining, Game Development.


A Personality Prediction Method of Weibo Users based on Personality Lexicon

Feng Yuanyuan and Liu Kejian, Department of Computer and Software Engineering, Xihua University, Chengdu, China

ABSTRACT

Personality is a dominant factor affecting human behavior. With the rise of social network platforms, increasing attention has been paid to predicting personality traits by analyzing users' behavior information, while little attention has been paid to text contents, making it insufficient to explain personality from the perspective of texts. Therefore, in this paper, we propose a personality prediction method based on a personality lexicon. First, we extract keywords from texts and use word embedding techniques to construct a Chinese personality lexicon. Based on the lexicon, we analyze the correlation between personality traits and different semantic categories of words, and extract the semantic features of texts posted by Weibo users to construct personality prediction models using classification algorithms. The final experiments show that, compared with SC-LIWC, the personality lexicon constructed in this paper achieves better performance.

KEYWORDS

Personality Lexicon, Machine Learning, Personality Prediction.


ANEC: Artificial Named Entity Classifier based on BI-LSTM for an AI-based Business Analyst

Taaniya Arora, Neha Prabhugaonkar, Ganesh Subramanian, Kathy Leake, Crux Intelligence, New York City, New York, USA

ABSTRACT

Business users across enterprises today rely on reports and dashboards created by IT organizations to better understand the dynamics of their business and get insights into the data. In many cases, these users are underserved and do not possess the technical skillset to query the data source for the information they need; they need to access information in the most natural way possible. AI-based business analysts are going to change the future of business analytics and business intelligence by providing a natural language interface between the user and the data. This interface can understand ambiguous questions from users, infer their intent, and convert them into a database query. One important element of an AI-based business analyst is interpreting a natural language question, which requires identifying the key business entities within the question and the relationships between them to generate insights. The Artificial Named Entity Classifier (ANEC) takes a huge step in that direction by not only identifying but also classifying entities with the help of the sequence-recognising prowess of BiLSTMs.

KEYWORDS

Named Entity Recognition System, Natural Language Processing, Business Analytics, Question Answering Systems, Bi-directional LSTMs.


Fake or Genuine? Contextualised Text Representation for Fake Review Detection

Rami Mohawesh1, Shuxiang Xu2, Matthew Springer3, Muna Al-Hawawreh4, Sumbal Maqsood5, 1School of Information and Communications Technology, University of Tasmania, Tasmania, Australia, 2School of Information and Communications Technology, University of Tasmania, Tasmania, Australia, 3School of Information and Communications Technology, University of Tasmania, Tasmania, Australia, 4School of Engineering and Information Technology, University of New South Wales, Australian Defence Force Academy (ADFA), Canberra, Australia, 5School of Information and Communications Technology, University of Tasmania, Tasmania, Australia

ABSTRACT

Online reviews have a significant influence on customers' decisions to purchase products or services. However, fake reviews can mislead both consumers and companies. Several models have been developed to detect fake reviews using machine learning approaches, but many have limitations that result in low accuracy in distinguishing fake from genuine reviews: they focus only on linguistic features and fail to capture the semantic meaning of the reviews. To deal with this, this paper proposes a new ensemble model that employs transformer architecture to discover the hidden patterns in a sequence of fake reviews and detect them precisely. The proposed approach combines three transformer models to improve the robustness of fake and genuine behaviour profiling and modelling to detect fake reviews. Experimental results on semi-real benchmark datasets showed the superiority of the proposed model over state-of-the-art models.

KEYWORDS

Fake review, detection, Transformer, Ensemble, Deep learning.


Text Summarization without Pronoun Ambiguity using Tree based Approach

Anishik Gupta1 and Rakesh R. Warier2, 1Department of Electrical and Electronics Engineering, BITS Pilani KK Birla Goa Campus, India, 2Assistant Professor, Department of Electrical and Electronics Engineering, BITS Pilani KK Birla Goa Campus, India

ABSTRACT

Extractive summarization can produce summaries with high grammatical correctness; however, the selected sentences may contain pronouns that refer to nouns not present in the summary, making the summary ambiguous. Abstractive summarization, on the other hand, can be computationally heavy and may produce grammatical mistakes. We propose a text summarization algorithm that reduces pronoun ambiguity without adding a significant computational burden. The proposed algorithm removes pronoun ambiguity using coreference resolution. It then finds important words in each sentence using dependency parsing, and uses the word vectors of those words to measure similarity between sentences. A tree-based approach then finds and removes sentences with similar meanings to shorten the summary. The final step fixes the additional pronouns introduced by coreference resolution. The algorithm was applied to examples from the CNN/Daily Mail corpus, and the ROUGE score was computed and found to be satisfactory. A human evaluation of summary quality showed that the algorithm produces summaries with very good grammatical correctness and is effective in rejecting unnecessary information.
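
The core idea of the first step can be shown in a toy form: resolve pronouns to their antecedents before selecting sentences, so an extracted sentence stays unambiguous on its own. The one-entry "resolver" below is a trivial stand-in for a real coreference system such as the one the paper uses.

```python
def resolve_pronouns(sentences, antecedents):
    """antecedents maps (sentence_index, pronoun) -> resolved noun."""
    resolved = []
    for i, s in enumerate(sentences):
        words = [antecedents.get((i, w.lower()), w) for w in s.split()]
        resolved.append(" ".join(words))
    return resolved

doc = ["Alice filed the report.", "She missed the deadline."]
coref = {(1, "she"): "Alice"}   # hypothetical resolver output
clean = resolve_pronouns(doc, coref)
summary = clean[1]  # extracting sentence 2 alone is no longer ambiguous
```

Without this step, an extractive summarizer that picked only the second sentence would leave the reader guessing who "she" is.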

KEYWORDS

Text Summarization, Pronoun Ambiguity, Coreference Resolution, Tree Based Algorithm, Dependency Parsing, Word Vector.


A New Strategy for the Design of Coronary Stents Obtained by Ultrasonic-Microcasting: A Finite Element Analysis Approach

I.V.Gomes1,2, H.Puga2 and J.L.Alves2, 1MIT Portugal, Guimarães, Portugal, 2CMEMS – Center for Microelectromechanical Systems University of Minho, Portugal

ABSTRACT

Ultrasonic-microcasting is a manufacturing technique that opens the possibility of obtaining biodegradable magnesium stents through a faster and cheaper process, while also offering important features such as the production of devices with cross-section variation. In this way, it may be feasible to tailor the expansion profile of the stent. Even so, there are still geometric constraints, essentially associated with the minimum thickness the process allows, currently about 0.20 mm. Moreover, the nature of the material used, a magnesium alloy, also demands thicker structures, which may be harmful to stent performance. In this work, a numerical model for stent shape optimization based on cross-section variation is presented, aiming to reduce the dogboning phenomenon observed in this type of device. The model operates on a set of optimization variables and limiting values of the design and optimization parameters, defined considering both the advantages and constraints of the ultrasonic-microcasting process. The model suggests an optimized geometry that, despite its greater thickness, has a performance comparable to that of the most popular stent models currently in use.

KEYWORDS

Stent, Optimization, Ultrasonic-Microcasting, Dogboning.


High Multiplicity Strip Packing with Three Rectangle Types

Andrew Bloch-Hansen, Roberto Solis-Oba, and Andy Yu, Department of Computer Science, Western University, Ontario, Canada

ABSTRACT

The two-dimensional strip packing problem consists of packing in a rectangular strip of width 1 and minimum height a set of n rectangles, where each rectangle has width 0 < w ≤ 1 and height 0 < h ≤ 1. We consider the high-multiplicity version of the problem in which there are only K different types of rectangles. For the case when K = 3, we give an algorithm which provides a solution requiring at most height 3/2 + ε plus the height of an optimal solution, where ε is any positive constant.
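
The paper's 3/2 + ε algorithm is LP-based and beyond a short sketch, but the problem itself is easy to make concrete with the classic Next-Fit Decreasing Height (NFDH) shelf heuristic, a simple baseline for the same setting: rectangles given as (width, height) pairs with multiplicities, strip width 1, minimise total height. The instance below is invented.

```python
def nfdh(rects):
    """Next-Fit Decreasing Height shelf packing; returns total strip height."""
    rects = sorted(rects, key=lambda r: r[1], reverse=True)  # by height, desc
    total_height = 0.0
    shelf_height = 0.0   # height of the tallest rectangle on the open shelf
    shelf_used = 0.0     # width already occupied on the open shelf
    for w, h in rects:
        if shelf_used + w > 1.0:       # does not fit: close shelf, open a new one
            total_height += shelf_height
            shelf_height, shelf_used = 0.0, 0.0
        shelf_used += w
        shelf_height = max(shelf_height, h)
    return total_height + shelf_height

# Three rectangle types given with multiplicities, as in the high-multiplicity setting.
instance = [(0.5, 0.4)] * 2 + [(0.25, 0.2)] * 4 + [(1.0, 0.3)]
height = nfdh(instance)
```

On this instance every shelf happens to be fully used, so NFDH matches the area lower bound of 0.9; in general it only guarantees a constant-factor approximation, which is why stronger LP-based algorithms such as the paper's are of interest.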

KEYWORDS

LP-relaxation, two-dimensional strip packing, high multiplicity, approximation algorithm.


Railway Process Automation with the Aid of IoT and Image Processing to Ensure Human Safety

Vedasingha K. S, K. K. M. T. Perera, Akalanka H. W. I, Hathurusinghe K. I, Nelum Chathuranga Amarasena and Nalaka R. Dissanayake, Department of Information Technology, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka

ABSTRACT

Railways provide one of the most convenient and economically beneficial modes of transportation, and rail has long been among the most popular transportation methods. Analysis of past data reveals a considerable number of railway accidents, causing damage not only to precious lives but also to national economies. There are major issues that need to be addressed in the railways of South Asian countries, since they fall into the developing category. The goal of this research is to minimize railway level crossing accidents by developing a “Railway Process Automation System”, as there are high-risk areas that are prone to accidents where safety is of the utmost significance. This paper describes the implementation methodology and the success of the study. The main purpose of the system is to ensure human safety using Internet of Things (IoT) and image processing techniques. The system can detect the current location of the train and close the railway gate automatically; it can also perform this process through a decision-making system that uses past data, with both processes working in parallel. If the system fails to close the railway gate due to a technical or network failure, the proposed system can identify the current location and close the gate through the decision-making system. The proposed system introduces two further features to reduce the causes of railway accidents: railway track crack detection and motion detection, which play a significant role in reducing the risk of railway accidents. Moreover, the system is capable of detecting rule violations at a level crossing using sensors. The proposed system is implemented as a prototype and tested in real-world scenarios, achieving above 90% accuracy.

KEYWORDS

Crack Detection, Decision-Making, Image Processing, Internet of Things, Motion Detection, Prototype, Sensors.


An NLP Approach to Predict and Suggest Next Word in Urdu Typing

Muhammad Hassan1, Dr. Saman Hina2 and Dr. Saad Ahmed3, 1Department of Computer Sciences, NED University of Engineering and Technology, Karachi, Pakistan, 2Department of Computer Sciences, NED University of Engineering and Technology, Karachi, Pakistan, 3Department of Computer Sciences, Iqra University, Karachi, Pakistan

ABSTRACT

This paper describes research and development addressing the low speed of Urdu typing and the high demand for it, which makes Urdu typing more expensive and less available. Because the Urdu language has 35+ letters while international ISO standard keyboards are designed around the 25+ letters of the English alphabet, roughly 10 Urdu letters require pressing and holding the SHIFT key, which wastes time and slows typing; we attempt to solve this problem while keeping the existing keyboard standard as it is. This paper builds on the author's previous research [12] on word prediction and suggestion in the Urdu language (UL) based on a stochastic model: a Hidden Markov Model is used to predict the next word, while a unigram model is used to suggest the current word and the next upcoming word, following an N-gram model with N = 2. The main contribution of this paper is POS tagging: each suggestion and prediction is also based on tagged words, using a dataset of thousands of tag combinations whose frequencies of occurrence are measured on test data. A tool implementing this concept for Urdu was developed and tested by regular and new Urdu content writers to measure improvements in their typing speed. In short, we built programs that let you type less and choose more.
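
The N = 2 (bigram) prediction the paper builds on reduces to counting adjacent word pairs and suggesting the most frequent successor. The tiny English training corpus below is a stand-in for the authors' Urdu data, and this sketch omits the POS-tagging layer that is the paper's main contribution.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often every other word follows it."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    return follows[word].most_common(1)[0][0] if follows[word] else None

model = train_bigrams(["i like tea", "i like rice", "i drink tea"])
next_word = predict_next(model, "i")   # "like" follows "i" twice, "drink" once
```

A smoothed model or an interpolation with unigram counts (both mentioned in the keywords) would handle unseen words more gracefully than returning None.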

KEYWORDS

Urdu, Urdu Keyboard, Urdu Suggester, Urdu Predictor, Rule-Based, Statistical Based, Dictionary Based Techniques, Word Prediction, Natural Language Processing, Stochastic Model, Unigram, Interpolation, Markov Model, POS Tagging, Part-Of-Speech.


Disease Diagnosis by Nadi Analysis Using Traditional Ayurvedic Methods with Portable Nadi Pariksha Device & Web Application

Heshani Kadurugoda Bandara, Nethun Kawitha Madanayake, Kasun Devaka, Geethan Madurange, Suranjini Silva and Pradeep Abeygunawardhana

ABSTRACT

As the world’s population grows, so does the number of people who suffer from asthma, gastritis, and hypotension. In the future, there will be a need for systems providing early human health diagnostics. There are various methods for examining a patient’s pulse in the medical field today; however, their fundamental concepts and clinical procedures also evolved from Ayurvedic treatments. The purpose of this study is to develop a minimally invasive device and questionnaire to aid doctors in diagnosing diseases based on traditional Ayurvedic methods. Nadi Pariksha is an old medical technique, rooted in Ayurveda, that dates back to India and China. In Ayurveda, in addition to examining the pulse, the patient is diagnosed by asking questions and examining the cough, eyes, and face. Once the disease is diagnosed with these two methods, a prescription is generated for the diagnosed disease, and the nearest hospitals and pharmacies are displayed to the patient for further convenience.

KEYWORDS

Nadi Pariksha, Vata, Pitta, Kapha, Machine Learning, Artificial Neural Network, Asthma, Gastritis, Hypertension.


Voice Information Retrieval in Collaborative Information Seeking

Sulaiman Adesegun Kukoyi, O.F.W. Onifade, Kamorudeen A. Amuda, Department of Computer Science, University of Ibadan, Nigeria

ABSTRACT

Voice information retrieval is a technique that gives an information retrieval system the capacity to transcribe spoken queries and use the text output for information search. Collaborative information seeking (CIS) is a field of research that studies the situations, motivations, and methods of people working in collaborative groups on information seeking projects, as well as building systems to support such activities. Humans find it easier to communicate and express ideas via speech, yet existing mainstream voice search systems, such as Google's, do not support collaborative search. In our system, spoken queries pass through an ASR for feature extraction using MFCC, with an HMM and, specifically, the Viterbi algorithm for pattern matching. The output of the ASR is then passed as input to the CIS system, and the results are filtered to produce an aggregate result. Simulation results show that our model achieved 81.25% transcription accuracy.
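
The Viterbi decoding step used for pattern matching in the ASR stage has a compact textbook form: dynamic programming over HMM states, keeping the best-scoring predecessor at each step. The states, observations, and probabilities below are toy values, not the paper's acoustic model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state path for an observation sequence."""
    v = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        v.append({})
        back.append({})
        for s in states:
            prob, prev = max((v[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            v[t][s], back[t][s] = prob, prev
    last = max(v[-1], key=v[-1].get)        # best final state
    path = [last]
    for t in range(len(obs) - 1, 0, -1):    # follow back-pointers
        path.insert(0, back[t][path[0]])
    return path

states = ["sil", "speech"]
start = {"sil": 0.8, "speech": 0.2}
trans = {"sil": {"sil": 0.6, "speech": 0.4}, "speech": {"sil": 0.3, "speech": 0.7}}
emit = {"sil": {"low": 0.9, "high": 0.1}, "speech": {"low": 0.2, "high": 0.8}}
best = viterbi(["low", "high", "high"], states, start, trans, emit)
```

In a real ASR the observations would be MFCC-derived symbols or densities and the states sub-phone units, but the recursion is the same.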

KEYWORDS

Information Retrieval, Collaborative Search, Collaborative Information Seeking, Automatic Speech Recognition, Feature Extraction, MFCC, Hidden Markov Model, Acoustic Model, Viterbi Algorithm.


Earthquake Magnitude and B-Value Prediction Model using Extreme Learning Machine

Gunbir Singh Baveja1 and Jaspreet Singh2, 1Delhi Public School, Dwarka, New Delhi, Delhi, India, 2GD Goenka University, Sohna, Haryana, India

ABSTRACT

Earthquake prediction has been a challenging research area for many decades, in which the future occurrence of this highly uncertain calamity is forecast. In this paper, several parametric and non-parametric features were calculated, where the non-parametric features were derived from the parametric ones. Eight seismic features were calculated using the Gutenberg-Richter law, total recurrence time, and seismic energy release. Additionally, criteria such as Maximum Relevance and Minimum Redundancy were applied to choose the pertinent features. These features, along with others, were used as input for an Extreme Learning Machine (ELM) regression model. Magnitude and time data spanning five decades from the Assam-Guwahati region were used to build this model for magnitude prediction. Testing accuracy and testing speed were computed, taking Root Mean Squared Error (RMSE) as the parameter for evaluating the model. As confirmed by the results, ELM shows better scalability, with much faster training and testing speed (up to a thousand times faster) than traditional Support Vector Machines. The testing RMSE came out to be . To further test the model’s robustness, magnitude-time data from California were used to calculate the seismic indicators, which were fed into the neural network (ELM) and tested on the Assam-Guwahati region. The model proves to be successful and can be implemented in early warning systems, which continue to be a major part of disaster response and management.

KEYWORDS

Earthquake Prediction, Machine Learning, Extreme Learning Machine, Seismological Features, Gutenberg-Richter Law, Support Vector Machine.
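The core idea behind an ELM's speed — random, fixed hidden-layer weights with a single closed-form solve for the output weights — can be sketched in a few lines of NumPy. This is a generic illustration, not the paper's exact model or seismic features:

```python
import numpy as np

# Minimal Extreme Learning Machine (ELM) regressor: the hidden layer is a
# random fixed feature map; only the output weights are fitted, via the
# Moore-Penrose pseudo-inverse, which is what makes training so fast.

class ELMRegressor:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)      # random feature map

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y        # single closed-form solve
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage: regress y = sin(x) from 200 samples.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
model = ELMRegressor().fit(X, y)
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
```

Because there is no iterative weight optimisation, training reduces to one matrix factorisation, which explains the large speed advantage over SVM training reported in the abstract.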


Automated Skills Mapping and Career Development using AI

Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China

ABSTRACT

Artificial intelligence is an eye-catching term that is impacting every industry in the world. With the rise of such advanced technology, there will always be questions about its impact on our social life, environment, and economy, and thus on all efforts exerted towards continuous development. By definition, the welfare of human beings is the core of continuous development. Continuous development is useful only when ordinary people’s lives are improved, whether in health, education, employment, environment, equality, or justice. Securing decent jobs is a key enabler of the components of continuous development: economic growth, social welfare, and environmental sustainability. Human resources are a nation’s most precious resource. High unemployment and underemployment rates, especially among youth, are a great threat to the continuous economic development of many countries and are influenced by investment in education and quality of living.

KEYWORDS

Artificial Intelligence, Conceptual Blueprint, Continuous Development, Human Resources, Learning and Employability Blueprint.


One-Class Model for Fabric Defect Detection

Hao Zhou, Yixin Chen, David Troendle, Byunghyun Jang, Computer Information and Science, University of Mississippi, University, USA

ABSTRACT

An automated and accurate fabric defect inspection system is in high demand as a replacement for slow, inconsistent, error-prone, and expensive human operators in the textile industry. Previous efforts focused on certain types of fabrics or defects, which is not an ideal solution. In this paper, we propose a novel one-class model that is capable of detecting various defects on different fabric types. Our model takes advantage of a well-designed Gabor filter bank to analyze fabric texture. We then leverage an advanced deep learning algorithm, the autoencoder, to learn general feature representations from the outputs of the Gabor filter bank. Lastly, we develop a nearest neighbor density estimator to locate potential defects and draw them on the fabric images. We demonstrate the effectiveness and robustness of the proposed model by testing it on various types of fabrics such as plain, patterned, and rotated fabrics. Our model also achieves a true positive rate (a.k.a. recall) of 0.895 with no false alarms on our dataset based upon the Standard Fabric Defect Glossary.

KEYWORDS

Fabric defect detection, One-class classification, Gabor filter bank.
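A Gabor filter bank of the kind used for texture analysis can be built by rotating a single kernel through several orientations. The kernel below is the standard real Gabor function with illustrative parameter values, not the paper's tuned bank:

```python
import numpy as np

# A real-valued Gabor kernel: a Gaussian envelope modulating a cosine
# carrier, rotated to the given orientation theta.

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd)
    return envelope * carrier

def gabor_bank(orientations=4, **kw):
    """Build a small bank by sweeping theta over equally spaced angles."""
    return [gabor_kernel(theta=k * np.pi / orientations, **kw)
            for k in range(orientations)]

bank = gabor_bank()
```

Convolving a fabric image with each kernel in the bank yields a stack of orientation-selective responses, which is the kind of texture representation the autoencoder then learns from.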


Anime4You: An Intelligent Analytical Framework for Anime Recommendation and Personalization using AI and Big Data Analysis

Kaiho Cheung1, Ishmael Rico2, Tao Li3 and Yu Sun4, 1Sentinel Secondary, 1250 Chartwell Dr, West Vancouver, BC V7S 2R2, 2University of California, Berkeley, CA, 94709, 3Purdue University, West Lafayette, IN 47907, 4California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

In recent years the popularity of anime has steadily grown, and as with other forms of media, consumers often face a pressing issue: “What do I watch next?”. In this study we thoroughly examined the current method of solving this issue and determined that the learning curve to effectively utilize the current solution is too high. We developed a program to provide easier answers to the issue. The program uses a Python-based machine learning algorithm from Scikit-Learn and data from MyAnimeList to create an accurate model that delivers what consumers want: good recommendations [9]. We also carried out different experiments with several iterations to study the difference in accuracy when applying different factors. Through these tests, we have successfully created a reliable Support Vector Machine model that recommends users what to watch.

KEYWORDS

Machine learning, anime, recommendations, Python.


GAN-Based Data Augmentation and Anonymization for Mask Classification

Mustafa Çelik1,2 and Ahmet Haydar Örnek1,3, 1Huawei Turkey R&D Center, Istanbul, Turkey, 2Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey, 3Department of Computer Electrical & Electronics Engineering, Konya Technical University, Konya, Turkey

ABSTRACT

Deep learning methods, especially convolutional neural networks (CNNs), have made major contributions to computer vision. However, deep learning classifiers need large-scale annotated datasets to be trained without over-fitting, and models trained on highly diverse data generalize better; yet collecting such large-scale datasets remains challenging. Furthermore, it is invaluable for researchers to protect subjects' confidentiality when using personal data such as face images. In this paper, we propose a deep learning Generative Adversarial Network (GAN) that generates synthetic samples for our mask classification model. Our contribution in this work is two-fold. First, GAN models can be used as an anonymization tool when subjects' confidentiality matters. Second, the generated masked/unmasked face images boost the performance of the mask classification model when the synthetic images are used as a form of data augmentation. In our work, the classification accuracy using only traditional data augmentations is 93.71%. Using both synthetic data and original data with traditional data augmentations, the result is 95.50%. This shows that GAN-generated synthetic data boosts the performance of deep learning classifiers.

KEYWORDS

Convolutional Neural Network, Data Anonymization, Data Augmentation, Generative Adversarial Network, Mask Classification.


Clustering Decomposition of the Greenhouse Climate Fuzzy Model

Paulo Salgado, Dep.to Engenharias-ECT, Universidade de Trás-os-Montes e Alto Douro, Vila Real, Portugal

ABSTRACT

Fuzzy modelling has been widely applied as a powerful methodology for the identification of nonlinear systems, as is the case for the agricultural greenhouse climate system. In this paper, a generalized Probabilistic Fuzzy Clustering of Fuzzy Rules (PFCFR) algorithm applied to TS-fuzzy system clustering is proposed to split the (flat) fuzzy model into a set of fuzzy sub-models. The decomposition is realized by using a generalized fuzzy c-means clustering method to decompose the T-S fuzzy system, which generates several clusters of rules. This new methodology was tested by splitting the flat fuzzy models of inside greenhouse air temperature into fuzzy sub-models, which are compared to their counterpart physical sub-models.

KEYWORDS

Fuzzy clustering, Fuzzy modelling, Fuzzy model decomposition, Greenhouse climate.
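Plain fuzzy c-means, the method the PFCFR algorithm generalizes, alternates between updating cluster centers and soft memberships. The sketch below clusters raw data points for illustration; the paper applies the generalized form to TS fuzzy rules, not points:

```python
import numpy as np

# Classic fuzzy c-means: every point belongs to every cluster with a
# membership degree in [0, 1]; memberships for each point sum to 1.

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # normalize memberships
    for _ in range(n_iter):
        Um = U ** m                               # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                     # avoid division by zero
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)  # membership update
    return centers, U
```

In the rule-clustering setting, rules whose membership concentrates in one cluster would together form one fuzzy sub-model.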


Proposal for a Methodology to Create an Artificial Pancreas for the Control of Type 1 Diabetes using Machine Learning and Automated Insulin Dosing Systems (AID)

Rodrigo Vilanova1 and Anderson Jefferson Cerqueira2, 1Avantsec, Brasilia, Brazil, 2Computer Science Department, University of Brasília (UnB), Brasilia, Brazil

ABSTRACT

The number of children, adolescents, and adults living with diabetes increases annually due to lack of physical activity, poor diet habits, stress, and genetic factors, with the greatest numbers in low-income countries. The aim of this article is therefore to present a proposed methodology for developing an artificial pancreas that uses artificial intelligence to control the required doses of insulin for a patient with type 1 diabetes (T1D), according to data received from monitoring sensors. The information collected can be used by physicians to make medication changes and improve patients’ glucose control using insulin pumps for optimum performance. Using the model proposed in this work, the patient gains better glucose control and, therefore, an improvement in quality of life, as well as a reduction in costs related to hospitalization.

KEYWORDS

Machine Learning, Artificial Intelligence, Data Mining, Diabetes, T1D, AID, iCGM, Clustering, Regression, Decision-making, Time series, Cybersecurity, Methodology.


Smart Tab Predictor: A Chrome Extension to Assist Browser Task Management using Machine Learning and Data Analysis

Brian Hu1, Evan Gunnell2 and Yu Sun2, 1Arnold O. Beckman High School, Irvine, CA 92602, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

The outbreak of the Covid-19 pandemic has forced most schools and businesses into digital learning and working. Many people have repetitive web browsing activities or encounter too many open tabs, causing slowness in surfing websites. This paper presents a tab predictor application, a Chrome browser extension that uses Machine Learning (ML) to predict the next URL to open based on the time and frequency of current and previous tabs. Nowadays, AI technology has expanded into people’s daily lives, as in self-driving cars and assistive robots. The AI/ML module in our application is more basic and is built using Python and the Scikit-Learn (sklearn) machine learning library. We use JavaScript and the Chrome API to collect browser tab data and store it in a Firebase Cloud Firestore. The ML module then loads data from Firebase, trains on the datasets to adapt to a user’s patterns, and predicts which URLs to recommend opening. We compare three ML models and select the Random Forest Classifier. We also apply SMOTE (Synthetic Minority Oversampling Technique) to make the dataset more balanced, thus improving prediction accuracy. Both manual tests and cross validation are performed to verify the predicted URLs. As a result, the Smart Tab Predictor application will help students and business workers manage their browser tabs more efficiently in their daily routine of online classes, online meetings, and other websites.

KEYWORDS

Machine Learning, Chrome extension, Task Management.
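SMOTE rebalances a dataset by synthesizing new minority-class samples along the line segments between a minority point and one of its nearest minority neighbours. The bare-bones sketch below shows the idea; a real application would use a library implementation such as the one in imbalanced-learn:

```python
import numpy as np

# Bare-bones SMOTE: each synthetic sample is a random convex combination
# of a minority sample and one of its k nearest minority neighbours.

def smote(X_min, n_new, k=3, seed=0):
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # k nearest minority neighbours of sample i (excluding itself).
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]
        j = rng.choice(neighbours)
        gap = rng.random()                       # interpolation factor
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```

Because the synthetic points lie between existing minority samples, the classifier sees a denser minority region instead of duplicated points, which is what improves prediction accuracy on the rare classes.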


Adaptive Fault Resolution for Database Replication Systems

Chee Keong Wee1 and Nathan Wee2, 1Digital Application Services, eHealth Queensland, Queensland, Australia, 2Science &Engineering Faculty, Queensland University of Technology, Queensland, Australia

ABSTRACT

Database replication is ubiquitous in organizations’ IT infrastructure when data is shared across multiple systems and service uptime is critical. But complex software will eventually suffer outages under various circumstances, and it is important to resolve them promptly and restore services. This paper proposes an approach to resolving data replication software faults through deep reinforcement learning. Empirical results show that the new method can resolve software faults quickly and with high accuracy.

KEYWORDS

Database Management, Data replication, reinforcement learning, fault resolution.
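To illustrate the reinforcement-learning idea at toy scale, the sketch below applies tabular Q-learning to an invented three-state fault-resolution task; the states, actions, and rewards are made up for illustration, while the paper itself uses deep reinforcement learning on real replication faults:

```python
import random

# Tabular Q-learning on a toy fault-resolution task: learn which repair
# action to take in each state to restore service with maximum reward.

STATES = ["fault", "partially_restored", "restored"]
ACTIONS = ["restart_capture", "resync_target"]

def step(state, action):
    # Hand-crafted toy dynamics: resync only works once capture runs.
    if state == "fault":
        return ("partially_restored", -1) if action == "restart_capture" \
            else ("fault", -1)
    if state == "partially_restored":
        return ("restored", 10) if action == "resync_target" \
            else ("partially_restored", -1)
    return (state, 0)

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    random.seed(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = "fault"
        while s != "restored":
            # Epsilon-greedy action selection.
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r = step(s, a)
            # Standard Q-learning temporal-difference update.
            Q[(s, a)] += alpha * (
                r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2
    return Q
```

After training, the greedy policy reads off the highest-valued action per state, i.e. restart the capture process first, then resynchronize the target.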


AI4Truth: An In-Depth Analysis on Misinformation using Machine Learning and Data Science

Kevin Qu and Yu Sun, California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

A number of social issues have grown out of the increasing amount of “fake news”. With inevitable exposure to this misinformation, it has become a real challenge for the public to discern the truth accurately. In this paper, we apply machine learning to investigate the correlations between information and the way people treat it. With enough data, we are able to safely and accurately predict which groups are most vulnerable to misinformation. In addition, we found that the structure of the survey itself could help with future studies, and that both the method by which news articles are presented and the news articles themselves contribute to the result.

KEYWORDS

Machine Learning, Cross Validation, Training and Prediction, Misinformation.


Job Satisfaction and Organisational Climate among Teaching Staff of Chaudhary Bansi Lal University, Bhiwani: A Study

Dr. Jitender Kumar, Library and Information Science Professional Assistant, Chaudhary Bansi Lal University, Bhiwani

ABSTRACT

The purpose of this study is to determine job satisfaction among teaching personnel at Chaudhary Bansi Lal University in Bhiwani, Haryana. Respondents’ choice of profession, the importance of job rotation, respondents’ satisfaction with their current employer, the role of professional bodies in protecting employees’ welfare interests, job security, work environment, interpersonal relationships, appreciation, advancement, organisational administration, and so on are all factors that influence job satisfaction among teaching staff.

KEYWORDS

Job satisfaction, Organisational Climate, Teaching staff.


A Daily Covid-19 Cases Prediction System using Data Mining and Machine Learning Algorithm

Yiqi Gao1 and Yu Sun2, 1Sage High School, Newport Coast, CA 92657, 2California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

The start of 2020 marked the beginning of the deadly COVID-19 pandemic caused by the novel SARS-CoV-2 virus from Wuhan, China. At the time of writing, the virus has infected over 150 million people worldwide and resulted in more than 3.5 million global deaths. Accurate predictions made using machine learning algorithms can serve as a guide for hospitals and policy makers to make adequate preparations and enact effective policies to combat the pandemic. This paper takes a two-pronged approach to analyzing COVID-19. First, it utilizes machine learning algorithms such as linear regression, polynomial regression, and random forest regression to make accurate predictions of daily COVID-19 cases using combinations of a range of predictors. Then, using the feature significance of random forest regression, it compares the influence of the individual predictors on the general trend of COVID-19 with the predictions made, highlighting factors of high influence that can then be targeted by policies for an efficient pandemic response.

KEYWORDS

Covid-19 Case Prediction, Data Mining, Machine Learning Algorithm.
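Of the models mentioned, polynomial regression is the simplest to illustrate. The sketch below fits a degree-2 polynomial to a synthetic daily-case curve with NumPy's least-squares fit; the data is simulated, not real COVID-19 counts:

```python
import numpy as np

# Polynomial regression on a synthetic daily-case curve: fit a quadratic
# trend by least squares, then forecast a future day.

days = np.arange(60, dtype=float)
cases = 50 + 12 * days + 0.8 * days**2        # synthetic quadratic trend
coeffs = np.polyfit(days, cases, deg=2)        # least-squares fit
model = np.poly1d(coeffs)
pred = model(65.0)                             # forecast for day 65
```

Because the synthetic data is exactly quadratic, the fit recovers the generating coefficients; on real case counts the same call gives the best quadratic approximation instead.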


Distributed Automated Software Testing using Automation Framework as a Service

Santanu Ray1 and Pratik Gupta2, 1Ericsson, New Jersey, USA, 2Ericsson, Kolkata, India

ABSTRACT

Conventional test automation frameworks execute test cases sequentially, which increases execution time. Even when a framework supports executing multiple test suites in parallel, we are often unable to do so due to system limitations and infrastructure cost. Building and maintaining an automation framework is also time-consuming and costly. This paper describes the design of a scalable test automation framework that provides the test framework as a service, expediting test execution by distributing test suites across multiple services running in parallel without any extra infrastructure.

KEYWORDS

Distributed Testing, Robot Framework, Docker, Automation Framework.


Airborne Software Development Processes Certification Review Strategy based on RTCA/Do-178C

Jinghua Sun1, Samuel Edwards2, Nic Connelly3, Andrew Bridge4 and Lei Zhang1, 1COMAC Shanghai Aircraft Design and Research Institute, Shanghai, China, 2Defence Aviation Safety Authority, 661 Bourke St, Melbourne, VIC, Australia, 3School of Engineering, RMIT University, Melbourne, VIC, Australia, 4European Union Aviation Safety Agency, Cologne, Germany

ABSTRACT

Airborne software is invisible and intangible, yet it can significantly impact the safety of the aircraft. It cannot be exhaustively tested and can only be assured through a structured process-, activity-, and objective-based approach. This paper studies the development processes and objectives applicable to different software levels based on RTCA/DO-178C. We identified 82 technical focus points based on each airborne software development sub-process, then created a Process Technology Coverage matrix to demonstrate the technical focus of each process. We developed an objective-oriented top-down and bottom-up sampling strategy for the four software Stage of Involvement (SOI) reviews by considering the frequency and depth of involvement. Finally, we created a Technology Objective Coverage matrix, which supports reviewers in performing efficient risk-based SOI reviews by considering the identified technical points, thus ensuring the safety of the aircraft from the software assurance perspective.

KEYWORDS

Airborne Software, SOI, DO-178C, Objective, Sampling Strategy.


Checklist Usage in Secure Software Development

Zhongwei Teng, Jacob Tate, William Nock, Carlos Olea, Jules White, Vanderbilt University, USA

ABSTRACT

Checklists have been used to increase safety in aviation and to help prevent mistakes in surgeries. However, despite their success in many domains, checklists have not been universally successful in improving safety. A large volume of checklists is being published online to help software developers produce more secure code and avoid mistakes that lead to cyber-security vulnerabilities. It is not clear whether these secure development checklists are an effective method of teaching developers to avoid cyber-security mistakes and of reducing coding errors that introduce vulnerabilities. This paper presents in-progress research examining the secure coding checklists available online, how they map to well-known checklist formats investigated in prior human factors research, and unique pitfalls related to decidability, abstraction, and reuse that some secure development checklists exhibit.

KEYWORDS

Checklists, Cyber Security, Software Development.


Mapafa: An Intelligent Mobile System to Assist School Commute Planning using Big Data and Machine Learning

Marco Meng1, Yu Sun2 and Ang Li3, 1Sage Hill, 20402 Newport Coast Dr, Newport Coast, CA 92657, 2California State Polytechnic University, Pomona, CA, 91768, 3California State University Long Beach, Long Beach, CA 90840

ABSTRACT

Many functional maps exist in our society, making it very easy to track down and navigate to places. However, there are certain functions that digital maps like Google Maps lack that could really help people get to certain places faster and more efficiently. The Mapafa application can provide you with information to get to your destination from your house or current location. Have you ever forgotten to go somewhere and wondered if you should still go at that time? Mapafa can also help parents pick up students in the afternoon, sending notifications every day at that time to remind parents in case they forget [5]. You can also set the application to help you get to school on time by reminding you when to leave, showing current traffic, and more. In conclusion, the app Mapafa can be very helpful in different situations, such as knowing when to go to school, go home, or visit a place you forgot about because you were engaged in another activity.

KEYWORDS

Machine Learning, routing recommendation, Data Mining.


Crystal: A Privacy-Preserving Distributed Reputation Management

Ngoc Hong Tran1, Tri Nguyen2, Quoc Binh Nguyen3, Susanna Pirttikangas4, M-Tahar Kechadi5, 1Vietnamese-German University, Vietnam, 2Center for Ubiquitous Computing, University of Oulu, Finland, 3Ton Duc Thang University, Vietnam, 4School of Computer Science, University College Dublin, Ireland, 5Insight Centre for Data Analytics, University College Dublin

ABSTRACT

This paper investigates situations in which Internet access is unavailable in specific areas while users there need instant advice from others nearby. Hence, a peer-to-peer network is established by connecting all neighbouring mobile devices so that they can exchange questions and recommendations. However, not all received recommendations are reliable, as users may be unknown to each other. Therefore, the trustworthiness of advice is evaluated based on the advisor’s reputation score. The reputation score is stored locally on the user’s mobile device, but it is not guaranteed to be trustworthy if its owner manipulates it with wrong intentions. A further privacy problem concerns the questioning user honestly auditing the reputation score of the advising user. This work therefore proposes a security model, named Crystal, for securely managing distributed reputation scores and preserving user privacy. Crystal ensures that the reputation score can be verified, computed, and audited in a secret way. Another significant point is that the devices in the peer-to-peer network have limited physical resources, such as bandwidth, power, and memory. For this reason, Crystal applies lightweight Elliptic Curve Cryptography algorithms so that it consumes fewer of the devices’ physical resources. The experimental results show that the performance of our proposed model is promising.

KEYWORDS

Reputation, peer to peer, privacy, security, homomorphic encryption, decentralized network.


An Application of Neural Network Theory in Mathematics Teaching in a Secondary School or College Setting

Cyprian Otutu Alozie, Department of Education, Canterbury Christ Church University, UK

ABSTRACT

This essay aims to identify and analyse a theoretical approach to the teaching of mathematics in a secondary school or college setting. It is intended that the theory can be organised into a reliable and succinct knowledge framework that educators might consider and use when planning and teaching mathematics, and as a provision of credible arguments to be put to potential schools to help raise achievement in mathematics. Approaching from a socio-technical background, I intend to explore the use of neural network theory in the teaching of mathematics. The property that is of significance for the neural network is the ability of the network (students) to learn from the environment (teachers/schools) and to improve its performance through learning. The neural networks learn about their environment through an interactive process of adjustments, applied to their synaptic weights and levels (Haykin, 1999). The network becomes more knowledgeable about its environment after each iteration of the learning process.

KEYWORDS

Artificial Neural Network, Iterative Error-correction Learning, Feedback, Feedforward, Receptors, Neural net, and effectors.
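The iterative error-correction learning described in the abstract can be illustrated with the classic perceptron delta rule, here learning the logical AND function. This is a textbook sketch added for illustration, not material from the essay:

```python
# Error-correction learning: after each example, adjust the synaptic
# weights in proportion to the error between target and actual output.

def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):                     # iterations of learning
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                  # error signal
            w[0] += lr * err * x1               # synaptic weight updates
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for logical AND as the "environment" to learn from.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

Each pass through the data is one iteration of the learning process, after which the network is measurably "more knowledgeable" about its environment, exactly the behaviour the essay highlights.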


Using AI Applications on Internet of Things (IoT)

Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China

ABSTRACT

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people’s lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships of AI, big data and IoT, as well as the opportunities provided by the applications in various operational domains.

KEYWORDS

Artificial Intelligence, Big Data, IoT, Digital Transformation Revolution, Machine Learning.


Internet of Things (IoT): Conceptual Definitions, Information Security and Privacy Concerns under the General Data Protection Regulation (GDPR)

Olumide Babalola, School of Law, University of Reading, Whiteknights, Reading, United Kingdom

ABSTRACT

Internet of Things (IoT) refers to the seamless communication and interconnectivity of multiple devices within a certain network, enabled by sensors and other technologies facilitating unusual processing of personal data for the performance of a certain goal. This article examines the various definitions of the IoT from technical and socio-technical perspectives, and goes on to describe some practical examples of IoT by demonstrating their functionalities vis-à-vis the anticipated privacy and information security implications. Predominantly, the article discusses the information security and privacy risks posed by the operation of IoT as envisaged under the EU GDPR and makes a few recommendations on how to address those risks.

KEYWORDS

Data Protection, GDPR, Information Security, Internet of Things, Privacy.


New Continuous-Discrete Model for Wireless Sensor Networks Security

Yevgen Kotukh, Volodymyr Lubchak and Oleksandr Strakh, Department of Cybersecurity, Sumy State University, Sumy, Ukraine

ABSTRACT

A wireless sensor network (WSN) is a group of "smart" sensors with a wireless infrastructure designed to monitor the environment. This technology is a basic building block of the Internet of Things (IoT). WSNs can transmit confidential information while working in insecure environments, so appropriate security measures must be considered in the network design. However, computational node constraints, limited storage space, an unstable power supply, unreliable communication channels, and unattended operations are significant barriers to the application of cybersecurity techniques in these networks. This paper considers a new continuous-discrete model of malware propagation through wireless sensor network nodes, based on a system of so-called dynamic equations with a pulse effect on the time scale.

KEYWORDS

IoT, wireless network, security model, national cybersecurity.
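The flavour of a continuous-discrete model with pulse effects can be sketched as a discrete-time epidemic simulation in which infections spread continuously between pulses, and a periodic "pulse" event (e.g. a patching campaign) cleans a fraction of infected nodes at once. All rates below are illustrative, not from the paper:

```python
# Toy malware-propagation model over WSN nodes with a periodic pulse:
# mass-action infection plus recovery between pulses, and an instantaneous
# cleanup of a fraction of infected nodes at each pulse instant.

def simulate(n_nodes=1000, beta=0.3, recover=0.05, pulse_every=10,
             pulse_fraction=0.5, steps=50):
    S, I = n_nodes - 1.0, 1.0                  # susceptible, infected
    history = []
    for t in range(1, steps + 1):
        new_inf = beta * S * I / n_nodes       # mass-action infection term
        S = S - new_inf
        I = I + new_inf - recover * I          # smooth dynamics
        if t % pulse_every == 0:               # discrete pulse effect
            I *= 1.0 - pulse_fraction          # patched nodes are cleaned
        history.append(I)
    return history
```

Plotting the history shows the characteristic sawtooth of pulse-controlled dynamics: infections grow between pulses and drop sharply at each patching instant.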


Emotion Classification using 1D-CNN and RNN based on Deap Dataset

Farhad Zamani and Retno Wulansari, Telkom Corporate University Center, Telkom Indonesia, Bandung, Indonesia

ABSTRACT

Recently, emotion recognition has begun to be implemented in industry and the human resources field. When we can perceive the emotional state of an employee, the employer can benefit by improving the quality of decisions regarding that employee. Hence, this subject could become an embryo for emotion recognition tasks in the human resources field. In fact, emotion recognition has become an important research topic, especially recognition based on physiological signals such as EEG. One reason is the availability of EEG datasets that can be widely used by researchers; moreover, the development of many machine learning methods has contributed significantly to this research topic over time. Here, we investigate classification methods for emotion and propose two models to address this task, each a hybrid of two deep learning architectures: a One-Dimensional Convolutional Neural Network (1D-CNN) and a Recurrent Neural Network (RNN). We implement the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) in the RNN architecture, which are specifically designed to address the vanishing gradient problem that is usually an issue with time-series data. We use these models to classify four emotional regions of the valence-arousal plane: High Valence High Arousal (HVHA), High Valence Low Arousal (HVLA), Low Valence High Arousal (LVHA), and Low Valence Low Arousal (LVLA). The experiments were run on the well-known DEAP dataset. Experimental results show that the proposed methods achieve training accuracies of 93.2% and 95.8% for the 1DCNN-GRU and 1DCNN-LSTM models, respectively. Both models are therefore quite robust at this emotion classification task.

KEYWORDS

Emotion Recognition, 1D Convolutional Neural Network, LSTM, GRU, DEAP.
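The gating mechanism that lets a GRU mitigate vanishing gradients can be shown in a single NumPy step function: the update gate interpolates between the old hidden state and a candidate, giving gradients a near-direct path through time. The dimensions and random weights below are illustrative, not the paper's trained model:

```python
import numpy as np

# One GRU time step: update gate z, reset gate r, candidate state, and a
# gated interpolation between old and candidate hidden states.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(x @ Wz + h @ Uz)                # update gate
    r = sigmoid(x @ Wr + h @ Ur)                # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)    # candidate state
    return (1 - z) * h + z * h_tilde            # gated interpolation

# Run the cell over a toy random sequence.
rng = np.random.default_rng(0)
d_in, d_h = 8, 16
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_in, d_h), (d_h, d_h)] * 3]
h = np.zeros(d_h)
for _ in range(32):
    h = gru_step(rng.normal(size=d_in), h, *params)
```

In the paper's setting the inputs at each step would be feature maps produced by the 1D-CNN front end over EEG windows, and the final hidden state would feed the four-class valence-arousal classifier.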


The Future of Internet of Things (IoT) and AI

Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China

ABSTRACT

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people’s lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships of AI, big data and IoT, as well as the opportunities provided by the applications in various operational domains.

KEYWORDS

Artificial Intelligence, Big Data, IoT, Digital Transformation Revolution, Machine Learning.


A Maintainability Estimation Framework and Metrics for Object Oriented Software (MEFOOS)

Elijah Macharia, Prof. Waweru Mwangi, Dr. Michael Kimwele, School of Computing and IT., Jomo Kenyatta University of Agriculture & Technology, Nairobi, Kenya

ABSTRACT

The time, effort, and money needed to maintain a computer software system have always been viewed as greater than the time taken to develop it. Likewise, the vagueness of determining the maintainability of software at the early phases of development makes the process more confounded. This paper demonstrates the necessity and significance of software maintainability at the design phase and builds a multilinear regression model, the ‘Maintainability Estimation Framework and Metrics for Object Oriented Software (MEFOOS)’, by extending the MOOD metrics. The framework estimates the maintainability of object-oriented software components in terms of their testability, understandability, modifiability, reusability, and portability by using design-level object-oriented metrics of the software components. Such early measurement of maintainability will help software designers to revise a software component, if there is any shortcoming, in the early stages of design, and consequently improve the maintainability of the final software. As a result, the time, effort, and money required to maintain software can be reduced significantly. The framework has been validated through proper statistical measures, and a logical interpretation has been drawn.

KEYWORDS

Software maintenance, object-oriented design, software metrics, software maintainability, mood metrics, software component, maintainability model.


Fair allocation algorithm tailored to predictive policing in Bogotá

Mateo Dulce, Quantil, Colombia

ABSTRACT

We address the tradeoff between developing good predictive models for police allocation and optimally allocating a scarce resource, police officers, over the city in a way that does not imply an unfair allocation of resources. We modify a fair allocation algorithm to tackle a real-world problem, crime in the city of Bogotá, Colombia, allowing for more sophisticated prediction models, and we show that the whole methodology outperforms the current police allocation mechanism in the city. Results show that even with a model as simple as Kernel Density Estimation (KDE) of crime, one can obtain much better predictions than the current police model and at the same time mitigate fairness concerns. Although we cannot provide general performance guarantees, as one could with a Poisson model of crime, our results apply to a real-life problem and should be taken seriously by policymakers.
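The KDE-based prediction step can be illustrated in a few lines. This is a generic sketch of kernel density estimation over past incident locations, not the paper's model: the crime coordinates, grid, bandwidth, and number of patrol cells below are all assumptions.

```python
import numpy as np

# Illustrative past crime locations (x, y) in arbitrary city-grid units.
crimes = np.array([[1.0, 1.2], [1.1, 0.9], [0.9, 1.1],
                   [4.0, 4.2], [4.1, 3.9], [1.0, 1.0]])
bandwidth = 0.5  # assumed kernel bandwidth

def kde_intensity(point, data, h):
    """Gaussian kernel density estimate of crime intensity at `point`."""
    d2 = np.sum((data - point) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2 * h * h))) / (2 * np.pi * h * h)

# Score a coarse grid of candidate patrol cells and send officers
# to the highest-intensity cells.
cells = [(x, y) for x in range(6) for y in range(6)]
scores = {c: kde_intensity(np.array(c, float), crimes, bandwidth) for c in cells}
top = sorted(scores, key=scores.get, reverse=True)[:2]
print(top)
```

Allocating purely by predicted intensity is exactly where the fairness concern arises; the paper's contribution is to constrain this allocation step, which the sketch leaves out.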

KEYWORDS

Predictive Policing, Algorithmic Fairness, Optimal Allocation of Police, Censored Data.


Modeling a 3G Mobile Phone Base Radio using Artificial Intelligence Techniques

Eduardo Calo, Cristina Sánchez, David Jines, Giovanny Amancha and Alex Santana G

ABSTRACT

The main objective of this work is to use artificial intelligence techniques to design a predictive model of the performance of a third-generation mobile phone base radio, based on the analysis of KPIs obtained from a statistical dataset of the daily behaviour of an RBS. Various techniques, such as Decision Trees, Neural Networks and Random Forests, were used to build these models, allowing faster progress in the deep analysis of large amounts of statistical data and producing better results. This work concludes that a predictive model based on artificial intelligence techniques is very useful for analysing large amounts of data in order to find or predict complex results more quickly and reliably.
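A Random Forest classifier over daily KPI rows, one of the techniques named above, can be sketched as follows. The KPI columns (throughput, drop rate, active users), the labels, and all values are invented assumptions standing in for the paper's RBS dataset.

```python
from sklearn.ensemble import RandomForestClassifier

# Illustrative daily KPI rows for one RBS:
# [throughput_mbps, drop_rate_pct, active_users].
X = [[12.0, 0.5, 80], [11.5, 0.7, 90], [10.0, 1.0, 120],
     [3.0, 4.0, 300], [2.5, 5.2, 280], [4.0, 3.5, 260]]
# Assumed labels: acceptable vs. degraded cell performance.
y = ["ok", "ok", "ok", "degraded", "degraded", "degraded"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Predict the performance state implied by a fresh KPI reading.
print(model.predict([[11.0, 0.8, 100]])[0])
```

The same fit/predict pattern applies to the Decision Tree and Neural Network models the abstract mentions, by swapping the estimator class.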

KEYWORDS

Neural Networks, Performance, Radio Base, Random Forest, Throughput.


RTOS based Embedded Solution for Multi-Purpose Radio Frequency Communication

Meghang Nagavekar and Arthur Gomes, Manipal Institute of Technology, Manipal-576104, India

ABSTRACT

Based on Real-Time Operating System (RTOS) concepts, a continuous data transceiver system is designed. Wireless data transmission is enabled using the HC-12 board as a Radio Frequency (Bluetooth) module. An STM32 microcontroller is chosen to perform the data/signal-processing computations. An open-source middleware, FreeRTOS, is used to implement the RTOS on the microcontroller. The complete transceiver system consists of an electronic remote controller as the transmitter and a multi-purpose electronic driver setup as the receiver. The receiver module can be integrated into various systems as per the user’s requirements. The controller’s applications in future research range from the manual operation of industrial machinery to the safety testing/prototyping of medical robots. The overall system is fast, reliable and convenient.
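The receiver side of such a design typically follows the classic RTOS pattern of one task feeding radio data into a queue and another task blocking on it. The paper's firmware is C on an STM32; the sketch below is only a Python-threads analogy of that queue pattern (the commands and queue layout are assumptions), with comments noting the FreeRTOS counterparts.

```python
import queue
import threading

# Bounded queue, analogous to a FreeRTOS queue created with xQueueCreate.
rx_queue = queue.Queue(maxsize=32)

def radio_rx_task():
    """Stand-in for data arriving from the HC-12 module (cf. xQueueSend)."""
    for cmd in [b"FWD", b"LEFT", b"STOP"]:
        rx_queue.put(cmd)       # blocks if the queue is full
    rx_queue.put(None)          # sentinel: end of transmission

def driver_task(log):
    """Consumer task that would drive the outputs (cf. xQueueReceive)."""
    while True:
        cmd = rx_queue.get()    # blocks until data is available
        if cmd is None:
            break
        log.append(cmd)         # on hardware: actuate motors/relays instead

log = []
t1 = threading.Thread(target=radio_rx_task)
t2 = threading.Thread(target=driver_task, args=(log,))
t1.start(); t2.start(); t1.join(); t2.join()
print(log)  # [b'FWD', b'LEFT', b'STOP']
```

Because the consumer blocks on the queue rather than polling, the scheduler can run other tasks between commands, which is the main benefit an RTOS brings to this transceiver.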

KEYWORDS

Embedded Systems, Radio Frequency, Bluetooth, FreeRTOS, STM32.


10-Bit, 1 GS/s Time-Interleaved SAR ADC

Shravan K Donthula1 and Supravat Debnath2, 1Department of Electrical Engineering, Indian Institute of Technology, Hyderabad, India, 2Integrated Sensor Systems, Centre for Interdisciplinary Programs, Indian Institute of Technology, Hyderabad, India

ABSTRACT

This paper describes the implementation of a 4-channel 10-bit, 1 GS/s time-interleaved analog-to-digital converter (TI-ADC) in 65 nm CMOS technology. Each channel consists of an interleaved T/H and ADC array operating at 250 MS/s, and each ADC array consists of 14 time-interleaved sub-ADCs. This configuration provides a high sampling rate even though each sub-ADC works at a moderate rate. A 10-bit successive approximation ADC (SAR ADC) was selected as the sub-ADC, since this architecture is most suitable for low power and medium resolution. The SAR ADC performs a binary search, resolving one bit at a time. The target sampling rate in this design was 20 MS/s; however, the achieved rate is 15 MS/s. As a result, the 10-bit SAR ADC operates at 15 MS/s with a power consumption of 560 µW at a 1.2 V supply and achieves an SNDR of 57 dB (i.e. ENOB of 9.2 bits) for a near-Nyquist-rate input. The resulting Figure of Merit (FoM) is 63.5 fJ/step. The achieved DNL and INL are +0.85/−0.9 LSB and +1/−1.1 LSB respectively. The 10-bit SAR ADC occupies an active area of 300 µm × 440 µm. The functionality of a single-channel TI-SAR ADC has been verified by simulation with an input signal frequency of 33.2 MHz and a clock frequency of 250 MHz. The desired SNDR of 59.3 dB has been achieved with a power consumption of 11.6 mW, resulting in an FoM of 60 fJ/step.
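The quoted figures are mutually consistent under the standard (Walden) figure of merit, FoM = P / (2^ENOB · fs), which a quick calculation confirms:

```python
# Check the abstract's single-channel SAR ADC numbers against the
# Walden figure of merit: FoM = P / (2**ENOB * fs).
P = 560e-6                        # power consumption, W
enob = (57 - 1.76) / 6.02         # ENOB from SNDR = 57 dB -> ~9.2 bits
fs = 15e6                         # achieved sampling rate, S/s

fom = P / (2 ** 9.2 * fs)
print(round(fom * 1e15, 1), "fJ/step")  # ~63.5 fJ/step, as reported
```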

KEYWORDS

ADC, SAR, TI-ADC, LSB, MSB, T/H, SCDAC, CDAC, SFDR, SINAD, SNR, TG, EOC, D-FF, MIM, MOM, DNL, INL.


Multichannel ADC IP Core on Xilinx SoC FPGA

A. Suresh, S. Shyama, Sangeeta Srivastava and Nihar Ranjan, Embedded Systems, Product Development and Innovation Center (PDIC), BEL, India

ABSTRACT

Sensing of analog quantities such as voltage, temperature, pressure, and current is required to acquire real-time analog signals as digital streams. Most static analog signals are converted into a voltage using sensors or transducers and then measured with ADCs. The digitized samples from the ADC are collected through either a serial or a parallel interface and processed by programmable chips such as processors, controllers, FPGAs, and SoCs, which take the appropriate mission-critical decisions in the system. In some cases, multichannel ADCs [1] are used to save layout area when the functionality must be realized in a small form factor. In such scenarios, a parallel interface per channel is not preferred, considering the larger number of interfaces/traces between the components. Given the exponential growth of serial interfaces for high-speed applications, the latest ADCs on the market provide serial interfaces even for multichannel operation, time-multiplexing the n channels. A custom sink multichannel IP core has been developed in VHDL to interwork with multichannel, time-division-multiplexed ADCs with a serial interface. The developed IP core can be used either as-is, with the SPI interface as specified in this paper, or with modifications according to how far the target deviates from that SPI interface in terms of its number of channels, sample size, sampling frequency, data-transfer clock, control signals, and the sequence of operations performed to configure the ADC and transfer data between the ADC and the programmable chip (here, an FPGA). The efficiency of the implementation is validated using throughput and accuracy measurements. A ZYNQ [4] FPGA and an LTC2358 [1] ADC are used to evaluate the developed IP core. The Integrated Logic Analyzer (ILA) [7], an integrated verification tool of Vivado, is used for verification. No third-party tool is required, whereas [2] uses the Synopsys Discovery AMS platform.
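The deserialization the sink IP core performs can be modeled behaviorally. The paper's implementation is VHDL on the FPGA; the Python sketch below only illustrates the idea of splitting a time-division-multiplexed, MSB-first serial frame back into per-channel samples, and the frame layout (8 channels × 16 bits, back-to-back) is an assumption rather than the LTC2358's exact format.

```python
def deserialize_frame(bits, n_channels=8, sample_bits=16):
    """Split one multiplexed serial frame into per-channel sample codes."""
    assert len(bits) == n_channels * sample_bits
    samples = []
    for ch in range(n_channels):
        word = bits[ch * sample_bits:(ch + 1) * sample_bits]
        value = 0
        for b in word:                  # shift in MSB first, like the
            value = (value << 1) | b    # shift register inside the IP core
        samples.append(value)
    return samples

# Example frame: channel k carries the constant code k + 1.
frame = []
for k in range(8):
    code = k + 1
    frame += [(code >> i) & 1 for i in range(15, -1, -1)]
print(deserialize_frame(frame))  # [1, 2, 3, 4, 5, 6, 7, 8]
```

In the actual core the same shift-and-count logic is clocked by the SPI data-transfer clock and framed by the ADC's conversion-done signal.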

KEYWORDS

ADC, Sensor, Multichannel, Accuracy, Sink Synchronization, FPGA, VHDL [3].


Handling Trust in a Cloud based Multi-Agent System

Imen Bouabdallah1 and Hakima Mellah2, 1Department of Computer Science, USTHB, Bab ezzouar, Algeria, 2Information and multimedia system, CERIST, Ben Aknoun, Algeria

ABSTRACT

Cloud computing can guarantee access to a large amount of IT infrastructure at several levels (software, hardware, ...). Yet, handling clients’ needs is becoming increasingly challenging under high demand. In this paper, multi-agent systems are implemented in the cloud to handle interactions on behalf of the providers, and trust is introduced at the agent level to filter the clients requesting services, using Particle Swarm Optimization and acquaintance knowledge to identify malicious and trustworthy clients. The selection depends on previous knowledge and the overall rating of trusted peers. The results show that the model produces relevant output even with a small number of peers.
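The PSO component itself can be sketched in isolation. The 1-D objective below (a trust threshold that separates known malicious from trustworthy client ratings, over made-up rating data) is an illustrative assumption, not the paper's fitness function; only the velocity/position update is standard PSO.

```python
import random

# Assumed past ratings of clients known to be honest vs. malicious.
trusted = [0.8, 0.9, 0.75, 0.85]
malicious = [0.2, 0.35, 0.3]

def fitness(threshold):
    """Clients misclassified by this trust threshold (lower is better)."""
    errors = sum(1 for r in trusted if r < threshold)
    errors += sum(1 for r in malicious if r >= threshold)
    return errors

rng = random.Random(1)
n = 10
pos = [0.5] + [rng.random() for _ in range(n - 1)]  # particle positions in [0, 1]
vel = [0.0] * n
pbest = pos[:]                                      # personal best positions
gbest = min(pos, key=fitness)                       # global best position

for _ in range(30):
    for i in range(n):
        r1, r2 = rng.random(), rng.random()
        # Standard PSO update: inertia + cognitive + social terms.
        vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) \
                              + 1.5 * r2 * (gbest - pos[i])
        pos[i] = min(1.0, max(0.0, pos[i] + vel[i]))
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=fitness)

print(round(gbest, 2))
```

In the paper's setting each provider agent would additionally weight this decision with acquaintance knowledge, i.e. ratings reported by trusted peers.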

KEYWORDS

Multi-agent system, cloud, trust, interaction, PSO.