6th International Conference on Foundations of Computer Science & Technology (CST 2019)

September 28 ~ 29, 2019, Copenhagen, Denmark

Accepted Papers

Distributed Vibration Control for the Large Space Structure

Enmei Wang1 and Shunan Wu2, 1School of Aeronautics and Astronautics, Dalian University of Technology, Dalian City, China and 2Key Laboratory of Advanced Technology for Aerospace Vehicles, Dalian University of Technology, Dalian City, China

ABSTRACT

To address the challenges of vibration suppression for large space structures (LSS), such as design complexity, limited fault tolerance, and the difficulty of repeated expansion, a distributed vibration control approach is proposed in this paper. Based on the structural characteristics, the LSS is first divided into separate control units, and a dynamic model of each unit is developed. A distributed LQR vibration controller is then designed for each unit, and the distributed vibration control system of the whole structure is assembled from these unit controllers. Simulations are presented to verify the validity of the proposed controller, and the results demonstrate that the repeatable distributed controllers achieve vibration suppression for the LSS and provide good fault-tolerance performance.

KEYWORDS

Large Space Structure, Distributed Control, Linear Quadratic Regulator, Fault Tolerance
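
The unit-level design step lends itself to a compact illustration. Below is a minimal sketch of computing a per-unit LQR gain, assuming each control unit reduces to a linear modal model x' = Ax + Bu; the modal parameters and weighting matrices here are illustrative placeholders, since the abstract does not specify the paper's actual unit models or weights.

```python
# Minimal sketch of a per-unit LQR controller (illustrative parameters).
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation and return the gain K."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.inv(R) @ B.T @ P

# Hypothetical single-mode unit: one vibration mode with frequency w and damping z.
w, z = 2.0, 0.005
A = np.array([[0.0, 1.0], [-w**2, -2*z*w]])
B = np.array([[0.0], [1.0]])
Q = np.diag([w**2, 1.0])   # penalize modal displacement and rate
R = np.array([[0.1]])      # penalize actuator effort

K = lqr_gain(A, B, Q, R)   # control law u = -K x, applied independently per unit
print(K)
```

Because each unit solves its own small Riccati equation, the same controller design can be replicated across units, which is what makes the scheme repeatable and tolerant to the failure of a single unit.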

A Technique for Automatically Highlighting the Lungs on X-Ray Images Based on Image Pre-Processing and K-Means Clustering

Nataly Ilyasova1,2 and Alexander Shirokanev1,2, 1IPSI RAS - branch of the FSRC «Crystallography and Photonics» RAS, Samara, Russia and 2Samara National Research University, Samara, Russia

ABSTRACT

In this paper, an information technology has been developed for automatically highlighting the lungs on X-ray images, based on image pre-processing, calculation of textural features, and k-means classification. In some cases, the highlighted objects can describe not only the patient's current condition but also specific characteristics regarding age, gender, constitution, etc. While using the k-means method, a relationship between the segmentation error and the fragmentation window size was revealed. Within the study, both a visual criterion for evaluating the quality of the segmentation result and a criterion based on calculating the clustering error on a large set of fragmented images were implemented. The study also included image pre-processing techniques. The study showed that the technology highlighted the key objects with an error of 26%; the equalization procedure reduced this error to 14%. X-ray image clustering errors for fragmentation windows of 12x12, 24x24, and 36x36 pixels are presented.

KEYWORDS

Lung X-Ray Images, Image Processing, Texture Analysis, Region-of-Interest Selection Technique
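
To make the fragmentation-and-clustering step concrete, here is a minimal sketch assuming a grayscale X-ray image as a NumPy array, simple statistical texture features, and scikit-learn's k-means; the window sizes follow the abstract, while the feature set is illustrative rather than the paper's own.

```python
# Minimal sketch: fragment an X-ray image and cluster fragments by texture features.
import numpy as np
from sklearn.cluster import KMeans

def fragment_features(img, win):
    """Split the image into win x win fragments and compute simple texture features."""
    h, w = img.shape
    feats, coords = [], []
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            patch = img[y:y+win, x:x+win].astype(float)
            feats.append([patch.mean(), patch.std(),
                          np.abs(np.diff(patch, axis=0)).mean(),   # vertical contrast
                          np.abs(np.diff(patch, axis=1)).mean()])  # horizontal contrast
            coords.append((y, x))
    return np.array(feats), coords

def cluster_fragments(img, win=24, k=2):
    """Label each fragment (e.g. lung vs. background) via k-means on its features."""
    feats, coords = fragment_features(img, win)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    return labels, coords
```

Running this for win=12, 24, and 36 reproduces the kind of window-size comparison the abstract reports, with histogram equalization applied beforehand as the pre-processing step.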

Convolutional Neural Network Application for Analysis of Fundus Images

Nataly Ilyasova1,2 and Alexander Shirokanev1,2, 1IPSI RAS - branch of the FSRC «Crystallography and Photonics» RAS, Samara, Russia and 2Samara National Research University, Samara, Russia

ABSTRACT

The article proposes a new method for analyzing eye fundus images based on a convolutional neural network (CNN). A CNN architecture was constructed, followed by network training on a balanced dataset composed of four classes of images: thick blood vessels, thin blood vessels, healthy areas, and exudate areas. Segmentation of fundus images was then performed using the CNN. Since exudates are a primary target of laser coagulation surgery, the segmentation error was calculated on the exudate class, amounting to 5%. In the course of this research, the HSL color system was found to be the most informative; using it, the segmentation error was reduced to 3%.

KEYWORDS

Convolutional Neural Networks, Fundus Image, Diabetic Retinopathy, Exudates, Laser Coagulation, Image Processing, Image Segmentation
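
A minimal sketch of a four-class patch classifier in the spirit of the abstract follows, assuming fixed-size fundus patches; the actual CNN architecture is not specified in the abstract, so this layer stack is illustrative. OpenCV's HLS conversion is used here as a stand-in for the HSL color system the abstract mentions.

```python
# Minimal sketch: four-class fundus patch classifier (illustrative architecture).
import cv2
import numpy as np
from tensorflow.keras import layers, models, Input

def to_hls(patch_bgr):
    """Convert a BGR patch to the HLS color space, scaled to [0, 1]."""
    return cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HLS).astype("float32") / 255.0

def build_cnn(patch_size=32, num_classes=4):
    # Classes: thick vessels, thin vessels, healthy areas, exudate areas.
    return models.Sequential([
        Input((patch_size, patch_size, 3)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Segmentation then amounts to classifying every patch of the fundus image and stitching the per-patch labels back into a class map.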

2D Image Features Detector and Descriptor Selection Expert System

Ibon Merino1, Jon Azpiazu1, Anthony Remazeilles1, and Basilio Sierra2, 1Industry and Transport, Tecnalia Research and Innovation, Donostia-San Sebastian, Spain and 2Computer Science and Artificial Intelligence, University of the Basque Country UPV/EHU, Donostia-San Sebastian, Spain

ABSTRACT

Detection and description of keypoints from an image is a well-studied problem in Computer Vision. Some methods, such as SIFT, SURF, or ORB, are computationally very efficient. This paper proposes a solution for a particular case study: object recognition of industrial parts based on hierarchical classification. Reducing the number of instances per classifier leads to better performance, which is precisely what hierarchical classification aims to achieve. We demonstrate that this method performs better than using a single method such as ORB, SIFT, or FREAK, although it is somewhat slower.

KEYWORDS

Computer vision, Descriptors, Feature-based object recognition, Expert system
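
For reference, here is a minimal sketch of the kind of detector/descriptor computations the expert system selects between, using OpenCV; the hierarchical classifier itself is not reproduced, and the input image path is a placeholder.

```python
# Minimal sketch: detecting and describing keypoints with two candidate methods.
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical industrial-part image

orb = cv2.ORB_create()
sift = cv2.SIFT_create()  # in older OpenCV versions SIFT lives in opencv-contrib

kp_orb, des_orb = orb.detectAndCompute(img, None)
kp_sift, des_sift = sift.detectAndCompute(img, None)

# Brute-force matching, e.g. to compare two views of the same part.
# Hamming distance suits ORB's binary descriptors; SIFT would use NORM_L2.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
```

An expert system in this setting picks, per recognition task, whichever detector/descriptor pair the hierarchy predicts will perform best.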

Intent Modelling from Natural Language

Constantina Nicolaou1, Amal Vaidya1, Fabon Dzogang2, David Wardrope1,2 and Nikos Konstantinidis1, 1Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK and 2ASOS AI, Greater London House, Hampstead Road, London NW1 7FB, UK

ABSTRACT

We study the performance of customer intent classifiers designed to predict the most popular intent received through ASOS customer care, namely “Where is my order?”. We conduct extensive experiments to compare the accuracy of two popular classification models: logistic regression via N-grams that account for sequences in the data, and recurrent neural networks that extract sequential patterns automatically. A Mann-Whitney U test indicated that the F1 score on a representative sample of held-out labelled messages was greater for linear N-gram classifiers than for recurrent neural network classifiers (M1=0.828, M2=0.815; U=1,196, P=1.46e-20), unless all neural layers, including the word representation layer, were trained jointly on the classification task (M1=0.831, M2=0.828, U=4,280, P=8.24e-4). Overall, our results indicate that using simple linear models in modern AI production systems is a judicious choice unless the necessity for higher accuracy significantly outweighs the cost of much longer training times.

KEYWORDS

Natural Language Processing, Intent Classification, Bag-of-Words, Recurrent Neural Networks
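
The linear baseline and the statistical comparison are both easy to sketch. Below is a minimal example of an N-gram logistic-regression intent classifier with a Mann-Whitney U test over per-run F1 scores; the ASOS data and hyperparameters are not public, so the texts, labels, and score samples are placeholders.

```python
# Minimal sketch: N-gram logistic regression baseline plus Mann-Whitney U comparison.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.stats import mannwhitneyu

texts = ["where is my order", "i want to return this item"]   # placeholder data
labels = ["where_is_my_order", "returns"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),   # unigrams to trigrams capture word order
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

# Hypothetical per-sample F1 scores for the two classifiers under comparison.
f1_ngram, f1_rnn = [0.83, 0.82, 0.84], [0.81, 0.82, 0.80]
u_stat, p_value = mannwhitneyu(f1_ngram, f1_rnn, alternative="two-sided")
print(u_stat, p_value)
```

The appeal of the linear model in production is visible even in this sketch: the whole pipeline trains in seconds, whereas jointly training an RNN and its word representations takes far longer.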

#Brexit vs. #StopBrexit: What Is Trendier? An NLP Analysis

Marco A. Palomino1 and Adithya Murali2, 1School of Computing, Electronics and Mathematics, University of Plymouth, Drake Circus, Plymouth, PL4 8AA, United Kingdom and 2School of Computing Science and Engineering, Vellore Institute of Technology, Vellore - 632 014, Tamil Nadu, India

ABSTRACT

Online trends have established themselves as a new method of information propagation that is reshaping journalism in the digital age. Services such as Google Trends and Twitter Trends have recently attracted a great deal of attention. Taking election campaigns as an example, journalists, campaign managers and political analysts have looked into trends to determine candidates’ popularity and predict likely election outcomes. Trend discovery has therefore become a fundamental aid to monitor and summarise information. While previous research on trend discovery has focused on the dynamics of data streams, we argue that sentiment analysis—the classification of human emotion expressed in text—can enhance existing algorithms for trend discovery. By highlighting topics that are strongly polarised, sentiment analysis can offer further insight into the influence of users who are involved in a trend, and how other users adopt such a trend. As a case study, we have investigated a highly topical subject: Brexit, the withdrawal of the United Kingdom from the European Union. We retrieved an experimental corpus of publicly available tweets referring to Brexit and used them to test a proposed algorithm to identify trends. We validate the efficiency of the algorithm and gauge the sentiment expressed on the captured trends to confirm that highly polarised data ensures the emergence of trends.

KEYWORDS

Text Mining, Twitter, Sentiment Analysis, Information Retrieval
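
As a minimal sketch of the sentiment-gauging step, the snippet below scores trend-related tweets with NLTK's VADER analyzer as a stand-in sentiment model; the paper's own trend-discovery algorithm is not reproduced here, and the tweets are placeholders.

```python
# Minimal sketch: measuring how polarised the tweets around each hashtag are.
from collections import Counter
from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download("vader_lexicon")

tweets = ["#Brexit means Brexit!", "#StopBrexit before it is too late"]  # placeholders

sia = SentimentIntensityAnalyzer()
hashtag_polarity = Counter()
for tweet in tweets:
    score = sia.polarity_scores(tweet)["compound"]      # compound score in [-1, 1]
    for token in tweet.split():
        if token.startswith("#"):
            hashtag_polarity[token.lower()] += abs(score)  # accumulate polarisation strength

# Under the paper's hypothesis, strongly polarised hashtags are candidate trends.
print(hashtag_polarity.most_common())
```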

Flexible Log File Parsing Using Hidden Markov Models

Nadine Kuhnert1,2 and Andreas Maier1, 1Pattern Recognition, Friedrich-Alexander University Erlangen-Nuremberg, Germany and 2Siemens Healthcare GmbH, Erlangen, Germany

ABSTRACT

We aim to model unknown log file processing. As the content of log files often evolves over time, we establish a dynamic statistical model which learns and adapts processing and parsing rules. First, we limit the amount of unstructured text by focusing only on those frequent patterns which lead to the desired output table, similar to Vaarandi [10]. Second, we transform the found frequent patterns and the corresponding output, the parsed table, into a Hidden Markov Model (HMM). We use this HMM as a specific yet flexible representation of a pattern for log file processing. As changes in the raw log file distort the learned patterns, we design the model to adapt automatically in order to maintain high-quality output. After training our model on one system type and applying it, together with the resulting parsing rule, to a different system with slightly different log file patterns, we achieve an accuracy of over 99%.

KEYWORDS

Hidden Markov Models, Parameter Extraction, Parsing, Text Mining, Information Retrieval
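
The core idea, treating a log line as an HMM whose hidden states distinguish fixed template tokens from variable parameters, can be sketched directly. The paper learns its model from frequent patterns, whereas the transition and emission probabilities below are hand-set, illustrative values.

```python
# Minimal sketch: Viterbi decoding of log tokens into template vs. parameter states.
import numpy as np

states = ["template", "parameter"]
start = np.log([0.8, 0.2])
trans = np.log([[0.7, 0.3],    # template  -> template / parameter
                [0.4, 0.6]])   # parameter -> template / parameter

def emit_logprob(token):
    """Heuristic emission: tokens containing digits look like parameters."""
    p_param = 0.9 if any(ch.isdigit() for ch in token) else 0.1
    return np.log([1.0 - p_param, p_param])

def viterbi(tokens):
    """Return the most likely template/parameter labelling of a log line."""
    v = start + emit_logprob(tokens[0])
    back = []
    for tok in tokens[1:]:
        scores = v[:, None] + trans          # scores[i, j]: best path ending i -> j
        back.append(scores.argmax(axis=0))   # best predecessor for each state j
        v = scores.max(axis=0) + emit_logprob(tok)
    path = [int(v.argmax())]
    for bp in reversed(back):                # backtrack through stored predecessors
        path.append(int(bp[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi("scan started at 2019-09-28 duration 42ms".split()))
```

Tokens decoded as parameters become the extracted table values; when log formats drift, re-estimating the probabilities lets the same machinery keep parsing.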

Including Natural Language Processing and Machine Learning into Information Retrieval

Piotr Malak, Institute of Information Science and Book Studies, University of Wroclaw, Poland

ABSTRACT

In this paper, we discuss the results of preliminary, but promising, research on including some Natural Language Processing and Machine Learning approaches in Information Retrieval. Classical IR uses indexing and term weighting in order to increase the pertinence of answers given to users' queries. Such an approach allows for meaning matching, i.e., matching all keywords of the same or very similar meaning as expressed in the user's query. In most cases this approach is sufficient to fulfil users' information needs. However, indexing and retrieving information over professional-language texts brings new challenges as well as new possibilities. One of the challenges is a different grammar, which requires adjusting NLP tools to a given professiolect. One of the possibilities is detecting the context in which an indexed term occurs in the text. In our research, we attempt to answer the question of whether a Natural Language Processing (NLP) approach combined with supervised Machine Learning (ML) is capable of detecting contextual features of professional-language texts.

KEYWORDS

Enhanced Information Retrieval, Contextual IR, NLP, Machine Learning.
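
One way to frame context detection is as supervised classification of term occurrences: each occurrence of an indexed term is represented by its surrounding words and labelled with a context class. The sketch below illustrates this framing; the features, window size, labels, and training snippets are all placeholders, not the paper's actual setup.

```python
# Minimal sketch: classifying the context in which an indexed term occurs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def occurrence_window(tokens, i, width=3):
    """Return the words around position i as a context snippet for classification."""
    return " ".join(tokens[max(0, i - width): i + width + 1])

# Placeholder training data: context snippets around the term "acute" and their labels.
snippets = ["patient presents acute pain in", "acute angle of the triangle"]
contexts = ["medical", "geometry"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(snippets, contexts)
print(clf.predict(["reported acute pain after surgery"]))
```

Attaching the predicted context to the index entry is what turns plain keyword matching into the contextual IR the abstract describes.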

An Adaptive and Smart System for Parental Control on Digital Games

Clark Ren, Yu Sun and Fangyan Zhang, California State Polytechnic University, USA

ABSTRACT

As more and more students gain access to computers to aid them in their studies, they also gain access to machines that can play games, which can negatively affect a student's academic performance. However, it is also argued that playing video games could positively affect a student's academic performance. To address both sides of the argument, we propose an app that limits the amount of time a student can spend playing games without completely removing the ability to play.

KEYWORDS

Parental Control, Smart System, Digital Games, Web Service
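
The core time-limiting rule is simple to sketch. The abstract does not describe the app's policy engine or platform hooks, so the daily allowance and the process check below are hypothetical placeholders.

```python
# Minimal sketch: enforcing a daily game-time allowance by periodic polling.
import time

DAILY_LIMIT_MINUTES = 60          # hypothetical parent-configured allowance
played_today = 0.0                # minutes of game time accumulated today

def game_is_running():
    """Placeholder: a real app would inspect the OS process list here."""
    return True

while played_today < DAILY_LIMIT_MINUTES:
    time.sleep(60)                # poll once a minute
    if game_is_running():
        played_today += 1.0

print("Daily limit reached: blocking further game sessions.")
```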

Adapting Service to User Context

Mohamed Sbai and Hajer Taktak, Department of Computer Science, Faculty of Sciences, Tunisia

ABSTRACT

The diversity of the terminals used to access resources (PC, PDA, mobile phone, etc.) over several types of networks (Wi-Fi, local, etc.) generates a growing need to dynamically adapt services to the user's context. In this article, we present an architecture that aims to adapt both the content and the presentation of a service to the user's context. Content adaptation is realized through a new data modeling methodology. This methodology takes into account the different structures associated with the same data, which allows the data to be represented from several points of view; it is therefore possible to customize representations according to various uses and contexts. Presentation adaptation is based on a process of automatically generating the complete code of the service's interfaces. The context in our architecture is represented by a generic model of the user and the service. Our adaptation process is illustrated through a scenario.

KEYWORDS

User Context, Adaptation, Dynamic, Content, Presentation, Service
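
A minimal sketch of such a generic context model, and of selecting a presentation variant from it, is shown below; all fields and variant names are illustrative, since the abstract only outlines the generic user/service model.

```python
# Minimal sketch: a generic user context model driving presentation selection.
from dataclasses import dataclass

@dataclass
class UserContext:
    device: str        # e.g. "pc", "pda", "mobile"
    network: str       # e.g. "wifi", "local"
    screen_width: int  # pixels

def select_presentation(ctx: UserContext) -> str:
    """Pick a presentation variant of the same service for the given context."""
    if ctx.device == "mobile" or ctx.screen_width < 480:
        return "compact-view"      # reduced content, single-column layout
    if ctx.network == "local":
        return "rich-view"         # full content, media included
    return "standard-view"

print(select_presentation(UserContext(device="mobile", network="wifi", screen_width=360)))
```

Keeping the context model separate from the variants is what lets the same data be rendered from several points of view, as the abstract describes.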


Contact Us: cst.conf@yahoo.com