7th International Conference on Artificial Intelligence and Soft Computing (AIS 2021)

April 24-25, 2021, Copenhagen, Denmark

Accepted Papers

The Financial Market and the price of copper explained from Covid-19 and policy tweets: an application to the case of Chile


Rodrigo Alfaro1, Matías Moreno-Faguett2 and Jennifer Peña3, 1Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile, 2Universidad de Talca, Curicó, Chile, 3Central Bank of Chile, Santiago, Chile

ABSTRACT

This paper analyses the potential of textual data from Twitter as a source that summarizes the main national and international news explaining local financial markets and commodity prices. As an example, we consider the Chilean financial market and the price of copper in the context of the COVID-19 pandemic. We use tweets from Chile between January and August 2020 at a daily frequency and apply topic modelling to extract the main topics. In the period of analysis, two topics reflect the events that have affected the Chilean economy: the COVID-19 pandemic and the political situation. Our results suggest that both topics generate uncertainty in the market that translates into a reduction in the Indice de Precio Selectivo de Acciones (IPSA) and the price of copper, as well as an increase in volatility.
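The topic-modelling step described above can be sketched as follows. This is a minimal illustration using scikit-learn's LDA on a small hypothetical corpus; the authors' actual tweet data, preprocessing, and model choice are not shown in the abstract and are assumptions here.

```python
# Minimal sketch of extracting two topics from daily tweets with LDA,
# assuming a tiny hypothetical corpus (not the paper's data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "covid cases rise lockdown hospital pandemic",
    "quarantine covid vaccine health pandemic",
    "congress votes constitution reform political crisis",
    "government policy protest political reform",
    "copper price market stocks ipsa fall",
    "ipsa market volatility copper exports",
]

# Bag-of-words representation of the corpus.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(tweets)

# Fit LDA with two components, mirroring the paper's two dominant topics
# (the pandemic and the political situation).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # shape: (n_tweets, 2)

# Each row is a probability distribution over the two topics.
for text, dist in zip(tweets, doc_topics):
    print(f"{dist.round(2)}  {text[:40]}")
```

The per-day topic weights produced this way could then be aggregated into daily series and related to the IPSA and copper returns, as the abstract describes.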

KEYWORDS

Twitter, Topic Modeling, COVID-19, Financial Market, Copper Price.


Streaming of GNSS Data from the GLONASS Satellite Navigation System


Liliana Ibeth Barbosa-Santillán, Juan Jaime Sánchez-Escobar, Luis Francisco Barbosa-Santillán, Amilcar Meneses-Viveros, Zhan Gao, Julio César Roa-Gil and Gabriel A. León-Paredes, University of Guadalajara, Computer Science, Guadalajara, México; Technical and Industrial Teaching Center, Research Department, Guadalajara, México; Universidad Politécnica Salesiana, Cuenca, Ecuador; CINVESTAV, Computer Science Department, México City, México; Nantong University, School of Computer Science and Technology, Nantong, China; ITESM, Computer Science, Zapopan, México

ABSTRACT

The Big Data phenomenon has driven a revolution in the field of data and has provided competitive advantages in the domains of business and science through data analysis. By Big Data, we mean the large volumes of information generated at high speed from a variety of sources, including social networks, sensors in various devices, and satellites, among others. In real applications, one of the main problems is the extraction of accurate information from large volumes of unstructured information during the streaming process. Here, we extract information from data obtained from the GLONASS satellite navigation system. The knowledge acquired in discovering the geolocation of an object has been essential to satellite systems. However, many of these findings are affected by localization errors and very large data volumes. The Global Navigation Satellite System (GNSS) combines several existing systems for navigation and geospatial positioning, including the Global Positioning System, GLONASS, and Galileo. We focus on GLONASS because it has a constellation of 31 satellites. The difficulties of our research are (a) handling the amount of data that GLONASS produces in an efficient way and (b) accelerating the data pipeline with parallelization and dynamic access to data, because only part of these data is structured. The main contribution of this work is the streaming of GNSS data from the GLONASS satellite navigation system for GNSS data processing and dynamic management of metadata. We achieve a three-fold improvement in performance when the program runs with 8 and 10 threads.
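The parallel streaming pipeline described above can be sketched with the standard library alone. The record format and the parse step below are hypothetical stand-ins for the GLONASS observation processing, and the 8-worker count echoes the thread range the abstract reports; none of this is the authors' actual implementation.

```python
# Minimal sketch of a multi-threaded streaming pipeline for satellite
# observation records, using only Python's standard library.
import queue
import threading

NUM_WORKERS = 8   # the abstract reports best results with 8 to 10 threads
SENTINEL = None

def parse(record: str) -> tuple:
    """Hypothetical parse step: split a 'satellite,epoch,value' line."""
    sat, epoch, value = record.split(",")
    return sat, int(epoch), float(value)

def worker(in_q: queue.Queue, out: list, lock: threading.Lock) -> None:
    while True:
        record = in_q.get()
        if record is SENTINEL:
            in_q.task_done()
            break
        parsed = parse(record)
        with lock:            # protect the shared result list
            out.append(parsed)
        in_q.task_done()

def stream(records):
    in_q: queue.Queue = queue.Queue(maxsize=1024)
    results: list = []
    lock = threading.Lock()
    threads = [threading.Thread(target=worker, args=(in_q, results, lock))
               for _ in range(NUM_WORKERS)]
    for t in threads:
        t.start()
    for r in records:         # producer: feed the stream
        in_q.put(r)
    for _ in threads:         # one sentinel per worker to shut down cleanly
        in_q.put(SENTINEL)
    for t in threads:
        t.join()
    return results

# Simulated stream of observation records from a 31-satellite constellation.
data = [f"R{i % 31 + 1:02d},{i},{i * 0.5}" for i in range(100)]
parsed = stream(data)
print(len(parsed))  # -> 100
```

The bounded queue provides back-pressure between the producer and the parsing workers, which is the usual way to keep memory bounded when the input rate exceeds the processing rate.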

KEYWORDS

GLONASS, streaming, extraction, satellite data, observation files, metadata.


Event-driven timeseries analysis and the comparison of public reactions on COVID-19


Md. Khayrul Bashar, Tokyo Foundation for Policy Research, Tokyo 106-6234, Japan

ABSTRACT

The rapid spread of COVID-19 has affected human lives throughout the globe. Governments of different countries have taken various measures, but how these measures affected people's lives is not clear. In this study, a rule-based model and a machine-learning-based model are applied to answer this question using public tweets from Japan, the USA, the UK, and Australia. Two polarity time series (meanPol and pnRatio) and two events, namely "lockdown or emergency (LED)" and "the economic support package (ESP)", are considered. Statistical testing on the sub-series around the LED and ESP events showed positive effects on people in the UK and Australia, and in the USA and UK, respectively. Manual validation with the relevant tweets agrees with the statistical results. A case study with Japanese tweets using supervised logistic regression classifies tweets into health-worry, economy-worry, and other classes with 83.11% accuracy. Predicted tweets around the events reconfirm the statistical outcomes.
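The two polarity series named above can be illustrated as follows. The per-tweet polarity scores and the exact definition of pnRatio (here, the positive-minus-negative share of tweets per day) are assumptions for illustration; the paper's own scoring models are not reproduced.

```python
# Minimal sketch of meanPol and pnRatio, computed from hypothetical
# per-tweet polarity scores in [-1, 1] grouped by day.
from collections import defaultdict

# (day, polarity) pairs for a handful of hypothetical tweets.
scored = [
    ("2020-03-01", 0.6), ("2020-03-01", -0.2), ("2020-03-01", 0.1),
    ("2020-03-02", -0.5), ("2020-03-02", -0.3), ("2020-03-02", 0.4),
]

by_day = defaultdict(list)
for day, pol in scored:
    by_day[day].append(pol)

mean_pol = {}   # average polarity per day
pn_ratio = {}   # (positives - negatives) / total per day (assumed definition)
for day, pols in by_day.items():
    mean_pol[day] = sum(pols) / len(pols)
    pos = sum(1 for p in pols if p > 0)
    neg = sum(1 for p in pols if p < 0)
    pn_ratio[day] = (pos - neg) / len(pols)

print(round(mean_pol["2020-03-01"], 4))  # 0.1667
print(round(pn_ratio["2020-03-02"], 4))  # -0.3333
```

Sub-series of these daily values around an event date would then be compared by statistical tests, as the abstract describes.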

KEYWORDS

COVID-19, lockdown, economic support, public reactions, polarity timeseries, statistical analysis, machine learning, sentiment comparison.


Covid-19 - The emotional side effect in Monterrey, Mexico


Jesus Angel Salazar Marcatoma, School of Engineering and Science, Tecnologico de Monterrey, Monterrey, N.L, Mexico

ABSTRACT

In the last few months, the world has experienced the worst health crisis of modern times. The surge of the COVID-19 virus has forced people to change their way of living, which has had a largely negative impact on their behavior. The aim of this research is to determine the emotional effect that COVID-19 has had on the city of Monterrey, Mexico, through the analysis of virus-related comments published on the social platform Twitter.

KEYWORDS

COVID-19, Virus, Emotion Detection, Sentiment Analysis, Social media.


Comparing Cognitive Efforts between Portuguese-Chinese Translation and Post-editing: The process of construing cohesive chains


Wang Tianlong1, Ana Luisa Varani Leal2 and Igor A. Lourenço da Silva3, 1Department of Portuguese, Faculty of Arts and Humanities, University of Macau, China, 2Department of Portuguese, Faculty of Arts and Humanities, University of Macau, China, 3Institute of Letters and Linguistics, Federal University of Uberlandia, Brazil

ABSTRACT

This article reports on the results of an experimental study aimed at analyzing and comparing the cognitive effort involved in the Portuguese-Chinese translation and post-editing processes, more specifically in construing cohesive chains. The main purpose is to deepen the understanding of the translation process for the Portuguese-Chinese language pair. A questionnaire, retrospective verbal protocols, eye tracking, and keylogging were used as methodological tools. Findings for the translation process showed that the number of eye and keyboard movements was directly related to the difficulty of processing some of the cohesive items and construing the cohesive chains in the text. In addition, post-editing the outputs of the PCT machine translation system involved more cognitive effort than human translation.

KEYWORDS

Translation Process Research, Translation, Post-Editing, Cognitive Effort, Cohesive Chain, Portuguese-Chinese.


Data Transformer for Anomalous Trajectory Detection


Wen-Jiin Tsai and Hsuan-Jen Pan, Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan

ABSTRACT

Anomaly detection is an important task in many traffic applications. Methods based on convolutional neural networks reach state-of-the-art accuracy; however, they typically rely on supervised training with large labeled datasets, and the trained network is only applicable to the intersection from which the training data were collected. Considering that anomaly data are generally hard to obtain, we present data transformation methods for converting data obtained at one intersection for use at other intersections, mitigating the effort of training data collection. We demonstrate our methods on the task of anomalous trajectory detection and leverage an unsupervised method that requires only normal trajectories for network training. We propose a General model and a Universal model for our transformation methods. The General model focuses on saving data collection effort, while the Universal model aims at training a universal network that can be used at other intersections. We evaluated our methods on a dataset of trajectories collected from the GTA V virtual world. The experimental results show that, with a significant reduction in data collection and network training effort, our methods can still achieve state-of-the-art accuracy for anomalous trajectory detection.

KEYWORDS

Anomaly detection, trajectory, data transformation, variational auto-encoder (VAE).


Informative Multimodal Unsupervised Image-to-Image Translation


Tien Tai Doan1, Guillaume Ghyselinck2 and Blaise Hanczar1, 1Dental Monitoring, Paris, France, 2IBISC Laboratory, University of Evry Val d’Essonne, Evry-Courcouronnes, France

ABSTRACT

We propose a new method of multimodal image translation, called InfoMUNIT, which is an extension of the state-of-the-art method MUNIT. Our method allows controlling the style of the generated images and improves their quality and diversity. It learns to maximize the mutual information between a subset of the style code and the distribution of the output images. Experiments show that our model can not only translate one image from the source domain to multiple images in the target domain but also explore and manipulate features of the outputs without annotation. Furthermore, it achieves superior diversity and competitive image quality compared to state-of-the-art methods in multiple image translation tasks.

KEYWORDS

Image-to-image translation, multimodal, mutual information, GANs, manipulating features, disentangled representation.


Image Classifiers for Network Intrusions


David A. Noever and Samantha E. Miller Noever, PeopleTec, Inc., Huntsville, Alabama, USA

ABSTRACT

This research recasts the network attack dataset from UNSW-NB15 as an intrusion detection problem in image space. Using one-hot encodings, the resulting grayscale thumbnails provide a quarter-million examples for deep learning algorithms. Applying MobileNetV2's convolutional neural network architecture, the work demonstrates 97% accuracy in distinguishing normal from attack traffic. Further refinement into 9 individual attack families (exploits, worms, shellcodes) shows an overall 56% accuracy. Using feature importance ranking, a random forest solution on feature subsets identifies the most important source-destination factors and shows that the least important ones are mainly obscure protocols. The dataset is available on Kaggle.
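The encoding step described above can be sketched as follows. The feature layout and the 8x8 thumbnail size are illustrative assumptions, not the paper's exact pixel arrangement.

```python
# Minimal sketch of turning a one-hot-encoded network-flow record into a
# small grayscale thumbnail, in the spirit of the abstract.
import numpy as np

def record_to_thumbnail(features: np.ndarray, side: int = 8) -> np.ndarray:
    """Pad or truncate a binary feature vector and reshape it into a
    side x side grayscale image with pixel values 0 or 255."""
    vec = np.zeros(side * side, dtype=np.uint8)
    n = min(features.size, vec.size)
    vec[:n] = features[:n]
    return (vec.reshape(side, side) * 255).astype(np.uint8)

# Hypothetical one-hot encoding of a single flow (protocol, service, flags...).
flow = np.array([1, 0, 0, 1, 1, 0, 1] + [0] * 50, dtype=np.uint8)
img = record_to_thumbnail(flow)
print(img.shape)             # (8, 8)
print(img.max(), img.min())  # 255 0
```

Thumbnails of this form can be fed directly to an image classifier such as MobileNetV2 after the usual channel replication and resizing.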

KEYWORDS

Neural Networks, Computer Vision, Image Classification, Intrusion Detection, MNIST Benchmark.


A Review on Image Interpolation based Data Hiding Techniques


Ababil Naghiyeva, Department of Computer Engineering and Telecommunication, Azerbaijan Technological University, Ganja, Azerbaijan

ABSTRACT

The last decades of our century are characterised by an increasing number of digital cameras and mobile communication devices, and accordingly by a growing flow of multimedia information circulating over the Internet. Potential users of global computer networks have gained access to all kinds of information sources and can retrieve and modify those data. In conjunction with this, a new problem has appeared: the secure protection of stored and transmitted information. In this work, a steganographic information-security methodology for data hiding using image interpolation is reviewed. A methodology for improving the quality of an image to be used as a container for a secret message, by utilising interpolation techniques, is also given.
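The general idea behind interpolation-based hiding can be sketched as follows: upscale a cover image by interpolation, then embed secret bits only in the interpolated pixels, leaving the original samples untouched so the cover can be recovered. This is a simplified illustration of the family of techniques reviewed, not any specific published scheme.

```python
# Minimal sketch of interpolation-based data hiding: a 2x nearest-neighbour
# upscale, with secret bits written into the LSBs of interpolated pixels only.
import numpy as np

def embed(cover: np.ndarray, bits: list) -> np.ndarray:
    h, w = cover.shape
    stego = np.repeat(np.repeat(cover, 2, axis=0), 2, axis=1)  # 2x upscale
    it = iter(bits)
    for i in range(2 * h):
        for j in range(2 * w):
            if i % 2 == 0 and j % 2 == 0:
                continue  # original sample: keep intact for cover recovery
            b = next(it, None)
            if b is None:
                return stego
            stego[i, j] = (stego[i, j] & 0xFE) | b  # write bit into the LSB
    return stego

def extract(stego: np.ndarray, n_bits: int) -> list:
    bits = []
    for i in range(stego.shape[0]):
        for j in range(stego.shape[1]):
            if i % 2 == 0 and j % 2 == 0:
                continue  # skip original samples
            bits.append(int(stego[i, j] & 1))
            if len(bits) == n_bits:
                return bits
    return bits

cover = np.array([[100, 120], [140, 160]], dtype=np.uint8)
message = [1, 0, 1, 1, 0, 1]
stego = embed(cover, message)
print(extract(stego, len(message)))  # [1, 0, 1, 1, 0, 1]
```

Because the original samples are never modified, the receiver can both recover the message and reconstruct the exact cover image, which is the usual appeal of this class of schemes.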

KEYWORDS

Image steganography, image processing, data hiding, image interpolation.


Federated Identity Management (FIDM) Systems Limitation and Solutions


Maha Aldosary and Norah Alqahtani, Department of Computer Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh, KSA

ABSTRACT

An efficient identity management system has become one of the fundamental requirements for ensuring the safe, secure, and transparent use of identifiable information and attributes. FIdM allows users to distribute their identity information across security domains, which increases the portability of their digital identities. However, it also raises new architectural challenges and significant security and privacy issues that need to be mitigated. In this paper, we present the limitations and risks of Federated Identity Management systems and discuss the results and proposed solutions.

KEYWORDS

Federated Identity Management, Identity Management, Limitations, Identity Federation.


Role-Based Embedded Domain-Specific Language for Collaborative Multi-Agent Systems through Blockchain Technology


Orcun Oruc, TU Dresden, Software Technology Group, Nöthnitzer Straße 46, 01187, Dresden

ABSTRACT

Multi-agent systems have grown in complexity over the past few decades. To create multi-agent systems, developers should understand design, analysis, and implementation together. Agent-oriented software engineering applies best practices mainly through software agents with abstraction levels in multi-agent systems. However, these abstraction levels take a considerable amount of time due to the design complexity and the difficulty of the analysis phase that precedes implementation. Moreover, the trust and security of multi-agent systems have rarely been detailed in the design and analysis phases, even though implementing trust and security on tamper-proof data is necessary for developers. Although object-oriented programming is a natural way to implement complex software agents, a major problem is that the object-oriented approach still suffers from complex process interaction and the burden of event-goal combinations to represent the actions of multiple agents. Designated roles, with their relationships, invariants, and constraints, can be constructed on top of blockchain contracts between agents. Furthermore, when new agents join an agent network, decentralization and transparency are two key properties through which agents can exchange trusted information and reach consensus on roles. This study takes software agent development as a whole, covering analysis, design, and implementation with the role-object pattern in smart contract applications. In this paper, we propose a role-based domain-specific language that enables smart contracts usable in agent-oriented frameworks. We also describe the methodology, the research results, and a case study. Finally, we summarize the findings and highlight the main research points in the conclusion.

KEYWORDS

Software agents, Domain-specific languages, Blockchain technology, Smart contracts, Role-based programming languages.


A Study of Regression Testing for the Trade Me Website


Kenil Manishkumar Patel and Shahid Ali, Department of Information Technology, AGI Institute, Auckland, New Zealand

ABSTRACT

Regression testing plays a critical role in verifying the functionality of a product. Trade Me is a New Zealand-based website, one of the major sites in New Zealand for online buying and selling. The aim of this research is to verify the functionality of the Trade Me website after new features are introduced. An automated regression suite is used to execute test scripts, which helped the company save time and cost compared to manual testing. The automated regression test suite also helped to prioritize test cases, which are designed in such a way as to maximize fault detection. For the research analysis, the Scrum methodology is used to meet the demands of software development companies and to increase client satisfaction.

KEYWORDS

Regression testing, automation testing, scrum methodology, TestNG, Selenium.


Gated Convolution for Sparse Samples-Based Depth Estimation


Tao Zhao, Shuguo Pan, Hui Zhang, and Chao Sheng, School of Instrument Science and Engineering, Southeast University, Nanjing, China

ABSTRACT

Depth estimation from a single RGB image with a sparse set of depth measurements has been proven to be an effective method for predicting dense, high-precision depth images. However, the design of most networks follows the RGB-based depth estimation architecture and mainly focuses on the perspective of sensor fusion, which leads to insufficient utilization of the sparse information. To further improve depth estimation accuracy, we extend the original U-net with gated convolutions to extract more useful information from the sparse depth measurements. Experimental results verify the effectiveness of gated convolutions with the U-net architecture on the NYUv2 dataset.
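The gated convolution building block mentioned above can be sketched as a feature convolution modulated elementwise by a sigmoid-gated mask convolution. The weights, input size, and single-channel 3x3 kernels below are illustrative assumptions, not the paper's network.

```python
# Minimal sketch of a gated convolution: output = conv(x) * sigmoid(conv_gate(x)).
import numpy as np

def conv2d(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2D cross-correlation for a single channel."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def gated_conv(x: np.ndarray, w_feat: np.ndarray, w_gate: np.ndarray) -> np.ndarray:
    feat = conv2d(x, w_feat)                          # candidate features
    gate = 1.0 / (1.0 + np.exp(-conv2d(x, w_gate)))   # sigmoid gate in (0, 1)
    return feat * gate    # the gate can suppress unreliable (e.g. sparse) regions

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))        # e.g. a sparse-depth feature map
w_feat = rng.standard_normal((3, 3))
w_gate = rng.standard_normal((3, 3))
y = gated_conv(x, w_feat, w_gate)
print(y.shape)  # (6, 6)
```

Because the gate is learned, the network can attenuate positions where the sparse depth input carries no measurement, which is the intuition for using gated rather than plain convolutions here.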

KEYWORDS

Depth estimation; Gated convolution; Sparse Samples.


UP-BAC - User Preference Based Access Control


Shawn Johnson and George Karabatis, Information Systems Department, University of Maryland, Baltimore County (UMBC), 1000 Hilltop Circle, Baltimore, MD 21250, USA

ABSTRACT

Current access control systems have inadequate methods for preventing safety leakages based on user-specified policies. This can allow malicious users to gain access to system resources that they should not have access to. To exacerbate the problem, these policies are often static and unable to adapt dynamically to fluctuations in the risk to an organization. To address this problem, we have developed a user-preference-driven, purpose-based access control model that enables policies to be adaptively enforced based on both user preferences and changes in context. In our work, a user specifies preferences that are used to build and enforce an adaptive access control matrix. This approach enables users to protect containers based on almost any purpose. We have developed a prototype and conducted experiments using precision and recall measurements to demonstrate the efficacy of our approach.

KEYWORDS

access control, purpose-based access control, identity and access management.