International Conference on Advances in Computing & Information Technologies (CACIT 2021)

December 11 ~ 12, 2021, Chennai, India

Accepted Papers

A Proposed Learning Model Based on Fog Computing Technology

Mohamed Saied M. ElSayed Amer, Lecturer, Canadian International College

ABSTRACT

Today, the internet is a primary place to access learning resources, allowing users to engage with educational environments and software applications built for learning. Distance-learning applications have accordingly been widely adopted to improve the quality of learning. In this context, approaches based on Fog Computing have been used to store and process the learning resources generated by these environments and to bring them closer to end-users: using the Cloud alone can introduce latency that is intolerable for learning applications, and the Fog Computing paradigm emerged as a solution to the delays that occur when a large number of users access an application. Moreover, requirements on the performance and availability of learning platforms need to be clearly defined in approaches that aim to avoid latency and delayed responses. This article therefore presents an experimental architecture model based on Fog Computing as an extension of Cloud services, created to facilitate the management of learning resources. The model uses JSON formatting to exchange resources over Fog Nodes and to carry out the learning processes in a distributed way for end-users. Finally, as the results of this paper show, using fog computing in a learning environment brings learning resources closer to the end-user and yields high performance.

KEYWORDS

fog computing, fog nodes, latency, learning model, learning performance.
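The edge-caching idea behind this abstract can be illustrated with a minimal sketch: a fog node that holds JSON-serialized learning resources close to the user and falls back to the cloud store only on a cache miss. All names here (`FogNode`, `cloud_store`, the resource schema) are illustrative assumptions, not the paper's actual design.

```python
import json

class FogNode:
    """Hypothetical fog node caching learning resources at the network edge."""

    def __init__(self, cloud_store):
        self.cloud_store = cloud_store  # dict standing in for the remote cloud
        self.cache = {}                 # resources already held at the edge

    def get_resource(self, resource_id):
        """Return the resource as a JSON string, caching it locally."""
        if resource_id not in self.cache:
            # Cache miss: fetch from the cloud (a higher-latency hop in practice)
            self.cache[resource_id] = self.cloud_store[resource_id]
        return json.dumps({"id": resource_id, "data": self.cache[resource_id]})

cloud = {"lesson-1": {"title": "Intro to Fog Computing", "format": "pdf"}}
node = FogNode(cloud)
payload = node.get_resource("lesson-1")  # first access populates the cache
```

Subsequent calls for the same resource would be served from `node.cache` without touching the cloud, which is the latency reduction the paper attributes to fog nodes.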


Multi-Layer Encryption Algorithm

Akula Vamsi Krishna Rao1, V.N. Aditya Datta Chivukula2, Sri Keshava Reddy Adupala2 and Abhiram Reddy Cholleti3, 1Department of Computer Science and Engineering, CMR Engineering College, Medchal, 2Department of Computer Science and Engineering, International Institute of Information Technology, Bhubaneswar, 3Department of Electronics and Telecom Engineering, International Institute of Information Technology, Bhubaneswar

ABSTRACT

In recent years, security has become a major concern for applications that must defend against attacks by intruders. Exchanging credentials in plaintext may expose them to eavesdroppers, so techniques are required to protect consumers' data from attackers. Cryptography provides a solution, allowing users to exchange data securely through the process of encryption and decryption. There are two basic families of cryptographic techniques, symmetric and asymmetric, developed to achieve a secure connection between sender and receiver. These techniques maintain privacy by converting the original message into a non-readable form and sending it over a communication channel; unauthorized parties may try to break the non-readable form, but the difficulty depends on the techniques used to encrypt the data. In this paper, we propose a quadruple encryption algorithm consisting of a novel phase-shift algorithm, AES (Advanced Encryption Standard), Twofish and RC4, making the result hard to attack by common methods.

KEYWORDS

Cryptography, AES, Two-fish, RC4, Phase-shift.
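The layering idea can be sketched with two of the named ciphers in pure Python: a textbook RC4 stage composed with a keyed "phase shift" stand-in (the abstract does not specify the phase-shift algorithm, so the rotation below is purely illustrative; the AES and Twofish layers would slot in the same way via a crypto library).

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4 (KSA + PRGA); encryption and decryption are identical."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                          # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def phase_shift(key: bytes, data: bytes, sign: int) -> bytes:
    """Hypothetical stand-in for the paper's phase-shift layer:
    a keyed byte rotation, invertible by flipping the sign."""
    return bytes((b + sign * key[i % len(key)]) % 256
                 for i, b in enumerate(data))

def encrypt(data: bytes) -> bytes:
    layered = phase_shift(b"k1", data, +1)     # layer 1: phase shift
    return rc4(b"k2", layered)                 # layer 2: RC4

def decrypt(data: bytes) -> bytes:
    return phase_shift(b"k1", rc4(b"k2", data), -1)  # undo in reverse order
```

Because each layer is independently invertible, decryption simply peels the layers off in reverse, which is the property a quadruple stack of phase-shift, AES, Twofish and RC4 would also rely on.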


Search in a Redecentralised Web

Thanassis Tiropanis1, Alexandra Poulovassilis2, Adriane Chapman1 and George Roussos2, 1School of Electronics and Computer Science, University of Southampton, UK, 2Department of Computer Science and Information Systems, Birkbeck, University of London, UK

ABSTRACT

Search has been central to the development of the Web, enabling increasing engagement by a growing number of users. Proposals for the redecentralisation of the Web, such as SOLID, aim to give individuals sovereignty over their data by means of personal online datastores (pods). However, it is not clear whether the search utilities that we currently take for granted would work efficiently in a redecentralised Web. In this paper we discuss the challenges of supporting distributed search across pods at large scale. We present a system architecture which allows research, development and testing of new algorithms for decentralised search across pods. We undertake an initial validation of this architecture through usage scenarios for decentralised search under user-defined access control and data governance constraints. We conclude with research directions for decentralised search algorithms and deployment.

KEYWORDS

Redecentralisation, search, decentralised systems.


Temporal-Sound based User Interface for Smart Home

Kido Tani and Nobuyuki Umezu, Mechanical Systems Engineering, Science and Engineering, Ibaraki University, Japan

ABSTRACT

We propose a gesture-based interface to control a smart home. Our system replaces existing physical controls with temporal sound commands captured with an accelerometer. In our preliminary experiments, we recorded the sounds generated by six different gestures (including knocking on the desk, mouse clicking, and clapping) and converted them into spectrogram images, on which classification learning was performed using a CNN. Due to differences between the microphones used, the classification results were unsuccessful for most of the data. We therefore recorded acceleration values, instead of sounds, using a smart watch. Five types of motion were performed in our experiments, and activity classification was executed on the acceleration data using Core ML, a machine learning library provided by Apple Inc. These results still leave much room for improvement.

KEYWORDS

Smart Home, Sound Categorizing, IoT, machine learning.
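The sound-to-spectrogram step described in this abstract can be sketched with a naive sliding-window DFT over a 1-D signal (a real pipeline would use an FFT library and log-mel scaling before feeding a CNN; the window and hop sizes below are arbitrary illustrative choices).

```python
import cmath
import math

def spectrogram(signal, win=8, hop=4):
    """Naive sliding-window DFT magnitude spectrogram: each window of the
    signal becomes one row of |X[k]| values over non-negative frequencies."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        window = signal[start:start + win]
        mags = []
        for k in range(win // 2):            # keep non-negative frequency bins
            acc = sum(window[n] * cmath.exp(-2j * math.pi * k * n / win)
                      for n in range(win))
            mags.append(abs(acc))
        frames.append(mags)
    return frames                            # time x frequency matrix

# A pure tone with exactly 2 cycles per 8-sample window should peak at bin k=2.
sig = [math.sin(2 * math.pi * 2 * n / 8) for n in range(32)]
spec = spectrogram(sig)
peak_bin = max(range(4), key=lambda k: spec[0][k])
```

Stacking such rows as an image is what turns a gesture's sound (or acceleration trace) into something a 2-D CNN can classify.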


Implementation of the MiniMIPS Data Path using SystemVerilog

Darshan Vaghani and Sachin Gajjar, Department of Electronics and Communication Engineering, Nirma University, Ahmedabad, India

ABSTRACT

In this paper, the implementation of the MiniMIPS data path using SystemVerilog is discussed. SystemVerilog is a very useful platform for designing and verifying digital designs: important concepts such as interfaces, assertions, coverage, classes, and dynamic arrays are valuable in both the design and the verification of digital circuits, and its coverage constructs make it possible to check the efficiency of the testbench. Designing the MiniMIPS data path requires its basic architecture and instruction set. First, the ALU of the data path was designed and verified using the Quartus II software; then all the remaining modules of the data path were designed and connected using an interface. All the supported instructions were verified using the Cadence Xcelium simulator.

KEYWORDS

Interface, Queues, MiniMIPS single-cycle data path.


VLSI Physical Design using Opensource Tool

Kartik Jain and Pratik Navadiya, Department of Electronics and Communication Engineering, Institute of Technology Nirma University, Ahmedabad, India

ABSTRACT

Nowadays, the ASIC design flow is a very mature process in silicon turnkey design, referring to semiconductor solutions designed for a specific application. A related technology is the FPGA (field-programmable gate array), which can be programmed multiple times to perform different functions, and an ASIC (application-specific integrated circuit) is sometimes also referred to as an SoC (system on chip). To produce a successful ASIC design, the engineer must follow a proven ASIC design flow based on a good understanding of the specification, low-power design, performance, and requirements. Every stage of the ASIC design cycle is supported by an EDA tool that helps implement and run that stage of the flow. VLSI physical design is the process of transforming a circuit into a physical layout, which describes the position of the cells and the routes of the interconnects between them. Its stages are chip partitioning, synthesis, floor planning, power planning, placement, clock tree synthesis (CTS), signal routing, and physical verification. In this paper, we use two different open-source EDA tools. The first is Proton, a fully open-source place-and-route tool that uses Iverilog and Yosys-Qflow for the synthesis step, GrayWolf for the placement stage, and Qrouter for the routing stage. The second is the Qflow manager tool, in which we run all the stages of physical design and also generate a report for each stage. Taking a full adder as an example, we run all the stages in both EDA tools and compare them.

KEYWORDS

Proton EDA tool, Qflow manager tool, ASIC design flow, VLSI physical design.


Bulk-Driven Logarithmic Amplifier for Ultra-Low-Voltage, Low-Power Biomedical Applications

Dipesh Panchal and Amisha Naik, Department of Electronics & Communication Engineering, Nirma University, Gujarat

ABSTRACT

In most sensor interface systems, the analog front end, comprising an amplifier, filters, and a data converter, dissipates the most power. This paper proposes a novel approach to an ultra-low-voltage, ultra-low-power logarithmic amplifier (LA) using the non-conventional bulk-driven method. The amplifier block of the analog front end is utilized as a logarithmic amplifier based on the progressive-compression parallel-summation architecture, with DC offset cancellation achieved by adding an off-chip coupling capacitor at each stage. A differential amplifier with a current bias as load is used as the core cell for symmetric output. The circuit operates from a 0.25 V supply voltage and dissipates 5 nW. The simulated input dynamic range is about 46.27 dB, covering input amplitudes from 0.1 mV to 1 V; the -3 dB bandwidth of the amplifier is 100 Hz to 1 kHz, and the simulated total input-referred noise is 4 μV at 1 kHz, obtained using Cadence Virtuoso.

KEYWORDS

Bulk Driven, Gate Driven, Biomedical Applications, Variable Gain Amplifier, Programmable Gain Amplifier.


Design and Performance Analysis of Hetero-Dielectric based Junctionless SOI-MOSFET

Prateek Tiwari and Mamta Khosala, Department of Electronics and Communication Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar 144011, India

ABSTRACT

The advancement of MOS devices has been accomplished by the downscaling of their dimensions since their start in the mid-1970s, and Moore's law has been driving the semiconductor industry for four decades. The ambition to continue device scaling prompted essential research in physical science, materials, and devices. However, the strict power constraints of electrical circuits, as well as the inflexibility of the subthreshold slope in a conventional MOSFET, significantly impede further scaling of field-effect transistors. These constraints motivated the search for new and innovative technologies that are not restricted by such limitations. The need for a technology that meets performance requirements while eliminating the constraints of conventional bulk silicon opened the way for the introduction and adoption of Silicon-On-Insulator (SOI) technology as the best alternative, and silicon-on-insulator junctionless transistors have been developed as a capable nanoscale technology. High leakage current, a low Ion/Ioff (on-current to off-current) ratio, and the subthreshold slope are highlighted as the main hurdles that may limit the adoption of junctionless SOI transistors. To compensate, a new window of highly doped p-type silicon is opened inside the buried oxide region of a conventional junctionless SOI MOSFET. The purpose of this work is to improve the electrical performance of modified junctionless SOI MOSFETs by optimizing the new window opened underneath the channel region, the gate length, the hetero-dielectric, and the buried oxide (BOX) thickness. By creating a depletion sheet at the interface between the channel region and the new window, this reorganization effectively reduces the leakage current of the conventional junctionless transistor.
Although the parameters have different spectra, the simulation of the structures described in the study indicated that the optimized device is preferable for low-power digital applications. Compared to a single-dielectric JLFET, the proposed device has a reduced OFF current ID(OFF) of 6.11x10^-12 A/μm, a better ON current ID(ON) of 3.96x10^-5 A/μm, an enhanced ON/OFF current ratio (Ion/Ioff) of 1.8x10^8, a subthreshold slope (SS) of 94.11 mV/dec, a threshold voltage (VT) of 0.396 V, and a transconductance (gm) of 2.2 mS.

KEYWORDS

Low Power Applications, VLSI, Junctionless FET, SOI Technology, MOSFETs.


Magnetic resonance image processing using deep learning: segmentation of corpus callosum

L. E. Mendoza1 and J. D. Fernández2, 1Department of Telecommunications, Group GIBUP, University of Pamplona, 2Group on Systems Applied to Industry, Pontifical Bolivarian University, Medellín

ABSTRACT

The automatic detection of specific areas in medical images using mathematical techniques has been growing significantly due to the applications it enables. This article presents results on the automatic segmentation of the cerebral corpus callosum in cerebral magnetic resonance images using deep learning. 1450 images were used for training, each with a resolution of 512x512. A conditioning stage was developed to modify the contrast of the images, remove irrelevant information, and perform pattern extraction using the wavelet transform. The results show the segmentation of the corpus callosum with an accuracy of 99.514%. The system was validated with 415 images.

KEYWORDS

Magnetic resonance imaging, cerebral corpus callosum, image segmentation, deep learning.
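The wavelet-based pattern extraction mentioned in the conditioning stage can be illustrated with one level of the 1-D Haar transform applied to a single image row (the paper applies a 2-D wavelet transform to full MR images; Haar and the 1/2 scaling are simplifying assumptions for brevity).

```python
def haar_step(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail), scaled by 1/2."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

# One hypothetical row of pixel intensities; the bright 200s mark an edge,
# such as the corpus callosum boundary against surrounding tissue.
row = [10, 12, 8, 8, 200, 202, 6, 4]
approx, detail = haar_step(row)
```

The approximation channel keeps the coarse intensity structure while the detail channel concentrates local variation, which is why wavelet coefficients make useful inputs for a segmentation network.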


Security Assessment Rating Framework for Enterprises using MITRE ATT&CK® Matrix

Akash Srivastava, Bhavya Bansal, Chetan Verma, Hardik Manocha and Ratan Gupta, India

ABSTRACT

Threats targeting cyberspace are becoming more prominent and intelligent day by day, which inherently leads to a dire demand for continuous security validation and testing. In this paper, we aim to provide a holistic and precise security analysis rating framework for organizations that increases the overall coherency of the outcomes of such testing. The scorecard is based on security assessments performed following the MITRE ATT&CK matrix, a globally accessible knowledge base of adversary tactics and techniques. A scorecard for an evaluation is generated by ingesting the security testing results into our framework, which provides an organization's overall risk assessment rating as well as the risk related to each of the tactics from the ATT&CK matrix.

KEYWORDS

SOC, Cyber-security awareness, Cyber-security threats, Scorecard, MITRE ATT&CK.
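The roll-up from per-technique test results to per-tactic and overall ratings could be sketched as below. The tactic names are real ATT&CK tactics, but the scoring scheme (risk as the fraction of simulated techniques not blocked, averaged across tactics) is an illustrative assumption, not the paper's actual formula.

```python
# Hypothetical ingested results: per tactic, a list of (technique, blocked?) pairs
results = {
    "Initial Access": [("T1566 phishing sim", True), ("T1190 exploit sim", False)],
    "Execution":      [("T1059 script sim", True)],
    "Exfiltration":   [("T1041 C2 sim", False), ("T1048 alt-proto sim", False)],
}

def tactic_risk(outcomes):
    """Risk for one tactic = fraction of simulated techniques NOT blocked."""
    failed = sum(1 for _, blocked in outcomes if not blocked)
    return failed / len(outcomes)

tactic_scores = {tactic: tactic_risk(o) for tactic, o in results.items()}
overall = sum(tactic_scores.values()) / len(tactic_scores)  # naive average
```

Reporting both `tactic_scores` and `overall` mirrors the paper's claim of giving an overall rating alongside a per-tactic breakdown.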


Security Threats and Approaches in an E-Health Security Cloud-Based System with Big Data Strategy using Cryptographic Algorithms

G. Dhanalakshmi1 and Dr. G. Victo Sudha George2, 1Research Scholar, 2Professor, 1,2Department of Computer Science and Engineering, Dr. M.G.R. Educational and Research Institute, Chennai, Tamil Nadu, India

ABSTRACT

Cloud computing refers to the use of the Internet to deliver software to cloud consumers. Cloud infrastructure is essential for storing and analyzing the huge volumes of data produced by social media, enterprises, organizations, government sectors, and other sources, so a large storage capacity is required to keep all of this data in the cloud. Existing systems still have much work to do in mitigating DDoS attacks, data breaches, and data loss, as well as in handling vulnerable access points, warnings, and alarms. To address these difficulties, many researchers have focused on the protection of sensitive data, data recovery, and data storage, capitalizing on cloud protection strategies. The goal is to reveal a significant pattern for cloud systems by employing appropriate algorithms for optimizing data storage in a cloud environment. We survey important security alerts, attacks, and remedies in this paper. Its main goal is to outline a big data e-healthcare information system that uses cloud platforms to help bridge the gap between hospitals, patients, and clinics by creating a central hub of patient details and healthcare history, accessible via two interfaces, a mobile app and a web application, while managing large volumes of data using a big data approach and applications.

KEYWORDS

Cloud Computing, Cloud Storage, Security Issues, Threats, Big Data, Cryptography algorithms.


PendingMutent: Protecting PendingIntent from Malware Apps using a novel Ownership-Based Authentication

Pradeepkumar Duraisamy S and Geetha S, Vellore Institute of Technology, Chennai, India

ABSTRACT

A PendingIntent grants the receiver the authority to use the sender's permissions and identity. Unprotected broadcasts and PendingIntents with an empty base intent are among the vulnerable features that malware exploits to perform unauthorized access and privilege-escalation attacks on a delegated PendingIntent. PendingMutent is an application-layer solution that uses ownership-based authentication to dynamically control the accessibility of a PendingIntent: both the sender's and the receiver's capabilities are validated before access to the delegated PendingIntent is granted. PendingMutent is the first holistic work to use ownership types to dynamically control the accessibility and visibility of PendingIntent delegation. We demonstrate our solution in real time by modifying some randomly chosen apps to use the PendingMutent library instead of the PendingIntent library. The results of our experiments show that PendingMutent provides dynamic security by protecting PendingIntents from malware access. Furthermore, with the proposed solution the end-user suffers only negligible execution overhead in screen response and notification delays.

KEYWORDS

PendingIntent, Ownership-Types, Intent analysis, Android, Intent Receipt.


AI, Machine Learning and Deep Learning Development and Applications

Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China

ABSTRACT

In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle them and extract value and knowledge from them. Machine learning, a method of data analysis that automates analytical model building, is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention; deep learning extends this by applying advanced neural-network techniques to big data. This paper analyses some of the different machine learning and deep learning algorithms and methods, as well as the opportunities provided by AI applications in various decision-making domains.

KEYWORDS

Artificial Intelligence, Machine Learning, Deep Learning.


Current flaws in Deep Learning: An Analysis

Bhavi Dave1 and Tejas Shyam2, 1Department of Computer Engineering, Dwarkadas J. Sanghvi College, Mumbai, India, 2The Education Journey, Mumbai, India

ABSTRACT

While Deep Learning algorithms have markedly improved the paradigm of Artificial Intelligence across domains like Natural Language Processing and computer vision, their performance comes with certain critical, potentially fatal flaws. This paper explains and analyses five areas of concern in neural networks and their design: the lack of necessary data, a lack of interpretability, software concerns during implementation, limited biological plausibility, and the inability to encode knowledge. By citing and critiquing actual use-cases, these challenges are flagged. Finally, this paper makes a threefold recommendation: integrating traditional algorithms and explicit background knowledge into the newer methods, creating a hybrid design that amalgamates both supervised and unsupervised components, and standardising data collection across domains. The approaches suggested herein will make Deep Learning more sustainable and impactful by reducing computational resource requirements, making systems more biologically plausible and mitigating human bias.

KEYWORDS

Deep Learning, Computer Vision, Natural Language Processing, Model interpretability, Knowledge encoding, Biological plausibility, Software development challenges.


Thermal Comfort of the Environment with Internet of Things, Big Data and Machine Learning

Matheus G. do Nascimento and Paulo B. Lopes, Graduate Program in Electrical Engineering and Computing, Mackenzie Presbyterian University, São Paulo, Brazil

ABSTRACT

This research evaluates the level of thermal comfort of an environment in real time using Internet of Things (IoT), Big Data and Machine Learning (ML) techniques for the collection, storage, processing and analysis of the relevant information. The pursuit of thermal comfort provides the best living and health conditions for human beings, and an environment must, as one of its functions, present the climatic conditions necessary for human thermal comfort. In this research, wireless sensors monitor the Heat Index, the Thermal Discomfort Index and the Temperature-Humidity Index of remote indoor environments to intelligently track the level of comfort and alert the people present to possible hazards. Machine learning algorithms are also used to analyse the history of stored data and formulate models capable of predicting the parameters of the environment, in order to determine preventive actions or optimize environmental control to reduce energy consumption.

KEYWORDS

Big data, internet of things, machine learning, thermal comfort.
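One of the monitored quantities can be made concrete with Thom's Discomfort Index, a common formulation of the thermal discomfort indices this abstract mentions. The alert bands below are illustrative assumptions, not the paper's rules; interpretation thresholds vary by source.

```python
def discomfort_index(temp_c: float, rel_humidity: float) -> float:
    """Thom's Discomfort Index: DI = T - 0.55 * (1 - 0.01*RH) * (T - 14.5),
    with T in degrees Celsius and RH in percent."""
    return temp_c - 0.55 * (1 - 0.01 * rel_humidity) * (temp_c - 14.5)

def comfort_alert(di: float) -> str:
    """Illustrative banding of the index into alert levels."""
    if di < 24:
        return "comfortable"
    elif di < 29:
        return "partial discomfort"
    return "discomfort: alert occupants"

di = discomfort_index(30.0, 70.0)   # a hot, humid indoor sensor reading
```

A fog of wireless sensors would stream (T, RH) pairs through a function like this, with the ML layer then forecasting future index values from the stored history.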


Liveness Detection by Analysing Lips & Chin Movement

Shahed Mohammadi1, Sahand Majdabadi Farahani2, Nadia Faramarzi2 and Parsa Jamshidi2, 1Assoc. Prof., Computer Science & Systems Engineering Dep., Ayandegan University, Mazandaran, Iran, 2Student of Software Engineering, IAUCTB University, Tehran, Iran

ABSTRACT

Nowadays, electronic authentication is one of the important services provided by almost all systems. Among biometric methods, face verification is one of the most widely used because it is economical, reachable for almost everyone, and requires no specific sensors or devices. However, face authentication methods face security challenges such as fraud and spoofing attacks, and many methods, known as face liveness detection, have been presented to detect these kinds of threats. In this paper, a liveness method that detects 2D and photo spoofing attacks in videos is provided. It can detect fraudulent activity with linear calculations, without any machine learning algorithms or tools, simply by detecting and analysing the movement of the user's lips and chin. This gives advantages in processing speed and cost, and most importantly the ability to perform liveness detection in real time or semi-real time. To test and evaluate the proposed method, two datasets were chosen: the standard CASIA FASD dataset and a handmade, native dataset. Evaluation of the proposed method achieved 92.37% accuracy on the CASIA FASD dataset and 89.57% accuracy on the native dataset.

KEYWORDS

Face Liveness Detection, Face Spoofing, Liveness, Lips Movement, Chin Movement.
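The "linear calculation" idea can be sketched as a simple motion statistic over landmark positions: a live talking face shows lip movement relative to the chin, while a printed photo moves rigidly. The landmark representation, relative-motion measure, and threshold below are all illustrative assumptions, not the paper's exact algorithm.

```python
def movement_score(values):
    """Mean absolute frame-to-frame change of a per-frame measurement."""
    deltas = [abs(b - a) for a, b in zip(values, values[1:])]
    return sum(deltas) / len(deltas)

def is_live(lip_ys, chin_ys, threshold=0.5):
    """Decide liveness from the RELATIVE motion of lip vs chin landmarks:
    a rigid spoof photo keeps this difference nearly constant."""
    relative = [l - c for l, c in zip(lip_ys, chin_ys)]
    return movement_score(relative) > threshold

# Per-frame vertical positions (pixels) of one lip and one chin landmark.
talking = is_live([100, 103, 99, 104, 100], [150, 150, 151, 150, 150])
photo   = is_live([100, 101, 100, 101, 100], [150, 151, 150, 151, 150])
```

Because only differences and averages are involved, the check runs in linear time per frame, which matches the abstract's claim of real-time capability without ML tooling.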


Real-Time Face Verification Convolutional Neural Network Technique based on Video

Shahed Mohammadi1, *Nadia Faramarzi2, Parsa Jamshidi3 and Sahand Majdabadi Farahani4, 1Assoc. Prof. Computer Science & Systems Eng. Dep. Ayandegan University, Mazandaran, Iran, 2,3,4Student of Software Engineering IAUCTB University, Tehran, Iran

ABSTRACT

Face recognition has been an important problem since the invention of cameras, and it has progressed rapidly in recent years. The proposed system uses a convolutional neural network, a class of deep neural networks, to improve its performance. After receiving the video, the first frame of each second is matched to the original photo, and then every five frames among the rest of the current second are processed; the following seconds are processed in the same way until the end of the video. Using this method together with a fuzzy system, the output can be displayed at any time. To test the system, two datasets, one standard and one local, were used, and an accuracy of 99.07 ± 0.11 was obtained. The results show that the system is reliable with a high level of accuracy, and it is expected that adding a liveness system in the future will yield even more reliable results.

KEYWORDS

Face recognition, real-time video processing, face matching, convolutional neural network, FR.
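The per-second frame-sampling scheme this abstract describes can be sketched as an index selector: take the first frame of each second, then every fifth frame of the remainder. The exact indexing convention is our assumption, since the paper gives no pseudocode.

```python
def frames_to_process(fps: int, seconds: int):
    """Indices of video frames selected under the described sampling scheme."""
    selected = []
    for sec in range(seconds):
        base = sec * fps
        selected.append(base)                            # first frame of second
        selected.extend(range(base + 5, base + fps, 5))  # every 5th thereafter
    return selected

picks = frames_to_process(fps=30, seconds=2)  # which frames the CNN would see
```

At 30 fps this reduces the CNN workload from 30 matches per second to 6, which is what makes the real-time verification claim plausible.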


Electromagnetic Scattering Characteristic Analysis for Low Frequency Ultra-Wideband Bistatic SAR

Hongtu Xie1, Ni Xie2, Kang Liang1, Xingqiao Jiang1 and Guoqian Wang3, *, 1School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou, China, 2School of Business, Hunan University of Science and Technology, Xiangtan, China, 3Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China

ABSTRACT

Low-frequency ultra-wideband bistatic synthetic aperture radar (LF UWB BSAR) offers good penetrating capability, high-resolution imaging, and increased information, enabling it to penetrate foliage and detect concealed targets in forests. In this paper, an electromagnetic scattering characteristic analysis for LF UWB BSAR is proposed. First, an equivalent model of the targets is established. Then, the electromagnetic scattering characteristics of the targets are analysed in order to calculate their bistatic radar cross section (RCS). Finally, simulation experiments are presented to prove the correctness and validity of the analysis of the electromagnetic scattering characteristics of targets for the LF UWB BSAR system.

KEYWORDS

Bistatic Synthetic Aperture Radar, Electromagnetic Scattering Characteristic, Low Frequency Ultra-Wideband.


A Study of the Classification of Motor Imagery Signal using Machine Learning Tools

Anam Hashmi, Bilal Alam Khan and Omar Farooq, Department of Electronics Engineering, Aligarh Muslim University, Aligarh, India

ABSTRACT

In this paper, we propose a system for classifying Electroencephalography (EEG) signals associated with imagined movement of the right hand and a relaxation state, using the Random Forest machine learning algorithm. The EEG dataset used in this research was created by the University of Tübingen, Germany. The EEG signals were processed using wavelet transform analysis with a Daubechies orthogonal wavelet as the mother wavelet, after which eight features were extracted. Subsequently, a feature selection method based on the Random Forest algorithm was employed to select the best of the eight proposed features. The feature selection stage was followed by a classification stage, in which eight different models combining the features according to their importance were constructed. The optimum classification performance of 85.41% was achieved with the Random Forest classifier. This research shows that such a system for classifying motor movements can be used in a Brain-Computer Interface (BCI) to mentally control a robotic device or an exoskeleton.

KEYWORDS

EEG, Machine learning, BCI, Motor Imagery signals, Random Forest.
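The wavelet feature-extraction step can be sketched as subband energies from a repeated Haar decomposition (the paper uses a Daubechies wavelet, of which Haar, db1, is the simplest member; energy per subband is one common EEG feature choice, assumed here for illustration).

```python
def haar_dwt(signal):
    """One Haar analysis step: pairwise averages and pairwise differences."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def subband_energies(signal, levels=3):
    """Feature vector: energy of each detail subband plus the final
    approximation, capturing how power splits across frequency bands."""
    feats = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(sum(c * c for c in detail))
    feats.append(sum(c * c for c in approx))
    return feats

# A rapidly alternating signal: its power lands in the first detail band.
feats = subband_energies([1, 3, 1, 3, 1, 3, 1, 3])
```

A Random Forest would then rank such features by importance, which is the selection mechanism the abstract describes.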


Multimodal Data Evaluation for Classification Problems

Daniela Moctezuma1, Víctor Muníz2 and Jorge García2, 1Centro de Investigación en Ciencias de Información Geoespacial, Aguascalientes, Ags., Mexico, 2Centro de Investigación en Matemáticas, Monterrey, Nuevo León, Mexico

ABSTRACT

Social media data is currently the main input to a wide variety of research in many fields of knowledge. This kind of data is generally multimodal, i.e., it contains different modalities of information, mainly text, images, video or audio. Dealing with multimodal data to tackle a specific task can be very difficult. One of the main challenges is to find useful representations of the data, capable of capturing the subtle information provided by the users who generate it, or even the way they use it. In this paper, we analyse the usage of two modalities of data, images and text, both separately and in combination, to address two classification problems: meme classification and user profiling. For images, we use a textual semantic representation obtained from a pre-trained image-captioning model; a text classifier based on optimal lexical representations is then used to build the classification model. Interesting findings emerged on the usage of these two modalities, and the pros and cons of using them to solve the two classification problems are also discussed.

KEYWORDS

Multimodal Data, Deep Learning, Natural Language Processing, Image captioning.