9th International Conference on Computer Science and Information Technology (CSTY 2023)
October 21 ~ 22, 2023, Sydney, Australia
Accepted Papers
IoT Based Currency Validation Vending Machine: A Systematic Literature Review
Abdalrahman Ashry, Abdalrahman Shehta and Ahmad Abulfotouh, Department of Computer Engineering, Badr University, Badr City, Egypt
ABSTRACT
Background/Introduction: Vending machines are automated machines that dispense products such as snacks, beverages, and lottery tickets. They are vital for saving time and reducing human effort. These machines are developed using either non-IoT-based or IoT-based approaches. Non-IoT-based machines are not smart and do not operate on real-time data; they function only when cash or a card and the product selection are supplied, and a microcontroller dispenses the requested items. IoT-based machines are computerized: they offer cash and cashless payment, allow customers to place orders before arriving at the machine, and let customers locate the machines. Objective: Create a system for IoT-based machines to detect counterfeit currency using artificial intelligence. Method: This SLR covers 22 papers published between 2015 and 2022. IoT-based machines also assist suppliers in tracking stock availability; simulation software and prototypes are used to validate the machines. Conclusion: Most vending machines developed so far operate without IoT technology, and current vending machine systems should be implemented using IoT together with machine learning and artificial intelligence technologies to detect counterfeit currency notes in less time and in a more efficient manner, satisfying customer preferences.
KEYWORDS
Artificial intelligence, Currency detection, Deep learning, Image processing, Vending machines, Bill acceptors.
Stochastic Dual Coordinate Ascent for Learning Sign Constrained Linear Predictors
Miya Nakajima, Rikuto Mochida, Yuya Takada, and Tsuyoshi Kato, Graduate School of Science and Technology, Gunma University, Tenjin, Kiryu, 376-8515, Japan
ABSTRACT
Sign constraints are a handy representation of domain-specific prior knowledge that can be incorporated into machine learning. Under sign constraints, the signs of the weight coefficients of a linear predictor cannot be flipped from those specified in advance according to the prior knowledge. This paper presents new stochastic dual coordinate ascent (SDCA) algorithms that find the minimizer of the empirical risk under sign constraints. Generic surrogate loss functions can be plugged into the proposed algorithm, with the strong convergence guarantee inherited from the vanilla SDCA. A technical contribution of this work is an efficient algorithm that performs the SDCA update at a cost linear in the number of input features, which coincides with the cost of the SDCA update without sign constraints. Consequently, a computational cost of O(nd) is achieved to attain an ϵ-accurate solution. Pattern recognition experiments were carried out on a classification task for microbiological water quality analysis. The experimental results demonstrate the powerful prediction performance of the sign constraints.
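To make the flavour of the update concrete, the following Python sketch applies SDCA to ridge-regularized squared loss with sign constraints. Everything here is an illustrative assumption: the paper's exact constrained coordinate maximization (its technical contribution) is approximated by clipping the primal weights to the feasible signs before each vanilla SDCA step, which keeps every update at the O(d) cost the abstract mentions.

import numpy as np

def sdca_sign_constrained(X, y, sign, lam=0.1, epochs=50, seed=0):
    """Hypothetical sketch: SDCA for squared loss with sign constraints.
    sign[j] = +1 forces w[j] >= 0, -1 forces w[j] <= 0, 0 is unconstrained."""
    n, d = X.shape
    alpha = np.zeros(n)            # one dual variable per training example
    v = np.zeros(d)                # v = (1 / (lam * n)) * sum_i alpha_i * x_i
    rng = np.random.default_rng(seed)

    def clip(v):
        # Primal weights: project v onto the sign-feasible orthant.
        w = v.copy()
        w[(sign > 0) & (v < 0)] = 0.0
        w[(sign < 0) & (v > 0)] = 0.0
        return w

    for _ in range(epochs):
        for i in rng.permutation(n):
            w = clip(v)
            # Closed-form coordinate step for squared loss (vanilla SDCA).
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + (X[i] @ X[i]) / (lam * n))
            alpha[i] += delta
            v += delta * X[i] / (lam * n)    # O(d), as in unconstrained SDCA
    return clip(v)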
KEYWORDS
sign constraints, convex optimization, stochastic dual coordinate ascent, empirical risk minimization, microbiological water quality analysis.
A New Proof for the Completeness Theorem of Propositional Logic
Zhuolei Duan1, Shaobo Deng1, Sujie Guan1, Min Li1, Cungen Cao2, Yuefei Sui2, 1School of Information Engineering, Nanchang Institute of Technology, Nanchang, 330099, China, 2University of Chinese Academy of Sciences, Beijing, 100049, China
ABSTRACT
For any consistent set of formulas Γ, Γ can be extended to a maximal consistent set of formulas Σ. However, with respect to Γ, there are too many irrelevant formulas involved in Σ. In order to eliminate such formulas, Γ is instead extended to a consistent set Σ′ such that each member of Σ′ has something to do with Γ and Σ′ is closed with respect to the operators ¬, ∨ and ∧. A new method based on this idea is then given to prove the completeness theorem for propositional logic.
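As a hedged illustration of the restricted extension described above (the notation L(Γ) is assumed here, not taken from the paper), the Γ-relevant formulas can be generated in stages:

\begin{align*}
  L_0(\Gamma)     &= \Gamma, \\
  L_{k+1}(\Gamma) &= L_k(\Gamma) \cup \{\neg\varphi,\ \varphi\vee\psi,\ \varphi\wedge\psi : \varphi,\psi \in L_k(\Gamma)\}, \\
  L(\Gamma)       &= \bigcup_{k\ge 0} L_k(\Gamma).
\end{align*}

Enumerating L(Γ) as φ1, φ2, … and adding each φi whenever consistency is preserved then yields, Lindenbaum-style, a consistent Σ′ ⊆ L(Γ) that is closed under ¬, ∨ and ∧ and decides exactly the formulas relevant to Γ.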
KEYWORDS
The completeness theorem; propositional logic; a consistent set of formulas.
Data Smoothing Filling Method Based on scRNA-seq Data Zero-value Identification
Linfeng Jian1,2,3 and Yuan Zhu, 1School of Automation, China University of Geosciences, Hongshan District, No. 388 Lumo Road, 430074, Wuhan, China, 2Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Hongshan District, No. 388 Lumo Road, 430074, Wuhan, China, 3Engineering Research Center of Intelligent Technology for Geo-Exploration, Hongshan District, No. 388 Lumo Road, 430074, Wuhan, China
ABSTRACT
Single-cell RNA sequencing (scRNA-seq) measures RNA expression at single-cell resolution, providing a powerful tool for studying immunity, regulation, and other cellular activities. However, due to the limitations of the sequencing technique, scRNA-seq data are sparse and contain missing gene-expression values, i.e., zero values, known as dropouts. It is therefore necessary to impute missing values before analyzing scRNA-seq data. Existing imputation methods, however, often focus only on identifying technical zeros, or impute all zeros based on cell similarity. This study proposes a new method (SFAG) that reconstructs the gene expression relationship matrix using graph regularization to preserve the high-dimensional manifold information of the data and to mine the relationships between genes and cells, and then fills in the identified technical zeros by averaging clustering results.
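For intuition, a generic graph-regularized imputation objective of the kind the abstract references (an illustrative form, not SFAG's exact formulation) balances fidelity on entries not flagged as technical zeros against smoothness over a cell-similarity graph:

\[
  \min_{\hat{X}} \;\bigl\| M \odot (\hat{X} - X) \bigr\|_F^2
  \;+\; \lambda\, \mathrm{tr}\!\bigl(\hat{X} L \hat{X}^{\top}\bigr),
  \qquad
  \mathrm{tr}\!\bigl(\hat{X} L \hat{X}^{\top}\bigr)
  = \tfrac{1}{2}\sum_{i,j} A_{ij}\,\bigl\|\hat{x}_i - \hat{x}_j\bigr\|_2^2,
\]

where the columns of X are cells, M masks the entries kept as biological signal, L = D − A is the Laplacian of a cell-similarity graph with adjacency A, x̂_i is the imputed profile of cell i, and λ trades reconstruction off against manifold smoothness.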
KEYWORDS
scRNA-seq, graph regularization, data smoothing.
Monocular Depth Estimation Using a Deep Learning Model With Pre-depth Estimation Based on Size Perspective
Takanori Asano1 and Yoshiaki Yasumura2, 1Graduate School of Engineering and Science, Shibaura Institute of Technology, Tokyo, Japan, 2College of Engineering, Shibaura Institute of Technology, Tokyo, Japan
ABSTRACT
This paper presents a depth estimation method using a deep learning model that incorporates size perspective (size constancy cues). The method uses two neural networks: a size perspective model and a depth estimation model. The size perspective model estimates an approximate depth for each object in the image based on size perspective. From these rough depth estimates (pre-depth estimation), the method generates a rough depth image (pre-depth image), which is fed together with the RGB image into the depth estimation model. The pre-depth image serves as a hint for depth estimation and improves the performance of the depth estimation model. In experiments, our method demonstrated improved accuracy compared to the same method without pre-depth images.
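As a rough illustration of the pre-depth idea, the Python sketch below converts detector boxes into a per-pixel rough depth map. This heuristic stands in for the paper's learned size-perspective network; the detector output format (e.g., YOLOv8 boxes, per the keywords) and the per-class reference sizes are assumptions made for the example.

import numpy as np

def make_predepth(image_hw, detections, ref_size):
    """Hypothetical sketch: rough per-object depth from apparent size.
    detections: list of (class_id, x0, y0, x1, y1) integer boxes;
    ref_size[class_id]: assumed box height of that class at unit depth."""
    h, w = image_hw
    predepth = np.zeros((h, w), dtype=np.float32)
    for cls, x0, y0, x1, y1 in detections:
        box_h = max(y1 - y0, 1)
        # Size constancy: apparent size shrinks with distance, so a
        # rough depth is inversely proportional to the box height.
        predepth[y0:y1, x0:x1] = ref_size[cls] / box_h
    return predepth

The pre-depth map is then stacked with the RGB image as a fourth channel, e.g. x = np.concatenate([rgb, predepth[..., None]], axis=-1), and fed to the depth estimation network as the hint described above.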
KEYWORDS
Depth Estimation, Deep Learning, Image Processing, Size Perspective, YOLOv8.
The Use of Artificial Intelligence (AI) to Improve Human Life: An Introduction
Nesar Ahmed Titu, Bangladesh
ABSTRACT
This study is the introductory part of a series on how human lives can be improved with the help of data-driven techniques collectively referred to as artificial intelligence (AI). It introduces a method that interactively combines predictive and prescriptive modeling to answer two basic questions about human lives.
KEYWORDS
Artificial Intelligence, Machine Learning, Data Science, Human Factor, Human Error, Human Factor Error.
Legal Interpretation and Contextuality as Tools to Establish Privacy and Fundamental Rights Protection in Geopolitical and Legal Uncertainty
Pierangelo Blandino, Ph.D. Candidate, Law, Technology and Design Thinking Research Group, University of Lapland, School of Law
ABSTRACT
This summary explores the degree of protection of rights in the face of state surveillance, through a concise comparative legal outline. Given today's interdependence, brought about first by global governance patterns and then by exchange on platforms, attention is drawn to the Chinese Personal Information Protection Law (PIPL), the American Clarifying Lawful Overseas Use of Data Act (CLOUD Act), and the EU General Data Protection Regulation (GDPR). At a second stage, possible new techniques are considered to properly tackle these unprecedented changes, which challenge traditional legal patterns. Methodologically, the argument builds on today's regulatory vacuum at both the domestic and international levels. Hence, legal interpretation appears to be a viable approach to filling these regulatory gaps.
KEYWORDS
Privacy, Data Protection, Geopolitics, Legal Interpretation, GDPR, CLOUD Act, PIPL.
Task Approach and Semantic Programming for Trusted Artificial Intelligence
Sergey Goncharov and Andrey Nechesov, Laboratory of Computability Theory and Applied Logic, Sobolev Institute of Mathematics, Novosibirsk, Russia
ABSTRACT
The paper investigates how the principles of the task approach and the concept of semantic programming can be applied to various unresolved issues in trusted artificial intelligence. These concern, above all, the AI centralization problem, the AI black-box problem, and the AI audit problem, all of which feed into the problem of trust. The paper presents the Delta framework, which allows AI algorithms to be implemented as smart contracts in a decentralized environment based on multi-blockchain structures. This achieves transparency, reliability, and decentralization of AI systems, bringing us one step closer to trusted artificial intelligence.
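To illustrate just the auditability ingredient (this sketch is not the Delta framework and assumes no particular blockchain; every name in it is hypothetical), here is a minimal Python hash-chained log of model inferences, the basic mechanism by which on-chain execution makes AI decisions tamper-evident:

import hashlib, json, time

class AuditChain:
    """Hypothetical sketch: a hash-chained log of model inferences."""
    def __init__(self):
        self.blocks = []
        self.prev_hash = "0" * 64

    def record(self, model_id, inputs, output):
        # Inputs and output must be JSON-serializable for hashing.
        block = {"ts": time.time(), "model": model_id,
                 "inputs": inputs, "output": output, "prev": self.prev_hash}
        digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        block["hash"] = digest
        self.blocks.append(block)
        self.prev_hash = digest
        return digest

    def verify(self):
        # Any tampering with a recorded inference breaks the chain.
        prev = "0" * 64
        for b in self.blocks:
            body = {k: v for k, v in b.items() if k != "hash"}
            if b["prev"] != prev or hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest() != b["hash"]:
                return False
            prev = b["hash"]
        return True

Recording every inference through such a chain, executed on-chain in the decentralized setting the paper targets, makes any later modification of a logged decision detectable, which is the audit property at stake.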
KEYWORDS
Artificial intelligence, trusted AI, strong AI, explainable AI, task approach, semantic programming, multi-blockchains, smart contracts, Bitcoin, ChatGPT, Telegram, cryptocurrency, Delta framework.
Bayesian Intelligent and Soft Measurement
Svetlana Prokopchina1 and Veronika Zaslavskaia2, 1Financial University under the Government of the Russian Federation, Moscow, Russia, 2Zello Russia, "Artificial Intelligence" Committee of "RUSSOFT" Association, St. Petersburg, Russia
ABSTRACT
Modern measurement tasks are solved under conditions of uncertainty. Significant information uncertainty is caused by the lack of complete and accurate knowledge about the models of measurement objects, influencing factors, measurement conditions, and the diversity of experimental data. The article briefly discusses the history of methods for the intellectualization of measurement processes oriented to situations of uncertainty, as well as the classification of measurements and measurement systems. The basic requirements for intelligent measurement systems and technologies are formulated. The article considers the conceptual aspects of intelligent measurements as measurements based on the integration of metrologically certified data and knowledge, defines intelligent measurements, and determines their properties. The main properties of soft measurements and their differences from deterministic classical measurements of physical quantities are considered, and cognitive, systemic, and global measurements are identified as new types of measurement. The methodology and technologies of Bayesian intelligent measurements, based on the regularizing Bayesian approach, are considered in detail. In this type of measurement a new measurement concept is implemented, in which the measurement problem is posed as the inverse problem of pattern recognition in accordance with the postulates of the Bayesian approach. Within this framework, new types of models and measurement scales are proposed in the form of coupled scales with dynamic constraints, enabling the creation of evolving measurement technologies that implement the processes of cognition and interpretation of measurement results by the measurement systems themselves. The new type of scale allows the integration of numerical information (for numerical data) and linguistic information (for knowledge) in order to improve the quality of measurement solutions. A new set of metrological characteristics of intelligent measurements is proposed, including indicators of accuracy, reliability (error levels of the 1st and 2nd kind), risk, and entropy characteristics. The paper presents formulas for implementing the measurement process with a complete metrological justification of the solutions. An example is given of solving an applied problem with an intelligent measuring complex for monitoring the state of water supply networks, based on the methodology and technologies of Bayesian intelligent measurements. In conclusion, the advantages and prospects of intelligent measurements are outlined, both for solving applied problems and for the development and integration of artificial intelligence and measurement theory technologies.
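In its simplest form, the inverse-problem view described above reduces to Bayes' rule over the unknown state of the measured object; the notation below is an illustrative assumption, not the paper's:

\[
  p(s \mid x) \;=\; \frac{p(x \mid s)\, p(s)}{\sum_{s'} p(x \mid s')\, p(s')},
  \qquad
  \hat{s} \;=\; \arg\max_{s}\; p(s \mid x),
\]

where s ranges over candidate states (patterns) of the measured object and x is the observed data. The posterior directly supplies the metrological characteristics listed above: probabilities of errors of the 1st and 2nd kind for the decision ŝ, and the posterior entropy as a measure of residual uncertainty.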
KEYWORDS
Measurement Theory, Bayesian approach, Uncertainty.
Teaching Reading Skills More Effectively
Julia Koifman, Beit Ekstein Rupin high school, Emek Hefer, Israel
ABSTRACT
Reading is one of the most crucial skills in learning. Children learn to read very early, and before they start school they are expected to be able to read. Nevertheless, some of them struggle. For instance, some confuse letters or have difficulty with reading comprehension, while others have difficulty remembering, which may be the consequence of learning difficulties (LD) such as dyslexia, one of the most common cognitive disorders, which often affects reading and language skills. Researchers estimate that more than 40 million people in the USA have dyslexia, but only about 2 million of them have been diagnosed. At the same time, about 30% of people diagnosed with dyslexia also have autism spectrum disorders (ASD) or attention deficit hyperactivity disorder (ADHD) to one degree or another.
KEYWORDS
Dyslexia, dysgraphia, dyspraxia, ADHD, ASD, learning difficulties, neurodiversity.
MIMO Mobile-to-Mobile 5G Communication Systems Along Elliptical Geometrical Channel Modeling for Analysis of Channel Parameters
Samra Urooj Khan1, Sundas Naqeeb Khan2, Zoya Khan3, 1Department of Electrical Engineering Technology, Punjab University of Technology, Rasool, Mandi Bahauddin, Punjab, Pakistan, 2Department of Graphics, Computer Vision and Digital Systems, Silesian University of Technology, Gliwice, Poland
ABSTRACT
The condition of the propagation environment is essential to the design and execution of any transmission medium. As a result, mathematical modeling of propagation channels has long been a focus of study. Geometrical channel modeling, as demonstrated by researchers and theorists, is best suited for mobile-to-mobile (M2M) communication settings. Several hollow cylindrical geometrical channel models have been thoroughly studied in this research. According to the literature, an elliptical modeling technique can characterize the transmission channel more accurately. Furthermore, the influence of different channel coefficients across multiple-input multiple-output (MIMO) antennas is illustrated using geometrical models. Moreover, the velocity of a mobile station (MS) in the M2M scenario has not yet been assessed for MIMO antennas. A study of several MS parameters is given for 5G communication networks.
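A property worth recalling for elliptical geometrical channel models (standard in this literature, not specific to this paper): placing the transmitter and receiver at the two foci makes every scatterer S on an ellipse of semi-major axis a produce the same path length, hence the same delay:

\[
  d_{T \to S} + d_{S \to R} = 2a,
  \qquad
  \tau = \frac{2a}{c},
\]

where c is the speed of light. Sweeping a over a family of confocal ellipses therefore groups the multipath components into equal-delay bins, which is what makes the elliptical geometry a natural fit for delay-resolved M2M channel analysis.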
KEYWORDS
M2M, MIMO, MS, Geometrical modeling, Transmission, 5G, Communication.
Batch-stochastic Sub-gradient Method for Solving Non-smooth Convex Loss Function Problems
Kasimu Juma Ahmed, Mathematics Unit, Department of General Studies, Federal Polytechnic Bali, Taraba State, Nigeria
ABSTRACT
Mean Absolute Error (MAE) and Mean Square Error (MSE) are two loss functions that fit well into machine learning techniques for making accurate predictions on continuous data. MAE is non-differentiable at zero but robust to outliers, unlike MSE, which is differentiable everywhere but penalizes outliers heavily. The batch sub-gradient method is expensive but stable, because each iteration runs over the entire dataset, while the stochastic sub-gradient method is cheaper but less stable, because each update uses a single data point. A batch-stochastic sub-gradient method is developed that is more computationally efficient than the batch method and more stable than the stochastic method, because each update is defined over a small collection of data points. We tested the computational efficiency of the method using Structured Query Language (SQL). The new method shows greater stability, efficiency, accuracy and convergence than the existing batch and stochastic methods.
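A minimal sketch of the method's core loop, assuming linear regression under MAE (the paper's exact formulation and hyper-parameters are not reproduced here): each epoch sweeps mini-batches, and since |r| is non-smooth at r = 0, sign(r) serves as a sub-gradient.

import numpy as np

def minibatch_subgradient_mae(X, y, lr=0.01, batch=32, epochs=100, seed=0):
    """Batch-stochastic (mini-batch) sub-gradient descent for MAE loss."""
    n, d = X.shape
    w = np.zeros(d)
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):
            B = idx[start:start + batch]
            r = X[B] @ w - y[B]                # residuals on the mini-batch
            g = X[B].T @ np.sign(r) / len(B)   # sub-gradient of mean |r|
            w -= lr * g
    return w

With batch=n this reduces to the batch sub-gradient method and with batch=1 to the stochastic one, which is exactly the trade-off the abstract describes.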
KEYWORDS
Machine learning, Loss function, Sub-gradient, Mean Absolute Error (MAE), Prediction.
Microbe2Pixel: Taxonomy-Informed Deep-Learning Models and Explanations
B. Voermans1,2, M.C. de Goffau2,3, M. Nieuwdorp1,4, and E. Levin2, 1Department of Experimental Vascular Medicine, Amsterdam University Medical Centers, University of Amsterdam, Meibergdreef 9, Amsterdam, the Netherlands, 2HORAIZON Technology BV, Marshallaan 2, Delft, the Netherlands, 3Tytgat Institute for Liver and Intestinal Research, Amsterdam University Medical Centers, Meibergdreef 69-71, Amsterdam, the Netherlands, 4Department of Vascular Medicine, Amsterdam University Medical Centers, University of Amsterdam, Meibergdreef 9, Amsterdam, the Netherlands
ABSTRACT
In recent years, machine learning, especially deep learning, has garnered substantial attention in the biomedical field. For instance, deep learning has become a preferred method for medical image analysis tasks. However, in other areas like fecal metagenomics analysis, the application of deep learning remains underdeveloped. This can be attributed to the tabular nature of metagenomics data, feature sparsity, and the complexity of deep learning techniques, which often lead to perceived inexplicability. In this paper, we introduce Microbe2Pixel, an innovative technique that applies deep neural networks to fecal metagenomics data by transforming tabular data into images. This transformation is achieved by inferring location from the taxonomic information inherently present in the data. A significant advantage of our method is the use of transfer learning, which reduces the number of samples required for training compared to traditional deep learning. We also develop a local model-agnostic feature importance algorithm that provides interpretable explanations. We evaluate these explanations against other local image explainer methods using quantitative (statistical performance) and qualitative (biological relevance) assessments. Microbe2Pixel outperforms all other tested methods from both perspectives. The feature importance values align better with current microbiology knowledge and are more robust with respect to the number of samples used to train the model. This is particularly significant for the application of deep learning in smaller interventional clinical trials (e.g., fecal microbial transplant studies), where large sample sizes are unattainable and model interpretability is crucial.
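To convey the tabular-to-image step, here is a hypothetical Python sketch; the real Microbe2Pixel pixel layout is not specified in the abstract, so the lexicographic lineage ordering and the fixed image size below are illustrative assumptions only.

import numpy as np

def taxa_to_image(abundances, lineages, size=32):
    """Hypothetical sketch: lay taxa out on a pixel grid by lineage.
    abundances: dict taxon -> relative abundance in one sample.
    lineages: dict taxon -> tuple (phylum, class, ..., species)."""
    # Sorting by lineage keeps taxonomic neighbours on nearby pixels,
    # giving convolutions a meaningful local structure to exploit.
    order = sorted(lineages, key=lambda t: lineages[t])
    img = np.zeros(size * size, dtype=np.float32)
    for pix, taxon in enumerate(order[: size * size]):
        img[pix] = abundances.get(taxon, 0.0)
    return img.reshape(size, size)   # one greyscale image per sample

Each sample becomes an image, which is what enables the transfer learning from pretrained vision models that the abstract credits with reducing the required sample size.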
KEYWORDS
metagenomics, interpretable deep learning, local explanations.