In axiomatic set theory, ``large cardinal'' is a collective term for cardinals whose existence cannot be proved in the axiomatic system ZFC of set theory. Classical large cardinals include measurable cardinals and weakly compact cardinals. That a cardinal $\kappa$ is measurable or weakly compact is defined via the existence of filters on additive families over $\kappa$. By defining new large cardinals using a game called the filter game, Holy and Schlicht refined the hierarchy between measurable and weakly compact cardinals. An interesting problem about this game is its determinacy: while determinacy questions for several lengths were solved by Holy et al. and by Nielsen and Welch, the determinacy for length $\kappa^+$, for example, remained completely open. To resolve this issue, we define another game $G_\alpha(\kappa)$, give a negative solution to the previously open determinacy problem for $G_{\kappa^+}(\kappa)$, and show that the determinacy of two filter games whose lengths are different regular cardinals can be separated.
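For reference (this recalls only the standard notion of determinacy, not the specific rules of the Holy-Schlicht filter game): a strategy for a player in a game of length $\gamma$ assigns a next move to every partial play at which it is that player's turn, and the strategy is winning if every play following it is won by that player. A game $G$ is then
$$\text{$G$ is \emph{determined}} \iff \text{player I or player II has a winning strategy},$$
and the open problem above asks whether this holds for the filter game of length $\kappa^+$.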
Epistemic logic is a logic used to express the knowledge and beliefs of agents. In this talk, we ask: are there formulas that cannot be known or believed, at least in the sense of epistemic logic?
We first propose new, yet arguably the most straightforward, definitions of unknowability and unbelievability and analyze the sources of the two notions. We show that a formula is unknowable iff it is logically equivalent to a formula known as a Moore sentence. The absurdity of asserting such a sentence has been discussed over the years as Moore's paradox.
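As an illustration (a standard derivation using only the distribution of $K$ over conjunction and the factivity axiom T, both available in S5; $p$ is an arbitrary propositional letter), knowledge of the Moore sentence $p \wedge \neg K p$ is outright inconsistent:
$$K(p \wedge \neg K p) \;\rightarrow\; K p \wedge K \neg K p \;\rightarrow\; K p \wedge \neg K p \;\rightarrow\; \bot,$$
where the first implication distributes $K$ over the conjunction and the second applies T to the conjunct $K \neg K p$.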
Our main result shows that in the class of S5 frames, the static notions of unknowability and unbelievability in epistemic logic and the dynamic notions of always-informativeness and eventual self-refutation in dynamic epistemic logic are all equivalent to ``Moorean phenomena.'' These results are not only philosophically and linguistically interesting but are also expected to provide theoretical foundations for multi-agent systems, security protocols, and AI.
This study discusses the inverse kinematics problem and its analytical solution for the xArm 7 manipulator, which possesses seven revolute joints. The number of joints that can be moved independently is called the degrees of freedom (DoF); the xArm 7 is thus a 7-DoF manipulator. To solve the inverse kinematics problem for its position and orientation, it is necessary to eliminate the redundancy caused by the 7-DoF configuration. The proposed method derives a system of equations with a finite number of solutions by prescribing the angle of the joint farthest from the base and the position of the next-farthest joint. Although a numerical inverse kinematics solver has already been implemented for the xArm 7, to the best of the author's knowledge, an analytical solution has not yet been developed or implemented. We propose an analytical method that solves the equations formulated under the conditions above using a comprehensive Gröbner system. This method achieves more stable solutions than numerical methods and can find multiple solutions simultaneously. Furthermore, the ability to adjust the position of the next-farthest joint makes it an effective technique for obstacle avoidance.
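As a simplified illustration of the polynomial formulation (a minimal sketch on a hypothetical 2-link planar arm, not the 7-DoF xArm 7 model; link lengths and the target are made-up values, and SymPy's `groebner` computes an ordinary Gröbner basis for fixed target coordinates, whereas a comprehensive Gröbner system treats them as parameters):

```python
# Minimal sketch: inverse kinematics via a Groebner basis, on a hypothetical
# 2-link planar arm (NOT the xArm 7 model from the talk). Joint angles are
# encoded polynomially via cosine/sine variables c_i, s_i.
from sympy import symbols, groebner, solve, Rational

c1, s1, c2, s2 = symbols('c1 s1 c2 s2')
l1, l2 = 2, 1                            # assumed link lengths
px, py = Rational(5, 2), Rational(1, 2)  # assumed target position

# Forward-kinematics equations as polynomials (angle-sum identities),
# plus the Pythagorean constraints making each (c_i, s_i) a valid rotation.
eqs = [
    l1*c1 + l2*(c1*c2 - s1*s2) - px,
    l1*s1 + l2*(s1*c2 + c1*s2) - py,
    c1**2 + s1**2 - 1,
    c2**2 + s2**2 - 1,
]

# A lexicographic Groebner basis triangularizes the system, so the finitely
# many IK solutions can be read off by back-substitution.
G = groebner(eqs, c1, s1, c2, s2, order='lex')
print(G)

# For this small system, sympy can also enumerate the solutions directly.
for sol in solve(eqs, [c1, s1, c2, s2], dict=True):
    print(sol)
```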
This study discusses the inverse kinematics and trajectory planning problems of robot manipulators using computer algebra. The inverse kinematics problem is the task of determining the joint configurations required to achieve a given end-effector position and orientation. The trajectory planning problem extends the input from a single point to a trajectory. In this study, we treat the Elephant Robotics myCobot 280 as a 3-DoF manipulator by limiting the control to the three base joints. We propose a method for solving the inverse kinematics problem using Comprehensive Gröbner Systems (CGS) and certifying the existence of solutions using a CGS-based method of quantifier elimination (CGS-QE). Furthermore, for the trajectory planning problem, we propose a method to certify the existence of solutions to the inverse kinematics problem at every point along the trajectory. We have implemented these methods and verified their effectiveness.
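As a rough illustration of the trajectory-planning question (a minimal numeric sketch on the same hypothetical 2-link planar arm as above, not the myCobot 280 model; the CGS-QE method certifies solvability symbolically for every point of the trajectory, while this sketch only checks sampled points):

```python
# Minimal sketch: verify that the IK system has a real solution at sampled
# points of a target path. This numeric sampling is only a simplified
# stand-in for the symbolic CGS-QE certification described in the talk.
import numpy as np

l1, l2 = 2.0, 1.0  # assumed link lengths

def ik_solvable(px, py):
    # A 2-link planar arm reaches (px, py) iff |l1 - l2| <= dist <= l1 + l2.
    d = np.hypot(px, py)
    return abs(l1 - l2) <= d <= l1 + l2

# Hypothetical straight-line trajectory from (2.5, 0.5) to (1.5, 1.5).
ts = np.linspace(0.0, 1.0, 101)
path = [(2.5 + t*(1.5 - 2.5), 0.5 + t*(1.5 - 0.5)) for t in ts]

print(all(ik_solvable(px, py) for px, py in path))  # True: path stays reachable
```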
This presentation numerically investigates whether logarithmic behaviour is observed in the two-dimensional discrete Green's function. The domain was set as a unit square region, subdivided sufficiently finely, and a point $Q$ was set at a specific location to compute the discrete Green's function $G_h(\cdot,Q)$. The results showed that logarithmic behaviour was observed in $G_h(\cdot,Q)$ when $Q$ was located near the centre of the domain.
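A minimal sketch of such an experiment (assumed mesh size, source location, and probe points, not the talk's exact setup): assemble the 5-point Laplacian on the unit square with homogeneous Dirichlet data, solve with a discrete delta at $Q$, and compare the growth of $G_h(\cdot,Q)$ with the logarithmic profile $-\log r/(2\pi)$.

```python
# Minimal sketch: discrete Green's function of the 5-point Laplacian on the
# unit square (homogeneous Dirichlet data), source Q near the centre.
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

n = 128            # interior grid points per direction (assumed)
h = 1.0 / (n + 1)
N = n * n

def idx(i, j):     # row-major index of interior node (i, j)
    return i * n + j

# Assemble the 5-point Laplacian (boundary nodes eliminated).
A = lil_matrix((N, N))
for i in range(n):
    for j in range(n):
        k = idx(i, j)
        A[k, k] = 4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                A[k, idx(ii, jj)] = -1.0
A = csr_matrix(A) / h**2

# Point source at Q near the centre: discrete delta, i.e. 1/h^2 at one node.
iq = jq = n // 2
b = np.zeros(N)
b[idx(iq, jq)] = 1.0 / h**2
G = spsolve(A, b)  # G_h(., Q)

# Near Q, G_h should follow -log(r)/(2*pi) up to a bounded harmonic
# correction; print both along a grid line to inspect the logarithmic growth.
for k in (1, 2, 4, 8, 16):
    r = k * h
    print(r, G[idx(iq, jq + k)], -np.log(r) / (2 * np.pi))
```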
B. Cockburn et al. introduced an iterative post-processing technique for the spatial discretization of the one-dimensional transport equation by the discontinuous Galerkin (DG) method, referred to as turbo post-processing (TPP). For ordinary differential equations (ODEs), they proposed a post-processing that makes the DG solution continuous and increases the order of convergence by one. However, further extensions of this post-processing have not yet been investigated. In this talk, we propose a TPP scheme for the continuous Galerkin solution of ODEs and give an overview of our theoretical results. We also present several numerical results suggesting that the order of convergence increases by one with each iteration of the post-processing.
We consider asymptotic properties of L2-regularized logistic regression (L2-LR) in high-dimension, low-sample-size (HDLSS) settings, where the dimension tends to infinity while the sample size is fixed. Suppose we have two independent d-dimensional populations, each having an unknown mean vector and an unknown covariance matrix, and that we have independent and identically distributed observations from each population, with at least two observations per population. We show that, in logistic regression, the parameters minimizing the L2-regularized negative log-likelihood can be derived explicitly in the HDLSS settings. We also show that the L2-LR suffers from a substantial bias in the HDLSS settings. In particular, under certain conditions on the bias, strong inconsistency occurs, in which one of the misclassification rates converges to 1 asymptotically. In order to overcome such difficulties, we propose a bias-corrected L2-LR (BC-LR). In the HDLSS settings, we show that the BC-LR satisfies the consistency property in which the misclassification rates tend to zero asymptotically. Finally, we evaluate the performance of the BC-LR in numerical simulations and real data analyses.
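A minimal simulation sketch of the HDLSS setting (made-up dimensions, means, and regularization; scikit-learn's L2-penalized logistic regression serves as a stand-in, and the talk's BC-LR is not implemented here):

```python
# Minimal sketch: class-wise misclassification rates of L2-regularized
# logistic regression in an HDLSS regime (dimension >> sample size).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n1, n2 = 5000, 10, 10           # assumed: dimension far exceeds samples
mu = np.zeros(d); mu[:50] = 0.3    # assumed sparse mean difference

X1 = rng.normal(0.0, 1.0, (n1, d))         # class 1: mean 0
X2 = rng.normal(0.0, 1.0, (n2, d)) + mu    # class 2: mean mu
X = np.vstack([X1, X2]); y = np.r_[np.zeros(n1), np.ones(n2)]

clf = LogisticRegression(penalty='l2', C=1.0, max_iter=1000).fit(X, y)

# Class-wise misclassification rates on fresh test data: in HDLSS regimes
# these can be badly unbalanced, the bias phenomenon the talk corrects.
T1 = rng.normal(0.0, 1.0, (1000, d))
T2 = rng.normal(0.0, 1.0, (1000, d)) + mu
print('error on class 1:', np.mean(clf.predict(T1) != 0))
print('error on class 2:', np.mean(clf.predict(T2) != 1))
```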
This study considers the estimation of the regression coefficient matrix in high dimensions. In high-dimensional settings, using the least squares method to estimate regression coefficients results in solutions that are highly susceptible to noise, leading to increased estimation errors. Therefore, to mitigate the effect of noise, we apply a signal-matrix reconstruction method based on the noise reduction methodology of Yata and Aoshima (2012, 2016) to regression models. This yields an estimator of the regression coefficient matrix that remains usable in high dimensions. We evaluate the estimation error of the least squares method and an upper bound on the error of the new estimator, and compare the behavior of the two estimators in high dimensions. Furthermore, through numerical experiments, we demonstrate that under high-dimensional conditions, the new estimator indeed exhibits better performance than the least squares method.
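A minimal sketch of the comparison (a truncated SVD of the fitted signal matrix is used here only as a generic stand-in for the Yata-Aoshima noise reduction methodology; sizes and the rank are assumed):

```python
# Minimal sketch: least squares vs a low-rank signal-reconstruction
# estimator in a high-dimensional multivariate regression Y = X B + E.
import numpy as np

rng = np.random.default_rng(1)
n, p, q, r = 30, 200, 150, 3            # samples, predictors, responses, rank

X = rng.normal(size=(n, p))
B = rng.normal(size=(p, r)) @ rng.normal(size=(r, q)) / np.sqrt(p)  # low rank
Y = X @ B + rng.normal(size=(n, q))

B_ls = np.linalg.pinv(X) @ Y            # minimum-norm least squares

# Denoise the fitted signal matrix by truncating its SVD at rank r, then
# refit the coefficients to the reconstructed signal.
U, s, Vt = np.linalg.svd(X @ B_ls, full_matrices=False)
S_hat = (U[:, :r] * s[:r]) @ Vt[:r]
B_new = np.linalg.pinv(X) @ S_hat

print('LS error      :', np.linalg.norm(B_ls - B))
print('denoised error:', np.linalg.norm(B_new - B))
```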
Two-dimensional principal component analysis (2DPCA) is a fundamental method for analyzing matrix-valued data. In this paper, we study the consistency of 2DPCA in high-dimension, low-sample-size settings. We show that the sample 2DPCA operator is affected by noise and establish consistency of leading eigenvalues and eigenvectors under mild assumptions, supported by simulations.
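For concreteness, a minimal sketch of the sample 2DPCA operator (illustrative sizes; this is the textbook construction, not the paper's HDLSS analysis):

```python
# Minimal sketch of standard 2DPCA on matrix-valued samples.
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 20, 60, 40                 # assumed: n samples of p x q matrices
X = rng.normal(size=(n, p, q))

Xbar = X.mean(axis=0)
# Sample 2DPCA (image-covariance) operator: average of (X_i - Xbar)^T (X_i - Xbar).
G = sum((Xi - Xbar).T @ (Xi - Xbar) for Xi in X) / n   # q x q

evals, evecs = np.linalg.eigh(G)      # ascending order
print('leading eigenvalues:', evals[::-1][:5])
# Projecting each sample onto the top eigenvectors gives the 2DPCA features.
Y = X @ evecs[:, ::-1][:, :5]         # n x p x 5 feature matrices
```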
This paper considers the problem of clustering p×q-dimensional matrix-type data when n observations are available. Specifically, we focus on high-dimensional low-sample-size data where both p and q are much larger than n, a challenging setting that is increasingly common in contemporary data analysis but poses significant computational and statistical difficulties.
To address this problem, we treat the matrix data as a p×q×n tensor obtained by stacking the n observations along the third mode, and explore methods utilizing tensor decomposition techniques, with particular emphasis on the CP (CANDECOMP/PARAFAC) decomposition. This approach allows us to capture the multi-way structure inherent in the data while reducing dimensionality. We introduce CP-ALS (alternating least squares), a widely used iterative algorithm for CP decomposition, and emphasize that selecting the parameter corresponding to the rank of the CP decomposition is crucial for obtaining a meaningful and accurate decomposition; the choice of rank directly impacts both the interpretability and the clustering performance of the resulting model. While rank estimation for the CP decomposition is in general known to be NP-hard, we verify that the spike-number estimation method for eigenvalues using the cross-data-matrix (CDM) methodology, originally proposed by Aoshima and Yata (2018) for high-dimensional eigenvalue problems, can be effectively adapted to rank estimation of the CP decomposition under appropriate modeling assumptions.
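A minimal sketch of the pipeline (TensorLy's CP-ALS followed by k-means on the sample-mode factor; the rank, sizes, and cluster count are assumed illustration values rather than the CDM-based estimates of the talk):

```python
# Minimal sketch: CP decomposition of a p x q x n data tensor with CP-ALS,
# then clustering the observations via the sample-mode factor matrix.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
p, q, n, rank = 50, 40, 30, 4

T = tl.tensor(rng.normal(size=(p, q, n)))

# CP-ALS: alternating least squares updates of the three factor matrices.
weights, (A, B, C) = parafac(T, rank=rank, n_iter_max=200)

# C has one row per observation; cluster the observations in CP-factor space.
labels = KMeans(n_clusters=3, n_init=10).fit_predict(C)
print(labels)
```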
This study proposes conditions to keep nonzero components in sparse estimation of high-dimensional mean vectors using the automatic sparse estimation methodology of Yata and Aoshima (2025).
Yata and Aoshima (2025) proposed the automatic sparse estimation methodology and constructed consistent estimators of eigenvectors. Using this method, a sparse estimation method for high-dimensional mean vectors was proposed, and previous studies showed that it is consistent with respect to the Euclidean norm in high-dimensional settings.
However, during the sparsification process, some components of the mean vector that are truly nonzero may be estimated as zero. In that case, important variables may be removed, which is a problem for feature selection. To solve this problem, this study proposes conditions under which the estimators of the nonzero components of the mean vector are retained, and summarizes them as a theorem. It also proposes conditions under which only the estimators of the nonzero components are retained, and summarizes them as a corollary. Finally, as an application, we propose a discriminant function using the automatic sparse estimation methodology.
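A minimal sketch of the phenomenon (hard thresholding of the sample mean is used here only as a generic stand-in for the automatic sparse estimation methodology; the dimension, sparsity, and threshold rule are assumed):

```python
# Minimal sketch: which truly nonzero components of a high-dimensional mean
# vector survive a threshold-based sparsification step.
import numpy as np

rng = np.random.default_rng(4)
d, n, s = 2000, 25, 20
mu = np.zeros(d); mu[:s] = 1.0          # s truly nonzero components

X = rng.normal(size=(n, d)) + mu
mu_hat = X.mean(axis=0)

# Keep a component only when it exceeds a noise-level threshold; the talk's
# question is when the truly nonzero components survive this step.
thr = np.sqrt(2 * np.log(d) / n)
mu_sparse = np.where(np.abs(mu_hat) > thr, mu_hat, 0.0)

kept = np.flatnonzero(mu_sparse)
print('nonzero components kept:', np.sum(kept < s), 'of', s)
print('zero components kept   :', np.sum(kept >= s))
```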
[References]
Yata, K. and Aoshima, M. (2025). Automatic sparse PCA for high-dimensional data, Statistica Sinica, 35, 1069-1090.
I will give an overview of: what wall-crossing is; geometric techniques for producing wall-crossing formulas; and recent advances in such techniques for enumerative invariants, particularly those of "3-Calabi-Yau type", in various equivariant cohomology theories (e.g. equivariant K-theory or equivariant elliptic cohomology). This includes Donaldson-Thomas theory, i.e. the study of enumerative invariants of moduli spaces of coherent sheaves on smooth complex (quasi-)projective Calabi-Yau 3-folds, generalizing results of Joyce-Song and Kontsevich-Soibelman.