This paper studies algorithms for the minimisation of weighted automata. It starts with the definition of morphisms, which generalises and unifies the notion of bisimulation for the whole class of weighted automata, and establishes the uniqueness of a minimal quotient for every automaton, obtained by partition refinement. From a general scheme for the refinement of partitions, two strategies are considered for the computation of the minimal quotient: the Domain Split and the Predecessor Class Split algorithms. They correspond respectively to the classical Moore and Hopcroft algorithms for the computation of the minimal quotient of deterministic Boolean automata. We show that these two strategies yield algorithms with the same quadratic complexity, and we study the cases where the second one can be improved in order to achieve a complexity similar to that of Hopcroft's algorithm.
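To make the refinement scheme concrete, here is a minimal Python sketch of the Domain Split strategy specialised to deterministic Boolean automata, where it coincides with Moore's algorithm; the function names and the encoding of the automaton are ours, and the paper's algorithms of course operate on general weighted automata.

```python
# Minimal sketch (our naming): Moore-style partition refinement, i.e. the
# Domain Split strategy specialised to deterministic Boolean automata.

def moore_minimize(states, alphabet, delta, accepting):
    """Return the coarsest partition of `states` compatible with `delta`.

    delta:     dict mapping (state, letter) -> state
    accepting: set of accepting states
    """
    # Initial partition: accepting vs. non-accepting states.
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [block for block in partition if block]
    while True:
        block_id = {q: i for i, block in enumerate(partition) for q in block}
        refined = []
        for block in partition:
            # Domain split: group states by the tuple of blocks they reach.
            groups = {}
            for q in block:
                sig = tuple(block_id[delta[(q, a)]] for a in alphabet)
                groups.setdefault(sig, set()).add(q)
            refined.extend(groups.values())
        if len(refined) == len(partition):
            return refined  # fixed point: the minimal quotient
        partition = refined

# Tiny example over {a, b}: states 1 and 2 turn out to be equivalent.
delta = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 3, (1, 'b'): 3,
         (2, 'a'): 3, (2, 'b'): 3, (3, 'a'): 3, (3, 'b'): 3}
print(moore_minimize([0, 1, 2, 3], 'ab', delta, accepting={3}))
```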
We propose two hard problems on cellular automata. The problems are: [DDP$^M_{n,p}$] Given two \emph{randomly} chosen configurations $t$ and $s$ of a cellular automaton of length $n$, find the number of transitions $\tau$ between $s$ and $t$. [SDDP$^\delta_{k,n}$] Given two \emph{randomly} chosen configurations $s$ of a cellular automaton of length $n$ and $x$ of length $k<n$, find a configuration $t$ such that $k$ cells of $t$ are fixed to $x$ and $t$ is reachable from $s$ within $\delta$ transitions. We show that the discrete logarithm problem over finite fields reduces to DDP$^M_{n,p}$ and that the short integer solution problem over lattices reduces to SDDP$^\delta_{k,n}$. The advantage of using such problems as hardness assumptions in cryptographic protocols is that proving the security of a protocol requires only a reduction from these problems to the designed protocol. We design one such protocol, namely a proof-of-work scheme, from SDDP$^\delta_{k,n}$.
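As a toy illustration only (the abstract fixes neither a particular rule nor a boundary condition), the following Python sketch evolves a one-dimensional binary cellular automaton, elementary rule 30 with circular boundary, and solves a brute-force version of the transition-distance problem for tiny instances; the cryptographic instances DDP$^M_{n,p}$ and SDDP$^\delta_{k,n}$ are exactly those for which no such efficient search should exist.

```python
# Illustrative only: a 1-D binary CA (elementary rule 30, circular boundary
# -- both our choices) and a brute-force toy version of the DDP problem:
# given s and t, find tau with step^tau(s) = t.

def step(config, rule=30):
    """One synchronous update of a circular binary CA."""
    n = len(config)
    table = [(rule >> i) & 1 for i in range(8)]
    return tuple(table[(config[(i - 1) % n] << 2)
                       | (config[i] << 1)
                       | config[(i + 1) % n]] for i in range(n))

def toy_ddp(s, t, max_steps=1 << 12):
    """Exhaustive search: infeasible at cryptographic sizes, which is the point."""
    c = s
    for tau in range(max_steps):
        if c == t:
            return tau
        c = step(c)
    return None

s = (1, 0, 0, 1, 0, 1, 1, 0)
t = s
for _ in range(5):
    t = step(t)
print(toy_ddp(s, t))  # -> 5
```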
It is known that each word of length $n$ contains at most $n+1$ distinct palindromes. A finite rich word is a word with the maximal number of palindromic factors. The definition of palindromic richness can be naturally extended to infinite words. Sturmian words and Rote complementary symmetric sequences form two classes of binary rich words, while episturmian words and words coding symmetric $d$-interval exchange transformations give us other examples on larger alphabets. In this paper we look for morphisms of the free monoid which allow us to construct new rich words from already known rich words. We focus on morphisms in Class $P_{ret}$. This class contains morphisms that are injective on the alphabet and satisfy a particular palindromicity property: for every morphism $\varphi$ in the class there exists a palindrome $w$ such that $\varphi(a)w$ is a first complete return word to $w$ for each letter $a$. We characterize the $P_{ret}$ morphisms which preserve richness over a binary alphabet. We also study marked $P_{ret}$ morphisms acting on alphabets with more letters. In particular, we show that every Arnoux-Rauzy morphism is conjugate to a morphism in Class $P_{ret}$ and that it preserves richness.
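As a quick illustration of the defining property, one can test finite richness by counting distinct palindromic factors, including the empty word; the naive enumeration below (our own, cubic-time) suffices for small words.

```python
# Naive richness test: a word of length n is rich iff it has exactly
# n + 1 distinct palindromic factors (counting the empty word).

def palindromic_factors(w):
    pals = {""}
    for i in range(len(w)):
        for j in range(i + 1, len(w) + 1):
            f = w[i:j]
            if f == f[::-1]:
                pals.add(f)
    return pals

def is_rich(w):
    return len(palindromic_factors(w)) == len(w) + 1

print(is_rich("abacaba"))  # True: 8 palindromic factors for length 7
print(is_rich("abcacb"))   # False: only 6 palindromic factors for length 6
```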
An evident challenge ahead for the integrated-circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and are thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort required to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves the IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the automated AI/ML approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
We consider the numerical approximation of the ill-posed data assimilation problem for stationary convection-diffusion equations and extend our previous analysis in [Numer. Math. 144, 451--477, 2020] to the convection-dominated regime. Slightly adjusting the stabilized finite element method proposed for dominant diffusion, we draw upon a local error analysis to obtain quasi-optimal convergence along the characteristics of the convective field through the data set. The weight function multiplying the discrete solution is taken to be Lipschitz, and a corresponding superapproximation result (discrete commutator property) is proven. The effect of data perturbations is included in the analysis, and we conclude the paper with some numerical experiments.
In this study, we consider the nonlinear Schr\"odinger equation (NLS) with zero boundary conditions on a two- or three-dimensional large finite cubic lattice. We prove that its solution converges to that of the NLS on the entire Euclidean space as the lattice spacing shrinks and the domain expands simultaneously. Moreover, we obtain a precise global-in-time bound for the rate of convergence. Our proof relies heavily on Strichartz estimates on a finite lattice. A key observation is that, compared to the case of a lattice of fixed size [Y. Hong, C. Kwak, S. Nakamura, and C. Yang, \emph{Finite difference scheme for two-dimensional periodic nonlinear {S}chr\"{o}dinger equations}, Journal of Evolution Equations \textbf{21} (2021), no.~1, 391--418], the loss of regularity in the Strichartz estimates can be reduced as the domain expands, depending on the speed of expansion. This allows us to address the physically important three-dimensional case.
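For intuition only, here is a toy Python sketch of a discrete NLS on a one-dimensional lattice with zero boundary values, integrated with a classical RK4 time-stepper; the time discretization and all parameters are our choices, while the paper's result concerns the two- and three-dimensional spatial discretization and its continuum limit.

```python
import numpy as np

# Toy 1-D discrete NLS, i u_t = -Lap_h u + |u|^2 u, zero boundary values,
# integrated with classical RK4 (our choice of time-stepper).

def discrete_laplacian(u, h):
    up = np.pad(u, 1)                      # zero (Dirichlet) boundary
    return (up[2:] - 2 * up[1:-1] + up[:-2]) / h**2

def rhs(u, h):
    # u_t = i * (Lap_h u - |u|^2 u)
    return 1j * (discrete_laplacian(u, h) - np.abs(u)**2 * u)

def rk4_step(u, dt, h):
    k1 = rhs(u, h)
    k2 = rhs(u + 0.5 * dt * k1, h)
    k3 = rhs(u + 0.5 * dt * k2, h)
    k4 = rhs(u + dt * k3, h)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

L, N = 20.0, 512
x = np.linspace(-L, L, N)
h = x[1] - x[0]
u = np.exp(-x**2).astype(complex)          # smooth, essentially supported inside the box
dt = 0.1 * h**2                            # explicit scheme needs dt ~ h^2
for _ in range(200):
    u = rk4_step(u, dt, h)
print("mass:", h * np.sum(np.abs(u)**2))   # approximately conserved
```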
Scoring rules aggregate individual rankings by assigning points to each position in each ranking such that the total sum of points provides the overall ranking of the alternatives. They are widely used in sports competitions consisting of multiple contests. We study the tradeoff between two risks in this setting: (1) the threat of an early clinch, when the title is secured before the last contest(s) of the competition take place; (2) the danger of winning the competition without finishing first in any contest. In particular, four historical points scoring systems of the Formula One World Championship are compared with the family of geometric scoring rules, recently proposed via an axiomatic approach. The schemes used in practice are found to be competitive with respect to these goals, and the current rule seems to be a reasonable compromise, close to the Pareto frontier. Our results shed more light on the evolution of the Formula One points scoring systems and contribute to the problem of choosing the set of point values.
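To illustrate how such comparisons are run, the following Python sketch totals a toy season under the post-2010 Formula One top-10 scheme and under a geometric scoring rule; the geometric normalisation $(q^{n-j}-1)/(q-1)$ for position $j$ and all toy data are our assumptions, not the paper's exact experimental setup.

```python
import random

# Toy comparison of points schemes (illustrative data, our normalisation).
F1_2010 = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]   # top-10 scheme, bonus points ignored

def geometric_rule(num_positions, q=2.0):
    # Position j (1-indexed) gets (q**(n - j) - 1) / (q - 1); one common
    # normalisation of geometric scoring rules -- an assumption here.
    n = num_positions
    return [(q**(n - j) - 1) / (q - 1) for j in range(1, n + 1)]

def season_totals(season, points):
    """season: list of contests, each a list of drivers in finishing order."""
    totals = {}
    for contest in season:
        for pos, driver in enumerate(contest):
            if pos < len(points):
                totals[driver] = totals.get(driver, 0) + points[pos]
    return sorted(totals.items(), key=lambda kv: -kv[1])

drivers = list("ABCDEF")
random.seed(1)
season = [random.sample(drivers, len(drivers)) for _ in range(10)]
print("F1 2010:  ", season_totals(season, F1_2010))
print("geometric:", season_totals(season, geometric_rule(len(drivers))))
```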
In this paper we consider a class of unfitted finite element methods for scalar elliptic problems. These so-called CutFEM methods use standard finite element spaces on a fixed unfitted triangulation, combined with the Nitsche technique and a ghost penalty stabilization. As a model problem we consider the application of such a method to the Poisson interface problem. We introduce and analyze a new class of preconditioners that is based on a subspace decomposition approach. The unfitted finite element space is split into two subspaces, where one subspace is the standard finite element space associated with the background mesh and the second subspace is spanned by all cut basis functions corresponding to nodes on the cut elements. We show that this splitting is stable, uniformly in the discretization parameter and in the location of the interface in the triangulation. Based on this splitting, we introduce an efficient preconditioner that is uniformly spectrally equivalent to the stiffness matrix. Using a similar splitting, it is shown that the same preconditioning approach can also be applied to a fictitious domain CutFEM discretization of the Poisson equation. Results of numerical experiments are included that illustrate the optimality of such preconditioners for the Poisson interface problem and the Poisson fictitious domain problem.
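The algebraic structure of such a preconditioner is easy to sketch: with restrictions $R_i$ onto the two subspaces, one applies $P^{-1}r = R_1^T A_1^{-1} R_1 r + R_2^T A_2^{-1} R_2 r$ with $A_i = R_i A R_i^T$. The Python sketch below uses an artificial, non-overlapping index split of a toy SPD matrix purely to show the mechanics; in the paper the subspaces are the background finite element space and the span of the cut basis functions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Additive two-subspace preconditioner on a toy SPD matrix; the index
# split below is artificial and only illustrates the mechanics.
n = 200
A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
idx1 = np.arange(0, n - 20)    # stand-in for "background mesh" unknowns
idx2 = np.arange(n - 20, n)    # stand-in for "cut element" unknowns

def restriction(idx):
    data = np.ones(len(idx))
    return sp.csc_matrix((data, (np.arange(len(idx)), idx)), shape=(len(idx), n))

R1, R2 = restriction(idx1), restriction(idx2)
solve1 = spla.factorized(sp.csc_matrix(R1 @ A @ R1.T))  # applies A_1^{-1}
solve2 = spla.factorized(sp.csc_matrix(R2 @ A @ R2.T))  # applies A_2^{-1}

def apply_prec(r):
    return R1.T @ solve1(R1 @ r) + R2.T @ solve2(R2 @ r)

P = spla.LinearOperator((n, n), matvec=apply_prec, dtype=float)
b = np.ones(n)
x, info = spla.cg(A, b, M=P)
print("CG info:", info, " residual:", np.linalg.norm(A @ x - b))
```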
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of the local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm, called multi-step primal-dual (MSPD), together with its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network impacts only a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even for non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS), based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
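The smoothing idea behind DRS can be sketched in a few lines: replace the non-smooth $f$ by a local average $f_\gamma(x) = \mathbb{E}[f(x + \gamma v)]$ over a small ball and estimate its gradient from function values only. The Python snippet below uses a standard two-point zeroth-order estimator with directions uniform on the sphere; the constants and the sampling distribution are illustrative, not the paper's exact construction.

```python
import numpy as np

# Two-point estimator of the gradient of the ball-averaged function
# f_gamma(x) = E[f(x + gamma * v)], with directions u uniform on the sphere.
def smoothed_grad(f, x, gamma=0.1, num_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    d = x.size
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                      # uniform direction
        g += d * (f(x + gamma * u) - f(x - gamma * u)) / (2 * gamma) * u
    return g / num_samples

f = lambda x: np.abs(x).sum()       # non-smooth test function (L1 norm)
x = np.array([1.0, -2.0, 0.5])
print(smoothed_grad(f, x))          # approx. sign(x) = [1, -1, 1]
```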
This work considers the problem of provably optimal reinforcement learning for episodic finite-horizon MDPs, i.e., how an agent learns to maximize its long-term reward in an uncertain environment. The main contribution is a novel algorithm, Variance-reduced Upper Confidence Q-learning (vUCQ), which enjoys a regret bound of $\widetilde{O}(\sqrt{HSAT} + H^5SA)$, where $T$ is the number of time steps the agent acts in the MDP, $S$ is the number of states, $A$ is the number of actions, and $H$ is the (episodic) horizon time. This is the first regret bound that is both sub-linear in the model size and asymptotically optimal. The algorithm is sub-linear in that the time to achieve $\epsilon$-average regret for any constant $\epsilon$ is $O(SA)$, which is far fewer samples than are required to learn any non-trivial estimate of the transition model (the transition model is specified by $O(S^2A)$ parameters). The importance of sub-linear algorithms is largely the motivation for algorithms such as $Q$-learning and other "model free" approaches. The vUCQ algorithm also enjoys minimax optimal regret in the long run, matching the $\Omega(\sqrt{HSAT})$ lower bound. vUCQ is a successive refinement method in which the algorithm reduces the variance in the $Q$-value estimates and couples this estimation scheme with an upper-confidence-based algorithm. Technically, it is the coupling of these two techniques that yields both the sub-linear regret property and the asymptotically optimal regret.
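For concreteness, here is a heavily simplified Python sketch of Q-learning with an upper-confidence bonus on a toy episodic MDP; it shows only the optimism mechanism (learning rate and bonus in the style of UCB-type Q-learning) and omits the variance-reduction coupling that is the paper's actual contribution, so all constants are illustrative.

```python
import numpy as np

class ToyChain:
    """Toy episodic MDP with states {0, 1}: action 1 moves right, state 1 pays 1."""
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(self.s + a, 1)
        return self.s, float(self.s == 1)

def ucb_q_learning(env, S, A, H, num_episodes, c=1.0):
    Q = np.full((H, S, A), float(H))     # optimistic initialization
    N = np.zeros((H, S, A))              # visit counts
    for _ in range(num_episodes):
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))
            s_next, r = env.step(a)
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)    # step size from UCB-type analyses
            bonus = c * np.sqrt(H**3 * np.log(num_episodes) / t)
            future = np.max(Q[h + 1, s_next]) if h + 1 < H else 0.0
            Q[h, s, a] = (1 - alpha) * Q[h, s, a] + alpha * min(r + future + bonus, H)
            s = s_next
    return Q

Q = ucb_q_learning(ToyChain(), S=2, A=2, H=3, num_episodes=2000)
print(Q[0, 0])   # optimistic Q-values at the initial state; action 1 wins
```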
In this paper, we study the optimal convergence rate for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
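As a minimal point of reference (not the accelerated dual method analysed in the paper), the following Python sketch runs plain decentralized gradient descent with a gossip matrix on a ring of five nodes; its failure to reach exact consensus with a constant step is precisely the kind of gap that dual-based optimal methods close.

```python
import numpy as np

# Plain decentralized gradient descent on a 5-node ring. Node i holds
# f_i(x) = 0.5 * (x - b_i)^2, so the minimizer of F = sum_i f_i is mean(b).
m = 5
b = np.arange(m, dtype=float)

# Symmetric, doubly stochastic gossip matrix for the ring (uniform weights).
W = np.zeros((m, m))
for i in range(m):
    for j in (i - 1, i, i + 1):
        W[i, j % m] = 1.0 / 3.0

x = np.zeros(m)        # one scalar estimate per node
eta = 0.1
for _ in range(500):
    x = W @ x - eta * (x - b)     # gossip step + local gradient step

print(x, "target:", b.mean())  # entries hover near 2.0 but do not agree exactly
```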