
Generating series are crucial in enumerative combinatorics, analytic combinatorics, and combinatorics on words. Though it might seem at first sight that generating Dirichlet series are used less in these fields than ordinary and exponential generating series, there are many notable papers where they play a fundamental role, as can be seen in particular in the work of Flajolet and several of his co-authors. In this paper, we study Dirichlet series of integers with missing digits or blocks of digits in some integer base $b$, i.e., where the summation ranges over the integers whose expansions form some language strictly included in the set of all words on the alphabet $\{0, 1, \dots, b-1\}$ that do not begin with a $0$. We show how to unify and extend results proved by Nathanson in 2021 and by K\"ohler and Spilker in 2009. En route, we encounter several sequences from Sloane's On-Line Encyclopedia of Integer Sequences, as well as some famous $q$-automatic or $q$-regular sequences.
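A concrete special case: over base-10 integers that avoid the digit 9, this is the Dirichlet-series analogue of the classical Kempner series. Since there are roughly $x^{\log_{10} 9}$ such integers up to $x$, the series converges precisely for $\Re(s) > \log_{10} 9 \approx 0.954$. Below is a minimal Python sketch (not from the paper) that evaluates partial sums by enumerating the digit-restricted expansions length by length:

```python
def restricted_dirichlet_sum(s, b=10, forbidden=(9,), max_len=5):
    """Partial sum of sum n^(-s) over integers whose base-b expansion
    avoids the digits in `forbidden`, taken over all expansions of
    length at most max_len (no leading zeros)."""
    allowed = [d for d in range(b) if d not in forbidden]
    level = [d for d in allowed if d != 0]   # the 1-digit integers
    total = 0.0
    for _ in range(max_len):
        total += sum(n ** (-s) for n in level)
        level = [b * n + d for n in level for d in allowed]  # append a digit
    return total

# Partial sum of the "no digit 9" series at s = 2; the full series
# converges for every s > log_10(9).
print(restricted_dirichlet_sum(2.0))
```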

Related Content

Robust Markov Decision Processes (RMDPs) are a widely used framework for sequential decision-making under parameter uncertainty. RMDPs have been extensively studied when the objective is to maximize the discounted return, but little is known about average optimality (optimizing the long-run average of the rewards obtained over time) and Blackwell optimality (remaining discount optimal for all discount factors sufficiently close to 1). In this paper, we prove several foundational results for RMDPs beyond the discounted return. We show that average optimal policies can be chosen to be stationary and deterministic for sa-rectangular RMDPs but, perhaps surprisingly, that history-dependent (Markovian) policies strictly outperform stationary policies for average optimality in s-rectangular RMDPs. We also study Blackwell optimality for sa-rectangular RMDPs, where we show that {\em approximate} Blackwell optimal policies always exist, although Blackwell optimal policies may not; we also provide a sufficient condition for their existence that encompasses virtually all examples from the literature. We then discuss the connection between average and Blackwell optimality, and we describe several algorithms to compute the optimal average return. Interestingly, our approach leverages the connections between RMDPs and stochastic games.
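For concreteness, the two objectives can be written as follows in common notation (a sketch, not verbatim from the paper), for a policy $\pi$ and an uncertainty set $\mathcal{P}$ of transition kernels:

```latex
% Robust average return of a policy \pi:
g^{\pi} \;=\; \inf_{P \in \mathcal{P}} \;
  \liminf_{T \to \infty} \frac{1}{T}\,
  \mathbb{E}^{\pi,P}\!\left[\sum_{t=0}^{T-1} r(s_t, a_t)\right].

% Blackwell optimality: \pi^\star is discount optimal for every discount
% factor close enough to 1, i.e. there exists \bar{\gamma} \in (0,1) with
\pi^\star \;\in\; \arg\max_{\pi} \, v^{\pi}_{\gamma}
  \qquad \text{for all } \gamma \in (\bar{\gamma}, 1),
% where v^{\pi}_{\gamma} denotes the robust \gamma-discounted return.
```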

One of the main challenges in interpreting black-box models is the need to uniquely decompose square-integrable functions of non-independent random inputs into a sum of functions of every possible subset of variables; dealing with dependencies among inputs, however, can be complicated. We propose a novel framework to study this problem, linking three domains of mathematics: probability theory, functional analysis, and combinatorics. We show that, under two reasonable assumptions on the inputs (non-perfect functional dependence and non-degenerate stochastic dependence), it is always possible to decompose such a function uniquely. This generalizes the well-known Hoeffding decomposition. The elements of this decomposition can be expressed using oblique projections, and they enable novel interpretability indices for evaluation and variance decomposition purposes. The properties of these novel indices are studied and discussed. This generalization offers a path towards more precise uncertainty quantification, which can benefit sensitivity analysis and interpretability studies whenever the inputs are dependent. The decomposition is illustrated analytically, and the challenges of adopting these results in practice are discussed.
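For reference, the classical Hoeffding (functional ANOVA) decomposition being generalized reads, for independent inputs $X = (X_1, \dots, X_d)$ and square-integrable $f$ (standard notation, not verbatim from the paper):

```latex
f(X) \;=\; \sum_{A \subseteq \{1,\dots,d\}} f_A(X_A),
\qquad f_{\emptyset} = \mathbb{E}[f(X)],
% the summands are mutually orthogonal in L^2, so the variance is additive:
\operatorname{Var}\big(f(X)\big)
  \;=\; \sum_{\emptyset \neq A \subseteq \{1,\dots,d\}}
        \operatorname{Var}\big(f_A(X_A)\big).
```

Under independence the projections defining the $f_A$ are orthogonal; it is precisely this orthogonality that fails for dependent inputs, which is why oblique projections appear in the generalization.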

This paper focuses on two highly publicized formal trade-offs in the field of responsible artificial intelligence (AI) -- between predictive accuracy and fairness and between predictive accuracy and interpretability. These formal trade-offs are often taken by researchers, practitioners, and policy-makers to directly imply corresponding tensions between underlying values. Thus interpreted, the trade-offs have formed a core focus of normative engagement in AI governance, accompanied by a particular division of labor along disciplinary lines. This paper argues against this prevalent interpretation by drawing attention to three sets of considerations that are critical for bridging the gap between these formal trade-offs and their practical impacts on relevant values. I show how neglecting these considerations can distort our normative deliberations, and result in costly and misaligned interventions and justifications. Taken together, these considerations form a sociotechnical framework that could guide those involved in AI governance to assess how, in many cases, we can and should have higher aspirations than the prevalent interpretation of the trade-offs would suggest. I end by drawing out the normative opportunities and challenges that emerge out of these considerations, and highlighting the imperative of interdisciplinary collaboration in fostering responsible AI.

We show that the known list-decoding algorithms for univariate multiplicity and folded Reed-Solomon (FRS) codes can be made to run in nearly-linear time. This yields, to our knowledge, the first known family of codes that can be decoded in nearly-linear time, even as they approach the list-decoding capacity. Univariate multiplicity codes and FRS codes are natural variants of Reed-Solomon codes that were discovered and studied for their applications to list-decoding. It is known that for every $\epsilon >0$, and rate $R \in (0,1)$, there exist explicit families of these codes that have rate $R$ and can be list-decoded from a $(1-R-\epsilon)$ fraction of errors with constant list size in polynomial time (Guruswami & Wang, IEEE Trans. Inf. Theory 2013; Kopparty, Ron-Zewi, Saraf & Wootters, SIAM J. Comput. 2023). In this work, we present randomized algorithms that perform the above tasks in nearly-linear time. Our algorithms have two main components. The first builds upon the lattice-based approach of Alekhnovich (IEEE Trans. Inf. Theory 2005), who designed a nearly-linear time list-decoding algorithm for Reed-Solomon codes approaching the Johnson radius. As part of the second component, we design nearly-linear time algorithms for two natural algebraic problems. The first algorithm solves linear differential equations of the form $Q\left(x, f(x), \frac{df}{dx}, \dots,\frac{d^m f}{dx^m}\right) \equiv 0$ where $Q$ has the form $Q(x,y_0,\dots,y_m) = \tilde{Q}(x) + \sum_{i = 0}^m Q_i(x)\cdot y_i$. The second solves functional equations of the form $Q\left(x, f(x), f(\gamma x), \dots,f(\gamma^m x)\right) \equiv 0$ where $\gamma$ is a high-order field element. These algorithms can be viewed as generalizations of the classical algorithms of Sieveking (Computing 1972) and Kung (Numer. Math. 1974) for computing the modular inverse of a power series, and might be of independent interest.
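The Sieveking-Kung connection is concrete enough to sketch: the classical algorithm inverts a power series mod $x^n$ by Newton iteration, doubling the precision each round. Here is a self-contained Python toy with float coefficients and schoolbook multiplication for clarity; the actual algorithms work over finite fields and use FFT-based multiplication to reach nearly-linear time:

```python
def poly_mul_trunc(a, b, n):
    """Product of polynomials a, b (coefficient lists), truncated mod x^n.
    Schoolbook multiplication for clarity; replace with FFT for speed."""
    out = [0.0] * min(n, len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                out[i + j] += ai * bj
    return out

def series_inverse(f, n):
    """g with f*g = 1 mod x^n, assuming f[0] != 0 (Newton iteration)."""
    g = [1.0 / f[0]]
    k = 1
    while k < n:
        k = min(2 * k, n)
        fg = poly_mul_trunc(f[:k], g, k)
        # Newton step: g <- g * (2 - f*g) mod x^k
        two_minus_fg = [2.0 - fg[0]] + [-c for c in fg[1:]]
        g = poly_mul_trunc(g, two_minus_fg, k)
    return g

# Example: 1/(1 - x) = 1 + x + x^2 + ... mod x^5
print(series_inverse([1.0, -1.0], 5))  # [1.0, 1.0, 1.0, 1.0, 1.0]
```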

Deployable polyhedrons can transform between Platonic and Archimedean polyhedrons to meet the demands of various engineering applications. However, existing design solutions often involve multiple degrees of freedom and complicated mechanism links and joints, which greatly limits their potential in practice. Combining the fundamentals of solid geometry and mechanism kinematics, this paper proposes a family of kirigami Archimedean polyhedrons based on N-fold-symmetric loops of the spatial 7R linkage, which perform one-DOF radial transformation following tetrahedral, octahedral, or icosahedral symmetry. Moreover, within each symmetric polyhedral group, three different transformation paths can be achieved from one identical deployed configuration. We also demonstrate that this design strategy can be readily applied to polyhedral tessellations. This work provides a rich family of solutions for deployable polyhedrons to facilitate their application in aerospace exploration, architecture, metamaterials, and beyond.

The Conformer has become the most popular encoder model for automatic speech recognition (ASR). It adds convolution modules to a transformer to learn both local and global dependencies. In this work, we describe a faster, more memory-efficient, and better-performing transformer, called Zipformer. Modeling changes include: 1) a U-Net-like encoder structure where middle stacks operate at lower frame rates; 2) a reorganized block structure with more modules, within which we re-use attention weights for efficiency; 3) a modified form of LayerNorm, called BiasNorm, that allows us to retain some length information; 4) new activation functions, SwooshR and SwooshL, that work better than Swish. We also propose a new optimizer, called ScaledAdam, which scales the update by each tensor's current scale to keep the relative change about the same, and also explicitly learns the parameter scale. It achieves faster convergence and better performance than Adam. Extensive experiments on the LibriSpeech, Aishell-1, and WenetSpeech datasets demonstrate the effectiveness of the proposed Zipformer over other state-of-the-art ASR models. Our code is publicly available at https://github.com/k2-fsa/icefall.
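As a rough illustration of the ScaledAdam idea, here is a schematic single-tensor update in NumPy that multiplies the Adam-style direction by the tensor's current RMS, so that the relative parameter change per step stays roughly constant across tensors. This is a deliberately simplified sketch of the stated idea, not the actual optimizer; the real implementation in the icefall repository also learns the parameter scale explicitly and differs in many details, and the hyperparameter values below are illustrative only.

```python
import numpy as np

def scaled_adam_step(p, g, m, v, t, lr=0.05, b1=0.9, b2=0.98, eps=1e-8):
    """One schematic update: Adam-style moments, with the step scaled
    by the parameter tensor's RMS (its "current scale")."""
    m = b1 * m + (1 - b1) * g                 # first moment
    v = b2 * v + (1 - b2) * g * g             # second moment
    m_hat = m / (1 - b1 ** t)                 # bias corrections
    v_hat = v / (1 - b2 ** t)
    param_rms = max(np.sqrt(np.mean(p * p)), eps)   # tensor's scale
    p = p - lr * param_rms * m_hat / (np.sqrt(v_hat) + eps)
    return p, m, v

# Toy usage: minimize ||p||^2 for a single 4x4 tensor.
p = np.random.randn(4, 4)
m, v = np.zeros_like(p), np.zeros_like(p)
for t in range(1, 101):
    p, m, v = scaled_adam_step(p, 2 * p, m, v, t)   # gradient of ||p||^2
```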

In this paper, we consider the task of efficiently computing the numerical solution of evolutionary complex Ginzburg--Landau equations. To this aim, we employ high-order exponential methods of splitting and Lawson type for the time integration. These schemes enjoy favorable stability properties and, in particular, do not suffer time step size restrictions from the underlying stiffness of the models. The needed actions of matrix exponentials are efficiently realized with pointwise operations in Fourier space (when the model is considered with periodic boundary conditions) or by a tensor-oriented approach that suitably employs the so-called $\mu$-mode products (when the semidiscretization in space is performed with finite differences). The overall effectiveness of the approach is demonstrated by running simulations on a variety of two- and three-dimensional (systems of) complex Ginzburg--Landau equations with cubic and cubic-quintic nonlinearities, which are widely used in the literature to model relevant physical phenomena. In all instances, the high-order exponential-type schemes outperform the standard time integration techniques for these models, namely the well-known split-step method and the explicit fourth-order Runge--Kutta integrator.
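To convey the basic mechanism of the Fourier-space treatment, here is a minimal NumPy sketch of plain second-order Strang splitting for the one-dimensional cubic equation $u_t = u + (1+i\alpha)u_{xx} - (1+i\beta)|u|^2 u$ with periodic boundary conditions, where both subflows are solved exactly (the linear one pointwise in Fourier space, the nonlinear one in closed form). The paper's methods are higher-order splitting and Lawson schemes, and use $\mu$-mode products in the finite-difference setting; the parameter values below are illustrative only.

```python
import numpy as np

N, L, h, alpha, beta = 256, 100.0, 0.05, 2.0, -1.0
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)        # Fourier wavenumbers
E = np.exp(h * (1.0 - (1.0 + 1j * alpha) * k**2))   # exact linear flow, pointwise

def nonlinear_flow(u, t):
    """Exact flow of u_t = -(1 + i*beta)|u|^2 u over a step of length t."""
    rho = np.abs(u) ** 2
    return u * (1.0 + 2.0 * t * rho) ** (-(1.0 + 1j * beta) / 2.0)

rng = np.random.default_rng(0)
u = 1e-2 * np.exp(2j * np.pi * rng.random(N))       # small random initial field
for _ in range(2000):       # Strang: half nonlinear, full linear, half nonlinear
    u = nonlinear_flow(u, h / 2)
    u = np.fft.ifft(E * np.fft.fft(u))
    u = nonlinear_flow(u, h / 2)
```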

The fundamental computational issues in Bayesian inverse problems (BIPs) governed by partial differential equations (PDEs) stem from the requirement of repeated forward model evaluations. A popular strategy to reduce such costs is to replace expensive model simulations with computationally efficient approximations obtained by operator learning, motivated by recent progress in deep learning. However, using the approximated model directly may introduce a modeling error that exacerbates the inherent ill-posedness of inverse problems. Balancing accuracy against efficiency is therefore essential for the effective implementation of such approaches. To this end, we develop an adaptive operator learning framework that reduces the modeling error gradually by forcing the surrogate to be accurate in local areas. This is accomplished by adaptively fine-tuning the pre-trained approximate model with training points chosen by a greedy algorithm during the posterior computation. To validate our approach, we use DeepONet to construct the surrogate and unscented Kalman inversion (UKI) to approximate the BIP solution. Furthermore, we present a rigorous convergence guarantee in the linear case using the UKI framework. The approach is tested on a number of benchmarks, including the Darcy flow, the heat source inversion problem, and the reaction-diffusion problem. The numerical results show that our method can significantly reduce computational costs while maintaining inversion accuracy.
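The adaptive loop is easy to convey in a toy setting. The sketch below is entirely illustrative: a scalar map stands in for the PDE solve, a polynomial fit stands in for the DeepONet surrogate, the simpler (deterministic) ensemble Kalman inversion stands in for UKI, and the greedy selection is reduced to refitting near the current ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)
true_model = lambda th: np.tanh(th) + th          # stand-in for the PDE map
y_obs, noise = true_model(0.7) + 0.05, 0.05**2    # one noisy observation

ths = np.linspace(-2.0, 2.0, 8)                   # initial global design
coef = np.polyfit(ths, true_model(ths), 3)        # cheap surrogate
ens = rng.normal(0.0, 1.0, 40)                    # Kalman-inversion ensemble

for it in range(30):
    G = np.polyval(coef, ens)                     # surrogate forward solves
    c_tg, c_gg = np.cov(ens, G)[0, 1], np.cov(G)
    ens = ens + c_tg / (c_gg + noise) * (y_obs - G)   # Kalman update
    if it % 10 == 9:                              # adaptive refinement step:
        pts = np.quantile(ens, [0.1, 0.5, 0.9])   # points near the posterior
        ths = np.concatenate([ths, pts])
        coef = np.polyfit(ths, true_model(ths), 3)    # fine-tune surrogate
print(ens.mean())                                 # drifts toward the truth (~0.7)
```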

This work develops, for the first time, a face-centred finite volume (FCFV) solver for the simulation of laminar and turbulent viscous incompressible flows. The formulation relies on the Reynolds-averaged Navier-Stokes (RANS) equations coupled with the negative Spalart-Allmaras (SA) model; three novel convective stabilisations, inspired by Riemann solvers, are derived and compared numerically. The resulting method achieves first-order convergence of the velocity, the velocity-gradient tensor, and the pressure. FCFV accurately predicts engineering quantities of interest, such as drag and lift, on unstructured meshes and, by avoiding gradient reconstruction, the method is insensitive to mesh quality, even in the presence of highly distorted and stretched cells. Monolithic and staggered solution strategies for the RANS-SA system are derived and compared numerically. Numerical benchmarks, involving laminar and turbulent, steady and transient cases, are used to assess the performance, accuracy, and robustness of the proposed FCFV method.

Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, and attempts to extend this knowledge without targeting the original task result in catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task-incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern 1) a taxonomy and extensive overview of the state of the art, 2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner, and 3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks: Tiny ImageNet, the large-scale unbalanced iNaturalist dataset, and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and we qualitatively compare methods in terms of required memory, computation time, and storage.
