
We present a spectral method for one-sided linear fractional integral equations on a closed interval that achieves exponentially fast convergence for a variety of equations, including ones with irrational order, multiple fractional orders, non-trivial variable coefficients, and initial-boundary conditions. The method uses an orthogonal basis that we refer to as Jacobi fractional polynomials, which are obtained from an appropriate change of variable in weighted classical Jacobi polynomials. New algorithms for building the matrices used to represent fractional integration operators are presented and compared. Even though these algorithms are unstable and require the use of high-precision computations, the spectral method nonetheless yields well-conditioned linear systems and is therefore stable and efficient. For time-fractional heat and wave equations, we show that our method (which is not sparse but uses an orthogonal basis) outperforms a sparse spectral method (which uses a basis that is not orthogonal) due to its superior stability.
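To make the basis concrete, here is a minimal sketch of a "Jacobi fractional polynomial" as a classical Jacobi polynomial composed with the change of variable t = 2x^β − 1 on [0, 1]. The parameters and normalisation below are illustrative placeholders and may differ from the paper's exact construction; the Jacobi polynomials themselves are evaluated by the standard three-term recurrence.

```python
import numpy as np

def jacobi_poly(n, a, b, t):
    """Evaluate P_n^{(a,b)}(t) via the standard three-term recurrence."""
    t = np.asarray(t, dtype=float)
    if n == 0:
        return np.ones_like(t)
    p_prev = np.ones_like(t)
    p = 0.5 * (a - b) + 0.5 * (a + b + 2.0) * t
    for k in range(2, n + 1):
        c = 2.0 * k + a + b
        a1 = 2.0 * k * (k + a + b) * (c - 2.0)
        a2 = (c - 1.0) * (a * a - b * b)
        a3 = (c - 1.0) * c * (c - 2.0)
        a4 = 2.0 * (k + a - 1.0) * (k + b - 1.0) * c
        p, p_prev = ((a2 + a3 * t) * p - a4 * p_prev) / a1, p
    return p

def jacobi_fractional(n, a, b, beta, x):
    """Illustrative 'Jacobi fractional polynomial': P_n^{(a,b)} composed
    with the change of variable t = 2*x**beta - 1 on [0, 1].
    (Hypothetical normalisation; the paper's basis may carry extra weights.)"""
    return jacobi_poly(n, a, b, 2.0 * np.asarray(x, dtype=float) ** beta - 1.0)
```

For β = 1 this reduces to a shifted classical Jacobi polynomial; fractional β introduces the algebraic singularity at the endpoint that matches fractional-integral solutions.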


This research conducts a thorough reevaluation of seismic fragility curves by utilizing ordinal regression models, moving away from the commonly used log-normal distribution function known for its simplicity. It explores the nuanced differences and interrelations among various ordinal regression approaches, including Cumulative, Sequential, and Adjacent Category models, alongside their enhanced versions that incorporate category-specific effects and variance heterogeneity. The study applies these methodologies to empirical bridge damage data from the 2008 Wenchuan earthquake, using both frequentist and Bayesian inference methods, and conducts model diagnostics using surrogate residuals. The analysis covers eleven models, from basic to those with heteroscedastic extensions and category-specific effects. Through rigorous leave-one-out cross-validation, the Sequential model with category-specific effects emerges as the most effective. The findings underscore a notable divergence in damage probability predictions between this model and conventional Cumulative probit models, advocating for a substantial transition towards more adaptable fragility curve modeling techniques that enhance the precision of seismic risk assessments. In conclusion, this research not only readdresses the challenge of fitting seismic fragility curves but also advances methodological standards and expands the scope of seismic fragility analysis. It advocates for ongoing innovation and critical reevaluation of conventional methods to advance the predictive accuracy and applicability of seismic fragility models within the performance-based earthquake engineering domain.
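The contrast between the conventional cumulative probit model and a sequential (continuation-ratio) model with category-specific effects can be sketched as follows. All thresholds and slopes below are hypothetical placeholders, not values fitted to the Wenchuan bridge data.

```python
from math import erf, log, sqrt

def std_norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def cumulative_probit_fragility(im, thresholds, beta):
    """Conventional cumulative probit fragility curves:
    P(DS >= k | IM = im) = Phi((ln im - theta_k) / beta),
    with a single common slope beta for all damage states."""
    return [std_norm_cdf((log(im) - t) / beta) for t in thresholds]

def sequential_fragility(im, thresholds, betas):
    """Sequential (continuation-ratio) model with category-specific
    slopes: P(DS >= k) is the product over j <= k of the conditional
    probabilities of advancing past damage state j."""
    probs, p = [], 1.0
    for t, b in zip(thresholds, betas):
        p *= std_norm_cdf((log(im) - t) / b)
        probs.append(p)
    return probs
```

The category-specific slopes `betas` are what the cumulative probit model cannot express with a single `beta`, which is one source of the divergent damage probability predictions the study reports.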

Multi-index models -- functions which only depend on the covariates through a non-linear transformation of their projection on a subspace -- are a useful benchmark for investigating feature learning with neural networks. This paper examines the theoretical boundaries of learnability in this hypothesis class, focusing particularly on the minimum sample complexity required for weakly recovering their low-dimensional structure with first-order iterative algorithms, in the high-dimensional regime where the number of samples $n=\alpha d$ is proportional to the covariate dimension $d$. Our findings unfold in three parts: (i) first, we identify under which conditions a \textit{trivial subspace} can be learned with a single step of a first-order algorithm for any $\alpha\!>\!0$; (ii) second, in the case where the trivial subspace is empty, we provide necessary and sufficient conditions for the existence of an \textit{easy subspace} consisting of directions that can be learned only above a certain sample complexity $\alpha\!>\!\alpha_c$. The critical threshold $\alpha_{c}$ marks the presence of a computational phase transition, in the sense that no efficient iterative algorithm can succeed for $\alpha\!<\!\alpha_c$. In a limited but interesting set of hard directions -- akin to the parity problem -- $\alpha_c$ is found to diverge. Finally, (iii) we demonstrate that interactions between different directions can result in an intricate hierarchical learning phenomenon, where some directions can be learned sequentially when coupled to easier ones. Our analytical approach is built on the optimality of approximate message-passing algorithms among first-order iterative methods, delineating the fundamental learnability limit across a broad spectrum of algorithms, including neural networks trained with gradient descent.
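For readers unfamiliar with the hypothesis class, a two-index instance can be sampled as below. The product link g(u, v) = u·v is an assumed, parity-like choice for illustration; the class covers any non-linear link applied to the projections.

```python
import numpy as np

def sample_multi_index(n, d, rng=None):
    """Draw n samples from an illustrative two-index model
    y = g(w1.x, w2.x) with g(u, v) = u * v (hypothetical link).
    Rows of W span the low-dimensional subspace to be recovered."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((2, d)) / np.sqrt(d)  # hidden directions
    X = rng.standard_normal((n, d))               # isotropic covariates
    Z = X @ W.T                                   # n x 2 projections
    y = Z[:, 0] * Z[:, 1]                         # parity-like product link
    return X, y, W
```

In the notation of the abstract, n = αd, and the question is for which α an iterative first-order method can weakly recover the row span of W from (X, y).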

In this paper, we introduce a discretization scheme for the Yang-Mills equations in the two-dimensional case using a framework based on discrete exterior calculus. Within this framework, we define discrete versions of the exterior covariant derivative operator and its adjoint, which capture essential geometric features similar to their continuous counterparts. Our focus is on discrete models defined on a combinatorial torus, where the discrete Yang-Mills equations are presented in the form of both a system of difference equations and a matrix form.

Driven by exploring the power of quantum computation with a limited number of qubits, we present a novel complete characterization for space-bounded quantum computation, which encompasses settings with one-sided error (unitary coRQL) and two-sided error (BQL), approached from a quantum state testing perspective: - The first family of natural complete problems for unitary coRQL, i.e., space-bounded quantum state certification for trace distance and Hilbert-Schmidt distance; - A new family of natural complete problems for BQL, i.e., space-bounded quantum state testing for trace distance, Hilbert-Schmidt distance, and quantum entropy difference. In the space-bounded quantum state testing problem, we consider two logarithmic-qubit quantum circuits (devices) denoted as $Q_0$ and $Q_1$, which prepare quantum states $\rho_0$ and $\rho_1$, respectively, with access to their ``source code''. Our goal is to decide whether $\rho_0$ is $\epsilon_1$-close to or $\epsilon_2$-far from $\rho_1$ with respect to a specified distance-like measure. Interestingly, unlike time-bounded state testing problems, our results reveal that the space-bounded state testing problems all correspond to the same class. Moreover, our algorithms on the trace distance inspire an algorithmic Holevo-Helstrom measurement, implying QSZK is in QIP(2) with a quantum linear-space honest prover. Our results primarily build upon a space-efficient variant of the quantum singular value transformation (QSVT) introduced by Gily\'en, Su, Low, and Wiebe (STOC 2019), which is of independent interest. Our technique provides a unified approach for designing space-bounded quantum algorithms. Specifically, we show that implementing QSVT for any bounded polynomial that approximates a piecewise-smooth function incurs only a constant overhead in terms of the space required for special forms of the projected unitary encoding.
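The two distance-like measures in the first family are easy to make concrete on explicit density matrices (the setting above instead assumes only circuit access to $Q_0$ and $Q_1$, which is what makes the space-bounded problem non-trivial):

```python
import numpy as np

def trace_distance(rho0, rho1):
    """T(rho0, rho1) = (1/2) * ||rho0 - rho1||_1, computed from the
    eigenvalues of the Hermitian difference."""
    w = np.linalg.eigvalsh(rho0 - rho1)
    return 0.5 * np.abs(w).sum()

def hs_distance(rho0, rho1):
    """Hilbert-Schmidt distance ||rho0 - rho1||_2 (Frobenius norm)."""
    return np.linalg.norm(rho0 - rho1)
```

For logarithmic-qubit circuits the matrices have polynomial dimension, but materialising them is exactly what a space-bounded algorithm cannot afford; the QSVT-based approach estimates these quantities from the circuits directly.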

This work proposes a novel variational approximation of partial differential equations on moving geometries determined by explicit boundary representations. The benefit of the proposed formulation is the ability to handle large displacements of explicitly represented domain boundaries without body-fitted meshes or remeshing techniques. For the space discretization, we use a background mesh and an unfitted method that relies on integration on cut cells only. We perform this intersection by using clipping algorithms. To deal with the mesh movement, we pull back the equations to a reference configuration (the spatial mesh at the initial time slab times the time interval) that is constant in time. This way, the geometrical intersection algorithm is only required in 3D, another key property of the proposed scheme. At the end of the time slab, we compute the deformed mesh, intersect the deformed boundary with the background mesh, and consider an exact transfer operator between meshes to compute jump terms in the time discontinuous Galerkin integration. The transfer is also computed using geometrical intersection algorithms. We demonstrate the applicability of the method to fluid problems around rotating (2D and 3D) geometries described by oriented boundary meshes. We also provide a set of numerical experiments that show the optimal convergence of the method.

We propose a new simple and explicit numerical scheme for time-homogeneous stochastic differential equations. The scheme is based on sampling increments at each time step from a skew-symmetric probability distribution, with the level of skewness determined by the drift and volatility of the underlying process. We show that as the step-size decreases the scheme converges weakly to the diffusion of interest. We then consider the problem of simulating from the limiting distribution of an ergodic diffusion process using the numerical scheme with a fixed step-size. We establish conditions under which the numerical scheme converges to equilibrium at a geometric rate, and quantify the bias between the equilibrium distributions of the scheme and of the true diffusion process. Notably, our results do not require a global Lipschitz assumption on the drift, in contrast to those required for the Euler--Maruyama scheme for long-time simulation at fixed step-sizes. Our weak convergence result relies on an extension of the theory of Milstein \& Tretyakov to stochastic differential equations with non-Lipschitz drift, which could also be of independent interest. We support our theoretical results with numerical simulations.
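For context, the Euler--Maruyama baseline that the abstract contrasts against can be sketched as follows on an ergodic Ornstein--Uhlenbeck diffusion. This is only the baseline: the proposed scheme instead draws each increment from a skew-symmetric distribution whose skewness depends on the drift and volatility, a construction not reproduced here.

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, h, n_steps, rng=None):
    """Baseline Euler-Maruyama discretisation
    x_{k+1} = x_k + h * b(x_k) + sigma(x_k) * sqrt(h) * Z_k,
    with i.i.d. standard normal increments Z_k."""
    rng = np.random.default_rng(rng)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        z = rng.standard_normal()
        x[k + 1] = x[k] + h * drift(x[k]) + sigma(x[k]) * np.sqrt(h) * z
    return x

# Ornstein-Uhlenbeck: dX = -X dt + sqrt(2) dW, equilibrium N(0, 1).
path = euler_maruyama(lambda x: -x, lambda x: np.sqrt(2.0), 0.0, 0.01, 20000, rng=0)
```

For globally Lipschitz drifts such as this one, the fixed-step-size chain targets a biased approximation of N(0, 1); the abstract's point is that the skew-symmetric scheme retains such guarantees without the global Lipschitz assumption.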

This paper addresses the approximation of the mean curvature flow of thin structures for which classical phase field methods are not suitable. By thin structures, we mean surfaces that are not domain boundaries, typically higher codimension objects such as 1D curves in 3D, i.e. filaments, or soap films spanning a boundary curve. To approximate the mean curvature flow of such surfaces, we consider a small thickening and we apply to the thickened set an evolution model that combines the classical Allen-Cahn equation with a penalty term that takes on larger values around the skeleton of the set. The novelty of our approach lies in the definition of this penalty term that guarantees a minimal thickness of the evolving set and prevents it from disappearing unexpectedly. We prove a few theoretical properties of our model, provide examples showing the connection with higher codimension mean curvature flow, and introduce a quasi-static numerical scheme with explicit integration of the penalty term. We illustrate the numerical efficiency of the model with accurate approximations of filament structures evolving by mean curvature flow, and we also illustrate its ability to find complex 3D approximations of solutions to the Steiner problem or the Plateau problem.
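The classical Allen-Cahn building block of the evolution model can be sketched in one dimension with explicit finite differences. The skeleton-based penalty term that is the paper's novelty is not reproduced here; this shows only the base equation it is added to.

```python
import numpy as np

def allen_cahn_step(u, dx, dt, eps):
    """One explicit step of the classical Allen-Cahn equation
    u_t = u_xx - W'(u) / eps^2 with double-well W(u) = (1 - u^2)^2 / 4,
    i.e. -W'(u) = u - u^3, on a periodic grid (stability needs
    dt < dx^2 / 2)."""
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    return u + dt * (lap + (u - u**3) / eps**2)
```

The pure phases u = ±1 (and the unstable state u = 0) are fixed points of this step; without a penalty term, a thin thickened set evolving under this dynamic can shrink and vanish, which is precisely the failure mode the paper's penalty prevents.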

Characteristic formulae give a complete logical description of the behaviour of processes modulo some chosen notion of behavioural semantics. They allow one to reduce equivalence or preorder checking to model checking, and are exactly the formulae in the modal logics characterizing classic behavioural equivalences and preorders for which model checking can be reduced to equivalence or preorder checking. This paper studies the complexity of determining whether a formula is characteristic for some finite, loop-free process in each of the logics providing modal characterizations of the simulation-based semantics in van Glabbeek's branching-time spectrum. Since characteristic formulae in each of those logics are exactly the consistent and prime ones, it presents complexity results for the satisfiability and primality problems, and investigates the boundary between modal logics for which those problems can be solved in polynomial time and those for which they become computationally hard. Amongst other contributions, this article also studies the complexity of constructing characteristic formulae in the modal logics characterizing simulation-based semantics, both when such formulae are presented in explicit form and via systems of equations.

Multivariate probabilistic verification is concerned with the evaluation of joint probability distributions of vector quantities, such as a weather variable at multiple locations or a wind vector. The logarithmic score is a proper score that is useful in this context. In order to apply this score to ensemble forecasts, a choice for the density is required. Here, we are interested in the specific case when the density is multivariate normal with mean and covariance given by the ensemble mean and ensemble covariance, respectively. Under the assumptions of multivariate normality and exchangeability of the ensemble members, a relationship is derived which describes how the logarithmic score depends on ensemble size. It makes it possible to estimate the score in the limit of infinite ensemble size from a small ensemble and thus produces a fair logarithmic score for multivariate ensemble forecasts under the assumption of normality. This generalises a study from 2018 which derived the ensemble size adjustment of the logarithmic score in the univariate case. An application to medium-range forecasts examines the usefulness of the ensemble size adjustments when multivariate normality is only an approximation. Predictions of vectors consisting of several different combinations of upper air variables are considered. Logarithmic scores are calculated for these vectors using ECMWF's daily extended-range forecasts, which consist of a 100-member ensemble. The probabilistic forecasts of these vectors are verified against operational ECMWF analyses in the Northern mid-latitudes in autumn 2023. Scores are computed for ensemble sizes from 8 to 100. The fair logarithmic scores of ensembles with different cardinalities are very close, in contrast to the unadjusted scores, which decrease considerably with ensemble size. This provides evidence for the practical usefulness of the derived relationships.
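The unadjusted score in question is simply the negative log density of the verifying observation under a Gaussian fitted to the ensemble; a minimal sketch follows. The ensemble-size adjustment itself is the paper's derived relationship and is not reproduced here.

```python
import numpy as np

def gaussian_log_score(ensemble, obs):
    """Unadjusted multivariate logarithmic (ignorance) score: the
    negative log density of obs under a multivariate normal with the
    ensemble mean and (unbiased) ensemble covariance.
    ensemble: array of shape (m, k), m members of a k-vector, m > k."""
    ens = np.asarray(ensemble, dtype=float)
    k = ens.shape[1]
    mu = ens.mean(axis=0)
    cov = np.atleast_2d(np.cov(ens, rowvar=False))
    diff = np.asarray(obs, dtype=float) - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = diff @ np.linalg.solve(cov, diff)
    return 0.5 * (k * np.log(2.0 * np.pi) + logdet + maha)
```

Because the fitted covariance is noisier for small m, the expected value of this score depends systematically on ensemble size, which is exactly the dependence the fair-score adjustment removes.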

In this paper, we investigate nonlinear optimization problems whose constraints are defined as fuzzy relational equations (FRE) with max-min composition. Since the feasible solution set of an FRE is often non-convex and the resolution of FREs is an NP-hard problem, conventional nonlinear approaches may involve high computational complexity. Based on the theoretical aspects of the problem, an algorithm (called the FRE-ACO algorithm) is presented which benefits from the structural properties of FREs, the ability of the discrete ant colony optimization algorithm (ACO) to tackle combinatorial problems, and that of the continuous ant colony optimization algorithm (ACOR) to solve continuous optimization problems. In the current method, the fundamental ideas underlying ACO and ACOR are combined to form an efficient approach to solving nonlinear optimization problems constrained by such non-convex regions. Moreover, the FRE-ACO algorithm preserves the feasibility of newly generated solutions without having to initially find the minimal solutions of the feasible region or check feasibility after generating new solutions. The FRE-ACO algorithm has been compared with related methods proposed for solving nonlinear optimization problems subject to max-min FREs. The obtained results demonstrate that the proposed algorithm has a higher convergence rate and requires fewer function evaluations than the other algorithms considered.
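One structural property such methods rely on can be made concrete: for a max-min FRE A∘x = b on [0, 1]^n, the maximal candidate solution has a closed form, and the system is consistent iff substituting it back reproduces b. The sketch below shows this standard feasibility check only, not the FRE-ACO algorithm itself.

```python
import numpy as np

def max_min_compose(A, x):
    """(A o x)_i = max_j min(a_ij, x_j)."""
    return np.max(np.minimum(A, x[None, :]), axis=1)

def maximal_solution(A, b):
    """Maximal candidate solution of A o x = b:
    x_j = min over i of { b_i : a_ij > b_i }, with min over the
    empty set taken as 1."""
    X = np.where(A > b[:, None], b[:, None], 1.0)
    return X.min(axis=0)

def is_consistent(A, b):
    """A o x = b is solvable iff the maximal candidate satisfies it."""
    return np.allclose(max_min_compose(A, maximal_solution(A, b)), b)
```

When the system is consistent, every solution lies between one of finitely many minimal solutions and this maximal one, which is the non-convex feasible structure that FRE-ACO searches without enumerating the minimal solutions.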
