
Local modifications of a computational domain are often performed in order to simplify the meshing process and to reduce computational costs and memory requirements. However, removing geometrical features of a domain often introduces a non-negligible error in the solution of a differential problem defined on it. In this work, we extend the results from [1] by studying the case of domains containing an arbitrary number of distinct Neumann features, and by performing an analysis on Poisson's, linear elasticity, and Stokes' equations. We introduce a simple, computationally cheap, reliable, and efficient a posteriori estimator of the geometrical defeaturing error. Moreover, we introduce a geometric refinement strategy that accounts for the defeaturing error: starting from a fully defeatured geometry, the algorithm determines at each iteration which features need to be added to the geometrical model to reduce the defeaturing error. These important features are then added to the (partially) defeatured geometrical model at the next iteration, until the solution attains a prescribed accuracy. Finally, a wide range of two- and three-dimensional numerical experiments is reported to illustrate this work.
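The adaptive loop described in the abstract (start fully defeatured, re-add the features that contribute most to the estimated error) can be sketched generically. This is our illustration only: the function names `solve` and `estimate_feature_error` are hypothetical placeholders, not the authors' API, and the marking rule is a standard Dörfler-style bulk criterion assumed for concreteness.

```python
def adaptive_defeaturing(features, solve, estimate_feature_error,
                         tol=1e-3, batch_fraction=0.3):
    """Greedy feature-adding loop: start from the fully defeatured model
    and re-add the features whose estimated contribution to the
    defeaturing error is largest, until the total estimate is below tol."""
    included = set()
    while True:
        solution = solve(included)
        # Per-feature error indicators for the features still absent.
        errors = {f: estimate_feature_error(f, solution)
                  for f in features if f not in included}
        total = sum(errors.values())
        if total <= tol or not errors:
            return included, solution
        # Bulk (Dörfler-style) marking: add the largest contributors first
        # until a fixed fraction of the total estimate is covered.
        ranked = sorted(errors, key=errors.get, reverse=True)
        covered, marked = 0.0, []
        for f in ranked:
            marked.append(f)
            covered += errors[f]
            if covered >= batch_fraction * total:
                break
        included.update(marked)
```

With a toy estimator that assigns fixed error contributions, the loop adds features in decreasing order of importance and terminates once the tolerance is met.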


We introduce a proof-theoretic method for showing nondefinability of second-order intuitionistic connectives by quantifier-free schemata. We apply the method to confirm that Taranovsky's "realizability disjunction" connective does not admit a quantifier-free definition, and use it to obtain new results and more nuanced information about the nondefinability of Kreisel's and Połacik's unary connectives. The finitary and combinatorial nature of our method makes it more resilient to changes in metatheory than common semantic approaches, whose robustness tends to waver once we pass to non-classical and especially anti-classical settings. Furthermore, we can easily transcribe the problem-specific subproofs into univalent type theory and check them using the Agda proof assistant.

Machine learning algorithms have grown in sophistication over the years and are increasingly deployed for real-life applications. However, when using machine learning techniques in practical settings, particularly in high-risk applications such as medicine and engineering, obtaining the failure probability of the predictive model is critical. We refer to this problem as the risk-assessment task. We focus on regression algorithms and the risk-assessment task of computing the probability of the true label lying inside an interval defined around the model's prediction. We solve the risk-assessment problem using the conformal prediction approach, which provides prediction intervals that are guaranteed to contain the true label with a given probability. Using this coverage property, we prove that our approximated failure probability is conservative in the sense that it is not lower than the true failure probability of the ML algorithm. We conduct extensive experiments to empirically study the accuracy of the proposed method for problems with and without covariate shift. Our analysis focuses on different modeling regimes, dataset sizes, and conformal prediction methodologies.
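The coverage-based failure-probability estimate described above can be illustrated with a minimal split-conformal sketch. This is our own toy version, assuming absolute residuals as the conformity score; the `+1` corrections are the standard finite-sample adjustment that makes the estimate conservative, matching the guarantee stated in the abstract.

```python
import numpy as np

def conformal_failure_bound(residuals_cal, t):
    """Conservative estimate of P(|y - yhat| > t), i.e. the probability
    that the true label falls outside the interval of half-width t around
    the model's prediction, from held-out calibration residuals.
    The +1 corrections ensure the estimate is never below the true
    failure probability (in expectation, under exchangeability)."""
    n = len(residuals_cal)
    return (np.sum(residuals_cal > t) + 1) / (n + 1)
```

Note that `(k + 1) / (n + 1) >= k / n` always holds, so the returned value never undercuts the empirical failure fraction on the calibration set.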

Certifying the positivity of trigonometric polynomials is of first importance for design problems in discrete-time signal processing. It is well known from the Riesz-Fejér spectral factorization theorem that any trigonometric univariate polynomial positive on the unit circle can be decomposed as a Hermitian square with complex coefficients. Here we focus on the case of polynomials with Gaussian integer coefficients, i.e., with real and imaginary parts being integers. We design, analyze and compare, theoretically and practically, three hybrid numeric-symbolic algorithms computing weighted sums of Hermitian squares decompositions for trigonometric univariate polynomials positive on the unit circle with Gaussian coefficients. The numerical steps that the first and second algorithms rely on are complex root isolation and semidefinite programming, respectively. An exact sum of Hermitian squares decomposition is obtained thanks to compensation techniques. The third algorithm, also based on complex semidefinite programming, is an adaptation of the rounding and projection algorithm by Peyrl and Parrilo. For all three algorithms, we prove bit complexity and output size estimates that are polynomial in the degree of the input and linear in the maximum bitsize of its coefficients. We compare their performance on randomly chosen benchmarks, and further design a certified finite impulse filter.
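As a small illustration (our own toy example, not taken from the paper) of the kind of certificate such algorithms produce, consider a trigonometric polynomial with Gaussian integer coefficients that is strictly positive on the unit circle and its weighted sum of Hermitian squares:

```latex
% Toy example: for |z| = 1 we have z^{-1} = \bar{z}, hence
p(z) \;=\; 3 + z + z^{-1}
      \;=\; (1+z)\,\overline{(1+z)} \;+\; 1
      \;=\; |1+z|^2 \;+\; |1|^2 \;\ge\; 1 \;>\; 0 .
```

Here the minimum value 1 is attained at $z = -1$, and both Hermitian squares have Gaussian integer coefficients.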

Recently, Sato et al. proposed a publicly verifiable blind quantum computation (BQC) protocol that inserts a third-party arbiter. However, it is not truly publicly verifiable, because the arbiter is determined in advance and participates in the whole process. In this paper, a publicly verifiable protocol for measurement-only BQC is proposed. The fidelity between arbitrary states and the graph states of 2-colorable graphs is estimated by measuring the entanglement witnesses of the graph states, so as to verify the correctness of the prepared graph states. Compared with the previous protocol, our protocol is publicly verifiable in the true sense, since it allows other random clients to execute the public verification. It also has greater advantages in efficiency: the number of local measurements is O(n^3 log n) and the number of copies of the graph states is O(n^2 log n).
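The core idea of bounding fidelity from stabilizer (witness) measurements can be checked numerically on a tiny example. The following sketch is our own illustration, not the protocol's implementation: it uses the standard bound F >= 1 - sum_i (1 - <g_i>)/2 for a graph state with stabilizer generators g_i, applied to a depolarized two-qubit graph state.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Graph state of the single-edge graph on 2 qubits: |G> = CZ |+>|+>.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)
G = CZ @ np.kron(plus, plus)

# Stabilizer generators g_a = X_a prod_{b ~ a} Z_b for the edge graph.
stabilizers = [np.kron(X, Z), np.kron(Z, X)]

def fidelity_lower_bound(rho):
    """Lower bound on <G|rho|G> from stabilizer expectation values,
    using F >= 1 - sum_i (1 - <g_i>)/2 for commuting stabilizers."""
    expectations = [np.trace(rho @ g).real for g in stabilizers]
    return 1 - sum((1 - e) / 2 for e in expectations)

# Depolarized preparation of |G> with noise strength p.
p = 0.2
rho = (1 - p) * np.outer(G, G.conj()) + p * np.eye(4) / 4
true_fidelity = (G.conj() @ rho @ G).real
```

For this noise model each stabilizer expectation is 1 - p, so the bound evaluates to 1 - p, which indeed sits below the true fidelity (1 - p) + p/4.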

We show that the problem of counting the number of $n$-variable unate functions reduces to the problem of counting the number of $n$-variable monotone functions. Using recently obtained results on $n$-variable monotone functions, we obtain counts of $n$-variable unate functions up to $n=9$. We use an enumeration strategy to obtain the number of $n$-variable balanced monotone functions up to $n=7$. We show that the problem of counting the number of $n$-variable balanced unate functions reduces to the problem of counting the number of $n$-variable balanced monotone functions, and consequently, we obtain the number of $n$-variable balanced unate functions up to $n=7$. Using enumeration, we obtain the numbers of equivalence classes of $n$-variable balanced monotone functions, unate functions and balanced unate functions up to $n=6$. Further, for each of the considered sub-classes of $n$-variable monotone and unate functions, we also obtain the corresponding numbers of $n$-variable non-degenerate functions.
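The definitions behind these counts are easy to state in code. The sketch below is our illustration of the two function classes (a function is monotone if it is nondecreasing in every variable, and unate if it becomes monotone after complementing some subset of inputs); the brute-force counter is only feasible for tiny n, in contrast to the reductions used in the paper.

```python
from itertools import product

def is_monotone(f, n):
    """f: tuple of 2**n output bits, indexed by the integer encoding of
    the input; monotone means nondecreasing in every variable."""
    for x in range(2 ** n):
        for i in range(n):
            if not (x >> i) & 1 and f[x] > f[x | (1 << i)]:
                return False
    return True

def is_unate(f, n):
    """Unate: in each variable, f is either nondecreasing or
    nonincreasing (so complementing the 'decreasing' variables yields a
    monotone function -- the reduction used in the abstract)."""
    for i in range(n):
        inc = dec = True
        for x in range(2 ** n):
            if not (x >> i) & 1:
                lo, hi = f[x], f[x | (1 << i)]
                if lo > hi:
                    inc = False
                if lo < hi:
                    dec = False
        if not (inc or dec):
            return False
    return True

def count(pred, n):
    """Brute-force count over all 2**(2**n) n-variable Boolean functions."""
    return sum(pred(f, n) for f in product((0, 1), repeat=2 ** n))
```

For n = 2 this recovers the Dedekind number M(2) = 6 for monotone functions, and 14 unate functions (every 2-variable function except XOR and XNOR is unate).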

A finite element based computational scheme is developed and employed to assess a duality-based variational approach to the solution of the linear heat and transport PDEs in one space dimension and time, and of the nonlinear system of ODEs of Euler for the rotation of a rigid body about a fixed point. The formulation turns initial-(boundary) value problems into degenerate elliptic boundary value problems in (space-)time domains representing the Euler-Lagrange equations of suitably designed dual functionals for each of the above problems. We demonstrate reasonable success in approximating solutions of this range of parabolic, hyperbolic, and ODE primal problems, which includes energy dissipation as well as conservation, by a unified dual strategy lending itself to a variational formulation. The scheme naturally associates a family of dual solutions to a unique primal solution; such `gauge invariance' is demonstrated in our computed solutions of the heat and transport equations, including the case of a transient dual solution corresponding to a steady primal solution of the heat equation. Primal evolution problems with causality are shown to be correctly approximated by non-causal dual problems.

Fitted finite element methods are constructed for a singularly perturbed convection-diffusion problem in two space dimensions. Exponential splines as basis functions are combined with Shishkin meshes to obtain a stable parameter-uniform numerical method. These schemes satisfy a discrete maximum principle. In the classical case, the numerical approximations converge at second order in the maximum pointwise norm; for all values of the singular perturbation parameter, the approximations converge at first order.
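A Shishkin mesh is a piecewise-uniform mesh whose transition point depends on the perturbation parameter. The one-dimensional construction below is a standard illustration of the idea, assuming a single boundary layer at x = 1; the constant `sigma` in the transition point is method-dependent, and we simply take sigma = 2 for the sketch.

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] with N intervals
    (N even), refined near x = 1 where the boundary layer sits.
    The transition point is tau = min(1/2, sigma * eps * ln N): half of
    the mesh intervals are placed uniformly in [0, 1 - tau] and the
    other half uniformly in the layer region [1 - tau, 1]."""
    tau = min(0.5, sigma * eps * np.log(N))
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)
    return np.concatenate([coarse, fine[1:]])
```

For small eps the layer half of the mesh is dramatically finer than the outer half, which is what yields parameter-uniform convergence when combined with a fitted scheme.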

Semitopologies model consensus in distributed systems by equating the notion of a quorum -- a set of participants sufficient to make local progress -- with that of an open set. This yields a topology-like theory of consensus, but semitopologies generalise topologies, since the intersection of two quorums need not necessarily be a quorum. The semitopological model of consensus is naturally heterogeneous and local, just like topologies can be heterogeneous and local, and for the same reasons: points may have different quorums and there is no restriction that open sets / quorums be uniformly generated (e.g. open sets can be something other than two-thirds majorities of the points in the space). Semiframes are an algebraic abstraction of semitopologies. They are to semitopologies as frames are to topologies. We give a notion of semifilter, which plays a role analogous to filters, and show how to build a semiframe out of the open sets of a semitopology, and a semitopology out of the semifilters of a semiframe. We define suitable notions of category and morphism and prove a categorical duality between (sober) semiframes and (spatial) semitopologies, and investigate well-behavedness properties on semitopologies and semiframes across the duality. Surprisingly, the structure of semiframes is not what one might initially expect just from looking at semitopologies, and the canonical structure required for the duality result -- a compatibility relation *, generalising set intersection -- is also canonical for expressing well-behavedness properties. Overall, we deliver a new categorical, algebraic, abstract framework within which to study consensus on distributed systems, and which is also simply interesting to consider as a mathematical theory in its own right.
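The defining difference from a topology (closure under unions but not necessarily under intersections) is easy to check on a finite example. The sketch below is our own illustration: for a finite space, closure under arbitrary unions reduces to closure under pairwise unions, and the "two-thirds majority" quorum system mentioned in the abstract gives a semitopology that is not a topology.

```python
from itertools import combinations

def is_semitopology(points, opens):
    """A semitopology on a finite set: contains the empty set and the
    whole space, and is closed under unions. Unlike a topology, the
    intersection of two opens (quorums) need not be open."""
    opens = {frozenset(o) for o in opens}
    if frozenset() not in opens or frozenset(points) not in opens:
        return False
    # Finite space: closure under arbitrary unions reduces to closure
    # under pairwise unions.
    return all(a | b in opens for a in opens for b in opens)

# Example: quorums are the two-thirds-plus majorities of 4 participants.
points = {1, 2, 3, 4}
opens = [set(), points] + [set(c) for c in combinations(points, 3)]
```

Here {1,2,3} and {2,3,4} are both quorums, but their intersection {2,3} is not, so the family is a semitopology without being a topology.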

The alpha complex is a fundamental data structure from computational geometry, which encodes the topological type of a union of balls $B(x; r) \subset \mathbb{R}^m$ for $x\in S$, including a weighted version that allows for varying radii. It consists of the collection of "simplices" $\sigma = \{x_0, ..., x_k \} \subset S$, which correspond to nonempty $(k + 1)$-fold intersections of cells in a radius-restricted version of the Voronoi diagram. Existing algorithms for computing the alpha complex require that the points reside in low dimension because they begin by computing the entire Delaunay complex, which rapidly becomes intractable, even when the alpha complex is of a reasonable size. This paper presents a method for computing the alpha complex without computing the full Delaunay triangulation by applying Lagrangian duality, specifically an algorithm based on dual quadratic programming that seeks to rule simplices out rather than ruling them in.
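The membership condition behind this construction is a convex program: an edge belongs to the alpha complex iff some point of the bisector of its endpoints lies in both radius-restricted Voronoi cells. The sketch below is our own illustration that solves this primal program directly with an off-the-shelf solver (the paper instead works with the Lagrangian dual of such programs to rule simplices out); all function names here are ours.

```python
import numpy as np
from scipy.optimize import minimize

def edge_in_alpha_complex(S, i, j, r):
    """Check whether the edge {S[i], S[j]} belongs to the alpha complex
    at radius r: minimize the squared distance from a point c to S[i],
    subject to c being equidistant from S[i] and S[j] (on their bisector)
    and no closer to any other site (Voronoi conditions). The edge is
    present iff the optimal distance is at most r."""
    S = np.asarray(S, dtype=float)
    p, q = S[i], S[j]
    cons = [{'type': 'eq',   # c on the bisector of p and q
             'fun': lambda c: np.dot(c - p, c - p) - np.dot(c - q, c - q)}]
    for k, y in enumerate(S):
        if k in (i, j):
            continue
        cons.append({'type': 'ineq',  # c no closer to y than to p
                     'fun': lambda c, y=y: np.dot(c - y, c - y)
                                           - np.dot(c - p, c - p)})
    res = minimize(lambda c: np.dot(c - p, c - p),
                   x0=(p + q) / 2, constraints=cons, method='SLSQP')
    return res.fun <= r ** 2 + 1e-9
```

For three points with the third far away, the optimum for the edge between the first two is its midpoint, so the edge appears exactly when r exceeds half the edge length.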

Maximum margin binary classification is one of the most fundamental algorithms in machine learning, yet the role of featurization maps and the high-dimensional asymptotics of the misclassification error for non-Gaussian features are still poorly understood. We consider settings in which we observe binary labels $y_i$ and either $d$-dimensional covariates ${\boldsymbol z}_i$ that are mapped to a $p$-dimensional space via a randomized featurization map ${\boldsymbol \phi}:\mathbb{R}^d \to\mathbb{R}^p$, or $p$-dimensional features with non-Gaussian independent entries. In this context, we study two fundamental questions: $(i)$ At what overparametrization ratio $p/n$ do the data become linearly separable? $(ii)$ What is the generalization error of the max-margin classifier? Working in the high-dimensional regime in which the number of features $p$, the number of samples $n$ and the input dimension $d$ (in the nonlinear featurization setting) diverge, with ratios of order one, we prove a universality result establishing that the asymptotic behavior is completely determined by the expected covariance of feature vectors and by the covariance between features and labels. In particular, the overparametrization threshold and generalization error can be computed within a simpler Gaussian model. The main technical challenge lies in the fact that max-margin is not the maximizer (or minimizer) of an empirical average, but the maximizer of a minimum over the samples. We address this by representing the classifier as an average over support vectors. Crucially, we find that in high dimensions, the support vector count is proportional to the number of samples, which ultimately yields universality.
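The separability question $(i)$ can be probed empirically: strict separability by a homogeneous hyperplane is equivalent to feasibility of a linear program. The sketch below is our own experiment, using Gaussian features and random labels as the baseline model; the universality result says non-Gaussian features with matching covariances exhibit the same threshold.

```python
import numpy as np
from scipy.optimize import linprog

def is_linearly_separable(X, y):
    """(X, y) is strictly separable by a homogeneous hyperplane iff the
    LP 'find w with y_i <x_i, w> >= 1 for all i' is feasible."""
    n, p = X.shape
    A_ub = -(y[:, None] * X)  # y_i <x_i, w> >= 1  <=>  -y_i x_i . w <= -1
    res = linprog(c=np.zeros(p), A_ub=A_ub, b_ub=-np.ones(n),
                  bounds=[(None, None)] * p, method="highs")
    return res.status == 0  # 0 = optimal found, i.e. feasible

def separability_fraction(n, p, trials=20, seed=0):
    """Empirical probability that n Gaussian points with uniformly random
    labels are linearly separable in dimension p."""
    rng = np.random.default_rng(seed)
    hits = sum(is_linearly_separable(rng.standard_normal((n, p)),
                                     rng.choice([-1.0, 1.0], size=n))
               for _ in range(trials))
    return hits / trials
```

When p >= n the feature matrix has full row rank almost surely and any labeling is separable; when p is tiny relative to n, random labels are essentially never separable, with the transition occurring at a ratio p/n of order one.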
