
We consider the problem of estimating a temperature-dependent thermal conductivity model (curve) from temperature measurements. We apply a Bayesian estimation approach that takes into account measurement errors and limited prior information about system properties. The approach intertwines system simulation and Markov chain Monte Carlo (MCMC) sampling. We investigate the impact of assuming different model classes -- cubic polynomials and piecewise linear functions -- their parametrization, and different types of prior information -- ranging from uninformative to informative. Piecewise linear functions require more parameters (conductivity values) to be estimated than the four parameters (coefficients or conductivity values) needed for cubic polynomials. The former model class is more flexible, but the latter requires fewer MCMC samples. While parametrizing polynomials with coefficients may feel more natural, parametrizing them by conductivity values turns out to be far better suited to specifying prior information. Robust estimation is possible for all model classes and parametrizations, as long as the prior information is accurate or not too informative. Gaussian Markov random field priors are especially well suited to piecewise linear functions.
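
To make the sampling loop concrete, below is a minimal random-walk Metropolis sketch over the conductivity values of a piecewise linear model. The forward model `simulate_temperatures`, the priors, and all numbers are illustrative assumptions for the sake of the sketch, not the paper's actual simulator or setup.

```python
import numpy as np

rng = np.random.default_rng(0)

T_knots = np.linspace(300.0, 900.0, 5)             # knot temperatures [K]
k_true = np.array([10.0, 12.0, 15.0, 19.0, 24.0])  # "true" conductivities

def simulate_temperatures(k_vals):
    # Stand-in forward model: the paper intertwines MCMC with a heat-conduction
    # simulation; here we use a cheap smooth map of the knot values instead.
    return 300.0 + np.cumsum(k_vals)

data = simulate_temperatures(k_true) + rng.normal(0.0, 0.5, size=T_knots.size)

def log_posterior(k_vals, sigma=0.5):
    if np.any(k_vals <= 0.0):                # conductivities must stay positive
        return -np.inf
    resid = data - simulate_temperatures(k_vals)
    log_lik = -0.5 * np.sum((resid / sigma) ** 2)
    # Weakly informative log-normal prior on each conductivity value
    log_prior = -0.5 * np.sum((np.log(k_vals) - np.log(15.0)) ** 2)
    return log_lik + log_prior

k = np.full(T_knots.size, 15.0)              # initial guess
lp = log_posterior(k)
samples = []
for it in range(20000):
    prop = k + rng.normal(0.0, 0.2, size=k.size)   # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        k, lp = prop, lp_prop
    samples.append(k.copy())

print("posterior mean k(T) at knots:", np.mean(samples[5000:], axis=0))
```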

Related content

Overdamped Langevin dynamics are reversible stochastic differential equations which are commonly used to sample probability measures in high-dimensional spaces, such as the ones appearing in computational statistical physics and Bayesian inference. By varying the diffusion coefficient, there are in fact infinitely many overdamped Langevin dynamics which are reversible with respect to the target probability measure at hand. This suggests optimizing the diffusion coefficient in order to increase the convergence rate of the dynamics, as measured by the spectral gap of the generator associated with the stochastic differential equation. We study this problem analytically, obtaining in particular necessary conditions on the optimal diffusion coefficient. We also derive an explicit expression for the optimal diffusion in an appropriate homogenized limit. Numerical results, relying both on discretizations of the spectral-gap problem and on Monte Carlo simulations of the stochastic dynamics, demonstrate the increased quality of the sampling arising from an appropriate choice of the diffusion coefficient.
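
As a concrete illustration of reversibility with a varying diffusion coefficient, the following sketch discretizes a one-dimensional overdamped Langevin equation with Euler-Maruyama. The potential and diffusion coefficient are toy choices of ours, not the optimized ones studied in the paper; for the dynamics to remain reversible with respect to $\mu \propto e^{-V}$, the drift must include the derivative of $D$: $dX_t = \big(-D(X_t)V'(X_t) + D'(X_t)\big)\,dt + \sqrt{2 D(X_t)}\,dW_t$.

```python
import numpy as np

rng = np.random.default_rng(1)

V  = lambda x: (x**2 - 1.0)**2           # double-well potential
dV = lambda x: 4.0 * x * (x**2 - 1.0)
D  = lambda x: 1.0 + 0.5 * np.tanh(x)    # example position-dependent diffusion
dD = lambda x: 0.5 / np.cosh(x)**2

dt, n_steps = 1e-3, 100_000
x = 0.0
traj = np.empty(n_steps)
for n in range(n_steps):
    drift = -D(x) * dV(x) + dD(x)        # divergence-corrected drift
    x += drift * dt + np.sqrt(2.0 * D(x) * dt) * rng.normal()
    traj[n] = x

# Crude check: the histogram of the trajectory should approximate exp(-V)
# regardless of D; the choice of D affects only how fast it converges.
hist, _ = np.histogram(traj, bins=50, range=(-2, 2), density=True)
print(hist[:5])
```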

This contribution introduces the idea of refinement patterns for the generation of optimal meshes in the context of the Finite Element Method. The main idea is to generate a library of patterns according to which elements can be refined, and to use this library to inform an h-adaptive code on how to handle complex refinements in regions of interest. There are no restrictions on the type of elements that can be refined, and patterns can be generated for any element type. The main advantage of this approach is that it allows for the generation of optimal meshes in a systematic way: even if a certain pattern is not yet available, it can easily be added through a simple text file listing nodes and sub-elements. The contribution presents a detailed methodology for incorporating refinement patterns into h-adaptive Finite Element Method codes and demonstrates the effectiveness of the approach through mesh refinement of problems with complex geometries.
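
As an illustration of how such a library might look, here is a hypothetical pattern file and a loader. The file layout, field names, and the `RefinementPattern` type are assumptions made for this sketch, not the paper's actual format.

```python
# Hypothetical on-disk format for one refinement pattern, mirroring the idea
# of "a simple text file listing nodes and sub-elements":
#
#   quad_red.pattern
#   ----------------
#   nodes            (reference coordinates of the refined element)
#   0.0 0.0
#   1.0 0.0
#   0.5 0.5
#   ...
#   sub-elements     (connectivity into the node list above)
#   0 4 8 7
#   ...

from dataclasses import dataclass

@dataclass
class RefinementPattern:
    nodes: list[tuple[float, float]]   # reference-space node coordinates
    sub_elements: list[list[int]]      # connectivity of each child element

def load_pattern(path: str) -> RefinementPattern:
    nodes, subs, section = [], [], None
    with open(path) as f:
        for line in f:
            token = line.split()
            if not token:
                continue
            if token[0] in ("nodes", "sub-elements"):
                section = token[0]           # switch parsing mode
            elif section == "nodes":
                nodes.append((float(token[0]), float(token[1])))
            elif section == "sub-elements":
                subs.append([int(t) for t in token])
    return RefinementPattern(nodes, subs)

# An h-adaptive code would look up the pattern matching an element's type and
# marked edges, then replace the element by the listed sub-elements.
```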

We provide rigorous theoretical bounds for Anderson acceleration (AA) that allow for approximate calculations when applied to solve linear problems. We show that, when the approximate calculations satisfy the provided error bounds, the convergence of AA is maintained while the computational time can be reduced. We also provide computable heuristic quantities, guided by the theoretical error bounds, which can be used to automate the tuning of accuracy while performing approximate calculations. For linear problems, the use of heuristics to monitor the error introduced by approximate calculations, combined with a check on the monotonicity of the residual, ensures the convergence of the numerical scheme within a prescribed residual tolerance. Motivated by the theoretical studies, we propose a reduced variant of AA, which consists in projecting the least-squares problem used to compute the Anderson mixing onto a subspace of reduced dimension. The dimensionality of this subspace adapts dynamically at each iteration as prescribed by the computable heuristic quantities. We numerically assess the performance of AA with approximate calculations on: (i) linear deterministic fixed-point iterations arising from Richardson's scheme to solve linear systems with open-source benchmark matrices and various preconditioners, and (ii) non-linear deterministic fixed-point iterations arising from non-linear time-dependent Boltzmann equations.
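
For reference, a textbook sketch of windowed Anderson acceleration applied to a linear fixed-point map is given below; the paper's approximate-calculation machinery and the reduced least-squares variant are not reproduced, and the Richardson example is an illustrative choice of ours.

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, max_iter=200):
    """Anderson acceleration with window size m for the fixed point of g."""
    x = x0.copy()
    X, F = [], []                        # histories of g-values and residuals
    for k in range(max_iter):
        gx = g(x)
        f = gx - x                       # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x, k
        X.append(gx); F.append(f)
        if len(F) > m + 1:               # keep a sliding window
            X.pop(0); F.pop(0)
        if len(F) == 1:
            x = gx                       # plain fixed-point step
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            # Least-squares problem for the mixing coefficients; the reduced
            # variant would solve this in a lower-dimensional subspace.
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
            dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
            x = gx - dX @ gamma
    return x, max_iter

# Example: Richardson iteration for a small SPD system A x = rhs.
rng = np.random.default_rng(2)
A = np.diag(np.arange(1.0, 11.0)); rhs = rng.normal(size=10)
omega = 0.15
g = lambda x: x + omega * (rhs - A @ x)
x, iters = anderson(g, np.zeros(10))
print(iters, np.linalg.norm(A @ x - rhs))
```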

We analyze a bilinear optimal control problem for the Stokes--Brinkman equations: the control variable enters the state equations as a coefficient. In two- and three-dimensional Lipschitz domains, we perform a complete continuous analysis that includes the existence of solutions and first- and second-order optimality conditions. We also develop two finite element methods that differ fundamentally in whether the admissible control set is discretized or not. For each of the proposed methods, we perform a convergence analysis and derive a priori error estimates; the latter under the assumption that the domain is convex. Finally, assuming that the domain is Lipschitz, we develop an a posteriori error estimator for each discretization scheme and obtain a global reliability bound.
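
For orientation, the bilinear structure can be written schematically as follows, with the control $q$ multiplying the velocity in the momentum equation. The notation below is our assumption of a standard Stokes--Brinkman form, not quoted from the paper:

\[
\begin{aligned}
  -\nu \Delta \mathbf{y} + q\,\mathbf{y} + \nabla p &= \mathbf{f} && \text{in } \Omega,\\
  \operatorname{div} \mathbf{y} &= 0 && \text{in } \Omega,\\
  \mathbf{y} &= \mathbf{0} && \text{on } \partial\Omega,
\end{aligned}
\]

where the control $q \geq 0$ acts as a (permeability-type) coefficient in the state equations.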

In cluster-randomized experiments, there is emerging interest in exploring the causal mechanism by which a cluster-level treatment affects the outcome through an intermediate outcome. Despite the extensive development of causal mediation methods in the past decade, only a few have addressed causal mediation in cluster-randomized studies, all of which depend on parametric model-based estimators. In this article, we develop the formal semiparametric efficiency theory to motivate doubly robust methods for several mediation effect estimands corresponding to both the cluster-average and the individual-level treatment effects in cluster-randomized experiments: the natural indirect effect, the natural direct effect, and the spillover mediation effect. We derive the efficient influence function for each mediation effect, and carefully parameterize each efficient influence function to motivate practical strategies for operationalizing each estimator. We consider both parametric working models and data-adaptive machine learners to estimate the nuisance functions, and obtain semiparametric efficient causal mediation estimators in the latter case. Our methods are illustrated via extensive simulations and two completed cluster-randomized experiments.
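
For readers less familiar with these estimands, the standard potential-outcome decomposition underlying the natural direct and indirect effects is sketched below in conventional notation (our assumption; the paper's cluster-average, individual-level, and spillover refinements are not shown). With treatment $A$, mediator $M$, and potential outcomes $Y\{a, M(a')\}$, the total effect decomposes as

\[
\mathrm{TE} \;=\;
\underbrace{\mathbb{E}\big[\,Y\{1, M(1)\} - Y\{1, M(0)\}\,\big]}_{\text{natural indirect effect}}
\;+\;
\underbrace{\mathbb{E}\big[\,Y\{1, M(0)\} - Y\{0, M(0)\}\,\big]}_{\text{natural direct effect}} .
\]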

The relevance of shallow-depth quantum circuits has recently increased, mainly due to their applicability to near-term devices. In this context, one of the main goals of quantum circuit complexity is to find problems that can be solved by shallow quantum circuits but require more computational resources classically. Our first contribution in this work is to prove new separations between classical and quantum constant-depth circuits. Firstly, we show a separation between constant-depth quantum circuits with quantum advice, $\mathsf{QNC}^0/\mathsf{qpoly}$, and $\mathsf{AC}^0[p]$, the class of classical constant-depth circuits with unbounded fan-in and $\mathrm{MOD}_p$ gates. In addition, we show a separation between $\mathsf{QAC}^0$, which additionally has Toffoli gates with unbounded control, and $\mathsf{AC}^0[p]$. This establishes the first such separation for a shallow-depth quantum class that does not involve quantum fan-out gates. Secondly, we consider $\mathsf{QNC}^0$ circuits with infinite-size gate sets. We show that these circuits, along with (classical or quantum) prime modular gates, can implement threshold gates, showing that $\mathsf{QNC}^0[p]=\mathsf{QTC}^0$. Finally, we show that in the infinite-size gate-set case, these quantum circuit classes for higher-dimensional Hilbert spaces do not offer any advantage over standard qubit implementations.
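
In summary notation, the stated results can be read as follows, where we write $\mathsf{A} \not\subseteq \mathsf{B}$ for "there is a problem solvable in $\mathsf{A}$ but not in $\mathsf{B}$" (this directional reading is our interpretation of the separations):

\[
\mathsf{QNC}^0/\mathsf{qpoly} \not\subseteq \mathsf{AC}^0[p],
\qquad
\mathsf{QAC}^0 \not\subseteq \mathsf{AC}^0[p],
\qquad
\mathsf{QNC}^0[p] = \mathsf{QTC}^0 .
\]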

Two sequential estimators are proposed for the odds p/(1-p) and the log odds log(p/(1-p)), respectively, using independent Bernoulli random variables with parameter p as inputs. The estimators are unbiased and guarantee, respectively, that the variance of the estimation error divided by the true value of the odds, and the variance of the estimation error of the log odds, is less than a target value for any p in (0,1). The estimators are close to optimal in the sense of Wolfowitz's bound.
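
In symbols, the stated guarantees read as follows, writing $r = p/(1-p)$ for the odds, $\hat r$ and $\widehat{\log r}$ for the two estimators, and $\delta$ for the target value (the notation is ours):

\[
\frac{\operatorname{Var}\big(\hat r - r\big)}{r} \le \delta
\qquad\text{and}\qquad
\operatorname{Var}\big(\widehat{\log r} - \log r\big) \le \delta
\qquad\text{for all } p \in (0,1).
\]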

Quantum hypothesis testing (QHT) has been traditionally studied from the information-theoretic perspective, wherein one is interested in the optimal decay rate of error probabilities as a function of the number of samples of an unknown state. In this paper, we study the sample complexity of QHT, wherein the goal is to determine the minimum number of samples needed to reach a desired error probability. By making use of the wealth of knowledge that already exists in the literature on QHT, we characterize the sample complexity of binary QHT in the symmetric and asymmetric settings, and we provide bounds on the sample complexity of multiple QHT. In more detail, we prove that the sample complexity of symmetric binary QHT depends logarithmically on the inverse error probability and inversely on the negative logarithm of the fidelity. As a counterpart of the quantum Stein's lemma, we also find that the sample complexity of asymmetric binary QHT depends logarithmically on the inverse type II error probability and inversely on the quantum relative entropy. We then provide lower and upper bounds on the sample complexity of multiple QHT; improving these bounds remains an intriguing open question. The final part of our paper outlines and reviews how the sample complexity of QHT is relevant to a broad swathe of research areas and can enhance understanding of many fundamental concepts, including quantum algorithms for simulation and search, quantum learning and classification, and foundations of quantum mechanics. As such, we view our paper as an invitation to researchers coming from different communities to study and contribute to the problem of sample complexity of QHT, and we outline a number of open directions for future research.
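
Schematically, the stated dependencies for binary QHT between states $\rho$ and $\sigma$ can be written as below, with $\varepsilon$ the (type II, in the asymmetric case) error probability, $F$ the fidelity, and $D$ the quantum relative entropy; constants and the precise error conventions are suppressed, so this is a reading of the abstract rather than a quoted theorem:

\[
n_{\mathrm{symm}}(\varepsilon) \;=\; \Theta\!\left(\frac{\log(1/\varepsilon)}{-\log F(\rho,\sigma)}\right),
\qquad
n_{\mathrm{asymm}}(\varepsilon) \;=\; \Theta\!\left(\frac{\log(1/\varepsilon)}{D(\rho\,\|\,\sigma)}\right).
\]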

The starting point of this paper is a collection of properties of an algorithm that have been distilled from the informal descriptions of what an algorithm is given in standard works from the mathematical and computer science literature. Based on these properties, the notion of a proto-algorithm is introduced. The idea is that algorithms are equivalence classes of proto-algorithms under some equivalence relation. Three equivalence relations are defined. Two of them give bounds between which an appropriate equivalence relation must lie. The third lies in between these two and is likely an appropriate equivalence relation. A sound method is presented to prove, using an imperative process algebra based on ACP, that this equivalence relation holds between two proto-algorithms.

Topology optimization is an important basis for the design of components. The optimal structure is found within a design space subject to boundary conditions, and the specific material law has a strong impact on the final design. An important kind of material behavior is hardening: a structure optimized under, for instance, a linear-elastic assumption is not optimal if the loads induce plastic deformation. Since hardening behavior has a remarkable impact on the resulting stress field, it needs to be accounted for during topology optimization. In this contribution, we present an extension of thermodynamic topology optimization that accounts for this non-linear material behavior due to the evolution of plastic strains. For this purpose, we develop a novel surrogate model that allows us to compute the plastic strain tensor corresponding to the current structure design for arbitrary hardening behavior. We show the agreement of the model with the classic plasticity model for monotonic loading. Furthermore, we demonstrate that accounting for hardening material behavior in the topology optimization results in structural changes.
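
For concreteness, below is a minimal one-dimensional return-mapping update for linear isotropic hardening, the kind of classic plasticity model the surrogate is validated against under monotonic loading. All material parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

E, H, sigma_y0 = 210e3, 10e3, 250.0  # Young's modulus, hardening modulus, yield stress [MPa]

def return_mapping(eps_total, eps_p, alpha):
    """One implicit update: returns stress, plastic strain, hardening variable."""
    sigma_trial = E * (eps_total - eps_p)            # elastic predictor
    f_trial = abs(sigma_trial) - (sigma_y0 + H * alpha)
    if f_trial <= 0.0:                               # elastic step
        return sigma_trial, eps_p, alpha
    dgamma = f_trial / (E + H)                       # plastic corrector
    eps_p += dgamma * np.sign(sigma_trial)
    alpha += dgamma
    return E * (eps_total - eps_p), eps_p, alpha

# Monotonic strain ramp: stress follows slope E up to yield, then E*H/(E+H).
eps_p, alpha = 0.0, 0.0
for eps in np.linspace(0.0, 0.01, 11):
    sigma, eps_p, alpha = return_mapping(eps, eps_p, alpha)
    print(f"eps={eps:.4f}  sigma={sigma:8.1f}  eps_p={eps_p:.5f}")
```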
