
We consider the inverse problem of reconstructing an unknown function $u$ from a finite set of measurements, under the assumption that $u$ is the trajectory of a transport-dominated problem with unknown input parameters. We propose an algorithm based on the Parameterized Background Data-Weak method (PBDW), where dynamical sensor placement is combined with approximation spaces that evolve in time. We prove that the method ensures an accurate reconstruction at all times and makes it possible to incorporate relevant physical properties in the reconstructed solutions by suitably evolving the dynamical approximation space. As an application of this strategy, we consider Hamiltonian systems modeling wave-type phenomena, where preservation of the geometric structure of the flow plays a crucial role in the accuracy and stability of the reconstructed trajectory.
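
As a rough illustration of the two-space structure behind PBDW (not the paper's time-evolving algorithm), here is a minimal stationary sketch: a background least-squares fit in a reduced basis, plus a kernel-interpolated update of the measurement residual playing the role of the Riesz representers of the observation functionals. The names `pbdw_reconstruct`, `basis_funcs`, and `kernel` are hypothetical.

```python
import numpy as np

def pbdw_reconstruct(sensor_locs, measurements, basis_funcs, kernel):
    # Background term: least-squares fit of the measurements in the span of
    # the reduced basis, evaluated at the sensor locations (m x n matrix).
    Z = np.column_stack([phi(sensor_locs) for phi in basis_funcs])
    coeffs, *_ = np.linalg.lstsq(Z, measurements, rcond=None)
    # Update term: kernel interpolation of the measurement residual, standing
    # in for the Riesz representers of the point-evaluation functionals.
    K = kernel(sensor_locs[:, None], sensor_locs[None, :])   # m x m Gram matrix
    eta = np.linalg.solve(K, measurements - Z @ coeffs)

    def u_hat(x):
        background = sum(c * phi(x) for c, phi in zip(coeffs, basis_funcs))
        update = kernel(np.asarray(x)[:, None], sensor_locs[None, :]) @ eta
        return background + update
    return u_hat

# Example usage with a Gaussian kernel: kernel = lambda x, y: np.exp(-(x - y)**2)
```

The paper's method would additionally move the sensors and evolve the background space from one time step to the next, which this stationary sketch does not attempt.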

Related Content

The development of cubical type theory inspired the idea of "extension types", which has been found to have applications in other type theories unrelated to homotopy type theory or cubical type theory. This article describes these applications, including records, metaprogramming, controlling unfolding, and some more exotic ones.

A very simple example demonstrates that Fisher's application of the conditionality principle to regression ("fixed $x$ regression"), endorsed by Sprott and many other followers, makes prediction impossible in the context of statistical learning theory. On the other hand, relaxing the requirement of conditionality makes it possible via, e.g., conformal prediction.
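
For concreteness, the unconditional alternative mentioned above can be sketched with split conformal prediction, which gives marginal rather than fixed-$x$ conditional coverage. This is a minimal sketch; `fit` is a placeholder for any regression fitter returning a callable model.

```python
import numpy as np

def split_conformal_interval(fit, X_train, y_train, X_cal, y_cal, x_new, alpha=0.1):
    # Fit any point predictor on the proper training set.
    model = fit(X_train, y_train)
    # Nonconformity scores: absolute residuals on the held-out calibration set.
    scores = np.abs(y_cal - model(X_cal))
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))        # conformal quantile index
    q = np.inf if k > n else np.sort(scores)[k - 1]
    pred = model(x_new)
    return pred - q, pred + q                      # marginal (1 - alpha) coverage
```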

We prove a stability estimate, in a suitable expected value, of the $1$-Wasserstein distance between the solution of the continuity equation under a Sobolev velocity field and a measure obtained by pushing forward Dirac deltas whose centers belong to a partition of the domain by a (sort of) explicit forward Euler method. The main tool is an $L^\infty_t (L^p_x)$ estimate on the difference between the regular Lagrangian flow of the velocity field and an explicitly constructed approximation of such flow. Although our result only gives estimates in expected value, it has the advantage of being easily parallelizable and of not relying on any particular structure on the mesh. At the end, we also provide estimates with a logarithmic Wasserstein distance, already used in other works on this particular problem.
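
A minimal particle sketch of the scheme being analysed, under the simplifying assumption that one just pushes the Dirac centers forward with explicit Euler steps while the weights stay fixed (the paper's construction and its Wasserstein estimate carry more structure). The names `push_forward_diracs` and `b` are hypothetical.

```python
import numpy as np

def push_forward_diracs(b, centers, weights, t0, t1, dt):
    # Advance every Dirac center with explicit forward Euler steps of size dt;
    # the weights are carried along unchanged by the pushforward.
    x, t = np.array(centers, dtype=float), t0
    while t < t1 - 1e-14:
        h = min(dt, t1 - t)
        x = x + h * b(t, x)    # one Euler step for all centers at once
        t += h
    return x, weights
```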

We consider the estimation of the cumulative hazard function, and equivalently the distribution function, with censored data under a setup that preserves the privacy of the survival database. This is done through an $\alpha$-locally differentially private mechanism for the failure indicators and by proposing a non-parametric kernel estimator for the cumulative hazard function that remains consistent under the privatization. Under mild conditions, we also prove lower bounds for the minimax rates of convergence and show that the estimator is minimax optimal under a well-chosen bandwidth.
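
The abstract does not spell out the privatization mechanism; the canonical $\alpha$-LDP mechanism for a binary failure indicator is randomized response, sketched below with hypothetical helper names. The likelihood ratio of any released bit is at most $e^\alpha$, which is exactly the $\alpha$-LDP guarantee.

```python
import numpy as np

def privatize_indicators(delta, alpha, rng=None):
    # Randomized response: report the true failure indicator with probability
    # p = e^alpha / (1 + e^alpha), its flip otherwise.
    rng = rng or np.random.default_rng()
    delta = np.asarray(delta)
    p = np.exp(alpha) / (1.0 + np.exp(alpha))
    flip = rng.random(delta.shape) >= p
    return np.where(flip, 1 - delta, delta)

def debias_mean(z, alpha):
    # Unbiased estimate of E[delta], using E[z] = (2p - 1) E[delta] + (1 - p).
    p = np.exp(alpha) / (1.0 + np.exp(alpha))
    return (np.mean(z) - (1 - p)) / (2 * p - 1)
```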

Partitioned methods for coupled problems rely on data transfers between subdomains to synchronize the subdomain equations and enable their independent solution. By treating each subproblem as a separate entity, these methods enable code reuse, increase concurrency and provide a convenient framework for plug-and-play multiphysics simulations. However, the accuracy and stability of partitioned methods depend critically on the type of information exchanged between the subproblems. The exchange mechanisms can vary from minimally intrusive remap across interfaces to more accurate but also more intrusive and expensive estimates of the necessary information based on monolithic formulations of the coupled system. These transfer mechanisms are separated by accuracy, performance and intrusiveness gaps that tend to limit the scope of the resulting partitioned methods to specific simulation scenarios. Data-driven system identification techniques provide an opportunity to close these gaps by enabling the construction of accurate, computationally efficient and minimally intrusive data transfer surrogates. This approach shifts the principal computational burden to an offline phase, leaving the application of the surrogate as the sole additional cost during the online simulation phase. In this paper we formulate and demonstrate such a \emph{dynamic flux surrogate-based} partitioned method for a model advection-diffusion transmission problem by using Dynamic Mode Decomposition (DMD) to learn the dynamics of the interface flux from data. The accuracy of the resulting DMD flux surrogate is comparable to that of a dual Schur complement reconstruction, yet its application cost is significantly lower. Numerical results confirm the attractive properties of the new partitioned approach.
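
As a sketch of the learning step, exact DMD fits a low-rank linear operator that advances interface-flux snapshots by one time step. The snapshot layout and the names `fit_dmd` / `advance_flux` are assumptions, not the paper's implementation.

```python
import numpy as np

def fit_dmd(snapshots, rank):
    # Exact DMD on a matrix of flux snapshots (state_dim x n_steps): learn a
    # rank-reduced linear operator A_tilde advancing the flux one time step.
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s    # projected one-step operator
    return U, A_tilde

def advance_flux(U, A_tilde, flux):
    # Apply the surrogate online: project, advance one step, lift back.
    return U @ (A_tilde @ (U.conj().T @ flux))
```

In the partitioned setting, `advance_flux` would replace the expensive dual Schur complement solve at each coupling step, which is where the claimed cost advantage comes from.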

We consider finite element approximations to the optimal constant for the Hardy inequality with exponent $p=2$ in bounded domains of dimension $n=1$ or $n \geq 3$. For finite element spaces of piecewise linear and continuous functions on a mesh of size $h$, we prove that the approximate Hardy constant converges to the optimal Hardy constant at a rate proportional to $1/| \log h |^2$. This result holds in dimension $n=1$, in any dimension $n \geq 3$ if the domain is the unit ball and the finite element discretization exploits the rotational symmetry of the problem, and in dimension $n=3$ for general finite element discretizations of the unit ball. In the first two cases, our estimates show excellent quantitative agreement with values of the discrete Hardy constant obtained computationally.
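
For reference, under the standard convention where $d(x)$ denotes the distance to $\partial\Omega$, the constant in question can be written as a Rayleigh-quotient infimum (a plausible formulation; the paper's precise normalization may differ):

```latex
% Hardy inequality with p = 2 on a bounded domain \Omega,
% with d(x) = dist(x, \partial\Omega):
\[
  C_H(\Omega) \;=\; \inf_{u \in H^1_0(\Omega) \setminus \{0\}}
  \frac{\int_\Omega |\nabla u|^2 \, dx}{\int_\Omega u^2 / d(x)^2 \, dx}.
\]
% The discrete constant C_{H,h} takes the same infimum over the piecewise
% linear finite element space V_h \subset H^1_0(\Omega), so C_{H,h} \ge
% C_H(\Omega), and the rate in the abstract reads
% C_{H,h} - C_H(\Omega) = O(1 / |\log h|^2).
```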

We investigate the multiplicity model with $m$ values of some test statistic independently drawn from a mixture of no effect (null) and positive effect (alternative), where we seek to identify the alternative test results with a controlled error rate. We are interested in the case where the alternatives are rare. A number of multiple testing procedures filter the set of ordered p-values in order to eliminate the nulls. Such an approach can only work if the p-values originating from the alternatives form one or several identifiable clusters. The Benjamini and Hochberg (BH) method, for example, assumes that this cluster occurs in a small interval $(0,\Delta)$ and filters out all or most of the ordered p-values $p_{(r)}$ above a linear threshold $s \times r$. In repeated applications this filter controls the false discovery rate via the slope $s$. We propose a new adaptive filter that deletes the p-values from regions of uniform distribution. In cases where a single cluster remains, the p-values in an interval are declared alternatives, with the mid-point and the length of the interval chosen by controlling the data-dependent FDR at a desired level.
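
For comparison with the proposed adaptive filter, here is the standard BH step-up rule the abstract refers to, with a linear threshold of slope $s = q/m$ (a minimal sketch; `benjamini_hochberg` is a hypothetical name):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    # BH step-up: find the largest r with p_(r) <= q * r / m and reject the
    # r smallest p-values; the linear threshold has slope s = q / m.
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    below = np.nonzero(pvals[order] <= q * np.arange(1, m + 1) / m)[0]
    r = below[-1] + 1 if below.size else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:r]] = True
    return rejected
```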

Given a graph $G$ and two independent sets of $G$, the independent set reconfiguration problem asks whether one independent set can be transformed into the other by moving a single vertex at a time, such that at each intermediate step we have an independent set of $G$. We study the complexity of this problem for $H$-free graphs under the token sliding and token jumping rules. Our contribution is twofold. First, we prove a reconfiguration analogue of Alekseev's theorem, showing that the problem is PSPACE-complete unless $H$ is a path or a subdivision of the claw. We then show that under the token sliding rule, the problem admits a polynomial-time algorithm if the input graph is fork-free.
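
To make the token sliding rule concrete, a brute-force breadth-first search over independent sets is sketched below (hypothetical names `adj`, `ts_reachable`; both input sets are assumed independent). It is exponential in general, consistent with the PSPACE-hardness, so it is usable only on tiny instances.

```python
from collections import deque

def ts_reachable(adj, start, target):
    # BFS over independent sets: one move slides a token along an edge while
    # keeping the set independent (token sliding rule).
    # adj: dict vertex -> set of neighbours.
    def moves(s):
        for v in s:
            for w in adj[v]:                       # slide token v -> w along vw
                if w not in s and not (adj[w] & (s - {v})):
                    yield (s - {v}) | {w}
    start, target = frozenset(start), frozenset(target)
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if s == target:
            return True
        for t in moves(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False
```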

In social choice theory with ordinal preferences, a voting method satisfies the axiom of positive involvement if adding to a preference profile a voter who ranks an alternative uniquely first cannot cause that alternative to go from winning to losing. In this note, we prove a new impossibility theorem concerning this axiom: there is no ordinal voting method satisfying positive involvement that also satisfies the Condorcet winner and loser criteria, resolvability, and a common invariance property for Condorcet methods, namely that the choice of winners depends only on the ordering of majority margins by size.
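
To make the invariance property concrete: the majority margin of $a$ over $b$ is the number of voters ranking $a$ above $b$ minus the number ranking $b$ above $a$, and the property says the winners depend only on how these margins are ordered by size. A small hypothetical helper computing the margin matrix:

```python
from itertools import combinations

def majority_margins(profile, candidates):
    # margin[(a, b)] = #{voters ranking a above b} - #{voters ranking b above a};
    # each ballot is a list ordered from most to least preferred.
    margin = {}
    for a, b in combinations(candidates, 2):
        m = sum(1 if ballot.index(a) < ballot.index(b) else -1
                for ballot in profile)
        margin[(a, b)], margin[(b, a)] = m, -m
    return margin
```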

For a sequence of random structures with $n$-element domains over a relational signature, we define its first order (FO) complexity as a certain subset of the Banach space $\ell^{\infty}/c_0$. The well-known FO zero-one law and FO convergence law correspond to FO complexities equal to $\{0,1\}$ and a subset of $\mathbb{R}$, respectively. We present a hierarchy of FO complexity classes, introduce a stochastic FO reduction that allows complexity results to be transferred between different random structures, and use this tool to deduce several new logical limit laws for binomial random structures. Finally, we introduce a conditional distribution on graphs, subject to an FO sentence $\varphi$, that generalises certain well-known random graph models, show instances of this distribution for every complexity class, and prove that the set of all $\varphi$ validating the 0--1 law is not recursively enumerable.
