
We consider robust optimal experimental design (ROED) for nonlinear Bayesian inverse problems governed by partial differential equations (PDEs). An optimal design is one that maximizes a utility quantifying the quality of the solution of an inverse problem. However, the optimal design depends on elements of the inverse problem such as the simulation model, the prior, or the measurement error model. ROED aims to produce a design that accounts for the additional uncertainties encoded in the inverse problem and remains optimal under variations in them. We follow a worst-case-scenario approach to develop a new framework for robust optimal design of nonlinear Bayesian inverse problems. The proposed framework a) is scalable and designed for infinite-dimensional Bayesian nonlinear inverse problems constrained by PDEs; b) develops efficient approximations of the utility, namely the expected information gain; c) employs eigenvalue sensitivity techniques to derive analytical forms and efficient evaluation methods for the gradient of the utility with respect to the uncertainties we wish to be robust against; and d) employs a probabilistic optimization paradigm that properly defines and efficiently solves the resulting combinatorial max-min optimization problem. The effectiveness of the proposed approach is illustrated on an optimal sensor placement problem in an inverse problem governed by an elliptic PDE.
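Under a Gaussian (Laplace-style) approximation, a common surrogate for the expected information gain is $\frac{1}{2}\sum_i \ln(1+\lambda_i)$ over the generalized eigenvalues $\lambda_i$ of the prior-preconditioned data-misfit Hessian. The following minimal sketch computes that surrogate; the function name and the use of this particular surrogate are illustrative assumptions, not the paper's exact formulation.

```python
import math

def approx_expected_information_gain(eigenvalues):
    """Laplace-style surrogate for the expected information gain:
    0.5 * sum(log(1 + lam)) over generalized eigenvalues of the
    prior-preconditioned data-misfit Hessian (illustrative only)."""
    return 0.5 * sum(math.log1p(lam) for lam in eigenvalues)

# A design whose observations excite larger eigenvalues is more informative.
print(approx_expected_information_gain([10.0, 1.0, 0.1]))
```

Because the surrogate depends on the spectrum only, its gradient with respect to design or uncertainty parameters reduces to eigenvalue sensitivities, which is what makes the gradient computations in the abstract tractable.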


We study the high-dimensional partial linear model, where the linear part has a high-dimensional sparse regression coefficient and the nonparametric part includes a function whose derivatives are of bounded total variation. We expand upon univariate trend filtering to develop partial linear trend filtering--a doubly penalized least-squares estimation approach based on an $\ell_1$ penalty and a total-variation penalty. Analogous to the advantages of trend filtering in univariate nonparametric regression, partial linear trend filtering not only can be efficiently computed, but also achieves the optimal error rate for estimating the nonparametric function. This in turn leads to the oracle rate for the linear part, as if the underlying nonparametric function were known. We compare the proposed approach with a standard smoothing-spline-based method, and show both empirically and theoretically that the former outperforms the latter when the underlying function possesses heterogeneous smoothness. We apply our approach to the IDATA study to investigate the relationship between metabolomic profiles and ultra-processed food (UPF) intake, efficiently identifying key metabolites associated with UPF consumption and demonstrating strong predictive performance.
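The total-variation penalty in trend filtering is typically written as $\|D^{(k)}\theta\|_1$ for a $k$-th order discrete difference matrix $D^{(k)}$. As a small sketch (using nested lists rather than any particular linear-algebra library, and with illustrative names), here is how such a difference operator can be built by composing first differences:

```python
def diff_matrix(n, order):
    """k-th order forward-difference matrix D (nested lists), so that
    the trend-filtering penalty is ||D @ theta||_1 (illustrative)."""
    # First-order differences: row i is e_{i+1} - e_i.
    D = [[0] * n for _ in range(n - 1)]
    for i in range(n - 1):
        D[i][i], D[i][i + 1] = -1, 1
    # Compose first differences (order - 1) more times.
    for _ in range(order - 1):
        m = len(D)
        D = [[D[i + 1][j] - D[i][j] for j in range(n)] for i in range(m - 1)]
    return D

# Second differences annihilate any affine sequence theta_i = a + b*i,
# so the penalty vanishes exactly on piecewise-linear fits.
D2 = diff_matrix(5, 2)
theta = [3 + 2 * i for i in range(5)]
print([sum(row[j] * theta[j] for j in range(5)) for row in D2])  # [0, 0, 0]
```

Sparsity in $D^{(k)}\theta$ is what lets the fitted nonparametric component adapt its smoothness locally, the property the abstract contrasts with smoothing splines.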

This paper introduces a nonconforming virtual element method for general second-order elliptic problems with variable coefficients on domains with curved boundaries and curved internal interfaces. We prove arbitrary-order optimal convergence in the energy and $L^2$ norms, confirmed by numerical experiments on a set of polygonal meshes. The accuracy of the numerical approximation provided by the method is shown to be consistent with the theoretical analysis.

Accurately recommending products has long been a subject requiring in-depth research. This study proposes a multimodal paradigm for clothing recommendation. Specifically, it designs a multimodal analysis method that integrates clothing description texts and images, utilizing a pre-trained large language model to mine latent semantic representations of users and products. Additionally, a variational encoder is employed to learn the relationship between user information and products, addressing the cold-start problem in recommendation systems. This study also validates the significant performance advantages of this method over various recommendation-system baselines through extensive ablation experiments, providing practical guidance for the comprehensive optimization of recommendation systems.

The discrete cosine transform (DCT) is a central tool for image and video coding because it can be related to the Karhunen-Lo\`eve transform (KLT), which is the optimal transform in terms of retained transform coefficients and data decorrelation. In this paper, we introduce 16-, 32-, and 64-point low-complexity DCT approximations by individually minimizing the angle between the rows of the exact DCT matrix and the matrix induced by the approximate transforms. According to several classical figures of merit, the proposed transforms outperform the DCT approximations already known in the literature. Fast algorithms were also developed for the low-complexity transforms, achieving a good balance between performance and computational cost. Practical applications in image encoding showed the relevance of the transforms in this context. In fact, the experiments showed that the proposed transforms obtained better results than the known approximations in the literature for blocklengths 16, 32, and 64.
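The per-row objective described above, the angle between a row of the exact DCT matrix and the corresponding row of a candidate low-complexity matrix, can be sketched as follows. The rounding used to produce the candidate row here is a crude illustrative surrogate, not the paper's actual search procedure:

```python
import math

def dct_row(N, k):
    """k-th row of the orthonormal N-point DCT-II matrix."""
    a = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    return [a * math.cos(math.pi * k * (2 * n + 1) / (2 * N)) for n in range(N)]

def angle(u, v):
    """Angle (radians) between two vectors -- the per-row quantity
    minimized when searching for low-complexity approximations."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

# Example: angle between an exact 8-point DCT row and a {0, +1, -1}
# multiplierless surrogate obtained by naive rounding.
r = dct_row(8, 1)
approx = [round(x * math.sqrt(8) / 2) for x in r]
print(angle(r, approx))
```

Restricting the approximate rows to small integers is what removes multiplications and yields the low arithmetic complexity the abstract refers to.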

We consider the equilibrium equations for a linearized Cosserat material and provide two perspectives concerning well-posedness. First, the system can be viewed as the Hodge Laplace problem on a differential complex. On the other hand, we show how the Cosserat materials can be analyzed by inheriting results from linearized elasticity. Both perspectives give rise to mixed finite element methods, which we refer to as strongly and weakly coupled, respectively. We prove convergence of both classes of methods, with particular attention to improved convergence rate estimates, and stability in the limit of vanishing characteristic length of the micropolar structure. The theoretical results are fully reflected in the actual performance of the methods, as shown by the numerical verifications.

We study the sample complexity of two prototypical tasks: quantum purity estimation and quantum inner product estimation. In purity estimation, we are to estimate $\operatorname{tr}(\rho^2)$ of an unknown quantum state $\rho$ to additive error $\epsilon$. In quantum inner product estimation, Alice and Bob are to estimate $\operatorname{tr}(\rho\sigma)$ to additive error $\epsilon$ given copies of unknown quantum states $\rho$ and $\sigma$, using classical communication and restricted quantum communication. In this paper, we show a strong connection between the sample complexity of purity estimation with bounded quantum memory and inner product estimation with bounded quantum communication and unentangled measurements. We propose a protocol that solves quantum inner product estimation with $k$-qubit one-way quantum communication and unentangled local measurements using $O(\operatorname{median}\{1/\epsilon^2,2^{n/2}/\epsilon,2^{n-k}/\epsilon^2\})$ copies of $\rho$ and $\sigma$. Our protocol can be modified to estimate the purity of an unknown quantum state $\rho$ using $k$-qubit quantum memory with the same complexity. We prove that arbitrary protocols with $k$-qubit quantum memory that estimate purity to error $\epsilon$ require $\Omega(\operatorname{median}\{1/\epsilon^2,2^{n/2}/\sqrt{\epsilon},2^{n-k}/\epsilon^2\})$ copies of $\rho$. This implies the same lower bound for quantum inner product estimation with one-way $k$-qubit quantum communication, classical communication, and unentangled local measurements. For purity estimation, we further improve the lower bound to $\Omega(\max\{1/\epsilon^2,2^{n/2}/\epsilon\})$ for any protocol using an identical single-copy projection-valued measurement. Additionally, we investigate a decisional variant of quantum distributed inner product estimation without quantum communication for mixed states and provide a lower bound on the sample complexity.
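The upper bound above is a median of three competing terms, and which term dominates depends on how the memory/communication budget $k$ compares with $n$. A tiny sketch (constants suppressed, names illustrative) makes the trade-off concrete:

```python
def copy_complexity(n, k, eps):
    """Scaling of the copy count in the stated upper bound:
    median{1/eps^2, 2^(n/2)/eps, 2^(n-k)/eps^2}, constants suppressed."""
    vals = sorted([1 / eps**2, 2 ** (n / 2) / eps, 2 ** (n - k) / eps**2])
    return vals[1]  # median of three values

# With full n-qubit quantum memory (k = n), the 2^(n-k) term collapses
# and the classical-looking 1/eps^2 rate can become the median.
print(copy_complexity(10, 10, 0.1))
# With no quantum memory (k = 0), the exponential term dominates the median.
print(copy_complexity(10, 0, 0.1))
```

This mirrors the message of the abstract: intermediate values of $k$ interpolate between the memoryless and fully coherent regimes.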

We provide a novel dimension-free uniform concentration bound for the empirical risk function of constrained logistic regression. Our bound yields a milder sufficient condition for a uniform law of large numbers than conditions derived by the Rademacher complexity argument and McDiarmid's inequality. The derivation is based on the PAC-Bayes approach with second-order expansion and Rademacher-complexity-based bounds for the residual term of the expansion.
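For concreteness, the empirical risk function whose uniform concentration is being bounded is the averaged logistic loss over a constrained parameter set. A minimal sketch (with $\pm 1$ labels and illustrative names; the constraint set itself is not modeled here):

```python
import math

def empirical_logistic_risk(theta, X, y):
    """Empirical risk (1/n) * sum_i log(1 + exp(-y_i <theta, x_i>)) for
    labels y_i in {-1, +1} -- the function whose deviation from its
    expectation the concentration bound controls (illustrative)."""
    n = len(X)
    total = 0.0
    for xi, yi in zip(X, y):
        margin = yi * sum(t * x for t, x in zip(theta, xi))
        total += math.log1p(math.exp(-margin))  # numerically stable loss
    return total / n

# Two well-classified points give a risk close to zero; theta = 0 gives log 2.
print(empirical_logistic_risk([1.0, -1.0], [[2.0, 0.0], [0.0, 2.0]], [1, -1]))
```

The dimension-free nature of the bound means the deviation of this quantity from its expectation is controlled uniformly over the constraint set without an explicit dependence on the length of `theta`.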

We study the strong existence and uniqueness of solutions within a Weyl chamber for a class of time-dependent particle systems driven by multiplicative noise. This class includes well-known processes in physics and mathematical finance. We propose a method to prove the existence of negative moments for the solutions. This result allows us to analyze two numerical schemes for approximating the solutions. The first scheme is a $\theta$-Euler--Maruyama scheme, which ensures that the approximated solution remains within the Weyl chamber. The second scheme is a truncated $\theta$-Euler--Maruyama scheme, which produces values in $\mathbb{R}^{d}$ instead of the Weyl chamber $\mathbb{W}$, offering improved computational efficiency.
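A truncated explicit Euler--Maruyama step can be sketched in one dimension as follows. This is a generic illustration of the truncation idea (coefficients evaluated at a truncated state to tame blow-up, iterates allowed to leave the domain), with toy mean-reverting dynamics; it is not the paper's $\theta$-scheme or its particle system:

```python
import math
import random

def truncated_em_step(x, dt, drift, diffusion, bound):
    """One explicit truncated Euler--Maruyama step: evaluate the
    coefficients at a truncated state, but let the iterate itself
    range over all of R (illustrative names and truncation rule)."""
    x_trunc = max(-bound, min(bound, x))   # truncate the state
    dw = random.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
    return x + drift(x_trunc) * dt + diffusion(x_trunc) * dw

random.seed(0)
# Toy CIR-style dynamics: mean reversion with multiplicative noise.
x = 1.0
for _ in range(100):
    x = truncated_em_step(x, 1e-3,
                          lambda y: 2.0 * (1.0 - y),
                          lambda y: 0.5 * math.sqrt(abs(y)),
                          bound=10.0)
print(x)
```

The computational advantage mentioned in the abstract comes from exactly this structure: no projection back onto the Weyl chamber is needed after each step, at the cost of iterates that may temporarily leave it.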

Unique continuation principles are fundamental properties of elliptic partial differential equations, giving conditions that guarantee that the solution to an elliptic equation must be uniformly zero. Since finite-element discretizations are a natural tool to help gain understanding into elliptic equations, it is natural to ask if such principles also hold at the discrete level. In this work, we prove a version of the unique continuation principle for piecewise-linear and -bilinear finite-element discretizations of the Laplacian eigenvalue problem on polygonal domains in $\mathbb{R}^2$. Namely, we show that any solution to the discretized equation $-\Delta u = \lambda u$ with vanishing Dirichlet and Neumann traces must be identically zero under certain geometric and topological assumptions on the resulting triangulation. We also provide a counterexample, showing that a nonzero \emph{inner solution} exists when the topological assumptions are not satisfied. Finally, we give an application to an eigenvalue interlacing problem, where the space of inner solutions makes an explicit appearance.

Gaussian graphical models are nowadays commonly applied to the comparison of groups sharing the same variables, by jointly learning their independence structures. We consider the case where there are exactly two dependent groups and the association structure is represented by a family of coloured Gaussian graphical models suited to paired-data problems. To learn the two dependent graphs, together with their across-graph association structure, we implement a fused graphical lasso penalty. We carry out a comprehensive analysis of this approach, with special attention to the role played by some relevant submodel classes. In this way, we provide a broad set of tools for the application of Gaussian graphical models to paired-data problems. These include results useful for the specification of penalty values in order to obtain a path of lasso solutions, and an ADMM algorithm that solves the fused graphical lasso optimization problem. Finally, we present an application of our method to cancer genomics, where it is of interest to compare cancer cells with a control sample from histologically normal tissues adjacent to the tumor. All the methods described in this article are implemented in the $\texttt{R}$ package $\texttt{pdglasso}$ available at: https://github.com/savranciati/pdglasso.
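The fused graphical lasso penalty couples corresponding entries of the two precision matrices via $\lambda_1(|a|+|b|)+\lambda_2|a-b|$. A common closed-form building block for ADMM-style solvers is the elementwise "fuse, then soft-threshold" update for a pair of entries; the sketch below illustrates that update with hypothetical names and should not be read as the `pdglasso` implementation:

```python
def soft(z, t):
    """Soft-thresholding operator S_t(z) = sign(z) * max(|z| - t, 0)."""
    return max(abs(z) - t, 0.0) * (1.0 if z >= 0 else -1.0)

def fused_pair_update(x, y, lam1, lam2):
    """Elementwise update for a pair of corresponding entries under
    lam1*(|a| + |b|) + lam2*|a - b|: first shrink the pair toward each
    other (fusing them when they are close), then soft-threshold."""
    if abs(x - y) <= 2 * lam2:
        a = b = (x + y) / 2.0        # entries are fused to their mean
    elif x > y:
        a, b = x - lam2, y + lam2    # move the pair closer by lam2 each
    else:
        a, b = x + lam2, y - lam2
    return soft(a, lam1), soft(b, lam1)

print(fused_pair_update(3.0, 1.0, 0.5, 0.4))
```

Entries fused to a common value correspond to across-graph equality constraints, which is how the penalty recovers the coloured (paired-data) submodel structure mentioned above.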
