
We consider estimation of a normal mean matrix under the Frobenius loss. Motivated by the Efron--Morris estimator, a generalization of Stein's prior has been recently developed, which is superharmonic and shrinks the singular values towards zero. The generalized Bayes estimator with respect to this prior is minimax and dominates the maximum likelihood estimator. However, here we show that it is inadmissible by using Brown's condition. Then, we develop two types of priors that provide improved generalized Bayes estimators and examine their performance numerically. The proposed priors attain risk reduction by adding scalar shrinkage or column-wise shrinkage to singular value shrinkage. Parallel results for Bayesian predictive densities are also given.
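
As context, here is a minimal numpy sketch (ours, not the authors' code) of the Efron--Morris estimator that motivates the prior: writing the SVD X = U S V^T of an n x p observation matrix with n > p + 1, it shrinks each singular value s_i to s_i - (n - p - 1)/s_i, equivalently M_hat = X(I_p - (n - p - 1)(X^T X)^{-1}); the positive-part clipping below is a common variant that we add as an assumption.

```python
import numpy as np

def efron_morris(X):
    """Efron--Morris estimator of a normal mean matrix M from X ~ N(M, I).

    Shrinks each singular value s_i of X by (n - p - 1) / s_i, i.e.
    M_hat = X (I_p - (n - p - 1) (X^T X)^{-1}); assumes n > p + 1.
    """
    n, p = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - (n - p - 1) / s, 0.0)  # positive-part variant
    return (U * s_shrunk) @ Vt

# Toy risk comparison against the MLE (which is X itself)
rng = np.random.default_rng(0)
M = 0.5 * rng.normal(size=(20, 3))      # low-signal mean matrix
X = M + rng.normal(size=M.shape)        # X ~ N(M, I)
print(np.sum((X - M) ** 2), np.sum((efron_morris(X) - M) ** 2))
```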

Related content

We consider scalar semilinear elliptic PDEs, where the nonlinearity is strongly monotone but only locally Lipschitz continuous. To linearize the arising discrete nonlinear problem, we employ a damped Zarantonello iteration, which leads to a linear Poisson-type equation that is symmetric and positive definite. The resulting system is solved by a contractive algebraic solver such as a multigrid method with local smoothing. We formulate a fully adaptive algorithm that equibalances the various error components arising from mesh refinement, iterative linearization, and the algebraic solver. We prove that the proposed adaptive iteratively linearized finite element method (AILFEM) guarantees convergence with optimal complexity, where the rates are understood with respect to the overall computational cost (i.e., the computational time). Numerical experiments investigate the role of the adaptivity parameters involved.
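
To make the linearization step concrete, here is a hedged one-dimensional finite-difference sketch (our toy, not the paper's AILFEM): a damped Zarantonello iteration for -u'' + u^3 = f with homogeneous Dirichlet data, where every step solves one SPD Poisson-type system. An exact solve stands in for the contractive multigrid solver, and the damping parameter is an untuned assumption.

```python
import numpy as np

# Damped Zarantonello iteration for the semilinear model problem
#   -u'' + u^3 = f  on (0,1),   u(0) = u(1) = 0,
# discretized by finite differences. Each step solves one SPD
# Poisson-type system, mirroring the linearization in the abstract.
N = 200
h = 1.0 / (N + 1)
f = np.ones(N)

# SPD discrete Laplacian (the fixed linear preconditioner)
L = (np.diag(2 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

u = np.zeros(N)
delta = 0.5                              # damping parameter (assumed, not tuned)
for k in range(100):
    residual = f - (L @ u + u**3)        # nonlinear residual
    du = np.linalg.solve(L, residual)    # exact solve stands in for multigrid
    u += delta * du
    if np.linalg.norm(residual) * np.sqrt(h) < 1e-10:
        break
print(k, np.max(u))
```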

Characterizing the solution sets in a problem by closedness under operations is recognized as one of the key aspects of algorithm development, especially in constraint satisfaction. An example from the Boolean satisfiability problem is that the solution set of a Horn conjunctive normal form (CNF) is closed under the minimum operation, and this property implies that minimizing a nonnegative linear function over a Horn CNF can be done in polynomial time. In this paper, we focus on the set of integer points (vectors) in a polyhedron, and study the relation between these sets and closedness under operations from the viewpoint of 2-decomposability. By adding further conditions to 2-decomposable polyhedra, we show that important classes of sets of integer vectors in polyhedra are characterized by 2-decomposability and closedness under certain operations, and in some classes by closedness under operations alone. The most prominent result is that the set of integer vectors in a unit-two-variable-per-inequality polyhedron can be characterized by closedness under the median and directed discrete midpoint operations, each of which was independently considered in constraint satisfaction and discrete convex analysis.
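
To illustrate the two operations in the headline result, here is a brute-force toy (ours; the rounding convention of the directed discrete midpoint, toward the first argument, is our assumption of the standard definition): the componentwise median and the directed discrete midpoints are checked for closedness on the integer points of a small UTVPI polyhedron.

```python
import itertools
import math

def median(a, b, c):
    """Componentwise median of three integer vectors."""
    return tuple(sorted(t)[1] for t in zip(a, b, c))

def dmid(a, b):
    """Directed discrete midpoint: round (a_i + b_i)/2 toward a_i."""
    return tuple(math.ceil((p + q) / 2) if p > q else math.floor((p + q) / 2)
                 for p, q in zip(a, b))

# Integer points of a UTVPI polyhedron: each inequality uses at most two
# variables with coefficients in {-1, 0, +1}.
def in_P(p):
    x, y = p
    return x - y <= 1 and x + y <= 4 and -1 <= x <= 3 and 0 <= y <= 3

S = {p for p in itertools.product(range(-1, 4), range(0, 4)) if in_P(p)}

assert all(median(a, b, c) in S for a, b, c in itertools.product(S, repeat=3))
assert all(dmid(a, b) in S and dmid(b, a) in S
           for a, b in itertools.product(S, repeat=2))
print("closed under median and directed discrete midpoints:", len(S), "points")
```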

We present a comprehensive analysis of the implications of artificial latency in the Proposer-Builder Separation framework on the Ethereum network. Focusing on the MEV-Boost auction system, we analyze how strategic latency manipulation affects Maximum Extractable Value yields and network integrity. Our findings reveal both increased profitability for node operators and significant systemic challenges, including heightened network inefficiencies and centralization risks. We empirically validate these insights with a pilot that Chorus One has been operating on Ethereum mainnet, and we demonstrate the nuanced effects of latency on bid selection and validator dynamics. Ultimately, this research underscores the need for balanced strategies that optimize Maximum Extractable Value capture while preserving the Ethereum network's decentralization ethos.
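
As a purely illustrative toy model (all numbers below are our assumptions, not the paper's mainnet measurements), the following sketch captures the latency trade-off described above: delaying bid selection lets more MEV accumulate, but raises the probability of losing the slot, producing an interior optimum.

```python
import numpy as np

# Toy model: a validator who delays its bid selection by t seconds sees
# block value grow roughly linearly (more MEV accumulates), but risks
# missing attestation deadlines and losing the block entirely.
rng = np.random.default_rng(1)

def expected_reward(delay, n=100_000):
    value = 0.05 + 0.01 * delay            # ETH; grows with delay (assumed)
    p_miss = 1 - np.exp(-delay / 2.0)      # missed-slot risk (assumed)
    missed = rng.random(n) < p_miss
    return np.mean(np.where(missed, 0.0, value))

for t in [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]:
    print(f"delay {t:.1f}s -> expected reward {expected_reward(t):.4f} ETH")
```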

We are interested in generating surfaces with arbitrary roughness and forming patterns on these surfaces. Two methods are applied to construct rough surfaces. In the first method, a superposition of wave functions with random frequencies and propagation angles is used to obtain periodic rough surfaces with analytic parametric equations. The amplitude of such surfaces is also an important variable in the eigenvalue analysis provided for the Laplace-Beltrami operator and in the generation of pattern formation. Numerical experiments show that the patterns become irregular as the amplitude and frequency of the rough surface increase. For the sake of easy generalization to closed manifolds, we propose a second construction method for rough surfaces, which uses random nodal values and discretized heat filters. We provide numerical evidence that both surface construction methods yield patterns comparable to those observed in real-life animals.
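
A minimal sketch of the first construction as we read it (the parameter choices are ours): superpose cosines with random integer frequencies, propagation angles, and phases to obtain a periodic rough surface with an analytic parametrization whose amplitude is an explicit control.

```python
import numpy as np

rng = np.random.default_rng(2)
K, amp = 32, 0.1                        # number of modes, overall amplitude

freqs = rng.integers(1, 8, size=K)      # integer frequencies -> periodicity
angles = rng.uniform(0, 2 * np.pi, K)   # random propagation directions
phases = rng.uniform(0, 2 * np.pi, K)

x = np.linspace(0, 2 * np.pi, 256)
X, Y = np.meshgrid(x, x)
Z = np.zeros_like(X)
for k, th, ph in zip(freqs, angles, phases):
    # plane wave with frequency k travelling in direction (cos th, sin th)
    Z += np.cos(k * (X * np.cos(th) + Y * np.sin(th)) + ph)
Z *= amp / np.sqrt(K)                   # normalize so amp controls roughness
print(Z.min(), Z.max())
```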

Active reconfigurable intelligent surface (RIS) is a new RIS architecture that can both reflect and amplify communication signals. It can provide an enhanced performance gain compared to conventional passive RIS systems, which can only reflect the signals. On the other hand, the design problem for active RIS-aided systems is more challenging than that for passive RIS-aided systems, and efficient algorithms for it are less studied. In this paper, we consider the sum rate maximization problem in the multiuser massive multiple-input single-output (MISO) downlink with the aid of a large-scale active RIS. Existing approaches for handling this problem usually resort to general optimization solvers and can be computationally prohibitive. We propose an efficient block successive upper bound minimization (BSUM) method, each step of which has a (semi) closed-form update. Thus, the proposed algorithm has an attractively low per-iteration complexity. Simulations show that our proposed algorithm requires much less computation than existing approaches. In particular, when the MISO and/or RIS sizes are large, our proposed algorithm can be orders of magnitude faster than existing approaches.
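
The following generic skeleton (our illustration of the BSUM template on a simple sparse least-squares problem, not the paper's RIS design) shows the structure that makes such methods cheap: each block update minimizes a quadratic upper bound of the objective in closed form.

```python
import numpy as np

# BSUM sketch for  min_x 0.5*||A1 x1 + A2 x2 - b||^2 + lam*(||x1||_1 + ||x2||_1):
# cycle over blocks, and at each block minimize a quadratic majorizer of the
# smooth term, which reduces to a closed-form soft-thresholding step.
rng = np.random.default_rng(3)
A = [rng.normal(size=(40, 15)) for _ in range(2)]
b = rng.normal(size=40)
x = [np.zeros(15), np.zeros(15)]
lam = 0.5
L = [np.linalg.norm(Ai, 2) ** 2 for Ai in A]   # per-block Lipschitz constants

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for it in range(200):
    for i in range(2):
        r = A[0] @ x[0] + A[1] @ x[1] - b      # current residual
        g = A[i].T @ r                         # block gradient
        x[i] = soft(x[i] - g / L[i], lam / L[i])   # closed-form block update

obj = (0.5 * np.linalg.norm(A[0] @ x[0] + A[1] @ x[1] - b) ** 2
       + lam * (np.abs(x[0]).sum() + np.abs(x[1]).sum()))
print("objective:", obj)
```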

We establish Bernstein's inequalities for functions of general (general-state-space and possibly non-reversible) Markov chains. These inequalities achieve sharp variance proxies and encompass the classical Bernstein inequality for independent random variables as a special case. The key analysis lies in bounding the operator norm of a perturbed Markov transition kernel by the exponential of the sum of two convex functions. One coincides with the function that delivers the classical Bernstein inequality, and the other reflects the influence of the Markov dependence. A convex analysis of these two functions then yields our Bernstein inequalities. As applications, we apply our Bernstein inequalities to the Markov chain Monte Carlo integral estimation problem and to the robust mean estimation problem with Markov-dependent samples, achieving tight deviation bounds that previous inequalities cannot deliver.
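
For reference, the classical independent-case inequality that these results generalize reads (in one standard form; notation ours): for independent, mean-zero random variables $X_1,\dots,X_n$ with $|X_i| \le M$ almost surely,
\[
  \mathbb{P}\Bigl(\Bigl|\sum_{i=1}^{n} X_i\Bigr| \ge t\Bigr)
  \;\le\; 2\exp\!\Bigl(-\frac{t^{2}}{2\bigl(\sigma^{2} + Mt/3\bigr)}\Bigr),
  \qquad \sigma^{2} = \sum_{i=1}^{n}\mathbb{E}\bigl[X_i^{2}\bigr].
\]
The Markov-chain versions replace the variance proxy $\sigma^{2}$ and the range term with quantities reflecting the chain's dependence.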

We consider the Navier-Stokes-Fourier system governing the motion of a general compressible, heat-conducting, Newtonian fluid driven by random initial/boundary data. Convergence of the stochastic collocation and Monte Carlo numerical methods is shown under the hypothesis that approximate solutions are bounded in probability. The abstract results are illustrated by numerical experiments for the Rayleigh-Bénard convection problem.
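
As a hedged illustration of the sampling component only (a scalar toy, not the PDE setting): Monte Carlo approximation of an output functional of random data exhibits the N^{-1/2} error decay that such convergence analyses control.

```python
import numpy as np

rng = np.random.default_rng(4)
g = lambda xi: np.sin(xi) ** 2          # stand-in for a PDE output functional
exact = 0.5                             # E[g(xi)] for xi ~ Uniform(0, 2*pi)
for N in [10**2, 10**3, 10**4, 10**5]:
    xi = rng.uniform(0, 2 * np.pi, N)   # random data samples
    print(N, abs(np.mean(g(xi)) - exact))
```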

This paper proposes a new approach to fitting a linear regression for symbolic interval-valued variables, which improves both the Center Method suggested by Billard and Diday in \cite{BillardDiday2000} and the Center and Range Method suggested by Lima-Neto, E.A. and De Carvalho, F.A.T. in \cite{Lima2008, Lima2010}. As in the Center Method and the Center and Range Method, the proposed methods fit linear regression models on the midpoints and on the half-lengths (ranges) of the intervals assumed by the predictor variables in the training data set; however, these regression models are fitted using the Ridge Regression, Lasso, and Elastic Net methods proposed by Tibshirani, R., Hastie, T., and Zou, H. in \cite{Tib1996, HastieZou2005}. The lower and upper bounds of the interval response (dependent) variable are predicted from its midpoints and ranges, which are estimated by the shrinkage regression models built on the midpoints and ranges of the interval-valued predictors. The methods presented in this document are applied to three real data sets (the Cardiologic, Prostate, and US Murder interval data sets) to compare their performance and ease of interpretation against the Center Method and the Center and Range Method. For this evaluation, the root-mean-squared error and the correlation coefficient are used. In addition, the reader may apply all the methods presented herein and verify the results using the {\tt RSDA} package written in the {\tt R} language, which can be downloaded and installed directly from {\tt CRAN} \cite{Rod2014}.
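
A minimal sketch of the center-and-range idea with shrinkage (our illustration on synthetic data; for the authors' implementation see the {\tt RSDA} package): each interval is encoded by its midpoint and half-range, one ridge regression is fitted to each, and predicted interval bounds are recovered as midpoint minus/plus half-range.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
n, p = 100, 4
X_lo = rng.normal(size=(n, p))
X_hi = X_lo + rng.uniform(0.1, 1.0, (n, p))      # interval predictors [lo, hi]
beta = rng.normal(size=p)
y_mid = ((X_lo + X_hi) / 2) @ beta + rng.normal(scale=0.1, size=n)
y_rng = np.abs(((X_hi - X_lo) / 2) @ beta) + rng.uniform(0.0, 0.1, n)

X_mid, X_rng = (X_lo + X_hi) / 2, (X_hi - X_lo) / 2
m_mid = Ridge(alpha=1.0).fit(X_mid, y_mid)       # shrinkage model: midpoints
m_rng = Ridge(alpha=1.0).fit(X_rng, y_rng)       # shrinkage model: half-ranges

mid_hat = m_mid.predict(X_mid)
rng_hat = np.maximum(m_rng.predict(X_rng), 0.0)  # half-ranges are nonnegative
y_lo_hat, y_hi_hat = mid_hat - rng_hat, mid_hat + rng_hat   # interval bounds
print(y_lo_hat[:3], y_hi_hat[:3])
```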

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
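
The formal core borrowed from cognitive science is Shepard's law: generalization strength decays exponentially with distance in a psychological similarity space. A minimal sketch follows (the vector embedding of explanations is our assumption, not the paper's stimulus encoding):

```python
import numpy as np

def shepard_similarity(x, y, c=1.0):
    """Shepard's universal law: g(x, y) = exp(-c * d(x, y)),
    with d the metric of the similarity space and c a sensitivity parameter."""
    return np.exp(-c * np.linalg.norm(np.asarray(x) - np.asarray(y)))

# e.g., compare a human's expected saliency explanation to the AI's actual
# saliency map, both embedded as vectors in the similarity space
print(shepard_similarity([0.9, 0.1], [0.7, 0.2]))
```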

Deep learning is usually described as an experiment-driven field under continual criticism for lacking theoretical foundations. This problem has been partially addressed by a large volume of literature that has so far not been well organized. This paper reviews and organizes the recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity- and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic differential equations and their dynamic systems for modelling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning, partially inspired by Bayesian inference; (3) the geometrical structures of the loss landscape that drive the trajectories of the dynamic systems; (4) the roles of over-parameterization of deep neural networks from both positive and negative perspectives; (5) theoretical foundations of several special structures in network architectures; and (6) the increasingly intensive concerns about ethics and security and their relationships with generalizability.
