
The isogeometric approximation of the Stokes problem in a trimmed domain is studied. This setting is characterized by an underlying mesh that is unfitted with the boundary of the physical domain, which makes the imposition of the essential boundary conditions a challenging problem. A very popular strategy is to rely on the so-called Nitsche method~[22]. We show with numerical examples that in some degenerate trimmed domain configurations the formulation lacks stability, potentially polluting the computed solutions. After extending the stabilization procedure of~[16] to incompressible flow problems, we prove that, combined with the Raviart-Thomas, the N\'ed\'elec, and the Taylor-Hood (in the latter case, only when the isogeometric map is the identity) isogeometric elements, the stabilized formulation recovers well-posedness and, consequently, optimal a priori error estimates. Numerical results corroborating the theory are provided.
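For context, a textbook sketch of Nitsche's method for a model Poisson problem with Dirichlet datum $u = g$ on a boundary portion $\Gamma$ (an illustration of the general idea, not the paper's Stokes formulation):

$$a_h(u,v) \;=\; \int_{\Omega}\nabla u\cdot\nabla v\,dx \;-\; \int_{\Gamma}(\partial_n u)\,v\,ds \;-\; \int_{\Gamma}(\partial_n v)\,u\,ds \;+\; \frac{\beta}{h}\int_{\Gamma}u\,v\,ds,$$

with the data moved to the right-hand side, $\int_{\Omega} f\,v\,dx - \int_{\Gamma}(\partial_n v)\,g\,ds + \frac{\beta}{h}\int_{\Gamma}g\,v\,ds$. Coercivity requires the penalty $\beta$ to dominate a trace/inverse-inequality constant, and it is precisely this constant that can blow up on elements cut by trimming, which is the kind of stability degeneration examined above.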

Related content

This paper studies distributed binary testing of statistical independence under communication (information bits) constraints. While testing independence is relevant in many applications, distributed independence testing is particularly useful for event detection in sensor networks, where data correlation often arises among the observations of devices in the presence of a signal of interest. Focusing on the case of two devices because of its tractability, we begin by investigating conditions on the Type I error probability restrictions under which the minimum Type II error admits an exponential behavior in the sample size. Then, we study the finite sample-size regime of this problem. We derive new upper and lower bounds for the gap between the minimum Type II error and its exponential approximation under different setups, including restrictions imposed on the vanishing Type I error probability. Our theoretical results shed light on the sample-size regimes at which approximations of the Type II error probability via error exponents become informative enough to predict the actual error probability well. We finally discuss an application of our results in which the gap is evaluated numerically, and we show that exponential approximations are not only tractable but also a valuable proxy for the Type II error probability in the finite-length regime.
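As a centralized (non-distributed) toy version of this trade-off, the following Monte Carlo sketch uses an illustrative correlated alternative not taken from the paper. By Stein's lemma, the minimum Type II error of a likelihood-ratio test decays like $\exp(-n\,D(P_0\|P_1))$, and at small sample sizes the actual error visibly deviates from this exponential approximation.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete pmfs."""
    return float(np.sum(p * np.log(p / q)))

# Joint pmfs over (X, Y) in {0,1}^2, flattened as [00, 01, 10, 11].
p0 = np.array([0.25, 0.25, 0.25, 0.25])   # H0: X, Y independent
p1 = np.array([0.40, 0.10, 0.10, 0.40])   # H1: positively correlated
exponent = kl(p0, p1)                     # best Type II error exponent
llr = np.log(p1 / p0)                     # per-symbol log-likelihood ratio

rng = np.random.default_rng(7)

def type2_error(n, alpha=0.05, trials=5000):
    """Monte Carlo Type II error of the Neyman-Pearson (LLR) test at level alpha."""
    s0 = rng.multinomial(n, p0, size=trials) @ llr   # total LLR under H0
    s1 = rng.multinomial(n, p1, size=trials) @ llr   # total LLR under H1
    threshold = np.quantile(s0, 1 - alpha)           # Type I error ~ alpha
    return float(np.mean(s1 <= threshold))           # accept H0 although H1 holds

beta_10, beta_40 = type2_error(10), type2_error(40)
# beta_n shrinks roughly like exp(-n * exponent); the finite-n gap between
# the two is the quantity the bounds above control.
```
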

The estimation of information measures of continuous distributions based on samples is a fundamental problem in statistics and machine learning. In this paper, we analyze estimates of differential entropy in $K$-dimensional Euclidean space, computed from a finite number of samples, when the probability density function belongs to a predetermined convex family $\mathcal{P}$. First, estimating differential entropy to any accuracy is shown to be infeasible if the differential entropy of densities in $\mathcal{P}$ is unbounded, clearly showing the necessity of additional assumptions. Subsequently, we investigate sufficient conditions that enable confidence bounds for the estimation of differential entropy. In particular, we provide confidence bounds for simple histogram-based estimation of differential entropy from a fixed number of samples, assuming that the probability density function is Lipschitz continuous with known Lipschitz constant and known, bounded support. Our focus is on differential entropy, but we provide examples that show that similar results hold for mutual information and relative entropy as well.
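A minimal sketch of a histogram-based plug-in estimator of the kind analyzed here, assuming a known bounded support (the bin count and the uniform test density are illustrative choices, not the paper's):

```python
import numpy as np

def histogram_entropy(samples, support=(0.0, 1.0), n_bins=32):
    """Plug-in differential entropy estimate from an equal-width histogram:
    h_hat = -sum_k p_k * log(p_k / w), where w is the bin width."""
    lo, hi = support
    counts, _ = np.histogram(samples, bins=n_bins, range=(lo, hi))
    w = (hi - lo) / n_bins
    p = counts / counts.sum()
    p = p[p > 0]                        # empty bins contribute nothing
    return float(-np.sum(p * np.log(p / w)))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100_000)
h = histogram_entropy(x)                # uniform on [0, 1] has h = 0
```

The Lipschitz and bounded-support assumptions are what control how fast such an estimate concentrates around the true differential entropy as the number of bins and samples grows.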

In this paper, we consider several efficient data structures for the problem of sampling from a dynamically changing discrete probability distribution, where some prior information is known about the distribution of the rates, in particular the maximum and minimum rate, and where the number of possible outcomes $N$ is large. We consider three basic data structures: the Acceptance-Rejection method, the Complete Binary Tree, and the Alias method. These can be used as building blocks in a multi-level data structure, where one of the basic data structures can be used at each level, with the top level selecting a group of events and the bottom level selecting an element from a group. Depending on the assumptions on the distribution of the rates of outcomes, different combinations of the basic structures can be used. We prove that for particular data structures the expected time of sampling and update is constant when the rate distribution satisfies certain conditions. We show that for any distribution, by combining a tree structure with the Acceptance-Rejection method, an expected sampling and update time of $O\left(\log\log\left(r_{max}/r_{min}\right)\right)$ is achievable, where $r_{max}$ is the maximum rate and $r_{min}$ the minimum rate. We also discuss an implementation of a two-level Acceptance-Rejection data structure that allows expected constant time for sampling and amortized constant time for updates, assuming that $r_{max}$ and $r_{min}$ are known and the number of events is sufficiently large. We also present an experimental verification, highlighting the limits imposed by the constraints of a real-life setting.
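A sketch of the basic Acceptance-Rejection building block (names and interface are illustrative, not the paper's): sampling rejects against the known upper bound $r_{max}$, so updates are $O(1)$ and the expected number of trials per sample is $r_{max}$ divided by the mean rate, which is constant when $r_{max}/r_{min}$ is bounded.

```python
import random

class AcceptanceRejection:
    """Sample index i with probability rates[i] / sum(rates)."""

    def __init__(self, rates, r_max):
        self.rates = list(rates)
        self.r_max = r_max            # known upper bound on every rate

    def update(self, i, new_rate):    # O(1) update
        assert 0.0 < new_rate <= self.r_max
        self.rates[i] = new_rate

    def sample(self, rng=random):
        n = len(self.rates)
        while True:                   # expected r_max / mean(rates) trials
            i = rng.randrange(n)
            if rng.random() * self.r_max <= self.rates[i]:
                return i

rng = random.Random(0)
ar = AcceptanceRejection([1.0, 3.0], r_max=3.0)
freq = sum(ar.sample(rng) for _ in range(20_000)) / 20_000   # ~ 3/4
```

In the multi-level structures described above, a block like this can serve as the within-group sampler, while a tree or Alias structure selects the group.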

Information geometry is concerned with the application of differential-geometric concepts to the study of the parametric spaces of statistical models. When the random variables are independent and identically distributed, the underlying parametric space exhibits constant curvature, which makes the geometry hyperbolic (negative curvature) or spherical (positive curvature). In this paper, we derive closed-form expressions for the components of the first and second fundamental forms of pairwise isotropic Gaussian-Markov random field manifolds, allowing the computation of the Gaussian, mean, and principal curvatures. Computational simulations using Markov Chain Monte Carlo dynamics indicate that a change in the sign of the Gaussian curvature is related to the emergence of phase transitions in the field. Moreover, the curvatures are highly asymmetric for positive and negative displacements in the inverse temperature parameter, suggesting the existence of irreversible geometric properties of the parametric space along the dynamics. Furthermore, these asymmetric changes in the curvature of the space induce an intrinsic notion of time in the evolution of the random field.
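As a classical point of comparison (a standard textbook fact, not a result of the paper): for the univariate Gaussian family $\mathcal{N}(\mu,\sigma)$ the Fisher information metric is

$$ds^2 \;=\; \frac{d\mu^2 + 2\,d\sigma^2}{\sigma^2},$$

a rescaled hyperbolic (upper half-plane) metric with constant negative Gaussian curvature $K = -1/2$, matching the constant-curvature behavior described above for i.i.d. models.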

We derive several explicit formulae for finding infinitely many solutions of the equation $AXA=XAX$, when $A$ is singular. We start by splitting the equation into a pair of linear matrix equations and then show how the projectors commuting with $A$ can be used to obtain families containing an infinite number of solutions. Some techniques for determining those projectors are proposed, which use, in particular, the properties of the Drazin inverse, spectral projectors, the matrix sign function, and eigenvalues. We also investigate in detail how well-known similarity transformations, such as the Jordan and Schur decompositions, can be used to obtain new representations of the solutions. The computation of solutions by the suggested methods in finite-precision arithmetic is also a concern. Difficulties arising in their implementation are identified and ideas to overcome them are discussed. Numerical experiments shed some light on the methods that may be promising for solving the said matrix equation numerically.
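A numerical sketch of the projector idea, with an illustrative diagonal $A$ (not one of the paper's worked examples): if $P$ is the spectral projector onto the nullspace of a singular $A$, then $AP = PA = 0$, so every scalar multiple $X = cP$ satisfies $AXA = 0 = XAX$.

```python
import numpy as np

def is_solution(A, X, tol=1e-12):
    """Check the matrix equation A X A = X A X up to a tolerance."""
    return np.linalg.norm(A @ X @ A - X @ A @ X) <= tol

A = np.diag([2.0, 0.0, 0.0])     # a singular matrix A
P = np.diag([0.0, 1.0, 1.0])     # spectral projector onto the nullspace of A

# X = c * P gives AXA = 0 = XAX for every scalar c: an infinite family.
family_ok = all(is_solution(A, c * P) for c in (0.5, 1.0, 3.7))
trivial_ok = is_solution(A, A)   # X = A always solves it: both sides equal A^3
```
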

The paper addresses the problem of sampling discretization of integral norms of elements of finite-dimensional subspaces satisfying some conditions. We prove sampling discretization results under two standard kinds of assumptions -- conditions on the entropy numbers and conditions in terms of the Nikol'skii-type inequalities. We prove some upper bounds on the number of sample points sufficient for good discretization and show that these upper bounds are sharp in a certain sense. Then we apply our general conditional results to subspaces with special structures, namely, subspaces with the tensor product structure. We demonstrate that applications of results based on the Nikol'skii-type inequalities provide somewhat better results than applications of results based on the entropy numbers conditions. Finally, we apply discretization results to the problem of sampling recovery.
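A concrete instance of exact sampling discretization (a classical quadrature fact used as an illustration, not the paper's construction): for trigonometric polynomials of degree at most $d$, the equal-weight rule on $m = 2d+1$ equispaced points discretizes the $L_2$ norm exactly, since $f^2$ is a trigonometric polynomial of degree at most $2d < m$.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
a, c = rng.normal(size=d), rng.normal(size=d)
k = np.arange(1, d + 1)

def f(x):
    """Random trigonometric polynomial of degree <= d (zero mean)."""
    return np.cos(np.outer(x, k)) @ a + np.sin(np.outer(x, k)) @ c

true_sq = 0.5 * (a @ a + c @ c)          # ||f||^2 in L2 with measure dx / 2pi
m = 2 * d + 1                            # number of equispaced sample points
x = 2 * np.pi * np.arange(m) / m
disc_sq = np.mean(f(x) ** 2)             # equal-weight sampling discretization
# disc_sq equals true_sq exactly (up to roundoff)
```
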

Five new algorithms are proposed to optimize the conditioning of structural matrices. Along with decreasing the size and duration of analyses, minimizing analytical errors is a critical factor in the optimal computer analysis of skeletal structures. Matrices that are sparse (with a greater number of zeros), well structured, and well conditioned are advantageous for this objective. As a result, an optimization problem with several goals is addressed. This study seeks to minimize analytical errors, such as rounding errors, in skeletal structural flexibility matrices through the use of more consistent and appropriate mathematical methods. These errors become more pronounced in particular designs with ill-conditioned flexibility matrices; structures with widely varying stiffness are a frequent example. Due to the presence of weak elements, the flexibility matrix contains a large number of off-diagonal terms, resulting in analytical errors. In numerical analysis, the ill-conditioning of a matrix may be mitigated by moving or substituting rows; this study examines the definition and execution of such modifications prior to forming the flexibility matrix. Simple topological and algebraic features are mostly utilized in this study to find fundamental cycle bases with particular characteristics. In conclusion, appropriately conditioned flexibility matrices are obtained, and analytical errors are reduced accordingly.
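A small illustration of how conditioning can be improved before analysis, here by symmetric diagonal scaling of an artificial ill-conditioned matrix (the numbers are illustrative, not one of the study's structural matrices):

```python
import numpy as np

# A symmetric matrix with wildly different diagonal scales, as can arise
# when weak and stiff elements are mixed in one model.
F = np.array([[1.0e6, 1.0],
              [1.0,   2.0e-6]])

# Symmetric diagonal equilibration: D F D with D = diag(1 / sqrt(F_ii)).
D = np.diag(1.0 / np.sqrt(np.diag(F)))
F_scaled = D @ F @ D

kappa_before = np.linalg.cond(F)         # ~ 1e12
kappa_after = np.linalg.cond(F_scaled)   # ~ 6
```

The study pursues the same goal structurally, by choosing fundamental cycle bases that keep the flexibility matrix sparse and well conditioned, rather than by rescaling after the fact.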

In this study, we develop an asymptotic theory of nonparametric regression for locally stationary functional time series. First, we introduce the notion of a locally stationary functional time series (LSFTS) that takes values in a semi-metric space. Then, we propose a nonparametric regression model for LSFTS with a regression function that changes smoothly over time. We establish uniform convergence rates for a class of kernel estimators, including the Nadaraya-Watson (NW) estimator of the regression function, and prove a central limit theorem for the NW estimator.
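A minimal scalar, stationary sketch of the NW estimator with a Gaussian kernel (the locally stationary functional setting above additionally smooths over rescaled time and works in a semi-metric space):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h):
    """NW regression: m_hat(x) = sum_i K((x - x_i)/h) y_i / sum_i K((x - x_i)/h)."""
    d = (x_query[:, None] - x_train[None, :]) / h
    K = np.exp(-0.5 * d ** 2)              # Gaussian kernel weights
    return (K @ y_train) / K.sum(axis=1)

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=2000)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=2000)
m = nadaraya_watson(x, y, np.array([0.25, 0.75]), h=0.05)
# m is close to sin(2 * pi * [0.25, 0.75]) = [1, -1]
```
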

The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has benefited from significant progress in the past couple of decades. However, most work has been devoted to pointsets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space: they appear in a wide range of applications, from road segments to time-series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN, based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves, which generalizes both the discrete Fr\'echet and Dynamic Time Warping distances, the two most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with arbitrarily good approximation factor, at the expense of increased space usage and preprocessing time over existing methods. Our query time complexity is comparable to, or significantly improves on, that of existing methods, and our algorithm is especially efficient when the length of the curves is bounded.
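The randomized projections underlying such structures can be sketched in their simplest Euclidean form (a Johnson-Lindenstrauss-style illustration, not the curve-specific construction of the paper): a random Gaussian projection approximately preserves pairwise distances.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, k = 200, 1000, 400                   # points, ambient dim, projected dim
X = rng.normal(size=(n, d))
G = rng.normal(size=(d, k)) / np.sqrt(k)   # random Gaussian projection matrix
Y = X @ G

orig = np.linalg.norm(X[0] - X[1])         # distance in the ambient space
proj = np.linalg.norm(Y[0] - Y[1])         # distance after projection
ratio = proj / orig                        # concentrates near 1 as k grows
```
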

Methods that align distributions by minimizing an adversarial distance between them have recently achieved impressive results. However, these approaches are difficult to optimize with gradient descent and they often do not converge well without careful hyperparameter tuning and proper initialization. We investigate whether turning the adversarial min-max problem into an optimization problem by replacing the maximization part with its dual improves the quality of the resulting alignment and explore its connections to Maximum Mean Discrepancy. Our empirical results suggest that using the dual formulation for the restricted family of linear discriminators results in a more stable convergence to a desirable solution when compared with the performance of a primal min-max GAN-like objective and an MMD objective under the same restrictions. We test our hypothesis on the problem of aligning two synthetic point clouds on a plane and on a real-image domain adaptation problem on digits. In both cases, the dual formulation yields an iterative procedure that gives more stable and monotonic improvement over time.
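The MMD objective that the dual formulation is compared against can be sketched as follows, here with an RBF kernel and a biased V-statistic estimate (the kernel, bandwidth, and data are illustrative, not those of the experiments):

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD with kernel k(x, y) = exp(-gamma ||x - y||^2)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return float(k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean())

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 2))
b = rng.normal(size=(500, 2))              # same distribution as a
shifted = rng.normal(size=(500, 2)) + 2.0  # mean-shifted distribution

close = mmd2(a, b)        # near 0: the two samples match in distribution
far = mmd2(a, shifted)    # clearly positive: the distributions differ
```

Minimizing such a discrepancy in the generator's parameters aligns the two distributions without any inner maximization, which is one reason MMD-style objectives are used as a stable baseline for adversarial alignment.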
