We describe an algorithm that finds dense packing configurations of a number of congruent disks in arbitrary domains in two or more dimensions. We have applied it to a large class of two-dimensional domains such as rectangles, ellipses, crosses, multiply connected domains, and even the cardioid. For many of the cases that we have studied, no previous result was available. The fundamental idea in our approach is the introduction of "image" disks, which allows one to work with a fixed container, thus lifting the limitations of the packing algorithms of \cite{Nurmela97,Amore21,Amore23}. We believe that the extension of our algorithm to three- (or higher-) dimensional containers (not considered here) is straightforward.
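
The following is a minimal Python sketch of the general packing-as-optimization idea for a fixed square container, assuming NumPy and SciPy; it is not the image-disk algorithm of the paper, and the Nelder-Mead local search with random restarts is only an illustrative stand-in.

```python
# Sketch: pack n congruent disks in the unit square by maximizing the
# largest common radius r such that every center is at least r from each
# wall and centers are pairwise at least 2r apart.  A crude local search;
# it may return a suboptimal packing depending on the restarts.
import numpy as np
from scipy.optimize import minimize

def common_radius(flat):
    pts = flat.reshape(-1, 2)
    wall = np.min(np.minimum(pts, 1.0 - pts))       # distance to nearest wall
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return min(wall, d.min() / 2.0)                 # admissible common radius

def pack(n, restarts=20, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(restarts):
        x0 = rng.uniform(0.1, 0.9, size=2 * n)
        res = minimize(lambda x: -common_radius(x), x0, method="Nelder-Mead",
                       options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-12})
        if best is None or res.fun < best.fun:
            best = res
    return -best.fun, best.x.reshape(-1, 2)

r, centers = pack(5)
print(f"5 disks in the unit square: r = {r:.4f} (known optimum is about 0.2071)")
```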


The distribution-free chain ladder of Mack justified the use of the chain ladder predictor and enabled Mack to derive an estimator of the conditional mean squared error of prediction for the chain ladder predictor. Classical insurance loss models, i.e. models of compound Poisson type, are not consistent with Mack's distribution-free chain ladder. However, for a sequence of compound Poisson loss models indexed by exposure (e.g. number of contracts), we show that the chain ladder predictor and Mack's estimator of the conditional mean squared error of prediction can be derived by considering large-exposure asymptotics. Hence, chain ladder prediction uncertainty can be quantified with Mack's estimator without relying on the validity of the model assumptions of the distribution-free chain ladder.
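
As a concrete illustration of the chain ladder predictor itself (the triangle values below are invented, and Mack's error estimator is not reproduced here), a minimal NumPy sketch:

```python
# Chain ladder on a toy cumulative run-off triangle (NaN = future cells):
# estimate development factors column by column, then fill the lower
# triangle recursively to obtain the predicted ultimates.
import numpy as np

tri = np.array([
    [100., 150., 165., 170.],
    [110., 168., 185., np.nan],
    [105., 160., np.nan, np.nan],
    [120., np.nan, np.nan, np.nan],
])

n = tri.shape[0]
f = np.empty(n - 1)                  # development factors
for j in range(n - 1):
    rows = ~np.isnan(tri[:, j + 1])  # accident years with both columns observed
    f[j] = tri[rows, j + 1].sum() / tri[rows, j].sum()

pred = tri.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(pred[i, j + 1]):
            pred[i, j + 1] = pred[i, j] * f[j]

print("factors:", np.round(f, 4))
print("predicted ultimates:", np.round(pred[:, -1], 1))
```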

Rational function approximations provide a simple but flexible alternative to polynomial approximation, allowing one to capture complex non-linearities without oscillatory artifacts. However, there have been few attempts to use rational functions on noisy data, owing to the likelihood of creating spurious singularities. To avoid such singularities, we use Bernstein polynomials and appropriate conditions on their coefficients to force the denominator to be strictly positive. While this reduces the range of rational functions that can be expressed, it keeps all the benefits of rational functions while maintaining the robustness of polynomial approximation in noisy-data scenarios. Our numerical experiments on noisy data show that existing rational approximation methods repeatedly produce spurious poles inside the approximation domain. This contrasts with our method, which cannot create poles in the approximation domain and provides better fits than polynomial approximation, and even than penalized splines, on functions of several variables. Moreover, guaranteeing a pole-free approximation on an interval is critical for estimating non-constant coefficients when numerically solving differential equations using spectral methods. This yields a compact representation of the original differential equation, allowing numerical solvers to achieve high accuracy quickly, as seen in our experiments.
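
A minimal sketch of the positivity idea, assuming NumPy and SciPy (the degrees, the lower bound eps, and the solver are illustrative choices, not the paper's exact method): because the Bernstein basis is a partition of unity, forcing every denominator coefficient to be at least eps > 0 guarantees q(x) >= eps on [0,1], so the fitted rational function cannot have a pole in the domain.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import comb

def bernstein(x, m):
    # Bernstein basis matrix: column k holds C(m,k) x^k (1-x)^(m-k).
    k = np.arange(m + 1)
    return comb(m, k) * x[:, None] ** k * (1 - x[:, None]) ** (m - k)

def fit_rational(x, y, dp=4, dq=4, eps=1e-3):
    Bp, Bq = bernstein(x, dp), bernstein(x, dq)

    def resid(c):
        return (Bp @ c[:dp + 1]) / (Bq @ c[dp + 1:]) - y

    # Numerator coefficients are free; denominator coefficients >= eps,
    # which keeps q strictly positive on [0,1].
    lb = np.r_[np.full(dp + 1, -np.inf), np.full(dq + 1, eps)]
    c0 = np.r_[np.zeros(dp + 1), np.ones(dq + 1)]
    sol = least_squares(resid, c0, bounds=(lb, np.inf))
    return lambda t: (bernstein(t, dp) @ sol.x[:dp + 1]) / (bernstein(t, dq) @ sol.x[dp + 1:])

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.tanh(10 * (x - 0.5)) + 0.05 * rng.normal(size=x.size)
r = fit_rational(x, y)
t = np.linspace(0, 1, 500)
print("max error vs. truth:", np.abs(r(t) - np.tanh(10 * (t - 0.5))).max())
```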

Nonignorable missing outcomes are common in real-world datasets and often require strong parametric assumptions to achieve identification. These assumptions can be implausible or untestable, so we may forgo them in favour of partially identified models that narrow the set of a priori possible values to an identification region. Here we propose a new nonparametric Bayes method that allows for the simultaneous incorporation of multiple clinically relevant restrictions of the parameter space. We focus on two common restrictions, instrumental variables and the direction of missing-data bias, and investigate how these restrictions narrow the identification region for parameters of interest. Additionally, we propose a rejection sampling algorithm that allows us to quantify the evidence for these assumptions in the data. We compare our method to a standard Heckman selection model in both simulation studies and an applied problem examining the effectiveness of cash transfers for people experiencing homelessness.
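
A toy sketch of the rejection-sampling idea in a deliberately simplified setting (all numbers and the normal approximation are illustrative, not the paper's model): draw from a posterior that ignores the restriction, then keep only draws consistent with an assumed direction of missing-data bias; the acceptance rate serves as a crude proxy for the evidence in favour of the restriction.

```python
import numpy as np

rng = np.random.default_rng(0)
y_obs = rng.normal(1.0, 1.0, size=80)   # observed outcomes
p_miss = 0.3                            # missingness rate, taken as known

# Unrestricted draws: a normal approximation to the posterior of the
# observed-data mean, and a flat prior over an identification region for
# the unidentified mean of the missing part.
mu_obs = rng.normal(y_obs.mean(), y_obs.std() / np.sqrt(y_obs.size), 5000)
mu_mis = rng.uniform(-2.0, 4.0, 5000)

keep = mu_mis <= mu_obs                 # bias-direction restriction
mu_all = (1 - p_miss) * mu_obs + p_miss * mu_mis

print(f"acceptance rate: {keep.mean():.2f}")
print(f"restricted posterior mean: {mu_all[keep].mean():.3f}")
```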

Optimum distance flag codes (ODFCs), as special flag codes, have received considerable attention due to their application in random network coding. In 2021, Alonso-Gonz\'{a}lez et al. constructed optimal $(n,\mathcal{A})$-ODFCs for $\mathcal {A}\subseteq \{1,2,\ldots,k,n-k,\ldots,n-1\}$ with $k\in \mathcal A$ and $k|n$. In this paper, we introduce a new construction of $(n,\mathcal A)_q$-ODFCs from maximum rank-metric codes. We prove that there is an $(n,\mathcal{A})$-ODFC of size $\frac{q^n-q^{k+r}}{q^k-1}+1$ for any $\mathcal{A}\subseteq\{1,2,\ldots,k,n-k,\ldots,n-1\}$ with $\mathcal A\cap \{k,n-k\}\neq\emptyset$, where $r\equiv n\pmod k$ and $0\leq r<k$. Furthermore, when $k>\frac{q^r-1}{q-1}$, this $(n,\mathcal A)_q$-ODFC is optimal. In particular, when $r=0$, the result of Alonso-Gonz\'{a}lez et al. is recovered.
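
A quick numeric companion to the statement above: the snippet evaluates the size $\frac{q^n-q^{k+r}}{q^k-1}+1$ with $r = n \bmod k$ and reports whether the sufficient optimality condition $k>\frac{q^r-1}{q-1}$ holds (parameter triples chosen arbitrarily for illustration).

```python
# Both quotients below are exact integers: q^k - 1 divides q^n - q^{k+r}
# because n - k - r is a multiple of k, and (q^r - 1)/(q - 1) is the
# Gaussian integer [r]_q.
def odfc_size(q, n, k):
    r = n % k
    size = (q**n - q**(k + r)) // (q**k - 1) + 1
    optimal = k > (q**r - 1) // (q - 1)
    return r, size, optimal

for q, n, k in [(2, 6, 3), (2, 7, 3), (3, 8, 3)]:
    r, size, optimal = odfc_size(q, n, k)
    print(f"q={q}, n={n}, k={k}: r={r}, size={size}, optimality condition holds: {optimal}")
```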

Over the last decade, approximating functions in infinite dimensions from samples has gained increasing attention in computational science and engineering, especially in computational uncertainty quantification. This is primarily due to the relevance of functions that are solutions to parametric differential equations in various fields, e.g. chemistry, economics, engineering, and physics. While acquiring accurate and reliable approximations of such functions is inherently difficult, current benchmark methods exploit the fact that such functions often belong to certain classes of holomorphic functions to obtain algebraic convergence rates in infinite dimensions with respect to the number of (potentially adaptive) samples $m$. Our work focuses on providing theoretical approximation guarantees for the class of $(\boldsymbol{b},\varepsilon)$-holomorphic functions, demonstrating that these algebraic rates are the best possible for Banach-valued functions in infinite dimensions. We establish lower bounds via a reduction to a discrete problem, combined with the theory of $m$-widths, Gelfand widths, and Kolmogorov widths. We study two cases, known and unknown anisotropy, in which the relative importance of the variables is known and unknown, respectively. A key conclusion of our paper is that in the latter setting, approximation from finite samples is impossible without some inherent ordering of the variables, even if the samples are chosen adaptively. Finally, in both cases we demonstrate near-optimal, non-adaptive (random) sampling and recovery strategies which achieve rates close to those given by the lower bounds.

Vintage factor analysis is an important type of factor analysis that aims first to find a low-dimensional representation of the original data, and then to seek a rotation such that the rotated low-dimensional representation is scientifically meaningful. Perhaps the most widely used vintage factor analysis is Principal Component Analysis (PCA) followed by the varimax rotation. Despite its popularity, little theoretical guarantee can be provided, mainly because the varimax rotation requires solving a non-convex optimization over the set of orthogonal matrices. In this paper, we propose a deflation varimax procedure that solves for each row of an orthogonal matrix sequentially. In addition to its net computational gain and flexibility, we are able to fully establish theoretical guarantees for the proposed procedure in a broad context. Adopting this new varimax approach as the second step after PCA, we further analyze this two-step procedure under a general class of factor models. Our results show that it estimates the factor loading matrix at the optimal rate when the signal-to-noise ratio (SNR) is moderate or large. In the low-SNR regime, we offer a possible improvement over PCA followed by the deflation procedure when the additive noise under the factor model is structured. The modified procedure is shown to be optimal in all SNR regimes. Our theory holds for finite samples and allows the number of latent factors to grow with the sample size, as well as the ambient dimension to grow with, or even exceed, the sample size. Extensive simulations and real data analysis further corroborate our theoretical findings.
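
For orientation, here is a minimal sketch of the classical two-step pipeline (PCA followed by the standard SVD-based varimax iteration), assuming NumPy; it is not the deflation varimax procedure proposed in the paper, and the synthetic data are illustrative.

```python
import numpy as np

def varimax(L, tol=1e-8, max_iter=500):
    # Standard varimax: rotate loadings L toward a sparse pattern by
    # iterating an SVD-based update over the orthogonal group.
    n, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        LR = L @ R
        G = L.T @ (LR**3 - LR @ np.diag((LR**2).sum(axis=0)) / n)
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt
        if s.sum() < var_old * (1 + tol):
            break
        var_old = s.sum()
    return L @ R, R

# Synthetic data with a sparse true loading matrix.
rng = np.random.default_rng(0)
A = np.zeros((60, 3))
A[:20, 0] = A[20:40, 1] = A[40:, 2] = 1.0
X = rng.normal(size=(500, 3)) @ A.T + 0.1 * rng.normal(size=(500, 60))

U, s, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)  # PCA step
loadings = Vt[:3].T * s[:3] / np.sqrt(X.shape[0])
rotated, _ = varimax(loadings)
print("share of near-zero rotated entries:",
      float((np.abs(rotated) < 0.05 * np.abs(rotated).max()).mean()))
```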

We introduce time-ordered multibody interactions to describe complex systems manifesting temporal as well as multibody dependencies. First, we show how the dynamics of multivariate Markov chains can be decomposed into ensembles of time-ordered multibody interactions. Then, we present an algorithm to extract those interactions from data capturing the system-level dynamics of node states, together with a measure to characterize the complexity of interaction ensembles. Finally, we experimentally validate the robustness of our algorithm against statistical errors and its efficiency at inferring parsimonious interaction ensembles.
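
As a preliminary, data-facing step (not the decomposition itself), one can estimate the system-level transition matrix of a multivariate Markov chain by counting observed transitions; the three-node dynamics below are invented for illustration.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
states = list(product([0, 1], repeat=3))   # joint states of 3 binary nodes
idx = {s: i for i, s in enumerate(states)}

# Toy dynamics: each node copies its left neighbour with probability 0.9.
traj = [tuple(int(v) for v in rng.integers(0, 2, 3))]
for _ in range(20000):
    prev = traj[-1]
    traj.append(tuple(prev[i - 1] if rng.random() < 0.9 else 1 - prev[i - 1]
                      for i in range(3)))

C = np.zeros((8, 8))
for a, b in zip(traj, traj[1:]):
    C[idx[a], idx[b]] += 1
T = C / np.maximum(C.sum(axis=1, keepdims=True), 1)   # empirical transition matrix
print("row sums:", np.round(T.sum(axis=1), 3))         # each close to 1
```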

The domatic number of a graph is the maximum number of vertex-disjoint dominating sets that partition the vertex set of the graph. In this note we consider the fractional variant of this notion. Graphs with fractional domatic number 1 are exactly the graphs that contain an isolated vertex, and it is known that all other graphs have fractional domatic number at least 2. We characterize the graphs with fractional domatic number 2: a graph without isolated vertices has fractional domatic number 2 if and only if it has a vertex of degree 1 or a connected component isomorphic to a 4-cycle. We conjecture that if the fractional domatic number is greater than 2, then it is at least 7/3.
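
The characterization is easy to turn into a checker; a small sketch assuming networkx (the function name is ours):

```python
import networkx as nx

def fractional_domatic_is_two(G):
    # Characterization from the note: no isolated vertices, and either a
    # degree-1 vertex or a connected component isomorphic to the 4-cycle.
    if any(d == 0 for _, d in G.degree):
        raise ValueError("isolated vertex: fractional domatic number is 1")
    if any(d == 1 for _, d in G.degree):
        return True
    return any(nx.is_isomorphic(G.subgraph(c), nx.cycle_graph(4))
               for c in nx.connected_components(G))

print(fractional_domatic_is_two(nx.path_graph(4)))   # True: degree-1 vertices
print(fractional_domatic_is_two(nx.cycle_graph(4)))  # True: C4 component
print(fractional_domatic_is_two(nx.cycle_graph(5)))  # False: conjectured >= 7/3
```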

We consider uncertainty quantification for the Poisson problem subject to domain uncertainty. For the stochastic parameterization of the random domain, we use the model recently introduced by Kaarnioja, Kuo, and Sloan (SIAM J. Numer. Anal., 2020), in which a countably infinite number of independent random variables enter the random field as periodic functions. We develop lattice quasi-Monte Carlo (QMC) cubature rules for computing the expected value of the solution to the Poisson problem subject to domain uncertainty. These QMC rules can be shown to exhibit the higher-order cubature convergence rates permitted by the periodic setting, independently of the stochastic dimension of the problem. In addition, we present a complete error analysis for the problem, taking into account the approximation errors incurred by truncating the input random field to a finite number of terms and by discretizing the spatial domain using finite elements. The paper concludes with numerical experiments demonstrating the theoretical error estimates.
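
To fix ideas, here is a generic rank-1 lattice rule in NumPy on a toy periodic integrand whose exact expectation is 1 (the generating vectors are drawn at random purely for illustration; the rules in this literature would be obtained by a CBC-type construction):

```python
import numpy as np

def lattice_rule(f, z, N):
    # Rank-1 lattice points {i z / N mod 1}, i = 0, ..., N-1.
    pts = (np.outer(np.arange(N), z) / N) % 1.0
    return f(pts).mean()

s = 4
# Each factor integrates to 1 over [0,1], so E[f] = 1 exactly.
f = lambda x: np.prod(1 + np.sin(2 * np.pi * x) / (1 + np.arange(x.shape[1]))**2, axis=1)

rng = np.random.default_rng(0)
for N in [257, 1021, 4093]:            # prime numbers of points
    z = rng.integers(1, N, size=s)     # ad hoc generating vector
    print(N, abs(lattice_rule(f, z, N) - 1.0))
```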

We improve the previously best known upper bounds on the sizes of $\theta$-spherical codes for every $\theta<\theta^*\approx 62.997^{\circ}$, by a factor of at least $0.4325$ in sufficiently high dimensions. Furthermore, for sphere packing densities in dimensions $n\geq 2000$, we obtain an improvement by a factor of at least $0.4325+\frac{51}{n}$. Our method also improves on many non-numerical sphere packing density bounds in smaller dimensions. This is the first such improvement for each dimension since the work of Kabatyanskii and Levenshtein~\cite{KL} and its later improvement by Levenshtein~\cite{Leven79}. Novelties of this paper include the analysis of triple correlations, the use of concentration of mass in high dimensions, and the study of the spacings between the roots of Jacobi polynomials.
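
One of the ingredients named above is directly computable; a small illustration assuming SciPy (degree and parameters chosen arbitrarily):

```python
import numpy as np
from scipy.special import roots_jacobi

# Roots of the Jacobi polynomial P_n^{(a,b)} on (-1, 1) and the spacings
# between consecutive roots; Gauss-Jacobi nodes cluster near the endpoints,
# so the largest gap sits near the middle of the interval.
n, a, b = 40, 1.5, 1.5
x, _ = roots_jacobi(n, a, b)
gaps = np.diff(np.sort(x))
print(f"min gap {gaps.min():.4f}, max gap {gaps.max():.4f}")
print(f"largest gap at index {gaps.argmax()} of {gaps.size - 1}")
```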
