
Microaggregation is a method to coarsen a dataset by optimally clustering data points into groups of at least $k$ points, thereby providing a $k$-anonymity-type disclosure guarantee for each point in the dataset. Previous algorithms for univariate microaggregation had an $O(k n)$ time complexity. By rephrasing microaggregation as an instance of the concave least weight subsequence problem, in this work we provide improved algorithms that compute an optimal univariate microaggregation on sorted data in $O(n)$ time and space. We further show that our algorithms work not only for the sum-of-squares cost function typically considered, but extend seamlessly to many other cost functions used for univariate microaggregation tasks. In experiments we show that the presented algorithms lead to real-world performance improvements.
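
As a point of reference, the following is a minimal sketch of the classical $O(kn)$ dynamic program for optimal univariate microaggregation under the sum-of-squares cost (the baseline that the $O(n)$ algorithms improve on), using the standard fact that on sorted data the groups of an optimal partition have between $k$ and $2k-1$ consecutive points; function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def univariate_microaggregation(x, k):
    """Optimal k-anonymous grouping of 1-D data under the SSE cost.

    Classical O(k*n) dynamic program: on sorted data, every group of an
    optimal partition consists of between k and 2k-1 consecutive points.
    Returns (total SSE, list of half-open group index ranges).
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if n < k:
        raise ValueError("need at least k data points")
    # Prefix sums allow O(1) within-group SSE queries.
    s1 = np.concatenate(([0.0], np.cumsum(x)))
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))

    def sse(i, j):  # SSE of the group x[i:j]
        tot = s1[j] - s1[i]
        return (s2[j] - s2[i]) - tot * tot / (j - i)

    INF = float("inf")
    cost = [0.0] + [INF] * n          # cost[j]: optimal SSE of a partition of x[:j]
    back = [0] * (n + 1)
    for j in range(k, n + 1):
        # The last group is x[i:j] with k <= j - i <= 2k - 1.
        for i in range(max(0, j - 2 * k + 1), j - k + 1):
            c = cost[i] + sse(i, j)
            if c < cost[j]:
                cost[j], back[j] = c, i
    groups, j = [], n
    while j > 0:                      # recover the group boundaries
        groups.append((back[j], j))
        j = back[j]
    return cost[n], groups[::-1]

print(univariate_microaggregation([1, 2, 2, 9, 10, 11, 30], k=3))
# (295.0, [(0, 4), (4, 7)])
```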

Related Content

Estimating parameters from data is a fundamental problem in physics, customarily done by minimizing a loss function between a model and observed statistics. In scattering-based analysis, researchers often employ their domain expertise to select a specific range of wavevectors for analysis, a choice that can vary depending on the specific case. We introduce an alternative paradigm that defines a probabilistic generative model from the beginning of data processing and propagates the uncertainty through to parameter estimation, termed ab initio uncertainty quantification (AIUQ). As an illustrative example, we demonstrate this approach with differential dynamic microscopy (DDM), which extracts dynamical information through Fourier analysis at a selected range of wavevectors. We first show that DDM is equivalent to fitting a temporal variogram in reciprocal space using a latent factor model as the generative model. Then we derive the maximum marginal likelihood estimator, which optimally weighs information at all wavevectors, thereby eliminating the need to select a wavevector range. Furthermore, we substantially reduce the computational cost by utilizing the generalized Schur algorithm for Toeplitz covariances without approximation. Simulated studies validate that AIUQ significantly improves estimation accuracy and enables model selection with automated analysis. The utility of AIUQ is also demonstrated by three distinct sets of experiments: first in an isotropic Newtonian fluid, pushing the limits of optically dense systems compared to multiple particle tracking; next in a system undergoing a sol-gel transition, automating the determination of the gelling point and critical exponent; and lastly, in discerning anisotropic diffusive behavior of colloids in a liquid crystal. These outcomes collectively underscore AIUQ's versatility in capturing system dynamics in an efficient and automated manner.
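
For concreteness, the following numpy sketch computes the standard radially averaged DDM image structure function referred to above (Fourier analysis of image differences over wavevectors and lags); it is not the AIUQ estimator, and the array layout and names are assumptions made for illustration.

```python
import numpy as np

def ddm_structure_function(frames, lags):
    """Radially averaged image structure function D(q, dt) of a movie.

    frames: (T, H, W) array of equally spaced grayscale images.
    lags:   iterable of integer frame lags dt >= 1.
    Returns (q_bin_centers, D) with D of shape (len(lags), n_q).
    """
    T, H, W = frames.shape
    qy = np.fft.fftfreq(H)[:, None]
    qx = np.fft.fftfreq(W)[None, :]
    qmag = np.sqrt(qx**2 + qy**2)
    n_q = min(H, W) // 2
    q_edges = np.linspace(0.0, qmag.max(), n_q + 1)
    which_bin = np.clip(np.digitize(qmag.ravel(), q_edges) - 1, 0, n_q - 1)
    counts = np.bincount(which_bin, minlength=n_q)

    D = np.zeros((len(lags), n_q))
    for i, dt in enumerate(lags):
        diff = frames[dt:] - frames[:-dt]             # all frame pairs at lag dt
        power = np.abs(np.fft.fft2(diff)) ** 2        # |FFT of the difference|^2
        mean_power = power.mean(axis=0).ravel()       # average over time origins
        sums = np.bincount(which_bin, weights=mean_power, minlength=n_q)
        D[i] = sums / np.maximum(counts, 1)           # azimuthal (radial) average
    q_centers = 0.5 * (q_edges[:-1] + q_edges[1:])
    return q_centers, D

# Illustration on synthetic noise frames:
frames = np.random.default_rng(0).normal(size=(20, 64, 64))
q, D = ddm_structure_function(frames, lags=[1, 2, 5])
print(D.shape)  # (3, 32)
```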

Fourth-order variational inequalities are encountered in various scientific and engineering disciplines, including elliptic optimal control problems and plate obstacle problems. In this paper, we consider additive Schwarz methods for solving fourth-order variational inequalities. Based on a unified framework of various finite element methods for fourth-order variational inequalities, we develop one- and two-level additive Schwarz methods. We prove that the two-level method is scalable in the sense that the convergence rate of the method depends on $H/h$ and $H/\delta$ only, where $h$ and $H$ are the typical diameters of an element and a subdomain, respectively, and $\delta$ measures the overlap among the subdomains. This proof relies on a new nonlinear positivity-preserving coarse interpolation operator, the construction of which was previously unknown. To the best of our knowledge, this analysis represents the first investigation into the scalability of the two-level additive Schwarz method for fourth-order variational inequalities. Our theoretical results are verified by numerical experiments.
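
For orientation, the standard two-level additive Schwarz setting behind the notation above is as follows (a sketch under common domain-decomposition conventions; the paper's nonlinear positivity-preserving coarse interpolation operator is not reproduced here). The finite element space is decomposed as
\[
  V = V_0 + V_1 + \cdots + V_N ,
\]
where $V_1,\dots,V_N$ are local spaces on overlapping subdomains of diameter $H$ with overlap width $\delta$, $V_0$ is a coarse space associated with a mesh of size $H$, and $h$ is the fine mesh size; the scalability statement asserts that the convergence rate of the resulting method is controlled by $H/h$ and $H/\delta$ alone, uniformly in the number of subdomains $N$.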

In a regression model with multiple response variables and multiple explanatory variables, if the difference of the mean vectors of the response variables for different values of the explanatory variables is always in the direction of the first principal eigenvector of the covariance matrix of the response variables, the model is called a multivariate allometric regression model. This paper studies the estimation of the first principal eigenvector in the multivariate allometric regression model. A class of estimators that includes conventional estimators is proposed, based on weighted combinations of the regression sum-of-squares matrix and the residual sum-of-squares matrix. We establish an upper bound on the mean squared error of the estimators in this class, and derive the weight value minimizing this upper bound. Sufficient conditions for the consistency of the estimators are discussed in weak identifiability regimes, under which the difference between the largest and second largest eigenvalues of the covariance matrix decays asymptotically, and in ``large $p$, large $n$'' regimes, where $p$ is the number of response variables and $n$ is the sample size. Several numerical results are also presented.
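
The following numpy sketch illustrates the kind of estimator described above: form a weighted combination of the regression and residual sum-of-squares matrices and take its leading eigenvector. The weight $w$ here is a plain tuning constant, not the optimal weight derived in the paper, and all names are illustrative.

```python
import numpy as np

def leading_eigvec_weighted_ss(Y, X, w=0.5):
    """Leading eigenvector of a weighted combination of the regression and
    residual sum-of-squares matrices of the multivariate regression Y ~ X.

    The weight w is a plain tuning constant here; the optimal weight derived
    in the paper is not reproduced in this sketch.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    B_hat, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)   # least-squares coefficients
    Y_fit = Xc @ B_hat
    S_H = Y_fit.T @ Y_fit                    # regression sum-of-squares matrix
    S_E = (Yc - Y_fit).T @ (Yc - Y_fit)      # residual sum-of-squares matrix
    S_w = w * S_H + (1.0 - w) * S_E          # weighted combination
    _, eigvecs = np.linalg.eigh(S_w)
    v = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue
    return v if v[np.argmax(np.abs(v))] > 0 else -v   # fix the sign

# Synthetic allometric data: mean shifts and top covariance eigenvector aligned.
rng = np.random.default_rng(1)
v_true = np.array([3.0, 2.0, 1.0]); v_true /= np.linalg.norm(v_true)
X = rng.normal(size=(200, 1))
Y = X @ (2.0 * v_true[None, :]) + 0.5 * rng.normal(size=(200, 3))
print(leading_eigvec_weighted_ss(Y, X))      # approximately v_true
```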

In this paper, we examine the relationship between the stability of the dynamical system $x^{\prime}=f(x)$ and the computability of its basins of attraction. We present a computable $C^{\infty}$ system $x^{\prime}=f(x)$ that possesses a computable and stable equilibrium point, yet whose basin of attraction is robustly non-computable in a neighborhood of $f$, in the sense that both the equilibrium point and the non-computability of its associated basin of attraction persist when $f$ is slightly perturbed. This indicates that local stability near a stable equilibrium point alone is insufficient to guarantee the computability of its basin of attraction. However, we also demonstrate that the basins of attraction associated with a structurally stable, globally stable planar system defined on a compact set are computable. Our findings suggest that the global stability of a system and the compactness of the domain play a pivotal role in determining the computability of its basins of attraction.
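
To make the object of study concrete, the following toy numpy sketch numerically flags which initial conditions of a simple planar system converge to a chosen stable equilibrium, i.e., it crudely approximates a basin of attraction by simulation; it only illustrates the notion and has no bearing on the paper's computability constructions. The vector field and all parameters are illustrative.

```python
import numpy as np

def f(z):
    """A planar vector field with stable equilibria at (1, 0) and (-1, 0)."""
    x, y = z
    return np.array([x - x**3, -y])

def in_basin(z0, target, steps=1000, dt=0.02, tol=1e-3):
    """True if the forward Euler orbit of z0 enters a small ball around target;
    a crude numerical stand-in for membership in the basin of attraction."""
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z = z + dt * f(z)
        if np.linalg.norm(z - target) < tol:
            return True
    return False

# Sample a coarse grid of initial conditions in [-2, 2]^2.
xs = np.linspace(-2.0, 2.0, 21)
grid = [(x, y) for x in xs for y in xs]
hits = [in_basin(z0, np.array([1.0, 0.0])) for z0 in grid]
print(f"{np.mean(hits):.2f} of the sampled points are flagged as converging to (1, 0)")
```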

We develop a new `subspace layered least squares' interior point method (IPM) for solving linear programs. Applied to an $n$-variable linear program in standard form, the iteration complexity of our IPM is upper bounded, up to an $O(n^{1.5} \log n)$ factor, by the \emph{straight line complexity} (SLC) of the linear program. This term refers to the minimum number of segments of any piecewise linear curve that traverses the \emph{wide neighborhood} of the central path, a lower bound on the iteration complexity of any IPM that follows a piecewise linear trajectory along a path induced by a self-concordant barrier. In particular, our algorithm matches the number of iterations of any such IPM up to the same factor $O(n^{1.5}\log n)$. As our second contribution, we show that the SLC of any linear program is upper bounded by $2^{n + o(1)}$, which implies that our IPM's iteration complexity is at most exponential. This is in contrast to existing iteration complexity bounds, which depend on either bit-complexity or condition measures and can be unbounded in the problem dimension. We achieve our upper bound by showing that the central path is well-approximated by a combinatorial proxy we call the \emph{max central path}, which consists of $2n$ shadow vertex simplex paths. Our upper bound complements the lower bounds of Allamigeon, Benchimol, Gaubert, and Joswig (SIAGA 2018), and Allamigeon, Gaubert, and Vandame (STOC 2022), who constructed linear programs with exponential SLC. Finally, we show that each iteration of our IPM can be implemented in strongly polynomial time. Along the way, we develop a deterministic algorithm that approximates the singular value decomposition of a matrix to high accuracy in strongly polynomial time, which may be of independent interest.
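
For reference, the central path and the wide neighborhood referred to above are commonly defined as follows (standard long-step IPM notation, which may differ in details from the paper's). For a standard-form LP $\min\{c^{\top}x : Ax=b,\ x\ge 0\}$ with dual slacks $s$, the central path is the set of strictly feasible primal-dual points with $x_i s_i = \mu$ for all $i$ and some $\mu>0$, and the wide neighborhood is
\[
  \mathcal{N}_{-\infty}(\theta) = \Bigl\{ (x,y,s) \ \text{strictly feasible} : x_i s_i \ge (1-\theta)\,\frac{x^{\top}s}{n} \ \text{ for all } i \Bigr\}, \qquad \theta\in(0,1).
\]
The straight line complexity counts the minimum number of segments of a piecewise linear curve that stays inside such a neighborhood.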

We consider the classical problems of interpolating a polynomial given a black box for evaluation, and of multiplying two polynomials, in the setting where the bit-lengths of the coefficients may vary widely, so-called unbalanced polynomials. Writing $s$ for the total bit-length and $D$ for the degree, our new algorithms have expected running time $\tilde{O}(s \log D)$, whereas previous methods for dense or sparse arithmetic have bit complexity at least $\tilde{O}(sD)$ or $\tilde{O}(s^2)$, respectively.
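
For context, the following sketch shows classical Kronecker substitution for dense multiplication of integer polynomials; its packing width is governed by the largest coefficient, so a single huge coefficient inflates the cost to roughly the degree times the maximal coefficient length even when the total bit-length $s$ is small. This is baseline behavior, not the paper's algorithm; the code assumes nonnegative coefficients for simplicity.

```python
def kronecker_multiply(a, b):
    """Multiply polynomials with nonnegative integer coefficients
    (a[i] is the coefficient of x^i) via Kronecker substitution:
    pack each polynomial into one big integer and multiply once."""
    # Slot width: enough bits that no coefficient of the product overflows.
    bound = min(len(a), len(b)) * max(a) * max(b)
    slot = bound.bit_length() + 1
    pack = lambda p: sum(c << (i * slot) for i, c in enumerate(p))
    prod = pack(a) * pack(b)
    mask = (1 << slot) - 1
    return [(prod >> (i * slot)) & mask for i in range(len(a) + len(b) - 1)]

print(kronecker_multiply([3, 5, 7, 2], [1, 1]))   # [3, 8, 12, 9, 2]

# Unbalanced input: one coefficient carries essentially all of the total
# bit-length s, yet every slot must be about 1000 bits wide, so the packed
# integers grow like (degree) * (largest coefficient length) rather than s.
unbalanced = [1, 1, 1, 2**1000]
res = kronecker_multiply(unbalanced, [1, 1])
print(res[:3], res[3].bit_length(), res[4].bit_length())   # [1, 2, 2] 1001 1001
```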

The broad class of multivariate unified skew-normal (SUN) distributions has recently been shown to possess fundamental conjugacy properties. When used as priors for the vector of parameters in general probit, tobit, and multinomial probit models, these distributions yield posteriors that still belong to the SUN family. Although this core result has led to important advancements in Bayesian inference and computation, its applicability beyond likelihoods associated with fully-observed, discretized, or censored realizations from multivariate Gaussian models remains unexplored. This article fills this important gap by proving that the wider family of multivariate unified skew-elliptical (SUE) distributions, which extends SUNs to more general perturbations of elliptical densities, guarantees conjugacy for broader classes of models, beyond those relying on fully-observed, discretized, or censored Gaussians. This result leverages the closure of the SUE family under linear combinations, conditioning, and marginalization to prove that it is conjugate to the likelihood induced by general multivariate regression models for fully-observed, censored, or dichotomized realizations from skew-elliptical distributions. This advancement substantially enlarges the set of models that enable conjugate Bayesian inference to general formulations arising from elliptical and skew-elliptical families, including the multivariate Student's $t$ and skew-$t$, among others.
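
Schematically, the probit case of the conjugacy result referred to above takes the following form (parameter updates omitted; notation is illustrative). For binary responses $y_i\in\{0,1\}$ with covariates $x_i$, the likelihood is a product of Gaussian cumulative distribution functions,
\[
  p(y\mid\beta) \;=\; \prod_{i=1}^{n} \Phi\bigl((2y_i-1)\,x_i^{\top}\beta\bigr),
\]
and SUN densities are closed (up to normalization) under multiplication by such cdf factors, so a SUN prior on $\beta$ yields a SUN posterior. The article extends this type of closure from the Gaussian/SUN setting to the skew-elliptical/SUE setting.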

We analyze a Discontinuous Galerkin method for a problem with linear advection-reaction and $p$-type diffusion, with Sobolev indices $p\in (1, \infty)$. The discretization of the diffusion term is based on the full gradient, including jump liftings, together with interior-penalty stabilization, while for the advective contribution we consider a strengthened version of the classical upwind scheme. The derived error estimates track the dependence of the local contributions to the error on local P\'eclet numbers. A set of numerical tests supports the theoretical derivations.
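
A representative strong form of the problem class described above is (stated only to fix ideas; the paper's precise assumptions and discrete forms are not reproduced here)
\[
  -\nabla\cdot\bigl(|\nabla u|^{p-2}\,\nabla u\bigr) + \boldsymbol{\beta}\cdot\nabla u + \mu\, u = f \quad \text{in } \Omega, \qquad p\in(1,\infty),
\]
with advection field $\boldsymbol{\beta}$, reaction coefficient $\mu$, and suitable boundary conditions; the DG scheme treats the $p$-type diffusion with a full (lifted) gradient plus interior-penalty stabilization, and the advection with an upwind-type flux.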

The number of proposed iterative optimization heuristics is growing steadily, and with this growth there have been many points of discussion within the wider community. One particular criticism raised against many new algorithms is their focus on the metaphors used to present them, rather than on their potential algorithmic contributions. Several studies of popular metaphor-based algorithms have highlighted these problems, even showcasing algorithms that are functionally equivalent to older existing methods. Unfortunately, this detailed approach does not scale to the whole set of metaphor-based algorithms. Because of this, we investigate ways in which benchmarking can shed light on these algorithms. To this end, we run a set of 294 algorithm implementations on the BBOB function suite. We investigate how the choice of budget, performance measure, or other aspects of experimental design impact the comparison of these algorithms. Our results emphasize why benchmarking is a key step in expanding our understanding of the algorithm space, and what challenges still need to be overcome to fully gauge the potential improvements to the state of the art hiding behind the metaphors.
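
The following small self-contained sketch (deliberately not using the BBOB/IOHprofiler tooling of the study) shows how the chosen evaluation budget can change the picture when comparing two simple optimizers on a single test function; the algorithms, test function, and budgets are purely illustrative.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

def random_search(f, dim, budget, rng):
    """Uniform random search on [-5, 5]^dim; returns the best-so-far trace."""
    best, trace = np.inf, []
    for _ in range(budget):
        best = min(best, f(rng.uniform(-5, 5, dim)))
        trace.append(best)
    return trace

def one_plus_one_es(f, dim, budget, rng, sigma=1.0):
    """(1+1)-ES with a 1/5th-success-rule step-size update; best-so-far trace."""
    x = rng.uniform(-5, 5, dim)
    fx, trace = f(x), []
    for _ in range(budget):
        y = x + sigma * rng.normal(size=dim)
        fy = f(y)
        if fy <= fx:
            x, fx, sigma = y, fy, sigma * 1.5       # expand step size on success
        else:
            sigma *= 1.5 ** (-0.25)                 # shrink step size on failure
        trace.append(fx)
    return trace

budget, dim, runs = 500, 10, 20
rs = np.mean([random_search(sphere, dim, budget, np.random.default_rng(s)) for s in range(runs)], axis=0)
es = np.mean([one_plus_one_es(sphere, dim, budget, np.random.default_rng(s)) for s in range(runs)], axis=0)
for b in (10, 50, 500):   # conclusions drawn from the comparison depend on this cutoff
    print(f"budget {b:4d}:  random search {rs[b - 1]:10.4f}   (1+1)-ES {es[b - 1]:10.4f}")
```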

We consider the problem of sketching a set valuation function, defined as the expectation of a valuation function of independent random item values. We show that for monotone subadditive or submodular valuation functions satisfying a weak homogeneity condition, or certain other conditions, there exist discretized distributions of item values with support sizes of $O(k\log k)$ that yield a sketch valuation function which is a constant-factor approximation for every value query over a set of at most $k$ items. The discretized distributions can be computed efficiently, by an algorithm applied to each item's value distribution separately. Our results hold under conditions that accommodate a wide range of valuation functions arising in applications, such as the value of a team corresponding to the best performance of a team member, constant elasticity of substitution production functions exhibiting diminishing returns used in economics and consumer theory, and others. Sketch valuation functions are particularly valuable for finding approximate solutions to optimization problems such as best set selection and welfare maximization. They enable computationally efficient evaluation of approximate value oracle queries and provide an approximation guarantee for the underlying optimization problem.
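
The following toy numpy sketch illustrates the idea: each item's value distribution is compressed separately to a small support (here via simple quantile midpoints, which is not the paper's construction), and a set valuation, here the expectation of the best value in the set, is then evaluated against the sketch; all names and the discretization rule are illustrative.

```python
import numpy as np

def discretize(samples, support_size):
    """Compress an item's value distribution (given via samples) to a small,
    equally weighted support of quantile midpoints (illustrative rule only;
    not the discretization constructed in the paper)."""
    qs = (np.arange(support_size) + 0.5) / support_size
    return np.quantile(samples, qs)

def expected_max(value_draws):
    """Monte Carlo estimate of the set valuation E[max over the set], with the
    independent value draws of the chosen items given column-wise."""
    return float(np.max(value_draws, axis=1).mean())

rng = np.random.default_rng(0)
n_items, n_mc, k = 6, 20000, 4

# "True" continuous item value distributions: independent lognormals.
scales = rng.uniform(0.5, 2.0, size=n_items)
true_draws = rng.lognormal(mean=0.0, sigma=scales, size=(n_mc, n_items))

# Sketch: each item's distribution is compressed separately to a small support.
support_size = 8
supports = [discretize(true_draws[:, i], support_size) for i in range(n_items)]
sketch_draws = np.column_stack([rng.choice(supports[i], size=n_mc) for i in range(n_items)])

# Compare the valuation of one set of at most k items under both models.
S = [0, 2, 3, 5]
print("true value:  ", expected_max(true_draws[:, S]))
print("sketch value:", expected_max(sketch_draws[:, S]))
```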
