
We consider a discrete best approximation problem formulated in the framework of tropical algebra, which deals with the theory and applications of algebraic systems with idempotent operations. Given a set of samples of the input and output of an unknown function, the problem is to construct a generalized tropical Puiseux polynomial that best approximates the function in the sense of a tropical distance function. Constructing the approximating polynomial involves evaluating both the unknown coefficient and the unknown exponent of each monomial. To solve the approximation problem, we first reduce it to an equation in an unknown vector of coefficients, defined by a matrix whose entries are parameterized by the unknown exponents. We derive a best approximate solution of this equation, which yields both the vector of coefficients and the approximation error, each parameterized by the exponents. Optimal values of the exponents are then found by minimizing the approximation error, which is recast as the minimization of a function of the exponents over all partitions of a finite set. We solve this minimization problem in terms of max-plus algebra by a computational procedure based on the agglomerative clustering technique, and extend the solution to the problem of finding optimal exponents in the polynomial in terms of max-algebra. The results obtained are applied to develop new solutions for conventional problems of discrete best Chebyshev approximation of real functions by piecewise linear functions and piecewise Puiseux polynomials. We discuss the computational complexity of the proposed solution, estimate upper bounds on the computational time, and present examples of approximation problems solved in terms of max-plus and max-algebra, together with graphical illustrations.
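A minimal max-plus sketch of the coefficient step may help fix ideas. Assuming the exponents p_j are already chosen (the paper optimizes them), the matrix A with entries a_ij = p_j t_i and the sample values b_i define the equation A (x) c = b in max-plus arithmetic; the classical greatest subsolution, shifted by half of its worst-case residual, gives a Chebyshev-best coefficient vector. All names and data below are illustrative, not the paper's algorithm.

```python
import numpy as np

def maxplus_matvec(A, x):
    """Max-plus product (A (x) x)_i = max_j (a_ij + x_j)."""
    return np.max(A + x[None, :], axis=1)

def chebyshev_best_coeffs(A, b):
    """Greatest subsolution shifted by half the residual (hedged sketch)."""
    x_star = np.min(b[:, None] - A, axis=0)        # greatest subsolution
    delta = np.max(b - maxplus_matvec(A, x_star))  # worst-case underestimate
    return x_star + delta / 2.0, delta / 2.0       # coefficients, Chebyshev error

# toy data: sample inputs t_i, values b_i, candidate exponents p_j (hypothetical)
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 1.5, 3.2, 5.1])
p = np.array([0.0, 1.0, 2.0])
A = np.outer(t, p)            # monomial term p_j * t_i in max-plus arithmetic
coeffs, err = chebyshev_best_coeffs(A, b)
print(coeffs, err)
```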

Related Content

In a regression model with multiple response variables and multiple explanatory variables, if the difference between the mean vectors of the response variables for different values of the explanatory variables always lies in the direction of the first principal eigenvector of the covariance matrix of the response variables, the model is called a multivariate allometric regression model. This paper studies the estimation of the first principal eigenvector in the multivariate allometric regression model. A class of estimators that includes conventional estimators is proposed, based on weighted sums of the regression sum-of-squares matrix and the residual sum-of-squares matrix. We establish an upper bound on the mean squared error of the estimators in this class, and the weight value minimizing this bound is derived. Sufficient conditions for the consistency of the estimators are discussed in weak identifiability regimes, under which the gap between the largest and second largest eigenvalues of the covariance matrix decays asymptotically, and in ``large $p$, large $n$'' regimes, where $p$ is the number of response variables and $n$ is the sample size. Several numerical results are also presented.
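As a hedged illustration of the estimator class described above, the sketch below forms a weighted sum of the regression and residual sum-of-squares matrices of a multivariate linear model and returns its leading eigenvector; the weight w is a placeholder, and the paper's MSE-minimizing choice of weight is not reproduced here.

```python
import numpy as np

def first_eigvec_estimator(Y, X, w=0.5):
    """Leading eigenvector of a weighted sum of the H and E matrices (sketch)."""
    n = Y.shape[0]
    X1 = np.column_stack([np.ones(n), X])            # design with intercept
    P = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T        # hat matrix
    Yc = Y - Y.mean(axis=0)                          # centered responses
    H = Yc.T @ P @ Yc                                # regression sum-of-squares matrix
    E = Yc.T @ (np.eye(n) - P) @ Yc                  # residual sum-of-squares matrix
    M = w * H + (1.0 - w) * E                        # weighted combination (w is a placeholder)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, -1]                               # leading eigenvector

# toy usage with synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
B = rng.normal(size=(3, 2))
Y = X @ B.T + rng.normal(scale=0.5, size=(100, 3))
print(first_eigvec_estimator(Y, X))
```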

We present BadGD, a unified theoretical framework that exposes the vulnerabilities of gradient descent algorithms through strategic backdoor attacks. Backdoor attacks involve embedding malicious triggers into a training dataset to disrupt the model's learning process. Our framework introduces three novel constructs: Max RiskWarp Trigger, Max GradWarp Trigger, and Max GradDistWarp Trigger, each designed to exploit specific aspects of gradient descent by distorting empirical risk, deterministic gradients, and stochastic gradients respectively. We rigorously define clean and backdoored datasets and provide mathematical formulations for assessing the distortions caused by these malicious backdoor triggers. By measuring the impact of these triggers on the model training procedure, our framework bridges existing empirical findings with theoretical insights, demonstrating how a malicious party can exploit gradient descent hyperparameters to maximize attack effectiveness. In particular, we show that these exploitations can significantly alter the loss landscape and gradient calculations, leading to compromised model integrity and performance. This research underscores the severe threats posed by such data-centric attacks and highlights the urgent need for robust defenses in machine learning. BadGD sets a new standard for understanding and mitigating adversarial manipulations, ensuring the reliability and security of AI systems.
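The toy sketch below is not one of the paper's trigger constructs; it only illustrates how the gradient distortion induced by a backdoor trigger can be measured on a single batch, here for a logistic model with a hypothetical additive patch on one feature.

```python
import numpy as np

def logistic_grad(w, X, y):
    """Gradient of the mean logistic loss for labels y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 10))
y = (X[:, 0] > 0).astype(float)
w = rng.normal(size=10)

trigger = np.zeros(10)
trigger[-1] = 3.0                       # hypothetical patch on the last feature
X_bad = X.copy()
X_bad[:32] += trigger                   # poison a fraction of the batch
y_bad = y.copy()
y_bad[:32] = 1.0                        # relabel poisoned rows to the target class

g_clean = logistic_grad(w, X, y)
g_bad = logistic_grad(w, X_bad, y_bad)
print("gradient distortion:", np.linalg.norm(g_bad - g_clean))
```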

We consider the problem of constructing reduced models for large scale systems with poles in general domains in the complex plane (as opposed to, e.g., the open left-half plane or the open unit disk). Our goal is to design a model reduction scheme, building upon theoretically established methodologies, yet encompassing this new class of models. To this aim, we develop a balanced truncation framework through conformal maps to handle poles in general domains. The major difference from classical balanced truncation resides in the formulation of the Gramians. We show that these new Gramians can still be computed by solving modified Lyapunov equations for specific conformal maps. A numerical algorithm to perform balanced truncation with conformal maps is developed and is tested on three numerical examples, namely a heat model, the Schr\"odinger equation, and the undamped linear wave equation, the latter two having spectra on the imaginary axis.
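For reference, a sketch of classical square-root balanced truncation for a stable system is given below; the paper's contribution replaces the two Gramians with solutions of modified Lyapunov equations associated with a conformal map, which is not reproduced here. The toy system is synthetic.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable system x' = Ax + Bu, y = Cx."""
    P = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
    S = cholesky(P, lower=True)                     # P = S S^T
    R = cholesky(Q, lower=True)                     # Q = R R^T
    U, sig, Vt = svd(R.T @ S)                       # Hankel singular values in sig
    sr = np.sqrt(sig[:r])
    W = R @ U[:, :r] / sr                           # left projection basis
    V = S @ Vt[:r, :].T / sr                        # right projection basis
    return W.T @ A @ V, W.T @ B, C @ V              # reduced (Ar, Br, Cr)

# toy stable system: shift a random matrix into the open left half-plane
rng = np.random.default_rng(0)
n = 6
M = rng.normal(size=(n, n))
A = M - (np.linalg.norm(M, 2) + 1.0) * np.eye(n)
B = rng.normal(size=(n, 2))
C = rng.normal(size=(2, n))
Ar, Br, Cr = balanced_truncation(A, B, C, r=2)
print(Ar.shape, Br.shape, Cr.shape)
```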

The multidimensional knapsack problem (MKP) is an NP-hard combinatorial optimization problem whose solution is a subset of items of maximum total profit that does not violate the capacity constraints. Due to its hardness, large-scale MKP instances are usually a target for metaheuristics, a context in which effective feasibility-maintenance strategies are crucial. In 1998, Chu and Beasley proposed an effective heuristic repair that remains relevant for recent metaheuristics. However, due to its deterministic nature, the diversity of solutions this heuristic provides is insufficient for long runs, and the search for new solutions eventually stalls. This paper proposes an efficiency-based randomization strategy for the heuristic repair that increases the variability of the repaired solutions without deteriorating their quality and improves the overall results.
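The sketch below illustrates the general shape of a Chu-and-Beasley-style repair with an efficiency-biased random drop order; the efficiency measure and the randomization scheme are placeholders and may differ from the strategy proposed in the paper.

```python
import numpy as np

def repair(x, profits, weights, capacities, rng):
    """Repair a 0/1 vector x for an MKP with constraint matrix weights (m x n)."""
    x = x.copy().astype(int)
    eff = profits / weights.sum(axis=0)           # simple efficiency measure (placeholder)
    # DROP phase: remove items until feasible, sampling low-efficiency items more often
    while np.any(weights @ x > capacities):
        sel = np.flatnonzero(x)
        p = 1.0 / eff[sel]
        p /= p.sum()
        x[rng.choice(sel, p=p)] = 0
    # ADD phase: greedily insert items by decreasing efficiency while feasibility holds
    for j in np.argsort(-eff):
        if x[j] == 0 and np.all(weights @ x + weights[:, j] <= capacities):
            x[j] = 1
    return x

rng = np.random.default_rng(0)
n, m = 12, 3
profits = rng.integers(10, 100, n)
weights = rng.integers(1, 20, (m, n))
capacities = weights.sum(axis=1) * 0.4
x0 = rng.integers(0, 2, n)                        # possibly infeasible candidate
print(repair(x0, profits, weights, capacities, rng))
```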

This research conducts a thorough reevaluation of seismic fragility curves by utilizing ordinal regression models, moving away from the commonly used log-normal distribution function known for its simplicity. It explores the nuanced differences and interrelations among various ordinal regression approaches, including Cumulative, Sequential, and Adjacent Category models, alongside their enhanced versions that incorporate category-specific effects and variance heterogeneity. The study applies these methodologies to empirical bridge damage data from the 2008 Wenchuan earthquake, using both frequentist and Bayesian inference methods, and conducts model diagnostics using surrogate residuals. The analysis covers eleven models, from basic to those with heteroscedastic extensions and category-specific effects. Through rigorous leave-one-out cross-validation, the Sequential model with category-specific effects emerges as the most effective. The findings underscore a notable divergence in damage probability predictions between this model and conventional Cumulative probit models, advocating for a substantial transition towards more adaptable fragility curve modeling techniques that enhance the precision of seismic risk assessments. In conclusion, this research not only readdresses the challenge of fitting seismic fragility curves but also advances methodological standards and expands the scope of seismic fragility analysis. It advocates for ongoing innovation and critical reevaluation of conventional methods to advance the predictive accuracy and applicability of seismic fragility models within the performance-based earthquake engineering domain.
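As a baseline for comparison, the sketch below fits a plain cumulative probit model to synthetic damage-state data with statsmodels and converts the fitted category probabilities into fragility (exceedance) curves; the Sequential and category-specific models favored by the study are not shown, and all data are synthetic.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
im = rng.uniform(-1.0, 1.0, 400)                       # log intensity measure
latent = 2.0 * im + rng.normal(size=400)               # latent damage propensity
ds = np.digitize(latent, bins=[-1.0, 0.0, 1.0])        # ordered damage states 0..3
ds = pd.Series(pd.Categorical(ds, ordered=True))

mod = OrderedModel(ds, im[:, None], distr="probit")    # cumulative probit baseline
res = mod.fit(method="bfgs", disp=False)

# fragility curves: P(DS >= k | IM) from the fitted per-category probabilities
grid = np.linspace(-1.0, 1.0, 5)[:, None]
probs = np.asarray(res.predict(grid))                  # shape (5, 4)
exceedance = 1.0 - np.cumsum(probs, axis=1)[:, :-1]    # P(DS >= 1), P(DS >= 2), P(DS >= 3)
print(np.round(exceedance, 3))
```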

In this paper, we develop a new and effective approach to nonparametric quantile regression that accommodates ultrahigh-dimensional data arising from spatio-temporal processes. This approach proves advantageous in staving off the computational challenges that hinder existing nonparametric quantile regression methods when the number of predictors is much larger than the available sample size. We investigate conditions under which estimation is feasible and of good overall quality, and we obtain sharp approximations that we employ to devise statistical inference methodology. This includes simultaneous confidence intervals and tests of hypotheses, whose asymptotics rest on a non-trivial functional central limit theorem tailored to martingale differences. Additionally, we provide finite-sample results through various simulations which, accompanied by an illustrative application to real-world-like data on electricity demand, offer guarantees on the performance of the proposed methodology.
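The sketch below only illustrates the basic building block, namely the check (pinball) loss minimized over a small basis expansion; it is not the paper's spatio-temporal, ultrahigh-dimensional procedure, and the basis and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Pinball loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def fit_quantile(X, y, tau):
    """Minimize the mean check loss over a linear basis expansion."""
    obj = lambda beta: np.mean(check_loss(y - X @ beta, tau))
    return minimize(obj, np.zeros(X.shape[1]), method="Nelder-Mead").x

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=200)
X = np.column_stack([np.ones_like(t), t, t**2, t**3])   # crude polynomial basis
beta_med = fit_quantile(X, y, tau=0.5)                  # conditional median
beta_hi = fit_quantile(X, y, tau=0.9)                   # upper conditional quantile
print(beta_med, beta_hi)
```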

This work proposes a novel variational approximation of partial differential equations on moving geometries determined by explicit boundary representations. The benefit of the proposed formulation is the ability to handle large displacements of explicitly represented domain boundaries without generating body-fitted meshes or resorting to remeshing techniques. For the space discretization, we use a background mesh and an unfitted method that relies on integration over cut cells only; the required intersections are computed with clipping algorithms. To deal with the mesh movement, we pull back the equations to a reference configuration (the spatial mesh at the initial time slab times the time interval) that is constant in time. In this way, the geometrical intersection algorithm is only required in 3D, another key property of the proposed scheme. At the end of each time slab, we compute the deformed mesh, intersect the deformed boundary with the background mesh, and apply an exact transfer operator between meshes to compute the jump terms in the discontinuous Galerkin time integration; this transfer is also computed with geometrical intersection algorithms. We demonstrate the applicability of the method to fluid problems around rotating (2D and 3D) geometries described by oriented boundary meshes, and we provide a set of numerical experiments that show the optimal convergence of the method.
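The toy example below uses shapely (an assumption; the paper relies on its own geometric intersection algorithms, in 3D) to illustrate the cut-cell clipping step in 2D: an explicitly represented boundary is intersected with a structured background mesh at two positions of a moving domain, and the areas of the cut cells used for integration are collected.

```python
from shapely.geometry import Point, box

def cut_cells(center, radius, n_cells=8):
    """Return the areas of background cells that are genuinely cut by the boundary."""
    disk = Point(center).buffer(radius)             # explicit boundary (polygonal disk)
    h = 1.0 / n_cells
    areas = {}
    for i in range(n_cells):
        for j in range(n_cells):
            cell = box(i * h, j * h, (i + 1) * h, (j + 1) * h)
            a = cell.intersection(disk).area
            if 0.0 < a < cell.area:                 # keep cut cells only
                areas[(i, j)] = a
    return areas

# the boundary at two "time-slab" positions of a moving domain
print(len(cut_cells((0.4, 0.5), 0.25)), len(cut_cells((0.6, 0.5), 0.25)))
```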

We propose an actor-critic algorithm for a family of complex problems arising in algebraic statistics and discrete optimization. The core task is to produce a sample from a finite subset of the non-negative integer lattice defined by a high-dimensional polytope. We translate the problem into a Markov decision process and devise an actor-critic reinforcement learning (RL) algorithm to learn a set of good moves that can be used for sampling. We prove that the actor-critic algorithm converges to an approximately optimal sampling policy. To tackle complexity issues that typically arise in these sampling problems, and to allow the RL to function at scale, our solution strategy takes three steps: decomposing the starting point of the sample, using RL on each induced subproblem, and reconstructing to obtain a sample in the original polytope. In this setup, the proof of convergence applies to each subproblem in the decomposition. We test the method in two regimes. In statistical applications, a high-dimensional polytope arises as the support set for the reference distribution in a model/data fit test for a broad family of statistical models for categorical data. We demonstrate how RL can be used for model fit testing problems for data sets for which traditional MCMC samplers converge too slowly due to problem size and sparsity structure. To test the robustness of the algorithm and explore its generalization properties, we apply it to synthetically generated data of various sizes and sparsity levels.
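The sketch below sets up only the sampling environment, a symmetric-proposal Metropolis walk on the fiber {x >= 0 integer : A x = b} driven by a hand-picked move; learning good moves with an actor-critic and the decomposition step are the paper's contribution and are not reproduced here.

```python
import numpy as np

def fiber_walk(A, b, x0, moves, steps, rng):
    """Metropolis walk on {x >= 0 integer : A x = b} using +/- copies of the given moves."""
    x = x0.copy()
    samples = []
    for _ in range(steps):
        m = moves[rng.integers(len(moves))] * rng.choice([-1, 1])
        y = x + m
        if np.all(y >= 0):                 # reject moves that leave the polytope
            x = y                          # A y = b holds because A m = 0
        samples.append(x.copy())
    return np.array(samples)

# toy 2x2 contingency-table fiber with fixed row and column sums (A x = b)
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0]])
x0 = np.array([2, 1, 1, 3])
b = A @ x0
moves = [np.array([1, -1, -1, 1])]         # basic 2x2 swap move in the kernel of A
samples = fiber_walk(A, b, x0, moves, 1000, np.random.default_rng(0))
print(samples[-5:])
```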

This paper deals with the algorithmic aspects of solving feasibility problems of semidefinite programming (SDP), also known as linear matrix inequalities (LMI). Since in some SDP instances all feasible solutions have irrational entries, numerical solvers that work with rational numbers can only find an approximate solution. We study the following question: is it possible to certify feasibility of a given SDP using an approximate solution that is sufficiently close to some exact solution? Existing approaches assume that there exist rational feasible solutions (and use techniques such as rounding and lattice reduction algorithms). We propose an alternative approach that does not need this assumption. More specifically, we show how to construct a system of polynomial equations whose set of real solutions is guaranteed to contain an isolated correct solution (assuming that the target exact solution is of maximum rank). This makes it possible, in particular, to use algorithms from real algebraic geometry for solving systems of polynomial equations, yielding a hybrid (symbolic-numerical) method for SDPs. We experimentally compare it with the purely symbolic method of [Henrion, Naldi, Safey El Din; SIAM J. Optim., 2016]; the hybrid method was able to certify feasibility of many SDP instances on which that method failed. We argue that our approach may have other uses, such as refining an approximate solution using methods of numerical algebraic geometry for systems of polynomial equations.
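As a generic illustration (not the paper's exact construction), the sketch below encodes feasibility of a toy pencil A0 + x1 A1 via the polynomial system A(x1) = L L^T with L lower triangular, the kind of system one could hand to real-algebraic-geometry solvers in a neighborhood of an approximate numerical solution; note that the resulting factor has irrational entries, as in the motivating examples.

```python
import sympy as sp

x1 = sp.Symbol("x1")
l11, l21, l22 = sp.symbols("l11 l21 l22")

A0 = sp.Matrix([[2, 0], [0, 0]])
A1 = sp.Matrix([[0, 1], [1, 1]])
L = sp.Matrix([[l11, 0], [l21, l22]])

# entrywise polynomial equations expressing A0 + x1*A1 = L * L^T (lower triangle suffices)
system = A0 + x1 * A1 - L * L.T
eqs = [sp.expand(system[i, j]) for i in range(2) for j in range(i + 1)]
print(eqs)
print(sp.solve(eqs + [x1 - 1], [x1, l11, l21, l22]))  # toy: pin x1 = 1 and factor A(1)
```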

Deep learning is usually described as an experiment-driven field and is under continuous criticism for lacking theoretical foundations. This problem has been partially addressed by a large volume of literature that has so far not been well organized. This paper reviews and organizes the recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity- and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic differential equations and their dynamical systems for modelling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning, partially inspired by Bayesian inference; (3) the geometrical structures of the loss landscape that drive the trajectories of these dynamical systems; (4) the roles of over-parameterization of deep neural networks from both positive and negative perspectives; (5) theoretical foundations of several special structures in network architectures; and (6) the increasingly intensive concerns about ethics and security and their relationship with generalizability.
