
We propose a new variant of Chubanov's method for solving the feasibility problem over the symmetric cone, extending Roos's method (2018) for the feasibility problem over the nonnegative orthant. The proposed method considers a feasibility problem associated with a norm induced by the maximum eigenvalue of an element and uses a rescaling that focuses on the upper bound for the sum of eigenvalues of any feasible solution to the problem. Its computational bound is (i) equivalent to that of Roos's original method (2018) and superior to that of Louren\c{c}o et al.'s method (2019) when the symmetric cone is the nonnegative orthant, (ii) superior to that of Louren\c{c}o et al.'s method (2019) when the symmetric cone is a Cartesian product of second-order cones, (iii) equivalent to that of Louren\c{c}o et al.'s method (2019) when the symmetric cone is the simple positive semidefinite cone, and (iv) superior to that of Pena and Soheili's method (2017) for any simple symmetric cone under the feasibility assumption imposed in Pena and Soheili's method (2017). We also conduct numerical experiments comparing the performance of our method with that of existing methods on three types of generated instances: strongly feasible but ill-conditioned instances, weakly feasible instances, and infeasible instances. On all of these instances, the proposed method is more efficient than the existing methods in terms of both accuracy and execution time.
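To make the rescaling idea concrete, here is a minimal numpy sketch of the classical Chubanov-style scheme for the nonnegative-orthant case (find $x > 0$ with $Ax = 0$) that Roos's method refines; it is not the paper's symmetric-cone variant, and the iteration budgets, tolerances, and function names are our own.

```python
import numpy as np

def basic_procedure(P, n, max_iters=5000):
    """Inner loop of a Chubanov-style scheme (schematic).

    P is the orthogonal projector onto null(A). Returns
    ('feasible', z) with z = P @ y > 0 componentwise, or ('cut', k),
    meaning every feasible x with 0 <= x <= 1 satisfies x_k <= 1/2,
    or ('budget', None) if the iteration limit is hit.
    """
    y = np.full(n, 1.0 / n)                    # point in the unit simplex
    threshold = 1.0 / (2.0 * n ** 1.5)         # standard cut criterion
    for _ in range(max_iters):
        z = P @ y
        if np.all(z > 0):
            return "feasible", z
        if np.linalg.norm(z) <= threshold:
            # For feasible x in [0,1]^n: x.z = x.y >= y_k * x_k, and
            # x.z <= sqrt(n) * ||z||, so x_k <= 1/2 for k = argmax(y).
            return "cut", int(np.argmax(y))
        k = int(np.argmin(z))                  # a coordinate with z_k <= 0
        pk = P[:, k]
        alpha = z @ (z - pk) / ((z - pk) @ (z - pk))
        e_k = np.zeros(n)
        e_k[k] = 1.0
        y = (1.0 - alpha) * y + alpha * e_k    # stays in the simplex
    return "budget", None

def chubanov(A, max_rounds=50):
    """Outer rescaling loop (schematic): x > 0 with A @ x = 0, or None."""
    n = A.shape[1]
    d = np.ones(n)                             # accumulated column scalings
    for _ in range(max_rounds):
        Ad = A * d                             # rescaled system A @ diag(d)
        P = np.eye(n) - Ad.T @ np.linalg.solve(Ad @ Ad.T, Ad)
        status, out = basic_procedure(P, n)
        if status == "feasible":
            return d * out                     # A @ (d * out) = Ad @ out = 0
        if status != "cut":
            return None
        d[out] *= 0.5                          # stretch the cut coordinate
    return None

A = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
print(chubanov(A))                             # a positive multiple of (1, 1, 1)
```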

Related Content

Verification of discrete-time or continuous-time dynamical systems over the reals is known to be undecidable. It is, however, known that undecidability does not hold for various classes of systems: if robustness is defined as the property that the reachability relation is stable under infinitesimal perturbation, then the reachability relation is decidable. In other words, undecidability implies sensitivity to infinitesimal perturbation, a property usually not expected in systems considered in practice, and hence it can be seen (somewhat informally) as an artefact of a theory that always assumes exactness. In a similar vein, it is known that, while undecidability holds for logical formulas over the reals, it does not hold when considering $\delta$-decidability: one must only determine whether a property is true or $\delta$-far from being true. We first extend the previous statements to a theory for general (discrete-time, continuous-time, and even hybrid) dynamical systems, and we relate the two approaches. We also relate robustness to certain geometric properties of the reachability relation. But mainly, when a system is robust, it then makes sense to quantify at which level of perturbation. We prove that assuming robustness to polynomial perturbations on precision leads to reachability verifiable in the complexity class PSPACE, and even to a characterization of this complexity class. We prove that assuming robustness to polynomial perturbations on the time or length of trajectories leads to similar statements, but with PTIME. It was recently, and unexpectedly, shown that the length of a solution of a polynomial ordinary differential equation corresponds to a time of computation: PTIME corresponds to solutions of polynomial differential equations of polynomial length. Our results argue that the analogous resource is precision: space corresponds to the precision involved.

We derive limiting distributions of symmetrized estimators of scatter for which, instead of all $n(n-1)/2$ pairs of the $n$ observations, only $nd$ suitably chosen pairs are considered, $1 \le d < \lfloor n/2\rfloor$. It turns out that the resulting estimators are asymptotically equivalent to the original one whenever $d = d(n) \to \infty$, at arbitrarily slow speed. We also investigate the asymptotic properties for arbitrary fixed $d$. These considerations and numerical examples indicate that, for practical purposes, moderate fixed values of $d$ between, say, $10$ and $20$ already yield estimators which are computationally feasible and rather close to the original ones.
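As a concrete illustration of the pair-selection idea, the sketch below builds the $nd$ pairs from $d$ cyclic lags and uses the plain sample covariance of the pairwise differences as the simplest symmetrized scatter estimator; the lag scheme and the choice of estimator are our assumptions, not necessarily the paper's.

```python
import numpy as np

def symmetrized_cov(X, d):
    """Scatter estimate from n*d lagged pairs instead of all n(n-1)/2.

    Pairs are (X[i], X[(i+j) % n]) for j = 1..d; the estimator is the
    sample covariance of the pairwise differences (for Gaussian data
    Cov(X_i - X_j) = 2 * Sigma, hence the factor 1/2).
    """
    diffs = np.concatenate([X - np.roll(X, -j, axis=0) for j in range(1, d + 1)])
    return diffs.T @ diffs / (2.0 * len(diffs))

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 3)) @ np.diag([1.0, 2.0, 0.5])
print(np.round(symmetrized_cov(X, d=15), 2))   # close to diag(1, 4, 0.25)
```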

Many popular models from the networks literature can be viewed through a common lens of contingency tables on network dyads, resulting in \emph{log-linear ERGMs}: exponential family models for random graphs whose sufficient statistics are linear functions of the dyads. We propose a new model in this family, the \emph{$p_1$-SBM}, which combines node and group effects common in network formation mechanisms. In particular, it is a generalization of several well-known ERGMs, including the stochastic blockmodel for undirected graphs, its degree-corrected version, and the directed $p_1$ model without group structure. We frame the problem of testing model fit for the log-linear ERGM class through an exact conditional test whose $p$-value can be approximated efficiently in networks of both small and moderately large sizes. The sampling methods we build rely on a dynamic adaptation of Markov bases. We use quick estimation algorithms adapted from the contingency table literature and effective sampling methods rooted in graph theory and algebraic statistics. The performance and scalability of the method are demonstrated on two data sets from biology: the connectome of \emph{C. elegans} and the interactome of \emph{Arabidopsis thaliana}. These two networks -- a neuronal network and a protein-protein interaction network -- have been popular examples in the network science literature. Our work provides a model-based approach to studying them.
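For intuition about the exact conditional test, here is a toy Monte Carlo version for the simplest case: conditioning on an undirected degree sequence, where the classical Markov move is the double edge swap. The paper's dynamic adaptation of Markov bases is more sophisticated; the statistic, chain length, and test graph below are illustrative assumptions.

```python
import numpy as np

def double_edge_swap(A, rng):
    """One degree-preserving move on a simple undirected graph (in place).
    Replacing edges {a,b},{c,d} by {a,d},{c,b} fixes every degree."""
    edges = np.transpose(np.triu(A, 1).nonzero())
    (a, b), (c, d) = edges[rng.choice(len(edges), size=2, replace=False)]
    if len({a, b, c, d}) == 4 and not A[a, d] and not A[c, b]:
        A[a, b] = A[b, a] = A[c, d] = A[d, c] = 0
        A[a, d] = A[d, a] = A[c, b] = A[b, c] = 1

def conditional_pvalue(A, stat, n_samples=2000, burn=500, rng=None):
    """Monte Carlo p-value of `stat` over graphs with the observed degrees.
    For exponential families, the conditional law given the sufficient
    statistics is uniform on this fiber, which the swap chain targets."""
    rng = rng or np.random.default_rng(0)
    observed, B, hits = stat(A), A.copy(), 0
    for t in range(burn + n_samples):
        double_edge_swap(B, rng)
        if t >= burn:
            hits += stat(B) >= observed
    return hits / n_samples

triangles = lambda A: np.trace(A @ A @ A) / 6   # goodness-of-fit statistic

rng = np.random.default_rng(1)
A = (rng.random((30, 30)) < 0.2).astype(int)
A = np.triu(A, 1); A = A + A.T                  # simple symmetric graph
print(conditional_pvalue(A, triangles))
```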

This paper presents a new discretization error quantification method for the numerical integration of ordinary differential equations. The error is modelled using the Wishart distribution, which enables us to capture the correlation between variables. Error quantification is achieved by solving an optimization problem under order constraints on the covariance matrices. An algorithm for the optimization problem is also established in a slightly broader context.
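The abstract is terse, so the snippet below only illustrates the modelling idea rather than the paper's method: treating the covariance of the per-step discretization error as Wishart-distributed, so that correlations between the ODE's state variables are captured. The dimension, degrees of freedom, and scale matrix are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)

# Model: error covariance Sigma ~ Wishart(V, df), then the step error
# e ~ N(0, Sigma).  The Wishart draw is a random positive semidefinite
# matrix, so cross-variable error correlations arise naturally.
dim, df = 2, 5
V = 1e-6 * np.eye(dim)                          # scale matrix (assumed)
Sigma = wishart(df=df, scale=V).rvs(random_state=rng)
errors = rng.multivariate_normal(np.zeros(dim), Sigma, size=1000)
print(np.round(np.corrcoef(errors.T), 2))       # nontrivial correlation
```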

Given only a few observed entries from a low-rank matrix $X$, matrix completion is the problem of imputing the missing entries, and it formalizes a wide range of real-world settings that involve estimating missing data. However, when there are too few observed entries to complete the matrix, what other aspects of the underlying matrix can be reliably recovered? We study one such problem setting, that of "one-sided" matrix completion, where our goal is to recover the right singular vectors of $X$, even in the regime where recovering the left singular vectors is impossible, which arises when there are more rows than columns and very few observations. We propose a natural algorithm that involves imputing the missing values of the matrix $X^TX$ and show that even with only two observations per row in $X$, we can provably recover $X^TX$ as long as we have at least $\Omega(r^2 d \log d)$ rows, where $r$ is the rank and $d$ is the number of columns. We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing. In these settings, our algorithm substantially outperforms standard matrix completion and a variety of direct factorization methods.
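To see why two observations per row can suffice, here is a sketch of the natural estimator the abstract describes: rescale the inner products of the zero-filled observed matrix so that each entry of $X^TX$ is estimated without bias. The uniform Bernoulli sampling model and the exact debiasing constants are our assumptions.

```python
import numpy as np

def estimate_gram(Y, p):
    """Unbiased estimate of X.T @ X when each entry of X is observed
    independently with probability p and Y is X with missing entries
    zeroed.  A product Y[:, j] @ Y[:, k] keeps only terms where both
    entries were seen: probability p**2 off the diagonal, p on it."""
    G = Y.T @ Y / p**2
    np.fill_diagonal(G, np.diag(Y.T @ Y) / p)
    return G

rng = np.random.default_rng(0)
n, d, r = 20000, 20, 3
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))
p = 2.0 / d                                   # about two observations per row
Y = X * (rng.random((n, d)) < p)
G = estimate_gram(Y, p)
print(np.linalg.norm(G - X.T @ X) / np.linalg.norm(X.T @ X))  # small
# Right singular vectors of X = top eigenvectors of the estimated Gram:
_, V = np.linalg.eigh(G)                      # last r columns estimate them
```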

For a graph $G$, a subset $S \subseteq V(G)$ is called a \emph{resolving set} if for any two vertices $u,v \in V(G)$, there exists a vertex $w \in S$ such that $d(w,u) \neq d(w,v)$. The {\sc Metric Dimension} problem takes as input a graph $G$ and a positive integer $k$, and asks whether there exists a resolving set of size at most $k$. This problem was introduced in the 1970s and is known to be \NP-hard~[GT~61 in Garey and Johnson's book]. In the realm of parameterized complexity, Hartung and Nichterlein~[CCC~2013] proved that the problem is \W[2]-hard when parameterized by the natural parameter $k$. They also observed that it is \FPT\ when parameterized by the vertex cover number and asked about its complexity under \emph{smaller} parameters, in particular the feedback vertex set number. We answer this question by proving that {\sc Metric Dimension} is \W[1]-hard when parameterized by the combined parameter feedback vertex set number plus pathwidth. This also improves upon the result of Bonnet and Purohit~[IPEC 2019], which states that the problem is \W[1]-hard when parameterized by pathwidth. On the positive side, we show that {\sc Metric Dimension} is \FPT\ when parameterized by either the distance to cluster or the distance to co-cluster, both of which are smaller parameters than the vertex cover number.
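To make the definitions concrete, here is a brute-force checker and exhaustive search for the metric dimension; it runs in exponential time and is unrelated to the parameterized algorithms in the paper, but it pins down what a resolving set is (assumes a connected graph given as an adjacency-list dict).

```python
from itertools import combinations
from collections import deque

def bfs_dist(adj, src):
    """All shortest-path distances from src (unweighted BFS)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_resolving(adj, S):
    """S resolves G iff the distance vectors to S are pairwise distinct."""
    tables = [bfs_dist(adj, w) for w in S]
    sigs = {tuple(t[v] for t in tables) for v in adj}
    return len(sigs) == len(adj)

def metric_dimension(adj):
    """Smallest resolving set by exhaustive search (exponential time)."""
    for k in range(1, len(adj) + 1):
        for S in combinations(adj, k):
            if is_resolving(adj, S):
                return set(S)

# A 5-cycle has metric dimension 2:
C5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(metric_dimension(C5))
```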

We present a novel approach for solving the time-dependent Schr\"{o}dinger equation (TDSE). The method we propose converts the TDSE to an equivalent Volterra integral equation; introducing a global Lagrange interpolation of the integrand transforms the equation to a linear system, which is then solved iteratively. In this paper, we derive the method, explore its performance on several examples, and discuss the corresponding numerical details.
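The Volterra form being iterated is $\psi(t) = \psi(0) - i\int_0^t H(s)\psi(s)\,ds$ (with $\hbar = 1$ and, here, a time-independent toy $H$). The sketch below uses plain fixed-point iteration with trapezoidal quadrature as a simplified stand-in for the paper's global Lagrange interpolation of the integrand.

```python
import numpy as np

# TDSE (hbar = 1): i psi' = H psi  <=>  psi(t) = psi(0) - 1j * int_0^t H psi ds.
H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)  # toy 2-level Hamiltonian
t = np.linspace(0.0, 1.0, 201)
psi0 = np.array([1.0, 0.0], dtype=complex)

psi = np.tile(psi0, (len(t), 1))             # initial guess: constant in time
for _ in range(100):                         # Picard / fixed-point iterations
    integrand = psi @ H.T                    # row k holds H @ psi(t_k)
    steps = 0.5 * np.diff(t)[:, None] * (integrand[1:] + integrand[:-1])
    integral = np.vstack([np.zeros((1, 2)), np.cumsum(steps, axis=0)])
    psi_new = psi0 - 1j * integral           # cumulative trapezoidal rule
    if np.max(np.abs(psi_new - psi)) < 1e-12:
        break
    psi = psi_new

print(np.linalg.norm(psi[-1]))               # stays ~1: H is Hermitian
```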

The performance of distributed storage systems deployed on wide-area networks can be improved using weighted (majority) quorum systems instead of their regular variants due to the heterogeneous performance of the nodes. A significant limitation of weighted majority quorum systems lies in their dependence on static weights, which are inappropriate for systems subject to the dynamic nature of networked environments. To overcome this limitation, such quorum systems require mechanisms for reassigning weights over time according to performance variations. We study the problem of node weight reassignment in asynchronous systems with a static set of servers and a static fault threshold. We prove that solving this problem is as hard as solving consensus, i.e., it cannot be implemented in asynchronous failure-prone distributed systems. This result is somewhat counter-intuitive, given recent results showing that two related problems -- replica set reconfiguration and asset transfer -- can be solved in asynchronous systems. Inspired by these problems, we present two versions of the problem that restrict the weights of servers and the way they are reassigned. We propose a protocol implementing one of the restricted problems in asynchronous systems. As a case study, we construct a dynamic-weighted atomic storage based on this protocol. We also discuss the relationship between the weight reassignment and asset transfer problems and compare our dynamic-weighted atomic storage with reconfigurable atomic storage.
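The intuition for why weighted majorities preserve safety, and why reassignment is delicate, fits in a few lines: any two sets each holding a strict majority of the total weight must intersect, but changing the weights changes which sets qualify. The server names and weights below are illustrative assumptions.

```python
def is_quorum(servers, weight):
    """A weighted majority quorum: strictly more than half the total weight."""
    return sum(weight[s] for s in servers) > sum(weight.values()) / 2.0

# Two strict weighted majorities always intersect (else their weights
# would sum to more than the total) -- the intersection property that
# atomic storage needs, independent of how weights are assigned.
weight = {"s1": 3, "s2": 2, "s3": 2, "s4": 1, "s5": 1}   # assumed weights
Q1, Q2 = {"s1", "s2"}, {"s1", "s4", "s5"}
assert is_quorum(Q1, weight) and is_quorum(Q2, weight)
assert Q1 & Q2                       # safety: the quorums overlap on s1

# Reassigning weights changes which sets are quorums, which is why doing
# it consistently in an asynchronous system is the crux of the problem.
weight["s1"], weight["s3"] = 1, 4    # e.g., s3 became the fastest replica
print(is_quorum(Q1, weight))         # False: Q1 is no longer a quorum
```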

In many practical applications including remote sensing, multi-task learning, and multi-spectrum imaging, data are described as a set of matrices sharing a common column space. We consider the joint estimation of such matrices from their noisy linear measurements. We study a convex estimator regularized by a pair of matrix norms. The measurement model corresponds to block-wise sensing and the reconstruction is possible only when the total energy is well distributed over blocks. The first norm, which is the maximum-block-Frobenius norm, favors such a solution. This condition is analogous to the notion of low-spikiness in matrix completion or column-wise sensing. The second norm, which is a tensor norm on a pair of suitable Banach spaces, induces low-rankness in the solution together with the first norm. We demonstrate that the joint estimation provides a significant gain over the individual recovery of each matrix when the number of matrices sharing a column space and the ambient dimension of the shared column space are large relative to the number of columns in each matrix. The convex estimator is cast as a semidefinite program and an efficient ADMM algorithm is derived. The empirical behavior of the convex estimator is illustrated using Monte Carlo simulations and recovery performance is compared to existing methods in the literature.
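The first regularizer is simple to state concretely: the maximum over blocks of the Frobenius norm. The toy computation below shows how penalizing it discourages the total energy from concentrating in one block (the low-spikiness analogue mentioned above); the block shapes and values are arbitrary assumptions.

```python
import numpy as np

def max_block_frobenius(Xs):
    """Maximum-block-Frobenius norm of a collection of matrix blocks:
    the largest Frobenius norm over the blocks.  Penalizing it keeps
    the total energy from concentrating on a single block."""
    return max(np.linalg.norm(X, "fro") for X in Xs)

rng = np.random.default_rng(0)
balanced = [rng.standard_normal((4, 4)) for _ in range(5)]
spiky = [np.zeros((4, 4)) for _ in range(4)] + [10 * np.eye(4)]
for Xs in (balanced, spiky):
    total = np.sqrt(sum(np.linalg.norm(X, "fro") ** 2 for X in Xs))
    print(max_block_frobenius(Xs) / total)   # near 1/sqrt(5) vs. exactly 1
```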

While existing work in robust deep learning has focused on small pixel-level $\ell_p$ norm-based perturbations, this may not account for perturbations encountered in several real-world settings. In many such cases, although test data might not be available, broad specifications about the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup where robustness is expected over an unseen test domain that is not i.i.d. but deviates from the training domain. While this deviation may not be exactly known, its broad characterization is specified a priori in terms of attributes. We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attribute space, without having access to the data from the test domain. Our adversarial training solves a min-max optimization problem, with the inner maximization generating adversarial perturbations and the outer minimization finding model parameters by optimizing the loss on the adversarial perturbations generated from the inner maximization. We demonstrate the applicability of our approach to three types of naturally occurring perturbations -- object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations. We demonstrate the usefulness of the proposed approach by showing the robustness gains of deep neural networks trained using our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset.
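A schematic of the min-max loop over an attribute space, taking rotation angle as the known attribute: the inner maximization is a worst-of-$k$ search over angles (a stand-in for the paper's learned sample generator, since rotation is not differentiable in the angle here), and the outer minimization is plain SGD on a linear classifier. The dataset, model, and hyperparameters are all assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def loss_and_grad(w, x, y):
    """Logistic loss and gradient for a linear classifier (labels +-1)."""
    z = y * (x @ w)
    return np.log1p(np.exp(-z)), -y * x / (1.0 + np.exp(z))

def worst_rotation(w, img, y, max_deg=30.0, k=8, rng=None):
    """Inner maximization: worst-of-k random angles in the attribute range."""
    rng = rng or np.random.default_rng(0)
    angles = rng.uniform(-max_deg, max_deg, size=k)
    candidates = [rotate(img, a, reshape=False).ravel() for a in angles]
    return max(candidates, key=lambda x: loss_and_grad(w, x, y)[0])

# Outer minimization: SGD on the adversarially rotated samples.
rng = np.random.default_rng(0)
imgs = rng.random((100, 8, 8))            # stand-in dataset (assumed)
labels = np.where(imgs.mean(axis=(1, 2)) > 0.5, 1, -1)
w, lr = np.zeros(64), 0.1
for epoch in range(5):
    for img, y in zip(imgs, labels):
        x_adv = worst_rotation(w, img, y, rng=rng)
        _, g = loss_and_grad(w, x_adv, y)
        w -= lr * g                        # train on the worst-case sample
```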
