
We study a class of bilevel integer programs with second-order cone constraints at the upper level and a convex quadratic objective and linear constraints at the lower level. We develop disjunctive cuts to separate bilevel-infeasible points using a second-order-cone-based cut-generating procedure. To the best of our knowledge, this is the first time disjunctive cuts have been studied in the context of discrete bilevel optimization. Using these disjunctive cuts, we establish a branch-and-cut algorithm for the problem class we study, and a cutting-plane method for the problem variant with only binary variables. Our computational study demonstrates that both approaches outperform a state-of-the-art generic solver for mixed-integer bilevel linear programs, applied to a version of our test instances in which the non-linearities are linearized in a McCormick fashion.
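For reference, the McCormick linearization mentioned above works as follows: a bilinear term $w = xy$ with bounds $x \in [x^L, x^U]$ and $y \in [y^L, y^U]$ is replaced by four linear inequalities (this is the textbook envelope, stated here for context rather than taken from the paper):

$$\begin{aligned} w &\ge x^L y + x y^L - x^L y^L, & \qquad w &\ge x^U y + x y^U - x^U y^U, \\ w &\le x^U y + x y^L - x^U y^L, & \qquad w &\le x^L y + x y^U - x^L y^U. \end{aligned}$$

When one of the two variables is binary, these inequalities model the product exactly; otherwise they give a convex relaxation.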


We develop a family of cut finite element methods (CutFEM) of different orders, based on the discontinuous Galerkin framework, for hyperbolic conservation laws with stationary interfaces in one and two space dimensions, and with moving interfaces in one space dimension. Interface conditions are imposed weakly, in such a way that both conservation and stability are ensured. A CutFEM with discontinuous elements in space is developed and coupled to standard explicit time-stepping schemes for linear advection problems and for the acoustic wave problem with stationary interfaces. For moving interfaces, we propose a space-time CutFEM based on discontinuous elements in both space and time for linear advection problems. We show that the proposed CutFEM are conservative and energy stable. For the stationary interface case, an a priori error estimate is proven. Numerical computations in one and two space dimensions support the analysis and, in addition, demonstrate that the proposed methods have the expected accuracy.
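For orientation, the model problem behind the linear advection case is $u_t + a u_x = 0$. Below is a minimal sketch (our own, with assumed grid and CFL parameters) of a standard first-order upwind scheme with explicit time stepping on a periodic domain; it illustrates the base discretization only, not the cut finite element construction:

```python
import numpy as np

# Hedged sketch: first-order upwind scheme for u_t + a*u_x = 0 on a periodic
# grid with explicit Euler time stepping -- the base model problem, not the
# CutFEM discretization from the abstract. Grid size, CFL number, and initial
# data are illustrative choices.
a = 1.0                                  # constant advection speed (a > 0)
nx = 200
dx = 1.0 / nx
dt = 0.5 * dx / abs(a)                   # CFL condition
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)      # smooth initial pulse

t, T = 0.0, 0.5
while t < T:
    u -= a * dt / dx * (u - np.roll(u, 1))   # upwind difference for a > 0
    t += dt
print(u.max())                           # pulse is advected (and slightly smeared)
```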

We introduce sparse representation methods, named stochastic pre-orthogonal adaptive Fourier decompositions 1 and 2 (SPOAFD1 and SPOAFD2), to solve the Dirichlet boundary value problem and the Cauchy initial value problem with random data. As the initial step in solving the stochastic boundary value problems, the sparse representation is applied to the random boundary data. Due to the semigroup property of the Poisson kernel and the heat kernel, each term of the resulting expansion can be lifted to compose a solution of the Dirichlet problem and the Cauchy initial value problem, respectively. The sparse representation gives rise to analytic as well as numerical solutions to these problems with high efficiency.
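As a small illustration of the lifting idea, the classical Poisson kernel for the unit disk extends boundary data $f$ harmonically into the interior. A hedged numerical sketch (the function name and quadrature are our illustrative choices, not the paper's method, which first expands $f$ into a sparse kernel series):

```python
import numpy as np

# Hedged sketch: harmonic extension of boundary data on the unit disk via the
# Poisson kernel, u(r, theta) = (1/2pi) * integral of P_r(theta - t) f(t) dt,
# with P_r(s) = (1 - r^2) / (1 - 2 r cos(s) + r^2), approximated by the
# trapezoidal rule on the periodic boundary.
def poisson_extension(f, r, theta, n_quad=2048):
    t = np.linspace(0.0, 2.0 * np.pi, n_quad, endpoint=False)
    kernel = (1.0 - r**2) / (1.0 - 2.0 * r * np.cos(theta - t) + r**2)
    return np.mean(kernel * f(t))        # mean absorbs the 1/(2 pi) factor

# Example: f(t) = cos(t); the exact harmonic extension is r*cos(theta).
print(poisson_extension(np.cos, r=0.5, theta=1.0), 0.5 * np.cos(1.0))
```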

In this paper we propose a deep learning-based numerical scheme for strongly coupled FBSDEs stemming from stochastic control. It is a modification of the deep BSDE method in which the initial value of the backward equation is not a free parameter, and in which the new loss function is the weighted sum of the cost of the control problem and a variance term that coincides with the mean squared error in the terminal condition. We show by a numerical example that a direct extension of the classical deep BSDE method to FBSDEs fails for a simple linear-quadratic control problem, and we motivate why the new method works. Under regularity and boundedness assumptions on the exact controls of the time-continuous and time-discrete control problems, we provide an error analysis for our method. We show empirically that the method converges for three different problems, one being the problem on which the direct extension of the deep BSDE method fails.
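Schematically, and in our own notation rather than the paper's, the modified loss has the form

$$L(\theta) \;=\; J(u^{\theta}) \;+\; \lambda\, \mathbb{E}\!\left[\big|Y_T^{\theta} - g(X_T^{\theta})\big|^2\right],$$

where $J$ is the cost of the control problem, $g$ is the terminal condition of the backward equation, and $\lambda$ is a weight; the terminal mismatch is penalized instead of being absorbed by a free initial value for the backward equation.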

Motivated by applications to the theory of rank-metric codes, we study the problem of estimating the number of common complements of a family of subspaces over a finite field in terms of the cardinality of the family and its intersection structure. We derive upper and lower bounds for this number, along with their asymptotic versions as the field size tends to infinity. We then use these bounds to describe the general behaviour of common complements with respect to sparseness and density, showing that the decisive property is whether or not the number of spaces to be complemented is negligible with respect to the field size. By specializing our results to matrix spaces, we obtain upper and lower bounds for the number of MRD codes in the rank metric. In particular, we answer an open question in coding theory, proving that MRD codes are sparse for all parameter sets as the field size grows, with only very few exceptions. We also investigate the density of MRD codes as their number of columns tends to infinity, obtaining a new asymptotic bound. Using properties of the Euler function from number theory, we then show that our bound improves on known results for most parameter sets. We conclude the paper by establishing general structural properties of the density function of rank-metric codes.
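For orientation, the baseline count for a single subspace is classical: a $k$-dimensional subspace $U \le \mathbb{F}_q^n$ has exactly

$$\#\{W \le \mathbb{F}_q^n : U \oplus W = \mathbb{F}_q^n\} \;=\; q^{k(n-k)}$$

complements (a standard fact, stated here for context rather than taken from the paper), so for a family of subspaces the question becomes how much the intersection structure cuts this number down.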

We prove new lower bounds on suitable competitive ratio measures for two relaxed online packing problems: online removable multiple knapsack, and the recently introduced online minimum peak appointment scheduling problem. The high-level objective in both problems is to pack arriving items of size at most 1 into bins of capacity 1 as efficiently as possible, but the exact formalizations differ, as sketched below. In the appointment scheduling problem, every item has to be assigned a position, which can be seen as a time interval during a workday of length 1. That is, items are not assigned to bins; only once all items have been processed is the optimal number of bins, subject to the chosen positions, determined, and this is the cost of the online algorithm. In the removable knapsack problem, by contrast, there is a fixed number of bins, packing an item consists only of choosing a particular bin for it, and the goal is to pack as valuable a subset of items as possible. In the latter problem it is possible to reject items, that is, deliberately not pack them, as well as to remove packed items at any later point in time, which adds flexibility to the problem.
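To make the appointment scheduling cost concrete: as we read the formalization above, once every item $i$ of size $s_i$ has been assigned a position $p_i$, it occupies the interval $[p_i, p_i + s_i]$, and the optimal number of bins equals the maximum number of intervals overlapping at any single time (intervals form an interval graph, so peak overlap equals the chromatic number). A hedged sketch, with function name and example data our own:

```python
# Hedged sketch: offline cost of a position assignment in the minimum peak
# appointment scheduling problem, computed as the maximum overlap depth of
# the chosen intervals via a sweep line over start/end events.
def peak_cost(items):                    # items: list of (position, size)
    events = sorted([(p, 1) for p, s in items] +
                    [(p + s, -1) for p, s in items])  # ends before starts at ties
    depth = best = 0
    for _, delta in events:
        depth += delta
        best = max(best, depth)
    return best

print(peak_cost([(0.0, 0.5), (0.25, 0.5), (0.6, 0.3)]))  # 2 bins needed
```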

The commonly quoted error rate for QMC integration with an infinite low-discrepancy sequence is $O(n^{-1}\log(n)^r)$ with $r=d$ for extensible sequences and $r=d-1$ otherwise. Such rates hold uniformly over all $d$-dimensional integrands of Hardy-Krause variation one when using $n$ evaluation points. Implicit in those bounds is that, for any sequence of QMC points, the integrand can be chosen to depend on $n$. In this paper we show that rates with any $r<(d-1)/2$ can hold when $f$ is held fixed as $n\to\infty$. This is accomplished following a suggestion of Erich Novak to use some unpublished results of Trojan from the 1980s, as given in the information-based complexity monograph of Traub, Wasilkowski and Wo\'zniakowski. The proof applies a technique of Roth together with the theorem of Trojan. The proof is non-constructive, and we do not know of any integrand of bounded variation in the sense of Hardy and Krause for which the QMC error exceeds $(\log n)^{1+\epsilon}/n$ for infinitely many $n$ when using a digital sequence such as one of Sobol's. An empirical search when $d=2$ for integrands designed to exploit known weaknesses in certain point sets showed no evidence that $r>1$ is needed. An example with $d=3$ and $n$ up to $2^{100}$ might possibly require $r>1$.
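As a hedged empirical illustration of the kind of search described above (our integrand and sample sizes, using SciPy's Sobol' generator rather than the paper's constructions), one can track the error decay on a fixed smooth integrand:

```python
import numpy as np
from scipy.stats import qmc

# Hedged sketch: empirical QMC error on the fixed integrand f(x, y) = x*y over
# [0, 1]^2, whose exact integral is 1/4, using SciPy's unscrambled Sobol'
# sequence with n = 2^m points.
sobol = qmc.Sobol(d=2, scramble=False)
for m in range(8, 17, 2):
    pts = sobol.random_base2(m)          # first 2^m Sobol' points
    sobol.reset()                        # restart the sequence for the next n
    err = abs(np.mean(pts[:, 0] * pts[:, 1]) - 0.25)
    print(2 ** m, err)
```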

We study facility location problems in which agents are located on a real line and divided into groups based on criteria such as ethnicity or age. Our aim is to design mechanisms that locate a facility so as to approximately minimize the costs of groups of agents to the facility fairly, while eliciting the agents' locations truthfully. We first explore various well-motivated group-fairness cost objectives and show that many natural objectives have an unbounded approximation ratio. We then consider the objectives of minimizing the maximum total group cost and minimizing the average group cost. For these objectives, we show that existing classical mechanisms (e.g., the median) and new group-based mechanisms provide bounded approximation ratios, where the group-based mechanisms achieve better ratios. We also provide lower bounds for both objectives. To measure fairness both between groups and within each group, we study a new notion of intergroup and intragroup fairness (IIF). We consider two IIF objectives and provide mechanisms with tight approximation ratios.
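For context, the classical median mechanism referenced above can be stated in a few lines; it is strategyproof on the line because no agent can pull the median toward itself by misreporting. The implementation below is a generic sketch of that baseline, not the paper's group-based variants:

```python
# Hedged sketch: the classical median mechanism for facility location on a
# line. Placing the facility at a median of the reported locations is
# strategyproof and minimizes the total distance to the agents.
def median_mechanism(locations):
    xs = sorted(locations)
    return xs[(len(xs) - 1) // 2]        # left median for even-sized inputs

print(median_mechanism([0.1, 0.4, 0.9]))  # facility located at 0.4
```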

We present and analyze a momentum-based gradient method for training linear classifiers with an exponentially-tailed loss (e.g., the exponential or logistic loss), which maximizes the classification margin on separable data at a rate of $\widetilde{\mathcal{O}}(1/t^2)$. This contrasts with a rate of $\mathcal{O}(1/\log(t))$ for standard gradient descent and $\mathcal{O}(1/t)$ for normalized gradient descent. The momentum-based method is derived via the convex dual of the maximum-margin problem; specifically, applying Nesterov acceleration to this dual results in a simple and intuitive method in the primal. The dual view can also be used to derive a stochastic variant, which performs adaptive non-uniform sampling via the dual variables.
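To see the slow baseline concretely, the following hedged sketch (our own toy data and step size) runs plain gradient descent on the exponential loss and tracks the normalized margin, which is the quantity that grows only at the $\mathcal{O}(1/\log(t))$ rate cited above:

```python
import numpy as np

# Hedged sketch: plain gradient descent on the exponential loss
# L(w) = sum_i exp(-y_i <w, x_i>) for a small separable data set, tracking the
# normalized margin min_i y_i <w, x_i> / ||w||. This illustrates the slow
# baseline, not the accelerated dual method of the paper.
X = np.array([[1.0, 2.0], [2.0, 1.0], [2.5, 2.5],
              [-1.0, -2.0], [-2.0, -1.0], [-2.5, -2.5]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

w = np.zeros(2)
for t in range(1, 10001):
    margins = y * (X @ w)
    grad = -(X.T @ (y * np.exp(-margins)))   # gradient of the exponential loss
    w -= 0.01 * grad
    if t in (10, 100, 1000, 10000):
        print(t, np.min(y * (X @ w)) / np.linalg.norm(w))  # slow margin growth
```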

The Normalized Cut (NCut) objective function, widely used in data clustering and image segmentation, quantifies the cost of graph partitioning in a way that biases balanced clusters or segments toward lower cost than unbalanced partitionings. However, this bias is so strong that it avoids all singleton partitions, even when a vertex is only very weakly connected to the rest of the graph. Motivated by the B\"uhler-Hein family of balanced cut costs, we propose the family of Compassionately Conservative Balanced (CCB) Cut costs, indexed by a parameter that can be used to strike a compromise between the desire to avoid too many singleton partitions and the notion that all partitions should be balanced. We show that CCB-Cut minimization can be relaxed into an orthogonally constrained $\ell_{\tau}$-minimization problem that, for one particular index value, coincides with the problem of computing Piecewise Flat Embeddings (PFE), and we present an algorithm for solving the relaxed problem by iteratively minimizing a sequence of reweighted Rayleigh quotients (IRRQ). Using images from the BSDS500 database, we show that image segmentation based on CCB-Cut minimization provides better accuracy with respect to ground truth and greater variability in region size than NCut-based image segmentation.
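For reference, the two-way form of the objective being generalized is the standard

$$\operatorname{NCut}(A,B) \;=\; \frac{\operatorname{cut}(A,B)}{\operatorname{vol}(A)} + \frac{\operatorname{cut}(A,B)}{\operatorname{vol}(B)},$$

where $\operatorname{cut}(A,B)$ is the total weight of edges between $A$ and $B$ and $\operatorname{vol}$ sums the vertex degrees; the denominators are the source of the balance bias discussed above.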

In this paper, we study optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is: strongly convex and smooth, strongly convex, smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improved dependence on the condition numbers.
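As a minimal runnable point of reference (decentralized gradient descent with an assumed ring topology and toy quadratic objectives, not the accelerated dual method of the paper), the role of the mixing/interaction matrix $W$ can be seen as follows:

```python
import numpy as np

# Hedged sketch: decentralized gradient descent x_{k+1} = W x_k - alpha*grad
# on a ring of m agents with local quadratics f_i(x) = (x - b_i)^2 / 2, so the
# minimizer of F(x) = sum_i f_i(x) is mean(b). Topology, step size, and
# objectives are illustrative choices.
m, alpha = 8, 0.05
b = np.arange(m, dtype=float)            # local targets b_i = i
W = np.zeros((m, m))
for i in range(m):                       # doubly stochastic mixing on a ring
    W[i, i] = 0.5
    W[i, (i - 1) % m] = 0.25
    W[i, (i + 1) % m] = 0.25

x = np.zeros(m)                          # one scalar iterate per agent
for _ in range(2000):
    x = W @ x - alpha * (x - b)          # gossip step + local gradient step
print(x, b.mean())  # entries near mean(b) = 3.5, up to an O(alpha) bias
```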
