
This paper deals with the maximum independent set (M.I.S.) problem, also known as the stable set problem. The basic mathematical programming model that captures this problem is an Integer Program (I.P.) with zero-one variables and only the edge inequalities. We present an enhanced model obtained by adding a polynomial number of linear constraints, known as valid inequalities; this new model is still polynomial in the number of vertices in the graph. We carried out computational testing of the Linear Relaxation of the new Integer Program. We tested about 7000 instances of randomly generated (and connected) graphs with up to 64 vertices (as well as all 64-, 128-, and 256-vertex instances at the "challenge" website OEIS.org). In each of these instances, the Linear Relaxation returned an optimal solution in which (i) every variable had an integer value, and (ii) the optimal value of the Linear Relaxation equaled that of the original (basic) Integer Program. Our computational experience has been that a binary search on the objective function value is a powerful tool that yields a (weakly) polynomial algorithm.
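To make the basic edge-inequality formulation and its Linear Relaxation concrete, here is a minimal sketch using `scipy` (the graphs and the library choice are illustrative assumptions, not the paper's actual code). On a bipartite graph the basic relaxation is already integral, while on an odd cycle it is fractional, which is exactly the kind of gap that added valid inequalities are meant to close.

```python
# Sketch: LP relaxation of the basic edge-inequality IP for maximum
# independent set. Illustrative only; not the paper's enhanced model.
import numpy as np
from scipy.optimize import linprog

def mis_lp(n, edges):
    """Solve max sum(x) s.t. x_u + x_v <= 1 per edge, 0 <= x <= 1."""
    A = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        A[i, u] = A[i, v] = 1.0
    res = linprog(c=-np.ones(n), A_ub=A, b_ub=np.ones(len(edges)),
                  bounds=[(0, 1)] * n, method="highs")
    return -res.fun, res.x

# Bipartite path P4: the basic relaxation is already integral (optimum 2).
print(mis_lp(4, [(0, 1), (1, 2), (2, 3)])[0])   # -> 2.0
# Odd cycle C5: the basic relaxation is fractional (all x_v = 1/2, value
# 2.5), which is what extra valid inequalities such as odd-cycle cuts fix.
print(mis_lp(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])[0])  # -> 2.5
```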


An informative measurement is the most efficient way to gain information about an unknown state. We present a first-principles derivation of a general-purpose dynamic programming algorithm that returns an optimal sequence of informative measurements by sequentially maximizing the entropy of possible measurement outcomes. This algorithm can be used by an autonomous agent or robot to decide where best to measure next, planning a path corresponding to an optimal sequence of informative measurements. It is applicable to states and controls that are either continuous or discrete, and to agent dynamics that are either stochastic or deterministic, including Markov decision processes and Gaussian processes. Recent results from the fields of approximate dynamic programming and reinforcement learning, including on-line approximations such as rollout and Monte Carlo tree search, allow the measurement task to be solved in real time. The resulting solutions include non-myopic paths and measurement sequences that can generally outperform, sometimes substantially, commonly used greedy approaches. We demonstrate this for a global search task, where on-line planning for a sequence of local searches is found to reduce the number of measurements in the search by approximately half. A variant of the algorithm is derived for Gaussian processes for active sensing.
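The core ingredient, maximizing the entropy of possible measurement outcomes, can be illustrated in a toy discrete search setting. The sketch below assumes a perfect binary sensor (detection iff the target is in the probed cell, no false alarms); under that assumption the outcome distribution of probing cell $i$ is Bernoulli with parameter equal to the prior mass of cell $i$, so the most informative probe is the cell whose prior is closest to 1/2. This is an illustration of the entropy criterion, not the paper's full dynamic programming algorithm.

```python
# Minimal sketch of one greedy step of informative measurement selection:
# probe the cell whose binary measurement outcome has maximum entropy.
# Sensor model (perfect detection, no false alarms) is an assumption.
import numpy as np

def outcome_entropy(p_detect):
    """Entropy (nats) of a Bernoulli measurement outcome."""
    p = np.clip(p_detect, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def most_informative_cell(prior):
    """With a perfect sensor, P(detect in cell i) = prior[i]; the
    entropy-maximizing probe is the cell with prior closest to 1/2."""
    return int(np.argmax([outcome_entropy(p) for p in prior]))

prior = np.array([0.05, 0.45, 0.3, 0.2])
print(most_informative_cell(prior))  # cell 1: prior 0.45 is nearest 1/2
```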

In a temporal graph, each edge is available at specific points in time. Such an availability point is often represented by a ''temporal edge'' that can be traversed from its tail only at a specific departure time, arriving at its head after a specific travel time. In such a graph, the connectivity from one node to another is naturally captured by the existence of a temporal path, where temporal edges are traversed one after the other. When constraints are imposed on how long it is possible to wait at a node between two temporal edges, it becomes interesting to consider temporal walks, in which the same node may be visited several times, possibly at different times. We study the complexity of computing minimum-cost temporal walks from a single source under waiting-time constraints in a temporal graph, and ask under which conditions this problem can be solved in linear time. Our main result is a linear-time algorithm when the input temporal graph is given by its (classical) space-time representation. We use an algebraic framework for manipulating abstract costs, enabling the optimization of a large variety of criteria, or even combinations of these. This allows us to improve previous results for several criteria, such as number of edges or overall waiting time, even without waiting constraints, and saves a logarithmic factor for all criteria under waiting constraints. Interestingly, we show that a logarithmic factor in the time complexity appears to be necessary with a more basic input consisting of a single ordered list of temporal edges (sorted either by arrival times or by departure times). Indeed, we show an equivalence between the space-time representation and a representation with two such ordered lists.
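For intuition, the simplest instance of this family of problems, earliest arrival with unbounded waiting, admits a classical single-pass scan over temporal edges sorted by departure time. The sketch below shows only that special case; the paper's algorithm generalizes it to abstract costs and waiting-time constraints.

```python
# Sketch of the classical single-pass scan for earliest-arrival temporal
# walks, given temporal edges (tail, head, departure, travel) sorted by
# nondecreasing departure time, with unbounded waiting allowed at nodes.
import math

def earliest_arrival(n, edges, source, t_start=0):
    arr = [math.inf] * n
    arr[source] = t_start
    for u, v, dep, travel in edges:       # nondecreasing departure assumed
        if arr[u] <= dep:                 # we can be at u by time dep
            arr[v] = min(arr[v], dep + travel)
    return arr

edges = [(0, 1, 1, 2), (0, 2, 2, 9), (1, 2, 3, 1)]
print(earliest_arrival(3, edges, 0))  # [0, 3, 4]: via node 1 beats (0,2)
```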

Energy modelling can enable energy-aware software development and assist the developer in meeting an application's energy budget. Although many energy models for embedded processors exist, most do not account for processor-specific configurations, nor are they suitable for static energy consumption estimation. This paper introduces a set of comprehensive energy models for Arm's Cortex-M0 processor, ready to support energy-aware development of edge computing applications using either profiling- or static-analysis-based energy consumption estimation. We use a commercially representative physical platform together with a custom modified Instruction Set Simulator to obtain the physical data and system state markers used to generate the models. The models account for different processor configurations, all of which have a significant impact on the execution time and energy consumption of edge computing applications. Unlike existing works, which target a very limited set of applications, all developed models are generated and validated using a very wide range of benchmarks from a variety of emerging IoT application areas, including machine learning, and have a prediction error of less than 5%.
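A common shape for such profiling-based models is a linear regression of measured energy against instruction-class counts. The sketch below uses entirely synthetic counts and per-instruction costs to show the fitting step; the paper's actual models are built from measured platform data and ISS traces, and their structure may differ.

```python
# Hedged sketch of a profiling-based energy model: least-squares fit of
# per-benchmark energy against instruction-class counts. All numbers
# below are synthetic, for illustration only.
import numpy as np

# Rows: benchmarks; columns: counts of (alu, load/store, branch) instrs.
counts = np.array([[1000, 200, 50],
                   [400, 800, 120],
                   [1500, 100, 300],
                   [700, 500, 90]], dtype=float)
true_cost = np.array([1.0, 2.5, 1.8])   # nJ per instruction (synthetic)
energy = counts @ true_cost             # "measured" energies

coef, *_ = np.linalg.lstsq(counts, energy, rcond=None)
pred = counts @ coef
rel_err = np.max(np.abs(pred - energy) / energy)
print(coef)      # recovers ~[1.0, 2.5, 1.8] on this noiseless data
print(rel_err)   # ~0; real measured data would leave a residual error
```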

We study nonlinear optimization problems with a stochastic objective and deterministic equality and inequality constraints, which emerge in numerous applications including finance, manufacturing, power systems and, recently, deep neural networks. We propose an active-set stochastic sequential quadratic programming (StoSQP) algorithm that utilizes a differentiable exact augmented Lagrangian as the merit function. The algorithm adaptively selects the penalty parameters of the augmented Lagrangian and performs a stochastic line search to decide the stepsize. Global convergence is established: for any initialization, the KKT residuals converge to zero almost surely. Our algorithm and analysis further develop the prior work of Na et al. (2022). Specifically, we allow nonlinear inequality constraints without requiring the strict complementarity condition; refine some of the designs in Na et al. (2022), such as the feasibility error condition and the monotonically increasing sample size; strengthen the global convergence guarantee; and improve the sample complexity for the objective Hessian. We demonstrate the performance of the designed algorithm on a subset of the nonlinear problems in the CUTEst test set and on constrained logistic regression problems.
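To give a feel for the stochastic line search ingredient, here is a deliberately simplified sketch: a backtracking Armijo-style search where both function values and gradients are only available as noisy sample averages. The quadratic objective, noise model, and constants are all assumptions for illustration; the paper applies such a line search inside an SQP loop with an exact augmented Lagrangian merit function, not to a plain unconstrained objective.

```python
# Illustrative sketch only: a stochastic Armijo-style backtracking line
# search on a sampled objective, one ingredient of stepsize selection in
# StoSQP-type methods. Everything below is a toy model.
import numpy as np

rng = np.random.default_rng(0)

def sampled_f(x, batch=64):
    """Noisy estimate of f(x) = ||x||^2 / 2 via a sample-averaged noise."""
    return 0.5 * x @ x + rng.normal(0.0, 0.01, size=batch).mean()

def sampled_grad(x, batch=64):
    return x + rng.normal(0.0, 0.01, size=(batch, x.size)).mean(axis=0)

x = np.array([2.0, -1.5])
for _ in range(50):
    g = sampled_grad(x)
    d = -g                               # steepest-descent direction
    alpha, c1 = 1.0, 1e-4
    f0 = sampled_f(x)
    # Backtrack until a sampled Armijo condition holds (capped for safety).
    while alpha > 1e-8 and sampled_f(x + alpha * d) > f0 + c1 * alpha * (g @ d):
        alpha *= 0.5
    x = x + alpha * d
print(np.linalg.norm(x))  # small: approximate stationarity despite noise
```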

Prophet inequalities consist of many beautiful statements that establish tight performance ratios between online and offline allocation algorithms. Typically, tightness is established by constructing an algorithmic guarantee and a worst-case instance separately, whose bounds match as a result of some "ingenuity". In this paper, we instead formulate the construction of the worst-case instance as an optimization problem, which directly finds the tight ratio without needing to construct two bounds separately. Our analysis of this complex optimization problem involves identifying the structure in a new "Type Coverage" dual problem. It can be seen as akin to the celebrated Magician and OCRS problems, except more general in that it can also provide tight ratios relative to the optimal offline allocation, whereas the earlier problems concern only the ex-ante relaxation of the offline problem. Through this analysis, our paper provides a unified framework that derives new prophet inequalities and recovers existing ones, including two important new results. First, we show that the "oblivious" method of setting a static threshold due to Chawla et al. (2020), surprisingly, is best-possible among all static threshold algorithms, under any number $k$ of units. We emphasize that this result is derived without needing to explicitly find any counterexample instances. Our result implies that the asymptotic convergence rate of $1-O(\sqrt{\log k/k})$ for static threshold algorithms, first established in Hajiaghayi et al. (2007), is tight; this confirms for the first time a separation from the convergence rate of adaptive algorithms, which is $1-\Theta(\sqrt{1/k})$ due to Alaei (2014). Second, turning to the IID setting, our framework allows us to numerically illustrate the tight guarantee (of adaptive algorithms) under any number $k$ of starting units. Our guarantees for $k>1$ exceed the state-of-the-art.
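As background for readers new to static thresholds, the classical $k=1$ prophet inequality can be checked numerically: a single static threshold $T$ with $\Pr[\max_i X_i < T] = 1/2$ guarantees half the expected offline maximum. The sketch below uses iid uniforms purely for convenience (the classical result allows arbitrary independent distributions) and is an illustration of the static-threshold idea, not of this paper's optimization framework.

```python
# Monte Carlo check of the classical k = 1 prophet inequality with a
# median-of-maximum static threshold. Distributions are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 5, 200_000
X = rng.uniform(0, 1, size=(trials, n))

# P(max X_i < T) = 1/2 for n iid U(0,1) values => T = (1/2)^(1/n).
T = 0.5 ** (1 / n)

offline = X.max(axis=1).mean()            # prophet's expected value
exceeds = X > T                           # online: take first value > T
first = exceeds.argmax(axis=1)
hit = exceeds.any(axis=1)
online = np.where(hit, X[np.arange(trials), first], 0.0).mean()

print(online / offline)   # comfortably above the guaranteed ratio 1/2
```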

In this work we propose a discretization of the second boundary condition for the Monge-Ampère equation arising in geometric optics and optimal transport. The discretization we propose is the natural generalization of the popular Oliker-Prussner method proposed in 1988. For the discretization of the differential operator, we use a discrete analogue of the subdifferential. Existence, uniqueness, and stability of solutions to the discrete problem are established. Convergence results to the continuous problem are given.

In this paper, we investigate the online allocation problem of maximizing the overall revenue subject to both lower- and upper-bound constraints. Compared to the extensively studied online problems with only resource upper bounds, the two-sided constraints restrict the prospects of resource consumption far more severely. As a result, prior work could guarantee only limited violations of the constraints or pessimistic competitive bounds. To tackle this challenge, we define a measure of feasibility $\xi^*$ to evaluate the hardness of this problem, and estimate this measure by an optimization routine with theoretical guarantees. We propose an online algorithm adopting a constructive framework, in which we initialize a threshold price vector using the estimate, then dynamically update the price vector and use it for decision-making at each step. We show that the proposed algorithm is $\big(1-O(\frac{\varepsilon}{\xi^*-\varepsilon})\big)$- or $\big(1-O(\frac{\varepsilon}{\xi^*-\sqrt{\varepsilon}})\big)$-competitive with high probability for $\xi^*$ known or unknown, respectively. To the best of our knowledge, this is the first result establishing a nearly optimal competitive algorithm for solving two-sided constrained online allocation problems with a high probability of feasibility.
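The threshold-price mechanism can be sketched in a few lines: accept a request only when its revenue beats the posted price of the resources it would consume, and raise prices as capacity drains. The sketch below handles only the upper-bound side and uses an ad hoc multiplicative price update with synthetic data; the paper's algorithm additionally handles lower-bound constraints and uses a carefully calibrated initialization and update.

```python
# Hedged sketch of threshold-price online allocation (upper bounds only).
# Price initialization, update rule, and all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
m, T = 2, 500                      # resources, requests
capacity = np.full(m, 60.0)
price = np.full(m, 0.5)            # initial threshold price vector
used = np.zeros(m)
revenue = 0.0

for _ in range(T):
    r = rng.uniform(0, 1)                    # revenue of this request
    a = rng.uniform(0, 1, size=m)            # resource consumption
    if r >= price @ a and np.all(used + a <= capacity):
        revenue += r
        used += a
        price *= 1.0 + a / capacity          # raise prices as budget drains
print(revenue, used)                         # used never exceeds capacity
```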

The hybrid high-order method is a modern numerical framework for the approximation of elliptic PDEs. We present here an extension of the hybrid high-order method to meshes possessing curved edges/faces. Such an extension allows us to enforce boundary conditions exactly on curved domains, and capture curved geometries that appear internally in the domain e.g. discontinuities in a diffusion coefficient. The method makes use of non-polynomial functions on the curved faces and does not require any mappings between reference elements/faces. Such an approach does not require the faces to be polynomial, and has a strict upper bound on the number of degrees of freedom on a curved face for a given polynomial degree. Moreover, this approach of enriching the space of unknowns on the curved faces with non-polynomial functions should extend naturally to other polytopal methods. We show the method to be stable and consistent on curved meshes and derive optimal error estimates in $L^2$ and energy norms. We present numerical examples of the method on a domain with curved boundary, and for a diffusion problem such that the diffusion tensor is discontinuous along a curved arc.

Data dissemination is a fundamental task in distributed computing. This paper studies broadcast problems in various innovative models where the communication network connecting $n$ processes is dynamic (e.g., due to mobility or failures) and controlled by an adversary. In the first model, the processes transitively communicate their ids in synchronous rounds along a rooted tree given in each round by the adversary, whose goal is to maximize the number of rounds until at least one id is known by all processes. Previous research has shown a $\lceil{\frac{3n-1}{2}}\rceil-2$ lower bound and an $O(n\log\log n)$ upper bound. We show the first linear upper bound for this problem, namely $\lceil{(1 + \sqrt 2) n-1}\rceil \approx 2.4n$. We extend these results to the setting where the adversary gives in each round $k$ disjoint forests and its goal is to maximize the number of rounds until there is a set of $k$ ids such that each process knows of at least one of them. We give a $\left\lceil{\frac{3(n-k)}{2}}\right\rceil-1$ lower bound and a $\frac{\pi^2+6}{6}n+1 \approx 2.6n$ upper bound for this problem. Finally, we study the setting where the adversary gives in each round a directed graph with $k$ roots and its goal is to maximize the number of rounds until there exist $k$ ids that are known by all processes. We give a $\left\lceil{\frac{3(n-3k)}{2}}\right\rceil+2$ lower bound and a $\lceil { (1+\sqrt{2})n}\rceil+k-1 \approx 2.4n+k$ upper bound for this problem. For the latter two problems, no upper or lower bounds were previously known.

In this paper, we introduce a new causal framework capable of dealing with both probabilistic and non-probabilistic problems. Specifically, we provide a direct causal effect formula called the Probabilistic vAriational Causal Effect (PACE), together with variations of it satisfying certain postulates. Our causal effect formula combines the idea of the total variation of a function with probability theory. The probabilistic part captures the natural availability of changing the exposure's value given some variables; these variables interfere with the effect of the exposure on a given outcome. PACE has a parameter $d$ that determines the degree to which the natural availability of changing the exposure values is taken into account. Lower values of $d$ correspond to scenarios in which rare cases are important, whereas with higher values of $d$ our framework deals with problems that are probabilistic in nature. Hence, instead of a single causal effect value, we provide a causal effect vector by discretizing $d$. Further, we introduce positive and negative PACE to measure the positive and negative causal changes in the outcome while changing the exposure values. Furthermore, we provide an identifiability criterion for PACE to deal with observational studies. We also address the problem of computing counterfactuals in causal reasoning. We compare our framework to those of Pearl, mutual information, conditional mutual information, and Janzing et al. by investigating several examples.
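To convey the flavor of a total-variation-style effect with a probability-weighting exponent, here is a speculative toy sketch: sum the jumps of $x \mapsto E[Y \mid X=x]$ over adjacent exposure levels, weighting each jump by a transition probability raised to the power $d$. The weighting scheme and all data are illustrative assumptions, not the paper's exact definition of PACE.

```python
# Speculative sketch in the spirit of PACE: total variation of the
# conditional mean outcome, with jumps discounted by availability^d.
# Synthetic numbers; not the paper's formula.
import numpy as np

x_levels = np.array([0, 1, 2, 3])
mean_y = np.array([1.0, 1.2, 3.0, 3.1])   # E[Y | X = x] (synthetic)
p_change = np.array([0.4, 0.05, 0.5])     # P(exposure moves i -> i+1)

def tv_effect(mean_y, p_change, d):
    """Sum of |jumps| of the conditional mean, weighted by p_change**d."""
    jumps = np.abs(np.diff(mean_y))
    return float(np.sum(p_change ** d * jumps))

# d = 0 counts every jump fully (rare cases matter); larger d discounts
# the big but rarely available 1 -> 2 jump, shrinking the measured effect.
for d in (0.0, 1.0, 2.0):
    print(d, tv_effect(mean_y, p_change, d))
```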
