
Given an undirected graph $G=(V,E)$, the longest induced path problem (LIPP) consists of obtaining a maximum cardinality subset $W\subseteq V$ such that $W$ induces a simple path in $G$. In this paper, we propose two new formulations with an exponential number of constraints for the problem, together with effective branch-and-cut procedures for its solution. While the first formulation (cec) is based on constraints that explicitly eliminate cycles, the second one (cut) ensures connectivity via cutset constraints. We compare, both theoretically and experimentally, the newly proposed approaches with a state-of-the-art formulation recently proposed in the literature. More specifically, we show that the polyhedron defined by formulation cut coincides with that of the formulation available in the literature. Moreover, we show that these two formulations are theoretically stronger than cec. We also propose a new branch-and-cut procedure based on the new formulations. Computational experiments show that the newly proposed formulation cec, although weaker from a theoretical point of view, is the best-performing approach, as it solves all but one of the 1065 benchmark instances from the literature within the given time limit. In addition, our newly proposed approaches outperform the state-of-the-art formulation in terms of the median time to solve the instances to optimality. Furthermore, we perform extended computational experiments on more challenging and hard-to-solve larger instances and evaluate the impact of providing initial feasible solutions (warm starts) to the formulations.
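To make the two ingredients of cec concrete, one possible cycle-elimination-style model (our own minimal sketch; the exact formulation in the paper may differ in its variables and strengthening) uses binary variables $x_v$ indicating whether $v\in W$ and auxiliary variables $z_{uv}$ that linearize $x_u x_v$, i.e., select exactly the edges induced by $W$:

$$\max \sum_{v \in V} x_v$$
$$\text{s.t.}\quad z_{uv} \ge x_u + x_v - 1,\quad z_{uv} \le x_u,\quad z_{uv} \le x_v \qquad \forall \{u,v\} \in E,$$
$$\sum_{\{u,v\} \in E} z_{uv} = \sum_{v \in V} x_v - 1,$$
$$\sum_{u \in N(v)} z_{uv} \le 2 \qquad \forall v \in V,$$
$$\sum_{v \in C} x_v \le |C| - 1 \qquad \text{for every cycle } C \text{ of } G,$$
$$x_v \in \{0,1\} \ \forall v \in V, \qquad 0 \le z_{uv} \le 1 \ \forall \{u,v\} \in E.$$

The degree bound, the edge-count equation (valid once at least one vertex is selected) and the exponentially many cycle-elimination constraints, separated on the fly within branch-and-cut, together force $G[W]$ to be a single simple path; in the same spirit, a cut-style model would replace the cycle constraints with cutset constraints ensuring that $G[W]$ is connected.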

Related content

The main problem in the area of property testing is to understand which graph properties are \emph{testable}, which means that with constantly many queries to any input graph $G$, a tester can decide with good probability whether $G$ satisfies the property, or is far from satisfying the property. Testable properties are well understood in the dense model and in the bounded degree model, but little is known in sparse graph classes when graphs are allowed to have unbounded degree. This is the setting of the \emph{sparse model}. We prove that for any proper minor-closed class $\mathcal{G}$, any monotone property (i.e., any property that is closed under taking subgraphs) is testable for graphs from $\mathcal{G}$ in the sparse model. This extends a result of Czumaj and Sohler (FOCS'19), who proved it for monotone properties with finitely many obstructions. Our result implies, for instance, that for any integers $k$ and $t$, $k$-colorability of $K_t$-minor-free graphs is testable in the sparse model. Elek recently proved that monotone properties of bounded-degree graphs from minor-closed classes that are closed under disjoint union can be verified by an approximate proof labeling scheme in constant time. We show that the bounded degree assumption can also be omitted in his result.

In this paper, we first introduce in detail some preliminaries on signal flow graphs, linear time-invariant systems over F(z), and computational complexity. In order to obtain a necessary and sufficient condition on F(z) for the general 2-path problem, we then analyze a sufficient condition on F(z) (or R) and necessary conditions on F(z), respectively. Moreover, we derive in detail an equivalent necessary and sufficient condition on R for the existence of a general 2-path. Finally, we analyze the computational complexity of the algorithm that checks this equivalent condition, which shows that the general 2-path problem belongs to P.

We study the federated optimization problem from a dual perspective and propose a new algorithm termed federated dual coordinate descent (FedDCD), which is based on a type of coordinate descent method developed by Necoara et al. [Journal of Optimization Theory and Applications, 2017]. Additionally, we enhance the FedDCD method with inexact gradient oracles and Nesterov's acceleration. We demonstrate theoretically that our proposed approach achieves better convergence rates than state-of-the-art primal federated optimization algorithms in certain settings. Numerical experiments on real-world datasets support our analysis.
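As a rough illustration of the dual viewpoint (our own, heavily simplified sketch for an $\ell_2$-regularized squared-loss objective in the SDCA style, with a conservative $1/K$ averaging of client updates; it is not the FedDCD algorithm, which additionally uses inexact gradient oracles and Nesterov acceleration, and all function names are illustrative), each client updates the dual coordinates attached to its own samples and only the induced primal increment is aggregated by the server:

```python
# Simplified federated dual coordinate descent sketch (squared loss, l2 regularization).
import numpy as np

def local_dual_updates(X, y, alpha0, w, lam, n_total):
    """Client-side pass: closed-form dual coordinate updates for squared loss."""
    alpha, dw = alpha0.copy(), np.zeros_like(w)
    for i in np.random.permutation(len(y)):
        xi = X[i]
        resid = y[i] - alpha[i] - xi @ (w + dw)              # dual gradient coordinate
        delta = resid / (1.0 + xi @ xi / (lam * n_total))    # exact one-dimensional maximizer
        alpha[i] += delta
        dw += delta * xi / (lam * n_total)                   # primal image of the dual step
    return alpha - alpha0, dw

def feddcd_sketch(clients, lam=0.01, rounds=200):
    n_total = sum(len(y) for _, y in clients)
    d, K = clients[0][0].shape[1], len(clients)
    w = np.zeros(d)
    alphas = [np.zeros(len(y)) for _, y in clients]
    for _ in range(rounds):
        updates = [local_dual_updates(X, y, a, w, lam, n_total)
                   for (X, y), a in zip(clients, alphas)]
        for k, (da, dw) in enumerate(updates):               # conservative 1/K averaging
            alphas[k] += da / K                              # keeps parallel client steps stable
            w += dw / K
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)
    clients = [(X, X @ w_true + 0.01 * rng.normal(size=50))
               for X in (rng.normal(size=(50, 5)) for _ in range(4))]
    print(np.round(feddcd_sketch(clients), 2), np.round(w_true, 2))
```

The toy driver at the bottom fits a small synthetic ridge-style problem spread across four clients; only the primal increments (not the raw data or dual variables) need to be communicated to the server.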

We call a multigraph $(k,d)$-edge colourable if its edge set can be partitioned into $k$ subgraphs of maximum degree at most $d$ and denote by $\chi'_{d}(G)$ the minimum $k$ such that $G$ is $(k,d)$-edge colourable. We prove that for every integer $d$, every multigraph $G$ with maximum degree $\Delta$ is $(\lceil \frac{\Delta}{d} \rceil, d)$-edge colourable if $d$ is even and $(\lceil \frac{3\Delta - 1}{3d - 1} \rceil, d)$-edge colourable if $d$ is odd, and these bounds are tight. We also prove that for every simple graph $G$, $\chi'_{d}(G) \in \{ \lceil \frac{\Delta}{d} \rceil, \lceil \frac{\Delta+1}{d} \rceil \}$ and characterize the values of $d$ and $\Delta$ for which it is NP-complete to compute $\chi'_d(G)$. These results generalize several classic results on the chromatic index of a graph by Shannon, Vizing, Holyer, Leven and Galil.
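For instance (an illustration using only the bounds stated above), specializing to $d = 1$, where each colour class is a matching, the odd-$d$ bound becomes
$$\chi'_1(G) \le \left\lceil \frac{3\Delta - 1}{2} \right\rceil = \left\lfloor \frac{3\Delta}{2} \right\rfloor,$$
which is Shannon's classical bound for multigraphs, while the simple-graph result reduces to $\chi'_1(G) \in \{\Delta, \Delta + 1\}$, i.e., Vizing's theorem, together with the hardness of deciding which of the two values holds (Holyer).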

We are interested in investigating the security of source encryption with a symmetric key under side-channel attacks. In this paper, we propose a general model of source encryption with a symmetric key under side-channel attacks that can be applied to any kind of symmetric-key source encryption. We also propose a new security criterion for strong secrecy under side-channel attacks, which can be seen as a natural extension of the mutual information, namely, \emph{the maximum conditional mutual information between the plaintext and the ciphertext given the adversarial key leakage, where the maximum is taken over all possible plaintext distributions}. Under this new criterion, we formulate the rate region that provides both necessary and sufficient conditions for secure transmission even under side-channel attacks. Furthermore, we prove another result regarding our new security criterion which might be interesting in its own right: although the new criterion is clearly stricter than the standard criterion based on the simple mutual information, in the case of a discrete memoryless source, perfect secrecy under the standard criterion cannot be achieved without also achieving perfect secrecy under the new criterion.
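In symbols (our own notation for the criterion described above: $X$ the plaintext, $C$ the ciphertext, and $Z$ the key information leaked to the adversary through the side channel), the proposed measure can be written as
$$\max_{p_X} I(X; C \mid Z),$$
whereas the standard criterion only involves the simple mutual information $I(X; C)$.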

Convergence to a saddle point for convex-concave functions has been studied for decades, while recent years have seen a surge of interest in non-convex (zero-sum) smooth games, motivated by their wide range of recent applications. It remains an intriguing research challenge how local optimal points should be defined and which algorithms can converge to such points. An interesting concept is the local minimax point, which is closely tied to the widely used gradient descent ascent algorithm. This paper aims to provide a comprehensive analysis of local minimax points, including their relation to other solution concepts and their optimality conditions. We find that, under mild continuity assumptions, local saddle points can be regarded as a special type of local minimax points, called uniformly local minimax points. In (non-convex) quadratic games, we show that local minimax points are (in some sense) equivalent to global minimax points. Finally, we study the stability of gradient algorithms near local minimax points. Although gradient algorithms can converge to local/global minimax points in the non-degenerate case, they often fail in general. This implies the necessity of either novel algorithms or concepts beyond saddle points and minimax points in non-convex smooth games.
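For concreteness (standard definitions, not specific to this paper): gradient descent ascent on a smooth objective $f(x, y)$ iterates
$$x_{t+1} = x_t - \eta_x \nabla_x f(x_t, y_t), \qquad y_{t+1} = y_t + \eta_y \nabla_y f(x_t, y_t),$$
and a global minimax point $(x^*, y^*)$ is one satisfying
$$f(x^*, y) \le f(x^*, y^*) \le \max_{y'} f(x, y') \qquad \text{for all } x, y;$$
local minimax points are, roughly, the local analogue of this condition, restricted to suitable neighbourhoods.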

Deep Neural Networks (DNNs), despite their tremendous success in recent years, could still cast doubt on their predictions due to the intrinsic uncertainty associated with their learning process. Ensemble techniques and post-hoc calibrations are two types of approaches that have individually shown promise in improving the uncertainty calibration of DNNs. However, the synergistic effect of the two types of methods has not been well explored. In this paper, we propose a truth discovery framework to integrate ensemble-based and post-hoc calibration methods. Using the geometric variance of the ensemble candidates as a good indicator of sample uncertainty, we design an accuracy-preserving truth estimator with provably no accuracy drop. Furthermore, we show that post-hoc calibration can also be enhanced by truth discovery-regularized optimization. On large-scale datasets including CIFAR and ImageNet, our method shows consistent improvement over state-of-the-art calibration approaches on both histogram-based and kernel density-based evaluation metrics. Our code is available at //github.com/horsepurve/truly-uncertain.
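To illustrate the two ingredients being combined (our own simplified sketch, not the paper's accuracy-preserving estimator; the weighting rule, the spread-based uncertainty score, and all function names here are illustrative), one can score each sample by the spread of the ensemble's softmax outputs around their mean and form a truth-discovery style consensus that up-weights members agreeing with the current consensus:

```python
# Minimal sketch: ensemble spread as a per-sample uncertainty score and a
# truth-discovery style re-weighted consensus.  Illustration only.
import numpy as np

def ensemble_uncertainty(probs):
    """probs: (M, N, C) softmax outputs of M ensemble members on N samples."""
    mean = probs.mean(axis=0)                                 # (N, C) mean prediction
    spread = ((probs - mean) ** 2).sum(axis=2).mean(axis=0)   # (N,) disagreement score
    return mean, spread

def truth_discovery_consensus(probs, n_iter=10, eps=1e-12):
    """Iteratively up-weight members that agree with the current consensus."""
    M = probs.shape[0]
    w = np.full(M, 1.0 / M)
    for _ in range(n_iter):
        consensus = np.tensordot(w, probs, axes=1)            # (N, C) weighted average
        dist = ((probs - consensus) ** 2).sum(axis=(1, 2))    # (M,) member-to-consensus distance
        w = 1.0 / (dist + eps)
        w /= w.sum()
    return consensus, w

if __name__ == "__main__":
    logits = np.random.randn(5, 100, 10)                      # 5 members, 100 samples, 10 classes
    probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
    mean, spread = ensemble_uncertainty(probs)
    consensus, weights = truth_discovery_consensus(probs)
    print(spread[:3], weights)
```

A post-hoc calibrator such as temperature scaling could then be fit on the consensus scores; the paper's truth discovery-regularized optimization couples these two steps more tightly.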

Few-shot learning is a challenging problem that requires a model to recognize novel classes with few labeled data. In this paper, we aim to find the expected prototypes of the novel classes, which have the maximum cosine similarity with the samples of the same class. Firstly, we propose a cosine similarity based prototypical network to compute basic prototypes of the novel classes from the few samples. A bias diminishing module is further proposed for prototype rectification, since the basic prototypes computed in the low-data regime are biased relative to the expected prototypes. In our method, the intra-class bias and the cross-class bias are diminished to modify the prototypes. Then we give a theoretical analysis of the impact of the bias diminishing module on the expected performance of our method. We conduct extensive experiments on four few-shot benchmarks and further analyze the advantage of the bias diminishing module. The bias diminishing module brings significant improvements, by a large margin of 3% to 9% in general. Notably, our approach achieves state-of-the-art performance on miniImageNet (70.31% in 1-shot and 81.89% in 5-shot) and tieredImageNet (78.74% in 1-shot and 86.92% in 5-shot), which demonstrates the superiority of the proposed method.
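A minimal sketch of the two basic steps (cosine-similarity prototypes and a simple intra-class rectification via confidently pseudo-labelled queries) is given below; the function names are ours, and the paper's exact rectification rule, which also handles the cross-class bias, is not reproduced:

```python
# Sketch: cosine-similarity prototypes with a simple intra-class bias correction.
import numpy as np

def l2norm(x, axis=-1, eps=1e-12):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def basic_prototypes(support, labels, n_way):
    """Mean of L2-normalized support features per class, re-normalized."""
    feats = l2norm(support)
    return l2norm(np.stack([feats[labels == c].mean(0) for c in range(n_way)]))

def rectified_prototypes(protos, query, top_k=5):
    """Augment each prototype with its most confident pseudo-labelled queries."""
    q = l2norm(query)
    sims = q @ protos.T                              # cosine similarities (Nq, n_way)
    pseudo = sims.argmax(1)
    rectified = []
    for c in range(protos.shape[0]):
        qc, conf = q[pseudo == c], sims[pseudo == c, c]
        if len(qc):
            qc = qc[np.argsort(-conf)[:top_k]]       # keep the most confident queries
            rectified.append(l2norm(np.vstack([protos[c][None], qc]).mean(0)))
        else:
            rectified.append(protos[c])
    return np.stack(rectified)
```

Queries are then classified by their largest cosine similarity to the rectified prototypes.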

We present an end-to-end framework for solving the Vehicle Routing Problem (VRP) using reinforcement learning. In this approach, we train a single model that finds near-optimal solutions for problem instances sampled from a given distribution, only by observing the reward signals and following feasibility rules. Our model represents a parameterized stochastic policy, and by applying a policy gradient algorithm to optimize its parameters, the trained model produces the solution as a sequence of consecutive actions in real time, without the need to re-train for every new problem instance. On the capacitated VRP, our approach outperforms classical heuristics and Google's OR-Tools on medium-sized instances in solution quality, with comparable computation time (after training). We demonstrate how our approach can handle problems with split delivery and explore the effect of such deliveries on the solution quality. Our proposed framework can be applied to other variants of the VRP such as the stochastic VRP, and has the potential to be applied more generally to combinatorial optimization problems.
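To make the policy-gradient mechanics concrete, below is a self-contained toy sketch on a TSP-like routing task: the "policy" is a single-parameter softmax over distances standing in for the paper's learned sequence model, trained with REINFORCE and a batch-mean baseline. All names and the toy policy are ours; the actual framework uses a neural encoder-decoder, feasibility masking, and capacitated instances.

```python
# Toy REINFORCE sketch for a routing policy (single greediness parameter theta).
import numpy as np

rng = np.random.default_rng(0)

def sample_tour(coords, theta):
    """Sample a tour; return its length and d log p(tour)/d theta."""
    n = len(coords)
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    unvisited, cur = set(range(1, n)), 0
    length, grad_logp = 0.0, 0.0
    while unvisited:
        cand = np.array(sorted(unvisited))
        d = dist[cur, cand]
        p = np.exp(-theta * d); p /= p.sum()           # softmax over negative distances
        j = rng.choice(len(cand), p=p)
        grad_logp += (p @ d) - d[j]                     # d/d theta of log softmax(-theta*d)[j]
        length += d[j]
        cur = cand[j]; unvisited.remove(cur)
    return length + dist[cur, 0], grad_logp             # return to the depot

def train(n_points=15, batch=32, iters=200, lr=0.05):
    theta = 0.0                                          # start from a uniform random policy
    for _ in range(iters):
        coords = rng.random((n_points, 2))               # a fresh instance from the distribution
        costs, grads = zip(*(sample_tour(coords, theta) for _ in range(batch)))
        costs, grads = np.array(costs), np.array(grads)
        adv = costs - costs.mean()                       # batch-mean baseline reduces variance
        theta -= lr * (adv * grads).mean()               # REINFORCE step: minimize expected length
    return theta

if __name__ == "__main__":
    print("learned greediness parameter:", round(train(), 3))
```

The same update rule, expectation of (cost minus baseline) times the score function, is what trains the full model once the toy policy is replaced by a neural network over instance embeddings.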

Prevalent techniques in zero-shot learning do not generalize well to other related problem scenarios. Here, we present a unified approach for conventional zero-shot, generalized zero-shot and few-shot learning problems. Our approach is based on a novel Class Adapting Principal Directions (CAPD) concept that allows multiple embeddings of image features into a semantic space. Given an image, our method produces one principal direction for each seen class. Then, it learns how to combine these directions to obtain the principal direction for each unseen class such that the CAPD of the test image is aligned with the semantic embedding of the true class, and opposite to the other classes. This allows efficient and class-adaptive information transfer from seen to unseen classes. In addition, we propose an automatic process for selecting the most useful seen classes for each unseen class to achieve robustness in zero-shot learning. Our method can update the unseen CAPDs by taking advantage of a few unseen-class images, allowing it to work in a few-shot learning scenario. Furthermore, our method can generalize the seen CAPDs by estimating seen-unseen diversity, which significantly improves the performance of generalized zero-shot learning. Our extensive evaluations demonstrate that the proposed approach consistently achieves superior performance in zero-shot, generalized zero-shot and few/one-shot learning problems.
