
In PATH SET PACKING, the input is an undirected graph $G$, a collection $\cal P$ of simple paths in $G$, and a positive integer $k$. The problem is to decide whether there exist $k$ edge-disjoint paths in $\cal P$. We study the parameterized complexity of PATH SET PACKING with respect to both natural and structural parameters. We show that the problem is $W[1]$-hard with respect to vertex cover plus the maximum length of a path in $\cal P$, and $W[1]$-hard with respect to pathwidth plus maximum degree plus solution size. These results answer an open question raised at COCOON 2018. On the positive side, we give an FPT algorithm parameterized by feedback vertex set plus maximum degree, and an FPT algorithm parameterized by treewidth plus maximum degree plus the maximum length of a path in $\cal P$. Both positive results complement the hardness of PATH SET PACKING with respect to any subset of the parameters used in the FPT algorithms.
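As a point of reference for the problem definition, here is a brute-force sketch in Python (illustrative only, not an algorithm from the paper) that decides whether $k$ pairwise edge-disjoint paths exist in a given collection; its exponential dependence on $k$ is consistent with the hardness results above.

```python
# Brute-force decider for PATH SET PACKING: paths are vertex sequences.
from itertools import combinations

def path_edges(path):
    """Undirected edge set of a path given as a vertex sequence."""
    return {frozenset(e) for e in zip(path, path[1:])}

def has_k_disjoint_paths(paths, k):
    edge_sets = [path_edges(p) for p in paths]
    for combo in combinations(range(len(paths)), k):
        union, ok = set(), True
        for i in combo:
            if union & edge_sets[i]:   # shares an edge with a chosen path
                ok = False
                break
            union |= edge_sets[i]
        if ok:
            return True
    return False

# Example: two edge-disjoint paths exist, but not three.
paths = [[1, 2, 3], [3, 4, 5], [2, 3, 4]]
print(has_k_disjoint_paths(paths, 2))  # True
print(has_k_disjoint_paths(paths, 3))  # False
```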

Related content

Consider the following NP problem DOUBLE CLIQUE (abbr.: CLIQ$_{2}$): given a natural number $k>2$ and a pair of disjoint subgraphs of a fixed graph $G$, decide whether each subgraph in question contains a $k$-clique. I prove that CLIQ$_{2}$ cannot be solved in polynomial time by a deterministic TM, which implies $\mathbf{P}\neq \mathbf{NP}$. This proof upgrades the well-known partial result on the polynomial unsolvability of the analogous monotone problem CLIQUE (abbr.: CLIQ). However, problem CLIQ$_{2}$ is not monotone and appears more complex than just iterated CLIQ, as the required subgraphs are mutually dependent (cf. Remark 27 in the text).
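To pin down the problem definition, the following Python sketch (illustrative only, and no part of the claimed proof) is a brute-force decider for CLIQ$_{2}$.

```python
# Brute-force decider for DOUBLE CLIQUE: accept iff both disjoint vertex
# subsets of the fixed graph contain a k-clique. Exponential time.
from itertools import combinations

def has_k_clique(vertices, adj, k):
    return any(all(v in adj[u] for u, v in combinations(c, 2))
               for c in combinations(sorted(vertices), k))

def cliq2(adj, part_a, part_b, k):
    assert part_a.isdisjoint(part_b) and k > 2
    return has_k_clique(part_a, adj, k) and has_k_clique(part_b, adj, k)

# Two disjoint triangles inside a 6-vertex graph.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
print(cliq2(adj, {0, 1, 2}, {3, 4, 5}, 3))  # True
```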

This article considers linear approximation based on function evaluations in reproducing kernel Hilbert spaces of the Gaussian kernel and a more general class of weighted power series kernels on the interval $[-1, 1]$. We derive almost matching upper and lower bounds on the worst-case error, measured both in the uniform and $L^2([-1,1])$-norm, in these spaces. The results show that if the power series kernel expansion coefficients $\alpha_n^{-1}$ decay at least factorially, their rate of decay controls that of the worst-case error. Specifically, (i) the $n$th minimal error decays as $\alpha_n^{-1/2}$ up to a sub-exponential factor and (ii) for any $n$ sampling points in $[-1, 1]$ there exists a linear algorithm whose error is $\alpha_n^{-1/2}$ up to an exponential factor. For the Gaussian kernel the dominating factor in the bounds is $(n!)^{-1/2}$.
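As a numerical illustration of these quantities (standard kernel-interpolation facts, not code from the paper): the uniform worst-case error of optimal recovery from $n$ function values at points $X$ is the supremum over $[-1,1]$ of the power function $P(x) = \sqrt{k(x,x) - k(x,X)K^{-1}k(X,x)}$. The sketch below evaluates it for the Gaussian kernel at equispaced points; the bandwidth `eps` and point sets are illustrative choices.

```python
import numpy as np

def gauss_kernel(x, y, eps):
    return np.exp(-(eps * (x[:, None] - y[None, :])) ** 2)

def worst_case_error(X, grid, eps):
    K = gauss_kernel(X, X, eps)                  # naive solve below becomes
    kx = gauss_kernel(grid, X, eps)              # ill-conditioned for large n
    quad = np.einsum('ij,ij->i', kx, np.linalg.solve(K, kx.T).T)
    return np.sqrt(np.maximum(1.0 - quad, 0.0)).max()   # k(x,x) = 1 here

grid = np.linspace(-1, 1, 2001)
for n in (4, 8, 12):
    X = np.linspace(-1, 1, n)                    # equispaced sampling points
    print(n, worst_case_error(X, grid, eps=3.0)) # rapid decay in n
```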

In this paper, we study adaptive planewave discretizations for a cluster of eigenvalues of second-order elliptic partial differential equations. We first design an a posteriori error estimator and prove both upper and lower bounds for it. Based on the a posteriori error estimator, we propose an adaptive planewave method. We then prove that the adaptive planewave approximations converge at a linear rate with quasi-optimal complexity.
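The following Python sketch shows only the generic driver logic of such an adaptive loop, with hypothetical `solve_cluster` and `estimator` callbacks standing in for the paper's planewave eigensolver and a posteriori estimator; the basis-enrichment rule (raising an energy cutoff by a fixed factor) is an illustrative assumption.

```python
def adaptive_planewave(solve_cluster, estimator, cutoff0, tol, growth=1.2):
    """Generic SOLVE -> ESTIMATE -> ENRICH loop; callbacks are problem-specific."""
    cutoff = cutoff0
    while True:
        evals, evecs = solve_cluster(cutoff)      # eigensolve on current basis
        eta = estimator(cutoff, evals, evecs)     # a posteriori error estimate
        if eta <= tol:
            return cutoff, evals, evecs
        cutoff *= growth                          # enlarge the planewave basis

# Toy demo: a mock solver whose estimator decays like cutoff^-2.
print(adaptive_planewave(lambda c: ([1.0], None),
                         lambda c, ev, vec: c ** -2.0,
                         cutoff0=2.0, tol=1e-4)[0])
```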

Heterogeneity is a dominant factor in the behaviour of many biological processes. Despite this, it is common for mathematical and statistical analyses to ignore biological heterogeneity as a source of variability in experimental data. Therefore, methods for exploring the identifiability of models that explicitly incorporate heterogeneity through variability in model parameters are relatively underdeveloped. We develop a new likelihood-based framework, based on moment matching, for inference and identifiability analysis of differential equation models that capture biological heterogeneity through parameters that vary according to probability distributions. As our novel method is based on an approximate likelihood function, it is highly flexible; we demonstrate identifiability analysis using both a frequentist approach based on profile likelihood and a Bayesian approach based on Markov chain Monte Carlo. Through three case studies, we provide a didactic guide to inference and identifiability analysis of hyperparameters that relate to the statistical moments of model parameters, using independently observed data. Our approach has a computational cost comparable to analysis of models that neglect heterogeneity, a significant improvement over many existing alternatives. We demonstrate how analysis of random parameter models can aid better understanding of the sources of heterogeneity in biological data.
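A toy sketch of the moment-matching idea, under illustrative assumptions: a linear observation model with normally distributed parameters, so the induced moments are available in closed form. The paper's differential equation models are more involved, but the construction of an approximate Gaussian likelihood in the hyperparameters is analogous.

```python
# Parameters vary across individuals as theta ~ N(mu, sigma^2); with
# y = theta * t + noise, the induced moments are E[y] = mu * t and
# Var[y] = sigma^2 * t^2 + s_n^2, giving an approximate Gaussian likelihood
# in the hyperparameters (mu, sigma). All model choices are illustrative.
import numpy as np

rng = np.random.default_rng(1)
t, s_n = np.linspace(0.5, 3.0, 6), 0.1
mu_true, sg_true, n_ind = 2.0, 0.4, 200
theta = rng.normal(mu_true, sg_true, n_ind)
y = theta[:, None] * t[None, :] + rng.normal(0, s_n, (n_ind, len(t)))

def loglik(mu, sigma):
    mean = mu * t
    var = sigma ** 2 * t ** 2 + s_n ** 2
    return -0.5 * np.sum((y - mean) ** 2 / var + np.log(2 * np.pi * var))

# Profile likelihood for mu: maximize over sigma on a grid for each fixed mu.
sigmas = np.linspace(0.1, 1.0, 91)
for mu in (1.8, 2.0, 2.2):
    print(mu, max(loglik(mu, s) for s in sigmas))  # peaks near mu_true = 2.0
```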

We present a highly scalable strategy for developing mesh-free neuro-symbolic partial differential equation solvers from existing numerical discretizations found in scientific computing. This strategy is unique in that it can be used to efficiently train neural network surrogate models for the solution functions and the differential operators, while retaining the accuracy and convergence properties of state-of-the-art numerical solvers. This neural bootstrapping method is based on minimizing residuals of discretized differential systems on a set of random collocation points with respect to the trainable parameters of the neural network, achieving unprecedented resolution and optimal scaling for solving physical and biological systems.
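A minimal sketch of this residual-minimization idea, assuming a toy problem: a central-difference discretization of $u'' = f$ on $[0,1]$ with $u(0)=u(1)=0$ is reused as the training loss of a small PyTorch network, evaluated at random collocation points. Architecture, stencil, and hyperparameters are illustrative, not the paper's.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
h = 1e-2                                          # width of the reused stencil

for step in range(2000):
    x = h + (1 - 2 * h) * torch.rand(256, 1)      # random collocation points
    f = -torch.pi ** 2 * torch.sin(torch.pi * x)  # so the exact u is sin(pi x)
    lap = (net(x + h) - 2 * net(x) + net(x - h)) / h ** 2  # central difference
    bc = net(torch.tensor([[0.0], [1.0]]))        # boundary conditions
    loss = ((lap - f) ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                             # compare against sin(pi x)
    xt = torch.linspace(0, 1, 5).reshape(-1, 1)
    print(torch.cat([net(xt), torch.sin(torch.pi * xt)], dim=1))
```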

We consider optimization problems in which the goal is to find a $k$-dimensional subspace of $\mathbb{R}^n$, $k\ll n$, which minimizes a convex and smooth loss. Such problems generalize the fundamental task of principal component analysis (PCA) to include robust and sparse counterparts, and logistic PCA for binary data, among others. This problem can be approached either via nonconvex gradient methods, which have highly efficient iterations but for which fast convergence to a global minimizer is difficult to establish, or via a convex relaxation, for which convergence to a global minimizer is straightforward to argue but whose methods are often inefficient in high dimensions. In this work we bridge these two approaches under a strict complementarity assumption, which in particular implies that the optimal solution to the convex relaxation is unique and is also the optimal solution to the original nonconvex problem. Our main result is a proof that a natural nonconvex gradient method, which is \textit{SVD-free} and requires only a single QR-factorization of an $n\times k$ matrix per iteration, converges locally at a linear rate. We also establish linear convergence results for the nonconvex projected gradient method, and for the Frank-Wolfe method when applied to the convex relaxation.
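A sketch of an SVD-free gradient step of this kind for plain PCA (an illustrative instance, not the paper's general setting): one gradient step on an orthonormal subspace representative, followed by a single QR re-orthonormalization per iteration.

```python
# Minimize f(X) = -0.5 * tr(X^T A X) over orthonormal X in R^{n x k};
# the minimizer spans the top-k eigenspace of the PSD matrix A.
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
M = rng.standard_normal((n, n)); A = M @ M.T       # PSD "data covariance"
X, _ = np.linalg.qr(rng.standard_normal((n, k)))   # random orthonormal start
eta = 1.0 / np.linalg.norm(A, 2)                   # step size ~ 1/L

for _ in range(500):
    G = -A @ X                                     # gradient of f at X
    X, _ = np.linalg.qr(X - eta * G)               # one QR per iteration

top = np.sort(np.linalg.eigvalsh(A))[-k:]
print(np.trace(X.T @ A @ X), top.sum())            # should nearly match
```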

Spatially inhomogeneous functions, which may be smooth in some regions and rough in others, are modelled naturally in a Bayesian manner using so-called Besov priors, which are given by random wavelet expansions with Laplace-distributed coefficients. This paper studies theoretical guarantees for such prior measures; specifically, we examine their frequentist posterior contraction rates in the setting of non-linear inverse problems with Gaussian white noise. Our results are first derived under a general local Lipschitz assumption on the forward map. We then verify the assumption for two non-linear inverse problems arising from elliptic partial differential equations: the Darcy flow model from geophysics, as well as a model for the Schr\"odinger equation appearing in tomography. In the course of the proofs, we also obtain novel concentration inequalities for penalized least squares estimators with $\ell^1$ wavelet penalty, which have a natural interpretation as maximum a posteriori (MAP) estimators. The true parameter is assumed to belong to some spatially inhomogeneous Besov class $B^{\alpha}_{11}$, $\alpha>0$. In a setting with direct observations, we complement these upper bounds with a lower bound on the rate of contraction for arbitrary Gaussian priors. An immediate consequence of our results is that while Laplace priors can achieve minimax-optimal rates over $B^{\alpha}_{11}$-classes, Gaussian priors are limited to a contraction rate that is slower by a polynomial factor. This gives information-theoretic justification for the intuition that Laplace priors are more compatible with $\ell^1$ regularity structure in the underlying parameter.
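A sketch of a draw from such a prior, assuming Haar wavelets on $[0,1]$ and the standard Besov-prior scaling $2^{-j(\alpha + d/2 - d/p)}$ with $d = p = 1$; the wavelet family and truncation level are illustrative choices.

```python
import numpy as np

def haar(j, k, x):
    """Haar wavelet psi_{j,k}(x) = 2^{j/2} psi(2^j x - k)."""
    y = 2.0 ** j * x - k
    return 2.0 ** (j / 2) * (((0 <= y) & (y < 0.5)) * 1.0
                             - ((0.5 <= y) & (y < 1)) * 1.0)

def besov_draw(alpha, J, x, rng):
    u = rng.laplace() * np.ones_like(x)            # coarsest (scaling) level
    for j in range(J):
        xi = rng.laplace(size=2 ** j)              # i.i.d. Laplace coefficients
        scale = 2.0 ** (-j * (alpha + 0.5 - 1.0))  # alpha + d/2 - d/p, d = p = 1
        for k in range(2 ** j):
            u += scale * xi[k] * haar(j, k, x)
    return u

x = np.linspace(0, 1, 1024, endpoint=False)
u = besov_draw(alpha=1.0, J=8, x=x, rng=np.random.default_rng(0))
print(u[:5])  # a spatially inhomogeneous sample path
```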

Tyler's M-estimator is a well-known procedure for robust and heavy-tailed covariance estimation. Tyler himself suggested an iterative fixed-point algorithm for computing his estimator; however, it requires super-linear (in the size of the data) runtime per iteration, which may be prohibitive at large scale. In this work we propose, to the best of our knowledge, the first Frank-Wolfe-based algorithms for computing Tyler's estimator. One variant uses standard Frank-Wolfe steps, the second also considers \textit{away-steps} (AFW), and the third is a \textit{geodesic} version of AFW (GAFW). AFW provably requires, up to a log factor, only linear time per iteration, while GAFW runs in linear time (up to a log factor) in a large-$n$ (number of data points) regime. All three variants are shown to provably converge to the optimal solution at a sublinear rate, under standard assumptions, despite the fact that the underlying optimization problem is neither convex nor smooth. Under an additional fairly mild assumption, which holds with probability 1 when the (normalized) data points are i.i.d. samples from a continuous distribution supported on the entire unit sphere, AFW and GAFW are proved to converge at linear rates. Importantly, all three variants are parameter-free and use adaptive step-sizes.
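For contrast with the proposed Frank-Wolfe variants, here is a sketch of the classical fixed-point iteration mentioned above: each iteration touches all $n$ points at $O(np^2)$ cost, the super-linear per-iteration runtime the new algorithms aim to avoid. The data-generating model is illustrative.

```python
# Tyler's fixed-point iteration:
#   Sigma <- (p/n) * sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i),
# renormalized to unit trace (the estimator is defined up to scale).
import numpy as np

def tyler(X, iters=100):
    n, p = X.shape
    S = np.eye(p)
    for _ in range(iters):
        w = 1.0 / np.einsum('ij,ij->i', X, np.linalg.solve(S, X.T).T)
        S = (p / n) * (X * w[:, None]).T @ X
        S /= np.trace(S)
    return S

rng = np.random.default_rng(0)
p, n = 5, 2000
A = rng.standard_normal((p, p)); true_cov = A @ A.T
X = rng.multivariate_normal(np.zeros(p), true_cov, n)
X *= rng.standard_cauchy((n, 1))        # heavy-tailed per-sample scaling
S = tyler(X)                            # recovers the shape up to scale
print(np.linalg.norm(S / np.trace(S) - true_cov / np.trace(true_cov)))
```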

As an important piece of the multi-tier computing architecture for future wireless networks, over-the-air computation (OAC) enables efficient function computation in multiple-access edge computing, where a fusion center aims to compute a function of the data distributed at edge devices. Existing OAC relies exclusively on maximum likelihood (ML) estimation at the fusion center to recover the arithmetic sum of the transmitted signals from different devices. ML estimation, however, is highly susceptible to noise. In particular, in misaligned OAC, where there are channel misalignments among received signals, ML estimation suffers from severe error propagation and noise enhancement. To address these challenges, this paper puts forth a Bayesian approach by letting each edge device transmit two pieces of statistical information to the fusion center, such that Bayesian estimators can be devised to tackle the misalignments. Numerical and simulation results verify that: 1) for aligned and synchronous OAC, our linear minimum mean squared error (LMMSE) estimator significantly outperforms the ML estimator: in the low signal-to-noise ratio (SNR) regime, the LMMSE estimator reduces the mean squared error (MSE) by at least 6 dB, while in the high SNR regime it lowers the error floor of the MSE by 86.4%; 2) for asynchronous OAC, our LMMSE and sum-product maximum a posteriori (SP-MAP) estimators are on an equal footing in terms of MSE performance, and both are significantly better than the ML estimator. Moreover, the SP-MAP estimator is computationally efficient, with a complexity that grows linearly in the packet length.
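A toy sketch of the Bayesian idea for the aligned case, under an illustrative Gaussian model: each device reports the mean and variance of its data (the "two pieces of statistical information"), and the fusion center applies a linear MMSE estimator to the received sum instead of the ML estimate.

```python
# Fusion center observes y = s + w with s = sum_i x_i. ML gives s_hat = y;
# LMMSE gives s_hat = mu_s + var_s / (var_s + var_w) * (y - mu_s).
import numpy as np

rng = np.random.default_rng(0)
n_dev, var_w, trials = 10, 4.0, 100_000
mu_i, var_i = 1.0, 0.5                              # per-device statistics
mu_s, var_s = n_dev * mu_i, n_dev * var_i           # prior of the sum

x = rng.normal(mu_i, np.sqrt(var_i), (trials, n_dev))
s = x.sum(1)
y = s + rng.normal(0, np.sqrt(var_w), trials)       # noisy superposition

s_ml = y
s_lmmse = mu_s + var_s / (var_s + var_w) * (y - mu_s)
print("ML MSE:   ", np.mean((s_ml - s) ** 2))       # ~ var_w
print("LMMSE MSE:", np.mean((s_lmmse - s) ** 2))    # ~ var_s*var_w/(var_s+var_w)
```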

For the well-known Survivable Network Design Problem (SNDP) we are given an undirected graph $G$ with edge costs, a set $R$ of terminal vertices, and an integer demand $d_{s,t}$ for every terminal pair $s,t\in R$. The task is to compute a subgraph $H$ of $G$ of minimum cost, such that there are at least $d_{s,t}$ disjoint paths between $s$ and $t$ in $H$. If the paths are required to be edge-disjoint we obtain the edge-connectivity variant (EC-SNDP), while internally vertex-disjoint paths result in the vertex-connectivity variant (VC-SNDP). Another important case is the element-connectivity variant (LC-SNDP), where the paths are disjoint on edges and non-terminals. In this work we shed light on the parameterized complexity of the above problems. We consider several natural parameters, which include the solution size $\ell$, the sum of demands $D$, the number of terminals $k$, and the maximum demand $d_\max$. Using simple, elegant arguments, we prove the following results.
- We give a complete picture of the parameterized tractability of the three variants w.r.t. parameter $\ell$: both EC-SNDP and LC-SNDP are FPT, while VC-SNDP is W[1]-hard.
- We identify some special cases of VC-SNDP that are FPT:
  * when $d_\max\leq 3$ for parameter $\ell$,
  * on locally bounded treewidth graphs (e.g., planar graphs) for parameter $\ell$, and
  * on graphs of treewidth $tw$ for parameter $tw+D$.
- The well-known Directed Steiner Tree (DST) problem can be seen as single-source EC-SNDP with $d_\max=1$ on directed graphs, and is FPT parameterized by $k$ [Dreyfus & Wagner 1971]. We show that, in contrast, the 2-DST problem, where $d_\max=2$, is W[1]-hard, even when parameterized by $\ell$.
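As an illustration of the feasibility side of EC-SNDP (not an algorithm from the paper): by Menger's theorem, the number of edge-disjoint $s$-$t$ paths equals the $s$-$t$ max-flow with unit edge capacities, so a candidate subgraph $H$ can be checked against the demands with a max-flow routine, sketched below using networkx.

```python
import networkx as nx

def ec_feasible(H, demands):
    """demands maps a terminal pair (s, t) to its required edge-connectivity."""
    return all(nx.maximum_flow_value(H, s, t) >= d
               for (s, t), d in demands.items())

# Candidate subgraph with three edge-disjoint paths between 1 and 3.
H = nx.Graph()
H.add_edges_from([(1, 2), (2, 3), (1, 4), (4, 3), (1, 3)], capacity=1)
print(ec_feasible(H, {(1, 3): 3}))   # True
print(ec_feasible(H, {(1, 3): 4}))   # False: only three disjoint paths exist
```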
