
We consider the problem of reconstructing a binary string from the multiset of its substring compositions, referred to as the substring composition multiset, first introduced and studied by Acharya et al. We introduce a new algorithm for reconstructing a string from its substring composition multiset that relies on the algebraic properties of an equivalent bivariate polynomial formulation of the problem. We then characterize specific algebraic conditions on the binary string to be reconstructed that guarantee the algorithm requires no backtracking during reconstruction and, consequently, that the time complexity is polynomially bounded. More specifically, in the backtracking-free case, our algorithm runs in $O(n^2)$ time, compared to $O(n^2\log(n))$ for the algorithm by Acharya et al., where $n$ is the length of the binary string. Furthermore, we show that larger sets of binary strings are uniquely reconstructable by the new algorithm without the need for backtracking, leading to codebooks of reconstruction codes that are larger, by a linear factor in size, than the previously known construction by Pattabiraman et al., while retaining $O(n^2)$ reconstruction complexity.
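To make the input of the problem concrete, here is a small Python sketch (not from the paper) that computes the substring composition multiset of a binary string. It also illustrates why reconstruction is only ever possible up to reversal: a string and its reversal have identical multisets.

```python
from collections import Counter

def composition_multiset(s: str) -> Counter:
    """Return the multiset of substring compositions of a binary string.

    The composition of a substring records only how many 0s and 1s it
    contains, not their order; the multiset collects this over all
    contiguous substrings of s.
    """
    multiset = Counter()
    for i in range(len(s)):
        zeros = ones = 0
        for j in range(i, len(s)):
            if s[j] == '0':
                zeros += 1
            else:
                ones += 1
            multiset[(zeros, ones)] += 1
    return multiset

# A string and its reversal share the same multiset, so reconstruction
# is understood up to reversal.
assert composition_multiset('011') == composition_multiset('110')
```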

Related Content

Given data on the choices made by consumers for different offer sets, a key challenge is to develop parsimonious models that describe and predict consumer choice behavior while being amenable to prescriptive tasks such as pricing and assortment optimization. The marginal distribution model (MDM) is one such model, which requires only the specification of marginal distributions of the random utilities. This paper aims to establish necessary and sufficient conditions for given choice data to be consistent with the MDM hypothesis, inspired by the utility of similar characterizations for the random utility model (RUM). This endeavor leads to an exact characterization of the set of choice probabilities that the MDM can represent. Verifying the consistency of choice data with this characterization is equivalent to solving a polynomial-sized linear program. Since the analogous verification task for RUM is computationally intractable and neither of these models subsumes the other, the MDM strikes a balance between tractability and representational power. The characterization lends itself to use with robust optimization for making data-driven sales and revenue predictions for new, unseen assortments. When the choice data lacks consistency with the MDM hypothesis, finding the best-fitting MDM choice probabilities reduces to solving a mixed-integer convex program. The results extend naturally to the case where the alternatives can be grouped based on the similarity of the marginal distributions of the utilities. Numerical experiments show that the MDM provides better representational power and prediction accuracy than the multinomial logit model and significantly better computational performance than the RUM.
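This is not the paper's characterization, but it may help to see the MDM itself in action. Under the standard threshold form of the model, alternative $i$ in offer set $S$ is chosen with probability $1 - F_i(\lambda_S)$, where the common threshold $\lambda_S$ is calibrated so the probabilities sum to one. A minimal sketch with (purely illustrative) Gaussian marginals:

```python
from scipy.stats import norm

def mdm_choice_probabilities(marginals, tol=1e-10):
    """Choice probabilities under the marginal distribution model (MDM).

    Each alternative i is chosen with probability 1 - F_i(lam), where the
    common threshold lam is found by bisection so that the probabilities
    over the offer set sum to one. `marginals` is a list of frozen
    scipy.stats distributions, one per alternative in the offer set.
    """
    def total(lam):
        return sum(1.0 - F.cdf(lam) for F in marginals)

    # Bracket the threshold: total() is non-increasing in lam.
    lo, hi = -1.0, 1.0
    while total(lo) < 1.0:
        lo *= 2.0
    while total(hi) > 1.0:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [1.0 - F.cdf(lam) for F in marginals]

# Offer set with three alternatives and illustrative Gaussian marginals.
probs = mdm_choice_probabilities([norm(1.0, 1.0), norm(0.5, 1.0), norm(0.0, 1.0)])
print(probs, sum(probs))  # the probabilities sum to ~1
```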

We investigate pseudopolynomial-time algorithms for Bounded Knapsack and Bounded Subset Sum. Recent years have seen a growing interest in settling their fine-grained complexity with respect to various parameters. For Bounded Knapsack, the number of items $n$ and the maximum item weight $w_{\max}$ are two of the most natural parameters that have been studied extensively in the literature. The previous best running time in terms of $n$ and $w_{\max}$ is $O(n + w^3_{\max})$ [Polak, Rohwedder, Wegrzycki '21]. There is a conditional lower bound of $(n + w_{\max})^{2-o(1)}$ based on the $(\min,+)$-convolution hypothesis [Cygan, Mucha, Wegrzycki, Wlodarczyk '17]. We narrow the gap significantly by proposing an $\tilde{O}(n + w^{12/5}_{\max})$-time algorithm. Note that in the regime where $w_{\max} \approx n$, our algorithm runs in $\tilde{O}(n^{12/5})$ time, while all the previous algorithms require $\Omega(n^3)$ time in the worst case. For Bounded Subset Sum, we give two algorithms running in $\tilde{O}(nw_{\max})$ and $\tilde{O}(n + w^{3/2}_{\max})$ time, respectively. These results match the currently best running times for 0-1 Subset Sum. Prior to our work, the best running times (in terms of $n$ and $w_{\max}$) for Bounded Subset Sum were $\tilde{O}(n + w^{5/3}_{\max})$ [Polak, Rohwedder, Wegrzycki '21] and $\tilde{O}(n + \mu_{\max}^{1/2}w_{\max}^{3/2})$ [implied by Bringmann '19 and Bringmann, Wellnitz '21], where $\mu_{\max}$ refers to the maximum multiplicity of item weights.
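For contrast with these fine-grained bounds, here is a minimal baseline: the classical bitset dynamic program for Bounded Subset Sum, with multiplicities handled by binary splitting. This is the textbook approach, not any of the algorithms discussed above.

```python
def bounded_subset_sum(items, target):
    """Decide Bounded Subset Sum with the classical bitset DP.

    `items` is a list of (weight, multiplicity) pairs. Multiplicities are
    reduced to O(log mu) grouped copies via binary splitting, then the set
    of reachable sums up to `target` is tracked in one Python integer used
    as a bitset.
    """
    reachable = 1                      # bit i set <=> sum i is reachable
    mask = (1 << (target + 1)) - 1     # trims sums that overshoot target
    for weight, mult in items:
        chunk = 1
        while mult > 0:                # copies of size 1, 2, 4, ... cover 0..mult
            take = min(chunk, mult)
            reachable |= (reachable << (take * weight)) & mask
            mult -= take
            chunk <<= 1
    return bool(reachable >> target & 1)

# Weights 3 (multiplicity 2) and 5 (multiplicity 1):
# 11 = 3 + 3 + 5 is reachable, 10 is not.
assert bounded_subset_sum([(3, 2), (5, 1)], 11) is True
assert bounded_subset_sum([(3, 2), (5, 1)], 10) is False
```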

Accurate reconstruction of both the geometric and topological details of a 3D object from a single 2D image embodies a fundamental challenge in computer vision. Existing explicit/implicit solutions to this problem struggle to recover self-occluded geometry and/or faithfully reconstruct topological shape structures. To resolve this dilemma, we introduce LIST, a novel neural architecture that leverages local and global image features to accurately reconstruct the geometric and topological structure of a 3D object from a single image. We utilize global 2D features to predict a coarse shape of the target object and then use it as a base for higher-resolution reconstruction. By leveraging both local 2D features from the image and 3D features from the coarse prediction, we can accurately predict the signed distance between an arbitrary point and the target surface via an implicit predictor. Furthermore, our model does not require camera estimation or pixel alignment, and its reconstruction is not biased toward the input-view direction. Through qualitative and quantitative analysis, we show the superiority of our model in reconstructing 3D objects from both synthetic and real-world images against the state of the art.
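A schematic, purely hypothetical PyTorch sketch of the kind of implicit predictor the abstract describes: a query point is concatenated with a local 2D feature sampled from the image and a 3D feature from the coarse prediction, then mapped to a signed distance. All names, layer sizes, and the architecture itself are invented for illustration; this is not the LIST architecture.

```python
import torch
import torch.nn as nn

class ImplicitSDFPredictor(nn.Module):
    """Toy implicit predictor: (point, local 2D feat, coarse 3D feat) -> SDF."""

    def __init__(self, feat2d_dim=64, feat3d_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat2d_dim + feat3d_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # signed distance to the target surface
        )

    def forward(self, points, local_feat2d, coarse_feat3d):
        # points: (B, N, 3); both feature tensors are assumed to have been
        # sampled per query point upstream.
        x = torch.cat([points, local_feat2d, coarse_feat3d], dim=-1)
        return self.mlp(x).squeeze(-1)  # (B, N) signed distances

# Smoke test with random stand-ins for the image/coarse-shape features.
model = ImplicitSDFPredictor()
sdf = model(torch.randn(2, 100, 3), torch.randn(2, 100, 64), torch.randn(2, 100, 32))
print(sdf.shape)  # torch.Size([2, 100])
```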

The Independent Cutset problem asks whether there is a set of vertices in a given graph that is both independent and a cutset. This problem is $\textsf{NP}$-complete even when the input graph is planar and has maximum degree five. In this paper, we first present an $\mathcal{O}^*(1.4423^{n})$-time algorithm for the problem. We also show how to compute a minimum independent cutset (if any exists) within the same running time. Since the property of having an independent cutset is MSO$_1$-expressible, our main results concern structural parameterizations of the problem with parameters that are not bounded by a function of the clique-width of the input. We present $\textsf{FPT}$-time algorithms for the problem under the following parameters: the dual of the maximum degree, the dual of the solution size, the size of a dominating set (where a dominating set is given as an additional input), the size of an odd cycle transversal, the distance to chordal graphs, and the distance to $P_5$-free graphs. We close by introducing the notion of $\alpha$-domination, which allows us to identify further fixed-parameter tractable and polynomial-time solvable cases.
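For concreteness, a brute-force Python check of the property itself (it illustrates the definition only, not the paper's algorithms): a vertex set is an independent cutset if it induces no edge and its removal disconnects the graph.

```python
from itertools import combinations

def is_independent_cutset(adj, S):
    """Check whether S is an independent set whose removal disconnects adj.

    `adj` maps each vertex to the set of its neighbours.
    """
    S = set(S)
    if any(v in adj[u] for u, v in combinations(S, 2)):
        return False                 # not independent
    rest = set(adj) - S
    if len(rest) < 2:
        return False                 # removing S cannot disconnect the rest
    # DFS in the graph induced on the remaining vertices.
    start = next(iter(rest))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v in rest and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen != rest              # S is a cutset iff the rest is disconnected

# A path a-b-c: the middle vertex {b} is an independent cutset, {a} is not.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
assert is_independent_cutset(adj, {'b'})
assert not is_independent_cutset(adj, {'a'})
```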

The standard photometric stereo model makes several assumptions that are rarely verified in experimental datasets. In particular, the observed object should behave as a Lambertian reflector, and the light sources should be positioned at an infinite distance from it, along a known direction. Even when Lambert's law is approximately fulfilled, an accurate assessment of the relative position between the light source and the target is often unavailable in real situations. The Hayakawa procedure is a computational method for estimating such information directly from the image data. It occasionally breaks down when some of the available images deviate too much from ideality. Indeed, in narrow shooting scenarios, typical, e.g., of archaeological excavation sites, it may be impossible to position a flashlight at a sufficient distance from the observed surface. It is then necessary to understand whether a given dataset is reliable and which images should be selected to better reconstruct the target. In this paper, we propose some algorithms to perform this task and explore their effectiveness.
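To make the underlying model concrete: under Lambert's law with distant point lights, the stacked image matrix factors as a rank-3 product, and Hayakawa-style self-calibration starts from its SVD. The numpy sketch below shows only that factorization step (resolving the $3\times 3$ ambiguity is deliberately omitted); a large rank-3 residual is one natural signal, of the kind a selection algorithm could exploit, that some images deviate from ideality.

```python
import numpy as np

def rank3_factorization(I):
    """Rank-3 factorization step underlying Hayakawa-style self-calibration.

    I is an (images x pixels) matrix of Lambertian intensities, ideally
    I = L @ S with L the (images x 3) scaled light directions and S the
    (3 x pixels) scaled normals. SVD recovers the factors only up to an
    unknown invertible 3x3 matrix A: (L @ A, inv(A) @ S) fits equally well;
    resolving A needs extra constraints, which this sketch omits.
    """
    U, s, Vt = np.linalg.svd(I, full_matrices=False)
    L_hat = U[:, :3] * np.sqrt(s[:3])          # candidate light matrix
    S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]   # candidate surface matrix
    residual = np.linalg.norm(I - L_hat @ S_hat)
    return L_hat, S_hat, residual

# Synthetic check: data built from 6 lights and 500 pixels is exactly
# rank 3, so the residual vanishes; model violations would inflate it.
rng = np.random.default_rng(0)
L, S = rng.normal(size=(6, 3)), rng.normal(size=(3, 500))
_, _, res = rank3_factorization(L @ S)
print(res)  # ~1e-12
```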

In this paper, we study the weighted $k$-server problem on the uniform metric in both the offline and online settings. We start with the offline setting. In contrast to the (unweighted) $k$-server problem, which has a polynomial-time solution using min-cost flows, there are strong computational lower bounds for the weighted $k$-server problem, even on the uniform metric. Specifically, we show that assuming the unique games conjecture, there are no polynomial-time algorithms with a sub-polynomial approximation factor, even if we use $c$-resource augmentation for $c < 2$. Furthermore, if we consider the natural LP relaxation of the problem, then obtaining a bounded integrality gap requires a resource augmentation of at least $\ell$, where $\ell$ is the number of distinct server weights. We complement these results by obtaining a constant-approximation algorithm via LP rounding, with a resource augmentation of $(2+\epsilon)\ell$ for any constant $\epsilon > 0$. In the online setting, an $\exp(k)$ lower bound is known for the competitive ratio of any randomized algorithm for the weighted $k$-server problem on the uniform metric. In contrast, we show that $2\ell$-resource augmentation can bring the competitive ratio down by an exponential factor to only $O(\ell^2 \log \ell)$. Our online algorithm uses the two-stage approach of first obtaining a fractional solution using the online primal-dual framework, and then rounding it online.
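To fix the problem definition, here is a tiny exact offline solver: a plain exponential-state dynamic program, unrelated to the LP-based algorithms above and feasible for tiny instances only.

```python
from functools import lru_cache

def offline_weighted_kserver(weights, requests, start):
    """Exact offline optimum for weighted k-server on a uniform metric.

    Moving server j between distinct points costs weights[j]. `requests`
    is the request sequence over the points and `start` the initial
    server positions.
    """
    k = len(weights)

    @lru_cache(maxsize=None)
    def solve(t, positions):
        if t == len(requests):
            return 0.0
        r = requests[t]
        if r in positions:           # request already covered: no move needed
            return solve(t + 1, positions)
        return min(                  # otherwise move some server j onto it
            weights[j] + solve(t + 1, positions[:j] + (r,) + positions[j + 1:])
            for j in range(k)
        )

    return solve(0, tuple(start))

# One heavy and one light server, both starting at point 0: alternating
# requests at points 1 and 2 are served optimally by shuttling the light
# server back and forth, never moving the heavy one.
print(offline_weighted_kserver([10.0, 1.0], [1, 2, 1, 2], (0, 0)))  # 4.0
```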

We consider a class of problems in Discrete Tomography that has been deeply investigated in the past: the reconstruction of convex lattice sets from their horizontal and/or vertical X-rays, i.e., from the number of points on each line in a sequence of consecutive horizontal and vertical lines. The reconstruction of HV-convex polyominoes usually proceeds in two steps: first a filling step, consisting of filling operations, and second the convex aggregation of the switching components. We prove three results about the convex aggregation step: (1) The convex aggregation step used for the reconstruction of HV-convex polyominoes does not always provide a solution. The example establishing this result, called \textit{the bad guy}, disproves a conjecture of the domain. (2) The reconstruction of a digital convex lattice set from only one X-ray can be performed in polynomial time. We prove it by encoding the convex aggregation problem in a directed acyclic graph. (3) With the same strategy, we prove that the reconstruction of fat digital convex sets from their horizontal and vertical X-rays can be solved in polynomial time. Fatness is a property of digital convex sets concerning the relative positions of the leftmost, rightmost, top, and bottom points of the set. The complexity of reconstructing lattice sets that are not fat remains an open question.
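For reference, the X-ray data these reconstruction problems start from is elementary to compute; a small Python sketch:

```python
from collections import Counter

def xrays(points):
    """Horizontal and vertical X-rays of a finite lattice set.

    The horizontal X-ray counts points on each row y, the vertical X-ray
    counts points on each column x -- exactly the data from which the
    reconstruction problems above start.
    """
    H = Counter(y for _, y in points)
    V = Counter(x for x, _ in points)
    return H, V

# A small HV-convex polyomino: a 2x2 square plus one cell on the right.
S = {(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)}
H, V = xrays(S)
print(dict(sorted(H.items())))  # {0: 2, 1: 3}: points per row y
print(dict(sorted(V.items())))  # {0: 2, 1: 2, 2: 1}: points per column x
```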

For any particular class of graphs, algorithms for computational problems restricted to the class often rely on structural properties that depend on the specific problem at hand. This raises the question of whether a large set of such results can be explained by some common problem conditions. We propose such conditions for $\mathcal{H}$-subgraph-free graphs. For a set of graphs $\mathcal{H}$, a graph $G$ is $\mathcal{H}$-subgraph-free if $G$ does not contain any graph from $\mathcal{H}$ as a subgraph. Our conditions are easy to state. A graph problem must be efficiently solvable on graphs of bounded treewidth, computationally hard on subcubic graphs, and computational hardness must be preserved under edge subdivision of subcubic graphs. Our meta-classification says that if a graph problem satisfies all three conditions, then for every finite set $\mathcal{H}$, it is ``efficiently solvable'' on $\mathcal{H}$-subgraph-free graphs if $\mathcal{H}$ contains a disjoint union of one or more paths and subdivided claws, and is ``computationally hard'' otherwise. We illustrate the broad applicability of our meta-classification by obtaining a dichotomy between polynomial-time solvability and NP-completeness for many well-known partitioning, covering and packing problems, network design problems, and width parameter problems. For other problems, we obtain a dichotomy between almost-linear-time solvability and having no subquadratic-time algorithm (conditional on some hardness hypotheses). The proposed framework thus gives a simple pathway to determine the complexity of graph problems on $\mathcal{H}$-subgraph-free graphs. This is further confirmed by the fact that, along the way, we uncover and resolve several open questions from the literature.
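To make the central definition concrete, here is a brute-force Python check that a graph $G$ is $H$-subgraph-free for a single graph $H$ (apply it to every member of $\mathcal{H}$). It is exponential and purely illustrative.

```python
from itertools import permutations

def is_subgraph_free(G, H):
    """Check that G contains no copy of H as a (not necessarily induced)
    subgraph, by brute force over injections of V(H) into V(G).

    Graphs are pairs (vertex set, set of frozenset edges).
    """
    VG, EG = G
    VH, EH = H
    for image in permutations(VG, len(VH)):
        phi = dict(zip(VH, image))
        if all(frozenset((phi[u], phi[v])) in EG for u, v in EH):
            return False     # found a copy of H inside G
    return True

# The 4-cycle contains P3 (a path on 3 vertices) but no triangle (K3).
C4 = ({0, 1, 2, 3}, {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]})
P3 = ({0, 1, 2}, {frozenset((0, 1)), frozenset((1, 2))})
K3 = ({0, 1, 2}, {frozenset(e) for e in [(0, 1), (1, 2), (0, 2)]})
assert not is_subgraph_free(C4, P3)  # C4 is not P3-subgraph-free
assert is_subgraph_free(C4, K3)      # C4 is K3-subgraph-free
```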

Imaging problems such as those in nanoCT require the solution of an inverse problem, where it is often taken for granted that the forward operator, i.e., the underlying physical model, is known exactly. In the present work, we address the setting where the forward model is inexact due to stochastic or deterministic deviations during the measurement process. We investigate, in particular, the performance of non-learned iterative reconstruction methods that account for this inexactness, as well as learned reconstruction schemes based on U-Nets and conditional invertible neural networks; the latter also provide the opportunity for uncertainty quantification. A large synthetic dataset in line with a typical nanoCT setting is provided, and extensive numerical experiments are conducted to evaluate the proposed methods.
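As a generic example of a non-learned iterative method confronted with an inexact operator (not necessarily one of the methods evaluated in the paper), the following numpy sketch runs plain Landweber iteration where the data were generated by a true operator but the reconstruction is forced to use a perturbed one:

```python
import numpy as np

def landweber(A, y, steps, omega=None):
    """Plain Landweber iteration: x_{k+1} = x_k + omega * A.T @ (y - A @ x_k).

    A is the (possibly inexact) forward operator the reconstruction must
    use; the step size must satisfy 0 < omega < 2 / ||A||^2.
    """
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x += omega * A.T @ (y - A @ x)
    return x

# Toy study: data from the true operator, reconstruction with a perturbed
# one, mimicking deterministic model deviations during measurement.
rng = np.random.default_rng(1)
A_true = rng.normal(size=(80, 40))
A_used = A_true + 0.05 * rng.normal(size=A_true.shape)
x_true = rng.normal(size=40)
y = A_true @ x_true
for A in (A_true, A_used):
    x_rec = landweber(A, y, steps=2000)
    print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```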

We study the problems of sequential nonparametric two-sample and independence testing. Sequential tests process data online and allow using observed data to decide whether to stop and reject the null hypothesis or to collect more data, while maintaining type I error control. We build upon the principle of (nonparametric) testing by betting, where a gambler places bets on future observations and their wealth measures evidence against the null hypothesis. While recently developed kernel-based betting strategies often work well on simple distributions, selecting a suitable kernel for high-dimensional or structured data, such as images, is often nontrivial. To address this drawback, we design prediction-based betting strategies that rely on the following fact: if a sequentially updated predictor starts to consistently determine (a) which distribution an instance is drawn from, or (b) whether an instance is drawn from the joint distribution or from the product of the marginal distributions (the latter produced by external randomization), it provides evidence against the two-sample or independence null hypotheses, respectively. We empirically demonstrate the superiority of our tests over kernel-based approaches in structured settings. Our tests can be applied beyond the case of independent and identically distributed data, remaining valid and powerful even when the data distribution drifts over time.
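A deliberately simplified sketch of the betting principle with a fixed bet size $\lambda$ (actual strategies size their bets adaptively and derive the outcomes from the sequentially updated predictor described above):

```python
import numpy as np

def betting_two_sample_test(outcomes, threshold=20.0, lam=0.5):
    """Toy wealth process for testing by betting.

    `outcomes` is a sequence of +1/-1: +1 when the predictor correctly
    identifies which sample an instance came from, -1 otherwise. Under the
    two-sample null the predictor cannot beat chance, so the wealth
    W_t = prod(1 + lam * s_t) is a nonnegative martingale, and stopping
    when W_t >= 1/alpha controls the type I error at level alpha by
    Ville's inequality (threshold 20 corresponds to alpha = 0.05).
    """
    wealth = 1.0
    for t, s in enumerate(outcomes, 1):
        wealth *= 1.0 + lam * s
        if wealth >= threshold:
            return t, wealth     # reject the null at this step
    return None, wealth          # never rejected

# Under the alternative the predictor is right ~80% of the time: the
# wealth grows exponentially and the test stops after a few steps.
rng = np.random.default_rng(2)
outcomes = rng.choice([1, -1], size=1000, p=[0.8, 0.2])
print(betting_two_sample_test(outcomes))
```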
