
Recently, codes for correcting a burst of errors have attracted significant attention, in large part because bursts of errors occur in emerging techniques such as DNA storage. In this paper, we investigate a type of error, called a $(t,s)$-burst, which deletes $t$ consecutive symbols and inserts $s$ arbitrary symbols at the same coordinate. A $(t,s)$-burst error thus generalizes a burst of insertions ($t=0$), a burst of deletions ($s=0$), and a burst of substitutions ($t=s$). Our main contribution is explicit constructions of $q$-ary $(t,s)$-burst correcting codes with $\log n + O(1)$ bits of redundancy for any given non-negative integers $t$, $s$, and any $q \ge 2$. These codes have optimal redundancy up to an additive constant. Furthermore, we apply our $(t,s)$-burst correcting codes to combat various other types of errors and improve the corresponding results. In particular, one byproduct is a permutation code capable of correcting a burst of $t$ stable deletions with $\log n + O(1)$ bits of redundancy, which is likewise optimal up to an additive constant.
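
To make the error model concrete, here is a minimal Python sketch (the function name `apply_ts_burst` is ours, not the paper's) of how a $(t,s)$-burst acts on a word: $t$ consecutive symbols are deleted at one coordinate and $s$ arbitrary symbols are inserted in their place, so the length changes by $s - t$.

```python
import random

def apply_ts_burst(word, t, s, pos, alphabet="01"):
    """Apply a (t, s)-burst at coordinate `pos`: delete the t consecutive
    symbols word[pos:pos+t], then insert s arbitrary symbols there."""
    assert 0 <= pos <= len(word) - t
    inserted = "".join(random.choice(alphabet) for _ in range(s))
    return word[:pos] + inserted + word[pos + t:]

x = "0110100111"
print(apply_ts_burst(x, t=3, s=2, pos=4))  # length changes by s - t = -1
```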

Related Content

$\ell_1$ regularization is used to preserve edges or enforce sparsity in a solution to an inverse problem. We investigate the Split Bregman and Majorization-Minimization iterative methods, which turn this non-smooth minimization problem into a sequence of steps that include solving an $\ell_2$-regularized minimization problem. We consider selecting the regularization parameter in the inner generalized Tikhonov regularization problems that occur at each iteration of these $\ell_1$ iterative methods. The generalized cross validation and $\chi^2$ degrees of freedom methods are extended to these inner problems. In particular, for the $\chi^2$ method this includes extending the $\chi^2$ result to problems in which the regularization operator has more rows than columns, and showing how to use the $A$-weighted generalized inverse to estimate prior information at each inner iteration. Numerical experiments on image deblurring problems demonstrate that it is more effective to select the regularization parameter automatically within the iterative schemes than to keep it fixed for all iterations. Moreover, an appropriate regularization parameter can be estimated in the early iterations and then held fixed through convergence.
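
As a rough illustration of the inner parameter selection, here is a sketch of generalized cross validation for the standard-form Tikhonov subproblem $\min_x \|Ax-b\|^2 + \lambda^2\|x\|^2$; we simplify to regularization operator $L = I$, whereas the paper treats a general $L$, and the test problem below is our own toy example.

```python
import numpy as np

def gcv_tikhonov(A, b, lambdas):
    """Select lambda for min ||A x - b||^2 + lambda^2 ||x||^2 by minimizing
    the GCV function, evaluated cheaply via SVD filter factors."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    resid0 = np.sum((b - U @ beta) ** 2)       # part of b outside range(A)
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)             # Tikhonov filter factors
        resid = np.sum(((1 - f) * beta) ** 2) + resid0
        trace = len(b) - np.sum(f)             # trace(I - influence matrix)
        scores.append(resid / trace**2)
    return lambdas[int(np.argmin(scores))]

A = np.vander(np.linspace(0, 1, 50), 8)        # small ill-conditioned test problem
b = A @ np.ones(8) + 1e-3 * np.random.default_rng(0).standard_normal(50)
print(gcv_tikhonov(A, b, np.logspace(-8, 0, 50)))
```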

We present PSiLON Net, an MLP architecture that uses $L_1$ weight normalization for each weight vector and shares the length parameter across the layer. The 1-path-norm bounds the Lipschitz constant of a neural network and is indicative of its generalizability, and we show how PSiLON Net's design drastically simplifies the 1-path-norm while providing an inductive bias towards efficient learning and near-sparse parameters. We propose a pruning method to achieve exact sparsity in the final stages of training, if desired. To exploit the inductive bias of residual networks, we present a simplified residual block leveraging concatenated ReLU activations. For networks constructed with such blocks, we prove that considering only a subset of the possible paths in the 1-path-norm suffices to bound the Lipschitz constant. Using the 1-path-norm and this improved bound as regularizers, we conduct experiments in the small-data regime with overparameterized PSiLON Nets and PSiLON ResNets, demonstrating reliable optimization and strong performance.
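
A minimal PyTorch sketch of the layer design as we read it (class name, initialization, and details are our assumptions): each row of the weight matrix is normalized to unit $L_1$ norm and scaled by a single length parameter shared across the layer.

```python
import torch
import torch.nn as nn

class PSiLONLinear(nn.Module):
    """Sketch of one layer: rows of V are normalized to unit L1 norm and
    scaled by a single length parameter g shared across the layer."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.v = nn.Parameter(torch.randn(d_out, d_in) / d_in)
        self.g = nn.Parameter(torch.ones(1))   # shared length parameter
        self.b = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        w = self.g * self.v / self.v.abs().sum(dim=1, keepdim=True)
        return x @ w.T + self.b

layer = PSiLONLinear(16, 8)
print(layer(torch.randn(4, 16)).shape)  # torch.Size([4, 8])
```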

Accurate and dense mapping in large-scale environments is essential for various robot applications. Recently, implicit neural signed distance fields (SDFs) have shown promising advances for this task. However, most existing approaches employ projective distances from range data as SDF supervision, introducing approximation errors that degrade the mapping quality. To address this problem, we introduce N$^{3}$-Mapping, an implicit neural mapping system featuring normal-guided neural non-projective signed distance fields. Specifically, we sample points directly along the surface normal, rather than along the ray, to obtain more accurate non-projective distance values from range data. These distance values then serve as supervision for training the implicit map. For large-scale mapping, we apply a voxel-oriented sliding-window mechanism to alleviate the forgetting issue with a bounded memory footprint. In addition, given the uneven distribution of measured point clouds, we design a hierarchical sampling strategy to improve training efficiency. Experiments demonstrate that our method effectively mitigates SDF approximation errors and achieves state-of-the-art mapping quality compared to existing approaches.
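
A simplified sketch of the normal-guided sampling step (parameter names and the uniform offset distribution are our assumptions): supervision points are drawn along each unit surface normal, so the signed offset itself serves as a non-projective distance label.

```python
import numpy as np

def normal_guided_samples(points, normals, n_samples=4, max_offset=0.2):
    """For each surface point, draw samples along its (unit) normal;
    the signed offset along the normal is the non-projective SDF label."""
    offsets = np.random.uniform(-max_offset, max_offset,
                                size=(len(points), n_samples, 1))
    samples = points[:, None, :] + offsets * normals[:, None, :]
    labels = offsets.squeeze(-1)   # signed distance ~ offset along the normal
    return samples.reshape(-1, 3), labels.reshape(-1)

pts = np.array([[1.0, 0.0, 0.0]])
nrm = np.array([[0.0, 0.0, 1.0]])
samples, labels = normal_guided_samples(pts, nrm)
print(samples.shape, labels.shape)  # (4, 3) (4,)
```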

We show a fully dynamic algorithm for maintaining a $(1+\epsilon)$-approximation of the \emph{size} of the maximum matching of a graph with $n$ vertices and $m$ edges in $m^{0.5-\Omega_{\epsilon}(1)}$ update time. This is the first polynomial improvement over the long-standing $O(n)$ update time, which can be obtained trivially by periodic recomputation. Thus, we resolve the value version of a major open question in the dynamic graph algorithms literature (see, e.g., [Gupta and Peng FOCS'13], [Bernstein and Stein SODA'16], [Behnezhad and Khanna SODA'22]). Our key technical component is the first algorithm for $(1,\epsilon n)$-approximate maximum matching with sublinear running time on dense graphs. All previous algorithms suffered a multiplicative approximation factor of at least $1.499$ or assumed that the graph has a very small maximum degree.
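
For context, a sketch of the trivial periodic-recomputation baseline mentioned above (using networkx for the static recomputation; the rebuild schedule is our reading of the standard argument): since a single edge update changes the optimum by at most one, rebuilding after roughly $\epsilon/2$ times the cached size many updates keeps the cached value a $(1+\epsilon)$-approximation.

```python
import networkx as nx

class PeriodicMatchingSize:
    """Trivial dynamic baseline: cache the maximum matching size and
    recompute it from scratch once enough updates have accumulated."""
    def __init__(self, eps=0.1):
        self.G = nx.Graph()
        self.eps, self.cached, self.stale = eps, 0, 0

    def update(self, u, v, insert=True):
        if insert:
            self.G.add_edge(u, v)
        else:
            self.G.remove_edge(u, v)
        self.stale += 1
        # One update changes the optimum by at most 1, so the cached
        # value stays a (1 + eps)-approximation between rebuilds.
        if self.stale >= max(1, self.eps * self.cached / 2):
            self.cached = len(nx.max_weight_matching(self.G, maxcardinality=True))
            self.stale = 0
        return self.cached
```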

Implicational bases (IBs) are a common representation of finite closure systems and lattices, alongside meet-irreducible elements. They appear in a wide variety of fields, ranging from logic and databases to Knowledge Space Theory. Different IBs can represent the same closure system; therefore, several particular IBs have been studied, such as the canonical and canonical direct bases. In this paper, we investigate the $D$-base, a refinement of the canonical direct base connected with the $D$-relation, an essential tool in the study of free lattices. The $D$-base exhibits desirable algorithmic properties, and together with the $D$-relation it conveys essential properties of the underlying closure system. Hence, computing the $D$-base and the $D$-relation of a closure system from another representation is crucial to exploiting these benefits. However, complexity results for this task have been lacking. We give algorithms and hardness results for the computation of the $D$-base and $D$-relation. Specifically, we establish the $NP$-completeness of finding the $D$-relation from an arbitrary IB; we give an output-quasi-polynomial-time algorithm to compute the $D$-base from the meet-irreducible elements; and we obtain a polynomial-delay algorithm computing the $D$-base from an arbitrary IB. These results complete the picture of the complexity of identifying the $D$-base and $D$-relation of a closure system.
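
Since every representation here ultimately encodes a closure operator, a small sketch of computing closures by forward chaining over an IB (given as premise/conclusion set pairs; this is the textbook procedure, not an algorithm specific to the paper) may help fix ideas:

```python
def closure(X, base):
    """Closure of attribute set X under an implicational base, given as
    (premise, conclusion) set pairs, by naive forward chaining."""
    closed = set(X)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in base:
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion    # fire the implication
                changed = True
    return closed

base = [({"a"}, {"b"}), ({"b", "c"}, {"d"})]
print(closure({"a", "c"}, base))  # {'a', 'b', 'c', 'd'}
```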

We present polylogarithmic approximation algorithms for variants of the Shortest Path, Group Steiner Tree, and Group ATSP problems with vector costs. In these problems, each edge $e$ has a non-negative vector cost $c_e \in \mathbb{R}^{\ell}_{\ge 0}$. For a feasible solution, a path, subtree, or tour (respectively), we sum the vector costs of all its edges and then compute the $\ell_p$-norm of the resulting cost vector (we assume that $p \ge 1$ is an integer). Our algorithms for series-parallel graphs run in polynomial time, and those for arbitrary graphs run in quasi-polynomial time. To obtain our results, we introduce and use new flow-based Sum-of-Squares relaxations. We also obtain a number of hardness results.
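
A tiny NumPy sketch of the objective (names are ours): the cost of a solution is the $\ell_p$-norm of the componentwise sum of its edge cost vectors, which is not the same as summing per-edge norms.

```python
import numpy as np

def vector_path_cost(edge_costs, p):
    """Objective for vector-cost path problems: sum the edge cost
    vectors along the path, then take the l_p norm of the total."""
    total = np.sum(edge_costs, axis=0)   # componentwise sum over edges
    return np.linalg.norm(total, ord=p)

path = np.array([[1.0, 0.0, 2.0],
                 [0.5, 3.0, 0.0]])      # two edges, l = 3 cost coordinates
print(vector_path_cost(path, p=2))
```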

Practical Domain Adaptation (DA) tasks, e.g., Partial DA (PDA), open-set DA, universal DA, and test-time adaptation, have gained increasing attention in the machine learning community. In this paper, we propose a novel approach, dubbed Adversarial Reweighting with $\alpha$-Power Maximization (ARPM), for PDA, where the source domain contains private classes absent from the target domain. In ARPM, we propose a novel adversarial reweighting model that adversarially learns to reweight source-domain data, identifying source-private class samples and assigning smaller weights to them to mitigate potential negative transfer. Based on the adversarial reweighting, we train the transferable recognition model on the reweighted source distribution so that it can classify common-class data. To reduce the prediction uncertainty of the recognition model on the target domain, we present an $\alpha$-power maximization mechanism in ARPM, which enriches the family of losses for reducing prediction uncertainty in PDA. Extensive experimental results on five PDA benchmarks, i.e., Office-31, Office-Home, VisDA-2017, ImageNet-Caltech, and DomainNet, show that our method outperforms recent PDA methods. Ablation studies further confirm the effectiveness of each component of our approach. To analyze our method theoretically, we derive an upper bound on the target-domain expected error for PDA, which our approach approximately minimizes. We further extend ARPM to open-set DA, universal DA, and test-time adaptation, and verify its usefulness through experiments.
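
One plausible reading of the $\alpha$-power term as a loss (the exact form in the paper may differ): maximizing $\frac{1}{n}\sum_i \sum_k p_{ik}^{\alpha}$ with $\alpha > 1$ pushes predicted class distributions toward one-hot vectors, which reduces uncertainty; it is closely related to minimizing a Tsallis-style entropy.

```python
import torch

def alpha_power_loss(logits, alpha=2.0):
    """Sketch of alpha-power maximization: maximize sum_k p_k^alpha per
    sample (equivalently, minimize its negation), which sharpens the
    predicted class distribution on target-domain data."""
    probs = torch.softmax(logits, dim=1)
    return -(probs ** alpha).sum(dim=1).mean()

print(alpha_power_loss(torch.randn(8, 5)))
```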

An online decision-making problem is a learning problem in which a player repeatedly makes decisions in order to minimize long-term loss. Such problems, as they emerge in applications, often have nonlinear combinatorial objective functions, and developing algorithms for them has attracted considerable attention. An existing general framework for dealing with such objectives is online submodular minimization. However, practical problems often fall outside the scope of this framework, since the domain of a submodular function is limited to a subset of the unit hypercube. To overcome this limitation, in this paper we introduce online $\mathrm{L}^{\natural}$-convex minimization, where an $\mathrm{L}^{\natural}$-convex function generalizes a submodular function so that the domain is a subset of the integer lattice. We propose computationally efficient algorithms for online $\mathrm{L}^{\natural}$-convex function minimization in the two major settings: full information and bandit feedback. We analyze the regrets of these algorithms and show, in particular, that our algorithm for the full-information setting attains a regret bound that is tight up to a constant factor. We also present several motivating examples that illustrate the usefulness of online $\mathrm{L}^{\natural}$-convex minimization.
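
To make the function class concrete, here is a brute-force sketch checking the discrete midpoint convexity condition that characterizes $\mathrm{L}^{\natural}$-convex functions on a small integer box; the example function (a separable convex part plus the term $|x_1 - x_2|$) is a standard instance of the class, and the code is purely illustrative rather than from the paper.

```python
import itertools
import numpy as np

def is_l_natural_convex(f, bounds):
    """Brute-force check of discrete midpoint convexity,
    f(x) + f(y) >= f(ceil((x+y)/2)) + f(floor((x+y)/2)),
    on a small integer box given by per-coordinate (lo, hi) bounds."""
    grid = list(itertools.product(*[range(lo, hi + 1) for lo, hi in bounds]))
    for x in grid:
        for y in grid:
            x_, y_ = np.array(x), np.array(y)
            up = np.ceil((x_ + y_) / 2).astype(int)
            dn = np.floor((x_ + y_) / 2).astype(int)
            if f(x_) + f(y_) < f(up) + f(dn) - 1e-9:
                return False
    return True

# Separable convex terms plus a convex function of a difference:
f = lambda z: (z[0] - 1) ** 2 + (z[1] - 2) ** 2 + abs(z[0] - z[1])
print(is_l_natural_convex(f, [(0, 3), (0, 3)]))  # True
```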

When it comes to a personalized item recommendation system, it is essential to extract users' preferences and purchasing patterns. Assuming that users in the real world form clusters and that each cluster exhibits common favoritism, in this work we introduce the Co-Clustering Wrapper (CCW). We compute co-clusters of users and items with co-clustering algorithms and add a CF subnetwork for each cluster to extract the in-group favoritism. Combining the features from these networks, we obtain rich, unified information about the users. We experiment on real-world datasets, considering two aspects: finding the number of groups divided according to in-group preference, and measuring the resulting performance improvement.
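
A minimal sketch of the co-clustering step using scikit-learn's SpectralCoclustering on a toy user-item matrix (the paper's choice of co-clustering algorithm may differ); under the CCW idea, each resulting co-cluster would then get its own CF subnetwork.

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Toy binary user-item interaction matrix (60 users, 40 items); the small
# constant keeps rows/columns nonzero for the spectral normalization.
rng = np.random.default_rng(0)
R = rng.integers(0, 2, size=(60, 40)).astype(float) + 0.01

model = SpectralCoclustering(n_clusters=3, random_state=0)
model.fit(R)
print(model.row_labels_[:10])     # user group assignments
print(model.column_labels_[:10])  # item group assignments
```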

Click-through rate (CTR) prediction plays a critical role in recommender systems and online advertising. The data used in these applications are multi-field categorical data, where each feature belongs to one field. Field information has proved important, and several works incorporate fields into their models. In this paper, we propose a novel approach to model field information effectively and efficiently. The proposed approach is a direct improvement of FwFM, and is named Field-matrixed Factorization Machines (FmFM, or $FM^2$). We also propose a new interpretation of FM and FwFM within the FmFM framework, and compare it with FFM. Besides pruning cross terms, our model supports field-specific variable dimensions of embedding vectors, which acts as soft pruning. We also propose an efficient way to minimize the dimensions while preserving model performance. The FmFM model can be further optimized by caching the intermediate vectors, after which a prediction requires only thousands of floating-point operations (FLOPs). Our experimental results show that FmFM can outperform the more complex FFM, and its performance is comparable to that of DNN models requiring many more FLOPs at runtime.
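
A one-line view of the FmFM pairwise scoring rule (toy numbers and names are ours): the interaction between features $i$ and $j$ is $v_i^{\top} M_{F(i),F(j)} v_j$, where $M_{F(i),F(j)}$ is the learned matrix for the field pair; since this matrix need not be square, embedding dimensions can vary per field.

```python
import numpy as np

def fmfm_interaction(v_i, v_j, M):
    """FmFM pairwise term: v_i^T M v_j, where M is the learned matrix
    for the field pair (F(i), F(j))."""
    return v_i @ M @ v_j

v_i = np.array([0.2, -0.1, 0.4])   # embedding for feature i (dim 3)
v_j = np.array([0.5, 0.3])         # embedding for feature j (dim 2)
M = np.full((3, 2), 0.1)           # field-pair matrix maps dim 3 -> dim 2
print(fmfm_interaction(v_i, v_j, M))
```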
