
A necklace is an equivalence class of words of length $n$ over an alphabet under the cyclic shift (rotation) operation. Necklaces are classical objects, and many algorithmic results exist for key operations on them, including counting, generating, ranking, and unranking. This paper generalises the concept of necklaces to the multidimensional setting. We define multidimensional necklaces as equivalence classes of multidimensional words under the multidimensional cyclic shift operation. Alongside this definition, we generalise several problems from the one-dimensional setting to multidimensional necklaces of size $(n_1, n_2, \ldots, n_d)$ over an alphabet of size $q$, including: closed-form equations for counting the number of necklaces; an $O(n_1 \cdot n_2 \cdot \ldots \cdot n_d)$ time algorithm for transforming a necklace $w$ into the next necklace in the ordering; an $O((n_1 \cdot n_2 \cdot \ldots \cdot n_d)^5)$ time algorithm for ranking necklaces (determining the number of necklaces smaller than $w$ in the set of necklaces); and an $O((n_1 \cdot n_2 \cdot \ldots \cdot n_d)^{6(d + 1)} \cdot \log^d(q))$ time algorithm for unranking multidimensional necklaces (determining the $i^{th}$ necklace in the set of necklaces). Our results on counting, ranking, and unranking are further extended to the fixed-content setting, where every necklace has the same Parikh vector; in other words, every necklace shares the same number of occurrences of each symbol. Finally, we study the $k$-centre problem for necklaces in both the one-dimensional and multidimensional settings, and we provide strong approximation algorithms for solving it in both settings.
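To make the counting result concrete in the classical one-dimensional case that the paper generalises: the number of necklaces of length $n$ over a $q$-letter alphabet is $\frac{1}{n}\sum_{d \mid n}\varphi(d)\, q^{n/d}$. Below is a minimal Python sketch of this standard formula; the multidimensional closed forms in the paper are more involved.

```python
def euler_phi(d):
    # Euler's totient function by trial division.
    result, m = d, d
    p = 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def necklace_count(n, q):
    # Classical 1-D formula: (1/n) * sum over divisors d of n of phi(d) * q^(n/d).
    return sum(euler_phi(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

print(necklace_count(4, 2))  # 6 binary necklaces of length 4
```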

Related Content

Can we sense our location in an unfamiliar environment by taking a sublinear-size sample of our surroundings? Can we efficiently encrypt a message that only someone physically close to us can decrypt? To solve problems of this kind, we introduce and study a new type of hash function for finding shifts in sublinear time. A function $h:\{0,1\}^n\to \mathbb{Z}_n$ is a $(d,\delta)$ {\em locality-preserving hash function for shifts} (LPHS) if: (1) $h$ can be computed by (adaptively) querying $d$ bits of its input, and (2) $\Pr [ h(x) \neq h(x \ll 1) + 1 ] \leq \delta$, where $x$ is random and $\ll 1$ denotes a cyclic shift by one bit to the left. We make the following contributions. * Near-optimal LPHS via Distributed Discrete Log: We establish a general two-way connection between LPHS and algorithms for distributed discrete logarithm in the generic group model. Using such an algorithm of Dinur et al. (Crypto 2018), we get LPHS with near-optimal error of $\delta=\tilde O(1/d^2)$. This gives an unusual example of the usefulness of group-based cryptography in a post-quantum world. We extend the positive result to non-cyclic and worst-case variants of LPHS. * Multidimensional LPHS: We obtain positive and negative results for a multidimensional extension of LPHS, making progress towards an optimal 2-dimensional LPHS. * Applications: We demonstrate the usefulness of LPHS by presenting cryptographic and algorithmic applications. In particular, we apply multidimensional LPHS to obtain an efficient "packed" implementation of homomorphic secret sharing and a sublinear-time implementation of location-sensitive encryption whose decryption requires a significantly overlapping view.
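As a toy illustration of the shift-equivariance condition (not the paper's sublinear-query construction), the sketch below uses a hash that returns the cyclic position of the first occurrence of a fixed marker pattern: shifting the input left by one bit decreases that position by one, so $h(x) = h(x \ll 1) + 1 \pmod n$ except when the marker is absent or its first occurrence wraps past position 0. The marker and parameters are arbitrary choices for the demo, and this $h$ reads all $n$ bits rather than only $d$ of them.

```python
import random

N = 256
MARKER = "1011"  # short enough that it almost surely occurs in a random string

def h(x):
    # Toy "hash": cyclic position of the first occurrence of MARKER in x.
    # (Reads all N bits, so it ignores the sublinear-query aspect of LPHS.)
    pos = (x + x).find(MARKER)   # x + x covers wrap-around occurrences
    return pos % N if pos != -1 else 0

def shift_left(x):
    return x[1:] + x[0]          # cyclic shift by one bit to the left

fails, trials = 0, 10000
for _ in range(trials):
    x = "".join(random.choice("01") for _ in range(N))
    # Failures happen when the marker is absent or its first occurrence is at
    # position 0 and wraps, so delta is roughly 2 ** -len(MARKER).
    if h(x) != (h(shift_left(x)) + 1) % N:
        fails += 1
print(f"empirical delta ~ {fails / trials:.4f}")
```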

Sensor-embedded glove systems have been reported to require careful, time-consuming, and precise calibration on a per-user basis in order to obtain consistent, usable data. We have developed a low-cost, flex-sensor-based smart glove system that may be resilient to the common limitations of data gloves. This system utilizes an Arduino-based microcontroller as well as a single flex sensor on each finger. Feedback from the Arduino's analog-to-digital converter can be used to infer an object's dimensional properties, since the reactions of each individual finger differ with respect to the size and shape of a grasped object. In this work, we report its use in statistically differentiating stationary objects of spherical and cylindrical shapes of varying radii, regardless of the variations introduced by the glove's users. Using our sensor-embedded glove system, we explored the practicability of object classification based on the tactile sensor responses from each finger of the smart glove. An estimated standard error of the mean was calculated from each of the five fingers' averaged flex sensor readings. Consistent with the literature, we found a systematic dependence between an object's shape and dimensions and the flex sensor readings. The sensor output from at least one finger indicated a non-overlapping confidence interval when comparing spherical and cylindrical objects of the same radius. When sensing spheres and cylinders of varying sizes, all five fingers reacted in categorically different ways to each shape. We believe that our findings could be used in machine learning models for real-time object identification.
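As a sketch of the statistical test described above, the following snippet computes per-finger means, standard errors of the mean, and a non-overlap check of approximate 95% confidence intervals. The ADC readings here are simulated with hypothetical values rather than taken from the actual glove.

```python
import numpy as np

rng = np.random.default_rng(0)

def summarize(readings):
    # readings: shape (trials, 5) -- one averaged flex reading per finger per grasp.
    mean = readings.mean(axis=0)
    sem = readings.std(axis=0, ddof=1) / np.sqrt(len(readings))
    return mean, sem

def intervals_disjoint(m1, s1, m2, s2, z=1.96):
    # Per-finger check: do the ~95% confidence intervals fail to overlap?
    lo1, hi1 = m1 - z * s1, m1 + z * s1
    lo2, hi2 = m2 - z * s2, m2 + z * s2
    return (hi1 < lo2) | (hi2 < lo1)

# Simulated ADC readings for a sphere vs a cylinder of the same radius
# (hypothetical numbers; the real system reads these from the Arduino ADC).
sphere = rng.normal(loc=[510, 540, 560, 530, 480], scale=8, size=(30, 5))
cylinder = rng.normal(loc=[505, 570, 590, 525, 470], scale=8, size=(30, 5))

m_s, s_s = summarize(sphere)
m_c, s_c = summarize(cylinder)
print("fingers with disjoint CIs:", intervals_disjoint(m_s, s_s, m_c, s_c))
```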

We consider Broyden's method and some accelerated schemes for nonlinear equations having a strongly regular singularity of first order with a one-dimensional nullspace. Our two main results are as follows. First, we show that the use of a preceding Newton-like step ensures convergence for starting points in a starlike domain with density 1. This extends the domain of convergence of these methods significantly. Second, we establish that the matrix updates of Broyden's method converge q-linearly with the same asymptotic factor as the iterates. This contributes to the long-standing question whether the Broyden matrices converge by showing that this is indeed the case for the setting at hand. Furthermore, we prove that the Broyden directions violate uniform linear independence, which implies that existing results for convergence of the Broyden matrices cannot be applied. Numerical experiments of high precision confirm the enlarged domain of convergence, the q-linear convergence of the matrix updates, and the lack of uniform linear independence. In addition, they suggest that these results can be extended to singularities of higher order and that Broyden's method can converge r-linearly without converging q-linearly. The underlying code is freely available.
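For reference, here is a minimal Python sketch of classical Broyden ("good") updates on a small system with a first-order singularity at the root ($F(x, y) = (x^2, y)$, a hypothetical example); it omits the paper's accelerated schemes and the preceding Newton-like step.

```python
import numpy as np

def broyden(F, x0, B0, tol=1e-12, max_iter=200):
    # Broyden's "good" method: rank-one secant updates of the Jacobian approximation.
    x, B = x0.astype(float), B0.astype(float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)   # quasi-Newton step
        x_new = x + s
        y = F(x_new) - Fx
        # Secant update: B_{k+1} = B_k + (y - B_k s) s^T / (s^T s)
        B += np.outer(y - B @ s, s) / (s @ s)
        x = x_new
    return x, B

# F has a singular Jacobian at its root (0, 0), with a one-dimensional nullspace.
F = lambda v: np.array([v[0] ** 2, v[1]])
x, B = broyden(F, np.array([0.5, 0.5]), np.eye(2))
print(x)  # converges (linearly, due to the singularity) towards the origin
```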

In this work, we study the following problem, which we refer to as Low Rank column-wise Compressive Sensing (LRcCS): how to recover an $n \times q$ rank-$r$ matrix, $X^* = [x^*_1, x^*_2, \ldots, x^*_q]$, from $m$ independent linear projections of each of its $q$ columns, i.e., from $y_k := A_k x^*_k$, $k \in [q]$, where each $y_k$ is an $m$-length vector. The matrices $A_k$ are known and mutually independent for different $k$. The regime of interest is low-rank, i.e., $r \ll \min(n,q)$, and undersampled measurements, i.e., $m < n$. Even though many LR recovery problems have been extensively studied in the last decade, this particular problem has so far received little attention in terms of methods with provable guarantees. We introduce a novel gradient descent (GD) based solution called altGDmin. We show that, if all entries of all $A_k$s are i.i.d. Gaussian, and if the right singular vectors of $X^*$ satisfy the incoherence assumption, then $\epsilon$-accurate recovery of $X^*$ is possible with $mq > C (n+q) r^2 \log(1/\epsilon)$ total samples and $O(mq nr \log(1/\epsilon))$ time. To the best of our knowledge, compared to existing work, this is the fastest solution and, for $\epsilon < 1/\sqrt{r}$, it also has the best sample complexity. Moreover, we show that a simple extension of our approach also solves LR Phase Retrieval (LRPR), which is the magnitude-only generalization of LRcCS; it involves recovering $X^*$ from the magnitudes of the entries of the $y_k$. We show that altGDmin-LRPR has matching sample complexity and better time complexity when compared with the (best) existing solution for LRPR.
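The following is a rough Python sketch of an altGDmin-style iteration under the assumptions above (Gaussian $A_k$, noiseless measurements): factor $X = UB$, solve for each column of $B$ in closed form given $U$, then take a projected gradient step on $U$. The random initialisation and step-size constant are ad hoc choices for the demo; the paper's algorithm uses a spectral initialisation and a precisely chosen constant.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, r, m = 40, 30, 2, 25  # undersampled regime: m < n

# Ground-truth low-rank matrix X* and per-column Gaussian measurements y_k = A_k x*_k.
Xstar = rng.standard_normal((n, r)) @ rng.standard_normal((r, q))
A = rng.standard_normal((q, m, n))
Y = np.einsum("kmn,nk->mk", A, Xstar)

U, _ = np.linalg.qr(rng.standard_normal((n, r)))  # random orthonormal init
for _ in range(400):
    # "min" step: each column b_k solves a small least-squares problem given U.
    B = np.stack([np.linalg.lstsq(A[k] @ U, Y[:, k], rcond=None)[0]
                  for k in range(q)], axis=1)
    eta = 0.2 / (m * np.linalg.norm(B, 2) ** 2)   # step size ~ 1/(m * sigma_max(B)^2)
    # "GD" step: gradient of the squared residual with respect to U.
    G = np.zeros_like(U)
    for k in range(q):
        res = A[k] @ U @ B[:, k] - Y[:, k]
        G += np.outer(A[k].T @ res, B[:, k])
    U, _ = np.linalg.qr(U - eta * G)              # step, then re-orthonormalise

X = U @ B
print("relative error:", np.linalg.norm(X - Xstar) / np.linalg.norm(Xstar))
```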

Devising schemes for testing the amount of entanglement in quantum systems has played a crucial role in quantum computing and information theory. Here, we study the problem of testing whether an unknown state $|\psi\rangle$ is a matrix product state (MPS) in the property testing model. MPS are a class of physically relevant quantum states which arise in the study of quantum many-body systems. A quantum state $|\psi_{1,...,n}\rangle$ consisting of $n$ qudits is said to be an MPS of bond dimension $r$ if the reduced density matrix $\psi_{1,...,k}$ has rank at most $r$ for each $k \in \{1,...,n\}$. When $r=1$, this corresponds to the set of product states. For larger values of $r$, this yields a more expressive class of quantum states, which are allowed to possess limited amounts of entanglement. In the property testing model, one is given $m$ identical copies of $|\psi\rangle$, and the goal is to determine whether $|\psi\rangle$ is an MPS of bond dimension $r$ or whether $|\psi\rangle$ is far from all such states. For the case of product states, we study the product test, a simple two-copy test previously analyzed by Harrow and Montanaro (FOCS 2010) and a key ingredient in their proof that $\mathsf{QMA}(2)=\mathsf{QMA}(k)$ for $k \geq 2$. We give a new and simpler analysis of the product test which achieves an optimal bound for a wide range of parameters, answering open problems of Harrow and Montanaro (FOCS 2010) and Montanaro and de Wolf (2016). For the case of $r \geq 2$, we give an efficient algorithm for testing whether $|\psi\rangle$ is an MPS of bond dimension $r$ using $m = O(n r^2)$ copies, independent of the dimensions of the qudits, and we show that $\Omega(n^{1/2})$ copies are necessary for this task. This lower bound shows that a dependence on the number of qudits $n$ is necessary, in sharp contrast to the case of product states, where a constant number of copies suffices.
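Given full access to the amplitudes of $|\psi\rangle$ (which the property testing model does not assume), the bond-dimension condition can be checked directly: the rank of the reduced density matrix $\psi_{1,\ldots,k}$ equals the rank of the $d^k \times d^{n-k}$ reshaping of the state vector. A small Python sketch of this classical check:

```python
import numpy as np

def bond_ranks(psi, local_dim, n, tol=1e-10):
    # For each cut k, reshape |psi> into a d^k x d^(n-k) matrix; its rank
    # equals the rank of the reduced density matrix psi_{1..k} (Schmidt rank).
    ranks = []
    for k in range(1, n):
        M = psi.reshape(local_dim ** k, local_dim ** (n - k))
        ranks.append(np.linalg.matrix_rank(M, tol))
    return ranks

n, d = 4, 2
# Product state |0000>: every cut has rank 1.
product = np.zeros(d ** n); product[0] = 1.0
# GHZ state (|0000> + |1111>)/sqrt(2): every cut has rank 2 (bond dimension 2).
ghz = np.zeros(d ** n); ghz[0] = ghz[-1] = 1 / np.sqrt(2)

print(bond_ranks(product, d, n))  # [1, 1, 1]
print(bond_ranks(ghz, d, n))      # [2, 2, 2]
```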

The idea of slicing divergences has proven successful when comparing two probability measures in various machine learning applications, including generative modeling; it consists in computing the expected value of a `base divergence' between one-dimensional random projections of the two measures. However, the topological, statistical, and computational consequences of this technique have not yet been well established. In this paper, we aim to bridge this gap and derive various theoretical properties of sliced probability divergences. First, we show that slicing preserves the metric axioms and the weak continuity of the divergence, implying that the sliced divergence shares similar topological properties. We then make these results more precise in the case where the base divergence belongs to the class of integral probability metrics. On the other hand, we establish that, under mild conditions, the sample complexity of a sliced divergence does not depend on the problem dimension. We finally apply our general results to several base divergences and illustrate our theory in experiments on both synthetic and real data.
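As a concrete instance of the construction, the sketch below estimates the sliced 1-Wasserstein distance between two empirical measures by averaging the one-dimensional Wasserstein distance over random projection directions; for equal-size samples, the 1-D distance reduces to the mean absolute difference of sorted projections.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=200, rng=None):
    # Monte Carlo estimate of the sliced 1-Wasserstein distance:
    # average the 1-D Wasserstein distance over random projections.
    rng = rng or np.random.default_rng()
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)   # uniform direction on the sphere
        x_proj = np.sort(X @ theta)
        y_proj = np.sort(Y @ theta)
        # For equal-size empirical measures, 1-D W1 is the mean absolute
        # difference of the sorted projections (quantile coupling).
        total += np.mean(np.abs(x_proj - y_proj))
    return total / n_projections

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
Y = rng.standard_normal((500, 10)) + 1.0  # shifted by 1 in every coordinate
print(sliced_wasserstein(X, Y, rng=rng))
```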

We study the class of first-order locally-balanced Metropolis--Hastings algorithms introduced in Livingstone & Zanella (2021). To choose a specific algorithm within the class, the user must select a balancing function $g:\mathbb{R} \to \mathbb{R}$ satisfying $g(t) = tg(1/t)$, and a noise distribution for the proposal increment. Popular choices within the class are the Metropolis-adjusted Langevin algorithm and the recently introduced Barker proposal. We first establish a universal limiting optimal acceptance rate of 57% and scaling of $n^{-1/3}$, as the dimension $n$ tends to infinity, among all members of the class, under mild smoothness assumptions on $g$ and when the target distribution for the algorithm is of product form. In particular, we obtain an explicit expression for the asymptotic efficiency of an arbitrary algorithm in the class, as measured by expected squared jumping distance. We then consider how to optimise this expression under various constraints. We derive an optimal choice of noise distribution for the Barker proposal, an optimal choice of balancing function under a Gaussian noise distribution, and an optimal choice of first-order locally-balanced algorithm among the entire class, which turns out to depend on the specific target distribution. Numerical simulations confirm our theoretical findings and, in particular, show that a bimodal choice of noise distribution in the Barker proposal gives rise to a practical algorithm that is consistently more efficient than the original Gaussian version.
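For concreteness, here is a minimal Python sketch of a Barker-proposal Metropolis--Hastings step with Gaussian noise on a standard Gaussian target (the target and tuning are illustrative choices, not the paper's optimised variants). The symmetric noise density cancels in the acceptance ratio, leaving only the balancing terms.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pi(x):          # standard Gaussian target, for illustration only
    return -0.5 * np.sum(x ** 2)

def grad_log_pi(x):
    return -x

def barker_step(x, sigma=1.0):
    # Barker proposal: draw Gaussian noise, then keep or flip each coordinate's
    # sign with probability 1 / (1 + exp(-z_i * d_i log pi(x))).
    g = grad_log_pi(x)
    z = sigma * rng.standard_normal(x.shape)
    keep = rng.random(x.shape) < 1.0 / (1.0 + np.exp(-z * g))
    y = x + np.where(keep, z, -z)
    # Metropolis--Hastings correction; the symmetric noise density cancels,
    # leaving the balancing factors 2 / (1 + exp(-w * grad)) in the ratio.
    w = y - x
    log_q_xy = -np.sum(np.log1p(np.exp(-w * g)))
    log_q_yx = -np.sum(np.log1p(np.exp(w * grad_log_pi(y))))
    log_alpha = log_pi(y) - log_pi(x) + log_q_yx - log_q_xy
    return y if np.log(rng.random()) < log_alpha else x

x = np.zeros(10)
samples = []
for _ in range(5000):
    x = barker_step(x)
    samples.append(x.copy())
print("per-coordinate sample variance ~", np.var(samples, axis=0).mean())  # ~1
```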

In this paper, we show that the diagonal of a high-dimensional sample covariance matrix stemming from $n$ independent observations of a $p$-dimensional time series with finite fourth moments can be approximated in spectral norm by the diagonal of the population covariance matrix. We assume that $n,p\to \infty$ with $p/n$ tending to a constant which might be positive or zero. As applications, we provide an approximation of the sample correlation matrix ${\mathbf R}$ and derive a variety of results for its eigenvalues. We identify the limiting spectral distribution of ${\mathbf R}$ and construct an estimator for the population correlation matrix and its eigenvalues. Finally, the almost sure limits of the extreme eigenvalues of ${\mathbf R}$ in a generalized spiked correlation model are analyzed.
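A small simulation sketch of this setting, with an arbitrary diagonal population covariance and $p/n \to 0.3$: it compares the diagonal of the sample covariance matrix to the population diagonal and checks the extreme eigenvalues of $\mathbf{R}$ against the Marchenko--Pastur edges, which apply in this independent-coordinates (identity-correlation) case.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 600  # p/n -> 0.3

# Independent observations with diagonal population covariance diag(d).
d = rng.uniform(0.5, 2.0, size=p)
X = rng.standard_normal((n, p)) * np.sqrt(d)
S = X.T @ X / n   # sample covariance

# For diagonal matrices, the spectral norm of the difference is the max entry.
print("||diag(S) - diag(Sigma)||:", np.max(np.abs(np.diag(S) - d)))

# Sample correlation matrix and its extreme eigenvalues.
R = S / np.sqrt(np.outer(np.diag(S), np.diag(S)))
eig = np.linalg.eigvalsh(R)
y = p / n
# Marchenko-Pastur edges (1 +/- sqrt(y))^2 for the identity-correlation case.
print("extreme eigenvalues:", eig[0], eig[-1],
      "MP edges:", (1 - np.sqrt(y)) ** 2, (1 + np.sqrt(y)) ** 2)
```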

The use of orthogonal projections on high-dimensional input and target data in learning frameworks is studied. First, we investigate the relations between two standard objectives in dimension reduction: maximizing variance and preserving pairwise relative distances. The derivation of their asymptotic correlation, together with numerical experiments, shows that a projection usually cannot satisfy both objectives at once. In a standard classification problem, we determine projections of the input data that balance the two objectives and compare the subsequent results. Next, we extend our application of orthogonal projections to deep learning frameworks. We introduce new variational loss functions that enable the integration of additional information via transformations and projections of the target data. In two supervised learning problems, clinical image segmentation and music information classification, the application of the proposed loss functions increases accuracy.
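To make the first trade-off concrete, the sketch below contrasts a PCA projection (variance-maximizing) with a random orthonormal projection on synthetic correlated data, reporting the fraction of variance captured and the spread of relative pairwise distance ratios; the data and metrics are illustrative stand-ins for the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 50, 5

X = rng.standard_normal((n, d)) @ rng.standard_normal((d, d))  # correlated data
X -= X.mean(axis=0)

def variance_captured(P):
    # Fraction of total variance retained by the orthonormal projection P.
    return np.linalg.norm(X @ P) ** 2 / np.linalg.norm(X) ** 2

def distance_distortion(P, pairs=2000):
    # Spread of projected/original distance ratios over random point pairs;
    # 0 would mean all pairwise relative distances scale identically.
    i, j = rng.integers(0, n, pairs), rng.integers(0, n, pairs)
    mask = i != j
    diff = X[i[mask]] - X[j[mask]]
    orig = np.linalg.norm(diff, axis=1)
    proj = np.linalg.norm(diff @ P, axis=1)
    return np.std(proj / orig)

_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:k].T                                     # variance-maximizing
P_rand, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random orthonormal

for name, P in [("PCA", P_pca), ("random", P_rand)]:
    print(name, "variance:", variance_captured(P),
          "distortion:", distance_distortion(P))
```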

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
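As a centralised sketch of the smoothing idea underlying DRS (not the distributed algorithm itself), the snippet below estimates the gradient of the Gaussian-smoothed objective $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$ via the identity $\nabla f_\gamma(x) = \frac{1}{\gamma}\mathbb{E}[(f(x+\gamma Z) - f(x)) Z]$ (the baseline $f(x)$ only reduces variance, since $\mathbb{E}[Z]=0$) and runs plain descent on a non-smooth convex example.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_grad(f, x, gamma, n_samples=50):
    # Unbiased estimate of grad f_gamma(x) for f_gamma(x) = E[f(x + gamma Z)],
    # Z ~ N(0, I), using the Gaussian smoothing (score-function) identity.
    d = x.shape[0]
    Z = rng.standard_normal((n_samples, d))
    vals = np.array([f(x + gamma * z) - f(x) for z in Z])  # baseline-subtracted
    return (vals[:, None] * Z).mean(axis=0) / gamma

# Non-smooth convex example: f(x) = ||x||_1.
f = lambda x: np.abs(x).sum()
x = rng.standard_normal(20)
gamma = 0.1
for _ in range(200):
    x -= 0.05 * smoothed_grad(f, x, gamma)
print("f(x) after smoothing-based descent:", f(x))
```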
