
In this work, we consider $q$-ary signature codes of length $k$ and size $n$ for a noisy adder multiple access channel. A signature code in this model has the property that any subset of codewords can be uniquely reconstructed from any vector obtained from the sum (over the integers) of these codewords. We show that there exists an algorithm to construct a signature code of length $k = \frac{2n\log{3}}{(1-2\tau)\left(\log{n} + (q-1)\log{\frac{\pi}{2}}\right)} +\mathcal{O}\left(\frac{n}{\log{n}(q+\log{n})}\right)$ capable of correcting $\tau k$ errors at the channel output, where $0\le \tau < \frac{q-1}{2q}$. Furthermore, we present an explicit construction of signature codes with polynomial complexity that can correct up to $\left( \frac{q-1}{8q} - \epsilon\right)k$ errors for a codeword length $k = \mathcal{O} \left ( \frac{n}{\log \log n} \right )$, where $\epsilon$ is a small non-negative number. Moreover, we prove several non-existence results (converse bounds) for $q$-ary signature codes enabling error correction.
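To make the defining property concrete, here is a minimal sketch (a toy brute-force check, not the paper's construction, and with no error correction): it verifies whether a small set of random $q$-ary codewords forms a signature code for the noiseless adder channel, i.e., whether all subset sums are distinct.

```python
import itertools
import random

# Toy check: a set of q-ary codewords is a signature code for the
# *noiseless* adder MAC iff the coordinatewise integer sums of all
# distinct subsets of codewords are distinct.
def is_signature_code(codewords, k):
    seen = {}
    for r in range(len(codewords) + 1):
        for subset in itertools.combinations(range(len(codewords)), r):
            s = tuple(sum(codewords[i][j] for i in subset) for j in range(k))
            if s in seen:
                return False, (seen[s], subset)  # two subsets share a sum
            seen[s] = subset
    return True, None

random.seed(0)
q, n, k = 3, 5, 8  # alphabet size, number of users, code length (toy values)
code = [tuple(random.randrange(q) for _ in range(k)) for _ in range(n)]
ok, clash = is_signature_code(code, k)
print("signature code:", ok, "| first clash:", clash)
```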

Related content

This paper considers a Gaussian multi-input multi-output (MIMO) multiple access wiretap (MAC-WT) channel, where an eavesdropper (Eve) wants to extract the confidential information of all users. Assuming that both the legitimate receiver and Eve jointly decode their messages of interest, we aim to maximize the sum secrecy rate of the system by precoder design. Although this problem could be solved by first using an iterative majorization-minimization (MM) based algorithm to obtain a sequence of convex log-determinant optimization subproblems and then using general tools, e.g., the interior point method, to solve each subproblem, this strategy involves quite high computational complexity. Therefore, we propose a simultaneous diagonalization based low-complexity (SDLC) method to maximize the secrecy rate of a simple one-user wiretap channel, and then use this method to iteratively optimize the covariance matrix of each user. Simulation results show that, compared with existing approaches, the SDLC scheme achieves similar secrecy performance but requires much lower complexity.
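As a point of reference for the objective being maximized, the following hedged numerical sketch (not the paper's SDLC algorithm) evaluates the sum secrecy rate of a toy two-user Gaussian MIMO MAC-WT channel for given transmit covariance matrices, assuming unit-variance noise and a uniform power split; all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sum secrecy rate of a K-user Gaussian MIMO MAC-WT channel with joint
# decoding: [log det(I + sum_k H_k Q_k H_k^H) - log det(I + sum_k G_k Q_k G_k^H)]^+.
def sum_secrecy_rate(H, G, Q):
    Sb = np.eye(H[0].shape[0], dtype=complex)  # legitimate receiver's covariance
    Se = np.eye(G[0].shape[0], dtype=complex)  # Eve's covariance
    for Hk, Gk, Qk in zip(H, G, Q):
        Sb += Hk @ Qk @ Hk.conj().T
        Se += Gk @ Qk @ Gk.conj().T
    rate = np.log2(np.linalg.det(Sb).real) - np.log2(np.linalg.det(Se).real)
    return max(rate, 0.0)

def rand_channel(rx, tx):
    return (rng.standard_normal((rx, tx)) + 1j * rng.standard_normal((rx, tx))) / np.sqrt(2)

K, Nt, Nb, Ne, P = 2, 4, 4, 2, 10.0  # users, tx/rx/Eve antennas, power budget
H = [rand_channel(Nb, Nt) for _ in range(K)]
G = [rand_channel(Ne, Nt) for _ in range(K)]
Q = [P / Nt * np.eye(Nt, dtype=complex) for _ in range(K)]  # uniform power split
print(f"sum secrecy rate: {sum_secrecy_rate(H, G, Q):.3f} bits/use")
```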

We propose BareSkinNet, a novel method that simultaneously removes makeup and lighting influences from face images. Our method leverages a 3D morphable model and requires neither a reference clean face image nor a specified lighting condition. By incorporating 3D face reconstruction into the process, we can easily obtain 3D geometry and coarse 3D textures. Using this information, an image-translation network infers normalized 3D face texture maps (diffuse, normal, roughness, and specular). Consequently, the reconstructed 3D face textures, free of undesirable information, significantly benefit subsequent processes such as re-lighting or re-makeup. In experiments, we show that BareSkinNet outperforms state-of-the-art makeup removal methods. In addition, our method is remarkably effective at removing makeup to generate consistent high-fidelity texture maps, which makes it extendable to many realistic face generation applications. It can also automatically build graphics assets of before-and-after face makeup images with corresponding 3D data. This will help artists accelerate their work, such as 3D makeup avatar creation.

During software development, developers need answers to queries about semantic aspects of code. Even though extractive question answering using neural approaches has been studied widely for natural languages, the problem of answering semantic queries over code using neural networks has not yet been explored. This is mainly because there is no existing dataset with extractive question and answer pairs over code involving complex concepts and long chains of reasoning. We bridge this gap by building a new, curated dataset called CodeQueries and proposing a neural question-answering methodology over code. We build upon state-of-the-art pre-trained models of code to predict answer and supporting-fact spans. Given a query and code, only some of the code may be relevant to answering the query. We first experiment under an ideal setting where only the relevant code is given to the model and show that our models do well. We then experiment under three pragmatic considerations: (1) scaling to large-size code, (2) learning from a limited number of examples, and (3) robustness to minor syntax errors in code. Our results show that while a neural model can be resilient to minor syntax errors in code, an increasing code size, the presence of code that is not relevant to the query, and a reduced number of training examples all limit model performance. We are releasing our data and models to facilitate future work on the proposed problem of answering semantic queries over code.
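For readers unfamiliar with extractive span prediction, the following minimal sketch shows the generic mechanism (not the paper's exact model): two linear heads score each contextual token embedding as a span start or end, and the best-scoring valid pair is returned. All shapes, names, and the maximum span length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic extractive span-prediction head: given contextual token
# embeddings for a query+code input, score each token as an answer-span
# start or end and return the best valid (start <= end) pair.
def predict_span(token_emb, w_start, w_end, max_len=30):
    start_logits = token_emb @ w_start
    end_logits = token_emb @ w_end
    best, span = -np.inf, (0, 0)
    for s in range(len(token_emb)):
        for e in range(s, min(s + max_len, len(token_emb))):
            score = start_logits[s] + end_logits[e]
            if score > best:
                best, span = score, (s, e)
    return span

T, d = 50, 16  # toy sequence length and embedding size
token_emb = rng.standard_normal((T, d))  # stand-in for a pre-trained encoder's output
w_start, w_end = rng.standard_normal(d), rng.standard_normal(d)
print("predicted span:", predict_span(token_emb, w_start, w_end))
```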

For Lagrange polynomial interpolation on open arcs $X=\gamma$ in $\CC$, it is well known that the Lebesgue constant for the family of Chebyshev points ${\bf x}_n:=\{x_{n,j}\}^{n}_{j=0}$ on $[-1,1]\subset \RR$ has growth order $O(\log n)$. The same growth order was shown in \cite{ZZ} for the Lebesgue constant of the family ${\bf z}^{**}_n:=\{z_{n,j}^{**}\}^{n}_{j=0}$ of some properly adjusted Fej\'er points on a rectifiable smooth open arc $\gamma\subset \CC$. On the other hand, in our recent work \cite{CZ2021}, it was observed that if the smooth open arc $\gamma$ is replaced by an $L$-shape arc $\gamma_0 \subset \CC$ consisting of two line segments, numerical experiments suggest that the Marcinkiewicz-Zygmund inequalities are no longer valid for the family of Fej\'er points ${\bf z}_n^{*}:=\{z_{n,j}^{*}\}^{n}_{j=0}$ on $\gamma_0$, and that the rate of growth of the corresponding Lebesgue constant $L_{{\bf z}^{*}_n}$ is as fast as $c\log^2 n$ for some constant $c>0$. The main objective of the present paper is threefold: firstly, it will be shown that for the special case of the $L$-shape arc $\gamma_0$ consisting of two line segments of the same length that meet at an angle of $\pi/2$, the Lebesgue constant $L_{{\bf z}_n^{*}}$ grows at least as fast as $\log^2 n$, with $\limsup_{n\to\infty} \frac{L_{{\bf z}_n^{*}}}{\log^2 n} = \infty$; secondly, the corresponding (modified) Marcinkiewicz-Zygmund inequalities fail to hold; and thirdly, a proper adjustment ${\bf z}_n^{**}:=\{z_{n,j}^{**}\}^{n}_{j=0}$ of the Fej\'er points on $\gamma_0$ will be described to ensure that the growth rate of $L_{{\bf z}_n^{**}}$ is exactly of order $\log^2 n$.
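The $O(\log n)$ benchmark for Chebyshev points on $[-1,1]$ cited above can be observed numerically; a minimal sketch (maximizing the Lebesgue function on a fine grid, with the grid resolution an arbitrary choice):

```python
import numpy as np

# Estimate the Lebesgue constant of a node set by maximizing the Lebesgue
# function, sum_j |l_j(x)|, over a fine grid on [-1, 1].
def lebesgue_constant(nodes, grid):
    L = np.zeros_like(grid)
    for j in range(len(nodes)):
        lj = np.ones_like(grid)
        for m in range(len(nodes)):
            if m != j:
                lj *= (grid - nodes[m]) / (nodes[j] - nodes[m])
        L += np.abs(lj)
    return L.max()

grid = np.linspace(-1.0, 1.0, 10001)
for n in (4, 8, 16, 32, 64):
    # Chebyshev points (first kind) with n+1 nodes
    cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
    const = lebesgue_constant(cheb, grid)
    print(f"n={n:3d}  Lebesgue constant={const:.3f}  ratio to log(n)={const/np.log(n):.3f}")
```

The near-constant ratio in the last column illustrates the logarithmic growth.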

Various cryptographic techniques are used in outsourced database systems to ensure data privacy while allowing for efficient querying. This work proposes a definition and components of a new secure and efficient outsourced database system, which answers various types of queries with different privacy guarantees in different security models. The work starts with a survey of five order-revealing encryption schemes that can be used directly in many database indices and five range query protocols with various security / efficiency tradeoffs. The survey systematizes the state-of-the-art range query solutions in a snapshot adversary setting and offers some non-obvious observations regarding the efficiency of the constructions. In $\mathcal{E}\text{psolute}$, a secure range query engine, security is achieved in a setting with a much stronger adversary who can continuously observe everything on the server, where leaking even the result size can enable a reconstruction attack. $\mathcal{E}\text{psolute}$ proposes a definition, construction, analysis, and experimental evaluation of a system that provably hides both the access pattern and the communication volume while remaining efficient. The work concludes with $k\text{-anon}$ -- a secure similarity search engine in a snapshot adversary model. The work presents a construction in which the security of $k\text{NN}$ queries is achieved similarly to OPE / ORE solutions -- encrypting the input with an approximate Distance Comparison Preserving Encryption scheme so that the inputs, the points in a hyperspace, are perturbed, but the query algorithm still produces accurate results. We use TREC datasets and queries for the search, and track rank-quality metrics such as MRR and nDCG. For the attacks, we build an LSTM model that trains on the correlation between a sentence and its embedding and then predicts words from the embedding.
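To convey the flavor of approximate Distance Comparison Preserving Encryption, here is a heavily simplified sketch (not the paper's scheme; the scale and noise bound are illustrative assumptions): points are scaled and perturbed with bounded noise, so comparisons between distances that differ by more than the accumulated noise bound remain correct.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy DCPE-style perturbation: scale each point and add bounded noise.
# Distance comparisons survive whenever the two plaintext distances
# differ by more than the total noise contribution (roughly beta * scale).
def dcpe_encrypt(points, scale=100.0, beta=1.0):
    noise = rng.uniform(-beta / 4, beta / 4, size=points.shape)
    return scale * (points + noise)

X = rng.standard_normal((5, 3))  # toy plaintext points in a hyperspace
C = dcpe_encrypt(X)
q, a, b = 0, 1, 4  # query point and two candidates
d = lambda P, i, j: np.linalg.norm(P[i] - P[j])
print("plaintext says a closer:", d(X, q, a) < d(X, q, b),
      "| ciphertext says a closer:", d(C, q, a) < d(C, q, b))
```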

As a special infinite-order vector autoregressive (VAR) model, the vector autoregressive moving average (VARMA) model can capture much richer temporal patterns than the widely used finite-order VAR model. However, its practicality has long been hindered by its non-identifiability, computational intractability, and relative difficulty of interpretation. This paper introduces a novel infinite-order VAR model which, with only a little sacrifice of generality, inherits the essential temporal patterns of the VARMA model but avoids all of the above drawbacks. As another attractive feature, the temporal and cross-sectional dependence structures of this model can be interpreted separately, since they are characterized by different sets of parameters. For high-dimensional time series, this separation motivates us to impose sparsity on the parameters determining the cross-sectional dependence. As a result, greater statistical efficiency and interpretability can be achieved, while no loss of temporal information is incurred by the imposed sparsity. We introduce an $\ell_1$-regularized estimator for the proposed model and derive the corresponding nonasymptotic error bounds. An efficient block coordinate descent algorithm and a consistent model order selection method are developed. The merit of the proposed approach is supported by simulation studies and a real-world macroeconomic data analysis.
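To illustrate the flavor of $\ell_1$-regularized estimation for sparse time series models, here is a generic lasso/ISTA sketch (not the paper's estimator or its block coordinate descent; the model order, penalty level, and step counts are arbitrary choices): it fits a finite-order VAR by proximal gradient descent with soft-thresholding and recovers the support of a sparse transition matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# l1-regularized least squares for a VAR(p) fit by proximal gradient (ISTA).
def sparse_var(Y, p=1, lam=0.1, steps=500):
    T, d = Y.shape
    X = np.hstack([Y[p - l - 1 : T - l - 1] for l in range(p)])  # lagged design
    Z = Y[p:]
    n = len(Z)
    A = np.zeros((p * d, d))
    eta = n / np.linalg.norm(X.T @ X, 2)  # step size from the Lipschitz constant
    for _ in range(steps):
        grad = X.T @ (X @ A - Z) / n
        A = soft_threshold(A - eta * grad, eta * lam)
    return A

# Simulate a sparse VAR(1) and recover the support of its transition matrix.
d, T = 5, 2000
A_true = np.zeros((d, d))
A_true[0, 0], A_true[1, 3] = 0.6, -0.5
Y = np.zeros((T, d))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true.T + rng.standard_normal(d)
A_hat = sparse_var(Y, p=1, lam=0.1)
print(np.round(A_hat.T, 2))  # lasso shrinks the nonzero entries toward zero
```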

Assigning weights to a large pool of objects is a fundamental task in a wide variety of applications. In this article, we introduce the concept of structured high-dimensional probability simplexes, in which most components are zero or near zero and the remaining ones are close to each other. Such structure is well motivated by (i) high-dimensional weights that are common in modern applications, and (ii) ubiquitous examples in which equal weights -- despite their simplicity -- often achieve favorable or even state-of-the-art predictive performance. This particular structure, however, presents unique challenges partly because, unlike high-dimensional linear regression, the parameter space is a simplex and pattern switching between partial constancy and sparsity is unknown. To address these challenges, we propose a new class of double spike Dirichlet priors to shrink a probability simplex to one with the desired structure. When applied to ensemble learning, such priors lead to a Bayesian method for structured high-dimensional ensembles that is useful for forecast combination and improving random forests, while enabling uncertainty quantification. We design efficient Markov chain Monte Carlo algorithms for implementation. Posterior contraction rates are established to study large sample behaviors of the posterior distribution. We demonstrate the wide applicability and competitive performance of the proposed methods through simulations and two real data applications using the European Central Bank Survey of Professional Forecasters data set and a data set from the UC Irvine Machine Learning Repository (UCI).
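A rough sketch of the prior's intuition (not the paper's exact construction; the two concentration levels and the uniform choice of active components are illustrative assumptions): spiking a Dirichlet concentration at two levels yields simplex draws in which most weights are near zero and the active ones are close to equal.

```python
import numpy as np

rng = np.random.default_rng(7)

# "Double spike" intuition: a large concentration on a few active
# components pulls their weights toward equality, while a near-zero
# concentration on the rest pushes those weights toward zero.
def double_spike_dirichlet(p, s, a_active=50.0, a_null=0.01):
    active = rng.choice(p, size=s, replace=False)
    alpha = np.full(p, a_null)
    alpha[active] = a_active
    return rng.dirichlet(alpha), active

w, active = double_spike_dirichlet(p=20, s=4)
print("active components:", np.sort(active))
print("weights:", np.round(w, 3))
```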

In this paper, we consider decentralized optimization problems where agents have individual cost functions to minimize, subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus or single-task optimization as special cases, and allows for more general task relatedness models such as multitask smoothness and coupled optimization. In order to cope with communication constraints, we propose and study an adaptive decentralized strategy where the agents employ differential randomized quantizers to compress their estimates before communicating with their neighbors. The analysis shows that, under some general conditions on the quantization noise, and for sufficiently small step-sizes $\mu$, the strategy is stable both in terms of mean-square error and average bit rate: by reducing $\mu$, it is possible to keep the estimation errors small (on the order of $\mu$) without the bit rate increasing indefinitely as $\mu\rightarrow 0$. Simulations illustrate the theoretical findings and the effectiveness of the proposed approach, revealing that decentralized learning is achievable at the expense of only a few bits.
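The following sketch shows one plausible instance of a differential randomized quantizer (consistent with the general conditions described above, but not the paper's exact scheme; the step size and dimensions are illustrative): each agent quantizes the difference between its new estimate and the last value shared with its neighbors, using an unbiased dithered uniform quantizer.

```python
import numpy as np

rng = np.random.default_rng(3)

# Dithered uniform quantizer with step delta: for u ~ Uniform(-1/2, 1/2),
# E[delta * floor(x/delta + 1/2 + u)] = x, so the quantizer is unbiased
# and its error is bounded by delta.
def randomized_quantize(x, delta):
    u = rng.uniform(-0.5, 0.5, size=x.shape)
    return delta * np.floor(x / delta + 0.5 + u)

class DifferentialEncoder:
    def __init__(self, dim, delta):
        self.ref = np.zeros(dim)  # last reconstructed value, known to neighbors
        self.delta = delta
    def encode(self, w):
        q = randomized_quantize(w - self.ref, self.delta)  # quantize the innovation
        self.ref = self.ref + q                            # receivers apply the same update
        return self.ref

enc = DifferentialEncoder(dim=4, delta=0.1)
w = np.zeros(4)
for t in range(5):
    w = w + 0.05 * rng.standard_normal(4)  # stand-in for a local adaptation step
    print(np.round(enc.encode(w) - w, 3))  # reconstruction error stays O(delta)
```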

This work considers Gaussian process interpolation with a periodized version of the Mat{\'e}rn covariance function (Stein, 1999, Section 6.7) with Fourier coefficients $\phi(\alpha^2 + j^2)^{-\nu-1/2}$. Convergence rates are studied for the joint maximum likelihood estimation of $\nu$ and $\phi$ when the data is sampled according to the model. The mean integrated squared error is also analyzed with fixed and estimated parameters, showing that maximum likelihood estimation yields asymptotically the same error as if the ground truth were known. Finally, the case where the observed function is a ``deterministic'' element of a continuous Sobolev space is also considered, suggesting that bounding assumptions on some parameters can lead to different estimates.
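As a small illustration of the covariance model (a hedged sketch: the truncation level is an arbitrary choice, $\alpha$ is treated here as a fixed constant, and the paper estimates $\nu$ and $\phi$ by maximum likelihood), the periodized covariance can be evaluated by truncating its Fourier series:

```python
import numpy as np

# Evaluate the periodized Matern covariance on [0, 1) by truncating its
# Fourier series with coefficients phi * (alpha^2 + j^2)^(-nu - 1/2) at
# |j| <= J; the coefficients are even in j, so cosines suffice.
def periodized_matern_cov(h, phi=1.0, alpha=1.0, nu=1.5, J=500):
    freqs = np.arange(-J, J + 1).astype(float)
    coef = phi * (alpha**2 + freqs**2) ** (-nu - 0.5)
    return np.sum(coef * np.cos(2.0 * np.pi * np.outer(h, freqs)), axis=1)

h = np.linspace(0.0, 1.0, 5)
print(np.round(periodized_matern_cov(h), 4))  # symmetric: k(0) == k(1) by periodicity
```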

Entity resolution, a longstanding problem of data cleaning and integration, aims at identifying data records that represent the same real-world entity. Existing approaches treat entity resolution as a universal task, assuming the existence of a single interpretation of a real-world entity and focusing only on finding matched records, separating corresponding from non-corresponding ones, with respect to this single interpretation. However, in real-world scenarios, where entity resolution is part of a more general data project, downstream applications may have varying interpretations of real-world entities relating, for example, to various user needs. In what follows, we introduce the problem of multiple intents entity resolution (MIER), an extension of the universal (single intent) entity resolution task. As a solution, we propose FlexER, which utilizes contemporary solutions to universal entity resolution tasks to solve multiple intents entity resolution. FlexER addresses the problem as a multi-label classification problem. It combines intent-based representations of tuple pairs using a multiplex graph representation that serves as an input to a graph neural network (GNN). FlexER learns intent representations and improves the outcomes of the multiple resolution problems. A large-scale empirical evaluation introduces a new benchmark and, also using two well-known benchmarks, shows that FlexER effectively solves the MIER problem and outperforms the state-of-the-art for universal entity resolution.
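A rough sketch of the multiplex idea (not FlexER itself; the single round of mean aggregation, random graphs, and the shared sigmoid head are all illustrative assumptions): one graph layer per intent, nodes representing tuple pairs, and per-layer neighborhood aggregation yielding intent-specific representations for a multi-label match/no-match prediction.

```python
import numpy as np

rng = np.random.default_rng(5)

# One round of mean aggregation per intent layer of a multiplex graph:
# each layer has its own adjacency over the same set of tuple-pair nodes.
def multiplex_round(X, adjacency_per_intent):
    reps = []
    for A in adjacency_per_intent:                 # A: (n, n) adjacency of one layer
        deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
        reps.append((A @ X) / deg)                 # mean over neighbors
    return reps

n, d, intents = 6, 8, 3
X = rng.standard_normal((n, d))                    # tuple-pair features
A_layers = [(rng.random((n, n)) < 0.4).astype(float) for _ in range(intents)]
W = rng.standard_normal((d, 1))                    # shared match/no-match head
reps = multiplex_round(X, A_layers)
probs = np.hstack([1.0 / (1.0 + np.exp(-(H @ W))) for H in reps])  # (n, intents)
print(np.round(probs, 2))  # one match probability per pair and per intent
```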
