
Subset sum is an old and fundamental problem in theoretical computer science. In this problem, $n$ items with weights $w_1, w_2, w_3, \ldots, w_n$ are given as input, and the goal is to decide whether there is a subset of them whose weights sum to a given value $t$. While the problem is NP-hard in general, when the weights are non-negative integers, subset sum can be solved in pseudo-polynomial time $\widetilde O(n+t)$. In this work, we consider the dynamic variant of subset sum. In this setting, an upper bound $t_{\max}$ is provided to the algorithm in advance, and in each operation, either a new item is added to the problem, or, for a given integer value $t \leq t_{\max}$, the algorithm is required to report whether there is a subset of items whose weights sum to $t$. Unfortunately, none of the existing subset sum algorithms can process these operations in truly sublinear time\footnote{Truly sublinear means $n^{1-\Omega(1)}$.} in terms of $t_{\max}$. Our main contribution is an algorithm whose amortized processing time\footnote{Since the runtimes are amortized, we do not use the separate terms update time and query time for different operations, and instead use processing time for all types of operations.} for each operation is truly sublinear in $t_{\max}$ when the number of operations is at least $t_{\max}^{2/3+\Omega(1)}$. We also show that when both element addition and element removal are allowed, there is no algorithm that can process each operation in time $t_{\max}^{1-\Omega(1)}$ on average unless \textsf{SETH}\footnote{The \textit{strong exponential time hypothesis} states that no algorithm can solve the satisfiability problem in time $2^{n(1-\Omega(1))}$.} fails.
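As a point of reference for these bounds, here is a minimal sketch (in Python, with hypothetical names) of the textbook bitset dynamic program for the insertion-only setting; it processes an insertion in $O(t_{\max})$ bit operations and a query in constant time, i.e., exactly the baseline that the paper's truly sublinear algorithm improves upon.

```python
# Baseline sketch, NOT the paper's sublinear algorithm: attainable sums
# are kept as a bitmask, so bit s is set iff some subset sums to s.

class DynamicSubsetSum:
    def __init__(self, t_max: int):
        self.t_max = t_max
        self.mask = (1 << (t_max + 1)) - 1   # keep only sums <= t_max
        self.reach = 1                       # only the empty sum 0 at first

    def add_item(self, w: int) -> None:
        # Every attainable sum s also makes s + w attainable.
        self.reach |= (self.reach << w) & self.mask

    def query(self, t: int) -> bool:
        return bool((self.reach >> t) & 1)

dss = DynamicSubsetSum(t_max=100)
for w in (3, 5, 11):
    dss.add_item(w)
print(dss.query(19), dss.query(20))  # True (3 + 5 + 11), False
```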

Related content

Processing is the name of an open-source programming language and its accompanying integrated development environment (IDE). Processing is used in the electronic arts and visual design communities to teach the fundamentals of programming, and it is used in a large number of new media and interactive art works.

A burgeoning line of research has developed deep neural networks capable of approximating the solutions to high-dimensional PDEs, opening related lines of theoretical inquiry focused on explaining how these models appear to evade the curse of dimensionality. However, most theoretical analyses thus far have been limited to linear PDEs. In this work, we take a step towards studying the representational power of neural networks for approximating solutions to nonlinear PDEs. We focus on a class of PDEs known as \emph{nonlinear elliptic variational PDEs}, whose solutions minimize an \emph{Euler-Lagrange} energy functional $\mathcal{E}(u) = \int_\Omega L(\nabla u)\, dx$. We show that if composing a function with Barron norm $b$ with $L$ produces a function of Barron norm at most $B_L b^p$, the solution to the PDE can be $\epsilon$-approximated in the $L^2$ sense by a function with Barron norm $O\left(\left(dB_L\right)^{p^{\log(1/\epsilon)}}\right)$. By a classical result due to Barron [1993], this correspondingly bounds the size of a 2-layer neural network needed to approximate the solution. Treating $p, \epsilon, B_L$ as constants, this quantity is polynomial in the dimension, thus showing that neural networks can evade the curse of dimensionality. Our proof technique involves neurally simulating (preconditioned) gradient descent in an appropriate Hilbert space, which converges exponentially fast to the solution of the PDE and allows us to bound the increase of the Barron norm at each iterate. Our results subsume and substantially generalize analogous prior results for linear elliptic PDEs.
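To spell out the argument sketched above in symbols (a hedged reading; the preconditioner $\mathcal{P}$, step size $\eta$, and rate $\rho$ are our notation, not the paper's): one neurally simulates the iteration
\[
u_{k+1} \;=\; u_k \;-\; \eta\, \mathcal{P}^{-1}\nabla \mathcal{E}(u_k),
\qquad
\|u_k - u^\ast\|_{\mathcal{H}} \;\le\; \rho^k\, \|u_0 - u^\ast\|_{\mathcal{H}},
\]
in an appropriate Hilbert space $\mathcal{H}$, so $k = O(\log(1/\epsilon))$ steps suffice for an $\epsilon$-approximation. If each step maps a function of Barron norm $b$ to one of Barron norm at most $B_L b^p$ (up to factors of $d$), then after $k$ steps the norm is of order $(d B_L)^{p^k}$, which for $k = O(\log(1/\epsilon))$ gives the stated $O\left(\left(dB_L\right)^{p^{\log(1/\epsilon)}}\right)$ bound.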

We study the training of deep neural networks by gradient descent where floating-point arithmetic is used to compute the gradients. In this framework and under realistic assumptions, we demonstrate that it is highly unlikely for training with gradient descent to find ReLU neural networks that maintain, in the course of training, superlinearly many affine pieces with respect to their number of layers. In virtually all approximation-theoretic arguments that yield high-order polynomial rates of approximation, sequences of ReLU neural networks with exponentially many affine pieces relative to their number of layers are used. As a consequence, we conclude that approximating sequences of ReLU neural networks resulting from gradient descent in practice differ substantially from theoretically constructed sequences. The assumptions and the theoretical results are compared to a numerical study, which yields concurring results.
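As a concrete (and purely illustrative) way to observe the quantity at stake, the following Python sketch counts the affine pieces of a small scalar ReLU network on $[0,1]$ by detecting slope changes on a fine grid; the architecture and all names are ours, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_net(x, weights):
    # x: (n,) inputs; weights: list of (W, b) pairs, one per layer.
    h = x[:, None]
    for W, b in weights[:-1]:
        h = np.maximum(h @ W + b, 0.0)   # ReLU on all hidden layers
    W, b = weights[-1]
    return (h @ W + b).ravel()           # linear output layer

def count_affine_pieces(f, n_grid=200_000, tol=1e-6):
    x = np.linspace(0.0, 1.0, n_grid)
    slopes = np.diff(f(x)) / np.diff(x)
    # A new affine piece starts wherever the slope jumps.
    return 1 + int(np.sum(np.abs(np.diff(slopes)) > tol))

widths = [1, 16, 16, 1]
weights = [(rng.standard_normal((m, n)), rng.standard_normal(n))
           for m, n in zip(widths[:-1], widths[1:])]
print(count_affine_pieces(lambda x: relu_net(x, weights)))
```

Tracking this count along gradient-descent iterates is one way to probe whether the exponential piece counts used in approximation-theoretic constructions actually arise during training.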

Modified Patankar-Runge-Kutta (MPRK) methods preserve the positivity as well as conservativity of a production-destruction system (PDS) of ordinary differential equations for all time step sizes. As a result, higher order MPRK schemes do not belong to the class of general linear methods, i.e. the iterates are generated by a nonlinear map $\mathbf g$ even when the PDS is linear. Moreover, due to the conservativity of the method, the map $\mathbf g$ possesses non-hyperbolic fixed points. Recently, a new theorem for the investigation of stability properties of non-hyperbolic fixed points of a nonlinear iteration map was developed. We apply this theorem to understand the stability properties of a family of second order MPRK methods when applied to a nonlinear PDS of ordinary differential equations. It is shown that the fixed points are stable for all time step sizes and members of the MPRK family. Finally, experiments are presented to numerically support the theoretical claims.
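For readers unfamiliar with this scheme family, here is a hedged Python sketch of its simplest member, the first-order modified Patankar-Euler method (the second-order MPRK schemes analyzed in the paper add a weighted second stage); the names and the toy PDS below are ours.

```python
import numpy as np

def mpe_step(y, dt, production):
    # production(y)[i, j] >= 0 is the rate at which species j turns
    # into species i; taking D = P.T makes the PDS conservative.
    P = production(y)
    D = P.T
    n = len(y)
    M = np.eye(n)
    for i in range(n):
        M[i, i] += dt * D[i].sum() / y[i]    # Patankar-weighted destruction
        for j in range(n):
            M[i, j] -= dt * P[i, j] / y[j]   # Patankar-weighted production
    # M is an M-matrix, so the new iterate stays positive for any dt > 0,
    # and D = P.T keeps the total mass sum(y) exactly conserved.
    return np.linalg.solve(M, y)

# Linear toy PDS: y1 -> y2 at rate 5*y1, y2 -> y1 at rate 1*y2.
prod = lambda y: np.array([[0.0, 1.0 * y[1]],
                           [5.0 * y[0], 0.0]])
y = np.array([0.9, 0.1])
for _ in range(50):
    y = mpe_step(y, dt=0.5, production=prod)
print(y, y.sum())  # converges to the equilibrium (1/6, 5/6); sum stays 1
```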

Collecting complete network data is expensive, time-consuming, and often infeasible. Aggregated Relational Data (ARD), which capture information about a social network by asking a respondent questions of the form ``How many people with trait X do you know?'', provide a low-cost option when collecting complete network data is not possible. Rather than asking about connections between each pair of individuals directly, ARD record the number of contacts the respondent knows with a given trait. Despite widespread use and a growing literature on ARD methodology, there is still no systematic understanding of when and why ARD should accurately recover features of the unobserved network. This paper provides such a characterization by deriving conditions under which statistics about the unobserved network (or functions of these statistics, like regression coefficients) can be consistently estimated using ARD. We do this by first providing consistent estimates of network model parameters for three commonly used probabilistic models: the beta-model with node-specific unobserved effects, the stochastic block model with unobserved community structure, and latent geometric space models with unobserved latent locations. A key observation behind these results is that cross-group link probabilities for a collection of (possibly unobserved) groups identify the model parameters, meaning ARD are sufficient for parameter estimation. With these estimated parameters, it is possible to simulate graphs from the fitted distribution and analyze the distribution of network statistics. We can then characterize the conditions under which the simulated networks based on ARD allow for consistent estimation of unobserved network statistics, such as eigenvector centrality, or response functions by or of the unobserved network, such as regression coefficients.
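To fix ideas, the following minimal sketch (our toy setup, with an Erdős-Rényi stand-in for the three latent-variable models above) shows what ARD are: with adjacency matrix $A$ and binary trait indicators $X$, respondent $i$'s answer to ``How many people with trait $k$ do you know?'' is simply $(AX)_{ik}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 200, 10

# Hypothetical latent network (Erdos-Renyi here purely for illustration).
A = rng.random((n, n)) < 0.05
A = np.triu(A, 1)
A = (A | A.T).astype(int)                    # symmetric, no self-loops

X = (rng.random((n, K)) < 0.2).astype(int)   # trait memberships
Y = A @ X                                    # the n x K ARD matrix
print(Y.shape, Y[0])   # row i holds respondent i's K survey answers
```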

Obtaining first-order regret bounds -- regret bounds scaling not as the worst-case but with some measure of the performance of the optimal policy on a given instance -- is a core question in sequential decision-making. While such bounds exist in many settings, they have proven elusive in reinforcement learning with large state spaces. In this work we address this gap, and show that it is possible to obtain regret scaling as $\widetilde{\mathcal{O}}(\sqrt{d^3 H^3 \cdot V_1^\star \cdot K} + d^{3.5}H^3\log K )$ in reinforcement learning with large state spaces, namely the linear MDP setting. Here $V_1^\star$ is the value of the optimal policy and $K$ is the number of episodes. We demonstrate that existing techniques based on least squares estimation are insufficient to obtain this result, and instead develop a novel robust self-normalized concentration bound based on the robust Catoni mean estimator, which may be of independent interest.
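For intuition about the estimator underlying the concentration bound, here is a hedged sketch of the plain scalar Catoni mean estimator (the paper's self-normalized, robust variant is more involved; the bisection approach and the parameter choice below are ours).

```python
import numpy as np

def psi(x):
    # Catoni's influence function: grows only logarithmically, which is
    # what blunts the effect of heavy tails and outliers.
    return np.sign(x) * np.log1p(np.abs(x) + 0.5 * x * x)

def catoni_mean(x, alpha, iters=100):
    # The score sum_i psi(alpha * (x_i - theta)) is decreasing in theta,
    # so its unique root can be found by bisection.
    lo, hi = float(x.min()), float(x.max())
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if psi(alpha * (x - mid)).sum() > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 1000), [50.0]])  # one gross outlier
print(np.mean(x), catoni_mean(x, alpha=0.1))  # Catoni barely moves
```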

We study a submodular maximization problem motivated by applications in online retail. A platform displays a list of products to a user in response to a search query. The user inspects the first $k$ items in the list, for a $k$ chosen at random from a given distribution, and decides whether to purchase an item from that set based on a choice model. The goal of the platform is to maximize the engagement of the shopper, defined as the probability of purchase. This problem gives rise to a less-studied variation of submodular maximization in which we are asked to choose an $\textit{ordering}$ of a set of elements to maximize a linear combination of different submodular functions. First, using a reduction to maximizing submodular functions over matroids, we give an optimal $\left(1-1/e\right)$-approximation for this problem. We then consider a variant in which the platform cares not only about user engagement but also about diversification across various groups of users, that is, guaranteeing a certain probability of purchase in each group. We characterize the polytope of feasible solutions and give a bi-criteria $((1-1/e)^2,(1-1/e)^2)$-approximation for this problem by rounding an approximate solution of a linear programming relaxation. For rounding, we rely on our reduction and on particular rounding techniques for matroid polytopes. For the special case in which the underlying submodular functions are coverage functions -- which is practically relevant in online retail -- we propose an alternative LP relaxation and a simpler randomized rounding for the problem. This approach yields an optimal bi-criteria $(1-1/e,1-1/e)$-approximation algorithm for the special case of the problem with coverage functions.
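To make the ordering objective concrete, here is a hedged Python sketch of the problem data together with a natural greedy heuristic (the paper's optimal $(1-1/e)$-approximation goes through the matroid reduction instead; the toy instance and names are ours).

```python
def greedy_ordering(elements, f, q):
    # q[k] = probability the user inspects the first k+1 positions;
    # the objective is E_k[ f(first k+1 items) ], a linear combination
    # of submodular functions of the prefixes of the ordering.
    order, chosen = [], set()
    while len(order) < len(q):
        best = max((e for e in elements if e not in chosen),
                   key=lambda e: f(chosen | {e}) - f(chosen))
        order.append(best)
        chosen.add(best)
    value = sum(p * f(set(order[:k + 1])) for k, p in enumerate(q))
    return order, value

# Toy coverage instance: each product covers a set of user intents.
cover = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 4}}

def f(S):
    # Coverage function: number of distinct intents covered by S.
    return len(set().union(*(cover[e] for e in S))) if S else 0

print(greedy_ordering(list(cover), f, q=[0.5, 0.3, 0.2]))
```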

In this paper, we investigate the novel problem of building contextual bandits in the vertical federated setting, i.e., where contextual information is vertically distributed over different departments. This problem remains largely unexplored in the research community. To this end, we carefully design a customized encryption scheme named the orthogonal matrix-based mask mechanism (O3M) for encrypting local contextual information while avoiding expensive conventional cryptographic techniques. We further apply the mechanism to two commonly used bandit algorithms, LinUCB and LinTS, and instantiate two practical protocols for online recommendation in the vertical federated setting. The proposed protocols can perfectly recover the service quality of centralized bandit algorithms while achieving satisfactory runtime efficiency, as we prove and analyze theoretically in this paper. Extensive experiments on both synthetic and real-world datasets show the superiority of the proposed method in terms of privacy protection and recommendation performance.
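The algebra that makes an orthogonal mask compatible with LinUCB-style statistics can be seen in a few lines. The following is our reconstruction of the core trick, not the full O3M protocol: releasing $Qx$ instead of $x$ for a secret orthogonal $Q$ preserves all inner products, and hence the ridge-regression quantities LinUCB needs.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # secret orthogonal mask

x, y = rng.standard_normal(d), rng.standard_normal(d)
print(np.allclose(x @ y, (Q @ x) @ (Q @ y)))      # inner products survive

# The Gram matrix LinUCB inverts transforms by conjugation only, so UCB
# scores computed from masked features match the centralized ones.
X = rng.standard_normal((100, d))                 # raw contexts (rows)
X_masked = X @ Q.T                                # masked contexts
print(np.allclose(X_masked.T @ X_masked, Q @ (X.T @ X) @ Q.T))
```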

Submodular functions and their variants, through their ability to characterize diversity and coverage, have emerged as a key tool for data selection and summarization. Many recent approaches to learning submodular functions suffer from limited expressiveness. In this work, we propose FLEXSUBNET, a family of flexible neural models for both monotone and non-monotone submodular functions. To fit a latent submodular function from (set, value) observations, FLEXSUBNET applies a concave function to modular functions in a recursive manner. We do not draw the concave function from a restricted family, but rather learn it from data using a highly expressive neural network that implements a differentiable quadrature procedure. Such an expressive neural model for concave functions may be of independent interest. Next, we extend this setup to provide a novel characterization of monotone $\alpha$-submodular functions, a recently introduced notion of approximate submodular functions. We then use this characterization to design a novel neural model for such functions. Finally, we consider learning submodular set functions under distant supervision in the form of (perimeter-set, high-value-subset) pairs. This yields a novel subset selection method based on an order-invariant, yet greedy, sampler built around the above neural set functions. Our experiments on synthetic and real data show that FLEXSUBNET outperforms several baselines.
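A stripped-down rendition of the recursive concave-over-modular construction is sketched below (our simplification: FLEXSUBNET learns the concave functions with a neural quadrature network, whereas here a fixed $\sqrt{\cdot}$ stands in for them).

```python
import math

def make_scm(weights, concave=math.sqrt):
    # f(S) = phi(sum_{e in S} w_e) is monotone submodular whenever phi
    # is concave and nondecreasing and all weights are nonnegative.
    return lambda S: concave(sum(weights[e] for e in S))

def recursive_scm(weight_layers, concave=math.sqrt):
    # Nonnegative sums of such functions are again monotone submodular,
    # so the construction can be nested recursively.
    fs = [make_scm(w, concave) for w in weight_layers]
    return lambda S: concave(sum(g(S) for g in fs))

f = recursive_scm([{"a": 1.0, "b": 4.0}, {"a": 2.0, "b": 0.5}])
# Diminishing returns: the gain of "b" shrinks once "a" is present.
print(f({"b"}) - f(set()), f({"a", "b"}) - f({"a"}))
```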

High-dimensional matrix-variate time series data are becoming widely available in many scientific fields, such as economics, biology, and meteorology. To achieve significant dimension reduction while preserving the intrinsic matrix structure and temporal dynamics in such data, Wang et al. (2017) proposed a matrix factor model that is shown to provide effective analysis. In this paper, we establish a general framework for incorporating domain or prior knowledge into the matrix factor model through linear constraints. The proposed framework is shown to be useful in achieving parsimonious parameterization, facilitating interpretation of the latent matrix factor, and identifying specific factors of interest. Fully utilizing the prior-knowledge-induced constraints results in more efficient and accurate modeling, inference, and dimension reduction, as well as a clearer and better interpretation of the results. Specifically, we develop constrained, multi-term, and partially constrained factor models for matrix-variate time series, together with efficient estimation procedures and their asymptotic properties. We show that the convergence rates of the constrained factor loading matrices are much faster than those of conventional matrix factor analysis in many situations. Simulation studies are carried out to demonstrate the finite-sample performance of the proposed method and its associated asymptotic properties. We illustrate the proposed model with three applications, in which the constrained matrix-factor models outperform their unconstrained counterparts in the power of variance explanation under the out-of-sample 10-fold cross-validation setting.
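In symbols (a hedged sketch; our notation may differ from the paper's in details), the underlying model is the matrix factor model $\mathbf{X}_t = \mathbf{R}\,\mathbf{F}_t\,\mathbf{C}^\top + \mathbf{E}_t$ for $t = 1, \dots, T$, where $\mathbf{X}_t$ is the observed $p \times q$ matrix, $\mathbf{F}_t$ the low-dimensional latent factor matrix, and $\mathbf{R}$, $\mathbf{C}$ the row and column loading matrices. Prior knowledge then enters through linear constraints of the form $\mathbf{R} = \mathbf{H}_R \boldsymbol{\Gamma}_R$ and $\mathbf{C} = \mathbf{H}_C \boldsymbol{\Gamma}_C$ with known constraint matrices $\mathbf{H}_R$, $\mathbf{H}_C$, which reduces the number of free loading parameters and is what drives the faster convergence rates.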

This paper considers the estimation and inference of the low-rank components in high-dimensional matrix-variate factor models, where each dimension of the matrix-variates ($p \times q$) is comparable to or greater than the number of observations ($T$). We propose an estimation method called $\alpha$-PCA that preserves the matrix structure and aggregates mean and contemporary covariance through a hyper-parameter $\alpha$. We develop an inferential theory, establishing consistency, the rate of convergence, and the limiting distributions, under general conditions that allow for correlations across time, rows, or columns of the noise. We provide both theoretical and empirical methods for choosing the best $\alpha$, depending on the use-case criteria. Simulation results demonstrate the adequacy of the asymptotic results in approximating the finite-sample properties. The $\alpha$-PCA method compares favorably with existing ones. Finally, we illustrate its applications with a real numeric data set and two real image data sets. In all applications, the proposed estimation procedure outperforms previous methods in the power of variance explanation using out-of-sample 10-fold cross-validation.
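A hedged sketch of the estimation idea follows (our paraphrase and normalization, not the paper's exact definition): aggregate the sample mean and the contemporary sample covariance of the matrix observations with weight $\alpha$, then read row loadings off the leading eigenvectors.

```python
import numpy as np

def alpha_pca_row_loadings(X, k, alpha=0.0):
    # X: (T, p, q) array of matrix-variate observations.
    T, p, q = X.shape
    Xbar = X.mean(axis=0)
    S = sum((Xt - Xbar) @ (Xt - Xbar).T for Xt in X) / T
    M = (1.0 + alpha) * (Xbar @ Xbar.T) + S    # mean/covariance mix
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, ::-1][:, :k]                # top-k eigenvectors

rng = np.random.default_rng(0)
R = rng.standard_normal((20, 3))               # true row loadings
X = np.stack([R @ rng.standard_normal((3, 10))
              + 0.1 * rng.standard_normal((20, 10)) for _ in range(200)])
print(alpha_pca_row_loadings(X, k=3, alpha=1.0).shape)   # (20, 3)
```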
