In the general AntiFactor problem, a graph $G$ is given with a set $X_v\subseteq \mathbb{N}$ of forbidden degrees for every vertex $v$, and the task is to find a set $S$ of edges such that, for every vertex $v$, the degree of $v$ in $S$ is not in the set $X_v$. Standard techniques (dynamic programming + fast convolution) can be used to show that if $M$ is the largest forbidden degree, then the problem can be solved in time $(M+2)^k\cdot n^{O(1)}$ if a tree decomposition of width $k$ is given. However, significantly faster algorithms are possible if the sets $X_v$ are sparse: our main algorithmic result shows that if every vertex has at most $x$ forbidden degrees (we call this special case AntiFactor$_x$), then the problem can be solved in time $(x+1)^{O(k)}\cdot n^{O(1)}$. That is, AntiFactor$_x$ is fixed-parameter tractable parameterized by treewidth $k$ and the maximum number $x$ of excluded degrees. Our algorithm uses the technique of representative sets, which can be generalized to the optimization version, but (as expected) not to the counting version of the problem. In fact, we show that #AntiFactor$_1$ is already #W[1]-hard parameterized by the width of the given decomposition. Moreover, we show that, unlike for the decision version, the standard dynamic programming algorithm is essentially optimal for the counting version. Formally, for a fixed nonempty set $X$, we denote by $X$-AntiFactor the special case where every vertex $v$ has the same set $X_v=X$ of forbidden degrees. We show the following lower bound for every fixed set $X$: if there is an $\epsilon>0$ such that #$X$-AntiFactor can be solved in time $(\max X+2-\epsilon)^k\cdot n^{O(1)}$ on a tree decomposition of width $k$, then the Counting Strong Exponential-Time Hypothesis (#SETH) fails.
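
To make the problem statement concrete, here is a minimal brute-force sketch (our illustration, not the paper's representative-sets algorithm): it enumerates all edge subsets of a toy graph and checks that every vertex avoids its forbidden degrees. The graph and forbidden sets below are hypothetical; the instance is an AntiFactor$_1$ example since every $X_v$ has size one.

```python
from itertools import combinations

def solve_antifactor(vertices, edges, X):
    """Brute-force AntiFactor: find an edge set S such that, for every
    vertex v, the degree of v in S is not in the forbidden set X[v].
    Exponential in |E|; for illustration only."""
    for r in range(len(edges) + 1):
        for S in combinations(edges, r):
            deg = {v: 0 for v in vertices}
            for u, w in S:
                deg[u] += 1
                deg[w] += 1
            if all(deg[v] not in X[v] for v in vertices):
                return set(S)
    return None  # no feasible edge set exists

# Hypothetical instance: a 4-cycle where every vertex forbids degree 1.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
X = {v: {1} for v in V}
print(solve_antifactor(V, E, X))  # set(): taking no edges leaves every degree 0
```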

Related content

We propose a theory for matrix completion that goes beyond the low-rank structure commonly considered in the literature and applies to general matrices of low description complexity, including sparse matrices, matrices with sparse factorizations (such as sparse R-factors in their QR decomposition), and algebraic combinations of matrices of low description complexity. The mathematical concept underlying this theory is that of rectifiability, a basic notion in geometric measure theory. Complexity of the sets of matrices encompassed by the theory is measured in terms of Hausdorff and Minkowski dimensions. Our goal is the characterization of the number of linear measurements, with an emphasis on rank-$1$ measurements, needed for the existence of an algorithm that yields reconstruction, either perfect, with probability 1, or with arbitrarily small probability of error, depending on the setup. Specifically, we show that matrices taken from a set $\mathcal{U}$ such that $\mathcal{U}-\mathcal{U}$ has Hausdorff dimension $s$ (or is countably $s$-rectifiable) can be recovered from $k>s$ measurements, and random matrices supported on a set $\mathcal{U}$ of Hausdorff dimension $s$ (or a countably $s$-rectifiable set) can be recovered with probability 1 from $k>s$ measurements. What is more, we establish the existence of $\beta$-H\"older continuous decoders recovering matrices taken from a set of upper Minkowski dimension $s$ from $k>2s/(1-\beta)$ measurements and, with arbitrarily small probability of error, random matrices supported on a set of upper Minkowski dimension $s$ from $k>s/(1-\beta)$ measurements.
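
To fix notation (our paraphrase of the standard measurement model, not text quoted from the paper): a linear measurement of a matrix $X$ is an inner product with a measurement matrix $A_i$, and a rank-$1$ measurement is one whose measurement matrix is an outer product:

$$ y_i = \langle A_i, X \rangle = \operatorname{tr}(A_i^\top X), \quad i = 1, \dots, k, \qquad A_i = a_i b_i^\top \quad \text{(rank-$1$ case)}. $$

Recovery on $\mathcal{U}$ then means a decoder $g$ with $g(y_1, \dots, y_k) = X$ for all $X \in \mathcal{U}$, possibly only $\beta$-H\"older continuous as in the last result.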

A continuous-time average consensus system is a linear dynamical system defined over a graph, where each node has its own state value that evolves according to coupled linear differential equations. A node is allowed to interact with neighboring nodes. Average consensus is the phenomenon in which all the state values converge to the average of the initial state values. In this paper, we assume that a node can communicate with neighboring nodes through an additive white Gaussian noise channel. We first formulate the noisy average consensus system by using a stochastic differential equation (SDE), which allows us to use the Euler-Maruyama method, a numerical technique for solving SDEs. By studying the stochastic behavior of the residual error of the Euler-Maruyama method, we arrive at the covariance evolution equation. The analysis of the residual error leads to a compact formula for the mean squared error (MSE), which shows that the sum of the inverse eigenvalues of the Laplacian matrix is the most dominant factor influencing the MSE. Furthermore, we propose optimization problems aimed at minimizing the MSE at a given target time, and introduce a deep unfolding-based optimization method to solve them. The quality of the solution is validated by numerical experiments.
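
As a hedged illustration of the simulation setup (the graph, noise level, and step size below are our assumptions, not the paper's), the Euler-Maruyama discretization of the noisy consensus SDE $dx = -Lx\,dt + \sigma\,dW$ reads $x_{k+1} = x_k - \delta L x_k + \sigma\sqrt{\delta}\, w_k$ with $w_k \sim \mathcal{N}(0, I)$:

```python
import numpy as np

# Path graph on 4 nodes: an illustrative Laplacian (assumption, not from the paper).
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])

def euler_maruyama_consensus(L, x0, sigma, dt, steps, rng):
    """Simulate dx = -L x dt + sigma dW by Euler-Maruyama."""
    x = x0.copy()
    for _ in range(steps):
        x = x - dt * (L @ x) + sigma * np.sqrt(dt) * rng.standard_normal(len(x))
    return x

rng = np.random.default_rng(0)
x0 = np.array([1.0, 3.0, 5.0, 7.0])  # initial average is 4.0
x_T = euler_maruyama_consensus(L, x0, sigma=0.05, dt=0.01, steps=1000, rng=rng)
print(x_T)  # all states should hover near 4.0, up to the accumulated noise
```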

This paper provides a finite-time analysis of linear stochastic approximation (LSA) algorithms with fixed step size, a core method in statistics and machine learning. LSA is used to compute approximate solutions of a $d$-dimensional linear system $\bar{\mathbf{A}} \theta = \bar{\mathbf{b}}$ for which $(\bar{\mathbf{A}}, \bar{\mathbf{b}})$ can only be estimated through (asymptotically) unbiased observations $\{(\mathbf{A}(Z_n),\mathbf{b}(Z_n))\}_{n \in \mathbb{N}}$. We consider here the case where $\{Z_n\}_{n \in \mathbb{N}}$ is an i.i.d. sequence or a uniformly geometrically ergodic Markov chain. We derive $p$-th moment and high-probability deviation bounds for the iterates defined by LSA and its Polyak-Ruppert-averaged version. Our finite-time instance-dependent bounds for the averaged LSA iterates are sharp in the sense that the leading term we obtain coincides with the local asymptotic minimax limit. Moreover, the remainder terms of our bounds admit a tight dependence on the mixing time $t_{\operatorname{mix}}$ of the underlying chain and the norm of the noise variables. We emphasize that our result requires the SA step size to scale only with the logarithm of the problem dimension $d$.
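
A minimal sketch of fixed-step LSA with Polyak-Ruppert averaging under i.i.d. observations (the 2-dimensional system, noise model, and step size below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
A_bar = np.array([[2.0, 0.5], [0.5, 1.0]])  # target system: A_bar theta = b_bar
b_bar = np.array([1.0, -1.0])
theta_star = np.linalg.solve(A_bar, b_bar)

def lsa_polyak_ruppert(n_iters, alpha, noise=0.1):
    """theta_{k+1} = theta_k - alpha (A(Z_k) theta_k - b(Z_k)) with i.i.d.
    unbiased observations; returns the final iterate and its running average."""
    theta = np.zeros(2)
    theta_avg = np.zeros(2)
    for k in range(1, n_iters + 1):
        A_k = A_bar + noise * rng.standard_normal((2, 2))  # E[A_k] = A_bar
        b_k = b_bar + noise * rng.standard_normal(2)       # E[b_k] = b_bar
        theta = theta - alpha * (A_k @ theta - b_k)
        theta_avg += (theta - theta_avg) / k               # Polyak-Ruppert average
    return theta, theta_avg

theta_n, theta_pr = lsa_polyak_ruppert(n_iters=20000, alpha=0.05)
print(theta_star, theta_pr)  # the averaged iterate should be close to theta_star
```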

Many streaming algorithms provide only a high-probability relative approximation. These two relaxations, of allowing approximation and randomization, seem necessary: for many streaming problems, both relaxations must be employed simultaneously to avoid an exponentially larger (and often trivial) space complexity. A common drawback of these randomized approximate algorithms is that independent executions on the same input have different outputs that depend on their random coins. Pseudo-deterministic algorithms combat this issue: for every input, they output with high probability the same ``canonical'' solution. We consider perhaps the most basic problem in data streams, of counting the number of items in a stream of length at most $n$. Morris's counter [CACM, 1978] is a randomized approximation algorithm for this problem that uses $O(\log\log n)$ bits of space, for every fixed approximation factor (greater than $1$). Goldwasser, Grossman, Mohanty and Woodruff [ITCS 2020] asked whether pseudo-deterministic approximation algorithms can match this space complexity. Our main result answers their question negatively, and shows that such algorithms must use $\Omega(\sqrt{\log n / \log\log n})$ bits of space. Our approach is based on a problem that we call Shift Finding, which may be of independent interest. In this problem, one has query access to a shifted version of a known string $F\in\{0,1\}^{3n}$, which is guaranteed to start with $n$ zeros and end with $n$ ones, and the goal is to find the unknown shift using a small number of queries. We provide for this problem an algorithm that uses $O(\sqrt{n})$ queries. It remains open whether $\mathrm{poly}(\log n)$ queries suffice; if true, then our techniques immediately imply a nearly-tight $\Omega(\log n/\log\log n)$ space bound for pseudo-deterministic approximate counting.
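
For reference, a sketch of Morris's counter in its basic base-$2$ form (the fixed-approximation-factor variant adjusts the base; this simplification is ours):

```python
import random

class MorrisCounter:
    """Approximate counter: store only c (about log log n bits) and increment
    it with probability 2^{-c}; the estimate 2^c - 1 is unbiased for the count."""
    def __init__(self):
        self.c = 0
    def increment(self):
        if random.random() < 2.0 ** (-self.c):
            self.c += 1
    def estimate(self):
        return 2 ** self.c - 1

counter = MorrisCounter()
for _ in range(100_000):
    counter.increment()
print(counter.estimate())  # roughly 100000, with constant relative error
```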

We present an $\tilde{O}(\log^2 n)$-round deterministic distributed algorithm for the maximal independent set problem. By known reductions, this round complexity extends also to maximal matching, $\Delta+1$ vertex coloring, and $2\Delta-1$ edge coloring. These four problems are among the most central problems in distributed graph algorithms and have been studied extensively for the past four decades. This improved round complexity comes closer to the $\tilde{\Omega}(\log n)$ lower bound for maximal independent set and maximal matching [Balliu et al. FOCS '19]. The previous best known deterministic complexity for all of these problems was $\Theta(\log^3 n)$. Via the shattering technique, the improvement permeates also to the corresponding randomized complexities, e.g., the new randomized complexity of $\Delta+1$ vertex coloring is now $\tilde{O}(\log^2\log n)$ rounds. Our approach is a novel combination of the two previously known methods for developing deterministic algorithms for these problems, namely global derandomization via network decomposition (see e.g., [Rozhon, Ghaffari STOC'20; Ghaffari, Grunau, Rozhon SODA'21; Ghaffari et al. SODA'23]) and local rounding of fractional solutions (see e.g., [Fischer DISC'17; Harris FOCS'19; Fischer, Ghaffari, Kuhn FOCS'17; Ghaffari, Kuhn FOCS'21; Faour et al. SODA'23]). We consider a relaxation of the classic network decomposition concept, where instead of requiring the clusters in the same block to be non-adjacent, we allow each node to have a small number of neighboring clusters. We also show a deterministic algorithm that computes this relaxed decomposition faster than standard decompositions. We then use this relaxed decomposition to significantly improve the integrality of certain fractional solutions, before handing them to the local rounding procedure, which now has to do fewer rounding steps.
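
The deterministic algorithm itself is involved; as a point of reference for the synchronous message-passing setting, here is a round-by-round simulation of the classic randomized alternative, Luby's algorithm. This is explicitly not the deterministic method of this paper, just a baseline for the same problem:

```python
import random

def luby_mis(adj):
    """Luby's randomized MIS, simulated round by round. adj: dict mapping each
    vertex to its neighbor set. Each round, every live vertex draws a random
    value; local minima join the MIS and are removed with their neighbors."""
    live = set(adj)
    mis = set()
    while live:
        r = {v: random.random() for v in live}
        winners = {v for v in live
                   if all(r[v] < r[u] for u in adj[v] if u in live)}
        mis |= winners
        live -= winners | {u for v in winners for u in adj[v]}
    return mis

# Toy graph: a 5-cycle (illustrative).
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(luby_mis(adj))  # a maximal independent set, e.g. {0, 2}
```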

In this paper, an efficient ensemble domain decomposition algorithm is proposed for the fast solution of the fully-mixed random Stokes-Darcy model with the physically realistic Beavers-Joseph (BJ) interface conditions. We utilize the Monte Carlo method for the coupled model with random inputs to derive a collection of deterministic Stokes-Darcy numerical models, and use the ensemble idea to compute these multiple problems efficiently. One remarkable feature of the algorithm is that the multiple linear systems share a common coefficient matrix in each deterministic numerical model, which significantly reduces the computational cost while achieving accuracy comparable to traditional methods. Moreover, by domain decomposition, we can naturally decouple the Stokes-Darcy system into two smaller sub-physics problems. Both mesh-dependent and mesh-independent convergence rates of the algorithm are rigorously derived by choosing suitable Robin parameters, and optimized Robin parameters are derived and analyzed to accelerate convergence. In particular, for the small hydraulic conductivity typical in practice, almost optimal geometric convergence can be obtained under finite element discretization. Finally, two groups of numerical experiments are conducted to validate and illustrate the distinctive features of the proposed algorithm.
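
The computational payoff of the ensemble idea (many linear systems sharing one coefficient matrix) is that the matrix can be factorized once and reused for every right-hand side. A generic sketch of this pattern follows; the matrix and right-hand sides are placeholders, not a discretized Stokes-Darcy operator:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n, n_samples = 200, 50                # DOFs and Monte Carlo samples (illustrative)
rng = np.random.default_rng(2)
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # shared coefficient matrix
B = rng.standard_normal((n, n_samples))             # one RHS per random sample

lu, piv = lu_factor(A)        # factorize once: O(n^3)
X = lu_solve((lu, piv), B)    # then each solve costs only O(n^2)

print(np.allclose(A @ X, B))  # True: all 50 systems solved with one factorization
```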

This paper proposes a regularizer called the Implicit Neural Representation Regularizer (INRR) to improve the generalization ability of Implicit Neural Representations (INRs). An INR is a fully connected network that can represent signals with details not restricted by grid resolution. However, its generalization ability is limited, especially with non-uniformly sampled data. The proposed INRR is based on a learned Dirichlet Energy (DE) that measures similarities between rows/columns of the matrix. The smoothness of the Laplacian matrix is further integrated by parameterizing DE with a tiny INR. INRR improves the generalization of INR in signal representation by integrating the signal's self-similarity with the smoothness of the Laplacian matrix. Through well-designed numerical experiments, the paper also reveals a series of properties derived from INRR, including a momentum-like convergence trajectory and multi-scale similarity. Moreover, the proposed method can improve the performance of other signal representation methods.
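
A minimal sketch of the central quantity, in a plain numpy formulation of our own rather than the paper's full INRR parameterization: the Dirichlet energy of a matrix with respect to a graph Laplacian built from pairwise row similarities.

```python
import numpy as np

def dirichlet_energy(X, W):
    """Dirichlet energy tr(X^T L X) with L = D - W, where W holds nonnegative
    pairwise similarities between rows of X. It equals
    0.5 * sum_{i,j} W_ij ||X_i - X_j||^2, so small energy means similar rows
    carry similar values (the self-similarity prior)."""
    L = np.diag(W.sum(axis=1)) - W
    return float(np.trace(X.T @ L @ X))

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 4))  # stand-in for a matrix represented by an INR
W = np.exp(-np.square(X[:, None, :] - X[None, :, :]).sum(-1))  # Gaussian similarities
np.fill_diagonal(W, 0.0)
print(dirichlet_energy(X, W))    # value added to the training loss as a regularizer
```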

This work derives upper bounds on the convergence rate of the moment-sum-of-squares hierarchy with correlative sparsity for global minimization of polynomials on compact basic semialgebraic sets. The main conclusion is that both sparse hierarchies based on the Schm\"udgen and Putinar Positivstellens\"atze enjoy a polynomial rate of convergence that depends on the size of the largest clique in the sparsity graph but not on the ambient dimension. Interestingly, the sparse bounds outperform the best currently available bounds for the dense hierarchy when the maximum clique size is sufficiently small compared to the ambient dimension and performance is measured by the running time of an interior point method required to obtain a bound of a given accuracy on the global minimum.
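
Schematically (our paraphrase of the standard correlative-sparsity setup, not a formula quoted from this work), a sparse Putinar-type certificate writes the shifted polynomial as a sum over the cliques $I_k$ of the sparsity graph, with every SOS multiplier depending only on that clique's variables:

$$ f - f^\star + \varepsilon = \sum_{k} \Big( \sigma_{k,0} + \sum_{j \in J_k} \sigma_{k,j}\, g_j \Big), \qquad \sigma_{k,0},\, \sigma_{k,j} \in \Sigma[x_{I_k}], $$

where $\Sigma[x_{I_k}]$ denotes sums of squares in the clique variables and each constraint $g_j$ is assigned to a clique containing its variables. The degree required for such a certificate as a function of $\varepsilon$ is what the convergence-rate bounds quantify.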

In this work, we study discrete minimizers of the Ginzburg-Landau energy in finite element spaces. Special focus is given to the influence of the Ginzburg-Landau parameter $\kappa$. This parameter is of physical interest as large values can trigger the appearance of vortex lattices. Since the vortices have to be resolved on sufficiently fine computational meshes, it is important to translate the size of $\kappa$ into a mesh resolution condition, which can be done through error estimates that are explicit with respect to $\kappa$ and the spatial mesh width $h$. For that, we first work in an abstract framework for a general class of discrete spaces, where we present convergence results in a problem-adapted $\kappa$-weighted norm. Afterwards we apply our findings to Lagrangian finite elements and a particular generalized finite element construction. In numerical experiments we confirm that our derived $L^2$- and $H^1$-error estimates are indeed optimal in $\kappa$ and $h$.
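
For orientation, one common normalization of the Ginzburg-Landau energy without a magnetic potential is shown below; conventions vary across the literature, so this form is an assumption made only to indicate how $\kappa$ enters:

$$ E(u) = \frac{1}{2} \int_\Omega |\nabla u|^2 \, dx + \frac{\kappa^2}{4} \int_\Omega \big(1 - |u|^2\big)^2 \, dx. $$

Large $\kappa$ strongly penalizes deviations from $|u| = 1$, producing vortex cores of size on the order of $1/\kappa$, which is why the mesh width $h$ must be tied to $\kappa$.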

A set $S\subseteq V$ of vertices of a graph $G$ is a \emph{$c$-clustered set} if it induces a subgraph with components of order at most $c$ each, and $\alpha_c(G)$ denotes the size of a largest $c$-clustered set. For any graph $G$ on $n$ vertices and treewidth $k$, we show that $\alpha_c(G) \geq \frac{c}{c+k+1}n$, which improves a result of Wood [arXiv:2208.10074, August 2022], while we construct $n$-vertex graphs $G$ of treewidth~$k$ with $\alpha_c(G)\leq \frac{c}{c+k}n$. In the case $c\leq 2$ or $k=1$ we prove the better lower bound $\alpha_c(G) \geq \frac{c}{c+k}n$, which settles a conjecture of Chappell and Pelsmajer [Electron.\ J.\ Comb., 2013] and is best possible. Finally, in the case $c=3$ and $k=2$, we show $\alpha_c(G) \geq \frac{5}{9}n$, which is best possible.
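
To pin down the definition, here is a brute-force computation of $\alpha_c(G)$ on a toy graph (exponential time, purely illustrative):

```python
from itertools import combinations

def is_c_clustered(subset, adj, c):
    """True iff every component of the subgraph induced by subset has <= c vertices."""
    remaining = set(subset)
    while remaining:
        stack, comp = [next(iter(remaining))], set()
        while stack:                      # DFS over one component
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(u for u in adj[v] if u in remaining)
        if len(comp) > c:
            return False
        remaining -= comp
    return True

def alpha_c(adj, c):
    """Size of a largest c-clustered set, enumerating subsets largest-first."""
    V = list(adj)
    for size in range(len(V), -1, -1):
        if any(is_c_clustered(S, adj, c) for S in combinations(V, size)):
            return size

# Toy: the 4-cycle has treewidth 2; any 3 of its vertices induce a connected
# path, so alpha_2(C4) = 2, matching the bound c/(c+k) * n = (2/4) * 4 = 2.
adj = {i: {(i - 1) % 4, (i + 1) % 4} for i in range(4)}
print(alpha_c(adj, c=2))  # 2
```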
