

We consider the product of determinantal point processes (DPPs), a point process whose probability mass is proportional to the product of principal minors of multiple matrices, as a natural, promising generalization of DPPs. We study the computational complexity of computing its normalizing constant, which is among the most essential probabilistic inference tasks. Our complexity-theoretic results (almost) rule out the existence of efficient algorithms for this task unless the input matrices are forced to have favorable structures. In particular, we prove the following: (1) Computing $\sum_S\det({\bf A}_{S,S})^p$ exactly for every (fixed) positive even integer $p$ is UP-hard and Mod$_3$P-hard, which gives a negative answer to an open question posed by Kulesza and Taskar. (2) $\sum_S\det({\bf A}_{S,S})\det({\bf B}_{S,S})\det({\bf C}_{S,S})$ is NP-hard to approximate within a factor of $2^{O(|I|^{1-\epsilon})}$ or $2^{O(n^{1/\epsilon})}$ for any $\epsilon>0$, where $|I|$ is the input size and $n$ is the order of the input matrix. This result is stronger than the #P-hardness for the case of two matrices derived by Gillenwater. (3) There exists a $k^{O(k)}n^{O(1)}$-time algorithm for computing $\sum_S\det({\bf A}_{S,S})\det({\bf B}_{S,S})$, where $k$ is the maximum rank of $\bf A$ and $\bf B$ or the treewidth of the graph formed by nonzero entries of $\bf A$ and $\bf B$. Such parameterized algorithms are said to be fixed-parameter tractable. These results can be extended to the fixed-size case. Further, we present two applications of fixed-parameter tractable algorithms given a matrix $\bf A$ of treewidth $w$: (4) We can compute a $2^{\frac{n}{2p-1}}$-approximation to $\sum_S\det({\bf A}_{S,S})^p$ for any fractional number $p>1$ in $w^{O(wp)}n^{O(1)}$ time. (5) We can find a $2^{\sqrt n}$-approximation to unconstrained MAP inference in $w^{O(w\sqrt n)}n^{O(1)}$ time.
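
To make the quantity in question concrete, here is a minimal brute-force sketch (exponential in $n$, purely for illustration) of the normalizing constant $\sum_S\prod_i\det(({\bf M}_i)_{S,S})$. The function and variable names are ours, not the paper's; the single-matrix sanity check uses the standard identity $\sum_S\det({\bf A}_{S,S})=\det({\bf A}+{\bf I})$.

```python
import itertools
import numpy as np

def product_dpp_normalizer(mats, n):
    """Brute-force sum over all subsets S of prod_i det(M_i[S, S]).

    Exponential in n; only to illustrate the quantity whose hardness
    the paper studies. The determinant of the empty minor is 1.
    """
    total = 0.0
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            idx = np.ix_(S, S)
            term = 1.0
            for M in mats:
                term *= np.linalg.det(M[idx]) if S else 1.0
            total += term
    return total

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)); A = A @ A.T  # PSD example inputs
B = rng.standard_normal((n, n)); B = B @ B.T
# For a single matrix, the sum has the closed form det(A + I).
assert np.isclose(product_dpp_normalizer([A], n),
                  np.linalg.det(A + np.eye(n)))
print(product_dpp_normalizer([A, B], n))
```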

Related content

CC (Computational Complexity) stands out in the area of computational complexity. The journal sits at the intersection of mathematics and theoretical computer science, with a clear mathematical profile and rigorous mathematical formatting.
February 1, 2022

LU and Cholesky matrix factorization algorithms are core subroutines used to solve systems of linear equations (SLEs) encountered while solving an optimization problem. Standard factorization algorithms are highly efficient but remain susceptible to the accumulation of roundoff errors, which can lead solvers to return feasibility and optimality certificates that are actually invalid. This paper introduces a novel approach for solving sequences of closely related SLEs encountered in nonlinear programming efficiently and without roundoff errors. Specifically, it introduces rank-one update algorithms for the roundoff-error-free (REF) factorization framework, a toolset built on integer-preserving arithmetic that has led to the development and implementation of fail-proof SLE solution subroutines for linear programming. The guarantees of the proposed algorithms are formally established through the derivation of theoretical insights, and their computational advantages are supported by experiments demonstrating upwards of 75x improvements over exact factorization run-times on fully dense matrices with over one million entries. A significant advantage of the proposed methodology is that the length of any coefficient calculated via the associated algorithms is bounded polynomially in the size of the inputs, without resorting to the greatest common divisor operations that are required by, and thereby hinder, efficient implementations of exact rational arithmetic.
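
The integer-preserving arithmetic underlying the REF framework can be illustrated with classical Bareiss fraction-free elimination, in which every division is provably exact and hence roundoff-free. This is a generic textbook sketch, not the paper's rank-one update algorithm:

```python
def bareiss_det(M):
    """Fraction-free Gaussian elimination (Bareiss).

    All intermediate values remain integers and every division below
    is exact, so there is no roundoff. Returns det(M). This shows the
    integer-preserving arithmetic behind REF factorization; the
    paper's contribution -- rank-one *updates* of such factorizations
    -- is not reproduced here.
    """
    A = [row[:] for row in M]       # work on a copy
    n = len(A)
    prev_pivot = 1
    sign = 1
    for k in range(n):
        if A[k][k] == 0:            # simple pivoting by row swap
            swap = next((i for i in range(k + 1, n) if A[i][k]), None)
            if swap is None:
                return 0
            A[k], A[swap] = A[swap], A[k]
            sign = -sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Sylvester's identity guarantees exact division here.
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev_pivot
            A[i][k] = 0
        prev_pivot = A[k][k]
    return sign * A[n - 1][n - 1]

print(bareiss_det([[2, 3, 1], [4, 7, 5], [6, 18, 22]]))  # -16, exactly
```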

The lazy algorithm for a real base $\beta$ is generalized to the setting of Cantor bases $\boldsymbol{\beta}=(\beta_n)_{n\in \mathbb{N}}$ introduced recently by Charlier and the author. To do so, let $x_{\boldsymbol{\beta}}$ be the greatest real number that has a $\boldsymbol{\beta}$-representation $a_0a_1a_2\cdots$ such that each letter $a_n$ belongs to $\{0,\ldots,\lceil \beta_n \rceil -1\}$. This paper is concerned with the combinatorial properties of the lazy $\boldsymbol{\beta}$-expansions, which are defined when $x_{\boldsymbol{\beta}}<+\infty$. First, it is shown that the lazy $\boldsymbol{\beta}$-expansions are obtained by "flipping" the digits of the greedy $\boldsymbol{\beta}$-expansions. Next, a Parry-like criterion characterizing the sequences of non-negative integers that are the lazy $\boldsymbol{\beta}$-expansions of some real number in $(x_{\boldsymbol{\beta}}-1,x_{\boldsymbol{\beta}}]$ is proved. Moreover, the lazy $\boldsymbol{\beta}$-shift is studied, and in the particular case of alternate bases, that is, periodic Cantor bases, an analogue of Bertrand-Mathis' theorem in the lazy framework is proved: the lazy $\boldsymbol{\beta}$-shift is sofic if and only if all quasi-lazy $\boldsymbol{\beta}^{(i)}$-expansions of $x_{\boldsymbol{\beta}^{(i)}}-1$ are ultimately periodic, where $\boldsymbol{\beta}^{(i)}$ is the $i$-th shift of the alternate base $\boldsymbol{\beta}$. As an illustration, Cantor bases following the Thue-Morse sequence are studied and a formula giving their corresponding value of $x_{\boldsymbol{\beta}}$ is proved.
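
Below is a minimal sketch of the greedy algorithm in a Cantor base, together with the digit flip $a_n \mapsto \lceil\beta_n\rceil - 1 - a_n$ that, per the first result above, passes between greedy and lazy expansions. The particular base and the floating-point arithmetic are ours, for illustration only:

```python
import math

def greedy_digits(x, betas, n_digits):
    """Greedy expansion of x in the Cantor base (beta_0, beta_1, ...):
    x = sum_n a_n / (beta_0 * ... * beta_n), with a_n = floor(beta_n * r).
    Assumes 0 <= x < 1 and each beta_n > 1; floating point is used
    only for illustration, the paper works with exact reals.
    """
    digits, r = [], x
    for n in range(n_digits):
        r *= betas[n]
        a = math.floor(r)
        digits.append(a)
        r -= a
    return digits

def flip(digits, betas):
    """Digit flip a_n -> ceil(beta_n) - 1 - a_n used to pass between
    the greedy and lazy expansions (per the paper's first result)."""
    return [math.ceil(betas[n]) - 1 - a for n, a in enumerate(digits)]

# An alternate (period-2) Cantor base; the values are illustrative.
phi = (1 + 5 ** 0.5) / 2
betas = [phi if n % 2 == 0 else 3.0 for n in range(30)]
g = greedy_digits(0.7, betas, 30)
print(g)
print(flip(g, betas))
```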

A long line of research on fixed-parameter tractability of integer programming culminated in showing that integer programs with n variables and a constraint matrix with dual tree-depth d and largest entry D are solvable in time g(d,D)poly(n) for some function g. However, the dual tree-depth of a constraint matrix is not preserved by row operations, i.e., a given integer program can be equivalent to another with a smaller dual tree-depth, and thus does not reflect its geometric structure. We prove that the minimum dual tree-depth of a row-equivalent matrix is equal to the branch-depth of the matroid defined by the columns of the matrix. We design a fixed-parameter algorithm for computing the branch-depth of matroids represented over a finite field and a fixed-parameter algorithm for computing a row-equivalent matrix with minimum dual tree-depth. Finally, we use these results to obtain an algorithm for integer programming running in time g(d*,D)poly(n) where d* is the branch-depth of the constraint matrix; the branch-depth cannot be replaced by the more permissive notion of branch-width.
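
To pin down the parameter, the following sketch builds the dual graph of a constraint matrix (one vertex per row, adjacent when two rows share a nonzero column) and computes its tree-depth by brute force. The naive exponential recursion is our illustration of the definition, not the paper's fixed-parameter algorithm:

```python
import itertools

def dual_graph(A):
    """Dual graph of a constraint matrix: one vertex per row, an edge
    between two rows whenever some column is nonzero in both."""
    m, n = len(A), len(A[0])
    edges = set()
    for j in range(n):
        rows = [i for i in range(m) if A[i][j] != 0]
        edges.update(itertools.combinations(rows, 2))
    return set(range(m)), edges

def treedepth(vertices, edges):
    """Brute-force tree-depth: the tree-depth of a disconnected graph
    is the max over components; otherwise it is 1 + the min over the
    choice of a root vertex to delete. Exponential time -- only to
    make the parameter's definition concrete."""
    if not vertices:
        return 0
    comps = components(vertices, edges)
    if len(comps) > 1:
        return max(treedepth(c, edges) for c in comps)
    return 1 + min(treedepth(vertices - {v},
                             {e for e in edges if v not in e})
                   for v in vertices)

def components(vertices, edges):
    """Connected components of the subgraph induced by `vertices`."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in vertices and v in vertices:
            adj[u].add(v); adj[v].add(u)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u); stack.extend(adj[u] - comp)
        seen |= comp; comps.append(frozenset(comp))
    return comps

A = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]]  # dual graph is a path P3
print(treedepth(*dual_graph(A)))  # tree-depth of P3 is 2
```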

We give a quantum algorithm for solving the Bounded Distance Decoding (BDD) problem with a subexponential approximation factor on a class of integer lattices. The quantum algorithm uses a well-known but challenging-to-use quantum state on lattices as a type of approximate quantum eigenvector to randomly self-reduce the BDD instance to a random BDD instance which is solvable classically. The running time of the quantum algorithm is polynomial for one range of approximation factors and subexponential for a second range of approximation factors. The subclass of lattices we study has a natural description in terms of the lattice's periodicity and finite abelian group rank. This view makes for a clean quantum algorithm in terms of finite abelian groups, uses relatively little lattice theory, and suggests exploring approximation algorithms for lattice problems in parameters other than dimension alone. A talk on this paper sparked many lively discussions and resulted in a new classical algorithm matching part of our result. We leave it as a challenge to give a classical algorithm matching the general case.

We propose quantum subroutines for the simplex method that avoid classical computation of the basis inverse. We show how to quantize all steps of the simplex algorithm, including checking optimality, unboundedness, and identifying a pivot (i.e., pricing the columns and performing the ratio test) according to Dantzig's rule or the steepest edge rule. The quantized subroutines obtain a polynomial speedup in the dimension of the problem, but have worse dependence on other numerical parameters. For example, for a problem with $m$ constraints, $n$ variables, at most $d_c$ nonzero elements per column of the constraint matrix, at most $d$ nonzero elements per column or row of the basis, basis condition number $\kappa$, and optimality tolerance $\epsilon$, pricing can be performed in $\tilde{O}(\frac{1}{\epsilon}\kappa d \sqrt{n}(d_c n + d m))$ time, where the $\tilde{O}$ notation hides polylogarithmic factors; classically, pricing requires $O(d_c^{0.7} m^{1.9} + m^{2 + o(1)} + d_c n)$ time in the worst case using the fastest known algorithm for sparse matrix multiplication. For well-conditioned sparse problems the quantum subroutines scale better in $m$ and $n$, and may therefore have an advantage for very large problems. The running time of the quantum subroutines can be improved if the constraint matrix admits an efficient algorithmic description, or if quantum RAM is available.
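
For reference, here is the classical computation that the pricing subroutine quantizes: form the reduced costs $\bar{c}_j = c_j - c_B^\top {\bf B}^{-1} {\bf A}_j$ and select the most negative one under Dantzig's rule. A dense NumPy sketch with our own naming, ignoring sparsity:

```python
import numpy as np

def price_dantzig(A, c, basis):
    """Classical pricing step of the simplex method (Dantzig's rule).

    Computes reduced costs c_j - c_B^T B^{-1} A_j for all nonbasic
    columns and returns the index of the most negative one, or None
    if all are nonnegative (current basis optimal for min c^T x).
    The quantum subroutine in the paper replaces this computation.
    """
    B = A[:, basis]
    y = np.linalg.solve(B.T, c[basis])        # multipliers y = B^{-T} c_B
    reduced = c - A.T @ y                     # reduced costs, all columns
    nonbasic = np.setdiff1d(np.arange(A.shape[1]), basis)
    j = nonbasic[np.argmin(reduced[nonbasic])]
    return j if reduced[j] < -1e-9 else None  # tolerance plays the role of epsilon

# Tiny example: min -x1 - x2 s.t. x1 + x2 + s = 1, starting basis {s}.
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([-1.0, -1.0, 0.0])
print(price_dantzig(A, c, [2]))  # an improving column (index 0) enters
```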

We present an algorithm for the maximum matching problem in dynamic (insertion-deletion) streams with *asymptotically optimal* space complexity: for any $n$-vertex graph, our algorithm with high probability outputs an $\alpha$-approximate matching in a single pass using $O(n^2/\alpha^3)$ bits of space. A long line of work on the dynamic streaming matching problem has reduced the gap between space upper and lower bounds first to $n^{o(1)}$ factors [Assadi-Khanna-Li-Yaroslavtsev; SODA 2016] and subsequently to $\text{polylog}(n)$ factors [Dark-Konrad; CCC 2020]. Our upper bound now matches the Dark-Konrad lower bound up to $O(1)$ factors, thus completing this research direction. Our approach consists of two main steps: we first (provably) identify a family of graphs, similar to the instances used in prior work to establish the lower bounds for this problem, as the only "hard" instances to focus on. These graphs include an induced subgraph which is both sparse and contains a large matching. We then design a dynamic streaming algorithm for this family of graphs which is more efficient than prior work. The key to this efficiency is a novel sketching method, which bypasses the typical loss of $\text{polylog}(n)$ factors in space compared to standard $L_0$-sampling primitives, and can be of independent interest in designing optimal algorithms for other streaming problems.
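
The $L_0$-sampling primitive mentioned above returns a (near-)uniform nonzero coordinate of the vector maintained by the insertion-deletion stream. Below is a simplified textbook-style construction (geometric subsampling plus one-sparse recovery with a fingerprint test), not the paper's more space-efficient sketch:

```python
import random

class L0Sampler:
    """Simplified L0 sampler for an insertion-deletion stream over
    coordinates 0..n-1 (textbook construction, not the paper's sketch).

    Level j keeps only coordinates hashed there with prob ~2^-j and
    stores (sum of updates, index-weighted sum, random fingerprint).
    A level holding exactly one nonzero coordinate reveals it.
    """
    P = (1 << 61) - 1  # fingerprint field modulus

    def __init__(self, n, seed=0):
        rng = random.Random(seed)
        self.levels = n.bit_length() + 1
        self.r = rng.randrange(2, self.P)      # fingerprint base
        self.salt = rng.randrange(self.P)
        self.stats = [[0, 0, 0] for _ in range(self.levels)]

    def _level(self, i):
        # Per-index "hash": how many levels coordinate i survives.
        h = hash((self.salt, i)) & 0xFFFFFFFF
        lvl = 0
        while lvl + 1 < self.levels and (h >> lvl) & 1:
            lvl += 1
        return lvl

    def update(self, i, delta):
        """Process x_i += delta (delta may be negative: a deletion)."""
        for j in range(self._level(i) + 1):
            s = self.stats[j]
            s[0] += delta
            s[1] += delta * i
            s[2] = (s[2] + delta * pow(self.r, i, self.P)) % self.P

    def sample(self):
        """Return some nonzero coordinate (i, x_i), or None on failure."""
        for total, isum, fp in reversed(self.stats):
            if total != 0 and isum % total == 0:
                i = isum // total
                # 1-sparse test: fingerprint of a single survivor.
                if (total * pow(self.r, i, self.P)) % self.P == fp:
                    return i, total
        return None

s = L0Sampler(n=1024, seed=42)
s.update(7, 3); s.update(100, 5); s.update(7, -3)  # x_7 cancels out
print(s.sample())                                   # -> (100, 5)
```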

We establish some limit theorems for quasi-arithmetic means of random variables. This class of means contains the arithmetic, geometric and harmonic means. A distinctive feature is that the generators of the quasi-arithmetic means are allowed to be complex-valued, which makes it possible to consider quasi-arithmetic means of random variables that may take negative values. Our motivation for the limit theorems is finding simple estimators of the parameters of the Cauchy distribution. By applying the limit theorems, we obtain closed-form, unbiased, strongly consistent estimators that jointly estimate the location and scale parameters of the Cauchy distribution and are easy to compute and analyze.
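
One concrete instance with the complex generator $f(t)=e^{it}$: since the Cauchy$(\mu,\sigma)$ characteristic function gives $\mathbb{E}[e^{iX}]=e^{i\mu-\sigma}$, the argument and log-modulus of the sample mean of $e^{iX_j}$ recover both parameters. A sketch based on this standard fact; the paper's exact unbiased estimators may differ in their corrections:

```python
import numpy as np

def cauchy_params_estimate(x):
    """Estimate the Cauchy location mu and scale sigma from samples x.

    Uses the quasi-arithmetic mean with complex generator f(t)=e^{it}:
    for X ~ Cauchy(mu, sigma), E[e^{iX}] = e^{i*mu - sigma}, so the
    argument of the sample mean of e^{iX_j} estimates mu and minus the
    log of its modulus estimates sigma. Strongly consistent; the
    paper's exact unbiased estimators may include corrections.
    """
    m = np.mean(np.exp(1j * np.asarray(x)))
    return np.angle(m), -np.log(np.abs(m))

rng = np.random.default_rng(1)
mu, sigma = 0.5, 2.0
samples = mu + sigma * rng.standard_cauchy(200_000)
print(cauchy_params_estimate(samples))  # approximately (0.5, 2.0)
```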

A radio labelling of a graph $G$ is a mapping $f : V(G) \rightarrow \{0, 1, 2,\ldots\}$ such that $|f(u)-f(v)| \geq diam(G) + 1 - d(u,v)$ for every pair of distinct vertices $u,v$ of $G$, where $diam(G)$ is the diameter of $G$ and $d(u,v)$ is the distance between $u$ and $v$ in $G$. The radio number $rn(G)$ of $G$ is the smallest integer $k$ such that $G$ admits a radio labelling $f$ with $\max\{f(v):v \in V(G)\} = k$. The weight of a tree $T$ from a vertex $v \in V(T)$ is the sum of the distances in $T$ from $v$ to all other vertices, and a vertex of $T$ achieving the minimum weight is called a weight center of $T$. It is known that any tree has one or two weight centers. A tree is called a two-branch tree if the removal of all its weight centers results in a forest with exactly two components. In this paper we obtain a sharp lower bound for the radio number of two-branch trees which improves a known lower bound for general trees. We also give a necessary and sufficient condition for this improved lower bound to be achieved. Using these results, we determine the radio number of two families of level-wise regular two-branch trees.
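
The defining inequality is easy to verify mechanically. The following checker (our own illustration) computes all pairwise distances by BFS and tests the radio condition, here on the path $P_4$ (diameter 3) with a labelling of span 6:

```python
from collections import deque

def is_radio_labelling(adj, f):
    """Check |f(u)-f(v)| >= diam(G) + 1 - d(u,v) for all pairs of
    distinct vertices. adj: list of neighbour lists; f: list of labels.
    BFS from every vertex gives all distances in the unweighted graph.
    """
    n = len(adj)
    dist = [[-1] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[s][v] == -1:
                    dist[s][v] = dist[s][u] + 1
                    q.append(v)
    diam = max(map(max, dist))
    return all(abs(f[u] - f[v]) >= diam + 1 - dist[u][v]
               for u in range(n) for v in range(u + 1, n))

adj = [[1], [0, 2], [1, 3], [2]]            # the path P4
print(is_radio_labelling(adj, [5, 0, 3, 6]))  # True, span 6
print(is_radio_labelling(adj, [0, 1, 2, 3]))  # False: adjacent labels too close
```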

Two-component deterministic- and random-scan Gibbs samplers are studied using the theory of two projections. It is found that, in terms of asymptotic variance, the two-component random-scan Gibbs sampler is never much worse than, and can be considerably better than, its deterministic-scan counterpart, provided that the selection probability is appropriately chosen. This is especially the case when there is a large discrepancy in computation cost between the two components. Together with previous results regarding the convergence rates of two-component Gibbs Markov chains, the results herein suggest one may use the deterministic-scan version in the burn-in stage and switch to the random-scan version in the estimation stage. The theory of two projections can also be utilized to study other properties of variants of two-component Gibbs samplers. As a by-product, some general formulas for characterizing the convergence rate of a possibly non-reversible or time-inhomogeneous Markov chain in an operator-theoretic framework are developed.
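
Here is a minimal sketch of the two scan orders on a bivariate normal target, whose full conditionals are $X \mid Y=y \sim N(\rho y,\, 1-\rho^2)$ and symmetrically. It illustrates the two samplers being compared, not the paper's asymptotic-variance analysis:

```python
import numpy as np

def gibbs_bivariate_normal(n_iter, rho, scan="deterministic", p=0.5, seed=0):
    """Two-component Gibbs sampler targeting N(0, [[1, rho], [rho, 1]]).

    Full conditionals: x | y ~ N(rho*y, 1 - rho^2) and symmetrically.
    'deterministic' updates x then y each sweep; 'random' updates one
    component per step, choosing x with selection probability p.
    """
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1 - rho ** 2)
    x = y = 0.0
    out = np.empty((n_iter, 2))
    for t in range(n_iter):
        if scan == "deterministic":
            x = rng.normal(rho * y, sd)
            y = rng.normal(rho * x, sd)
        elif rng.random() < p:              # random scan: update x
            x = rng.normal(rho * y, sd)
        else:                               # random scan: update y
            y = rng.normal(rho * x, sd)
        out[t] = x, y
    return out

for scan in ("deterministic", "random"):
    chain = gibbs_bivariate_normal(100_000, rho=0.9, scan=scan)
    print(scan, np.corrcoef(chain.T)[0, 1])  # both approach rho = 0.9
```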

We show NP-completeness for various problems about the existence of arithmetic expression trees. Given a set of operations, inputs, and a target value, does there exist an expression tree with those inputs and operations that evaluates to the target? We consider the variation where the structure of the tree is also given, and the variation where no parentheses are allowed in the expression.
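
To see what an instance looks like, the following exhaustive search (our own illustration) enumerates all expression trees over a small input multiset; the NP-completeness results indicate that, in general, substantially better than such a search should not be expected:

```python
from itertools import combinations
from fractions import Fraction
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def reachable(values, ops):
    """All values of expression trees using every input exactly once.

    Exhaustive search: split the multiset of inputs into two nonempty
    parts, recurse on each, and combine with every allowed operation.
    Verifying a given tree is easy; finding one may require a search
    like this, which is the crux of the hardness results.
    """
    values = tuple(values)
    if len(values) == 1:
        return {Fraction(values[0])}
    results = set()
    idx = range(len(values))
    for k in range(1, len(values)):
        for left_idx in combinations(idx, k):
            left = [values[i] for i in left_idx]
            right = [values[i] for i in idx if i not in left_idx]
            for a in reachable(left, ops):
                for b in reachable(right, ops):
                    for op in ops:
                        if op == "/" and b == 0:
                            continue
                        results.add(OPS[op](a, b))
    return results

# Can we make 24 from {3, 3, 8, 8} with + - * / ?  Yes: 8 / (3 - 8/3).
print(Fraction(24) in reachable([3, 3, 8, 8], "+-*/"))  # True
```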
