
Analysis-suitable $G^1$ (AS-$G^1$) multi-patch spline surfaces [4] are particular $G^1$-smooth multi-patch spline surfaces, which are needed to ensure the construction of $C^1$-smooth multi-patch spline spaces with optimal polynomial reproduction properties [16]. We present a novel local approach for the design of AS-$G^1$ multi-patch spline surfaces, which is based on the use of Lagrange multipliers. The presented method is simple and generates an AS-$G^1$ multi-patch spline surface by approximating a given $G^1$-smooth but non-AS-$G^1$ multi-patch surface. Several numerical examples demonstrate the potential of the proposed technique for the construction of AS-$G^1$ multi-patch spline surfaces and show that these surfaces are especially suited for isogeometric analysis: solving the biharmonic problem, a particular fourth-order partial differential equation, over them yields optimal rates of convergence.

Related content


We present polynomial-time SDP-based algorithms for the following problem: for fixed $k \leq \ell$, given a real number $\epsilon>0$ and a graph $G$ that admits a $k$-colouring with a $\rho$-fraction of the edges coloured properly, return an $\ell$-colouring of $G$ with an $(\alpha \rho - \epsilon)$-fraction of the edges coloured properly, in time polynomial in the size of $G$ and in $1 / \epsilon$. Our algorithms are based on the algorithms of Frieze and Jerrum [Algorithmica'97] and of Karger, Motwani and Sudan [JACM'98]. For $k = 2, \ell = 3$, our algorithm achieves an approximation ratio $\alpha = 1$, which is the best possible. When $k$ is fixed and $\ell$ grows large, our algorithm achieves an approximation ratio of $\alpha = 1 - o(1 / \ell)$. When $k, \ell$ are both large, our algorithm achieves an approximation ratio of $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell) - O(1 / k^2)$; if we fix $d = \ell - k$ and allow $k, \ell$ to grow large, this is $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell)$. By extending the results of Khot, Kindler, Mossel and O'Donnell [SICOMP'07] to the promise setting, we show that for large $k$ and $\ell$, assuming Khot's Unique Games Conjecture (UGC), it is NP-hard to achieve an approximation ratio $\alpha$ greater than $1 - 1 / \ell + 2 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided that $\ell$ is bounded by a function that is $o(\exp(\sqrt[3]{k}))$. For the case where $d = \ell - k$ is fixed, this bound matches the performance of our algorithm up to $o(\ln \ell / k \ell)$. Furthermore, by extending the results of Guruswami and Sinop [ToC'13] to the promise setting, we prove that it is NP-hard to achieve an approximation ratio greater than $1 - 1 / \ell + 8 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided again that $\ell$ is bounded as before (but this time without assuming the UGC).

At STOC 2002, Eiter, Gottlob, and Makino presented a technique called ordered generation that yields an $n^{O(d)}$-delay algorithm listing all minimal transversals of an $n$-vertex hypergraph of degeneracy $d$. Recently at IWOCA 2019, Conte, Kanté, Marino, and Uno asked whether this XP-delay algorithm parameterized by $d$ could be made FPT-delay for a weaker notion of degeneracy, or even parameterized by the maximum degree $\Delta$, i.e., whether it can be turned into an algorithm with delay $f(\Delta)\cdot n^{O(1)}$ for some computable function $f$. Moreover, and as a first step toward answering that question, they note that they could not achieve these time bounds even for the particular case of minimal dominating sets enumeration. In this paper, using ordered generation, we show that an FPT-delay algorithm can be devised for minimal transversals enumeration parameterized by the maximum degree and dimension, giving a positive and more general answer to the latter question.
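The enumeration target can be made concrete with a small brute-force sketch. This is not the ordered-generation algorithm discussed above, only a baseline illustrating what a minimal transversal (inclusion-minimal hitting set) is; the function name and example hypergraph are illustrative:

```python
from itertools import combinations

def minimal_transversals(vertices, edges):
    """Enumerate all minimal transversals (inclusion-minimal hitting sets)
    of a hypergraph by brute force, smallest sets first."""
    edges = [set(e) for e in edges]
    found = []
    for k in range(len(vertices) + 1):
        for cand in combinations(vertices, k):
            s = set(cand)
            # s is a transversal if it meets every edge; it is minimal if
            # no previously found (smaller) transversal is contained in it.
            if all(s & e for e in edges) and not any(f <= s for f in found):
                found.append(s)
    return found

# Edges {1,2} and {2,3}: the minimal transversals are {2} and {1,3}.
print(minimal_transversals([1, 2, 3], [{1, 2}, {2, 3}]))  # → [{2}, {1, 3}]
```

The delay of this sketch is exponential; the point of the paper above is precisely to bound the delay by $f(\Delta)\cdot n^{O(1)}$.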

We introduce a higher-dimensional "cubical" chain complex and apply it to the design of quantum locally testable codes. Our cubical chain complex can be constructed for any dimension $t$, and in a precise sense generalizes the Sipser-Spielman construction of expander codes (case $t=1$) and the constructions by Dinur et al. and Panteleev and Kalachev of a square complex (case $t=2$), which have been applied to the design of classical locally testable and quantum low-density parity check codes respectively. For $t=4$ our construction gives a family of quantum locally testable codes conditional on a conjecture about robustness of four-tuples of random linear maps. These codes have linear dimension, inverse poly-logarithmic relative distance and soundness, and polylogarithmic-size parity checks. Our complex can be built in a modular way from two ingredients. Firstly, the geometry (edges, faces, cubes, etc.) is provided by a set $G$ of size $N$, together with pairwise commuting sets of actions $A_1,\ldots,A_t$ on it. Secondly, the chain complex itself is obtained by associating a local coefficient space, based on codes, with each geometric object, and introducing local maps on those coefficient spaces. We bound the cycle and co-cycle expansion of the chain complex. The assumptions we need are two-fold: firstly, each Cayley graph $Cay(G,A_j)$ needs to be a good (spectral) expander, and secondly, the families of codes and their duals both need to satisfy a form of robustness (that generalizes the condition of agreement testability for pairs of codes). While the first assumption is easy to satisfy, it is currently not known if the second can be achieved.

In 2012 Chen and Singer introduced the notion of discrete residues for rational functions as a complete obstruction to rational summability. More explicitly, for a given rational function f(x), there exists a rational function g(x) such that f(x) = g(x+1) - g(x) if and only if every discrete residue of f(x) is zero. Discrete residues have many important further applications beyond summability: to creative telescoping problems, thence to the determination of (differential-)algebraic relations among hypergeometric sequences, and subsequently to the computation of (differential) Galois groups of difference equations. However, the discrete residues of a rational function are defined in terms of its complete partial fraction decomposition, which makes their direct computation impractical due to the high complexity of completely factoring arbitrary denominator polynomials into linear factors. We develop a factorization-free algorithm to compute discrete residues of rational functions, relying only on gcd computations and linear algebra.
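A small worked example of rational summability using SymPy (this is not the paper's factorization-free residue algorithm; it only checks the defining telescoping identity on a hand-picked pair $f$, $g$):

```python
import sympy as sp

x = sp.symbols('x')

# f is rationally summable: f(x) = g(x+1) - g(x) with g(x) = -1/x,
# so every discrete residue of f is zero.
f = 1 / (x * (x + 1))
g = -1 / x
assert sp.simplify(g.subs(x, x + 1) - g - f) == 0

# In contrast, 1/x has the nonzero discrete residue 1 (the coefficient of
# 1/x in its partial fraction decomposition), so no rational g2 with
# g2(x+1) - g2(x) = 1/x exists.
```

The paper's contribution is to compute such residues for arbitrary rational $f$ without ever factoring the denominator into linear factors.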

We study the weak recovery problem on the $r$-uniform hypergraph stochastic block model ($r$-HSBM) with two balanced communities. In this model, $n$ vertices are randomly divided into two communities, and size-$r$ hyperedges are added randomly depending on whether all vertices in the hyperedge are in the same community. The goal of the weak recovery problem is to recover a non-trivial fraction of the communities given the hypergraph. Previously, Pal and Zhu (2021) established that weak recovery is always possible above a natural threshold called the Kesten-Stigum (KS) threshold. Gu and Polyanskiy (2023) proved that the KS threshold is tight if $r\le 4$ or the expected degree $d$ is small. It remained open whether the KS threshold is tight for $r\ge 5$ and large $d$. In this paper we determine the tightness of the KS threshold for any fixed $r$ and large $d$. We prove that for $r\le 6$ and $d$ large enough, the KS threshold is tight. This shows that there is no information-computation gap in this regime. This partially confirms a conjecture of Angelini et al. (2015). For $r\ge 7$, we prove that for $d$ large enough, the KS threshold is not tight, providing more evidence supporting the existence of an information-computation gap in this regime. Furthermore, we establish asymptotic bounds on the weak recovery threshold for fixed $r$ and large $d$. We also obtain a number of results regarding the closely-related broadcasting on hypertrees (BOHT) model, including the asymptotics of the reconstruction threshold for $r\ge 7$ and impossibility of robust reconstruction at criticality.
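A toy sampler for the model helps fix ideas. The sparse parameterization below (constants $a$, $b$ and the $\binom{n-1}{r-1}$ normalization) is one common convention and is illustrative only, as conventions vary across the papers cited above:

```python
import random
from itertools import combinations
from math import comb

def sample_hsbm(n, r, a, b, seed=0):
    """Sample a 2-community r-uniform HSBM: each size-r vertex subset whose
    vertices all lie in one community becomes a hyperedge with probability
    a / C(n-1, r-1); every other subset with probability b / C(n-1, r-1).
    (Hypothetical sparse parameterization; conventions vary by paper.)"""
    rng = random.Random(seed)
    labels = [rng.randrange(2) for _ in range(n)]   # balanced in expectation
    p_in = min(1.0, a / comb(n - 1, r - 1))
    p_out = min(1.0, b / comb(n - 1, r - 1))
    edges = []
    for e in combinations(range(n), r):
        same = len({labels[v] for v in e}) == 1
        if rng.random() < (p_in if same else p_out):
            edges.append(e)
    return labels, edges

labels, edges = sample_hsbm(n=30, r=3, a=8.0, b=2.0)
print(len(labels), len(edges))
```

Weak recovery asks for a labelling correlated with `labels` better than random guessing, given only `edges`.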

Continuous-time algebraic Riccati equations can be found in many disciplines in different forms. In the case of small-scale dense coefficient matrices, stabilizing solutions can be computed for all possible formulations of the Riccati equation. This is not the case when it comes to large-scale sparse coefficient matrices. In this paper, we provide a reformulation of the Newton-Kleinman iteration scheme for continuous-time algebraic Riccati equations using indefinite symmetric low-rank factorizations. This allows the application of the method to the case of general large-scale sparse coefficient matrices. We provide convergence results for several prominent realizations of the equation and show in numerical examples the effectiveness of the approach.
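A minimal dense sketch of the classical Newton-Kleinman iteration (not the paper's indefinite low-rank factorized variant) shows the basic loop of Lyapunov solves; the matrices below are illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def newton_kleinman(A, B, Q, R, K0=None, iters=20):
    """Dense Newton-Kleinman iteration for the CARE
        A^T X + X A - X B R^{-1} B^T X + Q = 0.
    K0 must be stabilizing; here A itself is stable, so K0 = 0 works."""
    n = A.shape[0]
    K = np.zeros((B.shape[1], n)) if K0 is None else K0
    Rinv = np.linalg.inv(R)
    for _ in range(iters):
        Ak = A - B @ K
        # Lyapunov step: Ak^T X + X Ak = -(Q + K^T R K)
        X = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        K = Rinv @ B.T @ X
    return X

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # already stable
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
X = newton_kleinman(A, B, Q, R)
print(np.allclose(X, solve_continuous_are(A, B, Q, R)))  # → True
```

Each step solves a Lyapunov equation; the paper's point is that for large sparse $A$ this step must be handled with low-rank factorizations instead of the dense solver used here.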

Multi-product formulas (MPF) are linear combinations of Trotter circuits offering high-quality simulation of Hamiltonian time evolution with fewer Trotter steps. Here we report two contributions aimed at making multi-product formulas more viable for near-term quantum simulations. First, we extend the theory of Trotter error with commutator scaling developed by Childs, Su, Tran et al. to multi-product formulas. Our result implies that multi-product formulas can achieve a quadratic reduction of Trotter error in 1-norm (nuclear norm) on arbitrary time intervals compared with the regular product formulas without increasing the required circuit depth or qubit connectivity. The number of circuit repetitions grows only by a constant factor. Second, we introduce dynamic multi-product formulas with time-dependent coefficients chosen to minimize a certain efficiently computable proxy for the Trotter error. We use a minimax estimation method to make dynamic multi-product formulas robust to uncertainty from algorithmic errors, sampling and hardware noise. We call this method Minimax MPF and we provide a rigorous bound on its error.
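As a minimal numerical illustration of the MPF idea (a generic two-term static formula on random Hermitian matrices, not the dynamic Minimax MPF described above), the coefficients $(-1/3, 4/3)$ solve the Vandermonde conditions $c_1 + c_2 = 1$, $c_1 + c_2/4 = 0$ and so extrapolate away the leading error of a second-order Trotter step:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2   # Hermitian pieces of H
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2
H = A + B
t = 0.1

def s2(dt):
    # Second-order (symmetric) Trotter step approximating e^{-i H dt}
    return expm(-0.5j * A * dt) @ expm(-1j * B * dt) @ expm(-0.5j * A * dt)

exact = expm(-1j * H * t)
trotter = np.linalg.matrix_power(s2(t / 2), 2)
# Two-term MPF: c_1 = -1/3, c_2 = 4/3 cancel the O(t^3) Trotter error.
mpf = (-1.0 / 3.0) * s2(t) + (4.0 / 3.0) * np.linalg.matrix_power(s2(t / 2), 2)

err_trotter = np.linalg.norm(trotter - exact, 2)
err_mpf = np.linalg.norm(mpf - exact, 2)
print(err_mpf < err_trotter)  # → True
```

Note that the MPF result is a linear combination of unitaries rather than a unitary itself; on hardware it is realized by sampling the constituent Trotter circuits, which is why the abstract tracks the number of circuit repetitions.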

This work presents a closed-form outage analysis of the recently proposed $\alpha$-modification of the shadowed Beaulieu-Xie fading model for wireless communications. For the considered channel, closed-form analytical expressions for the outage probability (including its upper and lower bounds), raw moments, amount of fading, and channel quality estimation indicator are derived. A thorough numerical simulation and analysis demonstrates strong agreement with the presented closed-form solutions and illustrates the relationship between the outage probability and the channel parameters.
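The validation pattern used in such papers can be sketched generically. The $\alpha$-shadowed Beaulieu-Xie model is not implemented here; as a simple stand-in we use plain Rayleigh fading, where the instantaneous SNR is exponential and the outage probability has the elementary closed form $P_{\mathrm{out}} = 1 - e^{-\gamma_{\mathrm{th}}/\bar{\gamma}}$:

```python
import numpy as np

# Monte Carlo check of a closed-form outage probability (Rayleigh stand-in,
# not the paper's alpha-shadowed Beaulieu-Xie channel).
rng = np.random.default_rng(1)
gamma_bar, gamma_th = 10.0, 3.0          # mean SNR and outage threshold (linear scale)
snr = rng.exponential(gamma_bar, size=1_000_000)
p_mc = np.mean(snr < gamma_th)           # empirical outage probability
p_cf = 1.0 - np.exp(-gamma_th / gamma_bar)  # closed form for Rayleigh fading
print(abs(p_mc - p_cf) < 3e-3)           # → True
```

The paper's simulations play the same role: sampling the fading channel and confirming that the empirical outage probability matches the derived closed-form expressions.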

Classical Krylov subspace projection methods for the solution of a linear problem $Ax = b$ output an approximate solution $\widetilde{x}\simeq x$. Recently, it has been recognized that projection methods can be understood from a statistical perspective. These probabilistic projection methods return a distribution $p(\widetilde{x})$ in place of a point estimate $\widetilde{x}$. The resulting uncertainty, codified as a distribution, can, in theory, be meaningfully combined with other uncertainties, can be propagated through computational pipelines, and can be used in the framework of probabilistic decision theory. The problem we address is that current probabilistic projection methods lead to poorly calibrated posterior distributions. We improve the covariance matrix from previous works so that it does not contain such undesirable objects as $A^{-1}$ or $A^{-1}A^{-T}$, results in nontrivial uncertainty, and reproduces an arbitrary projection method as the mean of the posterior distribution. We also propose a variant that is numerically inexpensive in the case where the uncertainty is calibrated a priori. Since it usually is not, we put forward a practical way to calibrate uncertainty that performs reasonably well, albeit at the expense of roughly doubling the numerical cost of the underlying projection method.

This work presents a stable POD-Galerkin based reduced-order model (ROM) for two-dimensional Rayleigh-B\'enard convection in a square geometry for three Rayleigh numbers: $10^4$ (steady state), $3\times 10^5$ (periodic), and $6 \times 10^6$ (chaotic). Stability is obtained through a particular (staggered-grid) full-order model (FOM) discretization that leads to a ROM that is pressure-free and has skew-symmetric (energy-conserving) convective terms. This yields long-time stable solutions without requiring stabilizing mechanisms, even outside the training data range. The ROM's stability is validated for the different test cases by investigating the Nusselt and Reynolds number time series and the mean and variance of the vertical temperature profile. In general, these quantities converge to the FOM when increasing the number of modes, and turn out to be a good measure of accuracy. However, for the chaotic case, convergence with increasing numbers of modes is relatively difficult and a high number of modes is required to resolve the low-energy structures that are important for the global dynamics.
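The POD step underlying such a ROM can be sketched with synthetic snapshot data (not the Rayleigh-B\'enard FOM of the paper): the POD modes are the left singular vectors of the snapshot matrix, and projecting onto the first $r$ of them reconstructs the data from $r$ modal coefficients.

```python
import numpy as np

# POD via SVD of a snapshot matrix (synthetic data with two coherent
# structures plus small noise; all fields here are illustrative).
rng = np.random.default_rng(0)
n, m = 200, 50                           # spatial dof, number of snapshots
t = np.linspace(0, 1, m)
x = np.linspace(0, 1, n)[:, None]
S = (np.sin(np.pi * x) * np.cos(2 * np.pi * t)
     + 0.3 * np.sin(2 * np.pi * x) * np.sin(4 * np.pi * t)
     + 0.01 * rng.standard_normal((n, m)))

U, s, _ = np.linalg.svd(S, full_matrices=False)
r = 2
Phi = U[:, :r]                           # POD basis: first r modes
S_rom = Phi @ (Phi.T @ S)                # rank-r reconstruction
rel_err = np.linalg.norm(S - S_rom) / np.linalg.norm(S)
print(rel_err < 0.05)                    # → True: 2 modes capture the dynamics
```

In the paper's setting, a Galerkin projection of the (skew-symmetric) FOM operators onto such a basis `Phi` yields the ROM whose stability properties are analyzed above.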
