
The generalized Lanczos trust-region (GLTR) method is one of the most popular approaches for solving the large-scale trust-region subproblem (TRS). Recently, Jia and Wang [Z. Jia and F. Wang, \emph{SIAM J. Optim., 31 (2021), pp. 887--914}] studied the convergence of this method and established some {\it a priori} error bounds on the residual, the solution, and the Lagrange multiplier. In this paper, we revisit the convergence of the GLTR method and improve these bounds. First, we establish a sharper upper bound on the residual. Second, we give a new bound on the distance between the approximate and the exact solutions, and show that the convergence of the approximation is independent of the associated spectral separation. Third, we present some non-asymptotic bounds for the convergence of the Lagrange multiplier and identify a factor that plays an important role in this convergence. Numerical experiments demonstrate the effectiveness of our theoretical results.
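For reference, the TRS and the optimality conditions that define the Lagrange multiplier can be stated as follows (standard formulation in our own notation, not necessarily that of the papers above): given a symmetric $A \in \mathbb{R}^{n\times n}$, $g \in \mathbb{R}^{n}$ and a radius $\Delta > 0$,
\[
\min_{x \in \mathbb{R}^{n}} \ g^{T}x + \tfrac12 x^{T}Ax \quad \text{subject to} \quad \|x\| \le \Delta,
\]
and $x^{*}$ is a global solution if and only if there exists $\lambda^{*} \ge 0$ such that $(A + \lambda^{*} I)x^{*} = -g$, $A + \lambda^{*} I$ is positive semidefinite, and $\lambda^{*}(\Delta - \|x^{*}\|) = 0$.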

Related content

We propose a generalization of nonlinear stability of numerical one-step integrators to Riemannian manifolds in the spirit of Butcher's notion of B-stability. Taking inspiration from Simpson-Porco and Bullo, we introduce non-expansive systems on such manifolds and define B-stability of integrators. In this first exposition, we provide concrete results for a geodesic version of the Implicit Euler (GIE) scheme. We prove that the GIE method is B-stable on Riemannian manifolds with non-positive sectional curvature. We show through numerical examples that the GIE method is expansive when applied to a certain non-expansive vector field on the 2-sphere, and that the GIE method does not necessarily possess a unique solution for large enough step sizes. Finally, we derive a new improved global error estimate for general Lie group integrators.
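As a rough sketch of the notion in question (our notation, transplanting the Euclidean definition to the Riemannian distance $d$): a one-step method $\Phi_h$ is B-stable for non-expansive systems if, for every step size $h > 0$ and any two numerical trajectories started from $y_0$ and $z_0$,
\[
d\bigl(\Phi_h(y_n), \Phi_h(z_n)\bigr) \le d(y_n, z_n),
\]
mirroring the defining property of the exact flow of a non-expansive vector field, namely $d(y(t), z(t)) \le d(y(s), z(s))$ for $t \ge s$.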

The time-harmonic Maxwell equations at high wavenumber k in domains with an analytic boundary and impedance boundary conditions are considered. A wavenumber-explicit stability and regularity theory is developed that decomposes the solution into a part with finite Sobolev regularity that is controlled uniformly in k and an analytic part. Using this regularity, quasi-optimality of the Galerkin discretization based on Nedelec elements of order p on a mesh with mesh size h is shown under the k-explicit scale resolution condition that a) kh/p is sufficiently small and b) p/\ln k is bounded from below.
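In symbols, the scale resolution condition reads (with constants $c_1, c_2 > 0$ independent of $k$, $h$ and $p$; our notation):
\[
\frac{kh}{p} \le c_1 \quad \text{and} \quad p \ge c_2 \ln k,
\]
under which the Galerkin solution is quasi-optimal, i.e. its error is bounded by a constant independent of $k$ times the best-approximation error from the Nedelec finite element space.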

We describe three algorithms to determine the stable, semistable, and torus-polystable loci of the GIT quotient of a projective variety by a reductive group. The algorithms are efficient when the group is semisimple. By using an implementation of our algorithms for simple groups, we provide several applications to the moduli theory of algebraic varieties, including the K-moduli of algebraic varieties, the moduli of algebraic curves and the Mukai models of the moduli space of curves for low genus. We also discuss a number of potential improvements and some natural open problems arising from this work.

Confidence intervals based on the central limit theorem (CLT) are a cornerstone of classical statistics. Despite being only asymptotically valid, they are ubiquitous because they permit statistical inference under very weak assumptions, and can often be applied to problems even when nonasymptotic inference is impossible. This paper introduces time-uniform analogues of such asymptotic confidence intervals. To elaborate, our methods take the form of confidence sequences (CS) -- sequences of confidence intervals that are uniformly valid over time. CSs provide valid inference at arbitrary stopping times, incurring no penalties for "peeking" at the data, unlike classical confidence intervals which require the sample size to be fixed in advance. Existing CSs in the literature are nonasymptotic, and hence do not enjoy the aforementioned broad applicability of asymptotic confidence intervals. Our work bridges the gap by giving a definition for "asymptotic CSs", and deriving a universal asymptotic CS that requires only weak CLT-like assumptions. While the CLT approximates the distribution of a sample average by that of a Gaussian at a fixed sample size, we use strong invariance principles (stemming from the seminal 1960s work of Strassen and improvements by Koml\'os, Major, and Tusn\'ady) to uniformly approximate the entire sample average process by an implicit Gaussian process. As an illustration of our theory, we derive asymptotic CSs for the average treatment effect using efficient estimators in observational studies (for which no nonasymptotic bounds can exist even in the fixed-time regime) as well as randomized experiments, enabling causal inference that can be continuously monitored and adaptively stopped.
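To fix ideas, a $(1-\alpha)$-confidence sequence for a parameter $\theta$ is a sequence of intervals $(C_t)_{t \ge 1}$, each computable from the data available at time $t$, satisfying
\[
\Pr\bigl(\theta \in C_t \ \text{for all } t \ge 1\bigr) \ge 1 - \alpha,
\]
which in particular implies $\Pr(\theta \in C_\tau) \ge 1 - \alpha$ at any (possibly data-dependent) stopping time $\tau$; the asymptotic CSs introduced in the paper relax this guarantee to hold in an appropriate limiting sense rather than exactly.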

We study the problem of enumerating Tarski fixed points, focusing on the relational lattices of equivalences, quasiorders and binary relations. We present a polynomial space enumeration algorithm for Tarski fixed points on these lattices and other lattices of polynomial height. It achieves polynomial delay when enumerating fixed points of increasing isotone maps on all three lattices, as well as decreasing isotone maps on the lattice of binary relations. For the cases on the three relational lattices in which the enumeration algorithm does not guarantee polynomial delay, on the other hand, we prove exponential lower bounds for deciding the existence of three fixed points when the isotone map is given as an oracle, and we show that it is NP-hard to find three or more Tarski fixed points. More generally, we show that any deterministic or bounded-error randomized algorithm must perform a number of queries asymptotically at least as large as the lattice width to decide the existence of three fixed points when the isotone map is given as an oracle. Finally, we demonstrate that our findings yield a polynomial-delay and polynomial-space algorithm for listing bisimulations and instances of some related models of behavioral or role equivalence.
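As a minimal illustration of the setting (not the enumeration algorithm of the paper): an isotone map on the finite lattice of binary relations over a small carrier set, together with the Kleene/Knaster-Tarski iteration that reaches its least fixed point, here the transitive closure of a base relation.

def least_fixed_point(F, bottom):
    """Kleene iteration: start at the bottom element and apply the isotone
    map until it stabilises (guaranteed to terminate on a finite lattice)."""
    R = bottom
    while True:
        R_next = F(R)
        if R_next == R:
            return R
        R = R_next

# Isotone map on the lattice of binary relations whose least fixed point
# is the transitive closure of the base relation R0.
R0 = frozenset({(1, 2), (2, 3), (3, 4)})

def F(R):
    comp = frozenset((a, d) for (a, b) in R for (c, d) in R if b == c)
    return R0 | R | comp

print(sorted(least_fixed_point(F, frozenset())))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]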

We provide a non-unit disk framework to solve combinatorial optimization problems such as Maximum Cut (Max-Cut) and Maximum Independent Set (MIS) on a Rydberg quantum annealer. Our setup consists of a many-body interacting Rydberg system where locally controllable light shifts are applied to individual qubits in order to map the graph problem onto the Ising spin model. Exploiting the flexibility that optical tweezers offer in terms of spatial arrangement, our numerical simulations implement the local-detuning protocol while globally driving the Rydberg annealer to the desired many-body ground state, which is also the solution to the optimization problem. Using optimal control methods, these solutions are obtained for prototype graphs with varying sizes at time scales well within the system lifetime and with approximation ratios close to one. The non-blockade approach facilitates the encoding of graph problems with specific topologies that can be realized in two-dimensional Rydberg configurations and is applicable to both unweighted as well as weighted graphs. A comparative analysis with fast simulated annealing is provided which highlights the advantages of our scheme in terms of system size, hardness of the graph, and the number of iterations required to converge to the solution.
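For concreteness, the standard Ising encoding of weighted Max-Cut that such an annealer targets can be written as follows (our notation; the Rydberg-specific local-detuning protocol of the paper is not reproduced here):
\[
C(s) = \sum_{(i,j)\in E} w_{ij}\,\frac{1 - s_i s_j}{2}, \qquad s_i \in \{-1, +1\},
\]
so that maximizing the cut value $C(s)$ is equivalent, up to an additive constant and an overall sign, to finding the ground state of the Ising Hamiltonian $H = \sum_{(i,j)\in E} w_{ij}\, s_i s_j$.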

Making inference with spatial extremal dependence models can be computationally burdensome since they involve intractable and/or censored likelihoods. Building on recent advances in likelihood-free inference with neural Bayes estimators, that is, neural networks that approximate Bayes estimators, we develop highly efficient estimators for censored peaks-over-threshold models that encode censoring information in the neural network architecture. Our new method provides a paradigm shift that challenges traditional censored likelihood-based inference methods for spatial extremal dependence models. Our simulation studies highlight significant gains in both computational and statistical efficiency, relative to competing likelihood-based approaches, when applying our novel estimators to make inference with popular extremal dependence models, such as max-stable, $r$-Pareto, and random scale mixture process models. We also illustrate that it is possible to train a single neural Bayes estimator for a general censoring level, precluding the need to retrain the network when the censoring level is changed. We illustrate the efficacy of our estimators by making fast inference on hundreds-of-thousands of high-dimensional spatial extremal dependence models to assess extreme particulate matter 2.5 microns or less in diameter (PM2.5) concentration over the whole of Saudi Arabia.
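As a toy sketch of the general recipe only (simulate parameter-data pairs from the model, encode the censoring pattern as extra network inputs, and minimize a Monte Carlo estimate of the Bayes risk); the data model, architecture, and censoring scheme below are placeholder assumptions and not those of the paper.

import torch
import torch.nn as nn

d, u = 16, 0.5                               # data dimension, censoring level

def simulate(n):
    theta = torch.rand(n, 1) * 2 + 0.5       # prior: theta ~ Uniform(0.5, 2.5)
    z = torch.randn(n, d).abs() * theta      # toy data depending on theta
    censored = (z < u).float()               # censoring indicator
    z = torch.where(z < u, torch.full_like(z, u), z)
    x = torch.cat([z, censored], dim=1)      # network sees data + indicators
    return x, theta

net = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):                     # minimise the Monte Carlo Bayes risk
    x, theta = simulate(256)
    loss = nn.functional.l1_loss(net(x), theta)   # absolute-error loss
    opt.zero_grad(); loss.backward(); opt.step()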

Statistical models are at the heart of any empirical study for hypothesis testing. We present a new cross-platform Python-based package which employs different likelihood prescriptions through a plug-in system. This framework empowers users to propose, examine, and publish new likelihood prescriptions without developing software infrastructure, ultimately unifying and generalising different ways of constructing likelihoods and employing them for hypothesis testing, all in one place. Within this package, we propose a new simplified likelihood prescription that surpasses its predecessors' approximation accuracy by incorporating asymmetric uncertainties. Furthermore, our package facilitates the inclusion of various likelihood combination routines, thereby broadening the scope of independent studies through a meta-analysis. By remaining agnostic to the source of the likelihood prescription and the signal hypothesis generator, our platform allows for the seamless implementation of packages with different likelihood prescriptions, fostering compatibility and interoperability.
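One common way to incorporate asymmetric uncertainties is a two-piece ("bifurcated") Gaussian, sketched below; this is only an illustration of the general idea and not necessarily the simplified likelihood prescription implemented in the package.

import numpy as np

def asymmetric_nll(mu, x, sigma_minus, sigma_plus):
    """Negative log-likelihood of a two-piece Gaussian: width sigma_minus
    below the mode mu, sigma_plus above it."""
    sigma = np.where(x < mu, sigma_minus, sigma_plus)
    norm = np.log(0.5 * (sigma_minus + sigma_plus) * np.sqrt(2 * np.pi))
    return 0.5 * ((x - mu) / sigma) ** 2 + norm

# Example: observed value 3.2 with uncertainty +0.4 / -0.6 on a predicted mu
print(asymmetric_nll(mu=3.0, x=3.2, sigma_minus=0.6, sigma_plus=0.4))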

Kleene's computability theory based on the S1-S9 computation schemes constitutes a model for computing with objects of any finite type and extends Turing's 'machine model' which formalises computing with real numbers. A fundamental distinction in Kleene's framework is between normal and non-normal functionals where the former compute the associated Kleene quantifier $\exists^n$ and the latter do not. Historically, the focus was on normal functionals, but recently new non-normal functionals have been studied based on well-known theorems, the weakest among which seems to be the uncountability of the reals. These new non-normal functionals are fundamentally different from historical examples like Tait's fan functional: the latter is computable from $\exists^2$, while the former are computable in $\exists^3$ but not in weaker oracles. Of course, there is a great divide or abyss separating $\exists^2$ and $\exists^3$ and we identify slight variations of our new non-normal functionals that are again computable in $\exists^2$, i.e. fall on different sides of this abyss. Our examples are based on mainstream mathematical notions, like quasi-continuity, Baire classes, bounded variation, and semi-continuity from real analysis.

Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from the large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that will first use a 3D FCN to roughly define a candidate region, which will then be used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve a significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: https://github.com/holgerroth/3Dunet_abdomen_cascade.
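The coarse-to-fine cascade described above can be sketched roughly as follows (illustrative PyTorch-style code; coarse_net and fine_net stand in for trained 3D FCNs such as 3D U-Nets, and the resampling factor and margin are assumptions rather than the paper's exact settings).

import torch
import torch.nn.functional as F

def coarse_to_fine_segment(volume, coarse_net, fine_net, margin=8):
    # Stage 1: coarse candidate region from a downsampled volume.
    small = F.interpolate(volume, scale_factor=0.5, mode="trilinear",
                          align_corners=False)
    probs = F.interpolate(coarse_net(small).softmax(dim=1),
                          size=volume.shape[2:], mode="trilinear",
                          align_corners=False)
    fg = probs.argmax(dim=1, keepdim=True) > 0        # foreground voxels

    # Bounding box of the candidate region, padded by a safety margin
    # (assumes stage 1 found at least one foreground voxel).
    idx = fg.nonzero(as_tuple=False)[:, 2:]           # (d, h, w) indices
    lo = (idx.min(dim=0).values - margin).clamp(min=0).tolist()
    hi = (idx.max(dim=0).values + margin).tolist()
    crop = volume[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

    # Stage 2: detailed segmentation restricted to the crop (~10% of voxels).
    return fine_net(crop).argmax(dim=1), (lo, hi)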
