
The growth pattern of an invasive cell-to-cell propagation (called the successive coronas) on the square grid is a tilted square. On the triangular and hexagonal grids, it is a hexagon. Remarkably, on the aperiodic structure of Penrose tilings, this cell-to-cell diffusion process tends, in the limit, to a regular decagon. In this article we generalize this result to any regular multigrid dual tiling by defining the characteristic polygon of a multigrid and of its dual tiling. Exploiting this elegant duality allows us to fully understand why such a surprising phenomenon, highly regular polygonal shapes emerging from aperiodic underlying structures, occurs.
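
To make the first statement concrete, here is a minimal Python sketch (an illustration only, not the paper's construction) of the successive coronas on the square grid, together with a check that the n-th corona is the tilted square |x| + |y| = n.

```python
# Successive coronas of cell-to-cell propagation on the square grid,
# started from a single seed cell. The n-th corona is the set of cells at
# graph distance n from the seed, which with edge adjacency is the tilted
# square |x| + |y| = n.
from collections import deque

def coronas(n_steps):
    seed = (0, 0)
    dist = {seed: 0}
    frontier = deque([seed])
    while frontier:
        x, y = frontier.popleft()
        d = dist[(x, y)]
        if d == n_steps:
            continue
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # edge-adjacent cells
            if (x + dx, y + dy) not in dist:
                dist[(x + dx, y + dy)] = d + 1
                frontier.append((x + dx, y + dy))
    layers = {}
    for cell, d in dist.items():
        layers.setdefault(d, set()).add(cell)
    return layers

layers = coronas(20)
# Check the tilted-square shape: every cell of corona n satisfies |x| + |y| = n.
assert all(abs(x) + abs(y) == n for n, cells in layers.items() for x, y in cells)
```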

Related content

We consider the problem of constructing reduced models for large-scale systems with poles in general domains in the complex plane (as opposed to, e.g., the open left-half plane or the open unit disk). Our goal is to design a model reduction scheme that builds upon theoretically established methodologies yet encompasses this new class of models. To this aim, we develop a balanced truncation framework based on conformal maps to handle poles in general domains. The major difference from classical balanced truncation resides in the formulation of the Gramians. We show that these new Gramians can still be computed by solving modified Lyapunov equations for specific conformal maps. A numerical algorithm to perform balanced truncation with conformal maps is developed and tested on three numerical examples, namely a heat model, the Schrödinger equation, and the undamped linear wave equation, the latter two having spectra on the imaginary axis.
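
For context, the sketch below implements *classical* square-root balanced truncation with the standard Lyapunov-equation Gramians; the paper's contribution is to replace these Gramians with conformal-map variants, which is not reproduced here. The function name and the assumptions (A Hurwitz, system minimal) are ours.

```python
# Classical square-root balanced truncation of a stable LTI system (A, B, C).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd, cholesky

def balanced_truncation(A, B, C, r):
    """Reduce (A, B, C) to order r; assumes A is Hurwitz and the system is minimal."""
    P = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian: A P + P A^T + B B^T = 0
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian:   A^T Q + Q A + C^T C = 0
    Zc = cholesky(P, lower=True)
    Zo = cholesky(Q, lower=True)
    U, s, Vt = svd(Zo.T @ Zc)                       # s holds the Hankel singular values
    S_half_inv = np.diag(s[:r] ** -0.5)
    W = Zo @ U[:, :r] @ S_half_inv                  # left projection matrix
    V = Zc @ Vt[:r, :].T @ S_half_inv               # right projection matrix (W.T @ V = I)
    return W.T @ A @ V, W.T @ B, C @ V              # reduced (A_r, B_r, C_r)
```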

The multidimensional knapsack problem (MKP) is an NP-hard combinatorial optimization problem whose solution consists in selecting a subset of items of maximum total profit that does not violate the capacity constraints. Due to its hardness, large-scale MKP instances are usually tackled with metaheuristics, a context in which effective feasibility-maintenance strategies are crucial. In 1998, Chu and Beasley proposed an effective heuristic repair that is still relevant for recent metaheuristics. However, due to its deterministic nature, the diversity of solutions such a heuristic provides is insufficient for long runs. As a result, the search for new solutions ceases after a while. This paper proposes an efficiency-based randomization strategy for the heuristic repair that increases the variability of the repaired solutions without deteriorating quality and improves the overall results.
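
As a rough illustration of an efficiency-based repair with optional randomization (a generic sketch; neither the exact Chu-Beasley operator nor the paper's strategy), consider:

```python
# Generic efficiency-based repair for the MKP: an infeasible 0/1 solution
# drops low-efficiency items until feasible, then greedily re-adds
# high-efficiency items that still fit. `noise` randomly perturbs the
# efficiencies so repeated repairs of the same solution can diverge.
import numpy as np

def repair(x, profit, weight, capacity, rng, noise=0.0):
    """x: 0/1 vector of items; weight: m x n matrix; capacity: length-m vector."""
    x = x.copy()
    eff = profit / (weight.sum(axis=0) + 1e-12)            # simple efficiency measure
    eff = eff * (1.0 + noise * rng.standard_normal(len(x)))
    load = weight @ x
    for j in np.argsort(eff):                               # drop phase: least efficient first
        if np.all(load <= capacity):
            break
        if x[j]:
            x[j] = 0
            load -= weight[:, j]
    for j in np.argsort(eff)[::-1]:                         # add phase: most efficient first
        if not x[j] and np.all(load + weight[:, j] <= capacity):
            x[j] = 1
            load += weight[:, j]
    return x
```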

Untargeted metabolomic profiling through liquid chromatography-mass spectrometry (LC-MS) measures a vast array of metabolites within biospecimens, advancing drug development, disease diagnosis, and risk prediction. However, the low throughput of LC-MS poses a major challenge for biomarker discovery, annotation, and experimental comparison, necessitating the merging of multiple datasets. Current data pooling methods encounter practical limitations due to their vulnerability to data variations and their dependence on hyperparameters. Here we introduce GromovMatcher, a flexible and user-friendly algorithm that automatically combines LC-MS datasets using optimal transport. By capitalizing on feature intensity correlation structures, GromovMatcher delivers superior alignment accuracy and robustness compared to existing approaches. The algorithm scales to thousands of features while requiring minimal hyperparameter tuning. Because manually curated datasets for validating alignment algorithms are scarce in untargeted metabolomics, we develop a dataset split procedure that generates pairs of validation datasets to test the alignments produced by GromovMatcher and other methods. Applying our method to experimental patient studies of liver and pancreatic cancer, we discover shared metabolic features related to patient alcohol intake, demonstrating how GromovMatcher facilitates the search for biomarkers associated with lifestyle risk factors linked to several cancer types.
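
The core idea, matching feature-correlation structures across studies with Gromov-Wasserstein optimal transport, can be sketched with the POT library as follows; this is an illustration, not the GromovMatcher implementation, and the function name and the hard assignment at the end are our own simplifications.

```python
# Align features of two LC-MS studies by matching their feature-feature
# correlation structures with Gromov-Wasserstein optimal transport.
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def match_features(X1, X2):
    """X1: samples x features of study 1; X2: samples x features of study 2."""
    C1 = np.abs(np.corrcoef(X1, rowvar=False))      # feature correlation structure, study 1
    C2 = np.abs(np.corrcoef(X2, rowvar=False))      # feature correlation structure, study 2
    p = np.full(C1.shape[0], 1.0 / C1.shape[0])     # uniform weights over features
    q = np.full(C2.shape[0], 1.0 / C2.shape[0])
    T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')
    return T.argmax(axis=1)                         # hard assignment read off the coupling T
```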

Moderate calibration, i.e., the property that the expected event probability among observations with predicted probability z equals z, is a desired property of risk prediction models. Current graphical and numerical techniques for evaluating the moderate calibration of risk prediction models are mostly based on smoothing or grouping the data. Moreover, there is no widely accepted inferential method for the null hypothesis that a model is moderately calibrated. In this work, we discuss recently developed methods, and propose a novel one, for the assessment of moderate calibration for binary responses. The methods are based on the limiting distributions of functions of standardized partial sums of prediction errors, which converge to the corresponding laws of Brownian motion. The novel method relies on well-known properties of the Brownian bridge, which enable joint inference on mean and moderate calibration, leading to a unified "bridge" test for detecting miscalibration. Simulation studies indicate that the bridge test is more powerful, often substantially, than the alternative test. As a case study, we consider a prediction model for short-term mortality after a heart attack, for which we provide suggestions on graphical presentation and the interpretation of results. Moderate calibration can thus be assessed without arbitrary grouping of the data or methods that require parameter tuning. An accompanying R package implements this method (see //github.com/resplab/cumulcalib/).
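
The following sketch illustrates the kind of bridge-type statistic the paper builds on: standardized partial sums of prediction errors, ordered by predicted risk, transformed to a bridge and compared to the supremum of a Brownian bridge. It is a schematic, not the cumulcalib implementation, and the variance-based time scale used here is an assumption.

```python
# Bridge-type calibration statistic from standardized partial sums of
# prediction errors; large values suggest miscalibration.
import numpy as np
from scipy.stats import kstwobign  # distribution of sup |Brownian bridge|

def bridge_statistic(y, p):
    """y: 0/1 outcomes, p: predicted probabilities (both 1-D arrays)."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    errors = y - p
    scale = np.sqrt(np.sum(p * (1 - p)))                 # null s.d. of the total sum
    w = np.cumsum(errors) / scale                        # standardized partial-sum process
    t = np.cumsum(p * (1 - p)) / np.sum(p * (1 - p))     # variance-based "time" on [0, 1]
    bridge = w - t * w[-1]                               # tie down the endpoint: bridge under H0
    stat = np.max(np.abs(bridge))
    return stat, kstwobign.sf(stat)                      # approximate p-value
```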

Many flexible families of positive random variables have densities and distribution functions that are not available in closed form, a feature often considered unappealing for modelling purposes. However, such families are often characterized by a simple expression of the corresponding Laplace transform. Relying on the Laplace transform, we propose to carry out parameter estimation and goodness-of-fit testing for a general class of non-standard laws. We suggest a novel data-driven inferential technique, providing parameter estimators and goodness-of-fit tests, whose large-sample properties are derived. The implementation of the method is specifically considered for the positive stable and Tweedie distributions. A Monte Carlo study shows good finite-sample performance of the proposed technique for such laws.
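
A minimal sketch of Laplace-transform-based minimum-distance estimation for the positive stable law, whose Laplace transform is exp(-t^alpha) (unit scale), illustrating the idea rather than the paper's data-driven procedure; the grid of transform points is an arbitrary choice.

```python
# Fit the stability index alpha of a positive stable law by matching the
# empirical Laplace transform to exp(-t^alpha) on a grid of points.
import numpy as np
from scipy.optimize import minimize_scalar

def fit_positive_stable(x, t_grid=np.linspace(0.1, 2.0, 20)):
    """x: positive observations; returns the estimated stability index alpha."""
    emp = np.array([np.mean(np.exp(-t * x)) for t in t_grid])   # empirical Laplace transform
    def loss(alpha):
        return np.sum((emp - np.exp(-t_grid ** alpha)) ** 2)    # L2 distance on the grid
    return minimize_scalar(loss, bounds=(0.01, 0.99), method='bounded').x
```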

It is well known that decision-making problems from stochastic control can be formulated by means of a forward-backward stochastic differential equation (FBSDE). Recently, Ji et al. (2022) proposed an efficient deep learning algorithm based on the stochastic maximum principle (SMP). In this paper, we provide a convergence result for this deep SMP-BSDE algorithm and compare its performance with that of existing methods. In particular, by adopting a strategy as in Han and Long (2020), we derive an a-posteriori estimate and show that the total approximation error can be bounded by the value of the loss functional and the discretization error. We present numerical examples for high-dimensional stochastic control problems, both with drift control and with diffusion control, which showcase superior performance compared to existing algorithms.
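
Schematically, a-posteriori estimates of this kind bound the approximation error of the time-discretized processes by the achieved value of the training loss plus the discretization error. The display below only illustrates the shape of such a bound; the symbols Y, \hat{Y}^\pi, \mathcal{L}, \hat\theta, h and the constant C are generic placeholders, not the paper's precise statement or constants.

```latex
% Shape of an a-posteriori bound of this type (schematic only).
\[
  \max_{0 \le k \le N} \, \mathbb{E}\bigl[\,\bigl|Y_{t_k} - \hat{Y}^{\pi}_{t_k}\bigr|^{2}\,\bigr]
  \;\le\; C \,\bigl(\mathcal{L}(\hat{\theta}) + h\bigr),
\]
% where \mathcal{L}(\hat{\theta}) is the value of the loss functional at the
% trained parameters, h the mesh size of the time grid, and C a constant
% independent of h and of the networks.
```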

Plug-and-play algorithms constitute a popular framework for solving inverse imaging problems; they rely on the implicit definition of an image prior via a denoiser. These algorithms can leverage powerful pre-trained denoisers to solve a wide range of imaging tasks, circumventing the need to train models on a per-task basis. Unfortunately, plug-and-play methods often show unstable behaviour, hampering their promise of versatility and leading to suboptimal quality of the reconstructed images. In this work, we show that enforcing equivariance of the denoiser to certain groups of transformations (rotations, reflections, and/or translations) strongly improves both the stability of the algorithm and its reconstruction quality. We provide a theoretical analysis that illustrates the role of equivariance in improving performance and stability. We present a simple algorithm that enforces equivariance on any existing denoiser by applying a random transformation to the input of the denoiser and the inverse transformation to its output at each iteration of the algorithm. Experiments on multiple imaging modalities and denoising networks show that equivariant plug-and-play algorithms improve both reconstruction performance and stability compared to their non-equivariant counterparts.
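
The equivariant step described above is simple to sketch. The code below assumes a generic `denoiser` acting on 2D arrays and uses the group of 90-degree rotations and horizontal flips as an example; it is an illustration, not the authors' released code.

```python
# One equivariant denoising step for a plug-and-play iteration: draw a random
# transform g from the group, denoise g(x), and map the result back with g^{-1}.
import numpy as np

def equivariant_denoise(x, denoiser, rng):
    k = rng.integers(4)              # rotation by k * 90 degrees
    flip = rng.integers(2)           # optional horizontal flip
    g = lambda u: np.rot90(np.fliplr(u) if flip else u, k)
    g_inv = lambda u: (np.fliplr(np.rot90(u, -k)) if flip else np.rot90(u, -k))
    return g_inv(denoiser(g(x)))     # equals denoiser(x) exactly when the denoiser is equivariant
```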

Characteristic formulae give a complete logical description of the behaviour of processes modulo some chosen notion of behavioural semantics. They allow one to reduce equivalence or preorder checking to model checking, and are exactly the formulae in the modal logics characterizing classic behavioural equivalences and preorders for which model checking can be reduced to equivalence or preorder checking. This paper studies the complexity of determining whether a formula is characteristic for some finite, loop-free process in each of the logics providing modal characterizations of the simulation-based semantics in van Glabbeek's branching-time spectrum. Since characteristic formulae in each of those logics are exactly the consistent and prime ones, it presents complexity results for the satisfiability and primality problems, and investigates the boundary between modal logics for which those problems can be solved in polynomial time and those for which they become computationally hard. Amongst other contributions, this article also studies the complexity of constructing characteristic formulae in the modal logics characterizing simulation-based semantics, both when such formulae are presented in explicit form and via systems of equations.
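
For concreteness, one standard construction of characteristic formulae, here for the simulation preorder over a finite, loop-free process, can be sketched as follows; the encoding of labelled transition systems and the formula syntax are our own choices for illustration, not the constructions studied in the paper.

```python
# Characteristic formula for the simulation preorder of a finite, loop-free
# process p: chi(p) = AND over actions a of [a]( OR over a-successors p' of chi(p') ),
# with the empty disjunction written 'ff'. Then q is simulated by p iff q satisfies chi(p).
# Transitions are given as a dict: state -> list of (action, successor) pairs.

def chi(p, transitions, actions):
    conjuncts = []
    for a in actions:
        succs = [q for (b, q) in transitions.get(p, []) if b == a]
        disj = ' | '.join(chi(q, transitions, actions) for q in succs) or 'ff'
        conjuncts.append(f'[{a}]({disj})')
    return '(' + ' & '.join(conjuncts) + ')' if conjuncts else 'tt'

# Example: p performs an 'a' and then stops.
lts = {'p': [('a', 'p1')], 'p1': []}
print(chi('p', lts, actions=['a', 'b']))   # ([a](([a](ff) & [b](ff))) & [b](ff))
```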

The largest eigenvalue of the Hessian, or sharpness, of neural networks is a key quantity to understand their optimization dynamics. In this paper, we study the sharpness of deep linear networks for overdetermined univariate regression. Minimizers can have arbitrarily large sharpness, but not an arbitrarily small one. Indeed, we show a lower bound on the sharpness of minimizers, which grows linearly with depth. We then study the properties of the minimizer found by gradient flow, which is the limit of gradient descent with vanishing learning rate. We show an implicit regularization towards flat minima: the sharpness of the minimizer is no more than a constant times the lower bound. The constant depends on the condition number of the data covariance matrix, but not on width or depth. This result is proven both for a small-scale initialization and a residual initialization. Results of independent interest are shown in both cases. For small-scale initialization, we show that the learned weight matrices are approximately rank-one and that their singular vectors align. For residual initialization, convergence of the gradient flow for a Gaussian initialization of the residual network is proven. Numerical experiments illustrate our results and connect them to gradient descent with non-vanishing learning rate.
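
Sharpness, as used above, is the largest eigenvalue of the Hessian of the training loss. The sketch below shows one common way to estimate it numerically, via power iteration on Hessian-vector products in PyTorch; this is generic code, unrelated to the paper's proofs, and the function name is ours.

```python
# Estimate sharpness (largest Hessian eigenvalue of the loss) by power iteration
# on Hessian-vector products computed with double backpropagation.
import torch

def sharpness(loss, params, n_iters=50):
    """loss: scalar tensor; params: list of tensors with requires_grad=True."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(n_iters):
        norm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
        v = [vi / norm for vi in v]                                   # unit vector
        hv = torch.autograd.grad(grads, params, grad_outputs=v,
                                 retain_graph=True)                   # Hessian-vector product H v
        v = [h.detach() for h in hv]
    # After convergence, ||H v|| for a unit v approximates |lambda_max|.
    return torch.sqrt(sum((vi ** 2).sum() for vi in v)).item()

# Hypothetical usage: loss = criterion(model(x), y)
#                     lam_max = sharpness(loss, list(model.parameters()))
```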

Similar to the notion of h-adaptivity, where the discretization resolution is adaptively changed, I propose the notion of model adaptivity, where the underlying model (the governing equations) is adaptively changed in space and time. Specifically, this work introduces a hybrid and adaptive coupling of a 3D bulk fluid flow model with a 2D thin film flow model. As a result, it extends the applicability of existing thin film flow models to complex scenarios where, for example, bulk flow develops into thin films after striking a surface. At each location in space and time, the proposed framework automatically decides whether a 3D or a 2D model must be applied. Using a meshless approach for both the 3D and 2D models, the decision to apply a 2D or a 3D model at each particle is based on the user-prescribed resolution and a local principal component analysis. When a particle needs to be changed from a 3D model to a 2D one, or vice versa, the discretization is changed and all relevant data mapping is done on the fly. Appropriate two-way coupling conditions and mass conservation considerations between the 3D and 2D models are also developed. Numerical results show that this model-adaptive framework offers greater flexibility and compares well against finely resolved 3D simulations. In an actual application scenario, a speed-up by a factor of 3 is obtained while maintaining the accuracy of the solution.
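
An illustrative sketch of the kind of local PCA criterion described above; the neighborhood definition and the threshold are our own assumptions, not the paper's exact criterion.

```python
# Flag a particle as thin-film (2D model) when the smallest principal extent of
# its local neighborhood falls below the user-prescribed resolution.
import numpy as np

def use_2d_model(particle, neighbors, resolution):
    """particle: (3,) position; neighbors: (k, 3) positions of nearby particles."""
    pts = np.vstack([neighbors, particle[None, :]])
    cov = np.cov(pts - pts.mean(axis=0), rowvar=False)        # 3x3 local covariance
    thinnest = np.sqrt(np.sort(np.linalg.eigvalsh(cov))[0])   # extent along the thinnest direction
    return thinnest < resolution                               # thin enough -> switch to 2D film model
```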
