
We propose a novel inference procedure for linear combinations of high-dimensional regression coefficients in generalized estimating equations (GEE), which are widely used to analyze correlated data. Our estimator for this more general inferential target, obtained via constructing projected estimating equations, is shown to be asymptotically normally distributed under certain regularity conditions. We also introduce a data-driven cross-validation procedure to select the tuning parameter for estimating the projection direction, which is not addressed in the existing procedures. We demonstrate the robust finite-sample performance, especially in estimation bias and confidence interval coverage, of the proposed method via extensive simulations, and apply the method to a longitudinal proteomic study of COVID-19 plasma samples to investigate the proteomic signatures associated with disease severity.
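The data-driven tuning-parameter selection mentioned above can be illustrated, in a much simpler setting, by generic K-fold cross-validation. The sketch below tunes a ridge penalty on synthetic data; it is purely illustrative and is not the authors' projected-estimating-equation procedure (all names and values are ours).

```python
import numpy as np

def kfold_cv_ridge(X, y, lambdas, k=5, seed=0):
    """Select a ridge penalty by K-fold cross-validation (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    folds = np.array_split(rng.permutation(n), k)
    cv_err = np.zeros(len(lambdas))
    for i, lam in enumerate(lambdas):
        for val_idx in folds:
            train_idx = np.setdiff1d(np.arange(n), val_idx)
            Xtr, ytr = X[train_idx], y[train_idx]
            # Ridge solution: (X'X + lam I)^{-1} X'y
            beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]),
                                   Xtr.T @ ytr)
            resid = y[val_idx] - X[val_idx] @ beta
            cv_err[i] += np.mean(resid**2) / k
    return lambdas[int(np.argmin(cv_err))], cv_err

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
beta_true = np.zeros(10); beta_true[:3] = 2.0
y = X @ beta_true + rng.standard_normal(200)
grid = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
best_lam, errs = kfold_cv_ridge(X, y, grid)
print(best_lam)
```

The key point carried over to the GEE setting is that the tuning parameter is chosen by minimizing an out-of-fold error criterion rather than fixed a priori.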


Python serves as an open-source and cost-effective alternative to the MATLAB programming language. This paper introduces a concise topology optimization Python code, named ``PyHexTop,'' primarily intended for educational purposes. The code employs hexagonal elements to parameterize the design domain, as such elements naturally provide checkerboard-free optimized designs. PyHexTop is developed based on the ``HoneyTop90'' MATLAB code~\cite{kumar2023honeytop90} and uses the NumPy and SciPy libraries. The code is straightforward and easily comprehensible, making it a helpful tool for newcomers to the topology optimization field to learn and explore. PyHexTop is specifically tailored to address compliance minimization with specified volume constraints. The paper provides a detailed explanation of the code for solving the MBB design problem and extensions to problems with varying boundary and force conditions. The code is publicly shared at \url{https://github.com/PrabhatIn/PyHexTop}.
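For PyHexTop's actual implementation, the repository above should be consulted. As a generic illustration of one ingredient of compliance minimization under a volume constraint, the sketch below performs a single optimality-criteria (OC) density update on synthetic sensitivities; all names and values are ours, not PyHexTop's.

```python
import numpy as np

def oc_update(rho, dc, dv, volfrac, move=0.2):
    """One optimality-criteria density update, with bisection on the
    Lagrange multiplier so the volume constraint is met (generic sketch)."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-6:
        lmid = 0.5 * (l1 + l2)
        # Multiplicative OC rule, limited by a move bound and box constraints
        rho_new = rho * np.sqrt(np.maximum(-dc, 0.0) / (lmid * dv))
        rho_new = np.clip(rho_new,
                          np.maximum(rho - move, 0.0),
                          np.minimum(rho + move, 1.0))
        if rho_new.mean() > volfrac:
            l1 = lmid
        else:
            l2 = lmid
    return rho_new

# Toy sensitivities (compliance decreases as density increases)
rng = np.random.default_rng(0)
n = 400
rho = np.full(n, 0.5)
dc = -rng.uniform(0.5, 2.0, n)   # d(compliance)/d(rho) < 0
dv = np.ones(n)                  # uniform element volumes
rho_new = oc_update(rho, dc, dv, volfrac=0.5)
print(rho_new.mean())
```

In an actual code such as PyHexTop, the sensitivities `dc` would come from a finite-element analysis on the hexagonal mesh, and this update would be repeated until convergence.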

We study in this paper the monotonicity properties of numerical solutions to Volterra integral equations with nonincreasing completely positive kernels on nonuniform meshes. There is a duality between complete positivity and the property that the complementary kernel is nonnegative and nonincreasing. Based on this, we propose ``complementary monotonicity'' to describe nonincreasing completely positive kernels, and the ``right complementary monotone'' (R-CMM) kernels as the analogue for nonuniform meshes. We then show that the numerical solutions inherit the monotonicity properties of the continuous equation whenever the discretization has the R-CMM property. This property appears weaker than log-convexity, and no restriction on the step-size ratio of the discretization is needed for it to hold.
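For readers less familiar with the terminology, we recall one standard characterization of complete positivity (in the sense of Clément and Nohel, as we recall it; notation ours): a kernel $a \in L^1_{\mathrm{loc}}(0,T)$ is completely positive if there exist a constant $\alpha \ge 0$ and a nonnegative, nonincreasing kernel $k$ such that

```latex
\alpha\, a(t) + \int_0^t k(t-s)\, a(s)\, \mathrm{d}s = 1,
\qquad 0 < t \le T.
```

The kernel $k$ here plays the role of the complementary kernel referred to above.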

Hamilton-Jacobi (HJ) partial differential equations (PDEs) have diverse applications spanning physics, optimal control, game theory, and imaging sciences. This research introduces a first-order optimization-based technique for HJ PDEs, which formulates the time-implicit update of HJ PDEs as a saddle point problem. We remark that the saddle point formulation for HJ equations is aligned with the primal-dual formulation of optimal transport and potential mean-field games (MFGs). This connection enables us to extend MFG techniques and design numerical schemes for solving HJ PDEs. We employ the primal-dual hybrid gradient (PDHG) method to solve the saddle point problems, benefiting from the simple structures that enable fast computations in updates. Remarkably, the method caters to a broader range of Hamiltonians, encompassing non-smooth and spatiotemporally dependent cases. The approach's effectiveness is verified through numerical examples in both one and two dimensions, such as quadratic and $L^1$ Hamiltonians with spatial and temporal dependence.
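As a toy illustration of the PDHG iteration on a saddle point problem (not the HJ scheme of this paper), the sketch below solves the least-norm problem $\min_x \tfrac12\|x\|^2$ subject to $Ax=b$ via its primal-dual formulation $\min_x \max_y \tfrac12\|x\|^2 + y^\top(Ax-b)$; all names and step-size choices are ours.

```python
import numpy as np

def pdhg(A, b, n_iter=20000):
    """Primal-dual hybrid gradient (Chambolle-Pock) for the saddle point
    min_x max_y 0.5*||x||^2 + y.(Ax - b), whose solution is the
    least-norm solution of Ax = b."""
    m, n = A.shape
    L = np.linalg.norm(A, 2)     # operator norm of A
    tau = sigma = 0.9 / L        # step sizes satisfying tau*sigma*L^2 < 1
    x = np.zeros(n); y = np.zeros(m)
    for _ in range(n_iter):
        x_new = (x - tau * A.T @ y) / (1.0 + tau)   # prox of 0.5*||x||^2
        x_bar = 2.0 * x_new - x                     # extrapolation step
        y = y + sigma * (A @ x_bar - b)             # dual ascent (prox of b.y)
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))
b = rng.standard_normal(3)
x = pdhg(A, b)
x_exact = A.T @ np.linalg.solve(A @ A.T, b)         # closed-form least-norm solution
print(np.linalg.norm(x - x_exact))
```

The simple, prox-based structure of each update is what makes PDHG attractive for the much larger saddle point problems arising from time-implicit HJ discretizations.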

We construct and analyze a message-passing algorithm for random constraint satisfaction problems (CSPs) at large clause density, generalizing work of El Alaoui, Montanari, and Sellke for Maximum Cut [arXiv:2111.06813] through a connection between random CSPs and mean-field Ising spin glasses. For CSPs with even predicates, the algorithm asymptotically solves a stochastic optimal control problem dual to an extended Parisi variational principle. This gives an optimal fraction of satisfied constraints among algorithms obstructed by the branching overlap gap property of Huang and Sellke [arXiv:2110.07847], notably including the Quantum Approximate Optimization Algorithm and all quantum circuits on a bounded-degree architecture of up to $\epsilon \cdot \log n$ depth.

This article is concerned with a regularity analysis of parametric operator equations with a perspective on uncertainty quantification. We study the regularity of mappings between Banach spaces near branches of isolated solutions that are implicitly defined by a residual equation. Under $s$-Gevrey assumptions on the residual equation, we establish $s$-Gevrey bounds on the Fr\'echet derivatives of the local data-to-solution mapping. This abstract framework is illustrated in a proof of regularity bounds for a semilinear elliptic partial differential equation with parametric and random field input.
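Recall that a parameter-to-solution map $y \mapsto u(y)$ with values in a Banach space $X$ is of Gevrey class $s \ge 1$ if its derivatives satisfy bounds of the form

```latex
\big\| \partial_y^{\nu} u(y) \big\\|_X \;\le\; C\, \rho^{|\nu|}\, (|\nu|!)^s
\qquad \text{for all multi-indices } \nu,
```

with constants $C, \rho > 0$ independent of $\nu$; the case $s = 1$ recovers analyticity.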

Preference-based optimization algorithms are iterative procedures that seek the optimal calibration of a decision vector based only on comparisons between pairs of different tunings. At each iteration, a human decision-maker expresses a preference between two calibrations (samples), highlighting which one, if any, is better than the other. The optimization procedure must use the observed preferences to find the tuning of the decision vector that is most preferred by the decision-maker, while also minimizing the number of comparisons. In this work, we formulate the preference-based optimization problem from a utility theory perspective. Then, we propose GLISp-r, an extension of a recent preference-based optimization procedure called GLISp. The latter uses a Radial Basis Function surrogate to describe the tastes of the decision-maker. Iteratively, GLISp proposes new samples to compare with the best calibration available by trading off exploitation of the surrogate model and exploration of the decision space. In GLISp-r, we propose a different criterion for selecting new candidate samples, inspired by MSRS, a popular procedure in the black-box optimization framework. Compared to GLISp, GLISp-r is less likely to get stuck on local optima of the preference-based optimization problem. We motivate this claim theoretically, with a proof of global convergence, and empirically, by comparing the performance of GLISp and GLISp-r on several benchmark optimization problems.
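To fix ideas on the surrogate ingredient, the sketch below fits a plain Gaussian Radial Basis Function interpolant to scattered function values. Note the difference from GLISp proper, which fits its surrogate from pairwise preferences rather than observed values; this generic value-interpolation example (names and shape parameter ours) only illustrates the RBF machinery.

```python
import numpy as np

def rbf_fit(X, f, eps=4.0):
    """Fit a Gaussian RBF interpolant to function values (generic sketch;
    GLISp itself fits the surrogate from pairwise preferences, not values)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    Phi = np.exp(-eps * d2)              # symmetric kernel matrix
    return np.linalg.solve(Phi, f)       # interpolation weights

def rbf_eval(X, w, x, eps=4.0):
    """Evaluate the fitted surrogate at a new point x."""
    d2 = np.sum((X - x)**2, axis=-1)
    return np.exp(-eps * d2) @ w

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (30, 2))          # 30 sampled calibrations in 2D
f = np.sin(X[:, 0]) + X[:, 1]**2         # toy latent utility values
w = rbf_fit(X, f)
# Interpolation property: the surrogate reproduces the training values
print(abs(rbf_eval(X, w, X[0]) - f[0]))
```

An acquisition criterion (such as the MSRS-inspired one in GLISp-r) then balances minimizing this surrogate against sampling far from previously evaluated points.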

We investigate a class of parametric elliptic semilinear partial differential equations of second order with homogeneous essential boundary conditions, where the coefficients and the right-hand side (and hence the solution) may depend on a parameter. This model can be seen as a reaction-diffusion problem with a polynomial nonlinearity in the reaction term. The efficiency of various numerical approximations across the entire parameter space is closely related to the regularity of the solution with respect to the parameter. We show that if the coefficients and the right-hand side are analytic or Gevrey class regular with respect to the parameter, the same type of parametric regularity is valid for the solution. The key ingredient of the proof is the combination of the alternative-to-factorial technique from our previous work [1] with a novel argument for the treatment of the power-type nonlinearity in the reaction term. As an application of this abstract result, we obtain rigorous convergence estimates for numerical integration of semilinear reaction-diffusion problems with random coefficients using Gaussian and Quasi-Monte Carlo quadrature. Our theoretical findings are confirmed in numerical experiments.
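As a minimal illustration of quasi-Monte Carlo quadrature in the parameter domain (generic, and not the specific rules analysed in the paper), the sketch below integrates a smooth function over $[0,1]^4$ with a scrambled Sobol' sequence and compares against the exact value; the integrand is ours.

```python
import numpy as np
from scipy.stats import qmc

# Integrate f(y) = exp(0.1 * sum(y_j)) over [0,1]^4 with a Sobol' sequence.
# Exact value: ((e^{0.1} - 1) / 0.1)^4, by separability of the integrand.
d = 4
sampler = qmc.Sobol(d=d, scramble=True, seed=0)
pts = sampler.random_base2(m=12)                  # 2^12 quasi-random points
qmc_estimate = np.mean(np.exp(0.1 * pts.sum(axis=1)))
exact = ((np.exp(0.1) - 1.0) / 0.1)**d
print(qmc_estimate, exact)
```

For integrands with the parametric (Gevrey/analytic) regularity established in the paper, such QMC rules achieve convergence rates well beyond the Monte Carlo rate $N^{-1/2}$.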

We develop a numerical method for the Westervelt equation, an important equation in nonlinear acoustics, in the form where the attenuation is represented by a class of non-local in time operators. A semi-discretisation in time based on the trapezoidal rule and A-stable convolution quadrature is stated and analysed. Existence and regularity analysis of the continuous equations informs the stability and error analysis of the semi-discrete system. The error analysis includes the consideration of the singularity at $t = 0$ which is addressed by the use of a correction in the numerical scheme. Extensive numerical experiments confirm the theory.
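To illustrate convolution quadrature for a non-local in time operator, the sketch below applies backward-Euler CQ (a simpler first-order relative of the trapezoidal scheme analysed in the paper; scheme choice and names ours) to the Riemann-Liouville fractional integral, whose action on $f \equiv 1$ is known in closed form.

```python
import numpy as np
from math import gamma

def cq_fractional_integral(f_vals, alpha, h):
    """Backward-Euler convolution quadrature for the Riemann-Liouville
    fractional integral of order alpha on a uniform grid of step h."""
    n = len(f_vals)
    # Weights from expanding (h / (1 - zeta))^alpha in powers of zeta
    # (Grunwald-Letnikov coefficients), generated by a stable recurrence.
    w = np.empty(n)
    w[0] = h**alpha
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 + alpha) / j
    # Discrete convolution: sum_j w_{n-j} f(t_j)
    return np.convolve(w, f_vals)[:n]

alpha, T, N = 0.5, 1.0, 1000
h = T / N
t = np.linspace(0.0, T, N + 1)
approx = cq_fractional_integral(np.ones(N + 1), alpha, h)
exact = t**alpha / gamma(1.0 + alpha)     # (I^alpha 1)(t) = t^alpha / Gamma(1+alpha)
print(abs(approx[-1] - exact[-1]))
```

The trapezoidal-rule CQ of the paper follows the same weights-plus-discrete-convolution pattern with a different generating function, and the correction terms mentioned above address the reduced accuracy near the singularity at $t = 0$.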

Effective application of mathematical models to interpret biological data and make accurate predictions often requires that model parameters are identifiable. Approaches to assess the so-called structural identifiability of models are well-established for ordinary differential equation models, yet there are no commonly adopted approaches that can be applied to assess the structural identifiability of the partial differential equation (PDE) models that are requisite to capture spatial features inherent to many phenomena. The differential algebra approach to structural identifiability has recently been demonstrated to be applicable to several specific PDE models. In this brief article, we present a general methodology for performing structural identifiability analysis on partially observed linear reaction-advection-diffusion (RAD) PDE models. We show that the differential algebra approach can always, in theory, be applied to linear RAD models. Moreover, despite the perceived complexity introduced by the addition of advection and diffusion terms, passing from a non-spatial model to its spatial analogue cannot decrease structural identifiability. Finally, we show that our approach can also be applied to a class of non-linear PDE models that are linear in the unobserved variables, and conclude by discussing future possibilities and the computational cost of performing structural identifiability analysis on more general PDE models in mathematical biology.
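For concreteness, the linear RAD models in question can be written in the generic one-dimensional form (notation ours, with $\mathbf{u}$ the vector of states, of which only some components are observed):

```latex
\frac{\partial \mathbf{u}}{\partial t}
  \;=\; D\, \frac{\partial^2 \mathbf{u}}{\partial x^2}
  \;-\; \nu\, \frac{\partial \mathbf{u}}{\partial x}
  \;+\; A\, \mathbf{u},
\qquad
\mathbf{y} \;=\; C\, \mathbf{u},
```

where $D$ and $\nu$ collect the diffusion and advection coefficients, $A$ contains the linear reaction terms, and $C$ selects the observed components $\mathbf{y}$. Structural identifiability asks which entries of $D$, $\nu$, and $A$ are uniquely determined by perfect knowledge of $\mathbf{y}$.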

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
