
Transport coefficients, such as the mobility, thermal conductivity and shear viscosity, are quantities of prime interest in statistical physics. At the macroscopic level, transport coefficients relate an external forcing of magnitude $\eta$, with $\eta \ll 1$, acting on the system to an average response expressed through some steady-state flux. In practice, steady-state averages involved in the linear response are computed as time averages over a realization of some stochastic differential equation. Variance reduction techniques are of paramount interest in this context, as the linear response is scaled by a factor of $1/\eta$, leading to large statistical errors. One way to limit the increase in the variance is to allow for larger values of $\eta$ by increasing the range of values of the forcing for which the nonlinear part of the response is sufficiently small. In theory, one can add an extra forcing to the physical perturbation of the system, called a synthetic forcing, as long as this extra forcing preserves the invariant measure of the reference system. The aim is to find synthetic perturbations that reduce the nonlinear part of the response as much as possible. We present a mathematical framework for quantifying the quality of synthetic forcings in the context of linear response theory, and discuss various possible choices for them. Our findings are illustrated with numerical results in low-dimensional systems.
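
To make the estimation procedure concrete, the following minimal sketch (the cosine potential, the Euler-Maruyama discretization, and all parameter values are illustrative assumptions, not the paper's setup) computes a mobility as the time-averaged velocity of an overdamped Langevin particle under a constant forcing of magnitude $\eta$, divided by $\eta$; this $1/\eta$ rescaling is precisely what amplifies the statistical error for small forcings.

```python
import numpy as np

# Hedged sketch: mobility of an overdamped Langevin particle in a periodic
# potential V(x) = cos(x), estimated as (time-averaged velocity) / eta.
rng = np.random.default_rng(0)
beta, eta, dt, n_steps = 1.0, 0.1, 1e-3, 500_000

def grad_V(x):
    return -np.sin(x)                        # V(x) = cos(x)

x, displacement = 0.0, 0.0
for _ in range(n_steps):
    drift = -grad_V(x) + eta                 # physical forcing of magnitude eta
    dx = drift * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
    x += dx
    displacement += dx

mean_velocity = displacement / (n_steps * dt)
print("mobility estimate:", mean_velocity / eta)   # response rescaled by 1/eta
```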

Related content

According to ICH Q8 guidelines, the biopharmaceutical manufacturer submits a design space (DS) definition as part of the regulatory approval application, in which case process parameter (PP) deviations within this space are not considered a change and do not trigger a regulatory post-approval procedure. A DS can be described by non-linear PP ranges, i.e., the range of one PP conditioned on specific values of another. However, independent PP ranges (linear combinations) are often preferred in biopharmaceutical manufacturing due to their operational simplicity. While some statistical software supports the calculation of a DS comprised of linear combinations, such methods are generally based on discretizing the parameter space, an approach that scales poorly as the number of PPs increases. Here, we introduce a novel method that uses a numeric optimizer to find the largest DS of linear PP combinations within the parameter space such that the critical quality attribute (CQA) values predicted by a regression model remain within acceptance criteria. A precomputed approximation of tolerance intervals is used in inequality constraints to facilitate fast evaluations of this boundary using a single matrix multiplication. The correctness of the method was validated against different ground truths with known design spaces. Compared to state-of-the-art, grid-based approaches, the optimizer-based procedure is more accurate, generally yields a larger DS and enables the calculation in higher dimensions. Furthermore, a proposed weighting scheme can be used to favor certain PPs over others, thereby enabling a more dynamic approach to DS definition and exploration. The increased PP ranges of the larger DS provide greater operational flexibility for biopharmaceutical manufacturers.
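
The optimizer-based idea can be sketched in a few lines. In the hypothetical setup below (the two-parameter linear regression model, tolerance margin, and acceptance limits are all invented for illustration), the DS is an axis-aligned box parameterized by its lower and upper corners; because the model is linear, evaluating the CQA at the $2^d$ corners with one matrix multiplication bounds it over the whole box.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of an optimizer-based DS search: maximize the box volume
# subject to the regression-predicted CQA, inflated by a precomputed
# tolerance margin, staying inside the acceptance criteria at every corner.
d = 2                                          # number of PPs
coef, intercept = np.array([0.8, -1.2]), 5.0   # illustrative fitted model
margin = 0.3                                   # tolerance-interval half-width
lower_acc, upper_acc = 3.0, 7.0                # CQA acceptance criteria
corners01 = np.array(list(itertools.product([0.0, 1.0], repeat=d)))

def neg_volume(z):                             # z = (lo_1..lo_d, hi_1..hi_d)
    lo, hi = z[:d], z[d:]
    return -np.prod(hi - lo)

def acceptable(z):                             # >= 0 iff all corners pass
    lo, hi = z[:d], z[d:]
    pred = (lo + corners01 * (hi - lo)) @ coef + intercept   # one matmul
    return np.concatenate([pred - margin - lower_acc,
                           upper_acc - (pred + margin)])

z0 = np.array([-0.1, -0.1, 0.1, 0.1])          # small feasible starting box
res = minimize(neg_volume, z0, bounds=[(-5, 5)] * (2 * d),
               constraints={"type": "ineq", "fun": acceptable})
print("DS lower corner:", res.x[:d], "| upper corner:", res.x[d:])
```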

Research on residential segregation has been active since the 1950s, originating in a desire to quantify the level of racial/ethnic segregation in the United States. The Index of Concentration at the Extremes (ICE), an operationalization of racialized economic segregation that simultaneously captures spatial, racial, and income polarization, has been a popular topic in public health research, with a particular focus on social epidemiology. However, the construction of the ICE metric usually ignores the spatial autocorrelation that may be present in the data, and the metric is usually presented without any indication of its statistical and spatial uncertainty. To address these issues, we propose reformulating the ICE metric using Bayesian modeling methodologies. We use a simulation study to evaluate the performance of each method under various segregation scenarios. The application is based on racialized economic segregation in Georgia, and the proposed modeling approach helps determine whether racialized economic segregation has changed between two non-overlapping time points.
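
For reference, the conventional point-estimate form of the ICE that such a reformulation builds on is, for areal unit $i$,

$$\mathrm{ICE}_i = \frac{A_i - P_i}{T_i},$$

where $A_i$ and $P_i$ count the residents of unit $i$ in the privileged and deprived extremes (e.g., high-income non-Hispanic White and low-income Black residents, respectively) and $T_i$ is the unit's total population; the index ranges from $-1$ (everyone in the deprived extreme) to $+1$ (everyone in the privileged extreme). A Bayesian reformulation of this raw ratio can borrow strength across neighboring units and report posterior uncertainty, addressing exactly the two issues raised above.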

In this paper, we propose, analyze and implement efficient time-parallel methods for the Cahn-Hilliard (CH) equation. It is of great importance to develop efficient numerical methods for the CH equation, given its broad range of applicability. The CH equation generally needs to be simulated for a very long time to resolve the phase-coarsening stage of the solution. It is therefore desirable to accelerate the computation using methods that are parallel in time. We present linear and nonlinear Parareal methods for the CH equation, depending on the choice of the fine approximation. We illustrate our results by numerical experiments.
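
As a reference point for readers unfamiliar with the method, here is a generic Parareal skeleton (the propagators and the linear test equation are illustrative stand-ins, not the CH-specific coarse and fine solvers of the paper). The iteration $U_{n+1}^{k+1} = \mathcal{G}(U_n^{k+1}) + \mathcal{F}(U_n^k) - \mathcal{G}(U_n^k)$ lets the expensive fine propagations run in parallel across time slices.

```python
import numpy as np

# Generic Parareal skeleton: G is a cheap coarse propagator, F an expensive
# fine propagator, both advancing a state across one time slice.
def parareal(u0, G, F, n_slices, n_iters):
    U = [u0] + [None] * n_slices
    for n in range(n_slices):                 # serial coarse initialization
        U[n + 1] = G(U[n])
    for _ in range(n_iters):
        F_vals = [F(U[n]) for n in range(n_slices)]  # parallelizable step
        G_old = [G(U[n]) for n in range(n_slices)]
        for n in range(n_slices):             # cheap serial correction sweep
            U[n + 1] = G(U[n]) + F_vals[n] - G_old[n]
    return U

# Toy usage on u' = -u: F is the exact slice propagator, G one Euler step.
lam, dT = -1.0, 0.5
F = lambda u: u * np.exp(lam * dT)
G = lambda u: u * (1.0 + lam * dT)
print(parareal(1.0, G, F, n_slices=8, n_iters=3)[-1])  # approx. exp(-4)
```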

Within the concept of physical human-robot interaction (pHRI), the most important criterion is the safety of the human operator interacting with a high degree-of-freedom (DoF) robot. Therefore, a robust control scheme is in high demand to establish safe pHRI and stabilize nonlinear, high-DoF systems. In this paper, an adaptive decentralized control strategy is designed to accomplish these objectives. To do so, a human upper-limb model and an exoskeleton model are decentralized and augmented at the subsystem level to enable the design of a decentralized control action. Moreover, the human exogenous force (HEF) that can resist exoskeleton motion is estimated using radial basis function neural networks (RBFNNs). Estimating both human upper-limb and robot rigid-body parameters, along with the HEF, makes the controller adaptable to different operators, ensuring their physical safety. A barrier Lyapunov function (BLF) is employed to guarantee that the robot operates in a safe workspace while ensuring stability by adjusting the control law. Unknown actuator uncertainty and constraints are also considered to ensure a smooth and safe pHRI. The asymptotic stability of the whole system is then established by means of the virtual stability concept and virtual power flows (VPFs) under the proposed robust controller. Experimental results are presented and compared to proportional-derivative (PD) and proportional-integral-derivative (PID) controllers. To demonstrate the robustness and performance of the designed controller, experiments are performed at different velocities, with different human users, and in the presence of unknown disturbances. The proposed controller tracked the desired motion accurately and stably, whereas the PD and PID controllers could not even ensure stable motion in the wrist joints of the robot.
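
To illustrate the RBFNN estimation ingredient in isolation (the target force, centers, widths, and gains below are invented for the sketch and are unrelated to the actual exoskeleton hardware), the classic scheme approximates an unknown function as $\hat f(x) = W^\top \phi(x)$ with Gaussian basis functions $\phi$ and adapts $W$ online from the estimation error.

```python
import numpy as np

# Hedged sketch of online RBFNN function approximation with the standard
# gradient adaptation law W_dot = gamma * phi(x) * error.
centers = np.linspace(-2.0, 2.0, 9)            # Gaussian centers
sigma, gamma, dt = 0.5, 5.0, 1e-3
W = np.zeros_like(centers)

def phi(x):
    return np.exp(-((x - centers) ** 2) / (2.0 * sigma ** 2))

f_true = lambda x: np.sin(2.0 * x)             # stand-in for the unknown HEF
for k in range(200_000):
    x = np.sin(1e-3 * k)                       # slowly sweeping input signal
    err = f_true(x) - W @ phi(x)
    W += gamma * phi(x) * err * dt             # adaptation step
print("residual at x = 0.3:", abs(f_true(0.3) - W @ phi(0.3)))
```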

This paper presents a tractable sufficient condition for the consistency of maximum likelihood estimators (MLEs) in partially observed diffusion models, stated in terms of the stationary distribution of the associated fully observed diffusion, under the assumption that the set of unknown parameter values is finite. This sufficient condition is then verified in the context of a latent price model of market microstructure, yielding consistency of the maximum likelihood estimators of the unknown parameters in this model. Finally, we compute the latter estimators using historical financial data from the NASDAQ exchange.
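
When the parameter set is finite, the MLE reduces to an argmax over finitely many log-likelihoods. The sketch below shows this in the simplest fully observed analogue (an Ornstein-Uhlenbeck model with a Gaussian Euler pseudo-likelihood; the model and numbers are illustrative, not the latent price model of the paper).

```python
import numpy as np

# Hedged sketch: MLE over a finite parameter set for dX = -theta * X dt + dW,
# observed at times k*dt, using the Euler transition density
# X_{k+1} | X_k ~ N(X_k - theta * X_k * dt, dt).
rng = np.random.default_rng(1)
dt, n = 0.01, 5000
theta_true, candidates = 2.0, [0.5, 1.0, 2.0, 4.0]

x = np.zeros(n)
for k in range(n - 1):                         # simulate the "observed" path
    x[k + 1] = x[k] - theta_true * x[k] * dt + np.sqrt(dt) * rng.standard_normal()

def log_lik(theta):                            # up to a theta-free constant
    mean = x[:-1] - theta * x[:-1] * dt
    return -0.5 * np.sum((x[1:] - mean) ** 2) / dt

print("MLE over the finite set:", max(candidates, key=log_lik))
```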

Estimating the state of health is a critical function of a battery management system but remains challenging due to the variability of operating conditions and usage requirements of real applications. As a result, techniques based on fitting equivalent circuit models may exhibit inaccuracy at extremes of performance and over long-term ageing, or instability of parameter estimates. Pure data-driven techniques, on the other hand, suffer from a lack of generality beyond their training dataset. In this paper, we propose a hybrid approach combining data- and model-driven techniques for battery health estimation. Specifically, we demonstrate a Bayesian data-driven method, Gaussian process regression, to estimate model parameters as functions of states, operating conditions, and lifetime. Computational efficiency is ensured through a recursive approach yielding a unified joint state-parameter estimator that learns parameter dynamics from data and is robust to gaps and varying operating conditions. Results show the efficacy of the method on both simulated and measured data, including accurate estimates and forecasts of battery capacity and internal resistance. This opens up new opportunities to understand battery ageing in real applications.
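
The data-driven ingredient can be previewed with a toy example (the synthetic resistance trend, kernel, and length scale below are assumptions for illustration, not the paper's dataset or model): a Gaussian process regression over lifetime yields both a smooth parameter estimate and an uncertainty band, which is what makes forecasting with confidence possible.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hedged sketch: learn internal resistance as a function of cycle number
# with a GP, then forecast beyond the data with an uncertainty estimate.
rng = np.random.default_rng(0)
cycles = np.linspace(0.0, 1000.0, 40)[:, None]
r_true = 0.05 + 0.02 * (cycles.ravel() / 1000.0) ** 2    # slow ageing trend
r_obs = r_true + 0.002 * rng.standard_normal(len(r_true))

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=300.0) + WhiteKernel(noise_level=1e-5))
gp.fit(cycles, r_obs)
mean, std = gp.predict(np.array([[1200.0]]), return_std=True)
print(f"forecast R at cycle 1200: {mean[0]:.4f} +/- {2 * std[0]:.4f} ohm")
```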

We present a new abstract interpretation framework for the precise over-approximation of numerical fixpoint iterators. Our key observation is that, unlike standard abstract interpretation (AI), which is typically used to over-approximate all reachable program states, this setting only requires abstracting the concrete fixpoints, i.e., the final program states. Our framework targets numerical fixpoint iterators with convergence and uniqueness guarantees in the concrete and is based on two major technical contributions: (i) theoretical insights which allow us to compute sound and precise fixpoint abstractions without using joins, and (ii) a new abstract domain, CH-Zonotope, which admits efficient propagation and inclusion checks while retaining high precision. We implement our framework in a tool called CRAFT and evaluate it on a novel fixpoint-based neural network architecture (monDEQ) that is particularly challenging to verify. Our extensive evaluation demonstrates that CRAFT exceeds the state of the art in speed (by two orders of magnitude), scalability (by one order of magnitude), and precision (25% higher certified accuracies).
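
While CH-Zonotope itself is the paper's contribution, the underlying zonotope mechanics are standard and easy to sketch (the matrices below are arbitrary illustrations): a set $\{c + G\epsilon : \epsilon \in [-1,1]^p\}$ is mapped exactly through affine layers and over-approximated by its interval hull for inclusion checks.

```python
import numpy as np

# Hedged sketch of generic zonotope propagation (not the CH-Zonotope domain):
# an affine map W x + b sends the zonotope (c, G) to (W c + b, W G).
def affine(c, G, W, b):
    return W @ c + b, W @ G

def interval_hull(c, G):                       # sound elementwise bounds
    r = np.abs(G).sum(axis=1)
    return c - r, c + r

c = np.array([1.0, 0.0])                       # center
G = np.array([[0.5, 0.1],                      # generator matrix
              [0.0, 0.2]])
W = np.array([[1.0, -1.0],
              [2.0, 0.5]])
b = np.array([0.0, 1.0])
c2, G2 = affine(c, G, W, b)
print("lower/upper bounds after the layer:", interval_hull(c2, G2))
```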

This paper studies the order of power series that are linear combinations of a given finite set of power series. The order of a formal power series $f$, denoted $\textrm{ord}(f)$, is defined as the minimum exponent of $x$ that has a non-zero coefficient in $f(x)$. Our first result is that the order of the Wronskian of these power series is equivalent, up to a polynomial factor, to the maximum order that occurs among linear combinations of these power series. This implies that the Wronskian approach used in (Kayal and Saha, TOCT'2012) to upper bound the order of sums of square roots is optimal up to a polynomial blowup. We also demonstrate upper bounds, similar to those of (Kayal and Saha, TOCT'2012), on the order of power series in a variety of other scenarios. In addition, we solve a special case of the inequality testing problem outlined in (Etessami et al., TOCT'2014). In the second part of the paper, we study the equality variant of the sum-of-square-roots problem, which is decidable in polynomial time due to (Bl\"omer, FOCS'1991). We investigate a natural generalization of this problem in which the input integers are given as straight-line programs. Under the Generalized Riemann Hypothesis (GRH), we show that this problem can be reduced to the so-called ``one-dimensional'' variant. We identify the key mathematical challenges for solving this ``one-dimensional'' variant.
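
For concreteness, the Wronskian in question is the standard one: given power series $f_1, \dots, f_n$,

$$W(f_1,\dots,f_n) = \det\begin{pmatrix} f_1 & f_2 & \cdots & f_n \\ f_1' & f_2' & \cdots & f_n' \\ \vdots & \vdots & & \vdots \\ f_1^{(n-1)} & f_2^{(n-1)} & \cdots & f_n^{(n-1)} \end{pmatrix},$$

which (in characteristic zero) vanishes identically if and only if the $f_i$ are linearly dependent. The first result above says that $\textrm{ord}(W(f_1,\dots,f_n))$ and $\max_{a \neq 0} \textrm{ord}\big(\sum_i a_i f_i\big)$ agree up to a polynomial factor.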

Network structures underlie the dynamics of many complex phenomena, from gene regulation and food webs to power grids and social media. Yet, as they often cannot be observed directly, their connectivities must be inferred from observations of their emergent dynamics. In this work we present a powerful computational method to infer large network adjacency matrices from time series data using a neural network, providing uncertainty quantification on the prediction in a manner that reflects both the non-convexity of the inference problem and the noise on the data. This is useful since network inference problems are typically underdetermined, and such uncertainty quantification has hitherto been lacking from comparable methods. We demonstrate our method's capabilities by inferring line failure locations in the British power grid from its response to a power cut. Since the problem is underdetermined, many classical statistical tools (e.g. regression) are not straightforwardly applicable. Our method, in contrast, provides probability densities on each edge, allowing the use of hypothesis testing to make meaningful probabilistic statements about the location of the power cut. We also demonstrate our method's ability to learn an entire cost matrix for a non-linear model of economic activity in Greater London. Our method outperforms ordinary least squares (OLS) regression on noisy data in terms of both speed and prediction accuracy, and scales as $N^2$, whereas OLS is cubic. Not having been specifically engineered for network inference, our method represents a general parameter estimation scheme that is applicable to any parameter dimension.
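
The deterministic core of such an inference problem is easy to sketch (the linear dynamics, network size, and noise level below are invented for illustration; the paper's method additionally equips every edge with a probability density): fit the adjacency matrix by gradient descent on the one-step prediction loss of the observed time series.

```python
import numpy as np

# Hedged sketch: recover an adjacency matrix A from a time series generated
# by x_{t+1} = A x_t + noise, via gradient descent on the prediction loss.
rng = np.random.default_rng(0)
N, T, lr = 5, 400, 0.05
A_true = (rng.random((N, N)) < 0.4) * rng.random((N, N))
A_true *= 0.8 / np.abs(np.linalg.eigvals(A_true)).max()  # keep dynamics stable

X = np.zeros((T, N))
X[0] = rng.standard_normal(N)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.5 * rng.standard_normal(N)

A_hat = np.zeros((N, N))
for _ in range(2000):
    resid = X[1:] - X[:-1] @ A_hat.T           # one-step prediction residuals
    grad = -2.0 * resid.T @ X[:-1] / (T - 1)   # gradient of the squared loss
    A_hat -= lr * grad
print("largest entrywise error:", np.abs(A_hat - A_true).max())
```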

In this work, we consider the numerical computation of ground states and dynamics of single-component Bose-Einstein condensates (BECs). The corresponding models are spatially discretized with a multiscale finite element approach known as Localized Orthogonal Decomposition (LOD). Despite the outstanding approximation properties of such a discretization in the context of BECs, taking full advantage of it without creating severe computational bottlenecks can be tricky. In this paper, we therefore present two fully discrete numerical approaches that are formulated in such a way that they take special account of the structure of the LOD spaces. One approach is devoted to the computation of ground states, the other to the computation of dynamics. A central focus of this paper is also the discussion of implementation aspects that are very important for the practical realization of the methods. In particular, we discuss the use of suitable data structures that keep the memory costs economical. The paper concludes with various numerical experiments in 1d, 2d and 3d that investigate convergence rates and approximation properties of the methods and demonstrate their performance and computational efficiency, also in comparison to spectral and standard finite element approaches.
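
Independently of the LOD machinery, the standard workhorse for BEC ground states is the normalized gradient flow, which the following sketch illustrates on a uniform 1d grid (the finite difference discretization, harmonic trap, and all parameters are illustrative assumptions, not the paper's LOD setup): step the Gross-Pitaevskii energy gradient in imaginary time and renormalize the mass after every step.

```python
import numpy as np

# Hedged sketch of a normalized gradient flow for a 1d Gross-Pitaevskii
# ground state: explicit imaginary-time steps followed by L2 renormalization.
n, L, dt, kappa = 256, 16.0, 1e-3, 100.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
V = 0.5 * x ** 2                               # harmonic trapping potential
u = np.exp(-x ** 2)
u /= np.sqrt(h * np.sum(u ** 2))               # normalize the initial guess

def laplacian(v):                              # Dirichlet finite differences
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / h ** 2
    return lap

for _ in range(20_000):
    u = u + dt * (0.5 * laplacian(u) - V * u - kappa * u ** 3)
    u /= np.sqrt(h * np.sum(u ** 2))           # project back to unit mass
mu = h * np.sum(0.5 * np.gradient(u, h) ** 2 + V * u ** 2 + kappa * u ** 4)
print("approximate chemical potential:", mu)
```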
