
We present a new numerical model for solving the Chew-Goldberger-Low (CGL) system of equations describing a bi-Maxwellian plasma in a magnetic field. Heliospheric and geospace environments are often observed to be in an anisotropic state with distinctly different parallel and perpendicular pressure components. The CGL system represents the simplest leading-order correction to the common isotropic MHD model that still allows one to incorporate the latter's most desirable features. However, the CGL system presents several numerical challenges: the system is not in conservation form, the source terms are stiff, and, unlike MHD, it is prone to a loss of hyperbolicity if the parallel and perpendicular pressures become too different. The usual cure is to bring the parallel and perpendicular pressures closer to one another, but this has typically been done in an ad hoc manner. We present a physics-informed method of pressure relaxation, based on the idea of pitch-angle scattering, that keeps the numerical system hyperbolic and naturally leads to zero anisotropy in the limit of very large plasma beta. Numerical codes based on the CGL equations can therefore be made to function robustly for any magnetic field strength, including the limit where the magnetic field approaches zero. The capabilities of our new algorithm are demonstrated using several stringent test problems that provide a comparison of the CGL equations in the weakly and strongly collisional limits. These include a test problem that mimics the interaction of a shock with a magnetospheric environment in 2D.
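As a rough illustration of the relaxation idea, the sketch below drives the parallel and perpendicular pressures toward their mean, with a scattering rate that is assumed to grow with plasma beta and with the stiff source handled by exact exponential integration; the rate model and all parameter values are illustrative assumptions, not the scheme used in the paper.

```python
import numpy as np

# Minimal sketch (not the paper's scheme): relax p_par and p_perp toward the
# mean pressure, mimicking pitch-angle scattering.  The relaxation rate nu is
# assumed to grow with plasma beta, so the anisotropy vanishes as B -> 0.

MU0 = 4e-7 * np.pi  # vacuum permeability [SI]

def relax_pressures(p_par, p_perp, B, dt, nu0=1.0):
    """One stiff relaxation step, integrated exactly (hence robust for large nu*dt)."""
    p_mean = (p_par + 2.0 * p_perp) / 3.0        # trace/3; conserved by scattering
    p_mag  = B**2 / (2.0 * MU0)                  # magnetic pressure
    beta   = p_mean / max(p_mag, 1e-300)         # plasma beta (guard against B -> 0)
    nu     = nu0 * beta                          # assumed beta-dependent scattering rate
    decay  = np.exp(-nu * dt)                    # exact solution of dA/dt = -nu*A
    p_par_new  = p_mean + (p_par  - p_mean) * decay
    p_perp_new = p_mean + (p_perp - p_mean) * decay
    return p_par_new, p_perp_new

if __name__ == "__main__":
    p_par, p_perp, B = 2.0e-9, 0.5e-9, 5.0e-9    # Pa, Pa, Tesla (illustrative values)
    for n in range(5):
        p_par, p_perp = relax_pressures(p_par, p_perp, B, dt=0.1)
        print(f"step {n}: p_par={p_par:.3e}  p_perp={p_perp:.3e}  ratio={p_par/p_perp:.3f}")
```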


We investigate the limiting behavior of the Navier-Stokes-Cahn-Hilliard model for binary-fluid flows as the diffuse-interface thickness passes to zero, in the presence of fluid-fluid-solid contact lines. Allowing for motion of such contact lines relative to the solid substrate is required to adequately model multi-phase and multi-species fluid transport past and through solid media. Even though diffuse-interface models provide an inherent slip mechanism through the mobility-induced diffusion, this slip vanishes as the interface thickness and the mobility parameter tend to zero in the so-called sharp-interface limit. The objective of this work is to present dynamic wetting and generalized Navier boundary conditions for diffuse-interface models that are consistent in the sharp-interface limit. We concentrate our analysis on the prototypical binary-fluid Couette-flow problem. To verify the consistency of the diffuse-interface model in the limit of vanishing interface thickness, we provide reference limit solutions of a corresponding sharp-interface model. For parameter values both at and away from the critical viscosity ratio, we present and compare the results of both the diffuse- and sharp-interface models. The close match between the two sets of results indicates that the considered test case serves well as a benchmark for further research.
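A minimal illustration of the sharp-interface limit itself: the standard one-dimensional equilibrium phase-field profile steepens into a step as the interface-thickness parameter goes to zero. The profile and the thickness proxy below are textbook Cahn-Hilliard facts and do not reflect the paper's boundary conditions or mobility scaling.

```python
import numpy as np

# Sketch: the equilibrium phase-field profile phi(x) = tanh(x / (sqrt(2)*eps))
# approaches a sharp step as the interface-thickness parameter eps -> 0.
# Illustrative only; the paper's wetting boundary conditions are not modelled.

def phase_profile(x, eps):
    """Equilibrium 1D phase field for a double-well potential with thickness eps."""
    return np.tanh(x / (np.sqrt(2.0) * eps))

x = np.linspace(-1.0, 1.0, 2001)
for eps in (0.2, 0.1, 0.05, 0.025):
    phi = phase_profile(x, eps)
    # width of the region where |phi| < 0.9, a simple proxy for interface thickness
    width = np.ptp(x[np.abs(phi) < 0.9])
    print(f"eps = {eps:5.3f}  ->  interface width ~ {width:.3f}")
```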

We propose using mechanistic interpretability -- techniques for reverse engineering model weights into human-interpretable algorithms -- to derive and compactly prove formal guarantees on model performance. We prototype this approach by formally proving lower bounds on the accuracy of 151 small transformers trained on a Max-of-$K$ task. We create 102 different computer-assisted proof strategies and assess their length and the tightness of the bound they yield on each of our models. Using quantitative metrics, we find that shorter proofs seem to require and provide more mechanistic understanding. Moreover, we find that more faithful mechanistic understanding leads to tighter performance bounds. We confirm these connections by qualitatively examining a subset of our proofs. Finally, we identify compounding structureless noise as a key challenge for using mechanistic interpretability to generate compact proofs of model performance.
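To make the setting concrete, the sketch below spells out the Max-of-$K$ task and the brute-force end of the proof spectrum: enumerating every input certifies accuracy exactly but uses no mechanistic understanding. The toy score-table "model", the vocabulary size, and $K$ are illustrative assumptions, not the trained transformers of the paper.

```python
import itertools

# Sketch of the Max-of-K setup and the "brute force" end of the proof spectrum:
# enumerate every input and certify accuracy exactly.  The toy model below
# (argmax of a fixed score table) stands in for a trained transformer; the
# vocabulary size and K are illustrative assumptions.

VOCAB, K = list(range(8)), 3

# A stand-in "model": scores[t] plays the role of learned logits for token t.
# The perturbation on token 5 makes the model confuse 5 with 6, so it is not
# perfectly accurate, and the enumeration below measures exactly how accurate.
scores = {t: t + (1.5 if t == 5 else 0.0) for t in VOCAB}

def model_predict(seq):
    return max(seq, key=lambda t: scores[t])

correct = total = 0
for seq in itertools.product(VOCAB, repeat=K):
    total += 1
    correct += (model_predict(seq) == max(seq))
print(f"certified accuracy = {correct}/{total} = {correct/total:.4f}")
```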

We study the algorithmic task of finding large independent sets in Erdos-Renyi $r$-uniform hypergraphs on $n$ vertices having average degree $d$. Krivelevich and Sudakov showed that the maximum independent set has density $\left(\frac{r\log d}{(r-1)d}\right)^{1/(r-1)}$. We show that the class of low-degree polynomial algorithms can find independent sets of density $\left(\frac{\log d}{(r-1)d}\right)^{1/(r-1)}$ but no larger. This extends and generalizes earlier results of Gamarnik and Sudan, Rahman and Virag, and Wein on graphs, and answers a question of Bal and Bennett. We conjecture that this statistical-computational gap holds for this problem. Additionally, we explore the universality of this gap by examining $r$-partite hypergraphs. A hypergraph $H=(V,E)$ is $r$-partite if there is a partition $V=V_1\cup\cdots\cup V_r$ such that each edge contains exactly one vertex from each set $V_i$. We consider the problem of finding large balanced independent sets (independent sets containing the same number of vertices in each partition) in random $r$-partite hypergraphs with $n$ vertices in each partition and average degree $d$. We prove that the maximum balanced independent set has density $\left(\frac{r\log d}{(r-1)d}\right)^{1/(r-1)}$ asymptotically. Furthermore, we prove an analogous low-degree computational threshold of $\left(\frac{\log d}{(r-1)d}\right)^{1/(r-1)}$. Our results recover and generalize recent work of Perkins and the second author on bipartite graphs. While the graph case has been extensively studied, this work is the first to consider statistical-computational gaps of optimization problems on random hypergraphs. Our results suggest that these gaps persist for larger uniformities as well as across many models. A somewhat surprising aspect of the gap for balanced independent sets is that the algorithm achieving the lower bound is a simple degree-1 polynomial.
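For flavor, the sketch below uses one natural degree-1 statistic, the vertex degree, to extract a balanced independent set from a random $r$-partite hypergraph: keep the lowest-degree vertices in each part, repair any surviving edges greedily, and trim the parts back to equal size. This is only an illustration of the style of low-degree algorithms; the specific polynomial and thresholds analysed in the paper may differ.

```python
import random
from collections import defaultdict

# Illustrative sketch (not the paper's analysis): vertex degree is a degree-1
# statistic of the hypergraph.  Keep the lowest-degree vertices in each part,
# repair remaining edges greedily, then trim the parts back to equal size.
# The values of r, n, and the number of edges are arbitrary assumptions.

def random_r_partite_hypergraph(r, n, m, seed=0):
    rng = random.Random(seed)
    # vertices are (part, index); each edge takes one vertex from each part
    return [tuple((i, rng.randrange(n)) for i in range(r)) for _ in range(m)]

def balanced_independent_set(edges, r, n, keep_frac=0.3):
    deg = defaultdict(int)
    for e in edges:
        for v in e:
            deg[v] += 1
    keep, k = set(), int(keep_frac * n)
    for part in range(r):
        verts = sorted(((part, j) for j in range(n)), key=lambda v: deg[v])
        keep.update(verts[:k])
    # repair: any edge still fully inside the kept set loses one vertex
    for e in edges:
        if all(v in keep for v in e):
            keep.discard(e[0])
    # trim to the smallest surviving part so the set is balanced again
    target = min(sum(1 for v in keep if v[0] == p) for p in range(r))
    out = []
    for p in range(r):
        out.extend([v for v in keep if v[0] == p][:target])
    return out

edges = random_r_partite_hypergraph(r=3, n=200, m=2000)
ind = balanced_independent_set(edges, r=3, n=200)
print("balanced independent set size per part:", len(ind) // 3)
```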

Beyond-planarity concepts (prominent examples include k-planarity and fan-planarity) impose certain restrictions on the allowed patterns of crossings in drawings. It is natural to ask how much the number of crossings may increase over the traditional (unrestricted) crossing number. Previous approaches to bounding such ratios, e.g. [arXiv:1908.03153, arXiv:2105.12452], require very specialized constructions and arguments for each considered beyond-planarity concept, and mostly yield only asymptotically non-tight bounds. We propose a very general proof framework that allows us to obtain asymptotically tight bounds, and in which the concept-specific parts of the proof typically boil down to a couple of lines. We show the strength of our approach by giving improved or first bounds for several beyond-planarity concepts.

We present a generalisation of the theory of quantitative algebras of Mardare, Panangaden and Plotkin in which (i) the carriers of quantitative algebras are not restricted to be metric spaces and can be arbitrary fuzzy relations or generalised metric spaces, and (ii) the interpretations of the algebraic operations are not required to be nonexpansive. Our main results include: a novel sound and complete proof system; the proof that free quantitative algebras always exist; the proof of strict monadicity of the induced Free-Forgetful adjunction; and the result that all monads (on fuzzy relations) that lift finitary monads (on sets) admit a quantitative equational presentation.

We propose a new statistical reduced-complexity climate model. The centerpiece of the model is a set of physical equations for the global climate system which we show how to cast in non-linear state space form. The parameters of the model are estimated by maximum likelihood, with the likelihood function evaluated by the extended Kalman filter. Our statistical framework is based on well-established methodology and is computationally feasible. In an empirical analysis, we estimate the parameters on a data set covering the period 1959-2022. A likelihood ratio test sheds light on the most appropriate equation for converting the level of atmospheric concentration of carbon dioxide into radiative forcing. Using the estimated model and different future paths of greenhouse gas emissions, we project global mean surface temperature until the year 2100. Our results illustrate the potential of combining statistical modelling with physical insights to arrive at rigorous statistical analyses of the climate system.
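The likelihood evaluation can be sketched in a few lines: run an extended Kalman filter through the state-space model and accumulate the Gaussian log-density of the innovations. The toy energy-balance-style transition, the logarithmic CO2-to-forcing rule, and all parameter values below are illustrative assumptions rather than the paper's specification.

```python
import numpy as np

# Minimal sketch of likelihood evaluation with an extended Kalman filter for a
# scalar, toy energy-balance-type state-space model.  The model form, the
# logarithmic CO2-to-forcing rule, and all parameters are illustrative only.

def ekf_loglik(theta, y, co2, dt=1.0, x0=0.0, P0=1.0):
    lam, kappa, q, r = theta                 # feedback, inverse heat capacity, noise variances
    forcing = 5.35 * np.log(co2 / 280.0)     # one candidate CO2-to-forcing rule (assumed)
    x, P, loglik = x0, P0, 0.0
    for t in range(len(y)):
        # predict: x_{t+1} = x + dt*kappa*(F_t - lam*x) + w,  w ~ N(0, q)
        f  = x + dt * kappa * (forcing[t] - lam * x)
        Fx = 1.0 - dt * kappa * lam          # transition Jacobian d f / d x
        x, P = f, Fx * P * Fx + q
        # update with observation y_t = x + v,  v ~ N(0, r)
        S = P + r                            # innovation variance
        loglik += -0.5 * (np.log(2 * np.pi * S) + (y[t] - x) ** 2 / S)
        K = P / S                            # Kalman gain
        x, P = x + K * (y[t] - x), (1.0 - K) * P
    return loglik

# usage: simulate a tiny synthetic record and evaluate the likelihood at one theta
rng = np.random.default_rng(1)
co2 = np.linspace(316.0, 420.0, 64)          # ppm, roughly in the spirit of 1959-2022
x, ys = 0.0, []
for c in co2:
    x = x + 0.1 * (5.35 * np.log(c / 280.0) - 1.2 * x) + rng.normal(0, np.sqrt(0.02))
    ys.append(x + rng.normal(0, np.sqrt(0.05)))
print("log-likelihood at the simulating parameters:",
      ekf_loglik((1.2, 0.1, 0.02, 0.05), np.array(ys), co2))
```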

We consider a reconfigurable intelligent surface (RIS) assisted cell-free massive multiple-input multiple-output non-orthogonal multiple access (NOMA) system, where each access point (AP) serves all the users with the aid of the RIS. We model the system practically by considering imperfect instantaneous channel state information (CSI) and imperfect successive interference cancellation at the users' end. We first obtain the channel estimates using the linear minimum mean square error approach, accounting for spatial correlation at the RIS, and then derive a closed-form downlink spectral efficiency (SE) expression using the statistical CSI. We next formulate a joint optimization problem to maximize the sum SE of the system. We introduce a novel successive Quadratic Transform (successive-QT) algorithm to optimize the transmit power coefficients using block optimization together with the quadratic transform, and then use the particle swarm optimization technique to design the RIS phase shifts. Note that most of the existing works on RIS-aided cell-free systems are specific instances of the general scenario studied in this work. We numerically show that i) the RIS-assisted link is more advantageous in the low transmit power regime, where the direct link between AP and user is weak, ii) NOMA outperforms orthogonal multiple access schemes in terms of SE, and iii) the proposed joint optimization framework significantly improves the sum SE of the system.
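To convey the idea behind the quadratic transform, the sketch below applies it to a toy sum-of-SINR power-allocation problem, alternating a closed-form auxiliary-variable update with a closed-form power update. The channel model, dimensions, and objective are illustrative assumptions; the paper's successive-QT algorithm operates on the NOMA power coefficients inside the derived SE expression.

```python
import numpy as np

# Sketch of the quadratic transform (Shen-Yu fractional programming) on a toy
# sum-of-SINR power-allocation problem.  Channel model and objective are
# illustrative assumptions, not the paper's SE maximization.

rng = np.random.default_rng(0)
K, P_MAX, SIGMA2 = 4, 1.0, 0.1
H = rng.exponential(1.0, size=(K, K))          # H[k, j]: gain of user j's signal at user k
H[np.diag_indices(K)] += 3.0                   # strengthen the direct links

def sinr(p):
    signal = np.diag(H) * p
    interference = H @ p - signal              # sum_{j != k} H[k, j] * p_j
    return signal / (interference + SIGMA2)

p = np.full(K, P_MAX)                          # start from full power
for it in range(50):
    # y-update: y_k = sqrt(A_k) / B_k with A_k = g_k p_k, B_k = interference + noise
    g = np.diag(H)
    interf = H @ p - g * p
    y = np.sqrt(g * p) / (interf + SIGMA2)
    # p-update: closed-form maximiser of the quadratic-transform surrogate
    D = H.T @ (y ** 2) - g * y ** 2            # D_k = sum_{j != k} y_j^2 H[j, k]
    p = np.clip(y ** 2 * g / np.maximum(D, 1e-12) ** 2, 0.0, P_MAX)
print("sum SINR after QT iterations:", sinr(p).sum())
```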

Evaluating the generalisation capabilities of multimodal models based solely on their performance on out-of-distribution data fails to capture their true robustness. This work introduces a comprehensive evaluation framework that systematically examines the role of instructions and inputs in the generalisation abilities of such models, considering architectural design, input perturbations across the language and vision modalities, and increased task complexity. The proposed framework uncovers the resilience of multimodal models to extreme instruction perturbations and their vulnerability to observational changes, raising concerns about overfitting to spurious correlations. By applying this evaluation framework to current Transformer-based multimodal models for robotic manipulation tasks, we uncover limitations and suggest that future advances should focus on architectural and training innovations that better integrate multimodal inputs, enhancing a model's generalisation prowess by prioritising sensitivity to input content over incidental correlations.
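The kind of instruction-level perturbations such a framework can apply is easy to illustrate; the specific operators and severity levels below are assumptions for illustration, not the paper's exact protocol.

```python
import random

# Sketch of instruction-level perturbation operators of the kind an evaluation
# framework can apply to a language instruction; operators and severities are
# illustrative assumptions.

def shuffle_words(text, rng):
    words = text.split()
    rng.shuffle(words)
    return " ".join(words)

def drop_words(text, rng, p=0.3):
    words = [w for w in text.split() if rng.random() > p]
    return " ".join(words) if words else text

def char_noise(text, rng, p=0.1):
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    return "".join(rng.choice(alphabet) if rng.random() < p and c.isalpha() else c
                   for c in text)

instruction = "pick up the red block and place it on the green bowl"
rng = random.Random(0)
for name, op in [("shuffle", shuffle_words), ("drop", drop_words), ("chars", char_noise)]:
    print(f"{name:8s}: {op(instruction, rng)}")
```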

This study explores the application of genetic algorithms in generating highly nonlinear substitution boxes (S-boxes) for symmetric key cryptography. We present a novel implementation that combines a genetic algorithm with the Walsh-Hadamard Spectrum (WHS) cost function to produce 8x8 S-boxes with a nonlinearity of 104. Our approach achieves performance parity with the best-known methods, requiring an average of 49,399 iterations with a 100% success rate. The study demonstrates significant improvements over earlier genetic algorithm implementations in this field, reducing iteration counts by orders of magnitude. By achieving equivalent performance through a different algorithmic approach, our work expands the toolkit available to cryptographers and highlights the potential of genetic methods in cryptographic primitive generation. The adaptability and parallelization potential of genetic algorithms suggest promising avenues for future research in S-box generation, potentially leading to more robust, efficient, and innovative cryptographic systems. Our findings contribute to the ongoing evolution of symmetric key cryptography, offering new perspectives on optimizing critical components of secure communication systems.
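The Walsh-Hadamard machinery underlying the cost function can be sketched directly: compute the Walsh spectrum of every nonzero linear combination of the S-box output bits, from which the nonlinearity is 128 - max|W|/2. The WHS-style cost at the end uses placeholder constants (X, R); the exact form used in the search may differ.

```python
import numpy as np

# Sketch: nonlinearity of an 8x8 S-box via the fast Walsh-Hadamard transform,
# plus a WHS-style cost with placeholder constants (X, R are assumptions).

N = 8

def walsh_transform(signs):
    """Fast Walsh-Hadamard transform of a +/-1 vector of length 2^N."""
    w = signs.astype(np.int64)
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            a, b = w[i:i+h].copy(), w[i+h:i+2*h].copy()
            w[i:i+h], w[i+h:i+2*h] = a + b, a - b
        h *= 2
    return w

def nonlinearity_and_cost(sbox, X=0, R=3):
    max_abs, cost = 0, 0.0
    for b in range(1, 1 << N):                       # every nonzero output mask
        bits = np.array([bin(b & s).count("1") & 1 for s in sbox])
        spectrum = walsh_transform(1 - 2 * bits)     # WHT of (-1)^{b . S(x)}
        max_abs = max(max_abs, int(np.max(np.abs(spectrum))))
        cost += np.sum(np.abs(np.abs(spectrum) - X) ** R)
    return (1 << (N - 1)) - max_abs // 2, cost

rng = np.random.default_rng(0)
sbox = rng.permutation(1 << N)                       # a random bijective 8x8 S-box
nl, whs_cost = nonlinearity_and_cost(sbox)
print(f"nonlinearity = {nl}, WHS-style cost = {whs_cost:.3e}")
```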

Many organizations use algorithms that have a disparate impact, i.e., the benefits or harms of the algorithm fall disproportionately on certain social groups. Addressing an algorithm's disparate impact can be challenging, however, because it is often unclear whether it is possible to reduce this impact without sacrificing other objectives of the organization, such as accuracy or profit. Establishing the improvability of algorithms with respect to multiple criteria is of both conceptual and practical interest: in many settings, disparate impact that would otherwise be prohibited under US federal law is permissible if it is necessary to achieve a legitimate business interest. The question is how a policy-maker can formally substantiate, or refute, this "necessity" defense. In this paper, we provide an econometric framework for testing the hypothesis that it is possible to improve on the fairness of an algorithm without compromising on other pre-specified objectives. Our proposed test is simple to implement and can be applied under any exogenous constraint on the algorithm space. We establish the large-sample validity and consistency of our test, and illustrate its practical application by evaluating a healthcare algorithm originally considered by Obermeyer et al. (2019). In this application, we reject the null hypothesis that it is not possible to reduce the algorithm's disparate impact without compromising the accuracy of its predictions.
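Purely as an illustration of the underlying question, the sketch below searches a constrained family of alternative decision rules (a single threshold) for one that reduces a selection-rate disparity without lowering accuracy, and bootstraps the improvement. The synthetic data, metrics, candidate family, and bootstrap are stand-ins and do not represent the paper's econometric test.

```python
import numpy as np

# Illustrative sketch only: can any rule in a constrained family improve the
# disparity metric without hurting accuracy, and is the improvement
# distinguishable from sampling noise?  All ingredients here are stand-ins.

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                    # 0/1 group membership
risk = rng.beta(2, 5, n) + 0.1 * group           # a predicted risk score
y = (rng.random(n) < np.clip(risk, 0, 1)).astype(int)

def accuracy(thr):
    return np.mean((risk > thr).astype(int) == y)

def disparity(thr):
    sel = risk > thr
    return abs(sel[group == 1].mean() - sel[group == 0].mean())   # selection-rate gap

status_quo = 0.5
candidates = np.linspace(0.3, 0.7, 41)
feasible = [t for t in candidates if accuracy(t) >= accuracy(status_quo)]
best = min(feasible, key=disparity)
improvement = disparity(status_quo) - disparity(best)

# crude bootstrap of the improvement to gauge whether it is distinguishable from zero
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    g, r = group[idx], risk[idx]
    sel0, sel1 = r > status_quo, r > best
    d0 = abs(sel0[g == 1].mean() - sel0[g == 0].mean())
    d1 = abs(sel1[g == 1].mean() - sel1[g == 0].mean())
    boot.append(d0 - d1)
print(f"improvement = {improvement:.4f}, bootstrap 5th pct = {np.quantile(boot, 0.05):.4f}")
```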
