
The problem of low rank approximation is ubiquitous in science. Traditionally this problem is solved in unitary invariant norms such as the Frobenius or spectral norm, due to the existence of efficient methods for building approximations. However, recent results reveal the potential of low rank approximations in the Chebyshev norm, which naturally arises in many applications. In this paper we tackle the problem of building optimal rank-1 approximations in the Chebyshev norm. We investigate the properties of the alternating minimization algorithm for building low rank approximations and demonstrate how to use it to construct an optimal rank-1 approximation. As a result, we propose an algorithm that is capable of building optimal rank-1 approximations in the Chebyshev norm for small matrices.
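To fix ideas, here is a minimal sketch of the alternating scheme (our own illustration, not the paper's algorithm): with v fixed, each entry u_i solves an independent one-dimensional problem min_t max_j |a_ij - t v_j|, which is convex and piecewise linear, so a ternary search suffices; the roles of u and v then alternate. The bracket [-10, 10] and the SVD initialization are illustrative choices.

```python
import numpy as np

def cheb_1d(a, b, lo=-10.0, hi=10.0, iters=100):
    """Minimize max_j |a_j - t*b_j| over the scalar t. The objective is
    convex and piecewise linear, so ternary search on a bracket works;
    the bracket [lo, hi] is an illustrative choice."""
    f = lambda t: np.max(np.abs(a - t * b))
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def rank1_chebyshev(A, sweeps=50):
    """Alternating minimization for a rank-1 approximation u v^T of A in
    the Chebyshev (entrywise max) norm: with v fixed, each u_i solves an
    independent 1-D Chebyshev problem, and vice versa."""
    m, n = A.shape
    v = np.linalg.svd(A)[2][0]          # illustrative initialization
    u = np.zeros(m)
    for _ in range(sweeps):
        for i in range(m):
            u[i] = cheb_1d(A[i], v)
        for j in range(n):
            v[j] = cheb_1d(A[:, j], u)
    return u, v
```

Note that alternating minimization alone generally yields only a stationary point; obtaining the globally optimal rank-1 approximation from such iterations is the subject of the paper.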


We propose a novel surrogate modelling approach to efficiently and accurately approximate the response of complex dynamical systems driven by time-varying exogenous excitations over extended time periods. Our approach, manifold nonlinear autoregressive modelling with exogenous input (mNARX), involves constructing a problem-specific exogenous input manifold that is optimal for constructing autoregressive surrogates. The manifold, which forms the core of mNARX, is constructed incrementally by incorporating the physics of the system, as well as prior expert and domain knowledge. Because mNARX decomposes the full problem into a series of smaller sub-problems, each with lower complexity than the original, it scales well with the complexity of the problem, in terms of both the training and evaluation costs of the final surrogate. Furthermore, mNARX synergizes well with traditional dimensionality reduction techniques, making it highly suitable for modelling dynamical systems with high-dimensional exogenous inputs, a class of problems that is typically challenging to solve. Since domain knowledge is particularly abundant in physical systems, such as those found in civil and mechanical engineering, mNARX is well suited for these applications. We demonstrate that mNARX outperforms traditional autoregressive surrogates in predicting the response of a classical coupled spring-mass system excited by a one-dimensional random excitation. Additionally, we show that mNARX is well suited for emulating very high-dimensional time- and state-dependent systems, even when affected by active controllers, by surrogating the dynamics of a realistic aero-servo-elastic onshore wind turbine simulator. In general, our results demonstrate that mNARX offers promising prospects for modelling complex dynamical systems, in terms of both accuracy and efficiency.
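A minimal sketch of a plain linear NARX surrogate may help fix ideas; it deliberately omits the manifold construction that distinguishes mNARX, and the lag orders p, q and function names are our own:

```python
import numpy as np

def narx_fit(x, y, p=2, q=2):
    """Least-squares fit of a linear NARX model
    y_t ~ a_1 y_{t-1} + ... + a_p y_{t-p} + b_1 x_{t-1} + ... + b_q x_{t-q}."""
    rows, targets = [], []
    for t in range(max(p, q), len(y)):
        rows.append(np.concatenate([y[t - p:t][::-1], x[t - q:t][::-1]]))
        targets.append(y[t])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return coef

def narx_predict(coef, x, y0, p=2, q=2):
    """Free-running (autoregressive) prediction from initial conditions y0:
    the surrogate feeds its own past outputs back in at every step."""
    y = list(y0)
    for t in range(p, len(x)):
        feats = np.concatenate([np.array(y[t - p:t])[::-1], x[t - q:t][::-1]])
        y.append(float(feats @ coef))
    return np.array(y)
```

In the free-running mode above, prediction errors can accumulate over long horizons, which is precisely the regime the manifold construction in mNARX is designed to handle.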

Static stability in economic models means negative incentives for deviation from equilibrium strategies, which we expect to assure a return to equilibrium, i.e., dynamic stability, as long as agents respond to incentives. There have been many attempts to prove this link, especially in evolutionary game theory, yielding both negative and positive results. This paper presents a universal and intuitive approach to this link. We prove that static stability assures dynamic stability if agents' choices of switching strategies are rationalizable by introducing costs and constraints in those switching decisions. This idea guides us to define net gains from switches as the payoff improvement after deducting the costs. Under rationalizable dynamics, an agent maximizes the expected net gain subject to the constraints. We prove that the aggregate maximized expected net gain works as a Lyapunov function. It also explains the reasons behind the known negative results. While our analysis here is confined to myopic evolutionary dynamics in population games, our approach is applicable to more complex situations.
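A schematic version of the construction described above, in illustrative notation (the symbols are ours, not the paper's): with $x$ the population state, $\pi_a(x)$ the payoff to strategy $a$, $c_{ab} \ge 0$ the cost of switching from $a$ to $b$, and $B_a(x)$ the feasible set of switches, the aggregate maximized expected net gain takes the form

```latex
\[
  \Lambda(x) \;=\; \sum_{a} x_a \max_{b \in B_a(x)}
  \bigl[\pi_b(x) - \pi_a(x) - c_{ab}\bigr]_+ ,
\]
```

which is nonnegative and vanishes exactly when no agent has a profitable net-of-cost switch; this is the quantity that serves as the candidate Lyapunov function under rationalizable dynamics.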

We present a discontinuous Galerkin method for moist atmospheric dynamics, with and without warm rain. By considering a combined density for water vapour and cloud water, we avoid the need to model and compute a source term for condensation. We recover the vapour and cloud densities by solving a pointwise non-linear problem at each time step. Consequently, we enforce the requirement for the water vapour not to be supersaturated implicitly. Together with an explicit time-stepping scheme, the method is highly parallelisable and can utilise high-performance computing hardware. Furthermore, the discretisation works on structured and unstructured meshes in two and three spatial dimensions. We illustrate the performance of our approach using several test cases in two and three spatial dimensions. In the case of a smooth, exact solution, we illustrate the optimal higher-order convergence rates of the method.
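The pointwise recovery step can be sketched as follows, under an illustrative Tetens-type saturation curve and a constant latent heat; the constants and the bisection formulation are our assumptions, not the paper's discretisation:

```python
import math

def q_sat(T):
    """Illustrative Tetens-type saturation specific humidity (kg/kg);
    the constants are textbook values, not the paper's formulation."""
    e_s = 610.78 * math.exp(17.27 * (T - 273.15) / (T - 35.85))
    return 0.622 * e_s / 1.0e5

def saturation_split(q_t, T0, L=2.5e6, cp=1004.0, iters=60):
    """Pointwise nonlinear recovery: split the combined water density q_t
    into vapour q_v and cloud q_c so that vapour never exceeds saturation
    at the adjusted temperature T0 + (L/cp)*q_c (condensation releases
    latent heat). The residual is monotone in q_c, so bisection is robust."""
    if q_t <= q_sat(T0):
        return q_t, 0.0                  # unsaturated: all water is vapour
    lo, hi = 0.0, q_t                    # bisect on the cloud water q_c
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if q_t - mid > q_sat(T0 + (L / cp) * mid):
            lo = mid                     # still supersaturated: condense more
        else:
            hi = mid
    q_c = 0.5 * (lo + hi)
    return q_t - q_c, q_c
```

Because the split is computed independently at every quadrature point, this step does not couple neighbouring cells, which is consistent with the parallelisability noted above.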

Markov proved that there exists an unrecognizable 4-manifold, that is, a 4-manifold for which the homeomorphism problem is undecidable. In this paper we consider the question of how close we can get to S^4 with an unrecognizable manifold. One of our achievements is that we show a way to remove the so-called Markov trick from the proof of existence of such a manifold. This trick contributes to the complexity of the resulting manifold. We also show how to decrease the deficiency (or the number of relations) in the so-called Adian-Rabin set, which is another ingredient that contributes to the complexity of the resulting manifold. Altogether, our approach allows us to show that the connected sum #_9(S^2 x S^2) is unrecognizable, while the previous best result, due to Gordon, is the unrecognizability of #_12(S^2 x S^2).

A recent body of work has demonstrated that Transformer embeddings can be linearly decomposed into well-defined sums of factors, which can in turn be related to specific network inputs or components. There is, however, still a dearth of work studying whether these mathematical reformulations are empirically meaningful. In the present work, we study representations from machine-translation decoders using two such embedding decomposition methods. Our results indicate that, while decomposition-derived indicators effectively correlate with model performance, variation across different runs suggests a more nuanced take on this question. The high variability of our measurements indicates that geometry reflects model-specific characteristics more than it does sentence-specific computations, and that similar training conditions do not guarantee similar vector spaces.
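A toy example of such a linear decomposition (a simplified residual block with a single attention head and no LayerNorm, which the actual decomposition methods must treat specially; all weights here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 8                         # sequence length, model width
x = rng.normal(size=(T, d))         # input embeddings
Wq, Wk, Wv, Wo = (0.1 * rng.normal(size=(d, d)) for _ in range(4))
W1 = 0.1 * rng.normal(size=(d, 4 * d))
W2 = 0.1 * rng.normal(size=(4 * d, d))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# one residual block: y = x + Attn(x) + FFN(x + Attn(x))
attn_term = softmax((x @ Wq) @ (x @ Wk).T / np.sqrt(d)) @ (x @ Wv) @ Wo
h = x + attn_term
ffn_term = np.maximum(h @ W1, 0.0) @ W2      # ReLU feed-forward

y = h + ffn_term
# the output embedding is exactly a sum of three well-defined factors:
# the input, the attention contribution, and the feed-forward contribution
assert np.allclose(y, x + attn_term + ffn_term)
```

The decomposition is exact here by construction; the empirical question studied in the paper is whether such factors carry stable, interpretable signal across training runs.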

Bayesian cross-validation (CV) is a popular method for predictive model assessment that is simple to implement and broadly applicable. A wide range of CV schemes is available for time series applications, including generic leave-one-out (LOO) and K-fold methods, as well as specialized approaches intended to deal with serial dependence such as leave-future-out (LFO), h-block, and hv-block. Existing large-sample results show that both specialized and generic methods are applicable to models of serially-dependent data. However, large sample consistency results overlook the impact of sampling variability on accuracy in finite samples. Moreover, the accuracy of a CV scheme depends on many aspects of the procedure. We show that poor design choices can lead to elevated rates of adverse selection. In this paper, we consider the problem of identifying the regression component of an important class of models of data with serial dependence, autoregressions of order p with q exogenous regressors (ARX(p,q)), under the logarithmic scoring rule. We show that when serial dependence is present, scores computed using the joint (multivariate) density have lower variance and better model selection accuracy than the popular pointwise estimator. In addition, we present a detailed case study of the special case of ARX models with fixed autoregressive structure and variance. For this class, we derive the finite-sample distribution of the CV estimators and the model selection statistic. We conclude with recommendations for practitioners.
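The distinction between joint and pointwise log scores can be illustrated on an AR(1) with known parameters (a stripped-down version of the ARX setting: no exogenous regressors and no posterior averaging, so the quantities below are illustrative rather than the paper's estimators):

```python
import numpy as np

phi, sigma = 0.8, 1.0
T, h = 200, 5                       # series length, held-out horizon
rng = np.random.default_rng(1)

y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + sigma * rng.normal()
train, test = y[:T - h], y[T - h:]
y_last = train[-1]

def norm_logpdf(x, m, v):
    return -0.5 * (np.log(2 * np.pi * v) + (x - m) ** 2 / v)

# joint log score of the held-out block via the exact Markov factorization
prev = np.concatenate([[y_last], test[:-1]])
joint = norm_logpdf(test, phi * prev, sigma**2).sum()

# the same score from the h-dimensional multivariate normal density
mean = phi ** np.arange(1, h + 1) * y_last
cov = np.array([[sigma**2 * sum(phi ** (j + k - 2 * i)
                                for i in range(1, min(j, k) + 1))
                 for k in range(1, h + 1)] for j in range(1, h + 1)])
r = test - mean
_, logdet = np.linalg.slogdet(cov)
joint_mvn = -0.5 * (h * np.log(2 * np.pi) + logdet
                    + r @ np.linalg.solve(cov, r))
assert np.isclose(joint, joint_mvn)

# the pointwise estimator scores each held-out point given only the
# training data, ignoring the dependence between held-out points
pointwise = sum(norm_logpdf(test[k - 1], phi**k * y_last,
                            sigma**2 * sum(phi ** (2 * j) for j in range(k)))
                for k in range(1, h + 1))
```

The two estimators target genuinely different quantities whenever serial dependence is present, which is the source of the variance and selection-accuracy gap discussed above.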

Generating competitive strategies and performing continuous motion planning simultaneously in an adversarial setting is a challenging problem. In addition, understanding the intent of other agents is crucial to deploying autonomous systems in adversarial multi-agent environments. Existing approaches either discretize agent actions by grouping similar control inputs, sacrificing performance in motion planning, or plan in uninterpretable latent spaces, producing hard-to-understand agent behaviors. Furthermore, the most popular policy optimization frameworks do not recognize the long-term effect of actions and become myopic. This paper proposes an agent action discretization method via abstraction that provides clear intentions of agent actions, an efficient offline pipeline for agent population synthesis, and a planning strategy using counterfactual regret minimization with function approximation. Finally, we experimentally validate our findings on scaled autonomous vehicles in a head-to-head racing setting. We demonstrate that using the proposed framework significantly improves learning, raises the win rate against different opponents, and yields improvements that transfer to unseen opponents in an unseen environment.
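In its simplest tabular form, counterfactual regret minimization reduces to regret matching; the sketch below (a standard textbook construction, not the paper's function-approximation variant) runs self-play on rock-paper-scissors:

```python
import numpy as np

def regret_matching(payoff, iters=10000):
    """Self-play regret matching on a two-player zero-sum matrix game.
    The time-averaged strategies converge to a Nash equilibrium; this
    update is the core of (tabular) counterfactual regret minimization."""
    m, n = payoff.shape
    r1, r2 = np.zeros(m), np.zeros(n)
    s1_sum, s2_sum = np.zeros(m), np.zeros(n)
    for _ in range(iters):
        p1 = np.maximum(r1, 0.0)
        s1 = p1 / p1.sum() if p1.sum() > 0 else np.full(m, 1.0 / m)
        p2 = np.maximum(r2, 0.0)
        s2 = p2 / p2.sum() if p2.sum() > 0 else np.full(n, 1.0 / n)
        u1 = payoff @ s2              # row player's per-action values
        u2 = -(s1 @ payoff)           # column player's (zero-sum) values
        r1 += u1 - s1 @ u1            # accumulate instantaneous regrets
        r2 += u2 - s2 @ u2
        s1_sum += s1
        s2_sum += s2
    return s1_sum / iters, s2_sum / iters
```

On rock-paper-scissors the averaged strategies approach the uniform equilibrium; scaling this idea to continuous racing actions is what the abstraction and function approximation in the paper provide.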

We revisit the problem of estimating an unknown parameter of a pure quantum state, and investigate `null-measurement' strategies in which the experimenter aims to measure in a basis that contains a vector close to the true system state. Such strategies are known to approach the quantum Fisher information for models where the quantum Cram\'{e}r-Rao bound is achievable, but a detailed adaptive strategy for achieving the bound in the multi-copy setting has been lacking. We first show that the following naive null-measurement implementation fails to attain even the standard estimation scaling: estimate the parameter on a small sub-sample, and apply the null-measurement corresponding to the estimated value on the rest of the systems. This is due to non-identifiability issues specific to null-measurements, which arise when the true and reference parameters are close to each other. To avoid this, we propose the alternative displaced-null measurement strategy in which the reference parameter is altered by a small amount which is sufficient to ensure parameter identifiability. We use this strategy to devise asymptotically optimal measurements for models where the quantum Cram\'{e}r-Rao bound is achievable. More generally, we extend the method to arbitrary multi-parameter models and prove the asymptotic achievability of the Holevo bound. An important tool in our analysis is the theory of quantum local asymptotic normality, which provides a clear intuition about the design of the proposed estimators and shows that they have asymptotically normal distributions.
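The identifiability issue can be seen in a one-qubit toy model (our illustration, not the paper's general setting): for the phase family |psi(theta)> = (|0> + e^{i theta}|1>)/sqrt(2), the probability of the non-null outcome when measuring in a basis containing |psi(ref)> is sin^2((theta - ref)/2), which is symmetric about ref:

```python
import numpy as np

def p_click(theta, ref):
    """Probability of the non-null outcome when the qubit phase state
    |psi(theta)> = (|0> + e^{i*theta}|1>)/sqrt(2) is measured in a basis
    containing |psi(ref)>: p = sin^2((theta - ref)/2)."""
    return np.sin((theta - ref) / 2.0) ** 2

theta0, eps, delta = 0.7, 0.01, 0.1

# exact null measurement (reference = estimated value): outcomes at
# theta0 + eps and theta0 - eps coincide, so the sign of the error
# cannot be identified from the statistics
assert np.isclose(p_click(theta0 + eps, theta0),
                  p_click(theta0 - eps, theta0))

# displaced null (reference = theta0 + delta): the two probabilities now
# differ, restoring local identifiability around theta0
assert not np.isclose(p_click(theta0 + eps, theta0 + delta),
                      p_click(theta0 - eps, theta0 + delta))
```

Displacing the reference makes the outcome probability locally linear rather than quadratic in the error, which is the mechanism exploited by the displaced-null strategy.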

Sociodemographic inequalities in student achievement are a persistent concern for education systems and are increasingly recognized to be intersectional. Intersectionality considers the multidimensional nature of disadvantage, appreciating the interlocking social determinants which shape individual experience. Intersectional multilevel analysis of individual heterogeneity and discriminatory accuracy (MAIHDA) is a new approach developed in population health but with limited application in educational research. In this study, we introduce and apply this approach to study sociodemographic inequalities in student achievement across two cohorts of students in London, England. We define 144 intersectional strata arising from combinations of student age, gender, free school meal status, special educational needs, and ethnicity. We find substantial strata-level variation in achievement, composed primarily of additive rather than interactive effects, with results stubbornly consistent across the cohorts. We conclude that policymakers should pay greater attention to multiply marginalized students, and that intersectional MAIHDA provides a useful approach to study their experiences.
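A schematic simulation of the variance partition underlying MAIHDA (synthetic data, not the study's; MAIHDA proper fits a multilevel model with strata random effects, whereas this sketch uses a simple moment-based estimate, and the numbers of strata and students are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic data: students nested in intersectional strata, with purely
# additive strata-level effects on achievement
n_strata, n_per = 16, 50
strata_effect = rng.normal(0.0, 0.3, n_strata)   # between-strata differences
scores = np.concatenate(
    [mu + rng.normal(0.0, 1.0, n_per) for mu in strata_effect])
labels = np.repeat(np.arange(n_strata), n_per)

# variance partition: the share of achievement variation lying between
# strata, analogous to the "discriminatory accuracy" quantified in MAIHDA
grand_mean = scores.mean()
strata_means = np.array([scores[labels == s].mean()
                         for s in range(n_strata)])
between = np.mean((strata_means - grand_mean) ** 2)
within = np.mean([scores[labels == s].var() for s in range(n_strata)])
vpc = between / (between + within)
```

A small variance partition coefficient with sizeable strata means, as here, corresponds to the paper's finding that strata differences are real but primarily additive.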

The emergence of complex structures in systems governed by a simple set of rules is among the most fascinating aspects of Nature. A particularly powerful and versatile model for investigating this phenomenon is provided by cellular automata, with the Game of Life being one of the most prominent examples. However, this simplified model can be too limiting as a tool for modelling real systems. To address this, we introduce and study an extended version of the Game of Life, with a dynamical process governing the rule selection at each step. We show that the introduced modification significantly alters the behaviour of the game. We also demonstrate that the choice of the synchronization policy can be used to control the trade-off between stability and growth in the system.
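A minimal sketch of the idea (our own illustrative rule-selection process, here simply alternating between Life and HighLife each step; the paper studies richer selection dynamics and synchronization policies):

```python
import numpy as np

def step(grid, birth, survive):
    """One synchronous update of an outer-totalistic cellular automaton
    in B/S notation, on a toroidal grid."""
    n = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    born = np.isin(n, birth) & (grid == 0)
    stay = np.isin(n, survive) & (grid == 1)
    return (born | stay).astype(int)

rng = np.random.default_rng(0)
grid = (rng.random((32, 32)) < 0.3).astype(int)

# illustrative dynamical rule selection: alternate between Life (B3/S23)
# and HighLife (B36/S23) at every step
rules = [([3], [2, 3]), ([3, 6], [2, 3])]
for t in range(20):
    birth, survive = rules[t % 2]
    grid = step(grid, birth, survive)
```

With a fixed B3/S23 rule the function reproduces the classic Game of Life (a blinker oscillates with period 2); making the rule itself a dynamical variable is the extension studied in the paper.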
