
We consider the problem of estimating a high-dimensional covariance matrix from a small number of observations when covariates on pairs of variables are available and the variables can have spatial structure. This is motivated by a problem arising in demography: estimating the covariance matrix of the total fertility rate (TFR) of 195 different countries when only 11 observations are available. We construct an estimator for high-dimensional covariance matrices that exploits information about pairwise covariates, such as whether pairs of variables belong to the same cluster or the spatial structure of the variables, as well as interactions between the covariates. We reformulate the problem in terms of a mixed effects model. This requires the estimation of only a small number of parameters, which are easy to interpret and can be selected using standard procedures. The estimator is consistent and asymptotically normal under general conditions. It can be used when the mean and variance structure of the data is already specified, or when some of the data are missing. We assess its performance under our model assumptions, as well as under model misspecification, using simulations. We find that it outperforms several popular alternatives. We apply it to the TFR dataset and draw some conclusions.
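
To make the construction concrete, here is one way a mixed effects parameterization with pairwise covariates could look; this is an illustrative sketch under assumed notation, not necessarily the authors' exact specification. With observations $y_{it}$ for variable $i$ at time $t$ and pairwise information encoded through indicators $z_{ik}$ (e.g., $z_{ik} = 1$ if variable $i$ belongs to cluster $k$), shared random effects induce a structured covariance:
$$ y_{it} = \mu_i + \sum_{k=1}^{K} z_{ik}\, u_{kt} + \varepsilon_{it}, \qquad u_{kt} \sim N(0, \sigma_k^2), \quad \varepsilon_{it} \sim N(0, \sigma^2), $$
so that
$$ \operatorname{Cov}(y_{it}, y_{jt}) = \sum_{k=1}^{K} \sigma_k^2\, z_{ik} z_{jk} + \sigma^2\, \mathbf{1}\{i = j\}. $$
Only the $K+1$ variance parameters $\sigma_1^2, \dots, \sigma_K^2, \sigma^2$ need to be estimated, regardless of the number of variables, which is what makes such an approach viable with very few observations.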

Related content

Search and recommendation ecosystems exhibit competition among content creators, which has been modeled in a variety of game-theoretic frameworks. Content creators generate documents with the aim of being recommended by a content ranker for various information needs. For the ecosystem, modeled as a content ranking game, to be effective and maximize user welfare, it should guarantee stability, where stability is associated with the existence of a pure Nash equilibrium in the corresponding game. Moreover, if the content ranking algorithm induces a game in which any best-response learning dynamics of the content creators converge to an equilibrium of high welfare, the system is considered highly attractive. However, since classical content ranking algorithms employed by search and recommendation systems rank documents by their distance to information needs, it has been shown that they fail to provide such stability properties. As a result, novel content ranking algorithms have been devised. In this work, we offer an alternative approach: corpus enrichment with a small set of fixed dummy documents. It turns out that, with the right design, such enrichment can lead to a pure Nash equilibrium, and even to the convergence of any best-response dynamics to a high-welfare result, while still employing the classical/current content ranking approach. We present two such corpus enrichment techniques with tight bounds on the number of documents needed to obtain the desired results. Interestingly, our study is a novel extension of Borel's Colonel Blotto game.
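
As a concrete toy illustration of the setting (not the paper's model or its enrichment constructions), the sketch below runs best-response dynamics in a distance-based ranking game with a few fixed dummy documents added to the corpus; the positions, information needs, and tie-breaking rule are hypothetical choices made only for this example.

```python
import random

# Toy content-ranking game: information needs and documents live on a 1-D grid,
# and the ranker recommends, for each need, the closest document (ties broken
# deterministically). Dummy documents are fixed corpus entries no creator controls.
NEEDS = [0.1, 0.3, 0.5, 0.7, 0.9]           # information needs
STRATEGIES = [i / 10 for i in range(11)]    # positions a creator may choose
DUMMIES = [0.0, 1.0]                        # fixed dummy documents enriching the corpus

def utility(player, profile):
    """Number of needs for which this player's document is ranked first."""
    docs = [(pos, f"p{i}") for i, pos in enumerate(profile)]
    docs += [(pos, f"d{j}") for j, pos in enumerate(DUMMIES)]
    wins = 0
    for need in NEEDS:
        top = min(docs, key=lambda d: (abs(d[0] - need), d[1]))  # distance-based ranking
        wins += top[1] == f"p{player}"
    return wins

def best_response_dynamics(n_players=3, max_rounds=100, seed=0):
    rng = random.Random(seed)
    profile = [rng.choice(STRATEGIES) for _ in range(n_players)]
    for _ in range(max_rounds):
        moved = False
        for p in range(n_players):
            current = utility(p, profile)
            best = max(STRATEGIES, key=lambda s: utility(p, profile[:p] + [s] + profile[p + 1:]))
            if utility(p, profile[:p] + [best] + profile[p + 1:]) > current:
                profile[p], moved = best, True
        if not moved:        # no profitable deviation: a pure Nash equilibrium
            return profile
    return None              # dynamics did not converge within the budget

print(best_response_dynamics())
```

If the loop exits with no profitable deviation, the returned profile is a pure Nash equilibrium of the toy game; otherwise the dynamics did not converge within the iteration budget.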

Blind estimation of intersymbol interference channels based on the Baum-Welch (BW) algorithm, a specific implementation of the expectation-maximization (EM) algorithm for training hidden Markov models, is robust and does not require labeled data. However, it is known for its high computational cost, slow convergence, and tendency to converge to local maxima. In this paper, we modify the trellis structure of the BW algorithm by associating the channel parameters with two consecutive states. This modification allows us to halve the number of required states while maintaining the same performance. Moreover, to improve the convergence rate and the estimation performance, we construct a joint turbo-BW-equalization system that exploits the extrinsic information produced by the turbo decoder to refine the BW-based estimator at each EM iteration. Our experiments demonstrate that the joint system achieves convergence in 10 EM iterations, which is 8 iterations fewer than a separate system design, for a signal-to-noise ratio (SNR) of 4 dB. Additionally, the joint system provides improved estimation accuracy, with a mean square error (MSE) of $10^{-4}$ at an SNR of 6 dB. We also identify scenarios where a joint design is not preferable, especially when the channel is noisy (e.g., SNR = 2 dB) and the decoder cannot provide reliable extrinsic information for the BW-based estimator.
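
For readers unfamiliar with the baseline, the following is a minimal, generic Baum-Welch re-estimation for a discrete-observation HMM in NumPy. It is only a sketch of the standard EM recursion; it does not reproduce the paper's modified ISI trellis, the halved state set, or the turbo-decoder feedback, and all sizes are illustrative.

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=10, seed=0):
    """Standard Baum-Welch (EM) re-estimation for a discrete HMM (illustrative)."""
    obs = np.asarray(obs)
    rng = np.random.default_rng(seed)
    # Random row-stochastic initial guesses: transitions A, emissions B, prior pi.
    A = rng.random((n_states, n_states)); A /= A.sum(axis=1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(axis=1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)
    T = len(obs)

    for _ in range(n_iter):
        # E-step: scaled forward/backward recursions.
        alpha = np.zeros((T, n_states)); scale = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]; scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
        beta = np.zeros((T, n_states)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)      # state posteriors
        xi = np.zeros((n_states, n_states))            # expected transition counts
        for t in range(T - 1):
            x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
            xi += x / x.sum()
        # M-step: re-estimate the parameters from the expected counts.
        pi = gamma[0]
        A = xi / gamma[:-1].sum(axis=0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
    return A, B, pi

# Hypothetical usage with a short synthetic symbol sequence.
A, B, pi = baum_welch([0, 1, 1, 0, 2, 1, 0, 0, 2, 1], n_states=2, n_symbols=3)
```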

The biological brain has inspired multiple advances in machine learning. However, most state-of-the-art models in computer vision do not operate like the human brain, simply because they are not capable of changing or improving their decisions/outputs based on a deeper analysis: the brain is recurrent, while these models are not. It is therefore relevant to explore the impact of adding recurrent mechanisms to existing state-of-the-art architectures and to ask whether recurrency can improve them. To this end, we build on a feed-forward segmentation model and explore multiple types of recurrency for image segmentation. We explore self-organizing, relational, and memory retrieval types of recurrency that minimize a specific energy function. In our experiments, we tested these models on artificial and medical imaging data, analyzing the impact of high levels of noise and of few-shot learning settings. Our results do not validate our initial hypothesis that recurrent models should perform better in these settings, suggesting that these recurrent architectures, by themselves, are not sufficient to surpass state-of-the-art feed-forward versions and that additional work is needed on the topic.
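
As a sketch of what "adding recurrency" to a feed-forward segmentation network can mean in practice, the snippet below wraps an arbitrary backbone in a simple convolutional refinement loop. The module names and the update rule are hypothetical and much simpler than the self-organizing, relational, and memory-retrieval mechanisms studied in the paper.

```python
import torch
import torch.nn as nn

class RecurrentRefiner(nn.Module):
    """Wrap a feed-forward segmentation backbone with a recurrent refinement loop
    that repeatedly re-reads its own prediction (illustrative design only)."""

    def __init__(self, backbone: nn.Module, n_classes: int, hidden: int = 32, steps: int = 3):
        super().__init__()
        self.backbone = backbone   # any feed-forward net producing `hidden` feature maps
        self.steps = steps
        self.update = nn.Conv2d(hidden + n_classes, hidden, kernel_size=3, padding=1)
        self.readout = nn.Conv2d(hidden, n_classes, kernel_size=1)

    def forward(self, x):
        h = self.backbone(x)                  # initial feed-forward features
        logits = self.readout(h)
        for _ in range(self.steps):           # recurrent refinement of the prediction
            feedback = torch.cat([h, logits.softmax(dim=1)], dim=1)
            h = torch.tanh(self.update(feedback))
            logits = self.readout(h)
        return logits

# Hypothetical usage with a one-layer backbone on a single-channel image.
model = RecurrentRefiner(nn.Conv2d(1, 32, 3, padding=1), n_classes=2)
out = model(torch.randn(1, 1, 64, 64))        # -> (1, 2, 64, 64) logits
```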

Neural operators, which learn the map from input sequences of observed samples to predicted values, effectively solve PDE problems from data without knowing the explicit equations. Most existing works build the model in the original geometric space, leading to high computational costs when the number of sample points is large. We present the Latent Neural Operator (LNO), which solves PDEs in a latent space. In particular, we first propose Physics-Cross-Attention (PhCA), which transforms representations from the geometric space to the latent space; we then learn the operator in the latent space and finally recover the real-world geometric space via the inverse PhCA map. Our model retains the flexibility to decode values at any position, not limited to locations defined in the training set, and can therefore naturally perform interpolation and extrapolation tasks that are particularly useful for inverse problems. Moreover, the proposed LNO improves both prediction accuracy and computational efficiency. Experiments show that LNO reduces GPU memory usage by 50%, speeds up training by 1.8 times, and reaches state-of-the-art accuracy on four out of six benchmarks for forward problems and on a benchmark for the inverse problem. Code is available at //github.com/L-I-M-I-T/LatentNeuralOperator.
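
The sketch below illustrates the general encode-process-decode pattern described above: learned latent tokens cross-attend to the observed point samples, the operator acts in the latent space, and query coordinates cross-attend back to the latent tokens. Layer names, sizes, and the exact attention layout are assumptions for illustration and are not taken from the LNO codebase.

```python
import torch
import torch.nn as nn

class LatentOperatorSketch(nn.Module):
    """Encode point samples into a fixed latent space with cross-attention,
    apply the learned operator there, and decode at arbitrary query locations."""

    def __init__(self, in_dim=1, coord_dim=2, d_model=64, n_latent=32, n_heads=4):
        super().__init__()
        self.latent = nn.Parameter(torch.randn(n_latent, d_model))       # learned latent tokens
        self.embed_in = nn.Linear(coord_dim + in_dim, d_model)           # (x, u(x)) -> token
        self.embed_q = nn.Linear(coord_dim, d_model)                     # query location -> token
        self.enc_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.dec_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.operator = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                      nn.Linear(d_model, d_model))
        self.head = nn.Linear(d_model, 1)

    def forward(self, coords, values, query_coords):
        # coords: (B, N, coord_dim), values: (B, N, in_dim), query_coords: (B, M, coord_dim)
        tokens = self.embed_in(torch.cat([coords, values], dim=-1))
        latent = self.latent.expand(coords.shape[0], -1, -1)
        z, _ = self.enc_attn(latent, tokens, tokens)    # geometric space -> latent space
        z = z + self.operator(z)                        # learn the operator in the latent space
        q = self.embed_q(query_coords)
        out, _ = self.dec_attn(q, z, z)                 # latent space -> arbitrary query locations
        return self.head(out)                           # predicted field values at query_coords

# Hypothetical usage: 256 observed samples, predictions at 512 unseen locations.
model = LatentOperatorSketch()
pred = model(torch.rand(4, 256, 2), torch.rand(4, 256, 1), torch.rand(4, 512, 2))  # (4, 512, 1)
```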

In this work we explore the fidelity of numerical approximations to continuous spectra of hyperbolic partial differential equation systems with variable coefficients. We are particularly interested in the ability of discrete methods to accurately discover sources of physical instabilities. Focusing on the perturbed equations that arise in linearized problems, we apply high-order accurate summation-by-parts finite difference operators, with weak enforcement of boundary conditions through the simultaneous-approximation-term technique, which leads to a provably stable numerical discretization with formal order of accuracy p = 2, 3, 4, and 5. We derive analytic solutions using Laplace transform methods, which provide important ground truth for ensuring numerical convergence at the correct theoretical rate. We find that the continuous spectrum is better captured with mesh refinement, although dissipative strict stability (where the growth rate of the discrete problem is bounded above by that of the continuous problem) is not obtained. We also find that sole reliance on mesh refinement can be a problematic means of determining physical growth rates, as some eigenvalues emerge (and persist under mesh refinement) depending on the spatial order of accuracy yet are non-physical. We suggest that numerical methods be used to approximate discrete spectra when numerical stability is guaranteed and convergence of the discrete spectra is evident under both mesh refinement and increasing order of accuracy.
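
As a minimal illustration of the kind of discrete spectrum being studied, the snippet below assembles the classical second-order SBP operator with an SAT inflow penalty for the constant-coefficient advection equation and inspects its eigenvalues under mesh refinement. It is only a toy stand-in for the paper's variable-coefficient systems and higher orders p, with all details assumed for illustration.

```python
import numpy as np

def sbp_advection_matrix(n, a=1.0):
    """Second-order SBP first-derivative operator with an SAT inflow term for
    u_t + a u_x = 0 on [0, 1] (toy constant-coefficient model problem)."""
    h = 1.0 / n
    # SBP norm matrix H and almost-skew-symmetric Q, with D = H^{-1} Q.
    H = h * np.diag([0.5] + [1.0] * (n - 1) + [0.5])
    Q = 0.5 * (np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5
    D = np.linalg.solve(H, Q)
    # SAT weakly enforces the inflow boundary condition u(0, t) = 0.
    e0 = np.zeros(n + 1); e0[0] = 1.0
    sat = -a * np.linalg.solve(H, np.outer(e0, e0))
    return -a * D + sat

for n in (40, 80, 160):
    eigs = np.linalg.eigvals(sbp_advection_matrix(n))
    print(n, "max Re(lambda) =", eigs.real.max())   # should stay <= ~0 for this stable scheme
```

For this toy problem the energy method guarantees that the semi-discrete operator is dissipative in the SBP norm, so all eigenvalues have non-positive real parts; it is against properties of this kind that the computed discrete spectra are checked as the mesh is refined.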

The ability to engage in other activities during the ride is considered by consumers as one of the key reasons for adopting automated vehicles. However, engagement in non-driving activities provokes occupants' motion sickness, deteriorating their overall comfort and thereby risking the acceptance of automated driving. It is therefore critical to extend our understanding of motion sickness and unravel the factors that modulate it through experiments with participants. Currently, most experiments are conducted on public roads (realistic but not reproducible) or test tracks (feasible with prototype automated vehicles). This study develops a method to design an optimal path and speed reference that efficiently replicates on-road motion sickness exposure on a small test track. The method uses model predictive control to replicate, on a test track of 70 m by 175 m, the longitudinal and lateral accelerations collected from on-road drives. A within-subject experiment (47 participants) was conducted comparing the occupants' motion sickness occurrence in test-track and on-road conditions, with the conditions being cross-randomized. The results show no difference and no effect of the condition on the average motion sickness occurrence across participants. Meanwhile, there is an overall correspondence of individual sickness levels between the on-road and test-track conditions. This paves the way for employing our method for a simpler, safer, and more replicable assessment of motion sickness.
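
A minimal sketch of the underlying idea (a single-horizon trajectory optimization rather than the authors' full model-predictive controller): choose accelerations that track a recorded on-road acceleration reference while the resulting position stays inside a 70 m by 175 m track. The reference signal, initial state, limits, and solver are all hypothetical.

```python
import cvxpy as cp
import numpy as np

T, dt = 100, 0.1
t_grid = np.linspace(0, T * dt, T)
a_ref = np.column_stack([np.sin(2 * np.pi * 0.2 * t_grid),         # hypothetical lateral reference [m/s^2]
                         0.5 * np.cos(2 * np.pi * 0.1 * t_grid)])  # hypothetical longitudinal reference [m/s^2]

p = cp.Variable((T + 1, 2))   # position [m]
v = cp.Variable((T + 1, 2))   # velocity [m/s]
a = cp.Variable((T, 2))       # commanded acceleration [m/s^2]

constraints = [p[0] == np.array([10.0, 10.0]), v[0] == np.array([0.0, 5.0])]
for t in range(T):
    constraints += [p[t + 1] == p[t] + dt * v[t],   # forward-Euler point-mass kinematics
                    v[t + 1] == v[t] + dt * a[t]]
constraints += [p[:, 0] >= 0, p[:, 0] <= 70,        # stay inside the 70 m x 175 m track
                p[:, 1] >= 0, p[:, 1] <= 175,
                cp.abs(a) <= 4]                     # actuation / comfort limit

problem = cp.Problem(cp.Minimize(cp.sum_squares(a - a_ref)), constraints)
problem.solve()
print("acceleration tracking cost:", problem.value)
```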

The criticality problem in nuclear engineering asks for the principal eigenpair of a Boltzmann operator describing neutron transport in a reactor core. Being able to reliably design and control such reactors requires assessing these quantities within quantifiable accuracy tolerances. In this paper we propose a paradigm that deviates from the common practice of approximately solving the corresponding spectral problem with a fixed, presumably sufficiently fine discretization. Instead, the present approach is based on first contriving iterative schemes, formulated in function space, that are shown to converge at a quantitative rate without assuming any a priori excess regularity properties, and that exploit only properties of the optical parameters in the underlying radiative transfer model. We develop the analytical and numerical tools for approximately realizing each iteration step within judiciously chosen accuracy tolerances, verified by a posteriori estimates, so as to still warrant quantifiable convergence to the exact eigenpair. This is first carried out in full for a Newton scheme. Since the Newton scheme is only locally convergent, we additionally analyze the convergence of a power iteration in function space to produce sufficiently accurate initial guesses. Here we have to deal with intrinsic difficulties posed by compact but unsymmetric operators, which prevent standard arguments used in the finite-dimensional case. Our main point is that we can avoid any condition requiring an initial guess to already lie in a small neighborhood of the exact solution. We close with a discussion of remaining intrinsic obstructions to a certifiable numerical implementation, mainly related to not knowing the gap between the principal eigenvalue and the next smaller one in modulus.
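
As a finite-dimensional analogue of the power iteration discussed above (the function-space analysis and the Newton scheme are, of course, far more delicate), the sketch below approximates the principal eigenpair of a nonsymmetric positive matrix, whose Perron-Frobenius structure loosely mimics the positivity of the transport operator.

```python
import numpy as np

def power_iteration(K, tol=1e-10, max_iter=10_000, seed=0):
    """Approximate the principal eigenpair of a (generally nonsymmetric) matrix K."""
    rng = np.random.default_rng(seed)
    x = rng.random(K.shape[0]); x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(max_iter):
        y = K @ x
        lam_new = x @ y                      # Rayleigh-quotient estimate of the eigenvalue
        x_new = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, x_new
        x, lam = x_new, lam_new
    return lam, x

# Toy usage: a random nonnegative matrix has a real, simple principal eigenvalue.
K = np.random.default_rng(1).random((50, 50))
lam, x = power_iteration(K)
print(lam, np.linalg.norm(K @ x - lam * x))   # eigenvalue and residual of the eigenpair
```

The convergence rate of such an iteration is governed by the gap between the principal eigenvalue and the next one in modulus, which is precisely the quantity the closing discussion identifies as unknown in the transport setting.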

Background: Various factors determine analyst effectiveness during elicitation. While the literature suggests that elicitation technique and time are influential factors, other attributes could also play a role. Aim: Determine aspects that may influence analysts' ability to identify certain elements of the problem domain. Methodology: We conducted 14 quasi-experiments in which 134 subjects were asked about two problem domains. For each problem domain, we calculated whether the experimental subjects identified the problem domain elements (concepts, processes, and requirements), i.e., the degree to which these domain elements were visible. Results: Domain element visibility does not appear to be related to either analyst experience or analyst-client interaction. Domain element visibility depends on how analysts provide the elicited information: visibility increases dramatically when analysts are asked directly about the knowledge acquired during elicitation, compared to the information they provide in a written report. Conclusions: Further research is required to replicate our results. However, the finding that analysts have difficulty reporting the information they have acquired is useful for identifying alternatives for improving the documentation of elicitation results. We also found evidence that other issues, such as domain complexity, the relative importance of different elements within the domain, and the interview script, seem influential.

The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, is a huge challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches and, ultimately, to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-driven models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from, or the same as, the traditional approach? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods for learning causality and causal relations, along with the connections between causality and machine learning. This work points out, on a case-by-case basis, how big data facilitates, complicates, or motivates each approach.
