
We investigate analytically the behaviour of the penalized maximum partial likelihood estimator (PMPLE). Our results are derived for a generic separable regularization, but we focus on the elastic net. This penalization is routinely adopted for survival analysis in the high-dimensional regime, where the maximum partial likelihood estimator (with no regularization) might not even exist. Previous theoretical results require that the number $s$ of non-zero association coefficients is $O(n^{\alpha})$, with $\alpha \in (0,1)$ and $n$ the sample size. Here we accurately characterize the behaviour of the PMPLE when $s$ is proportional to $n$, via the solution of a system of six non-linear equations that can be easily obtained by fixed-point iteration. These equations are derived by means of the replica method, under the assumption that the covariates $\mathbf{X}\in \mathbb{R}^p$ follow a multivariate Gaussian law with covariance $\mathbf{I}_p/p$. Their solution allows us to investigate how various metrics of interest depend on the ratio $\zeta = p/n$, the fraction of true active components $\nu = s/p$, and the regularization strength. We validate our results by extensive numerical simulations.
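
As a concrete illustration of the setting, the hedged sketch below simulates covariates with covariance $\mathbf{I}_p/p$, draws survival times from a Cox model, and computes an elastic-net PMPLE with a simple proximal-gradient loop. All numerical values (sample size, penalty, step size) are illustrative assumptions, and this is a generic solver, not the replica-based characterization developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate the setting of the abstract (all sizes and constants are illustrative assumptions) ---
n, p, s = 200, 100, 20                        # sample size, dimension, number of true active components
X = rng.normal(size=(n, p)) / np.sqrt(p)      # covariates with covariance I_p / p
beta_true = np.zeros(p)
beta_true[:s] = 1.0                           # s non-zero association coefficients
T = rng.exponential(1.0 / np.exp(X @ beta_true))      # Cox model with exponential baseline hazard
C = rng.exponential(2.0 * np.median(T), size=n)       # independent censoring
time, event = np.minimum(T, C), (T <= C).astype(float)

# --- Negative log partial likelihood and its gradient (Breslow form, ties ignored) ---
order = np.argsort(-time)                     # sort by decreasing time so each risk set is a prefix
Xs, es = X[order], event[order]

def neg_log_pl_grad(beta):
    eta = Xs @ beta
    w = np.exp(eta)
    cw = np.cumsum(w)                         # sum of exp(eta) over each risk set
    cxw = np.cumsum(Xs * w[:, None], axis=0)  # sum of x * exp(eta) over each risk set
    nll = -np.sum(es * (eta - np.log(cw)))
    grad = -(Xs - cxw / cw[:, None]).T @ es
    return nll, grad

# --- Proximal gradient (ISTA) for the elastic-net penalized estimator ---
lam, alpha, step = 0.05, 0.7, 0.1             # regularization strength, L1 mixing, step size (assumed)
beta = np.zeros(p)
for _ in range(3000):
    _, g = neg_log_pl_grad(beta)
    g += lam * (1.0 - alpha) * beta           # smooth ridge part of the elastic net
    z = beta - step * g
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam * alpha, 0.0)  # soft-threshold for the L1 part

print("non-zero estimated coefficients:", int(np.sum(beta != 0)), "out of", s, "true actives")
```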

Related content

Robot-assisted fruit harvesting has been a critical research direction supporting sustainable crop production. One important determinant of system behavior and efficiency is the end-effector, which comes in direct contact with the crop during harvesting and directly affects harvesting success. Harvesting avocados poses unique challenges not addressed by existing end-effectors: the fruit have uneven surfaces and irregular shapes, grow on thick peduncles, and have a sturdy calyx attached. The work reported in this paper contributes a new end-effector design suitable for avocado picking. A rigid system design with a two-stage rotational motion is developed, to first grasp the avocado and then detach it from its peduncle. A force analysis is conducted to determine key design parameters. Preliminary experiments demonstrate the efficiency of the developed end-effector in picking an avocado by applying a moment from a specific viewpoint (as compared to pulling it directly), and in-lab experiments show that the end-effector can grasp and retrieve avocados with a 100% success rate.

An unexpected failure of a concrete gravity dam may cause unimaginable human suffering and massive economic losses. Earthquakes are the main factor contributing to concrete gravity dam failures. In recent years, there has been a rise in efforts globally to make dams safe under dynamic loading. Numerical modeling of dams under earthquake loading yields substantial insights into their fracture and damage progression. In the present work, a particle-based computational framework is developed to investigate the failure of the Koyna dam, a concrete gravity dam in India exposed to dynamic loading; the dam-foundation system is considered. The numerically obtained crack patterns in the concrete dam are compared with the available experimental results, and the two are found to be consistent.

We address the problem of the best uniform approximation of a continuous function on a convex domain. The approximation is by linear combinations of a finite system of functions (not necessarily Chebyshev) under arbitrary linear constraints. By modifying the concept of alternance and the Remez iterative procedure, we present a method that demonstrates its efficiency in numerical problems. A linear rate of convergence is proved under some favourable assumptions. Special attention is paid to systems of complex exponentials, Gaussian functions, and lacunary algebraic and trigonometric polynomials. Applications to signal processing, linear ODEs, switching dynamical systems, and Markov-Bernstein type inequalities are considered.
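
For orientation, below is a minimal sketch of the classical Remez exchange for best uniform polynomial approximation on an interval. It covers only the unconstrained, Chebyshev-system special case, not the modified alternance method and constrained systems of the paper; the grid size and iteration count are arbitrary assumptions.

```python
import numpy as np

def remez_poly(f, a, b, deg, iters=20, grid=4000):
    """Classical Remez exchange: best uniform degree-`deg` polynomial approximation of f on [a, b]."""
    xs = np.linspace(a, b, grid)
    k = np.arange(deg + 2)
    # initial reference: Chebyshev-like points mapped to [a, b]
    ref = np.sort(0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * k / (deg + 1)))
    for _ in range(iters):
        # solve p(x_i) + (-1)^i E = f(x_i) for the coefficients of p and the levelled error E
        V = np.vander(ref, deg + 1, increasing=True)
        A = np.hstack([V, ((-1.0) ** k)[:, None]])
        sol = np.linalg.solve(A, f(ref))
        c, E = sol[:-1], sol[-1]
        # exchange step: in each segment between sign changes of the error, keep the |error| maximiser
        err = f(xs) - np.polyval(c[::-1], xs)
        cuts = np.where(np.diff(np.sign(err)) != 0)[0] + 1
        edges = np.concatenate(([0], cuts, [len(xs)]))
        new_ref = [xs[s + np.argmax(np.abs(err[s:e]))] for s, e in zip(edges[:-1], edges[1:])]
        if len(new_ref) != deg + 2:   # alternation pattern not clean yet; stop with current reference
            break
        ref = np.array(new_ref)
    return c, abs(E)

coef, err_level = remez_poly(np.exp, -1.0, 1.0, deg=4)
print("levelled (equioscillation) error:", err_level)
```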

A functional nonlinear regression approach, incorporating time information in the covariates, is proposed for the analysis of temporally strongly correlated sequences of manifold-supported functional data. Specifically, the functional regression parameters are supported on a connected and compact two-point homogeneous space. The Generalized Least-Squares (GLS) parameter estimator is computed in the linearized model, whose error term displays manifold-scale-varying Long Range Dependence (LRD). The performance of the theoretical and plug-in nonlinear regression predictors is illustrated by simulations on the sphere, in terms of the empirical mean of the computed spherical functional absolute errors. When the second-order structure of the functional error term in the linearized model is unknown, it is estimated by minimum contrast in the functional spectral domain. The linear case is illustrated in the Supplementary Material, revealing the effect of the slow temporal decay of the trace norms of the covariance operator family of the LRD regression error term. The purely spatial statistical analysis of atmospheric pressure at high cloud bottom and downward solar radiation flux in Alegria et al. (2021) is extended to the spatiotemporal context, illustrating the numerical results on a generated synthetic data set.
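
For reference, in the familiar finite-dimensional Euclidean case the GLS estimator mentioned above takes the standard form below; the paper works with functional, manifold-supported analogues of these objects, so this is only a reminder of the underlying formula, with $\boldsymbol{\Sigma}$ the covariance of the (here, LRD) error term.

$$
\widehat{\boldsymbol{\beta}}_{\mathrm{GLS}} = \left(\mathbf{X}^{\top}\boldsymbol{\Sigma}^{-1}\mathbf{X}\right)^{-1}\mathbf{X}^{\top}\boldsymbol{\Sigma}^{-1}\mathbf{y},
\qquad
\operatorname{Cov}\!\left(\widehat{\boldsymbol{\beta}}_{\mathrm{GLS}}\right) = \left(\mathbf{X}^{\top}\boldsymbol{\Sigma}^{-1}\mathbf{X}\right)^{-1}.
$$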

An essential problem in statistics and machine learning is the estimation of expectations involving PDFs with intractable normalizing constants. The self-normalized importance sampling (SNIS) estimator, which normalizes the importance sampling (IS) weights, has become the standard approach due to its simplicity. However, SNIS has been shown to exhibit high variance in challenging estimation problems, e.g., those involving rare events or posterior predictive distributions in Bayesian statistics. Further, most state-of-the-art adaptive importance sampling (AIS) methods adapt the proposal as if the weights had not been normalized. In this paper, we propose a framework that considers the original task as the estimation of a ratio of two integrals. In our new formulation, we obtain samples from a joint proposal distribution in an extended space, with two of its marginals playing the role of the proposals used to estimate each integral. Importantly, the framework allows us to induce and control a dependency between both estimators. We propose a construction of the joint proposal that decomposes into two (multivariate) marginals and a coupling. This leads to a two-stage framework suitable for integration with existing or new AIS and/or variational inference (VI) algorithms. The marginals are adapted in the first stage, while the coupling can be chosen and adapted in the second stage. We show in several examples the benefits of the proposed methodology, including an application to Bayesian prediction with misspecified models.
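
To make the baseline concrete, here is a minimal sketch of the standard SNIS estimator that the abstract takes as its starting point (not the proposed extended-space framework); the target, test function, and proposal are arbitrary assumptions chosen so the exact answer is known.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unnormalized target p_tilde ∝ N(2, 1) (normalizing constant unknown / dropped); estimate E_p[X^2].
log_p_tilde = lambda x: -0.5 * (x - 2.0) ** 2
phi = lambda x: x ** 2

# Proposal q = N(0, 2^2); constants that cancel in the weight normalization are dropped.
mu_q, sig_q = 0.0, 2.0
x = rng.normal(mu_q, sig_q, size=100_000)
log_q = -0.5 * ((x - mu_q) / sig_q) ** 2 - np.log(sig_q)

# Self-normalized importance sampling: a ratio of two IS estimators built from the same samples.
log_w = log_p_tilde(x) - log_q
w = np.exp(log_w - log_w.max())          # subtract max for numerical stability; the scale cancels below
snis = np.sum(w * phi(x)) / np.sum(w)
print("SNIS estimate of E[X^2]:", snis, "(exact value: 5.0)")
```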

Prediction of climate tipping is challenging due to the lack of recent observations of actual climate tipping. Despite many previous efforts to accurately predict the existence and timing of climate tipping events under specific climate scenarios, the predictability of climate tipping, i.e., the necessary conditions under which climate tipping can be predicted, has yet to be explored. In this study, the predictability of climate tipping is analyzed by an Observing System Simulation Experiment (OSSE), in which the value of observations for prediction is assessed through idealized data assimilation experiments, using a simplified dynamic vegetation model and an Atlantic Meridional Overturning Circulation (AMOC) two-box model. We find that the ratio of internal variability to observation error, i.e., the signal-to-noise ratio, should be large enough to accurately predict climate tipping events. When observations can accurately resolve the internal variability of the system, assimilating them into process-based models can effectively improve the skill of predicting climate tipping. Our quantitative estimate of the observation accuracy required to predict climate tipping implies that the existing observation network is not always sufficient to accurately project climate tipping.

Weakly Supervised Semantic Segmentation (WSSS) employs weak supervision, such as image-level labels, to train the segmentation model. Despite the impressive achievements of recent WSSS methods, we identify that introducing weak labels with a high mean Intersection over Union (mIoU) does not guarantee high segmentation performance. Existing studies have emphasized the importance of prioritizing precision and reducing noise to improve overall performance. In the same vein, we propose ORANDNet, an advanced ensemble approach tailored for WSSS. ORANDNet combines Class Activation Maps (CAMs) from two different classifiers to increase the precision of pseudo-masks (PMs). To further mitigate small noise in the PMs, we incorporate curriculum learning: the segmentation model is first trained with pairs of smaller-sized images and corresponding PMs, gradually transitioning to the original-sized pairs. By combining the original CAMs of ResNet-50 and ViT, we significantly improve the segmentation performance over both the single best model and the naive ensemble model. We further extend our ensemble method to CAMs from the AMN (ResNet-like) and MCTformer (ViT-like) models, achieving performance benefits in advanced WSSS models. This highlights the potential of ORANDNet as a final add-on module for WSSS models.

We propose a Fast Fourier Transform based Periodic Interpolation Method (FFT-PIM), a flexible and computationally efficient approach for computing the scalar potential given by a superposition sum in a unit cell of an infinitely periodic array. Under the same umbrella, the FFT-PIM allows computing the potential for 1D, 2D, and 3D periodicities for dynamic and static problems, including problems with and without a periodic phase shift. The computational complexity of the FFT-PIM is $O(N \log N)$ for $N$ spatially coinciding source and observer points. The FFT-PIM uses rapidly converging series representations of the Green's function serving as a kernel in the superposition sum. Based on these representations, the FFT-PIM splits the potential into its near-zone component, which includes a small number of images surrounding the unit cell of interest, and its far-zone component, which includes the rest of the infinite set of images. The far-zone component is evaluated by projecting the non-uniform sources onto a sparse uniform grid, performing superposition sums on this sparse grid, and interpolating the potential from the uniform grid to the non-uniform observation points. The near-zone component is evaluated using an FFT-based method adapted to efficiently handle non-uniform source-observer distributions within the periodic unit cell. The FFT-PIM can be used for a broad range of applications, such as periodic problems involving integral equations in computational electromagnetics and acoustics, micromagnetic solvers, and density functional theory solvers.
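
To illustrate only the uniform-grid ingredient mentioned above (superposition sums on a regular grid done in $O(N \log N)$ via FFT), here is a toy 1D sketch with an assumed smooth Gaussian kernel standing in for a regularized periodic Green's function; it omits the near-/far-zone splitting, projection, and interpolation steps of the actual FFT-PIM.

```python
import numpy as np

# Toy illustration of the uniform-grid step only: a 1D periodic superposition sum
# evaluated as a circular convolution via FFT in O(N log N), checked against brute force.
N, L = 256, 1.0
x = np.arange(N) * L / N
q = np.zeros(N)
q[[30, 77, 200]] = [1.0, -0.5, 0.7]                 # point sources placed on the grid (assumed values)

sigma = 0.05                                        # assumed smooth kernel width
d0 = np.minimum(x, L - x)                           # periodic distance of each grid point to the origin
g = np.exp(-d0**2 / (2 * sigma**2))                 # stand-in for a regularized periodic Green's function

phi_fft = np.real(np.fft.ifft(np.fft.fft(q) * np.fft.fft(g)))   # circular convolution = periodic sum

# Brute-force reference: direct periodic superposition over all source-observer pairs.
dx = np.abs(x[:, None] - x[None, :])
dist = np.minimum(dx, L - dx)
phi_direct = np.exp(-dist**2 / (2 * sigma**2)) @ q
print("max |FFT - direct|:", np.max(np.abs(phi_fft - phi_direct)))
```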

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of explainee inferences conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
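
As a reminder of the classic result invoked here, Shepard's universal law of generalization states that the probability of generalizing a response from one stimulus to another decays approximately exponentially with their distance in psychological (similarity) space; in its standard form,

$$
g(x, y) \approx e^{-c\, d(x, y)},
$$

where $d(x, y)$ is the distance between the representations of $x$ and $y$ and $c > 0$ is a sensitivity parameter. How the paper embeds saliency-map comparison in such a similarity space is specified by the authors, not reproduced here.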

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and such systems remain difficult to scale. In this paper we present four algorithms to address these problems. Their combination enables each agent to improve its task allocation strategy through reinforcement learning, while changing how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
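
To give a flavour of the kind of mechanism described (an agent learning whom to allocate subtasks to while scaling its exploration with how good it believes its current strategy is), here is a generic, hedged sketch cast as a multi-armed bandit; the peer qualities, exploration schedule, and reward model are invented for illustration and do not correspond to the paper's four algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)

# One agent learns which peer to hand a subtask to (multi-armed-bandit view), and scales its
# exploration with how far its recent performance sits from the best value it has estimated so far.
n_peers, rounds = 5, 2000
true_quality = rng.uniform(0.2, 0.9, n_peers)   # hypothetical per-peer success probabilities

Q = np.zeros(n_peers)                            # estimated value of allocating to each peer
counts = np.zeros(n_peers)
recent, best_seen = [], 0.0

for _ in range(rounds):
    # explore more when recent rewards lag behind the best estimated value (assumed schedule)
    gap = best_seen - (np.mean(recent[-50:]) if recent else 0.0)
    eps = float(np.clip(0.05 + gap, 0.05, 0.5))
    peer = rng.integers(n_peers) if rng.random() < eps else int(np.argmax(Q))
    reward = float(rng.random() < true_quality[peer])        # did the peer complete the subtask?
    counts[peer] += 1
    Q[peer] += (reward - Q[peer]) / counts[peer]             # incremental mean update
    recent.append(reward)
    best_seen = max(best_seen, float(Q.max()))

print("most-used peer:", int(np.argmax(counts)), "| best peer:", int(np.argmax(true_quality)))
```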
