
We show a deviation inequality for U-statistics of independent data taking values in a separable Banach space which satisfies some smoothness assumptions. We then provide applications to rates in the law of large numbers for U-statistics, a Hölderian functional central limit theorem, and a moment inequality for incomplete U-statistics.


This survey investigates the transformative potential of various YOLO variants, from YOLOv1 to the state-of-the-art YOLOv10, in the context of agricultural advancements. The primary objective is to elucidate how these cutting-edge object detection models can re-energise and optimize diverse aspects of agriculture, ranging from crop monitoring to livestock management. It aims to achieve key objectives, including the identification of contemporary challenges in agriculture, a detailed assessment of YOLO's incremental advancements, and an exploration of its specific applications in agriculture. This is one of the first surveys to include the latest YOLOv10, offering a fresh perspective on its implications for precision farming and sustainable agricultural practices in the era of Artificial Intelligence and automation. Further, the survey undertakes a critical analysis of YOLO's performance, synthesizes existing research, and projects future trends. By scrutinizing the unique capabilities packed in YOLO variants and their real-world applications, this survey provides valuable insights into the evolving relationship between YOLO variants and agriculture. The findings contribute towards a nuanced understanding of the potential for precision farming and sustainable agricultural practices, marking a significant step forward in the integration of advanced object detection technologies within the agricultural sector.

This paper addresses structured normwise, mixed, and componentwise condition numbers (CNs) for a linear function of the solution to the generalized saddle point problem (GSPP). We present a general framework that enables us to measure the structured CNs of the individual components of the solution. Then, we derive their explicit formulae when the input matrices have symmetric, Toeplitz, or some general linear structures. In addition, compact formulae for the unstructured CNs are obtained, which recover previous results on CNs for GSPPs for specific choices of the linear function. Furthermore, the derived structured CNs are applied to determine structured CNs for weighted Toeplitz regularized least-squares problems and Tikhonov regularization problems, recovering some previous results from the literature.
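The structured formulae of the paper are problem-specific, but the quantity they refine is the classical unstructured normwise condition number $\kappa(A) = \|A\|\,\|A^{-1}\|$. A minimal sketch of that baseline for a tiny $2 \times 2$ system, using the infinity norm (all function names here are illustrative, not from the paper):

```python
def inf_norm(M):
    """Infinity norm of a matrix: maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in M)

def inverse_2x2(M):
    """Inverse of a 2x2 matrix (assumes a nonzero determinant)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def normwise_condition_number(M):
    """Classical unstructured normwise CN: kappa(A) = ||A|| * ||A^{-1}||."""
    return inf_norm(M) * inf_norm(inverse_2x2(M))

A = [[1.0, 2.0], [3.0, 4.0]]
print(normwise_condition_number(A))  # → 21.0
```

Structured CNs restrict the perturbations to the matrix structure (e.g. Toeplitz), so they are never larger than this unstructured value and can be much smaller.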

In this work, we develop algebraic solvers for linear systems arising from the discretization of second-order elliptic problems by saddle-point mixed finite element methods of arbitrary polynomial degree $p \ge 0$. We present a multigrid and a two-level domain decomposition approach in two or three space dimensions, which are steered by their respective a posteriori estimators of the algebraic error. First, we extend the results of [A. Mira\c{c}i, J. Pape\v{z}, and M. Vohral\'ik, SIAM J. Sci. Comput. 43 (2021), S117--S145] to the mixed finite element setting. Extending the multigrid procedure itself is rather natural. To obtain analogous theoretical results, however, a multilevel stable decomposition of the velocity space is needed. In two space dimensions, we can treat the velocity space as the curl of a stream-function space, for which the previous results apply. In three space dimensions, we design a novel stable decomposition by combining a one-level high-order local stable decomposition of [Chaumont-Frelet and Vohral\'ik, SIAM J. Numer. Anal. 61 (2023), 1783--1818] and a multilevel lowest-order stable decomposition of [Hiptmair, Wu, and Zheng, Numer. Math. Theory Methods Appl. 5 (2012), 297--332]. This allows us to prove that our multigrid solver contracts the algebraic error at each iteration and, simultaneously, that the associated a posteriori estimator is efficient. A $p$-robust contraction is shown in two space dimensions. Next, we use this multilevel methodology to define a two-level domain decomposition method where the subdomains consist of overlapping patches of coarse-level elements sharing a common coarse-level vertex. We again establish a ($p$-robust) contraction of the solver and efficiency of the a posteriori estimator. Numerical results presented both for the multigrid approach and the domain decomposition method confirm the theoretical findings.
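The paper's solver targets saddle-point mixed finite elements, but the contraction property it proves can be illustrated on the simpler primal 1D Poisson problem. A minimal two-grid cycle, a sketch only (damped Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation, exact coarse solve), not the mixed-FEM solver of the paper:

```python
def apply_A(u, h):
    """1D Poisson stencil: (A u)_i = (-u_{i-1} + 2 u_i - u_{i+1}) / h^2."""
    n = len(u)
    return [(-(u[i-1] if i > 0 else 0.0) + 2*u[i]
             - (u[i+1] if i < n-1 else 0.0)) / h**2 for i in range(n)]

def jacobi(u, f, h, sweeps=3, omega=2/3):
    """Damped Jacobi smoother: u <- u + omega * D^{-1} (f - A u)."""
    for _ in range(sweeps):
        Au = apply_A(u, h)
        u = [u[i] + omega * (h**2 / 2) * (f[i] - Au[i]) for i in range(len(u))]
    return u

def restrict(r):
    """Full weighting: coarse j takes (1/4, 1/2, 1/4) of fine 2j, 2j+1, 2j+2."""
    return [(r[2*j] + 2*r[2*j+1] + r[2*j+2]) / 4 for j in range((len(r)-1)//2)]

def prolong(ec, n):
    """Linear interpolation of the coarse correction back to the fine grid."""
    e = [0.0] * n
    for j, v in enumerate(ec):
        e[2*j+1] = v
    for i in range(0, n, 2):
        e[i] = ((e[i-1] if i > 0 else 0.0) + (e[i+1] if i < n-1 else 0.0)) / 2
    return e

def solve_tridiag(f, h):
    """Exact coarse solve of the 1D Poisson system (Thomas algorithm)."""
    n = len(f)
    a, b, c = -1/h**2, 2/h**2, -1/h**2
    cp, dp = [0.0]*n, [0.0]*n
    cp[0], dp[0] = c / b, f[0] / b
    for i in range(1, n):
        m = b - a * cp[i-1]
        cp[i], dp[i] = c / m, (f[i] - a * dp[i-1]) / m
    x = [0.0]*n
    x[-1] = dp[-1]
    for i in range(n-2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i+1]
    return x

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, coarse correction, post-smooth."""
    u = jacobi(u, f, h)
    Au = apply_A(u, h)
    r = [f[i] - Au[i] for i in range(len(u))]
    e = prolong(solve_tridiag(restrict(r), 2*h), len(u))
    u = [u[i] + e[i] for i in range(len(u))]
    return jacobi(u, f, h)
```

Iterating `two_grid` contracts the algebraic error at every cycle; the paper's contribution is proving an analogous ($p$-robust) contraction in the much harder mixed-FEM setting, steered by a posteriori estimators.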

The rapid development of Large Language Models (LLMs) and Generative Pre-Trained Transformers (GPTs) in the field of Generative Artificial Intelligence (AI) can significantly impact task automation in the modern economy. We anticipate that the Probabilistic Risk Assessment (PRA) field will inevitably be affected by this technology. Thus, the main goal of this paper is to engage the risk assessment community in a discussion of the benefits and drawbacks of this technology for PRA. We make a preliminary analysis of possible applications of LLMs in the PRA modeling context, referring to ongoing experience in the software engineering field. We explore potential application scenarios and the necessary conditions for controlled LLM usage in PRA modeling (whether static or dynamic). Additionally, we consider the potential impact of this technology on PRA modeling tools.

To deal with the task assignment problem of multi-AUV systems under kinematic constraints, i.e., the steering capability limits of underactuated AUVs and similar vehicles, an improved task assignment algorithm is proposed that combines the Dubins path algorithm with an improved SOM (self-organizing map) neural network. First, the targets are assigned to the AUVs by the improved SOM method based on workload balance and a neighborhood function. When kinematic constraints or obstacles cause trajectory planning to fail, task re-assignment is carried out by changing the weights of the SOM neurons until every AUV has a feasible path to all of its targets. The Dubins paths are then generated for several limiting cases; because the AUV's yaw angle is constrained, this can trigger new assignments of the targets. The computation flow is designed so that the algorithm, implemented in MATLAB and Python, realizes path planning to multiple targets. Finally, simulation results show that the proposed algorithm effectively accomplishes task assignment for multi-AUV systems.
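The core of the SOM step can be sketched as a competitive winner-take-all update with a neighborhood function. The following toy version (illustrative only: it omits the paper's workload-balance term, the Dubins feasibility check, and the re-assignment loop) treats each AUV as a neuron whose weight vector is a virtual position pulled toward presented targets:

```python
import math
import random

def som_assign(auvs, targets, iters=200, lr0=0.5):
    """Assign each target to an AUV 'neuron': the winner (and, more weakly,
    its neighbors) is pulled toward each presented target; after training,
    every target is assigned to its nearest neuron."""
    w = [list(p) for p in auvs]              # neuron weights = virtual AUV positions
    random.seed(0)                           # deterministic for reproducibility
    for t in range(iters):
        lr = lr0 * (1 - t / iters)           # decaying learning rate
        tgt = random.choice(targets)
        dists = [math.dist(wi, tgt) for wi in w]
        win = dists.index(min(dists))        # winner-take-all competition
        for i in range(len(w)):
            g = math.exp(-abs(i - win))      # neighborhood strength
            w[i][0] += lr * g * (tgt[0] - w[i][0])
            w[i][1] += lr * g * (tgt[1] - w[i][1])
    return [min(range(len(w)), key=lambda i: math.dist(w[i], tgt))
            for tgt in targets]

# each target ends up assigned to the AUV whose region it falls in
print(som_assign([(0.0, 0.0), (10.0, 0.0)], [(1.0, 1.0), (9.0, 1.0)]))  # → [0, 1]
```

In the paper's algorithm, an assignment that yields no feasible Dubins path (due to the yaw-angle limit or obstacles) would additionally trigger a weight change and re-assignment, which this sketch does not model.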

This paper presents SplineGen, a learning-based method for solving the traditional parameterization and knot placement problems in B-spline approximation. Unlike conventional heuristic methods or recent AI-based methods, the proposed method does not assume ordered or fixed-size data points as input, and there is no need to manually set the number of knots. It casts parameterization and knot placement as a sequence-to-sequence translation problem: a generative process that automatically determines the number of knots, their placement, the parameter values, and their ordering. Once trained, SplineGen demonstrates a notable improvement over existing methods, with a one-to-two order of magnitude increase in approximation accuracy on test data.
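To see where the knot vector and parameter values that SplineGen predicts actually enter, it helps to recall how a B-spline is evaluated. A standard de Boor evaluation (this is the classical algorithm, not part of the paper's method; the quadratic example below is arbitrary):

```python
def de_boor(k, t, knots, ctrl, p):
    """Evaluate a degree-p B-spline at parameter t, where k is the knot span
    index with knots[k] <= t < knots[k+1] (classical de Boor recursion)."""
    d = [list(ctrl[j + k - p]) for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (t - knots[j + k - p]) / (knots[j + 1 + k - r] - knots[j + k - p])
            d[j] = [(1 - alpha) * d[j - 1][i] + alpha * d[j][i]
                    for i in range(len(d[j]))]
    return d[p]

# clamped quadratic (Bezier-like) example: knot span k = 2 contains t = 0.5
knots = [0, 0, 0, 1, 1, 1]
ctrl = [(0, 0), (1, 2), (2, 0)]
print(de_boor(2, 0.5, knots, ctrl, 2))  # → [1.0, 1.0]
```

Approximation quality hinges on how the parameters $t_i$ assigned to the data points and the knot vector are chosen; SplineGen's contribution is to generate both jointly with a learned sequence model instead of heuristics.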

This work introduces a new approach to building crash-safe file systems for persistent memory. We exploit the fact that Rust's typestate pattern allows compile-time enforcement of a specific order of operations. We introduce a novel crash-consistency mechanism, Synchronous Soft Updates, that boils down crash safety to enforcing ordering among updates to file-system metadata. We employ this approach to build SquirrelFS, a new file system with crash-consistency guarantees that are checked at compile time. SquirrelFS avoids the need for separate proofs, instead incorporating correctness guarantees into the typestate itself. Compiling SquirrelFS only takes tens of seconds; successful compilation indicates crash consistency, while an error provides a starting point for fixing the bug. We evaluate SquirrelFS against state-of-the-art file systems such as NOVA and WineFS, and find that SquirrelFS achieves similar or better performance on a wide range of benchmarks and applications.
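The typestate idea is that each operation consumes an object in one state and returns an object in the next state, so an out-of-order call simply does not type-check. SquirrelFS does this in Rust, where the checking happens at compile time; the sketch below is a loose Python analogue with invented state names (not SquirrelFS's actual API), where a violation surfaces as a missing method, either via a static type checker or at runtime:

```python
class Free:
    """A block with no pending updates; only write_data is available."""
    def write_data(self, payload):
        return DataWritten(payload)      # step 1: write the data first

class DataWritten:
    def __init__(self, payload):
        self.payload = payload
    def flush(self):
        return Flushed(self.payload)     # step 2: persist it (flush + fence on PM)

class Flushed:
    def __init__(self, payload):
        self.payload = payload
    def commit_metadata(self):
        return Committed(self.payload)   # step 3: only flushed data may be linked

class Committed:
    """Metadata now points at durable data; a crash at any earlier state
    leaves the metadata untouched, so the update ordering is crash-safe."""
    def __init__(self, payload):
        self.payload = payload

ok = Free().write_data(b"hi").flush().commit_metadata()          # legal order
# Free().write_data(b"hi").commit_metadata()  # illegal: no such method on DataWritten
```

In Rust, the illegal call is rejected by the compiler rather than failing at runtime, which is what lets SquirrelFS turn successful compilation into a crash-consistency check.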

We describe a general approach to deriving linear-time logics for a wide variety of state-based, quantitative systems, by modelling the latter as coalgebras whose type incorporates both branching and linear behaviour. Concretely, we define logics whose syntax is determined by the type of linear behaviour, and whose domain of truth values is determined by the type of branching behaviour, and we provide two semantics for them: a step-wise semantics akin to that of standard coalgebraic logics, and a path-based semantics akin to that of standard linear-time logics. The former semantics is useful for model checking, whereas the latter is the more natural semantics, as it measures the extent to which qualitative properties hold along computation paths from a given state. Our main result is the equivalence of the two semantics. We also provide a semantic characterisation of a notion of logical distance induced by these logics. Instances of our logics support reasoning about the possibility, likelihood or minimal cost of exhibiting a given linear-time property.

The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
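The implicit-regularization claim has a concrete toy instance: for underdetermined linear regression, gradient descent started at zero stays in the row space of the data and therefore converges to the minimum-norm interpolating solution, with no explicit penalty term. A minimal sketch (a one-example, three-feature system chosen for illustration):

```python
def gd_minnorm(X, y, lr=0.1, steps=500):
    """Gradient descent on squared loss from w = 0. The gradient is always a
    combination of the rows of X, so the iterates never leave the row space;
    among all interpolators, the limit is the one of minimum Euclidean norm."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    for _ in range(steps):
        grad = [0.0] * n_feat            # gradient of (1/2) sum_i (w.x_i - y_i)^2
        for xi, yi in zip(X, y):
            err = sum(w[d] * xi[d] for d in range(n_feat)) - yi
            for d in range(n_feat):
                grad[d] += err * xi[d]
        w = [w[d] - lr * grad[d] for d in range(n_feat)]
    return w

# one equation, three unknowns: w . (1, 1, 0) = 2 has infinitely many solutions;
# gradient descent from zero picks the minimum-norm one, (1, 1, 0).
print(gd_minnorm([[1.0, 1.0, 0.0]], [2.0]))
```

The unused third coordinate stays exactly zero: the "spiky" directions orthogonal to the data are never touched, which is the simplest shadow of the decomposition into a useful component and a harmless overfitting component described above.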

While deep learning strategies achieve outstanding results in computer vision tasks, one issue remains: current strategies rely heavily on large amounts of labeled data, and in many real-world problems it is not feasible to create that much labeled training data. Therefore, researchers try to incorporate unlabeled data into the training process to reach equal results with fewer labels. Because of the large amount of concurrent research, it is difficult to keep track of recent developments. In this survey we provide an overview of commonly used techniques and methods for image classification with fewer labels, comparing 21 methods. In our analysis we identify three major trends. 1. State-of-the-art methods are scalable to real-world applications based on their accuracy. 2. The degree of supervision needed to achieve results comparable to using all labels is decreasing. 3. All methods share common techniques, while only a few combine these techniques to achieve better performance. Based on these three trends we identify future research opportunities.
