
Studying the response of a climate system to perturbations has practical significance. Standard methods that compute the trajectory-wise deviation caused by perturbations may suffer from the chaotic nature of the system, which makes the model error dominate the true response after a short lead time. The statistical response, which quantifies the change in the statistics induced by perturbations, provides a systematic way of reaching robust conclusions with an appropriate quantification of uncertainty and extreme events. In this paper, information theory is applied to compute the statistical response and to find the most sensitive perturbation directions of different El Niño-Southern Oscillation (ENSO) events with respect to initial-value and model-parameter perturbations. Depending on the initial phase and the time horizon, different state variables contribute to the most sensitive perturbation direction. While initial perturbations in sea surface temperature (SST) and thermocline depth usually lead to the most significant response of SST at short and long range, respectively, an initial adjustment of the zonal advection can be crucial for triggering strong statistical responses at medium range, around 5 to 7 months, especially in the transient phases between El Niño and La Niña. It is also shown that the response in the variance triggered by perturbations of the external random forcing, such as the wind bursts, often dominates the mean response, making the resulting most sensitive direction very different from that of trajectory-wise methods. Finally, despite the strongly non-Gaussian climatological distributions, using Gaussian approximations in the information-theoretic framework is efficient and accurate for computing the statistical response, allowing the method to be applied to sophisticated operational systems.
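
The abstract does not spell out the specific information-theoretic functional, but a standard choice in this line of work is the relative entropy between the perturbed and unperturbed statistics, which under the Gaussian approximation mentioned above splits into a "signal" (mean shift) and a "dispersion" (covariance change) contribution. A minimal sketch of that decomposition, assuming ensemble mean and covariance estimates are available; the variable layout is hypothetical:

```python
import numpy as np

def gaussian_relative_entropy(mu_pert, cov_pert, mu_ref, cov_ref):
    """Relative entropy KL(N(mu_pert, cov_pert) || N(mu_ref, cov_ref)),
    split into the 'signal' (mean shift) and 'dispersion' (covariance
    change) parts used to rank perturbation directions."""
    d = mu_ref.shape[0]
    cov_ref_inv = np.linalg.inv(cov_ref)
    diff = mu_pert - mu_ref
    signal = 0.5 * diff @ cov_ref_inv @ diff
    dispersion = 0.5 * (np.trace(cov_ref_inv @ cov_pert) - d
                        - np.log(np.linalg.det(cov_pert) / np.linalg.det(cov_ref)))
    return signal, dispersion

# usage sketch with hypothetical ensemble statistics for (SST, thermocline, zonal advection)
mu_ref, cov_ref = np.zeros(3), np.eye(3)
mu_pert, cov_pert = np.array([0.3, 0.1, 0.0]), np.diag([1.2, 1.0, 0.9])
print(gaussian_relative_entropy(mu_pert, cov_pert, mu_ref, cov_ref))
```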

Related content

Despite the possibility of quickly computing reachable sets of large-scale linear systems, current methods are not yet widely applied by practitioners. The main reason is probably that current approaches are not push-button capable and still require manually setting crucial parameters, such as time step sizes and the accuracy of the underlying set representation -- settings that require expert knowledge. We present a generic framework to automatically find near-optimal parameters for reachability analysis of linear systems given a user-defined accuracy. To limit the computational overhead as much as possible, our methods tune all relevant parameters during runtime. We evaluate our approach on benchmarks from the ARCH competition as well as on random examples. Our results show that our new framework verifies the selected benchmarks faster than manually tuned parameters and is an order of magnitude faster than genetic algorithms.
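
As a rough illustration of the kind of runtime parameter tuning described above (the paper's actual refinement rules for the time step and the set representation are not reproduced here), the hedged sketch below halves a candidate time step until a simple proxy for the per-step over-approximation error meets a user-defined accuracy:

```python
import numpy as np
from scipy.linalg import expm

def tune_time_step(A, x0, horizon, target_error, dt0=0.5):
    """Illustrative only: shrink dt until a crude proxy for the per-step
    over-approximation error (the second-order remainder of the matrix
    exponential acting on the initial state) meets the desired accuracy
    over the whole horizon. Real tools also adapt the set representation."""
    dt = dt0
    while dt > 1e-8:
        remainder = expm(A * dt) - np.eye(A.shape[0]) - A * dt
        step_error = np.linalg.norm(remainder @ x0)
        n_steps = int(np.ceil(horizon / dt))
        if step_error * n_steps <= target_error:
            return dt
        dt *= 0.5
    raise RuntimeError("accuracy not reachable with this simple proxy")

# usage sketch on a small stable system
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
print(tune_time_step(A, x0=np.array([1.0, 0.0]), horizon=5.0, target_error=1e-3))
```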

Designing distributed filtering circuits (DFCs) is complex and time-consuming, and circuit performance relies heavily on the expertise and experience of electronics engineers; such manual design tends to be exceedingly inefficient. This study proposes a novel end-to-end automated circuit-generation method to improve the design of DFCs. The proposed method harnesses reinforcement learning (RL) algorithms, eliminating the dependence on the design experience of engineers and thus significantly reducing the subjectivity and constraints associated with circuit design. The experimental findings demonstrate clear improvements in both design efficiency and quality when comparing the proposed method with traditional engineer-driven methods. In particular, the proposed method achieves superior performance when designing complex or rapidly evolving DFCs. Furthermore, compared with existing circuit-design automation techniques, the proposed method demonstrates superior design efficiency, highlighting the substantial potential of RL in circuit design automation.
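
The paper does not publish its environment interface, so the gym-style skeleton below is purely hypothetical: the state, action, reward, and the placeholder simulator are all assumptions, meant only to show how an RL agent could interact with a filter-design task.

```python
import numpy as np

class FilterDesignEnv:
    """Hypothetical sketch of an RL environment for filter design.
    State: current component values; action: scale one component;
    reward: negative error between a (placeholder) simulated response
    and the target frequency response."""

    def __init__(self, n_components=6, target_response=None):
        self.n = n_components
        self.target = target_response if target_response is not None else np.ones(64)
        self.values = np.ones(self.n)

    def _simulate_response(self):
        # Stand-in for a call to a real circuit / electromagnetic simulator.
        freq = np.linspace(0.1, 2.0, self.target.size)
        return 1.0 / (1.0 + np.outer(freq, self.values).sum(axis=1) * 0.01)

    def reset(self):
        self.values = np.ones(self.n)
        return self.values.copy()

    def step(self, action):
        idx, scale = action              # e.g. (2, 1.05): enlarge component 2 by 5%
        self.values[idx] *= scale
        reward = -np.mean((self._simulate_response() - self.target) ** 2)
        return self.values.copy(), reward, False, {}
```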

Optimal control problems can be solved by applying the Pontryagin maximum principle and then solving the resulting Hamiltonian dynamical system. In this paper, we propose novel learning frameworks to tackle optimal control problems. By applying the Pontryagin maximum principle to the original optimal control problem, the learning focus shifts to the reduced Hamiltonian dynamics and the corresponding adjoint variables. The reduced Hamiltonian networks can be learned by going backward in time and then minimizing a loss function deduced from the Pontryagin maximum principle's conditions. The learning process is further improved by progressively learning a posterior distribution of reduced Hamiltonians, using a variational autoencoder, which leads to a more effective path-exploration process. We apply our learning frameworks to control tasks and obtain competitive results.
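
For reference, these are the textbook first-order conditions of the Pontryagin maximum (minimum) principle from which such a loss function is presumably deduced; the paper's exact sign conventions and reduced formulation may differ.

```latex
% Minimum-principle convention: minimize J subject to \dot{x} = f(x,u).
\[
\begin{aligned}
& J = \Phi\big(x(T)\big) + \int_0^T L(x,u)\,dt, \qquad
  H(x,p,u) = L(x,u) + p^{\top} f(x,u), \\
& \dot{x} = \frac{\partial H}{\partial p}, \qquad
  \dot{p} = -\frac{\partial H}{\partial x}, \qquad
  p(T) = \frac{\partial \Phi}{\partial x}\big(x(T)\big), \qquad
  u^{*}(t) = \arg\min_{u} H\big(x(t), p(t), u\big).
\end{aligned}
\]
```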

Graphics processing units (GPUs) are continually evolving to cater to the computational demands of contemporary general-purpose workloads, particularly those driven by artificial intelligence (AI) using deep learning techniques. A substantial body of work has been dedicated to dissecting the microarchitectural metrics of different GPU generations, which helps researchers understand the hardware details and leverage them to optimize GPU programs. However, the latest Hopper GPUs present a set of novel attributes, including new tensor cores supporting FP8, DPX instructions, and distributed shared memory, whose performance and operational characteristics remain largely undocumented. In this research, we propose an extensive benchmarking study focused on the Hopper GPU. The objective is to unveil its microarchitectural intricacies through an examination of the new instruction-set architecture (ISA) of Nvidia GPUs and the use of new CUDA APIs. Our approach involves two main aspects. First, we conduct conventional latency and throughput comparison benchmarks across the three most recent GPU architectures, namely Hopper, Ada, and Ampere. Second, we provide a comprehensive discussion and benchmarking of the latest Hopper features, encompassing the Hopper DPX dynamic programming (DP) instruction set, distributed shared memory, and the FP8 tensor cores. The microbenchmarking results we present offer a deeper understanding of the novel GPU AI function units and programming features introduced by the Hopper architecture. This understanding is expected to greatly facilitate software optimization and modeling efforts for GPU architectures. To the best of our knowledge, this study makes the first attempt to demystify the tensor core performance and programming instruction sets unique to Hopper GPUs.
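
The paper's microbenchmarks operate at the PTX/SASS and CUDA-API level; as a much coarser illustration of the latency/throughput methodology, the sketch below times a tensor-core matmul with CUDA events through PyTorch (torch.cuda.Event), which only captures end-to-end rates rather than per-instruction behavior.

```python
import torch

def matmul_throughput(n=8192, dtype=torch.float16, iters=50):
    """Coarse end-to-end throughput of an n x n matmul in TFLOP/s,
    timed with CUDA events. Not a per-instruction microbenchmark."""
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    for _ in range(5):                      # warm-up
        a @ b
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        a @ b
    end.record()
    torch.cuda.synchronize()
    seconds = start.elapsed_time(end) / 1e3 / iters   # elapsed_time is in ms
    return 2 * n ** 3 / seconds / 1e12

if torch.cuda.is_available():
    print(f"{matmul_throughput():.1f} TFLOP/s")
```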

When constructing parametric models to predict the cost of future claims, several important details have to be taken into account: (i) models should be designed to accommodate deductibles, policy limits, and coinsurance factors; (ii) parameters should be estimated robustly to control the influence of outliers on model predictions; and (iii) all point predictions should be augmented with estimates of their uncertainty. The methodology proposed in this paper provides a framework for addressing all of these aspects simultaneously. Using payment-per-payment and payment-per-loss variables, we construct the adaptive version of the method of winsorized moments (MWM) estimators for the parameters of the truncated and censored lognormal distribution. Further, the asymptotic distributional properties of this approach are derived and compared with those of the maximum likelihood estimator (MLE) and of the method of trimmed moments (MTM) estimators, the latter being a primary competitor of MWM. Moreover, the theoretical results are validated with extensive simulation studies and a risk-measure sensitivity analysis. Finally, the practical performance of these methods is illustrated using the well-studied data set of 1500 U.S. indemnity losses. With this real data set, it is also demonstrated that composite models do not provide much improvement in the quality of predictive models compared with a stand-alone fitted distribution, especially for truncated and censored sample data.
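
As a loose illustration of the winsorized-moment idea (not the paper's adaptive MWM, which handles truncation and censoring by matching the winsorized sample moments to their theoretical counterparts), one can winsorize the log-losses and read off naive location and scale estimates; the hypothetical helper below shows the mechanics only.

```python
import numpy as np
from scipy.stats.mstats import winsorize

def lognormal_mwm_sketch(losses, lower=0.05, upper=0.05):
    """Very simplified sketch for complete data: winsorize log-losses and
    take plug-in moments. A consistent MWM estimator would additionally
    correct these by the theoretical winsorized moments of the normal."""
    z = winsorize(np.log(losses), limits=(lower, upper))
    return z.mean(), z.std(ddof=1)   # naive (mu, sigma) estimates

# usage sketch on synthetic severity data
rng = np.random.default_rng(0)
losses = rng.lognormal(mean=8.0, sigma=1.5, size=1500)
print(lognormal_mwm_sketch(losses))
```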

Among the common applications of plenoptic cameras are depth reconstruction and post-shot refocusing. These require a calibration relating the camera-side light field to that of the scene. Numerous methods with this goal have been developed based on thin lens models for the plenoptic camera's main lens and microlenses. Our work addresses the often-overlooked role of the main lens exit pupil in these models, and specifically in the decoding process of standard plenoptic camera (SPC) images. We formally deduce the connection between the refocusing distance and the resampling parameter for the decoded light field and provide an analysis of the errors that arise when the exit pupil is not considered. In addition, previous work is revisited with respect to the exit pupil's role, and all theoretical results are validated through a ray-tracing-based simulation. With the public release of the evaluated SPC designs alongside our simulation and experimental data, we aim to contribute to a more accurate and nuanced understanding of plenoptic camera optics.
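
For context, the thin-lens imaging relations that such calibration models build on are the textbook ones below; the paper's specific refocusing-distance/resampling-parameter relation and its exit-pupil correction are not reproduced here.

```latex
% Thin-lens imaging equation and transverse magnification, with object
% distance a, image distance b, and focal length f (illustrative only):
\[
\frac{1}{a} + \frac{1}{b} = \frac{1}{f}, \qquad M = -\frac{b}{a}.
\]
```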

Instructing the model to generate a sequence of intermediate steps, a.k.a. a chain of thought (CoT), is a highly effective method to improve the accuracy of large language models (LLMs) on arithmetic and symbolic reasoning tasks. However, the mechanism behind CoT remains unclear. This work provides a theoretical understanding of the power of CoT for decoder-only transformers through the lens of expressiveness. Conceptually, CoT empowers the model with the ability to perform inherently serial computation, which is otherwise lacking in transformers, especially when the depth is low. Given input length $n$, previous works have shown that constant-depth transformers with finite precision and $\mathsf{poly}(n)$ embedding size can only solve problems in $\mathsf{TC}^0$ without CoT. We first show an even tighter expressiveness upper bound for constant-depth transformers with constant-bit precision, which can only solve problems in $\mathsf{AC}^0$, a proper subset of $\mathsf{TC}^0$. However, with $T$ steps of CoT, constant-depth transformers using constant-bit precision and $O(\log n)$ embedding size can solve any problem solvable by Boolean circuits of size $T$. Empirically, enabling CoT dramatically improves the accuracy on tasks that are hard for parallel computation, including the composition of permutation groups, iterated squaring, and the circuit value problem, especially for low-depth transformers.
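
As a concrete toy version of the permutation-composition task mentioned above, the sketch below composes $k$ random permutations sequentially; the running product recorded after each step is exactly the kind of intermediate state a chain of thought would write out, whereas answering without CoT requires producing the final composition in one shot.

```python
import random

def compose(p, q):
    """(p o q)(i) = p[q[i]] -- one sequential "chain-of-thought" step."""
    return [p[q[i]] for i in range(len(p))]

def permutation_composition_task(k=5, n=5, seed=0):
    """Compose k random permutations of {0,...,n-1}, recording the running
    product after each step as the intermediate trace a CoT would emit."""
    rng = random.Random(seed)
    perms = [rng.sample(range(n), n) for _ in range(k)]
    running = list(range(n))            # identity permutation
    trace = []
    for p in perms:
        running = compose(p, running)
        trace.append(running[:])
    return perms, trace

perms, trace = permutation_composition_task()
print(trace[-1])   # the final composed permutation
```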

Clustering with capacity constraints is a fundamental problem that has attracted significant attention over the years. In this paper, we give the first FPT constant-factor approximation algorithm for the problem of clustering points in a general metric into $k$ clusters to minimize the sum of cluster radii, subject to non-uniform hard capacity constraints. In particular, we give a $(15+\epsilon)$-approximation algorithm that runs in $2^{O(k^2\log k)}\cdot n^3$ time. When capacities are uniform, we obtain the following improved approximation bounds: a $(4+\epsilon)$-approximation with running time $2^{O(k\log(k/\epsilon))}n^3$, which significantly improves over the FPT 28-approximation of Inamdar and Varadarajan [ESA 2020]; a $(2+\epsilon)$-approximation with running time $2^{O(k/\epsilon^2 \cdot\log(k/\epsilon))}dn^3$ and a $(1+\epsilon)$-approximation with running time $2^{O(kd\log(k/\epsilon))}n^{3}$ in Euclidean space; and a $(1+\epsilon)$-approximation in Euclidean space with running time $2^{O(k/\epsilon^2 \cdot\log(k/\epsilon))}dn^3$ if we are allowed to violate the capacities by a $(1+\epsilon)$-factor. We complement this result by showing that there is no $(1+\epsilon)$-approximation algorithm running in time $f(k)\cdot n^{O(1)}$ if no capacity violation is allowed.
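
To make the objective concrete (this only evaluates a candidate solution; it is not the approximation algorithm itself), the hypothetical helper below computes the sum of cluster radii for a given assignment and checks the hard capacity constraints.

```python
import numpy as np

def sum_of_radii(points, centers, assignment, capacities):
    """Objective value for capacitated sum-of-radii clustering: each cluster
    contributes the maximum distance from its center to its assigned points,
    provided its hard capacity is respected."""
    total = 0.0
    for j, c in enumerate(centers):
        members = points[assignment == j]
        if len(members) > capacities[j]:
            raise ValueError(f"cluster {j} violates its capacity")
        if len(members):
            total += np.max(np.linalg.norm(members - c, axis=1))
    return total
```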

We explore the use of aggregative crowdsourced forecasting (ACF) as a mechanism to help operationalize the ``collective intelligence'' of human-machine teams for coordinated actions. We adopt the following definition of Collective Intelligence: ``A property of groups that emerges from synergies among data-information-knowledge, software-hardware, and individuals (those with new insights as well as recognized authorities) that enables just-in-time knowledge for better decisions than these three elements acting alone.'' Collective Intelligence emerges from new ways of connecting humans and AI to enable decision-advantage, in part by creating and leveraging additional sources of information that might otherwise not be included. ACF is a recent key advancement towards Collective Intelligence in which predictions (an X% probability that Y will happen) and rationales (why I believe Y will happen with this probability) are elicited independently from a diverse crowd, aggregated, and then used to inform higher-level decision-making. This research asks whether ACF, as a key way to enable Operational Collective Intelligence, could be brought to bear on operational scenarios (i.e., sequences of events with defined agents, components, and interactions) and decision-making, and considers whether such a capability could provide novel operational capabilities that enable new forms of decision-advantage.
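
The abstract does not commit to a particular aggregation rule; two standard choices from the forecasting literature, shown in the sketch below, are the simple linear pool and the geometric mean of odds, optionally "extremized" to counteract the moderating effect of averaging.

```python
import numpy as np

def aggregate_forecasts(probs, extremize=1.0):
    """Aggregate independent probability forecasts with (a) the linear pool
    (plain average) and (b) the geometric mean of odds, raised to an
    extremizing exponent >= 1. Probabilities must lie strictly in (0, 1)."""
    probs = np.asarray(probs, dtype=float)
    linear_pool = probs.mean()
    odds = probs / (1.0 - probs)
    agg_odds = np.exp(np.log(odds).mean()) ** extremize
    geo_odds_pool = agg_odds / (1.0 + agg_odds)
    return linear_pool, geo_odds_pool

# usage sketch: four hypothetical crowd forecasts for the same event
print(aggregate_forecasts([0.6, 0.7, 0.55, 0.8], extremize=1.5))
```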

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
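
A simplified, unweighted sketch in the spirit of such a deep perceptual metric: compare unit-normalized VGG-16 activations at a few layers and average the squared differences. The learned per-channel weights of the actual LPIPS metric, and proper ImageNet input normalization, are omitted here.

```python
import torch
import torchvision

def vgg_perceptual_distance(x, y, layer_ids=(3, 8, 15, 22, 29)):
    """Unweighted deep-feature distance: run both images through VGG-16,
    unit-normalize activations across channels at the selected ReLU layers,
    and average the squared differences over space and layers."""
    vgg = torchvision.models.vgg16(
        weights=torchvision.models.VGG16_Weights.IMAGENET1K_V1
    ).features.eval()
    dist = 0.0
    feat_x, feat_y = x, y
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            feat_x, feat_y = layer(feat_x), layer(feat_y)
            if i in layer_ids:
                nx = feat_x / (feat_x.norm(dim=1, keepdim=True) + 1e-10)
                ny = feat_y / (feat_y.norm(dim=1, keepdim=True) + 1e-10)
                dist += ((nx - ny) ** 2).sum(dim=1).mean(dim=(1, 2))
    return dist / len(layer_ids)

# usage sketch: two random "images" in NCHW format with values in [0, 1]
a, b = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
print(vgg_perceptual_distance(a, b))
```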
