
This paper focuses on the Partitioned-Solution Approach (PSA) employed for the Time-Domain Simulation (TDS) of dynamic power system models. In PSA, differential equations are solved at each step of the TDS for state variables, whereas algebraic equations are solved separately. The goal of this paper is to propose a novel, matrix-pencil based technique to study numerical stability and accuracy of PSA in a unified way. The proposed technique quantifies the numerical deformation that PSA-based methods introduce to the dynamics of the power system model, and allows estimating useful upper time step bounds that achieve prescribed simulation accuracy criteria. The family of Predictor-Corrector (PC) methods, which is commonly applied in practical implementations of PSA, is utilized to illustrate the proposed technique. Simulations are carried out on the IEEE 39-bus system, as well as on a 1479-bus model of the All-Island Irish Transmission System (AIITS).
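
For readers unfamiliar with the partitioned-solution idea, the sketch below shows its basic mechanics on a toy semi-explicit DAE: a predictor-corrector pair advances the state variable of the differential equation, while the algebraic equation is re-solved separately after each stage. The functions `f`, `g`, `solve_algebraic`, the trapezoidal corrector, and the toy model are illustrative assumptions, not the paper's PSA implementation or the TDS models it studies.

```python
import numpy as np
from scipy.optimize import fsolve  # algebraic equations are solved separately from the ODEs

# Toy semi-explicit DAE: x' = f(x, y), 0 = g(x, y) (illustrative only).
# The constraint y = 0.5*x makes the reduced dynamics x' = -0.5*x.
def f(x, y):
    return -x + y

def g(x, y):
    return y - 0.5 * x

def solve_algebraic(x, y_guess):
    """Solve 0 = g(x, y) for the algebraic variable y with the state x held fixed."""
    return fsolve(lambda y: g(x, y), y_guess)[0]

def pc_step(x, y, h):
    """One predictor-corrector step of a partitioned (alternating) solution scheme."""
    # Predictor: explicit Euler on the state variable only.
    x_pred = x + h * f(x, y)
    y_pred = solve_algebraic(x_pred, y)        # algebraic update for the predicted state
    # Corrector: trapezoidal rule using the predicted values.
    x_corr = x + 0.5 * h * (f(x, y) + f(x_pred, y_pred))
    y_corr = solve_algebraic(x_corr, y_pred)   # algebraic update for the corrected state
    return x_corr, y_corr

x, y, h = 1.0, 0.5, 0.01
for _ in range(100):                            # integrate to t = 1
    x, y = pc_step(x, y, h)
print(x, np.exp(-0.5))                          # numerical vs. exact x(1) = e^{-0.5}
```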

Related content

Domination problems in general can capture situations in which some entities have an effect on other entities (and sometimes on themselves). The usual goal is to select a minimum number of entities that can influence a target group of entities or to influence a maximum number of target entities with a certain number of available influencers. In this work, we focus on the distinction between \textit{internal} and \textit{external} domination in the respective maximization problem. In particular, a dominator can dominate its entire neighborhood in a graph, internally dominating itself, while those of its neighbors which are not dominators themselves are externally dominated. We study the problem of maximizing the external domination that a given number of dominators can yield and we present a 0.5307-approximation algorithm for this problem. Moreover, our methods provide a framework for approximating a number of problems that can be cast in terms of external domination. In particular, we observe that an interesting interpretation of the maximum coverage problem can capture a new problem in elections, in which we want to maximize the number of \textit{externally represented} voters. We study this problem in two different settings, namely Non-Secrecy and Rational-Candidate, and provide approximability analysis for two alternative approaches; our analysis reveals, among other contributions, that an earlier resource allocation algorithm is, in fact, a 0.462-approximation algorithm for maximum external domination in directed graphs.
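
To make the objective concrete, here is a small sketch of external domination as a counting problem on a toy graph, with a brute-force maximizer for the given budget of dominators; it only illustrates the quantity being maximized, not the paper's 0.5307-approximation algorithm, and the graph is an invented example.

```python
import itertools

# Undirected toy graph as an adjacency dict (assumed example, not from the paper).
graph = {
    0: {1, 2},
    1: {0, 2, 3},
    2: {0, 1, 4},
    3: {1},
    4: {2},
}

def externally_dominated(graph, dominators):
    """Vertices adjacent to a dominator that are not dominators themselves."""
    dominators = set(dominators)
    covered = set()
    for d in dominators:
        covered |= graph[d]
    return covered - dominators

def best_dominator_set(graph, k):
    """Brute-force maximizer of external domination (fine for toy instances only)."""
    best, best_val = None, -1
    for cand in itertools.combinations(graph, k):
        val = len(externally_dominated(graph, cand))
        if val > best_val:
            best, best_val = cand, val
    return best, best_val

print(best_dominator_set(graph, 2))  # e.g. dominators {1, 2} externally dominate {0, 3, 4}
```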

Benkeser et al. demonstrate how adjustment for baseline covariates in randomized trials can meaningfully improve precision for a variety of outcome types. Their findings build on a long history, starting in 1932 with R.A. Fisher and including more recent endorsements by the U.S. Food and Drug Administration and the European Medicines Agency. Here, we address an important practical consideration: *how* to select the adjustment approach -- which variables and in which form -- to maximize precision, while maintaining Type-I error control. Balzer et al. previously proposed *Adaptive Prespecification* within TMLE to flexibly and automatically select, from a prespecified set, the approach that maximizes empirical efficiency in small trials (N$<$40). To avoid overfitting with few randomized units, selection was previously limited to working generalized linear models, adjusting for a single covariate. Now, we tailor Adaptive Prespecification to trials with many randomized units. Using $V$-fold cross-validation and the estimated influence curve-squared as the loss function, we select from an expanded set of candidates, including modern machine learning methods adjusting for multiple covariates. As assessed in simulations exploring a variety of data generating processes, our approach maintains Type-I error control (under the null) and offers substantial gains in precision -- equivalent to 20-43\% reductions in sample size for the same statistical power. When applied to real data from ACTG Study 175, we also see meaningful efficiency improvements overall and within subgroups.
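
As a rough illustration of the selection principle (not the authors' TMLE implementation), the sketch below runs V-fold cross-validation over a few candidate adjustment approaches and scores each by the variance of a plug-in efficient influence curve for the treatment effect; the candidate list, the simulated trial, and all variable names are assumptions made for the example.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.dummy import DummyRegressor

rng = np.random.default_rng(0)

# Simulated two-arm trial (illustrative only): covariates W, randomized arm A, outcome Y.
n, p = 500, 0.5
W = rng.normal(size=(n, 3))
A = rng.integers(0, 2, size=n)
Y = A + 2.0 * W[:, 0] + rng.normal(size=n)

# Candidate adjustment approaches: (model factory, covariate columns used).
candidates = {
    "unadjusted": (DummyRegressor, []),
    "glm, one covariate": (LinearRegression, [0]),
    "glm, all covariates": (LinearRegression, [0, 1, 2]),
    "random forest": (lambda: RandomForestRegressor(n_estimators=200, random_state=0), [0, 1, 2]),
}

def cv_ic_variance(make_model, cols, V=5):
    """Cross-validated variance of the influence curve of the adjusted effect estimator."""
    X = np.column_stack([W[:, cols], A]) if cols else A.reshape(-1, 1).astype(float)
    ic = np.empty(n)
    for train, test in KFold(V, shuffle=True, random_state=0).split(X):
        model = make_model().fit(X[train], Y[train])
        X1, X0 = X[test].copy(), X[test].copy()
        X1[:, -1], X0[:, -1] = 1, 0                      # counterfactual arm assignments
        q1, q0, qa = model.predict(X1), model.predict(X0), model.predict(X[test])
        psi = np.mean(q1 - q0)
        ic[test] = (A[test] / p - (1 - A[test]) / (1 - p)) * (Y[test] - qa) + q1 - q0 - psi
    return ic.var()

scores = {name: cv_ic_variance(make, cols) for name, (make, cols) in candidates.items()}
print(scores, "-> selected:", min(scores, key=scores.get))  # an adjusted candidate should win
```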

In this paper, we propose a new Bayesian inference method for a high-dimensional sparse factor model that allows both the factor dimensionality and the sparse structure of the loading matrix to be inferred. The novelty is to introduce a certain dependence between the sparsity level and the factor dimensionality, which leads to adaptive posterior concentration while keeping computational tractability. We show that the posterior distribution asymptotically concentrates on the true factor dimensionality, and more importantly, this posterior consistency is adaptive to the sparsity level of the true loading matrix and the noise variance. We also prove that the proposed Bayesian model attains the optimal detection rate of the factor dimensionality in a more general situation than those found in the literature. Moreover, we obtain a near-optimal posterior concentration rate of the covariance matrix. Numerical studies are conducted and show the superiority of the proposed method compared with other competitors.

Predicting the future trajectories of nearby objects plays a pivotal role in robotics and automation applications such as autonomous driving. While learning-based trajectory prediction methods have achieved remarkable performance on public benchmarks, the generalization ability of these approaches remains questionable. Poor generalizability on unseen domains, a well-recognized defect of data-driven approaches, can harm the real-world performance of trajectory prediction models. We are thus motivated to improve the generalization ability of models instead of merely pursuing high average accuracy. Due to the lack of benchmarks for quantifying the generalization ability of trajectory predictors, we first construct a new benchmark called argoverse-shift, in which the data distributions of domains differ significantly. Using this benchmark for evaluation, we find that the domain shift problem seriously hinders the generalization of trajectory predictors: state-of-the-art approaches suffer severe performance degradation when facing out-of-distribution scenes. To enhance the robustness of models against the domain shift problem, we propose a plug-and-play strategy for domain normalization in trajectory prediction. Our strategy uses the Frenet coordinate frame for modeling and can effectively narrow the domain gap between different scenes caused by the variety of road geometry and topology. Experiments show that our strategy noticeably boosts the prediction performance of the state of the art in domains previously unseen to the models, thereby improving the generalization ability of data-driven trajectory prediction methods.
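
The core of the normalization step is re-expressing positions relative to a reference path (e.g., a lane centerline) as Frenet coordinates: arc length s along the path and signed lateral offset d. Below is a minimal, self-contained projection onto a polyline; the function name and the toy centerline are ours, not the paper's code.

```python
import numpy as np

def cartesian_to_frenet(point, centerline):
    """Project a 2D point onto a polyline and return (s, d):
    s = arc length along the path, d = signed lateral offset (left of path > 0)."""
    best = (np.inf, 0.0, 0.0)          # (squared distance, s, d)
    s_along = 0.0
    for p0, p1 in zip(centerline[:-1], centerline[1:]):
        seg = p1 - p0
        seg_len = np.linalg.norm(seg)
        t = np.clip(np.dot(point - p0, seg) / seg_len**2, 0.0, 1.0)
        proj = p0 + t * seg
        d2 = float(np.sum((point - proj) ** 2))
        if d2 < best[0]:
            rel = point - p0
            side = np.sign(seg[0] * rel[1] - seg[1] * rel[0])  # 2D cross product
            best = (d2, s_along + t * seg_len, side * np.sqrt(d2))
        s_along += seg_len
    return best[1], best[2]

# Toy reference path: a straight centerline along the x-axis.
centerline = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
print(cartesian_to_frenet(np.array([12.0, 1.5]), centerline))   # -> (12.0, 1.5)
```

Expressing every agent's history and future in (s, d) relative to its own reference path removes much of the variability due to road geometry, which is the intuition behind the normalization strategy described above.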

We investigate gradient descent training of wide neural networks and the corresponding implicit bias in function space. For univariate regression, we show that the solution of training a width-$n$ shallow ReLU network is within $n^{-1/2}$ of the function which fits the training data and whose difference from the initial function has the smallest 2-norm of the second derivative weighted by a curvature penalty that depends on the probability distribution that is used to initialize the network parameters. We compute the curvature penalty function explicitly for various common initialization procedures. For instance, asymmetric initialization with a uniform distribution yields a constant curvature penalty, and hence the solution function is the natural cubic spline interpolation of the training data. For stochastic gradient descent we obtain the same implicit bias result. We obtain a similar result for different activation functions. For multivariate regression we show an analogous result, whereby the second derivative is replaced by the Radon transform of a fractional Laplacian. For initialization schemes that yield a constant penalty function, the solutions are polyharmonic splines. Moreover, we show that the training trajectories are captured by trajectories of smoothing splines with decreasing regularization strength.
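
In our own notation (not the paper's symbols), the univariate statement above can be summarized as the variational problem below, where $f_0$ denotes the network function at initialization, $(x_i, y_i)$ the training data, and $\omega$ the curvature penalty induced by the parameter initialization; when $\omega$ is constant, the minimizer is the natural cubic spline interpolant of the data.

$$
\min_{f}\ \int \omega(x)\,\big(f''(x)-f_0''(x)\big)^{2}\,\mathrm{d}x
\qquad \text{subject to}\qquad f(x_i)=y_i,\quad i=1,\dots,m .
$$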

In this paper, we present and analyze a linear fully discrete second order scheme with variable time steps for the phase field crystal equation. More precisely, we construct a linear adaptive time stepping scheme based on the second order backward differentiation formula (BDF2) and use the Fourier spectral method for the spatial discretization. The scalar auxiliary variable approach is employed to deal with the nonlinear term, in which we only adopt a first order method to approximate the auxiliary variable. This treatment is extremely important in the derivation of the unconditional energy stability of the proposed adaptive BDF2 scheme. However, we find for the first time that this strategy does not affect the second order accuracy of the unknown phase function $\phi^{n}$, provided that the positive constant $C_{0}$ is set large enough such that $C_{0}\geq 1/\Delta t$. The energy stability of the adaptive BDF2 scheme is established with a mild constraint on the adjacent time step ratio $\gamma_{n+1}:=\Delta t_{n+1}/\Delta t_{n}\leq 4.8645$. Furthermore, a rigorous error estimate of the second order accuracy of $\phi^{n}$ is derived for the proposed scheme on the nonuniform mesh by using the uniform $H^{2}$ bound of the numerical solutions. Finally, some numerical experiments are carried out to validate the theoretical results and demonstrate the efficiency of the fully discrete adaptive BDF2 scheme.
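
For reference, the variable-step BDF2 operator underlying the scheme, written here for a generic evolution equation $\phi_t = F(\phi)$ and omitting the SAV treatment of the nonlinear term, is

$$
\frac{1+2\gamma_{n+1}}{1+\gamma_{n+1}}\,\phi^{n+1}
-\left(1+\gamma_{n+1}\right)\phi^{n}
+\frac{\gamma_{n+1}^{2}}{1+\gamma_{n+1}}\,\phi^{n-1}
=\Delta t_{n+1}\,F\!\left(\phi^{n+1}\right),
\qquad
\gamma_{n+1}:=\frac{\Delta t_{n+1}}{\Delta t_{n}},
$$

so the ratio constraint $\gamma_{n+1}\leq 4.8645$ quoted above bounds how quickly the adaptive step size may grow from one step to the next.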

Adaptive Random Testing (ART) enhances the testing effectiveness (including fault-detection capability) of Random Testing (RT) by increasing the diversity of the random test cases throughout the input domain. Many ART algorithms have been investigated according to different criteria, such as Fixed-Size-Candidate-Set ART (FSCS) and Restricted Random Testing (RRT), and have been widely used in many practical applications. Despite its popularity, ART suffers from the problem of high computational costs during test case generation, especially as the number of test cases increases. Although a number of strategies have been proposed to enhance the ART testing efficiency, such as the forgetting strategy and the k-dimensional tree strategy, these algorithms still face some challenges, including: (1) Although these algorithms can reduce the computation time, their execution costs are still very high, especially when the number of test cases is large; and (2) To achieve low computational costs, they may sacrifice some fault-detection capability. In this paper, we propose an approach based on Approximate Nearest Neighbors (ANNs), called Locality Sensitive Hashing ART (LSH-ART). When calculating distances among different test inputs, LSH-ART identifies the approximate (not necessarily exact) nearest neighbors for candidates in an efficient way. LSH-ART attempts to balance ART testing effectiveness and efficiency.
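
A minimal sketch of the underlying idea, assuming a 2-D numeric input domain and a p-stable (Euclidean) LSH index of our own construction: each executed test case is hashed into buckets, a candidate scans only its own bucket to estimate the distance to its nearest executed test case, and the usual FSCS rule then keeps the farthest candidate. This is not the LSH-ART implementation from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

class LSHIndex:
    """p-stable (Euclidean) LSH: h(x) = floor((a.x + b) / w); points sharing a
    bucket key are treated as near, and only that bucket is scanned for neighbors."""
    def __init__(self, dim=2, n_hashes=3, w=0.5):
        self.a = rng.normal(size=(n_hashes, dim))
        self.b = rng.uniform(0.0, w, size=n_hashes)
        self.w = w
        self.buckets = {}

    def _key(self, x):
        return tuple(np.floor((self.a @ x + self.b) / self.w).astype(int))

    def add(self, x):
        self.buckets.setdefault(self._key(x), []).append(x)

    def approx_nearest_dist(self, x):
        bucket = self.buckets.get(self._key(x))
        if not bucket:  # empty bucket: fall back to scanning all executed tests
            bucket = [p for pts in self.buckets.values() for p in pts]
        return min(np.linalg.norm(x - p) for p in bucket)

def lsh_art(n_tests=50, n_candidates=10):
    index = LSHIndex()
    tests = [rng.uniform(size=2)]              # the first test case is purely random
    index.add(tests[0])
    while len(tests) < n_tests:
        candidates = rng.uniform(size=(n_candidates, 2))
        # FSCS rule with approximate distances: keep the candidate farthest
        # from its (approximate) nearest executed test case.
        best = max(candidates, key=index.approx_nearest_dist)
        tests.append(best)
        index.add(best)
    return np.array(tests)

print(lsh_art().shape)  # (50, 2): diverse test inputs spread over the unit square
```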

In this paper, we consider a class of nonconvex-nonconcave minimax problems, i.e., NC-PL minimax problems, whose objective functions satisfy the Polyak-\L ojasiewicz (PL) condition with respect to the inner variable. We propose a zeroth-order alternating gradient descent ascent (ZO-AGDA) algorithm and a zeroth-order variance reduced alternating gradient descent ascent (ZO-VRAGDA) algorithm for solving NC-PL minimax problems under the deterministic and the stochastic setting, respectively. The total number of function value queries required by ZO-AGDA and ZO-VRAGDA to obtain an $\varepsilon$-stationary point of an NC-PL minimax problem is upper bounded by $\mathcal{O}(\varepsilon^{-2})$ and $\mathcal{O}(\varepsilon^{-3})$, respectively. To the best of our knowledge, they are the first two zeroth-order algorithms with iteration complexity guarantees for solving NC-PL minimax problems.
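
The two ingredients, written generically here (a toy objective that is strongly concave in $y$, step sizes and sample counts of our own choosing, not the paper's exact ZO-AGDA parameters), are a two-point zeroth-order gradient estimator and alternating descent/ascent updates:

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_grad(func, z, mu=1e-3, n_samples=20):
    """Two-point Gaussian-smoothing gradient estimator: only function values are queried."""
    g = np.zeros_like(z)
    for _ in range(n_samples):
        u = rng.normal(size=z.shape)
        g += (func(z + mu * u) - func(z - mu * u)) / (2 * mu) * u
    return g / n_samples

# Toy minimax problem min_x max_y f(x, y); the inner maximization is strongly
# concave, which in particular satisfies a PL condition in y.
def f(x, y):
    return 0.5 * np.sum(x**2) + np.dot(x, y) - 0.5 * np.sum(y**2)

def zo_agda(x, y, lr_x=0.05, lr_y=0.1, iters=500):
    for _ in range(iters):
        # Alternate: descent step on x, then ascent step on y, both zeroth order.
        x = x - lr_x * zo_grad(lambda x_: f(x_, y), x)
        y = y + lr_y * zo_grad(lambda y_: f(x, y_), y)
    return x, y

x, y = zo_agda(rng.normal(size=3), rng.normal(size=3))
print(x, y)  # both should approach the saddle point at the origin
```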

The branch-and-bound algorithm based on decision diagrams introduced by Bergman et al. in 2016 is a framework for solving discrete optimization problems with a dynamic programming formulation. It works by compiling a series of bounded-width decision diagrams that can provide lower and upper bounds for any given subproblem. Eventually, every part of the search space will be either explored or pruned by the algorithm, thus proving optimality. This paper presents new ingredients to speed up the search by exploiting the structure of dynamic programming models. The key idea is to prevent the repeated exploration of nodes corresponding to the same dynamic programming states by storing and querying thresholds in a data structure called the Barrier. These thresholds are based on dominance relations between partial solutions previously found. They can be further strengthened by integrating the filtering techniques introduced by Gillard et al. in 2021. Computational experiments show that the pruning brought by the Barrier significantly reduces the number of nodes expanded by the algorithm. This results in more benchmark instances of difficult optimization problems being solved in less time while using narrower decision diagrams.
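
The flavor of the threshold idea can be sketched on a toy 0/1 knapsack search where the dynamic programming state is (items decided, remaining capacity): a node is expanded only if its accumulated value beats the threshold already stored for its state. This is only an analogy in our own code, not the Barrier data structure or the decision-diagram branch-and-bound from the paper.

```python
from heapq import heappush, heappop

# Toy 0/1 knapsack instance (maximize value under the capacity constraint).
values = [10, 7, 5, 9]
weights = [3, 2, 1, 4]
capacity = 6

barrier = {}  # state -> best accumulated value seen so far for that state

def solve():
    best = 0
    # Frontier of (negated value, depth, remaining capacity), explored best-first.
    frontier = [(0, 0, capacity)]
    expanded = 0
    while frontier:
        neg_val, depth, cap = heappop(frontier)
        val = -neg_val
        state = (depth, cap)
        # Threshold pruning: skip nodes that do not improve on the stored value.
        if barrier.get(state, -1) >= val:
            continue
        barrier[state] = val
        expanded += 1
        if depth == len(values):
            best = max(best, val)
            continue
        # Branch: skip item `depth`, or take it if it fits.
        heappush(frontier, (-val, depth + 1, cap))
        if weights[depth] <= cap:
            heappush(frontier, (-(val + values[depth]), depth + 1, cap - weights[depth]))
    return best, expanded

print(solve())  # optimal value 22; repeated states are pruned by the thresholds
```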

To speed up online testing, adaptive traffic experimentation through multi-armed bandit algorithms is rising as an essential complement to fixed-horizon A/B testing. Based on recent research on best arm identification and statistical inference with adaptively collected data, this paper derives and evaluates four Bayesian batch bandit algorithms (NB-TS, WB-TS, NB-TTTS, WB-TTTS), which combine two ways of weighting batches (Naive Batch and Weighted Batch) with two Bayesian sampling strategies (Thompson Sampling and Top-Two Thompson Sampling) to adaptively determine traffic allocation. These algorithms operate on summary batch statistics of a reward metric for pilot experiments; one of the combinations, WB-TTTS, appears not to have been discussed before. The comprehensive evaluation of the four Bayesian sampling algorithms covers the trustworthiness, sensitivity, and regret of a testing methodology. Moreover, the evaluation includes 4 real-world eBay experiments and 40 reproducible synthetic experiments, covering both stationary and non-stationary situations. Our evaluation reveals that: (a) false positive inflation occurs with equivalent best arms, although this is seldom discussed in the literature; (b) to control false positives, we identify a connection between the convergence of posterior optimal probabilities and neutral posterior reshaping; (c) WB-TTTS shows competitive recall, higher precision, and robustness against non-stationary trends; (d) NB-TS outperforms at minimizing regret during trials, except on precision and robustness; (e) WB-TTTS is a promising alternative if the regret of A/B testing is affordable, otherwise NB-TS remains a powerful choice when regret matters for pilot experiments.
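
The common backbone of these variants is batch Thompson sampling from Beta-Bernoulli posteriors updated with summary batch statistics (successes and trials per arm). The sketch below shows that backbone with a naive-batch update and invented conversion rates; the batch weighting schemes and Top-Two sampling discussed in the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

true_rates = [0.040, 0.042, 0.050]   # unknown conversion rates of the arms (toy values)
alpha = np.ones(3)                   # Beta posterior parameters per arm
beta = np.ones(3)
batch_size = 10_000

def allocate(n_draws=10_000):
    """Thompson-sampling allocation: probability each arm's sampled mean is the best."""
    samples = rng.beta(alpha, beta, size=(n_draws, 3))
    wins = np.bincount(samples.argmax(axis=1), minlength=3)
    return wins / n_draws

for _ in range(20):
    share = allocate()
    traffic = rng.multinomial(batch_size, share)
    # Summary batch statistics: successes per arm from the allocated traffic.
    successes = rng.binomial(traffic, true_rates)
    alpha += successes                     # "naive batch" posterior update
    beta += traffic - successes

print(allocate())  # posterior probability of being best; mass should concentrate on arm 2
```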
