Let $\mathbf{H}$ be the Cartesian product of a family of left modules over a ring $S$, indexed by a finite set $\Omega$. We are concerned with the $(\mathbf{P},\omega)$-weight on $\mathbf{H}$, where $\mathbf{P}=(\Omega,\preccurlyeq_{\mathbf{P}})$ is a poset and $\omega:\Omega\longrightarrow\mathbb{R}^{+}$ is a weight function. We characterize the group of $(\mathbf{P},\omega)$-weight isometries of $\mathbf{H}$, and give a canonical decomposition for semi-simple subcodes of $\mathbf{H}$ when $\mathbf{P}$ is hierarchical. We then study the MacWilliams extension property (MEP) for the $(\mathbf{P},\omega)$-weight. We show that the MEP implies the unique decomposition property (UDP) of $(\mathbf{P},\omega)$, which further implies that $\mathbf{P}$ is hierarchical if $\omega$ is identically $1$. When either $\mathbf{P}$ is hierarchical or $\omega$ is identically $1$, we show that the MEP for the $(\mathbf{P},\omega)$-weight can be characterized in terms of the MEP for the Hamming weight, and give necessary and sufficient conditions for $\mathbf{H}$ to satisfy the MEP for the $(\mathbf{P},\omega)$-weight when $S$ is an Artinian simple ring (either finite or infinite). When $S$ is a finite field, in the context of the $(\mathbf{P},\omega)$-weight, we compare the MEP with other coding theoretic properties including the MacWilliams identity, Fourier-reflexivity of partitions and the UDP, and show that the MEP is strictly stronger than each of the others.
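
In the literature on poset metrics, the $(\mathbf{P},\omega)$-weight is typically defined as follows (a sketch of the standard convention, which the notation above suggests): for $x=(x_{i})_{i\in\Omega}\in\mathbf{H}$ with support $\mathrm{supp}(x)=\{i\in\Omega\mid x_{i}\neq 0\}$,
\[
w_{(\mathbf{P},\omega)}(x)=\sum_{i\in\langle\mathrm{supp}(x)\rangle_{\mathbf{P}}}\omega(i),
\]
where $\langle A\rangle_{\mathbf{P}}$ denotes the order ideal of $\mathbf{P}$ generated by $A\subseteq\Omega$. Taking $\omega\equiv 1$ and $\preccurlyeq_{\mathbf{P}}$ to be the antichain order recovers the Hamming weight.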

Related content

iOS 8 introduces features for interaction between apps, and between apps and the system.
  • Today (iOS and OS X): widgets for the Today view of Notification Center
  • Share (iOS and OS X): post content to web services or share content with others
  • Actions (iOS and OS X): app extensions to view or manipulate inside another app
  • Photo Editing (iOS): edit a photo or video in Apple's Photos app with extensions from third-party apps
  • Finder Sync (OS X): remote file storage in the Finder with support for Finder content annotation
  • Storage Provider (iOS): an interface between files inside an app and other apps on a user's device
  • Custom Keyboard (iOS): system-wide alternative keyboards

Estimating sample size and statistical power is an essential part of a good study design. This R package allows users to conduct power analysis based on Monte Carlo simulations in settings in which consideration of the correlations between predictors is important. It runs power analyses given a data generative model and an inference model. It can set up a data generative model that preserves dependence structures among variables given existing data (continuous, binary, or ordinal) or high-level descriptions of the associations. Users can generate power curves to assess the trade-offs between sample size, effect size, and power of a design. This paper presents tutorials and examples focusing on applications for environmental mixture studies when predictors tend to be moderately to highly correlated. It easily interfaces with several existing and newly developed analysis strategies for assessing associations between exposures and health outcomes. However, the package is sufficiently general to facilitate power simulations in a wide variety of settings.
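
The package itself is written in R; as a language-agnostic illustration of the simulation-based logic it implements, here is a minimal Python sketch. The simple linear data-generating model, function names, and parameter choices below are ours for illustration, not the package's API:

```python
import numpy as np
from scipy import stats

def simulate_power(n, beta1=0.3, rho=0.6, n_sims=1000, alpha=0.05, seed=0):
    """Monte Carlo power for testing beta1 = 0 in y = b1*x1 + b2*x2 + eps,
    with correlated predictors Corr(x1, x2) = rho."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    rejections = 0
    for _ in range(n_sims):
        X = rng.multivariate_normal(np.zeros(2), cov, size=n)
        y = beta1 * X[:, 0] + 0.3 * X[:, 1] + rng.standard_normal(n)
        Xd = np.column_stack([np.ones(n), X])            # design with intercept
        beta_hat, _, _, _ = np.linalg.lstsq(Xd, y, rcond=None)
        resid = y - Xd @ beta_hat
        sigma2 = resid @ resid / (n - Xd.shape[1])
        se = np.sqrt(sigma2 * np.linalg.inv(Xd.T @ Xd)[1, 1])
        t = beta_hat[1] / se
        p = 2 * stats.t.sf(abs(t), df=n - Xd.shape[1])
        rejections += p < alpha
    return rejections / n_sims

for n in (50, 100, 200, 400):                            # crude power curve
    print(n, simulate_power(n))
```

Varying `rho` in this sketch shows how predictor correlation erodes power for a fixed sample size, which is the trade-off the package is designed to quantify.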

A support vector machine (SVM) is an algorithm that finds a hyperplane which optimally separates labeled data points in $\mathbb{R}^n$ into positive and negative classes. The data points on the margin of this separating hyperplane are called support vectors. We connect the possible configurations of support vectors to Radon's theorem, which provides guarantees for when a set of points can be divided into two classes (positive and negative) whose convex hulls intersect. If the convex hulls of the positive and negative support vectors are projected onto a separating hyperplane, then the projections intersect if and only if the hyperplane is optimal. Further, with a particular type of general position, we show that (a) the projected convex hulls of the support vectors intersect in exactly one point, (b) the support vectors are stable under perturbation, (c) there are at most $n+1$ support vectors, and (d) every number of support vectors from 2 up to $n+1$ is possible. Finally, we perform computer simulations studying the expected number of support vectors, and their configurations, for randomly generated data. We observe that as the distance between classes of points increases for this type of randomly generated data, configurations with fewer support vectors become more likely.
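
The last observation is easy to probe empirically. The sketch below (our toy setup, not the paper's experiment) uses scikit-learn's `SVC`, whose `support_vectors_` attribute exposes the support vectors; a large `C` approximates the hard-margin SVM:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, dim = 200, 2
y = np.array([-1] * n + [1] * n)

# Two Gaussian clouds separated along the first coordinate; widening the
# gap tends to reduce the number of support vectors.
for gap in (1.0, 3.0, 6.0):
    X = np.vstack([rng.standard_normal((n, dim)) - [gap / 2, 0],
                   rng.standard_normal((n, dim)) + [gap / 2, 0]])
    clf = SVC(kernel="linear", C=1e6).fit(X, y)          # near hard-margin
    print(f"gap={gap}: {clf.support_vectors_.shape[0]} support vectors")
```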

Ti-6Al-4V is a titanium alloy with excellent properties for lightweight applications, and its production through Additive Manufacturing processes is attractive for different industrial sectors. In this work, the influence of mechanical properties on the notch fracture resistance of Ti-6Al-4V produced by Selective Laser Melting (SLM) is numerically investigated. Literature data are used to inform the material behaviour. The as-built brittle behaviour is compared to the enhanced ductile response after heat treatment (HT) and hot isostatic pressing (HIP) post-processes. A Phase Field framework is adopted to capture damage nucleation and propagation from two different notch geometries, and the influence of the fracture energy and the characteristic length is discussed. In addition, the influence of oxygen uptake is analysed by reproducing non-inert atmospheres during HT and HIP, showing that oxygen shifts the fracture mode toward brittle failure due to the formation of an alpha case layer, especially for the V-notch geometry. Results show that a purely elastic behaviour can be assumed for the as-built SLM condition, whereas elastic-plastic phenomena must be modelled for specimens subjected to heat treatment or hot isostatic pressing. The present brittle Phase Field framework coupled with an elastic-plastic constitutive analysis is demonstrated to be a robust prediction tool for notch fracture after different post-processing routes.
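
For orientation, a standard (AT2-type) phase field regularization of the fracture energy, commonly used in such frameworks, reads
\[
E(\mathbf{u},\phi)=\int_{\Omega}(1-\phi)^{2}\,\psi\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)\,\mathrm{d}V+G_{c}\int_{\Omega}\left(\frac{\phi^{2}}{2\ell}+\frac{\ell}{2}\,|\nabla\phi|^{2}\right)\mathrm{d}V,
\]
where $\psi$ is the strain energy density, $\phi\in[0,1]$ the damage variable, $G_{c}$ the fracture energy, and $\ell$ the characteristic length mentioned above; the exact formulation used in the paper may differ in details such as the energy split.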

In this paper, we investigate the matrix estimation problem in the multi-response regression model with measurement errors. A nonconvex error-corrected estimator based on a combination of the amended loss function and the nuclear norm regularizer is proposed to estimate the matrix parameter. Then, under the (near) low-rank assumption, we analyse the statistical and computational properties of the global solutions of the nonconvex regularized estimator from a general point of view. On the statistical side, we establish a nonasymptotic recovery bound for any global solution of the nonconvex estimator, under a restricted strong convexity condition on the loss function. On the computational side, we solve the nonconvex optimization problem via the proximal gradient method, and prove that the algorithm converges to a near-global solution at a linear rate. In addition, we verify sufficient conditions for the general results to hold, in order to obtain probabilistic consequences for specific types of measurement errors, including additive noise and missing data. Finally, the theoretical consequences are demonstrated by several numerical experiments on corrupted errors-in-variables multi-response regression models. Simulation results reveal excellent consistency with our theory under high-dimensional scaling.
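
As a sketch of the computational ingredient, the following Python snippet implements a proximal gradient iteration with the nuclear norm proximal map (singular value soft-thresholding). For simplicity it uses the clean-covariate least-squares loss rather than the paper's error-corrected surrogate, and all names are illustrative:

```python
import numpy as np

def svt(A, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_grad(X, Y, lam, iters=500):
    """Minimize (1/2n)||Y - X Theta||_F^2 + lam * ||Theta||_* via
    proximal gradient descent with a fixed 1/L step size."""
    n, p = X.shape
    step = n / np.linalg.norm(X.T @ X, 2)     # 1/L for the quadratic part
    Theta = np.zeros((p, Y.shape[1]))
    for _ in range(iters):
        grad = X.T @ (X @ Theta - Y) / n
        Theta = svt(Theta - step * grad, step * lam)
    return Theta

rng = np.random.default_rng(0)
n, p, q, r = 200, 30, 10, 2
Theta_star = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))
X = rng.standard_normal((n, p))
Y = X @ Theta_star + 0.5 * rng.standard_normal((n, q))
Theta_hat = prox_grad(X, Y, lam=0.5)
print("relative error:",
      np.linalg.norm(Theta_hat - Theta_star) / np.linalg.norm(Theta_star))
```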

Traditional nonparametric estimation methods often lead to a slow convergence rate in large dimensions and require unrealistically enormous sizes of datasets for reliable conclusions. We develop an approach based on mixed gradients, either observed or estimated, to effectively estimate the function at near-parametric convergence rates. The novel approach and computational algorithm could lead to methods useful to practitioners in many areas of science and engineering. Our theoretical results reveal a behavior universal to this class of nonparametric estimation problems. We explore a general setting involving tensor product spaces and build upon the smoothing spline analysis of variance (SS-ANOVA) framework. For $d$-dimensional models under full interaction, the optimal rates with gradient information on $p$ covariates are identical to those for the $(d-p)$-interaction models without gradients and, therefore, the models are immune to the "curse of interaction". For additive models, the optimal rates using gradient information are root-$n$, thus achieving the "parametric rate". We demonstrate aspects of the theoretical results through synthetic and real data applications.

We establish the notion of limit consistency as a modular part in proving the consistency of lattice Boltzmann equations (LBE) with respect to a given partial differential equation (PDE) system. The incompressible Navier-Stokes equations (NSE) serve as the paragon. Based upon the diffusion limit [L. Saint-Raymond (2003), doi: 10.1016/S0012-9593(03)00010-7] of the Bhatnagar-Gross-Krook (BGK) Boltzmann equation towards the NSE, we provide a successive discretization by nesting conventional Taylor expansions and finite differences. Elaborating on the work in [M. J. Krause (2010), doi: 10.5445/IR/1000019768], we track the discretization state of the domain for the particle distribution functions and measure truncation errors at all levels within the derivation procedure. Via parametrizing equations and proving the limit consistency of the respective sequences, we retain the path towards the targeted PDE at each step of discretization, i.e. for the discrete-velocity BGK Boltzmann equation and the space-time discretized LBE. As a direct result, we unfold the discretization technique of lattice Boltzmann methods as a chaining of finite differences and provide a generic top-down derivation of the numerical scheme which upholds the continuous limit.
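
For readers who have not seen a lattice Boltzmann scheme, a minimal textbook D2Q9 BGK stream-and-collide step looks as follows; this is a generic sketch to fix ideas, not the paper's derivation or its consistency analysis:

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities of the standard BGK scheme.
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def equilibrium(rho, u):
    """Second-order Maxwellian equilibrium for lattice speed of sound 1/sqrt(3)."""
    eu = np.einsum('qd,xyd->xyq', E, u)                  # e_q . u
    uu = np.einsum('xyd,xyd->xy', u, u)                  # |u|^2
    return W * rho[..., None] * (1 + 3*eu + 4.5*eu**2 - 1.5*uu[..., None])

def step(f, tau):
    """One collide-and-stream update on a periodic grid."""
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qd->xyd', f, E) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau              # BGK collision
    for q, (ex, ey) in enumerate(E):                     # periodic streaming
        f[..., q] = np.roll(np.roll(f[..., q], ex, axis=0), ey, axis=1)
    return f

nx = ny = 64
rho0 = np.ones((nx, ny))
u0 = np.zeros((nx, ny, 2))
u0[..., 0] = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)   # decaying shear wave
f = equilibrium(rho0, u0)
for _ in range(100):
    f = step(f, tau=0.8)
```

The shear wave decays at a rate set by the kinematic viscosity $\nu=(\tau-1/2)/3$ in lattice units, which is the kind of continuum limit the consistency analysis above makes rigorous.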

Variational Bayesian posterior inference often requires simplifying approximations such as mean-field parametrisation to ensure tractability. However, prior work has associated the variational mean-field approximation for Bayesian neural networks with underfitting in the case of small datasets or large model sizes. In this work, we show that invariances in the likelihood function of over-parametrised models contribute to this phenomenon because these invariances complicate the structure of the posterior by introducing discrete and/or continuous modes which cannot be well approximated by Gaussian mean-field distributions. In particular, we show that the mean-field approximation has an additional gap in the evidence lower bound compared to a purpose-built posterior that takes into account the known invariances. Importantly, this invariance gap is not constant; it vanishes as the approximation reverts to the prior. We proceed by first considering translation invariances in a linear model with a single data point in detail. We show that, while the true posterior can be constructed from a mean-field parametrisation, this is achieved only if the objective function takes into account the invariance gap. Then, we transfer our analysis of the linear model to neural networks. Our analysis provides a framework for future work to explore solutions to the invariance problem.
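
The linear-model phenomenon can be made concrete numerically. In the sketch below (our toy instance, not the paper's construction), the likelihood of $y=w_1+w_2+\varepsilon$ is invariant under $w\mapsto w+t(1,-1)$, so the exact posterior is a strongly correlated Gaussian "ridge"; the KL divergence from the standard mean-field Gaussian solution quantifies the resulting ELBO gap:

```python
import numpy as np

# Exact posterior for w in R^2 under prior N(0, I) and one observation
# y = a.w + eps, eps ~ N(0, sigma^2), with a = (1, 1): the likelihood
# depends only on w1 + w2, a translation invariance along (1, -1).
sigma2 = 0.1
a = np.array([1.0, 1.0])
y = 1.0
Lam = np.eye(2) + np.outer(a, a) / sigma2        # posterior precision
Sigma = np.linalg.inv(Lam)
mu = Sigma @ a * y / sigma2

# Standard mean-field Gaussian solution for a Gaussian target: it matches
# the mean, and its variances are the reciprocal diagonal of the precision.
Sigma_q = np.diag(1.0 / np.diag(Lam))

# KL(q || p) between the two Gaussians (means coincide) = ELBO gap.
kl = 0.5 * (np.trace(Lam @ Sigma_q) - 2
            + np.log(np.linalg.det(Sigma) / np.linalg.det(Sigma_q)))
print("posterior correlation:",
      Sigma[0, 1] / np.sqrt(Sigma[0, 0] * Sigma[1, 1]))
print("ELBO gap (KL):", kl)
```

With these numbers the posterior correlation is about $-0.91$, so the factorized family is far from the true posterior even in this two-parameter model.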

This paper considers a natural fault-tolerant shortest paths problem: for some constant integer $f$, given a directed weighted graph with no negative cycles and two fixed vertices $s$ and $t$, compute (either explicitly or implicitly) for every tuple of $f$ edges, the distance from $s$ to $t$ if these edges fail. We call this problem $f$-Fault Replacement Paths ($f$FRP). We first present an $\tilde{O}(n^3)$ time algorithm for $2$FRP in $n$-vertex directed graphs with arbitrary edge weights and no negative cycles. As $2$FRP is a generalization of the well-studied Replacement Paths problem (RP) that asks for the distances between $s$ and $t$ for any single edge failure, $2$FRP is at least as hard as RP. Since RP in graphs with arbitrary weights is equivalent in a fine-grained sense to All-Pairs Shortest Paths (APSP) [Vassilevska Williams and Williams FOCS'10, J.~ACM'18], $2$FRP is at least as hard as APSP, and thus a substantially subcubic time algorithm in the number of vertices for $2$FRP would be a breakthrough. Therefore, our algorithm in $\tilde{O}(n^3)$ time is conditionally nearly optimal. Our algorithm implies an $\tilde{O}(n^{f+1})$ time algorithm for the $f$FRP problem, giving the first improvement over the straightforward $O(n^{f+2})$ time algorithm. Then we focus on the restriction of $2$FRP to graphs with small integer weights bounded by $M$ in absolute values. Using fast rectangular matrix multiplication, we obtain a randomized algorithm that runs in $\tilde{O}(M^{2/3}n^{2.9153})$ time. This implies an improvement over our $\tilde{O}(n^{f+1})$ time arbitrary weight algorithm for all $f>1$. We also present a data structure variant of the algorithm that can trade off pre-processing and query time. In addition to the algebraic algorithms, we also give an $n^{8/3-o(1)}$ conditional lower bound for combinatorial $2$FRP algorithms in directed unweighted graphs.
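
For testing purposes, a naive reference implementation of $2$FRP is straightforward: enumerate all pairs of edges and recompute the $s$-$t$ distance for each. The sketch below uses networkx and assumes nonnegative weights so that Dijkstra applies (with negative edges one would substitute Bellman-Ford); it enumerates all $\Theta(m^2)$ pairs and is thus even slower than the $O(n^{f+2})$ baseline mentioned above:

```python
import itertools
import networkx as nx

def naive_2frp(G, s, t):
    """Brute-force 2FRP: for every pair of edges, recompute the s-t
    distance with those two edges removed."""
    dist = {}
    for e1, e2 in itertools.combinations(G.edges, 2):
        H = G.copy()
        H.remove_edges_from([e1, e2])
        try:
            d = nx.shortest_path_length(H, s, t, weight="weight")
        except nx.NetworkXNoPath:
            d = float("inf")
        dist[(e1, e2)] = d
    return dist

G = nx.DiGraph()
G.add_weighted_edges_from([("s", "a", 1), ("a", "t", 1),
                           ("s", "b", 2), ("b", "t", 2),
                           ("s", "t", 5)])
for pair, d in naive_2frp(G, "s", "t").items():
    print(pair, d)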

I survey, for a general scientific audience, three decades of research into which sorts of problems admit exponential speedups via quantum computers -- from the classics (like the algorithms of Simon and Shor), to the breakthrough of Yamakawa and Zhandry from April 2022. I discuss both the quantum circuit model, which is what we ultimately care about in practice but where our knowledge is radically incomplete, and the so-called oracle or black-box or query complexity model, where we've managed to achieve a much more thorough understanding that then informs our conjectures about the circuit model. I discuss the strengths and weaknesses of switching attention to sampling tasks, as was done in the recent quantum supremacy experiments. I make some skeptical remarks about widely-repeated claims of exponential quantum speedups for practical machine learning and optimization problems. Through many examples, I try to convey the "law of conservation of weirdness," according to which every problem admitting an exponential quantum speedup must have some unusual property to allow the amplitude to be concentrated on the unknown right answer(s).

We extend the theory of graphical designs, which are quadrature rules for graphs, to positively weighted graphs. Through Gale duality for polytopes, we show that there is a bijection between graphical designs and the faces of eigenpolytopes associated to the graph. This bijection proves the existence of graphical designs with positive quadrature weights, and upper bounds the size of a graphical design. We further show that any combinatorial polytope appears as the eigenpolytope of a positively weighted graph. Through this universality, we establish two complexity results for graphical designs: it is strongly NP-complete to determine if there is a graphical design smaller than the mentioned upper bound, and it is #P-complete to count the number of minimal graphical designs.
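
To make the objects concrete, one can search for a positively weighted quadrature rule by linear programming, since a basic feasible solution of the LP has few positive weights, mirroring the polytope-based size bound. The sketch below (our illustration; for simplicity the constraints are imposed eigenvector-by-eigenvector rather than eigenspace-by-eigenspace) finds such a rule on a cycle graph:

```python
import numpy as np
from scipy.optimize import linprog

# Find w >= 0 with sum(w) = 1 that averages the first k nontrivial Laplacian
# eigenvectors of the cycle C_n exactly: sum_v w_v phi(v) = 0 for each,
# since nonconstant eigenvectors are orthogonal to the constant vector.
n, k = 12, 6
A = np.zeros((n, n))
for v in range(n):                        # adjacency matrix of the cycle
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)      # columns ordered by eigenvalue

A_eq = np.vstack([np.ones(n), eigvecs[:, 1:k + 1].T])   # normalisation + moments
b_eq = np.concatenate([[1.0], np.zeros(k)])
c = np.random.default_rng(0).random(n)    # generic objective -> basic solution
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
support = np.flatnonzero(res.x > 1e-9)
print("design vertices:", support)
print("weights:", np.round(res.x[support], 4))
# A basic feasible solution has at most k+1 positive weights, which is the
# LP analogue of the upper bound on the size of a graphical design.
```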
