
In distributed storage systems, locally repairable codes (LRCs) are designed to reduce disk I/O and repair costs by enabling recovery of each code symbol from a small number of other symbols. To handle multiple node failures, $(r,\delta)$-LRCs are introduced to enable local recovery in the event of up to $\delta-1$ failed nodes. Constructing optimal $(r,\delta)$-LRCs has been a significant research topic over the past decade. In \cite{Luo2022}, Luo \emph{et al.} proposed a construction of linear codes using unions of projective subspaces within a projective space. Several new classes of Griesmer codes and distance-optimal codes were constructed, and some of them were proved to be alphabet-optimal $2$-LRCs. In this paper, we first modify the construction of \cite{Luo2022} by considering the more general situation of intersecting projective subspaces. This modification enables us to construct good codes with more flexible parameters. Additionally, we present conditions for the constructed linear codes to be Griesmer codes or distance-optimal. Next, we explore the locality of linear codes constructed by deleting elements from a complete projective space. The novelty of our work lies in establishing $(2,p-2)$-, $(2,p-1)$-, or $(2,p)$-locality, in contrast to the previous literature, which considered only $2$-locality. Moreover, by combining an analysis of the code parameters with the Cadambe--Mazumdar-like (C-M-like) bound for $(r,\delta)$-LRCs, we construct alphabet-optimal $(2,\delta)$-LRCs which may or may not be Griesmer codes. Finally, we investigate the availability and alphabet-optimality of $(r,\delta)$-LRCs constructed from our modified framework.
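Two standard benchmarks referenced here are worth recording (the alphabet-dependent C-M-like bound used for alphabet-optimality is more involved and is stated in the paper): the Griesmer bound, which an $[n,k,d]_q$ linear code meets with equality precisely when it is a Griesmer code, and the Singleton-like bound for $(r,\delta)$-LRCs:
\[
n \;\geq\; \sum_{i=0}^{k-1} \left\lceil \frac{d}{q^{i}} \right\rceil,
\qquad\qquad
d \;\leq\; n-k+1-\left(\left\lceil \frac{k}{r}\right\rceil - 1\right)(\delta-1).
\]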

Related content

This paper presents the error analysis of numerical methods on graded meshes for stochastic Volterra equations with weakly singular kernels. We first prove a novel regularity estimate for the exact solution by analyzing the associated convolution structure. This reveals that the exact solution exhibits an initial singularity, in the sense that its H\"older exponent on any neighborhood of $t=0$ is lower than that on every compact subset of $(0,T]$. Motivated by this initial singularity, we then construct the Euler--Maruyama method, a fast Euler--Maruyama method, and the Milstein method on graded meshes. By establishing pointwise-in-time error estimates, we identify the grading exponents that attain the optimal uniform-in-time convergence orders, which improve on those of the uniform-mesh case. Numerical experiments are reported to confirm the sharpness of the theoretical findings.
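As a concrete reference for the mesh and scheme, here is a minimal sketch. The scalar test equation $X(t) = X_0 + \int_0^t (t-s)^{\alpha-1} b(X(s))\,ds + \int_0^t (t-s)^{\alpha-1} \sigma(X(s))\,dW(s)$, the coefficients, and the grading exponent are all illustrative assumptions, not the paper's setting; the graded mesh $t_n = T(n/N)^\gamma$ clusters nodes near the singularity at $t=0$, and the Euler--Maruyama scheme freezes $b$ and $\sigma$ at the left endpoints:

```python
import numpy as np

# Toy Euler--Maruyama sketch on a graded mesh t_n = T*(n/N)**gamma for a
# scalar stochastic Volterra equation with weakly singular kernel
# K(t) = t**(alpha-1).  All model choices below are illustrative.

rng = np.random.default_rng(0)
T, N = 1.0, 400
alpha, gamma = 0.75, 2.0           # kernel exponent and mesh grading (assumed)
b = lambda x: -x                   # drift (assumed)
sigma = lambda x: 0.5              # additive noise (assumed)

t = T * (np.arange(N + 1) / N) ** gamma   # graded mesh, refined near t = 0
dt = np.diff(t)
dW = rng.normal(0.0, np.sqrt(dt))         # Brownian increments per cell

X = np.zeros(N + 1)
X[0] = 1.0
for n in range(1, N + 1):
    # left-point quadrature of both convolution integrals; evaluating the
    # kernel at t_n - t_j with j < n avoids the singularity
    K = (t[n] - t[:n]) ** (alpha - 1.0)
    drift = np.sum(K * b(X[:n]) * dt[:n])
    noise = np.sum(K * sigma(X[:n]) * dW[:n])
    X[n] = X[0] + drift + noise
```

The direct history sums make this $O(N^2)$ per trajectory; the point of a fast Euler--Maruyama variant is to reduce exactly this cost.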

We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that, at each iteration, sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits very good performance and robustness with respect to all parameters of interest.
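The $O(N)$ claim is easy to see in a model setting: if the reduced collective system couples $N$ otherwise independent sample blocks only through the shared control unknown, it has arrowhead structure, and a Schur complement on the control solves it in $O(N)$ operations. A minimal sketch under that assumed structure (scalar per-sample blocks for simplicity; the smoother's actual systems are those of the paper):

```python
import numpy as np

# O(N) solve of an arrowhead system: N decoupled sample blocks a_i coupled
# only through one shared control unknown u (structure assumed for
# illustration; blocks are scalars here, small matrices in general).
#
#   [ diag(a)  b ] [ x ]   [ f ]
#   [   b^T    c ] [ u ] = [ g ]

rng = np.random.default_rng(1)
N = 1000
a = rng.uniform(1.0, 2.0, N)   # per-sample diagonal blocks (SPD scalars)
b = rng.normal(size=N)         # coupling to the control
c = float(N)                   # control block, keeps the Schur complement positive
f = rng.normal(size=N)
g = 1.0

# Schur complement on the control: (c - sum b_i^2/a_i) u = g - sum b_i f_i/a_i
s = c - np.sum(b * b / a)      # O(N)
u = (g - np.sum(b * f / a)) / s
x = (f - b * u) / a            # back-substitute, O(N)

# residual check against the assembled dense system
full = np.block([[np.diag(a), b[:, None]], [b[None, :], np.array([[c]])]])
res = full @ np.concatenate([x, [u]]) - np.concatenate([f, [g]])
assert np.allclose(res, 0.0)
```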

We investigate a class of parametric elliptic eigenvalue problems with homogeneous essential boundary conditions where the coefficients (and hence the solution $u$) may depend on a parameter $y$. Toward the efficient approximate evaluation of parameter sensitivities of the first eigenpairs over the entire parameter space, we establish Gevrey-class and analytic regularity of the solution with respect to the parameters. This is made possible by a novel proof technique, which we introduce and demonstrate in this paper. Our regularity result has immediate implications for the convergence of various numerical schemes for parametric elliptic eigenvalue problems, in particular for elliptic eigenvalue problems with infinitely many parameters arising from elliptic differential operators with random coefficients.
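As a schematic reference (the precise function spaces and constants are those of the paper), Gevrey-class regularity of $u$ with respect to the parameters means derivative bounds of the form
\[
\big\| \partial_y^{\nu} u(y) \big\| \;\leq\; C\, \rho^{-|\nu|}\, \big(|\nu|!\big)^{\sigma}
\qquad \text{for all multi-indices } \nu,
\]
with constants $C,\rho>0$ independent of $y$ and Gevrey index $\sigma \geq 1$; the case $\sigma = 1$ corresponds to analytic dependence on the parameters.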

Gaussian elimination (GE) is the most widely used dense linear solver. Error analysis of GE with selected pivoting strategies on well-conditioned systems can focus on the behavior of growth factors. Although exponential growth is possible with GE with partial pivoting (GEPP), growth tends to stay much smaller in practice. Support for this behavior was provided recently by Huang and Tikhomirov's average-case analysis of GEPP, which showed that GEPP growth factors stay at most polynomial with very high probability under small Gaussian perturbations. GE with complete pivoting (GECP) has also seen a lot of recent interest, with improved lower bounds on worst-case GECP growth provided by Edelman and Urschel. We are interested in how GEPP and GECP behave on the same linear systems, as well as in large growth on particular subclasses of matrices, including orthogonal matrices. We also study systems for which GECP leads to larger growth than GEPP, which yields new empirical lower bounds on how much worse GECP can behave than GEPP in terms of growth. Finally, we present an empirical study of a family of exponential GEPP growth matrices whose polynomial behavior in small neighborhoods limits to the initial GECP growth factor.
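For readers who want to reproduce this kind of experiment, a common empirical proxy for the GEPP growth factor is $\max|U|/\max|A|$ from the factorization $PA = LU$; the sketch below (using SciPy, with Wilkinson's classical worst-case matrix as a sanity check) is an illustration, not the paper's experimental setup:

```python
import numpy as np
from scipy.linalg import lu

# Empirical GEPP growth-factor proxy rho(A) = max|U| / max|A|, where
# PA = LU is the partial-pivoting factorization.  (The full definition
# tracks all intermediate entries; for GEPP, max|U| captures the maximum.)

def gepp_growth(A):
    _, _, U = lu(A)                      # SciPy's lu uses partial pivoting
    return np.abs(U).max() / np.abs(A).max()

# Wilkinson's classical matrix attains growth 2**(n-1) under GEPP
def wilkinson(n):
    A = np.eye(n) - np.tril(np.ones((n, n)), -1)
    A[:, -1] = 1.0
    return A

rng = np.random.default_rng(2)
print(gepp_growth(wilkinson(10)))                # 2**9 = 512
print(gepp_growth(rng.normal(size=(200, 200))))  # typically modest growth
```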

The first linear programming bound of McEliece, Rodemich, Rumsey, and Welch is the best known asymptotic upper bound for binary codes, for a certain subrange of distances. Starting from the work of Friedman and Tillich, there are, by now, some arguably easier and more direct arguments for this bound. We show that this more recent line of argument runs into certain difficulties if one tries to go beyond this bound (say, towards the second linear programming bound of McEliece, Rodemich, Rumsey, and Welch).
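For context, the bound in question can be stated as follows: a binary code of relative distance $\delta \in (0, 1/2)$ has rate $R$ at most
\[
R(\delta) \;\leq\; h\!\left(\tfrac{1}{2} - \sqrt{\delta(1-\delta)}\right),
\qquad
h(x) = -x\log_2 x - (1-x)\log_2(1-x),
\]
where $h$ is the binary entropy function.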

This paper studies $k$-claw-free graphs, exploring the connection between an extremal combinatorics question and the power of a convex program in approximating the maximum-weight independent set in this graph class. For the extremal question, we consider a notion that we call \textit{conditional $\chi$-boundedness} of a graph: given a graph $G$ that is assumed to contain an independent set of a certain (constant) size, we are interested in upper bounding the chromatic number in terms of the clique number of $G$. This question, besides being interesting on its own, has algorithmic implications (which have been relatively neglected in the literature) for the performance of SDP relaxations in estimating the value of a maximum-weight independent set. For $k=3$, Chudnovsky and Seymour (JCTB 2010) proved that any $3$-claw-free graph $G$ with an independent set of size three must satisfy $\chi(G) \leq 2 \omega(G)$. Their result implies a factor-$2$ estimation algorithm for the maximum-weight independent set via an SDP relaxation, providing the first non-trivial result for the maximum-weight independent set in such graphs via a convex relaxation. An obvious open question is whether a similar conditional $\chi$-boundedness phenomenon holds for every $k$-claw-free graph. Our main result answers this question negatively. We further present evidence that our construction could be useful in studying, more broadly, the power of convex relaxations in the context of approximating the maximum-weight independent set in $k$-claw-free graphs. In particular, we prove a lower bound on families of convex programs that are stronger than the known convex relaxations used algorithmically in this context.

High-dimensional Partial Differential Equations (PDEs) are a popular mathematical modelling tool, with applications ranging from finance to computational chemistry. However, standard numerical techniques for solving these PDEs are typically affected by the curse of dimensionality. In this work, we tackle this challenge while focusing on stationary diffusion equations defined over a high-dimensional domain with periodic boundary conditions. Inspired by recent progress in sparse function approximation in high dimensions, we propose a new method called compressive Fourier collocation. Combining ideas from compressive sensing and spectral collocation, our method replaces the use of structured collocation grids with Monte Carlo sampling and employs sparse recovery techniques, such as orthogonal matching pursuit and $\ell^1$ minimization, to approximate the Fourier coefficients of the PDE solution. We conduct a rigorous theoretical analysis showing that the approximation error of the proposed method is comparable with the best $s$-term approximation (with respect to the Fourier basis) to the solution. Using the recently introduced framework of random sampling in bounded Riesz systems, our analysis shows that the compressive Fourier collocation method mitigates the curse of dimensionality with respect to the number of collocation points under sufficient conditions on the regularity of the diffusion coefficient. We also present numerical experiments that illustrate the accuracy and stability of the method for the approximation of sparse and compressible solutions.
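A toy 1D sketch of the pipeline just described, assuming a periodic diffusion problem $-(a(x)u')' = f(x)$ on $[0,2\pi)$, a manufactured sparse solution, Monte Carlo collocation points, and scikit-learn's orthogonal matching pursuit as the sparse recovery step; the coefficient, basis size, sparsity, and sample count are all illustrative choices, not the paper's:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Toy 1D compressive Fourier collocation for -(a(x) u')' = f(x),
# periodic on [0, 2*pi), with L u = -a' u' - a u''.

a  = lambda x: 1.0 + 0.5 * np.cos(x)
da = lambda x: -0.5 * np.sin(x)
K  = 40                                   # modes k = 1..K (cos and sin columns)

def L_basis(x):
    """Columns: the operator L applied to cos(kx) and sin(kx) at points x."""
    k = np.arange(1, K + 1)
    xk = np.outer(x, k)
    Lcos = -da(x)[:, None] * (-k * np.sin(xk)) + a(x)[:, None] * k**2 * np.cos(xk)
    Lsin = -da(x)[:, None] * ( k * np.cos(xk)) + a(x)[:, None] * k**2 * np.sin(xk)
    return np.hstack([Lcos, Lsin])

# manufactured sparse solution: u(x) = 2 cos(3x) - sin(7x)
c_true = np.zeros(2 * K)
c_true[2] = 2.0          # cos(3x)
c_true[K + 6] = -1.0     # sin(7x)

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 2.0 * np.pi, 120)    # Monte Carlo collocation points
A = L_basis(x)
f = A @ c_true                            # right-hand side at the samples

# sparse recovery (sparsity level assumed known in this toy example)
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False).fit(A, f)
print(np.max(np.abs(omp.coef_ - c_true))) # recovers the sparse coefficients
```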

We prove linear convergence for a new family of modified Dirichlet--Neumann methods applied to quasilinear parabolic equations, as well as convergence of the Robin--Robin method. Such nonoverlapping domain decomposition methods are commonly employed for the parallelization of partial differential equation solvers. Convergence has been extensively studied for elliptic equations, but for parabolic equations there are hardly any convergence results that do not rely on strong regularity assumptions. Hence, we construct a new framework for analyzing domain decomposition methods applied to quasilinear parabolic problems, based on fractional time derivatives and time-dependent Steklov--Poincar\'e operators. The convergence analysis is conducted without restrictive regularity assumptions on the solutions or the numerical iterates. We also prove that these continuous convergence results extend to the discrete case obtained by combining domain decomposition with space-time finite elements.
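To fix ideas about this method family (on an elliptic model problem, not the paper's quasilinear parabolic setting), here is a toy nonoverlapping Dirichlet--Neumann iteration for $-u'' = 1$ on $(0,1)$ with interface at $x = 1/2$; the relaxation parameter and discretization are illustrative:

```python
import numpy as np

# Toy Dirichlet--Neumann iteration for -u'' = 1 on (0,1), u(0) = u(1) = 0,
# interface at x = 1/2.  Omega_1 gets a Dirichlet value lam at the
# interface, Omega_2 gets the resulting flux, and lam is relaxed.

m = 50                      # cells per subdomain
h = 0.5 / m
theta = 0.3                 # relaxation parameter (assumed)
T = (np.diag(2 * np.ones(m - 1)) - np.diag(np.ones(m - 2), 1)
     - np.diag(np.ones(m - 2), -1)) / h**2      # 1D Laplacian, Dirichlet

lam = 0.0
for it in range(20):
    # Dirichlet solve on (0, 1/2): u1(0) = 0, u1(1/2) = lam
    rhs1 = np.ones(m - 1)
    rhs1[-1] += lam / h**2
    u1 = np.concatenate([[0.0], np.linalg.solve(T, rhs1), [lam]])

    # one-sided second-order flux u1'(1/2) at the interface
    g = (3 * u1[-1] - 4 * u1[-2] + u1[-3]) / (2 * h)

    # Neumann solve on (1/2, 1): u2'(1/2) = g, u2(1) = 0
    A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    A[0, 0], A[0, 1] = 2 / h**2, -2 / h**2      # ghost-point Neumann row
    rhs2 = np.ones(m)
    rhs2[0] -= 2 * g / h
    u2 = np.linalg.solve(A, rhs2)

    lam = theta * u2[0] + (1 - theta) * lam     # relaxed interface update
    print(it, abs(lam - 0.125))                 # exact u(1/2) = 1/8
```

For equal subdomains this toy iteration contracts the interface error by the factor $|1 - 2\theta|$ per sweep, which the printed errors reproduce.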

Introduction: Oblique target rotation in the context of exploratory factor analysis is a relevant method for the investigation of the oblique independent clusters model. It has been argued that minimizing single cross-loadings by means of target rotation may lead to large effects of sampling error on the target-rotated factor solutions. Method: In order to minimize effects of sampling error on the results of target rotation, we propose to compute the mean cross-loadings for each block of salient loadings of the independent clusters model and to perform target rotation on the block-wise mean cross-loadings. The resulting transformation matrix is then applied to the complete unrotated loading matrix in order to produce mean target-rotated factors. Results: A simulation study based on correlated independent factor models revealed that mean oblique target rotation resulted in smaller negative bias of factor inter-correlations than conventional target rotation based on single loadings, especially when the sample size was small and the number of factors was large. An empirical example revealed that the similarity of target-rotated factors computed for small subsamples to the target-rotated factors of the total sample was more pronounced for mean target rotation than for conventional target rotation. Discussion: Mean target rotation can be recommended in the context of oblique independent factor models, especially for small samples. An R script and an SPSS script for this form of target rotation are provided in the Appendix.
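A schematic numpy sketch of the block-wise mean idea follows. The Procrustes step, the choice of target (observed salient loadings, zero cross-loadings), and the column normalization are simplified conventions for illustration, not the paper's R/SPSS implementation; oblique-rotation normalizations vary across software.

```python
import numpy as np

# Schematic oblique target rotation with block-wise mean cross-loadings.

def target_rotation(L, B):
    """Least-squares Procrustes step T = argmin ||L @ T - B||_F, with
    columns of T rescaled to unit length (normalization conventions vary)."""
    T, *_ = np.linalg.lstsq(L, B, rcond=None)
    return T / np.linalg.norm(T, axis=0)

def mean_target_rotation(L, blocks):
    """blocks[i] = hypothesized factor of variable i (independent clusters).
    Cross-loadings are replaced by their block-wise means before the
    Procrustes step; T is then applied to the full unrotated matrix L."""
    p, q = L.shape
    M = L.copy()
    for b in range(q):                     # within each block of variables...
        rows = blocks == b
        for j in range(q):                 # ...average the cross-loadings
            if j != b:
                M[rows, j] = L[rows, j].mean()
    # target: keep salient loadings, zero out cross-loading positions
    B = np.where(blocks[:, None] == np.arange(q)[None, :], L, 0.0)
    T = target_rotation(M, B)              # rotate block-means to the target
    return L @ T                           # mean target-rotated loadings

# toy usage with a 6-variable, 2-factor independent clusters pattern
rng = np.random.default_rng(4)
L = rng.normal(size=(6, 2))
blocks = np.array([0, 0, 0, 1, 1, 1])
print(mean_target_rotation(L, blocks))
```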

In epidemiology and the social sciences, propensity score methods are popular for estimating treatment effects using observational data, and multiple imputation is popular for handling covariate missingness. However, how to appropriately use multiple imputation for propensity score analysis is not completely clear. This paper aims to bring clarity to the consistency (or lack thereof) of methods that have been proposed, focusing on the within approach (where the effect is estimated separately in each imputed dataset and then the multiple estimates are combined) and the across approach (where typically propensity scores are averaged across imputed datasets before being used for effect estimation). We show that the within method is valid and can be used with any causal effect estimator that is consistent in the full-data setting. Existing across methods are inconsistent, but a different across method that averages the inverse probability weights across imputed datasets is consistent for propensity score weighting. We also comment on methods that rely on imputing a function of the missing covariate rather than the covariate itself, including imputation of the propensity score and of the probability weight. Based on consistency results and practical flexibility, we recommend generally using the standard within method. Throughout, we provide intuition to make the results meaningful to the broad audience of applied researchers.
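A small sketch contrasting the within approach with the weight-averaging across variant; the data-generating process and the crude placeholder imputation are illustrative stand-ins (a real analysis would use a proper imputation model conditioning on treatment and outcome):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# "within": IPW estimate per imputed dataset, then pool the estimates.
# "across" (weight-averaging variant): pool the inverse probability
# weights over imputations, then form a single weighted estimate.

def ps(x, z):
    """Propensity scores from a logistic model of treatment on x."""
    return LogisticRegression().fit(x[:, None], z).predict_proba(x[:, None])[:, 1]

rng = np.random.default_rng(5)
n, M = 5000, 10
x = rng.normal(size=n)                        # confounder, partly missing
z = rng.binomial(1, 1 / (1 + np.exp(-x)))     # treatment
y = z + x + rng.normal(size=n)                # outcome, true ATE = 1
miss = rng.random(n) < 0.3                    # MCAR missingness in x

# M imputed datasets (placeholder imputation model, for illustration)
imps = [np.where(miss, rng.normal(size=n), x) for _ in range(M)]
es = [ps(xi, z) for xi in imps]               # one PS model per dataset

within = np.mean([np.mean(z * y / e) - np.mean((1 - z) * y / (1 - e))
                  for e in es])

w = np.mean([z / e + (1 - z) / (1 - e) for e in es], axis=0)
across = np.mean(z * y * w) - np.mean((1 - z) * y * w)
print(within, across)                         # the two pooled estimates
```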
