
We present an analysis and numerical study of an optimal control problem for the Landau-de Gennes (LdG) model of nematic liquid crystals (LCs), which are a crucial component in modern technology. LCs exhibit long-range orientational order in their nematic phase, which is represented by a tensor-valued (spatial) order parameter $Q = Q(x)$. Equilibrium LC states correspond to $Q$ functions that (locally) minimize an LdG energy functional. Thus, we consider an $L^2$-gradient flow of the LdG energy that allows for finding local minimizers and leads to a semi-linear parabolic PDE, for which we develop an optimal control framework. We then derive several a priori estimates for the forward problem, including continuity in space-time, that allow us to prove existence of optimal boundary and external ``force'' controls and to derive optimality conditions through the use of an adjoint equation. Next, we present a simple finite element scheme for the LdG model and a straightforward optimization algorithm. We illustrate optimization of LC states through numerical experiments in two and three dimensions that seek to place LC defects (where $Q(x) = 0$) in desired locations, which is desirable in applications.
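To fix notation, here is a minimal sketch of the flow in question, assuming the standard one-constant LdG energy with the usual quartic bulk potential $\psi(Q)$ and an external force control $f$; the paper's exact coefficients, boundary conditions, and control placement may differ.

```latex
% Hedged sketch: one-constant LdG energy and its L^2-gradient flow with a force control f.
% The paper's exact constants, boundary terms, and control placement may differ.
\[
  E_{\mathrm{LdG}}[Q] = \int_\Omega \frac{\varepsilon}{2}\,|\nabla Q|^2
    + \underbrace{\frac{a}{2}\,\mathrm{tr}(Q^2)
    - \frac{b}{3}\,\mathrm{tr}(Q^3)
    + \frac{c}{4}\,\bigl(\mathrm{tr}(Q^2)\bigr)^2}_{\psi(Q)} \,\mathrm{d}x,
  \qquad
  \partial_t Q = -\,\frac{\delta E_{\mathrm{LdG}}}{\delta Q}[Q] + f
              = \varepsilon\,\Delta Q - \frac{\partial \psi}{\partial Q}(Q) + f .
\]
```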

Related content

We present a novel stabilized isogeometric formulation for the Stokes problem, where the geometry of interest is obtained via overlapping NURBS (non-uniform rational B-spline) patches, i.e., one patch on top of another in an arbitrary but predefined hierarchical order. All the visible regions constitute the computational domain, whereas independent patches are coupled through visible interfaces using Nitsche's formulation. Such a geometric representation inevitably involves trimming, which may yield trimmed elements of extremely small measure (referred to as bad elements) and thus lead to numerical instability. Motivated by the minimal stabilization method that rigorously guarantees stability for trimmed geometries [1], in this work we generalize it to the Stokes problem on overlapping patches. Central to our method is the distinct treatment of the pressure and velocity spaces: stabilization for velocity is carried out for the flux terms on interfaces, whereas pressure is stabilized in all the bad elements. We provide a priori error estimates with a comprehensive theoretical study. Through a suite of numerical tests, we first show that optimal convergence rates are achieved, in agreement with our theoretical findings. Second, we show that, compared to the results without stabilization, the proposed method improves the accuracy of the pressure by several orders of magnitude. Finally, we also demonstrate the flexibility and efficiency of the proposed method in capturing local features in the solution field.
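For orientation, a standard symmetric Nitsche coupling of the velocity across a visible interface $\Gamma$ between two patches adds interface terms of the following form to the Stokes weak formulation; this is a textbook (unstabilized) form given only for context, and the stabilized fluxes and pressure treatment used in the paper differ.

```latex
% Standard (unstabilized) Nitsche interface terms for Stokes across a patch interface Gamma;
% {.} and [.] denote the average and jump across Gamma, and gamma > 0 is a penalty parameter.
\[
  -\int_\Gamma \{2\mu\,\varepsilon(\mathbf{u})\mathbf{n} - p\,\mathbf{n}\}\cdot[\mathbf{v}]\,\mathrm{d}s
  \;-\;\int_\Gamma [\mathbf{u}]\cdot\{2\mu\,\varepsilon(\mathbf{v})\mathbf{n} - q\,\mathbf{n}\}\,\mathrm{d}s
  \;+\;\frac{\gamma\mu}{h}\int_\Gamma [\mathbf{u}]\cdot[\mathbf{v}]\,\mathrm{d}s .
\]
```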

In this paper we discuss potentially practical ways to produce expander graphs with good spectral properties and a compact description. We focus on several classes of uniform and bipartite expander graphs defined as random Schreier graphs of the general linear group over the finite field of size two. We perform numerical experiments and show that such constructions produce spectral expanders that can be useful for practical applications. To find a theoretical explanation of the observed experimental results, we use the method of moments to prove upper bounds for the expected second largest eigenvalue of the random Schreier graphs used in our constructions. We focus on bounds whose asymptotic behaviour is difficult to study but which yield non-trivial conclusions for relatively small graphs with parameters from our numerical experiments (e.g., with fewer than $2^{200}$ vertices and degree at least logarithmic in the number of vertices).
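As a concrete, heavily hedged illustration of the kind of object involved, the sketch below builds a random Schreier graph for the action of $\mathrm{GL}(n,\mathbb{F}_2)$ on the nonzero vectors of $\mathbb{F}_2^n$ using a few uniformly random generators, and computes its second largest eigenvalue for a small $n$; the paper's exact generator distribution and graph families may differ.

```python
# Hedged sketch: a random Schreier graph of GL(n, F_2) acting on F_2^n \ {0}.
# The paper's exact generator distribution and graph families may differ.
import itertools
import numpy as np

def random_invertible(n, rng):
    """Rejection-sample a uniformly random invertible n x n matrix over F_2."""
    while True:
        M = rng.integers(0, 2, size=(n, n))
        A, rank = M.copy(), 0
        for col in range(n):                      # Gaussian elimination mod 2
            pivots = [r for r in range(rank, n) if A[r, col] == 1]
            if not pivots:
                continue
            A[[rank, pivots[0]]] = A[[pivots[0], rank]]
            for r in range(n):
                if r != rank and A[r, col] == 1:
                    A[r] ^= A[rank]
            rank += 1
        if rank == n:
            return M

def schreier_second_eigenvalue(n=8, num_gens=4, seed=0):
    rng = np.random.default_rng(seed)
    gens = [random_invertible(n, rng) for _ in range(num_gens)]
    verts = list(itertools.product([0, 1], repeat=n))[1:]   # drop the zero vector
    index = {v: i for i, v in enumerate(verts)}
    A = np.zeros((len(verts), len(verts)))
    for i, v in enumerate(verts):
        for M in gens:                            # edge {v, Mv} for each generator
            w = tuple(int(x) for x in (M @ np.array(v)) % 2)
            A[i, index[w]] += 1
            A[index[w], i] += 1
    eig = np.sort(np.linalg.eigvalsh(A))
    return eig[-1], eig[-2]                       # degree 2*num_gens and second eigenvalue

print(schreier_second_eigenvalue())
```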

Data fusion of wireless sensors is a common technique employed in many communication systems. This work focuses on incorporating the principles of non-orthogonal multiple access (NOMA) to optimize error performance directly in the choice of constellation design. More specifically, we investigate the problem of two-sensor data fusion of a binary uniform source sent over a Gaussian multiple access channel via symmetric binary constellations. A so-called planar upper bound on the error probability is analytically derived. A constellation design is then obtained by establishing in closed form its rotation parameter that minimizes the upper bound. Simulation results show that the resulting constellations achieve nearly identical performance to experimentally determined optimal constellations.
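The closed-form planar bound is specific to the paper; as a heavily hedged numerical counterpart, the sketch below sweeps the rotation angle between two superimposed binary constellations on a Gaussian MAC and estimates the fusion error of a MAP rule by Monte Carlo, under the added assumption (not taken from the abstract) that each sensor observes the source through an independent binary symmetric channel with crossover probability $p$.

```python
# Hedged sketch: Monte-Carlo sweep of the rotation between two binary constellations
# superimposed on a Gaussian MAC. Assumptions not taken from the paper: each sensor
# observes the source through an independent BSC(p) and the receiver uses a MAP rule;
# the paper instead minimizes an analytical "planar" upper bound in closed form.
import numpy as np

def fusion_error(theta, snr_db=6.0, p=0.1, n_trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    sigma = 10 ** (-snr_db / 20)                          # noise std (unit-energy symbols)
    b = rng.integers(0, 2, n_trials)                      # uniform binary source
    u1 = b ^ (rng.random(n_trials) < p)                   # sensor 1 observation (BSC)
    u2 = b ^ (rng.random(n_trials) < p)                   # sensor 2 observation (BSC)
    x1 = 2.0 * u1 - 1.0                                   # BPSK on the reference axis
    x2 = (2.0 * u2 - 1.0) * np.exp(1j * theta)            # rotated BPSK
    noise = sigma * (rng.standard_normal(n_trials)
                     + 1j * rng.standard_normal(n_trials)) / np.sqrt(2)
    y = x1 + x2 + noise
    # MAP fusion of the source bit over the four superposition points
    post = np.zeros((2, n_trials))
    for bb in (0, 1):
        for v1 in (0, 1):
            for v2 in (0, 1):
                prior = 0.5 * (p if v1 != bb else 1 - p) * (p if v2 != bb else 1 - p)
                pt = (2 * v1 - 1) + (2 * v2 - 1) * np.exp(1j * theta)
                post[bb] += prior * np.exp(-np.abs(y - pt) ** 2 / sigma ** 2)
    return np.mean((post[1] > post[0]).astype(int) != b)

thetas = np.linspace(0.0, np.pi, 25)
errs = [fusion_error(t) for t in thetas]
print("rotation minimizing the simulated fusion error (rad):", thetas[int(np.argmin(errs))])
```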

A long line of work in the past two decades or so has established close connections between several different pseudorandom objects and applications. These connections essentially show that an asymptotically optimal construction of one central object will lead to asymptotically optimal solutions to all the others. However, despite considerable effort, previous works get close but still lack one final step to achieve truly asymptotically optimal constructions. In this paper we provide the last missing link, thus simultaneously achieving explicit, asymptotically optimal constructions and solutions for various well-studied extractors and applications that have been the subjects of long lines of research. Our results include: (1) asymptotically optimal seeded non-malleable extractors, which in turn give two-source extractors for asymptotically optimal min-entropy of $O(\log n)$, explicit constructions of $K$-Ramsey graphs on $N$ vertices with $K=\log^{O(1)} N$, and truly optimal privacy amplification protocols with an active adversary; (2) two-source non-malleable extractors and affine non-malleable extractors for some linear min-entropy with exponentially small error, which in turn give the first explicit construction of non-malleable codes against $2$-split-state tampering and affine tampering with constant rate and \emph{exponentially} small error; (3) explicit extractors for affine sources, sumset sources, interleaved sources, and small-space sources that achieve asymptotically optimal min-entropy of $O(\log n)$ or $2s+O(\log n)$ (for space-$s$ sources); (4) an explicit function that requires strongly linear read-once branching programs of size $2^{n-O(\log n)}$, which is optimal up to the constant in $O(\cdot)$. Previously, even for standard read-once branching programs, the best known size lower bound for an explicit function was $2^{n-O(\log^2 n)}$.

Online allocation is a broad class of problems where items arriving online have to be allocated to agents who have a fixed utility/cost for each assigned item, so as to maximize/minimize some objective. This framework captures a broad range of fundamental problems such as the Santa Claus problem (maximizing minimum utility), Nash welfare maximization (maximizing the geometric mean of utilities), makespan minimization (minimizing maximum cost), minimization of $\ell_p$-norms, and so on. We focus on divisible items (i.e., fractional allocations) in this paper. Even for divisible items, these problems are characterized by strong super-constant lower bounds in the classical worst-case online model. In this paper, we study online allocations in the {\em learning-augmented} setting, i.e., where the algorithm has access to some additional (machine-learned) information about the problem instance. We introduce a {\em general} algorithmic framework for learning-augmented online allocation that produces nearly optimal solutions for this broad range of maximization and minimization objectives using only a single learned parameter for every agent. As corollaries of our general framework, we improve prior results of Lattanzi et al. (SODA 2020) and Li and Xian (ICML 2021) for learning-augmented makespan minimization, and obtain the first learning-augmented nearly-optimal algorithms for the other objectives such as Santa Claus, Nash welfare, $\ell_p$-minimization, etc. We also give tight bounds on the resilience of our algorithms to errors in the learned parameters, and study the learnability of these parameters.
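As a hedged illustration of what a single learned parameter per agent can do, the sketch below implements a weight-based fractional assignment rule for online makespan minimization, in the spirit of the learned-weights approach of Lattanzi et al. (SODA 2020); the specific rule, and how the general framework handles the other objectives, is our guess rather than the paper's construction.

```python
# Hedged sketch: fractional online makespan minimization with one learned weight per
# machine, in the spirit of proportional allocation with learned weights
# (cf. Lattanzi et al., SODA 2020). The rule below is an illustrative guess; the
# paper's general framework and its handling of other objectives may differ.
import numpy as np

def allocate_online(jobs, weights):
    """jobs: iterable of per-machine processing-time vectors p_j (all entries > 0).
    weights: learned per-machine parameters w_i > 0.
    Each arriving job is split proportionally to w_i / p_ij; returns final loads."""
    loads = np.zeros(len(weights))
    for p in jobs:                        # jobs arrive online, one at a time
        shares = weights / p
        shares /= shares.sum()            # fractional split of this job
        loads += shares * p
    return loads

rng = np.random.default_rng(1)
jobs = [rng.uniform(0.5, 2.0, size=4) for _ in range(50)]
print("fractional makespan:", allocate_online(jobs, weights=np.ones(4)).max())
```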

This paper proposes a stable sparse rapidly-exploring random trees (SST) algorithm to solve the optimal motion planning problem for hybrid systems. At each iteration, the proposed algorithm, called HySST, selects a vertex with the lowest cost among all the vertices within the neighborhood of a randomly selected sample and then extends the search tree by flow or jump, which is also chosen randomly when both regimes are possible. In addition, HySST maintains a static set of witness points such that all the vertices within the neighborhood of each witness are pruned except the vertex with the lowest cost. Through a definition of concatenation of functions defined on hybrid time domains, we show that HySST is asymptotically near optimal, namely, the probability of failing to find a motion plan such that its cost is close to the optimal cost approaches zero as the number of iterations of the algorithm increases to infinity. This property is guaranteed under mild conditions on the data defining the motion plan, which include a relaxation of the usual positive clearance assumption imposed in the literature of classical systems. The proposed algorithm is applied to an actuated bouncing ball system and a collision-resilient tensegrity multicopter system so as to highlight its generality and computational features.
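A compressed, hedged sketch of the SST-style selection and witness-based pruning loop that HySST builds on is given below; the hybrid-systems specifics (flow/jump propagation on hybrid time domains, goal checks, full pruning of dominated vertices) are abstracted behind a user-supplied `propagate` function or simplified, and are handled differently in the paper.

```python
# Hedged sketch of the SST-style selection / witness-pruning loop that HySST builds on.
# Hybrid-systems specifics (flow vs. jump propagation on hybrid time domains, goal
# checks, full pruning of dominated vertices) are abstracted away or simplified here.
import math, random

class Node:
    def __init__(self, state, cost, parent=None):
        self.state, self.cost, self.parent = tuple(state), cost, parent

def hysst_like(x0, sample, propagate, n_iters=2000, delta_bn=0.3, delta_s=0.2):
    """sample() returns a random state; propagate(state, regime) returns (new_state, cost)."""
    tree = [Node(x0, 0.0)]
    witnesses = {}                                   # witness state -> best nearby node
    for _ in range(n_iters):
        x_rand = sample()
        # select the lowest-cost vertex within radius delta_bn of the sample
        near = [v for v in tree if math.dist(v.state, x_rand) <= delta_bn] or \
               [min(tree, key=lambda v: math.dist(v.state, x_rand))]
        v_sel = min(near, key=lambda v: v.cost)
        # extend by a randomly chosen regime (flow or jump when both are possible)
        x_new, c_new = propagate(v_sel.state, random.choice(("flow", "jump")))
        node = Node(x_new, v_sel.cost + c_new, v_sel)
        # sparsification: keep only the lowest-cost node near each witness point
        w = min(witnesses, key=lambda w: math.dist(w, node.state), default=None)
        if w is None or math.dist(w, node.state) > delta_s:
            witnesses[node.state] = node             # new witness region
            tree.append(node)
        elif node.cost < witnesses[w].cost:
            witnesses[w] = node                      # better representative near this witness
            tree.append(node)                        # (full SST would also prune the old one)
    return tree
```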

Over the last decade, approximating functions in infinite dimensions from samples has gained increasing attention in computational science and engineering, especially in computational uncertainty quantification. This is primarily due to the relevance of functions that are solutions to parametric differential equations in various fields, e.g. chemistry, economics, engineering, and physics. While acquiring accurate and reliable approximations of such functions is inherently difficult, current benchmark methods exploit the fact that such functions often belong to certain classes of holomorphic functions to obtain algebraic convergence rates in infinite dimensions with respect to the number of (potentially adaptive) samples $m$. Our work focuses on providing theoretical approximation guarantees for the class of $(\boldsymbol{b},\varepsilon)$-holomorphic functions, demonstrating that these algebraic rates are the best possible for Banach-valued functions in infinite dimensions. We establish lower bounds using a reduction to a discrete problem in combination with the theory of $m$-widths, Gelfand widths and Kolmogorov widths. We study two cases, known and unknown anisotropy, in which the relative importance of the variables is known and unknown, respectively. A key conclusion of our paper is that in the latter setting, approximation from finite samples is impossible without some inherent ordering of the variables, even if the samples are chosen adaptively. Finally, in both cases, we demonstrate near-optimal, non-adaptive (random) sampling and recovery strategies which achieve close to the same rates as the lower bounds.

The training process of ReLU neural networks often exhibits complicated nonlinear phenomena. The nonlinearity of models and non-convexity of loss pose significant challenges for theoretical analysis. Therefore, most previous theoretical works on the optimization dynamics of neural networks focus either on local analysis (like the end of training) or approximate linear models (like Neural Tangent Kernel). In this work, we conduct a complete theoretical characterization of the training process of a two-layer ReLU network trained by Gradient Flow on linearly separable data. In this specific setting, our analysis captures the whole optimization process starting from random initialization to final convergence. Despite the relatively simple model and data that we studied, we reveal four different phases in the whole training process, showing a general simplifying-to-complicating learning trend. Specific nonlinear behaviors can also be precisely identified and captured theoretically, such as initial condensation, saddle-to-plateau dynamics, plateau escape, changes of activation patterns, learning with increasing complexity, etc.
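A minimal numerical companion to this setting, under assumptions of our own choosing (logistic loss, small initialization, plain gradient descent as a discrete proxy for gradient flow), is sketched below; it also logs how many ReLU activation patterns flip over training, one of the nonlinear behaviors mentioned above.

```python
# Hedged sketch: a two-layer ReLU network trained by gradient descent (a small-step
# proxy for gradient flow) on linearly separable data, logging how many ReLU
# activation patterns change. Illustrative only; the paper's exact setup
# (loss, initialization scale, width) may differ.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, lr = 200, 2, 50, 0.05
w_star = np.array([1.0, -1.0])
X = rng.standard_normal((n, d))
y = np.sign(X @ w_star)                               # linearly separable labels in {-1, +1}

W = 0.1 * rng.standard_normal((m, d))                 # first-layer weights
a = 0.1 * rng.standard_normal(m)                      # second-layer weights

prev_pattern = None
for epoch in range(500):
    H = np.maximum(X @ W.T, 0.0)                      # ReLU features
    f = H @ a                                         # network output
    loss = np.mean(np.log(1 + np.exp(-y * f)))        # logistic loss
    g = -y / (1 + np.exp(y * f)) / n                  # d(loss)/d(f)
    grad_a = H.T @ g
    grad_W = ((g[:, None] * a) * (H > 0)).T @ X
    a -= lr * grad_a
    W -= lr * grad_W
    pattern = (X @ W.T > 0)                           # current activation pattern
    if prev_pattern is not None and epoch % 100 == 0:
        print(epoch, f"loss={loss:.3f}",
              "activation flips:", int((pattern != prev_pattern).sum()))
    prev_pattern = pattern
```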

Cross-validation (CV) is one of the most popular tools for assessing and selecting predictive models. However, standard CV suffers from high computational cost when the number of folds is large. Recently, under the empirical risk minimization (ERM) framework, a line of work has proposed efficient methods to approximate CV based on the solution of the ERM problem trained on the full dataset. However, in large-scale problems, it can be hard to obtain the exact solution of the ERM problem, either due to limited computational resources or due to early stopping as a way of preventing overfitting. In this paper, we propose a new paradigm to efficiently approximate CV when the ERM problem is solved via an iterative first-order algorithm, without running until convergence. Our new method extends existing guarantees for CV approximation to hold along the whole trajectory of the algorithm, including at convergence, thus generalizing existing CV approximation methods. Finally, we illustrate the accuracy and computational efficiency of our method through a range of empirical studies.
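To make the setting concrete, the sketch below tracks approximate leave-one-out iterates alongside full-data gradient descent for $\ell_2$-regularized logistic regression, updating each left-out trajectory via a first-order Taylor expansion of the leave-one-out gradient around the full-data iterate. This is our illustrative guess at the kind of trajectory-wise approximation involved, not the paper's exact method or guarantees.

```python
# Hedged sketch: approximate leave-one-out (LOO) CV tracked along a gradient-descent
# trajectory for L2-regularized logistic regression. Each LOO iterate is updated with
# a Taylor expansion of the leave-one-out gradient around the full-data iterate, so no
# separate per-fold runs are needed. Illustrative only; not the paper's exact method.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, d, lam, lr, T = 100, 5, 1.0, 0.01, 300
X = rng.standard_normal((n, d))
y = (X @ rng.standard_normal(d) + 0.3 * rng.standard_normal(n) > 0).astype(float)

theta = np.zeros(d)
theta_loo = np.zeros((n, d))                          # one approximate LOO iterate per point
for t in range(T):
    p = sigmoid(X @ theta)
    g = X.T @ (p - y) + lam * theta                   # full-data gradient (sum over points)
    H = (X * (p * (1 - p))[:, None]).T @ X + lam * np.eye(d)
    for i in range(n):
        pi = sigmoid(X[i] @ theta)
        gi = X[i] * (pi - y[i])                       # gradient contribution of point i
        Hi = np.outer(X[i], X[i]) * pi * (1 - pi)     # Hessian contribution of point i
        # Taylor-expanded leave-one-out gradient, evaluated at the LOO iterate
        g_loo = (g - gi) + (H - Hi) @ (theta_loo[i] - theta)
        theta_loo[i] = theta_loo[i] - lr * g_loo
    theta = theta - lr * g                            # full-data GD step

loo_losses = [np.log(1 + np.exp(-(2 * y[i] - 1) * (X[i] @ theta_loo[i]))) for i in range(n)]
print("approximate LOO-CV estimate at the current iterate:", np.mean(loo_losses))
```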

The randomized singular value decomposition (R-SVD) is a popular sketching-based algorithm for efficiently computing the partial SVD of a large matrix. When the matrix is low-rank, the R-SVD produces its partial SVD exactly; but when the rank is large, it only yields an approximation. Motivated by applications in data science and principal component analysis (PCA), we analyze the R-SVD under a low-rank signal plus noise measurement model; specifically, when its input is a spiked random matrix. The singular values produced by the R-SVD are shown to exhibit a BBP-like phase transition: when the SNR exceeds a certain detectability threshold, that depends on the dimension reduction factor, the largest singular value is an outlier; below the threshold, no outlier emerges from the bulk of singular values. We further compute asymptotic formulas for the overlap between the ground truth signal singular vectors and the approximations produced by the R-SVD. Dimensionality reduction has the adverse effect of amplifying the noise in a highly nonlinear manner. Our results demonstrate the statistical advantage -- in both signal detection and estimation -- of the R-SVD over more naive sketched PCA variants; the advantage is especially dramatic when the sketching dimension is small. Our analysis is asymptotically exact, and substantially more fine-grained than existing operator-norm error bounds for the R-SVD, which largely fail to give meaningful error estimates in the moderate SNR regime. It applies for a broad family of sketching matrices previously considered in the literature, including Gaussian i.i.d. sketches, random projections, and the sub-sampled Hadamard transform, among others. Lastly, we derive an optimal singular value shrinker for singular values and vectors obtained through the R-SVD, which may be useful for applications in matrix denoising.
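For intuition, the sketch below runs a basic randomized SVD (Gaussian sketch, no power iterations) on a rank-one spiked matrix for a few SNRs and reports the top sketched singular value and its overlap with the planted right singular vector; dimensions and SNR values are illustrative only and not taken from the paper.

```python
# Hedged sketch: a basic randomized SVD (Gaussian sketch, no power iterations) applied
# to a rank-one spiked Gaussian matrix, showing how the top sketched singular value and
# the overlap with the planted singular vector behave as the SNR varies. Illustrative only.
import numpy as np

def randomized_svd(Y, k, rng):
    """Basic R-SVD: project onto a k-dimensional sketched range, then do a small SVD."""
    Omega = rng.standard_normal((Y.shape[1], k))
    Q, _ = np.linalg.qr(Y @ Omega)            # orthonormal basis for the sketched range
    U_small, s, Vt = np.linalg.svd(Q.T @ Y, full_matrices=False)
    return Q @ U_small, s, Vt

rng = np.random.default_rng(0)
n, k = 2000, 100                              # matrix size and sketch dimension
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
for snr in (0.5, 1.0, 2.0, 4.0):
    Y = snr * np.outer(u, v) + rng.standard_normal((n, n)) / np.sqrt(n)
    U, s, Vt = randomized_svd(Y, k, rng)
    overlap = abs(Vt[0] @ v)                  # alignment with the planted right vector
    print(f"snr={snr:.1f}  top sketched singular value={s[0]:.2f}  overlap={overlap:.2f}")
```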
