
We develop new matching estimators for estimating causal quantile exposure-response functions and quantile exposure effects with continuous treatments. We provide identification results for the parameters of interest and establish the asymptotic properties of the derived estimators. We introduce a two-step estimation procedure. In the first step, we construct a matched data set via generalized propensity score matching, adjusting for measured confounding. In the second step, we fit a kernel quantile regression to the matched set. We also derive a consistent estimator of the variance of the matching estimators. Using simulation studies, we compare the introduced approach with existing alternatives in various settings. We apply the proposed method to Medicare claims data for the period 2012-2014, and we estimate the causal effect of exposure to PM$_{2.5}$ on the length of hospital stay for each zip code of the contiguous United States.
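As a toy illustration of the two-step idea (not the authors' estimator), the sketch below simulates confounded data, fits a simple Gaussian model for the generalized propensity score in step 1, and computes a local-constant kernel quantile estimate of the exposure-response curve in step 2. The matching construction itself is omitted for brevity, and all variable names and model choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: confounder x, continuous exposure w, outcome y.
n = 2000
x = rng.normal(size=n)
w = 0.5 * x + rng.normal(size=n)      # exposure depends on the confounder
y = w + x + rng.normal(size=n)        # outcome depends on both

# Step 1 (model only): generalized propensity score under a normal
# model w | x ~ N(b0 + b1 * x, sigma^2), fitted by least squares.
b1, b0 = np.polyfit(x, w, 1)
sigma = (w - (b0 + b1 * x)).std()
gps = (np.exp(-0.5 * ((w - (b0 + b1 * x)) / sigma) ** 2)
       / (sigma * np.sqrt(2 * np.pi)))

# Step 2: a local-constant kernel quantile estimate of the
# exposure-response at quantile tau, over a grid of exposure levels.
def weighted_quantile(vals, weights, tau):
    order = np.argsort(vals)
    vals, weights = vals[order], weights[order]
    cum = np.cumsum(weights) / weights.sum()
    return float(np.interp(tau, cum, vals))

h, tau = 0.3, 0.5
grid = np.linspace(-1.5, 1.5, 7)
curve = [weighted_quantile(y, np.exp(-0.5 * ((w - g) / h) ** 2), tau)
         for g in grid]
print(np.round(curve, 2))
```

On this simulated design the estimated median curve increases with the exposure level, as the data-generating model implies.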

Related content

Teams that have trained large Transformer-based models have reported training instabilities at large scale that did not appear when training with the same hyperparameters at smaller scales. Although the causes of such instabilities are of scientific interest, the amount of resources required to reproduce them has made investigation difficult. In this work, we seek ways to reproduce and study training stability and instability at smaller scales. First, we focus on two sources of training instability described in previous work: the growth of logits in attention layers (Dehghani et al., 2023) and divergence of the output logits from the log probabilities (Chowdhery et al., 2022). By measuring the relationship between learning rate and loss across scales, we show that these instabilities also appear in small models when training at high learning rates, and that mitigations previously employed at large scales are equally effective in this regime. This prompts us to investigate the extent to which other known optimizer and model interventions influence the sensitivity of the final loss to changes in the learning rate. To this end, we study methods such as warm-up, weight decay, and $\mu$Param (Yang et al., 2022), and combine techniques to train small models that achieve similar losses across orders of magnitude of learning rate variation. Finally, to conclude our exploration, we study two cases where instabilities can be predicted before they emerge by examining the scaling behavior of model activation and gradient norms.
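The learning-rate-versus-loss measurement can be caricatured on a problem far simpler than a Transformer: for gradient descent on a quadratic loss, training is stable below a critical learning rate and diverges above it. The toy below (an illustrative stand-in, not the paper's experimental setup) sweeps the learning rate and records the final loss.

```python
import numpy as np

# Toy illustration of learning-rate sensitivity: gradient descent on the
# quadratic loss 0.5 * lam * theta^2 converges for lr < 2 / lam and
# diverges above it, mirroring how instabilities can appear only once
# the learning rate is pushed high enough.
lam, theta0, steps = 10.0, 1.0, 200

def final_loss(lr):
    theta = theta0
    for _ in range(steps):
        theta -= lr * lam * theta          # gradient step
        if abs(theta) > 1e6:               # treat blow-up as divergence
            return float("inf")
    return 0.5 * lam * theta ** 2

for lr in [0.01, 0.1, 0.19, 0.21, 0.3]:
    print(f"lr={lr:.2f}  final loss={final_loss(lr):.3e}")
```

Here the stability threshold is 2 / lam = 0.2: runs just below it converge and runs just above it blow up, which is the kind of sharp sensitivity the abstract's interventions aim to flatten.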

This study examines, in the framework of variational regularization methods, a multi-penalty regularization approach which builds upon the Uniform PENalty (UPEN) method, previously proposed by the authors for Nuclear Magnetic Resonance (NMR) data processing. The paper introduces two iterative methods, UpenMM and GUpenMM, formulated within the Majorization-Minimization (MM) framework. These methods are designed to identify appropriate regularization parameters and solutions for linear inverse problems utilizing multi-penalty regularization. The paper demonstrates the convergence of these methods and illustrates their potential through numerical examples in one- and two-dimensional scenarios, showing the practical utility of point-wise regularization terms in solving various inverse problems.
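The flavor of point-wise (multi-penalty) regularization can be sketched very loosely as alternating between a Tikhonov solve with one penalty parameter per component and a point-wise update of those parameters. This toy is an illustrative assumption of ours, not the authors' UpenMM or GUpenMM algorithms; all problem sizes and update rules here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multi-penalty Tikhonov: min ||A x - b||^2 + sum_i lam_i * x_i^2,
# alternating a normal-equations solve for x with a point-wise update
# of the lam_i (here lam_i inversely proportional to x_i^2, so that
# the individual penalty terms stay comparable in size).
m, n = 40, 20
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[3], x_true[12] = 2.0, -1.0
b = A @ x_true + 0.01 * rng.normal(size=m)

lam = np.ones(n)
for _ in range(20):
    x = np.linalg.solve(A.T @ A + np.diag(lam), A.T @ b)
    lam = 1.0 / (x ** 2 + 1e-3)        # point-wise penalty update

print(np.round(x, 2))
```

In this toy the per-component penalties let the two genuinely nonzero entries survive while the rest are shrunk toward zero, which is the practical appeal of point-wise regularization terms.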

We investigate the combinatorics of max-pooling layers, which are functions that downsample input arrays by taking the maximum over shifted windows of input coordinates, and which are commonly used in convolutional neural networks. We obtain results on the number of linearity regions of these functions by equivalently counting the number of vertices of certain Minkowski sums of simplices. We characterize the faces of such polytopes and obtain generating functions and closed formulas for the number of vertices and facets in a 1D max-pooling layer depending on the size of the pooling windows and stride, and for the number of vertices in a special case of 2D max-pooling.
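A linearity region of a max-pooling map corresponds to a pattern of argmax positions, one per window. The sketch below (our illustration, not the paper's polytope machinery) samples random inputs to a 1D max-pooling layer and counts the distinct argmax patterns observed, which gives a lower bound on the number of linearity regions.

```python
import numpy as np

# A 1D max-pooling layer with window size k and stride s is piecewise
# linear; each linearity region corresponds to one pattern of argmax
# positions across the windows. Sampling random inputs and collecting
# the distinct patterns lower-bounds the number of regions.
def maxpool_regions(n, k, s, trials=20_000, seed=0):
    rng = np.random.default_rng(seed)
    starts = range(0, n - k + 1, s)
    patterns = set()
    for x in rng.normal(size=(trials, n)):
        patterns.add(tuple(int(np.argmax(x[i:i + k])) for i in starts))
    return len(patterns)

# Non-overlapping windows (s = k): windows are independent, so the
# regions factorize, k choices per window.
print(maxpool_regions(n=6, k=2, s=2))   # 2 * 2 * 2 = 8 patterns
```

With overlapping windows the windows are no longer independent, and counting which patterns remain achievable is exactly where the vertex counts of Minkowski sums of simplices enter.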

We perturb a real matrix $A$ of full column rank, and derive lower bounds for the smallest singular values of the perturbed matrix, in terms of normwise absolute perturbations. Our bounds, which extend existing lower-order expressions, demonstrate the potential increase in the smallest singular values, and represent a qualitative model for the increase in the small singular values after a matrix has been downcast to a lower arithmetic precision. Numerical experiments confirm the qualitative validity of this model and its ability to predict singular value changes in the presence of decreased arithmetic precision.
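A quick numerical illustration of the downcasting phenomenon (our own construction, not one of the paper's experiments): build a full-column-rank matrix with a tiny smallest singular value, round it through float16, and compare the smallest singular values before and after.

```python
import numpy as np

# A full-column-rank matrix whose last column is nearly dependent on
# the others, giving a tiny smallest singular value. Rounding the
# entries to float16 acts as a normwise perturbation that can push the
# smallest singular values up, the qualitative model described above.
rng = np.random.default_rng(0)
B = rng.normal(size=(50, 9))
last = 0.1 * (B @ rng.normal(size=9)) + 1e-6 * rng.normal(size=50)
A = np.column_stack([B, last])

s_full = np.linalg.svd(A, compute_uv=False)
s_half = np.linalg.svd(A.astype(np.float16).astype(np.float64),
                       compute_uv=False)

print("smallest singular value in float64:", s_full[-1])
print("after a float16 round-trip        :", s_half[-1])
```

The near-dependence that produced the tiny singular value is destroyed by the rounding noise, so the smallest singular value of the rounded matrix is typically much larger.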

In this paper, a high-order approximation to Caputo-type time-fractional diffusion equations involving an initial-time singularity of the solution is proposed. First, we employ a numerical algorithm based on Lagrange polynomial interpolation to approximate the Caputo derivative on a non-uniform mesh. The truncation error rate and the optimal grading constant of the approximation on a graded mesh are then obtained as $\min\{4-\alpha,r\alpha\}$ and $\frac{4-\alpha}{\alpha}$, respectively, where $\alpha\in(0,1)$ is the order of the fractional derivative and $r\geq 1$ is the mesh grading parameter. Using this new approximation, a difference scheme for the Caputo-type time-fractional diffusion equation on a graded temporal mesh is formulated. The scheme is shown to be uniquely solvable for general $r$. We then derive the unconditional stability of the scheme on a uniform mesh. The convergence of the scheme, in particular for $r=1$, is analyzed for non-smooth solutions and established for smooth solutions. Finally, the accuracy of the scheme is verified by analyzing the error through a few numerical examples.
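The graded mesh and the error-rate formula above can be made concrete with a small sketch. Assuming the standard grading $t_j = T (j/N)^r$ (the exact mesh used in such schemes; the construction here is our illustration), the rate $\min\{4-\alpha, r\alpha\}$ is maximized at the grading constant $r = (4-\alpha)/\alpha$:

```python
import numpy as np

# Graded temporal mesh t_j = T * (j / N)^r, clustering points near the
# initial-time singularity at t = 0. With the optimal grading constant
# r = (4 - alpha) / alpha, the rate min(4 - alpha, r * alpha) attains
# its best value, 4 - alpha.
alpha, T, N = 0.5, 1.0, 16
r_opt = (4 - alpha) / alpha            # optimal grading constant

j = np.arange(N + 1)
t_uniform = T * (j / N)                # r = 1
t_graded = T * (j / N) ** r_opt

print("rate on uniform mesh (r = 1):", min(4 - alpha, 1 * alpha))
print("rate on graded mesh         :", min(4 - alpha, r_opt * alpha))
```

For $\alpha = 0.5$ the uniform mesh delivers only rate $\alpha = 0.5$, while the optimally graded mesh recovers the full rate $4 - \alpha = 3.5$.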

A rigidity circuit (in 2D) is a minimal dependent set in the rigidity matroid, i.e. a minimal graph supporting a non-trivial stress in any generic placement of its vertices in $\mathbb R^2$. Any rigidity circuit on $n\geq 5$ vertices can be obtained from rigidity circuits on fewer vertices by applying the combinatorial resultant (CR) operation. The inverse operation is called a combinatorial resultant decomposition (CR-decomp). Any rigidity circuit on $n\geq 5$ vertices can be successively decomposed into smaller circuits, until the complete graphs $K_4$ are reached. This sequence of CR-decomps has the structure of a rooted binary tree called the combinatorial resultant tree (CR-tree). A CR-tree encodes an elimination strategy for computing circuit polynomials via Sylvester resultants. Different CR-trees lead to elimination strategies that can vary greatly in time and memory consumption. It is an open problem to establish criteria for optimal CR-trees, or at least to characterize those CR-trees that lead to good elimination strategies. In [12] we presented an algorithm for enumerating CR-trees, together with algorithms for decomposing 3-connected rigidity circuits in polynomial time. In this paper we focus on those circuits that are not 3-connected, which we simply call 2-connected. In order to enumerate CR-decomps of 2-connected circuits $G$, a brute-force exponential-time search has to be performed among the subgraphs induced by the subsets of $V(G)$. This exponential-time bottleneck is not present in the 3-connected case. We argue that, to find a good elimination strategy, we do not have to account for all possible CR-decomps of 2-connected rigidity circuits; it suffices to account for those CR-decomps that are a 2-split, all of which can be enumerated in polynomial time. We present algorithms and computational evidence in support of this heuristic.
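The Sylvester resultant that drives the elimination can be computed as the determinant of the Sylvester matrix of two polynomials; it vanishes exactly when the polynomials share a root, which is what makes it an elimination tool. The numerical sketch below (our illustration on toy univariate polynomials, not a circuit-polynomial computation) assembles the matrix directly.

```python
import numpy as np

# Sylvester resultant of two univariate polynomials p, q (coefficient
# lists, highest degree first): the determinant of the Sylvester
# matrix, built from n shifted copies of p and m shifted copies of q,
# where m = deg p and n = deg q. It vanishes iff p and q share a root.
def sylvester_resultant(p, q):
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                 # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                 # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return np.linalg.det(S)

p = [1.0, -5.0, 6.0]    # (x - 2)(x - 3)
q = [1.0, -1.0, -2.0]   # (x - 2)(x + 1)
print(sylvester_resultant(p, q))   # ~0: p and q share the root x = 2
```

In the paper's setting the coefficients are themselves polynomials in the remaining variables, so the resultant eliminates one variable at a time along the CR-tree; the size of these intermediate resultants is what makes the choice of CR-tree matter.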

We present here a new splitting method to solve Lyapunov equations in Kronecker product form. Although the resulting matrix is of order $n^2$, each iteration demands only two operations with the matrix $A$: a multiplication of the form $(A-\sigma I) \tilde{B}$ and an inversion of the form $(A-\sigma I)^{-1}\tilde{B}$. We show that for a suitable choice of the parameter $\sigma$ the iteration matrix is such that all its eigenvalues are less than 1 in absolute value. Moreover, we present a theorem that enables us to obtain a good starting vector for the method.
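The Kronecker product form mentioned above is standard: the Lyapunov equation $AX + XA^T = -Q$ becomes the $n^2 \times n^2$ linear system $(I \otimes A + A \otimes I)\,\mathrm{vec}(X) = -\mathrm{vec}(Q)$. The sketch below solves this system directly to illustrate the formulation; it is not the authors' splitting iteration, whose point is precisely to avoid forming and factoring the $n^2$-order matrix.

```python
import numpy as np

# Lyapunov equation A X + X A^T = -Q in Kronecker form:
# (I (x) A + A (x) I) vec(X) = -vec(Q), using column-major vec, where
# vec(A X) = (I (x) A) vec(X) and vec(X A^T) = (A (x) I) vec(X).
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = np.eye(2)
n = A.shape[0]

K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
x = np.linalg.solve(K, -Q.flatten(order="F"))   # column-major vec
X = x.reshape((n, n), order="F")

residual = np.linalg.norm(A @ X + X @ A.T + Q)
print("residual:", residual)
```

Since $A$ is stable (eigenvalues $-2$ and $-3$), $K$ is nonsingular and the residual of the recovered $X$ is at round-off level.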

We consider a one-dimensional singularly perturbed fourth-order problem with the additional feature of a shift term. An expansion into a smooth term, boundary layers, and an inner layer yields a formal solution decomposition, which, together with a stability result, provides the estimates needed for the subsequent numerical analysis. Using classical layer-adapted meshes, we present a numerical method that achieves supercloseness and optimal convergence orders in the associated energy norm. We also consider coarser meshes in view of the weak layers. Some numerical examples conclude the paper and support the theory.
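For readers unfamiliar with layer-adapted meshes, a Shishkin-type construction is the classical example: half of the mesh points are packed into a thin interval near the layer. The sketch below shows one boundary layer at $x=0$ only; it is an illustration of the mesh idea, not the mesh used for the paper's fourth-order problem with boundary and inner layers.

```python
import numpy as np

# A Shishkin-type layer-adapted mesh on [0, 1] for a boundary layer of
# width O(eps) at x = 0: half the points lie uniformly in [0, tau]
# with transition point tau = min(1/2, C * eps * ln N), the rest
# uniformly in [tau, 1].
def shishkin_mesh(N, eps, C=2.0):
    tau = min(0.5, C * eps * np.log(N))
    fine = np.linspace(0.0, tau, N // 2 + 1)
    coarse = np.linspace(tau, 1.0, N - N // 2 + 1)
    return np.concatenate([fine, coarse[1:]])   # drop duplicated tau

mesh = shishkin_mesh(N=16, eps=1e-3)
print(len(mesh), mesh[:4])
```

When a layer is weak, as noted above, the fine part can often be coarsened without losing the convergence order, which is the motivation for the coarser meshes the abstract mentions.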

Information inequalities appear in many database applications such as query output size bounds, query containment, and implication between data dependencies. Recently Khamis et al. proposed to study the algorithmic aspects of information inequalities, including the information inequality problem: decide whether a linear inequality over entropies of random variables is valid. While the decidability of this problem is a major open question, applications often involve only inequalities that adhere to specific syntactic forms linked to useful semantic invariance properties. This paper studies the information inequality problem in different syntactic and semantic scenarios that arise from database applications. Focusing on the boundary between tractability and intractability, we show that the information inequality problem is coNP-complete if restricted to normal polymatroids, and in polynomial time if relaxed to monotone functions. We also examine syntactic restrictions related to query output size bounds, and provide an alternative proof, through monotone functions, for the polynomial-time computability of the entropic bound over simple sets of degree constraints.
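A concrete instance of the objects involved: a linear information inequality is a statement about entropies of a joint distribution, such as the Shannon (subadditivity) inequality $H(X) + H(Y) \geq H(X,Y)$. The snippet below checks it numerically on a small joint distribution; deciding validity of *general* linear inequalities over entropies is the open problem the abstract discusses.

```python
import math

# Check the subadditivity inequality H(X) + H(Y) >= H(X, Y) on a small
# joint distribution over pairs (X, Y). This particular inequality is
# a valid Shannon inequality; the hard problem is deciding validity of
# arbitrary linear inequalities over entropies.
joint = {("a", 0): 0.4, ("a", 1): 0.1, ("b", 0): 0.2, ("b", 1): 0.3}

def H(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    out = {}
    for key, p in joint.items():
        out[key[idx]] = out.get(key[idx], 0.0) + p
    return out

hx = H(marginal(joint, 0))
hy = H(marginal(joint, 1))
hxy = H(joint)
print(hx + hy - hxy >= 0)   # True; the slack equals I(X; Y) >= 0
```

The slack $H(X) + H(Y) - H(X,Y)$ is the mutual information $I(X;Y)$, which is strictly positive here because $X$ and $Y$ are dependent.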

We give a short survey of recent results on sparse-grid linear algorithms for the approximate recovery and integration of functions possessing unweighted or weighted Sobolev mixed smoothness, based on their sampled values at a certain finite set. Some of these results are extended to more general settings.

北京阿比特科技有限公司