An arborescence, the directed analogue of a spanning tree in an undirected graph, is one of the most fundamental combinatorial objects in a digraph. In this paper, we study arborescences in digraphs from the viewpoint of combinatorial reconfiguration, the field that studies reachability between two configurations of combinatorial objects via specified operations. In particular, we consider reconfiguration problems for time-respecting arborescences, which were introduced by Kempe, Kleinberg, and Kumar. We first prove that if the roots of the initial and target time-respecting arborescences are the same, then the target arborescence is always reachable from the initial one, and we can find a shortest reconfiguration sequence in polynomial time. Furthermore, we show that if the roots are not the same, then the target arborescence may not be reachable from the initial one. On the other hand, we show that we can determine in polynomial time whether the target arborescence is reachable from the initial one. Finally, we prove that it is NP-hard to find a shortest reconfiguration sequence when the roots are not the same. Our results show an interesting contrast to previous results for reconfiguration problems on (ordinary) arborescences.
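
The notion of a time-respecting arborescence can be made concrete with a small check: edge time labels must be non-decreasing along every path from the root. A minimal sketch, assuming a parent-map representation of my own choosing (not from the paper):

```python
def is_time_respecting(parent, label, root):
    """Check whether a labeled arborescence is time-respecting.

    `parent` maps each non-root vertex v to its parent; `label[v]` is the
    time label of the edge (parent[v], v).  Labels are non-decreasing
    along every root-to-vertex path iff they are non-decreasing across
    every pair of consecutive edges.
    """
    return all(label[parent[v]] <= label[v]
               for v in parent if parent[v] != root)

# Toy arborescence rooted at "a": a -> b -> {c, d}
parent = {"b": "a", "c": "b", "d": "b"}
print(is_time_respecting(parent, {"b": 1, "c": 2, "d": 3}, "a"))  # → True
print(is_time_respecting(parent, {"b": 1, "c": 2, "d": 0}, "a"))  # → False
```

The local pairwise check suffices because non-decreasing labels between each edge and its parent edge compose transitively along any root-to-leaf path.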

This article proposes a new information-theoretic necessary condition for reconstructing a discrete random variable $X$ from the knowledge of a set of discrete functions of $X$. The reconstruction condition is derived from Shannon's Lattice of Information (LoI) \cite{Shannon53} and two entropic metrics proposed respectively by Shannon and Rajski. Since this theoretical material is relatively unknown and dispersed across different references, we provide a complete and synthetic description of the LoI concepts, such as total, common and complementary information, with complete proofs. The definitions and properties of the two entropic metrics are also fully detailed and shown to be compatible with the LoI structure. A new geometric interpretation of the lattice structure is then investigated, leading to a new necessary condition for reconstructing the discrete random variable $X$ given a set $\{X_0,\dots,X_{n-1}\}$ of elements of the lattice generated by $X$. Finally, this condition is derived in five specific examples of reconstruction of $X$ from a set of deterministic functions of $X$: the reconstruction of a symmetric random variable from the knowledge of its sign and absolute value, the reconstruction of a binary word from a set of binary linear combinations, the reconstruction of an integer from its prime signature (fundamental theorem of arithmetic) and from its remainders modulo a set of coprime integers (Chinese remainder theorem), and the reconstruction of the sorting permutation of a list from a set of pairwise comparisons. In each case, the necessary condition is shown to be compatible with the corresponding well-known results.
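
The Chinese-remainder example mentioned in the abstract can be made concrete: an integer below the product of pairwise coprime moduli is uniquely reconstructible from its remainders. This is illustrative code, not from the article; the function name and toy moduli are my own choices.

```python
from math import prod

def crt_reconstruct(remainders, moduli):
    """Reconstruct x (mod prod(moduli)) from its remainders modulo a set
    of pairwise coprime integers, via the Chinese remainder theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        Mi = M // m
        # Modular inverse of Mi modulo m (Python 3.8+: pow with exponent -1)
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# x = 23 is uniquely determined mod 3*5*7 = 105 by its remainders:
moduli = [3, 5, 7]
remainders = [23 % m for m in moduli]  # [2, 3, 2]
print(crt_reconstruct(remainders, moduli))  # → 23
```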

Context. Algorithmic racism is the term used to describe the behavior of technological solutions that constrain users based on their ethnicity. Lately, various data-driven software systems have been reported to discriminate against Black people, either through the use of biased data sets or due to prejudice propagated by software professionals in their code. As a result, Black people are experiencing disadvantages in accessing technology-based services, such as housing, banking, and law enforcement. Goal. This study aims to explore algorithmic racism from the perspective of software professionals. Method. A survey questionnaire was administered to explore software practitioners' understanding of algorithmic racism, and data analysis was conducted using descriptive statistics and coding techniques. Results. We obtained answers from a sample of 73 software professionals discussing their understanding and perspectives on algorithmic racism in software development. Our results demonstrate that the effects of algorithmic racism are well-known among practitioners. However, there is no consensus on how the problem can be effectively addressed in software engineering. In this paper, some solutions to the problem are proposed based on the professionals' narratives. Conclusion. Combining technical and social strategies, including training on structural racism for software professionals, is the most promising way to address the algorithmic racism problem and its effects on the software solutions delivered to our society.

Separating signals from an additive mixture may be an unnecessarily hard problem when one is only interested in specific properties of a given signal. In this work, we tackle simpler "statistical component separation" problems that focus on recovering a predefined set of statistical descriptors of a target signal from a noisy mixture. Assuming access to samples of the noise process, we investigate a method devised to match the statistics of the solution candidate corrupted by noise samples with those of the observed mixture. We first analyze the behavior of this method using simple examples with analytically tractable calculations. Then, we apply it in an image denoising context employing 1) wavelet-based descriptors and 2) ConvNet-based descriptors on astrophysics and ImageNet data. In the case of 1), we show that our method better recovers the descriptors of the target data than a standard denoising method in most situations. Additionally, despite not being constructed for this purpose, it performs surprisingly well in terms of peak signal-to-noise ratio on full signal reconstruction. In comparison, representation 2) appears less suitable for image denoising. Finally, we extend this method by introducing a diffusive stepwise algorithm, which gives a new perspective on the initial method and leads to promising results for image denoising under specific circumstances.

We study statistical inference for the optimal transport (OT) map (also known as the Brenier map) from a known absolutely continuous reference distribution onto an unknown finitely discrete target distribution. We derive limit distributions for the $L^p$-error with arbitrary $p \in [1,\infty)$ and for linear functionals of the empirical OT map, together with their moment convergence. The former has a non-Gaussian limit, whose explicit density is derived, while the latter attains asymptotic normality. For both cases, we also establish consistency of the nonparametric bootstrap. The derivation of our limit theorems relies on new stability estimates of functionals of the OT map with respect to the dual potential vector, which may be of independent interest. We also discuss applications of our limit theorems to the construction of confidence sets for the OT map and inference for a maximum tail correlation.
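
The setting of a continuous reference pushed onto a finitely discrete target can be sketched in dimension one, where the OT (Brenier) map is the monotone map sending reference quantile intervals of mass $p_j$ onto the atoms $y_j$. The atoms, weights, and sample size below are hypothetical toy values, not from the paper.

```python
import numpy as np

# Toy 1D setting: reference = Uniform(0,1), target = sum_j p_j * delta_{y_j}.
atoms = np.array([-1.0, 0.5, 2.0])    # hypothetical support points y_j
weights = np.array([0.2, 0.5, 0.3])   # target probabilities p_j
cum = np.cumsum(weights)              # [0.2, 0.7, 1.0]

def ot_map(x):
    """Monotone OT map from Uniform(0,1) onto the discrete target:
    x in [P_{j-1}, P_j) is sent to atom y_j."""
    idx = np.searchsorted(cum, x, side="right").clip(max=len(atoms) - 1)
    return atoms[idx]

# Empirical check: pushing uniform samples forward recovers the weights.
rng = np.random.default_rng(0)
pushed = ot_map(rng.uniform(size=100_000))
print([np.mean(pushed == y) for y in atoms])  # ≈ [0.2, 0.5, 0.3]
```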

Harrell's concordance index is a commonly used discrimination metric for survival models, particularly for models where the relative ordering of the risk of individuals is time-independent, such as the proportional hazards model. There are several suggestions, but no consensus, on how it could be extended to models where relative risk can vary over time, e.g.\ in the case of crossing hazard rates. We show that these concordance indices are not proper, in the sense that they are not necessarily maximised in the limit by the true data-generating model. Furthermore, we show that a concordance index is proper if and only if the risk score used is concordant with the hazard rate at the first event time for each comparable pair of events. Thus, we suggest using the hazard rate as the time-varying risk score when calculating concordance. Through simulations, we demonstrate situations in which other concordance indices can lead to incorrect models being selected over the true model, justifying the use of our suggested risk prediction both in model selection and in loss functions in, e.g., deep learning models.
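
As background, a minimal sketch of the baseline metric with a fixed (time-independent) risk score; the time-varying variants discussed in the abstract generalize this. The toy data and the simplified handling of ties and comparability are my own illustration, not the paper's definition.

```python
import numpy as np

def harrell_c(time, event, risk):
    """Harrell's concordance index: fraction of comparable pairs whose
    risk ordering agrees with the ordering of observed event times.
    Simplified: pair (i, j) is comparable when time[i] < time[j] and i
    had an observed event; ties in risk count as 1/2."""
    conc, n_pairs = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue  # a censored subject cannot be the earlier member
        for j in range(len(time)):
            if time[i] < time[j]:
                n_pairs += 1
                if risk[i] > risk[j]:
                    conc += 1.0
                elif risk[i] == risk[j]:
                    conc += 0.5
    return conc / n_pairs

# Toy data: risk perfectly reversed w.r.t. event times gives C = 1.
time  = np.array([1.0, 2.0, 3.0, 4.0])
event = np.array([True, True, False, True])
risk  = np.array([4.0, 3.0, 2.0, 1.0])
print(harrell_c(time, event, risk))  # → 1.0
```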

Given an arbitrary basis for a mathematical lattice, finding a ``good'' basis for it is one of the classic and important algorithmic problems. In this note, we give a new and simpler proof of a theorem by Regavim (arXiv:2106.03183): we construct an 18-dimensional lattice that does not have a basis satisfying the following two properties simultaneously: 1. The basis includes the shortest non-zero lattice vector. 2. The basis is shortest, that is, it minimizes the longest basis vector (alternatively: the sum or the sum of squares of the basis vectors). The vectors' length can be measured in any $\ell^q$ norm, for $q\in \mathbb{N}_+$ (albeit via another lattice, of a somewhat larger dimension).

This study addresses a fundamental, yet overlooked, gap between standard theory and empirical modelling practices in the OLS regression model $\boldsymbol{y}=\boldsymbol{X\beta}+\boldsymbol{u}$ with collinearity. In practice, an estimated model is desired to have stability and efficiency in its "individual OLS estimates", yet $\boldsymbol{y}$ itself has no capacity to identify and control the collinearity in $\boldsymbol{X}$, and hence no theory, including a model selection process (MSP), can fill this gap unless $\boldsymbol{X}$ is controlled from the viewpoint of sampling theory. In this paper, first introducing a new concept of "empirically effective modelling" (EEM), we propose our EEM methodology (EEM-M) as an integrated process of two MSPs with data $(\boldsymbol{y^o,X})$ given. The first MSP, called the XMSP, uses $\boldsymbol{X}$ only and pre-selects a class $\mathscr{D}$ of models with individually inefficiency-controlled and collinearity-controlled OLS estimates, where the two corresponding controlling variables are derived from the predictive standard error of each estimate. Next, defining an inefficiency-collinearity risk index for each model, a partial ordering is introduced on the set of models so that they can be compared without using $\boldsymbol{y^o}$, and the betterness and admissibility of models are discussed. The second MSP is a commonly used MSP that uses $(\boldsymbol{y^o,X})$ and evaluates total model performance as a whole by criteria such as AIC and BIC to select an optimal model from $\mathscr{D}$. Third, to materialize the XMSP, two algorithms are proposed.

For a given function $F$ from $\mathbb F_{p^n}$ to itself, determining whether there exists a function which is CCZ-equivalent but EA-inequivalent to $F$ is a very important and interesting problem. For example, K\"olsch \cite{KOL21} showed that there is no function which is CCZ-equivalent but EA-inequivalent to the inverse function. On the other hand, for the cases of the Gold function $F(x)=x^{2^i+1}$ and $F(x)=x^3+{\rm Tr}(x^9)$ over $\mathbb F_{2^n}$, Budaghyan, Carlet and Pott (respectively, Budaghyan, Carlet and Leander) \cite{BCP06, BCL09FFTA} found functions which are CCZ-equivalent but EA-inequivalent to $F$. In this paper, when a given function $F$ has a component function with a linear structure, we present functions which are CCZ-equivalent to $F$, and if suitable conditions are satisfied, the constructed functions are shown to be EA-inequivalent to $F$. As a consequence, for every quadratic function $F$ on $\mathbb F_{2^n}$ ($n\geq 4$) with nonlinearity $>0$ and differential uniformity $\leq 2^{n-3}$, we explicitly construct functions which are CCZ-equivalent but EA-inequivalent to $F$. Also, for every non-planar quadratic function on $\mathbb F_{p^n}$ $(p>2, n\geq 4)$ with $|\mathcal W_F|\leq p^{n-1}$ and differential uniformity $\leq p^{n-3}$, we explicitly construct functions which are CCZ-equivalent but EA-inequivalent to $F$.

Entropic optimal transport (EOT) presents an effective and computationally viable alternative to unregularized optimal transport (OT), offering diverse applications for large-scale data analysis. In this work, we derive novel statistical bounds for empirical plug-in estimators of the EOT cost and show that their statistical performance in the entropy regularization parameter $\epsilon$ and the sample size $n$ only depends on the simpler of the two probability measures. For instance, under sufficiently smooth costs this yields the parametric rate $n^{-1/2}$ with factor $\epsilon^{-d/2}$, where $d$ is the minimum dimension of the two population measures. This confirms that empirical EOT also adheres to the lower complexity adaptation principle, a hallmark feature only recently identified for unregularized OT. As a consequence of our theory, we show that the empirical entropic Gromov-Wasserstein distance and its unregularized version for measures on Euclidean spaces also obey this principle. Additionally, we comment on computational aspects and complement our findings with Monte Carlo simulations. Our techniques employ empirical process theory and rely on a dual formulation of EOT over a single function class. Crucial to our analysis is the observation that the entropic cost-transformation of a function class does not increase its uniform metric entropy by much.
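
The plug-in estimator studied here can be sketched with standard Sinkhorn iterations on two empirical measures. This illustrates the estimator itself, not the paper's statistical bounds; the sample sizes, regularization $\epsilon$, and squared-Euclidean cost below are arbitrary choices of mine.

```python
import numpy as np

def sinkhorn_eot_cost(a, b, C, eps, n_iter=1000):
    """Plug-in EOT cost between discrete probability vectors a, b with
    cost matrix C, computed by Sinkhorn fixed-point iterations on the
    scaling vectors u, v of the Gibbs kernel K = exp(-C/eps)."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # entropic optimal plan
    return np.sum(P * C)

# Empirical measures from samples of unequal size; squared-Euclidean cost.
rng = np.random.default_rng(1)
x = rng.normal(size=(50, 2))
y = rng.normal(size=(80, 2)) + 1.0
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
a = np.full(50, 1 / 50)
b = np.full(80, 1 / 80)
print(sinkhorn_eot_cost(a, b, C, eps=0.5))
```

For small $\epsilon$ this approaches the unregularized OT cost; in practice a log-domain implementation is preferred for very small $\epsilon$ to avoid underflow in $K$.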

Knowledge graph embedding (KGE) is an increasingly popular technique that aims to represent entities and relations of knowledge graphs in low-dimensional semantic spaces for a wide spectrum of applications such as link prediction, knowledge reasoning and knowledge completion. In this paper, we provide a systematic review of existing KGE techniques based on representation spaces. In particular, we build a fine-grained classification to categorise the models based on three mathematical perspectives on the representation spaces: (1) the algebraic perspective, (2) the geometric perspective, and (3) the analytical perspective. We introduce rigorous definitions of the fundamental mathematical spaces before diving into KGE models and their mathematical properties. We further discuss different KGE methods across the three categories, and summarise how spatial advantages serve different embedding needs. By collating experimental results from downstream tasks, we also explore the advantages of mathematical spaces in different scenarios and the reasons behind them. We further state some promising research directions from a representation-space perspective, with which we hope to inspire researchers to design their KGE models, as well as related applications, with more consideration of the mathematical properties of the spaces involved.
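
As a toy illustration of the algebraic perspective, consider a TransE-style translational scoring function, one of the classical Euclidean KGE models: a triple $(h, r, t)$ is plausible when $h + r \approx t$ in $\mathbb{R}^d$. The entities, relation, and hand-planted embeddings below are hypothetical (hand-set rather than trained).

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
ent = {e: rng.normal(size=dim) for e in ["paris", "tokyo", "japan"]}
rel = {"capital_of": rng.normal(size=dim)}
# Plant the fact (paris, capital_of, france) exactly, so its
# translation error is zero by construction:
ent["france"] = ent["paris"] + rel["capital_of"]

def score(h, r, t):
    """Negative L2 translation error ||h + r - t||; higher = more plausible."""
    return -np.linalg.norm(ent[h] + rel[r] - ent[t])

print(score("paris", "capital_of", "france") == 0.0)   # → True (planted)
print(score("paris", "capital_of", "japan") < 0.0)     # → True
```

In a trained model the embeddings are learned by minimizing a margin-based ranking loss over observed versus corrupted triples; link prediction then ranks candidate tails $t$ by this score.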
