
Inferring the means in the multivariate normal model $X \sim N_n(\theta, I)$ with unknown mean vector $\theta=(\theta_1,...,\theta_n)' \in \mathbb{R}^n$ and observed data $X=(X_1,...,X_n)'\in \mathbb{R}^n$ is a challenging task, known as the problem of many normal means (MNMs). This paper tackles two fundamental kinds of MNMs within the framework of Inferential Models (IMs). The first kind, referred to as the {\it classic} kind, is presented as is. The second kind, referred to as the {\it empirical Bayes} kind, assumes that the individual means $\theta_i$'s are drawn independently {\it a priori} from an unknown distribution $G(\cdot)$. The IM formulation for the empirical Bayes kind utilizes numerical deconvolution, enabling prior-free probabilistic inference with over-parameterization for $G(\cdot)$. The IM formulation for the classic kind, on the other hand, utilizes a latent random permutation, providing a novel approach for reasoning with uncertainty and deeper understanding. For uncertainty quantification within the familiar frequentist inference framework, the IM method of maximum plausibility is used for point estimation. Conservative interval estimation is obtained based on plausibility, using a Monte Carlo-based adaptive adjustment approach to construct shorter confidence intervals with targeted coverage. These methods are demonstrated through simulation studies and a real-data example. The numerical results show that the proposed methods for point estimation outperform traditional James-Stein and Efron's $g$-modeling in terms of mean square error, and the adaptive intervals are satisfactory in both coverage and efficiency. The paper concludes with suggestions for future developments and extensions of the proposed methods.
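For concreteness, the following is a minimal Python sketch of the many-normal-means setup together with the (positive-part) James-Stein baseline against which the proposed estimators are compared; the choice of $G$ and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance of X ~ N_n(theta, I): each theta_i is drawn from an
# assumed G = N(0, 4), mimicking the empirical Bayes kind of MNM problem.
n = 500
theta = rng.normal(0.0, 2.0, size=n)
X = theta + rng.standard_normal(n)

# Positive-part James-Stein estimator, shrinking toward zero (needs n >= 3).
shrink = max(0.0, 1.0 - (n - 2) / np.sum(X**2))
theta_js = shrink * X

print(f"MLE MSE:         {np.mean((X - theta) ** 2):.3f}")
print(f"James-Stein MSE: {np.mean((theta_js - theta) ** 2):.3f}")
```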

Related Content


For a set of $p$-variate data points $\boldsymbol y_1,\ldots,\boldsymbol y_n$, several versions of the multivariate median and the related multivariate sign test have been proposed and studied in the literature. In this paper we consider the asymptotic properties of the multivariate extension of the Hodges-Lehmann (HL) estimator, the spatial HL-estimator, and the related test statistic. The asymptotic results for the spatial HL-estimator and the related test statistic as $n$ tends to infinity are collected, reviewed, and proved, some of them for the first time even though they have been in use for a long time. We also derive the limiting behavior of the HL-estimator when both the sample size $n$ and the dimension $p$ tend to infinity.
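As a numerical illustration, here is a small Python sketch assuming the usual definition of the spatial HL-estimator as the spatial median of the pairwise (Walsh) averages $(\boldsymbol y_i+\boldsymbol y_j)/2$, computed with Weiszfeld's iteration; this is an illustrative sketch, not the paper's asymptotic machinery.

```python
import numpy as np
from itertools import combinations

def spatial_median(points, iters=200, eps=1e-10):
    """Weiszfeld iteration for the spatial (geometric) median."""
    m = points.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(points - m, axis=1)
        d = np.maximum(d, eps)                 # guard against division by zero
        w = 1.0 / d
        m_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < eps:
            break
        m = m_new
    return m

def spatial_hl_estimator(y):
    """Spatial median of the pairwise averages (y_i + y_j)/2, i < j."""
    pairs = np.array([(y[i] + y[j]) / 2.0
                      for i, j in combinations(range(len(y)), 2)])
    return spatial_median(pairs)

rng = np.random.default_rng(1)
y = rng.standard_t(df=3, size=(100, 4))  # heavy-tailed 4-variate sample
print(spatial_hl_estimator(y))
```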

A parameterized string (p-string) is a string over an alphabet $(\Sigma_{s} \cup \Sigma_{p})$, where $\Sigma_{s}$ and $\Sigma_{p}$ are disjoint alphabets for static symbols (s-symbols) and for parameter symbols (p-symbols), respectively. Two p-strings $x$ and $y$ are said to parameterized match (p-match) if and only if $x$ can be transformed into $y$ by applying a bijection on $\Sigma_{p}$ to every occurrence of p-symbols in $x$. The indexing problem for p-matching is to preprocess a p-string $T$ of length $n$ so that we can efficiently find the occurrences of substrings of $T$ that p-match with a given pattern. Extending the Burrows-Wheeler Transform (BWT) based index for exact string pattern matching, Ganguly et al. [SODA 2017] proposed the first compact index (named pBWT) for p-matching, and posed an open problem on how to construct it in compact space, i.e., in $O(n \lg |\Sigma_{s} \cup \Sigma_{p}|)$ bits of space. Hashimoto et al. [SPIRE 2022] partially solved this problem by showing how to construct some components of pBWTs for $T$ in $O(n \frac{|\Sigma_{p}| \lg n}{\lg \lg n})$ time in an online manner while reading the symbols of $T$ from right to left. In this paper, we improve the time complexity to $O(n \frac{\lg |\Sigma_{p}| \lg n}{\lg \lg n})$. We remark that removing the multiplicative factor of $|\Sigma_{p}|$ from the complexity is of great interest because it has not been achieved for over a decade in the construction of related data structures like parameterized suffix arrays even in the offline setting. We also show that our data structure can support backward search, a core procedure of BWT-based indexes, at any stage of the online construction, making it the first compact index for p-matching that can be constructed in compact space and even in an online manner.
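For intuition about the p-matching relation itself, the following is a small Python sketch of the standard prev-encoding characterization (two p-strings p-match exactly when their prev-encodings coincide); the alphabets and strings are hypothetical examples.

```python
def prev_encoding(s, p_symbols):
    """Prev-encoding: s-symbols stay as-is; each p-symbol occurrence is
    replaced by the distance to its previous occurrence (0 if first)."""
    last = {}
    out = []
    for i, ch in enumerate(s):
        if ch in p_symbols:
            out.append(i - last[ch] if ch in last else 0)
            last[ch] = i
        else:
            out.append(ch)
    return out

def p_match(x, y, p_symbols):
    """Two p-strings p-match iff their prev-encodings coincide."""
    return len(x) == len(y) and \
        prev_encoding(x, p_symbols) == prev_encoding(y, p_symbols)

# s-symbols {'a','b'}, p-symbols {'x','y','z'}; bijection x->y, y->z works.
print(p_match("axbxy", "aybyz", {"x", "y", "z"}))  # True
```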

In the online facility assignment problem on a line, ${\rm OFAL}(S,c)$, with a set $S$ of $k$ servers and a capacity function $c:S\to\mathbb{N}$, each server $s\in S$ with capacity $c(s)$ is placed on a line, and requests arrive on the line one by one. The task of an online algorithm is to irrevocably match the current request with one of the servers with vacancies before the next request arrives. An algorithm can match up to $c(s)$ requests to a server $s\in S$. In this paper, we propose a new online algorithm PTCP (Policy Transition at Critical Point) for $\mathrm{OFAL}(S,c)$ and show that PTCP is $(2\alpha(S)+1)$-competitive, where $\alpha(S)$ is, informally, the ratio of the diameter of $S$ to the maximum distance between two adjacent servers in $S$. Depending on the layout of servers, $\alpha(S)$ ranges from constant (independent of $k$) to $k-1$. Among all known algorithms for $\mathrm{OFAL}(S,c)$, this upper bound on the competitive ratio is the best when $\alpha(S)$ is small. We also show that the competitive ratio of any MPFS (Most Preferred Free Servers) algorithm is at least $2\alpha(S)+1$. Recall that MPFS is a class of algorithms for $\mathrm{OFAL}(S,c)$ whose competitive ratio does not depend on the capacity $c$; it includes the natural greedy algorithm and PTCP, among others. Thus, PTCP is optimal for $\mathrm{OFAL}(S,c)$ within the class MPFS.
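As a toy illustration of the MPFS class (not of PTCP itself), here is a minimal Python sketch of the natural greedy rule that matches each arriving request to the nearest server with a vacancy; servers, capacities, and requests are hypothetical.

```python
def greedy_assign(servers, capacities, requests):
    """Natural greedy MPFS rule: match each request, in arrival order,
    to the nearest server on the line that still has a vacancy."""
    remaining = dict(zip(servers, capacities))  # assumes distinct positions
    assignment = []
    for r in requests:
        free = [s for s, c in remaining.items() if c > 0]
        if not free:
            raise ValueError("no vacancies left")
        s = min(free, key=lambda s: abs(s - r))
        remaining[s] -= 1
        assignment.append((r, s))
    return assignment

# Three servers at 0, 1, 10 with capacity 2 each; requests arrive one by one.
print(greedy_assign([0.0, 1.0, 10.0], [2, 2, 2], [0.4, 0.6, 9.0, 0.5]))
```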

Critical points mark locations in the domain where the level-set topology of a scalar function undergoes fundamental changes and thus indicate potentially interesting features in the data. Established methods exist to locate and relate such points in a deterministic setting, but it is less well understood how the concept of critical points can be extended to the analysis of uncertain data. Most methods for this task aim at finding likely locations of critical points or at estimating the probability of their occurrence locally, but they do not indicate whether critical points at potentially different locations in different realizations of a stochastic process are manifestations of the same feature, which is required to characterize the spatial uncertainty of critical points. Previous work on relating critical points across different realizations reported challenges in interpreting the resulting spatial distribution of critical points but did not investigate the causes. In this work, we provide a mathematical formulation of the problem of finding critical points with spatial uncertainty and computing their spatial distribution, which leads us to the notion of uncertain critical points. We analyze the theoretical properties of these structures and highlight connections to existing work for special classes of uncertain fields. We derive conditions under which well-interpretable results can be obtained and discuss the implications of those restrictions for the field of visualization. We demonstrate that the discussed limitations are not purely academic but also arise in real-world data.
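To make the per-realization view concrete, here is a toy Python sketch that detects discrete critical-point candidates (strict local extrema on a grid) independently in each realization of a noisy field; note that it does not relate points across realizations, which is exactly the gap discussed above. The field and noise level are hypothetical.

```python
import numpy as np

def local_extrema(f):
    """Grid-based critical point candidates: interior cells that are strict
    local minima or maxima over their 8-neighborhood."""
    mins, maxs = [], []
    for i in range(1, f.shape[0] - 1):
        for j in range(1, f.shape[1] - 1):
            nb = f[i-1:i+2, j-1:j+2].copy()
            c = nb[1, 1]
            nb[1, 1] = np.nan
            if c < np.nanmin(nb):
                mins.append((i, j))
            elif c > np.nanmax(nb):
                maxs.append((i, j))
    return mins, maxs

# Hypothetical ensemble: a smooth single-peak field plus noise; the spread of
# detected maxima across realizations hints at their spatial uncertainty.
rng = np.random.default_rng(2)
x = np.linspace(-2, 2, 40)
base = -np.add.outer(x**2, x**2)
for _ in range(3):
    f = base + 0.05 * rng.standard_normal(base.shape)
    print(local_extrema(f)[1])  # maxima locations in this realization
```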

Foundation models, such as OpenAI's GPT-3 and GPT-4, Meta's LLaMA, and Google's PaLM2, have revolutionized the field of artificial intelligence. A notable paradigm shift has been the advent of the Segment Anything Model (SAM), which, trained on 1 billion masks and 11 million images, has exhibited a remarkable capability to segment real-world objects. Although SAM excels in general object segmentation, it lacks the intrinsic ability to detect salient objects, resulting in suboptimal performance in this domain. To address this challenge, we present the Segment Salient Object Model (SSOM), an innovative approach that adaptively fine-tunes SAM for salient object detection by harnessing the low-rank structure inherent in deep learning. Comprehensive qualitative and quantitative evaluations across five challenging RGB benchmark datasets demonstrate the superior performance of our approach, surpassing state-of-the-art methods.
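The abstract does not spell out the fine-tuning mechanics, but "harnessing the low-rank structure" suggests adapter-style low-rank updates; below is a hedged PyTorch sketch of a generic low-rank adaptation of a frozen linear layer, purely illustrative and not the SSOM architecture.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Low-rank adaptation of a frozen linear layer:
    y = Wx + (alpha/r) * B(Ax). Only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False           # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(256, 256))
print(layer(torch.randn(4, 256)).shape)  # torch.Size([4, 256])
```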

Given a target distribution $\pi$ and an arbitrary Markov infinitesimal generator $L$ on a finite state space $\mathcal{X}$, we develop three structured and inter-related approaches to generate new reversiblizations from $L$. The first approach hinges on a geometric perspective, in which we view reversiblizations as projections onto the space of $\pi$-reversible generators under suitable information divergences such as $f$-divergences. With different choices of functions $f$, we not only recover nearly all established reversiblizations but also unravel and generate new reversiblizations. Along the way, we unveil interesting geometric results such as bisection properties, Pythagorean identities, parallelogram laws and a Markov chain counterpart of the arithmetic-geometric-harmonic mean inequality governing these reversiblizations. This further serves as motivation for introducing the notion of information centroids of a sequence of Markov chains and for giving conditions for their existence and uniqueness. Building upon the first approach, we view reversiblizations as generalized means. In this second approach, we construct new reversiblizations via different natural notions of generalized means such as the Cauchy mean or the dual mean. In the third approach, we combine the recently introduced locally-balanced Markov processes framework and the notion of convex $*$-conjugate in the study of $f$-divergence. The latter offers a rich source of balancing functions to generate new reversiblizations.
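As a concrete instance of the reversiblizations being generalized, here is a small numpy sketch of the classical additive reversiblization $\frac{1}{2}(L + L^*)$, where $L^*$ is the $\pi$-adjoint; the generator below is a hypothetical 3-state example, and $\pi$ is taken to be its stationary distribution so that the result is again a generator.

```python
import numpy as np

def additive_reversiblization(L, pi):
    """Return (L + L*)/2, where L* = Pi^{-1} L^T Pi is the pi-adjoint of L.
    If pi is stationary for L (pi @ L = 0), the result is again a generator
    and satisfies detailed balance with respect to pi."""
    Pi = np.diag(pi)
    L_adj = np.linalg.inv(Pi) @ L.T @ Pi
    return 0.5 * (L + L_adj)

# Hypothetical irreducible 3-state generator (rows sum to zero).
L = np.array([[-1.0,  0.7,  0.3],
              [ 0.2, -0.5,  0.3],
              [ 0.4,  0.1, -0.5]])

# Stationary distribution: the 0-eigenvalue left eigenvector of L, normalized.
w, v = np.linalg.eig(L.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

R = additive_reversiblization(L, pi)
print(np.allclose(np.diag(pi) @ R, (np.diag(pi) @ R).T))  # detailed balance
```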

Let $ \mathbf{B}_n =\frac{1}{n}(\mathbf{R}_n + \mathbf{T}^{1/2}_n \mathbf{X}_n)(\mathbf{R}_n + \mathbf{T}^{1/2}_n \mathbf{X}_n)^* $, where $ \mathbf{X}_n $ is a $ p \times n $ matrix with independent standardized random variables, $ \mathbf{R}_n $ is a $ p \times n $ non-random matrix representing the information, and $ \mathbf{T}_{n} $ is a $ p \times p $ non-random nonnegative definite Hermitian matrix. Under some conditions on $ \mathbf{R}_n \mathbf{R}_n^* $ and $ \mathbf{T}_n $, it has been proved that, for any closed interval outside the support of the limiting spectral distribution, with probability one no eigenvalues fall in this interval for all $ p $ sufficiently large. The purpose of this paper is to continue the study of the support of the limiting spectral distribution, and we show that there is an exact separation phenomenon: with probability one, the proper number of eigenvalues lies on either side of these intervals.
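A quick simulation sketch of the model (with hypothetical choices of $\mathbf{R}_n$ and $\mathbf{T}_n$) can make the structure of the spectrum visible empirically:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 200, 800

# Hypothetical instance of B_n = (1/n)(R + T^{1/2} X)(R + T^{1/2} X)*.
X = rng.standard_normal((p, n))                 # iid standardized entries
R = np.zeros((p, n)); R[: p // 4] = 3.0         # a simple "information" part
T_sqrt = np.diag(np.sqrt(np.linspace(0.5, 1.5, p)))

M = R + T_sqrt @ X
B = (M @ M.conj().T) / n
eig = np.sort(np.linalg.eigvalsh(B))
print(eig[:3], eig[-3:])  # inspect the spectrum for separated bulks
```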

We revisit the main result of Carmosino et al. \cite{CILM18}, which shows that an $\Omega(n^{\omega/2+\epsilon})$ lower bound on noncommutative arithmetic circuit size (where $\omega$ is the matrix multiplication exponent) for a constant-degree $n$-variate polynomial family $(g_n)_n$, where each $g_n$ is a noncommutative polynomial, can be ``lifted'' to an exponential circuit size lower bound for another polynomial family $(f_n)$ obtained from $(g_n)$ by a lifting process. In this paper, we present a simpler and more conceptual automata-theoretic proof of their result.

We study pseudo-polynomial time algorithms for the fundamental \emph{0-1 Knapsack} problem. In terms of $n$ and $w_{\max}$, previous algorithms for 0-1 Knapsack have cubic time complexities: $O(n^2w_{\max})$ (Bellman 1957), $O(nw_{\max}^2)$ (Kellerer and Pferschy 2004), and $O(n + w_{\max}^3)$ (Polak, Rohwedder, and W\k{e}grzycki 2021). On the other hand, fine-grained complexity only rules out $O((n+w_{\max})^{2-\delta})$ running time, and it is an important question in this area whether $\tilde O(n+w_{\max}^2)$ time is achievable. Our main result makes significant progress towards solving this question:

- The 0-1 Knapsack problem has a deterministic algorithm in $\tilde O(n + w_{\max}^{2.5})$ time.

Our techniques also apply to the easier \emph{Subset Sum} problem:

- The Subset Sum problem has a randomized algorithm in $\tilde O(n + w_{\max}^{1.5})$ time. This improves (and simplifies) the previous $\tilde O(n + w_{\max}^{5/3})$-time algorithm by Polak, Rohwedder, and W\k{e}grzycki (2021) (based on Galil and Margalit (1991), and Bringmann and Wellnitz (2021)).

Similar to recent works on Knapsack (and integer programs in general), our algorithms also utilize the \emph{proximity} between optimal integral solutions and fractional solutions. Our new ideas are as follows:

- Previous works used an $O(w_{\max})$ proximity bound in the $\ell_1$-norm. As our main conceptual contribution, we use an additive-combinatorial theorem by Erd\H{o}s and S\'{a}rk\"{o}zy (1990) to derive an $\ell_0$-proximity bound of $\tilde O(\sqrt{w_{\max}})$.

- Then, the main technical component of our Knapsack result is a dynamic programming algorithm that exploits both $\ell_0$- and $\ell_1$-proximity. It is based on a vast extension of the ``witness propagation'' method, originally designed by Deng, Mao, and Zhong (2023) for the easier \emph{unbounded} setting only.
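For context, the simplest of the cubic-time baselines above, Bellman's dynamic program, can be sketched in a few lines of Python:

```python
def knapsack_bellman(weights, profits, capacity):
    """Textbook Bellman dynamic program for 0-1 Knapsack, O(n * capacity) time;
    since capacity <= n * w_max in the interesting regime, this is the
    O(n^2 * w_max) baseline cited above."""
    dp = [0] * (capacity + 1)
    for w, p in zip(weights, profits):
        for c in range(capacity, w - 1, -1):  # reverse scan: each item used once
            dp[c] = max(dp[c], dp[c - w] + p)
    return dp[capacity]

print(knapsack_bellman([3, 4, 5], [30, 50, 60], 8))  # 90: take weights 3 and 5
```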

In multi-turn dialog, utterances do not always take the full form of sentences \cite{Carbonell1983DiscoursePA}, which naturally makes understanding the dialog context more difficult. However, it is essential to fully grasp the dialog context to generate a reasonable response. Hence, in this paper, we propose to improve the response generation performance by examining the model's ability to answer a reading comprehension question, where the question is focused on the omitted information in the dialog. Inspired by the multi-task learning scheme, we propose a joint framework that unifies these two tasks, sharing the same encoder to extract the common and task-invariant features, with different decoders to learn task-specific features. To better fuse information from the question and the dialog history during encoding, we propose to augment the Transformer architecture with a memory updater, which is designed to selectively store and update the dialog history so as to support downstream tasks. For the experiments, we employ human annotators to write and examine a large-scale dialog reading comprehension dataset. Extensive experiments are conducted on this dataset, and the results show that the proposed model brings substantial improvements over several strong baselines on both tasks. In this way, we demonstrate that reasoning can indeed help better response generation and vice versa. We release our large-scale dataset for further research.
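The paper's memory updater is not specified here, so the following PyTorch sketch is a hypothetical gated memory module of the general kind described: memory slots attend to the current utterance encoding and are then selectively overwritten through a learned gate.

```python
import torch
import torch.nn as nn

class MemoryUpdater(nn.Module):
    """Hypothetical gated memory updater (illustrative only, not the paper's
    exact architecture): memory slots attend to the utterance encoding, then
    a sigmoid gate blends old and new memory."""
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, memory, utterance):
        # memory: (B, M, d), utterance: (B, L, d)
        update, _ = self.attn(memory, utterance, utterance)
        z = torch.sigmoid(self.gate(torch.cat([memory, update], dim=-1)))
        return z * memory + (1 - z) * update  # selectively store/update history

mem = torch.randn(2, 8, 64)
utt = torch.randn(2, 20, 64)
print(MemoryUpdater(64)(mem, utt).shape)  # torch.Size([2, 8, 64])
```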
