Sep 23, 2013
by Nicholas Ruozzi; Sekhar Tatikonda
Gaussian belief propagation (GaBP) is an iterative algorithm for computing the mean of a multivariate Gaussian distribution, or equivalently, the minimum of a multivariate positive definite quadratic function. Sufficient conditions, such as walk-summability, that guarantee the convergence and correctness of GaBP are known, but GaBP may fail to converge to the correct solution given an arbitrary positive definite quadratic function. As was observed in previous work, the GaBP algorithm fails to...
Source: http://arxiv.org/abs/1212.0171v1
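To make the message updates concrete, the following is a minimal GaBP sketch (an illustrative toy, not the authors' code): for a symmetric positive definite matrix A, scalar precision/mean messages are passed along the edges defined by the nonzero off-diagonal entries, and the final beliefs give the solution of A x = b. The diagonally dominant test matrix below is walk-summable, so the iteration converges.

```python
import numpy as np

def gabp(A, b, iters=100):
    # Minimal Gaussian belief propagation sketch for solving A x = b.
    # Each directed edge (i, j) carries a precision P[i, j] and a
    # precision-weighted mean h[i, j].
    n = len(b)
    nbrs = {i: [j for j in range(n) if j != i and A[i, j] != 0.0]
            for i in range(n)}
    P = {(i, j): 0.0 for i in range(n) for j in nbrs[i]}
    h = {(i, j): 0.0 for i in range(n) for j in nbrs[i]}
    for _ in range(iters):
        for i in range(n):
            for j in nbrs[i]:
                # "cavity" marginal at i: node potential plus all
                # incoming messages except the one from j
                Pc = A[i, i] + sum(P[k, i] for k in nbrs[i] if k != j)
                hc = b[i] + sum(h[k, i] for k in nbrs[i] if k != j)
                P[i, j] = -A[i, j] ** 2 / Pc
                h[i, j] = -A[i, j] * hc / Pc
    # beliefs: combine each node potential with all incoming messages
    return np.array([(b[i] + sum(h[k, i] for k in nbrs[i]))
                     / (A[i, i] + sum(P[k, i] for k in nbrs[i]))
                     for i in range(n)])

A = np.array([[3.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
print(gabp(A, b), np.linalg.solve(A, b))  # the two should agree
```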
Jun 29, 2018
by Patrick Rebeschini; Sekhar Tatikonda
This paper investigates message-passing algorithms for solving systems of linear equations in the Laplacian matrices of graphs and for computing electric flows. These two problems are fundamental primitives that arise in several domains such as computer science, electrical engineering, operations research, and machine learning. Despite the extensive literature on approximately solving these problems in quasi-linear time, the algorithms that have been proposed are typically centralized and involve...
Topics: Data Structures and Algorithms, Machine Learning, Mathematics, Optimization and Control,...
Source: http://arxiv.org/abs/1611.07138
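For orientation, here is what the two primitives compute, done centrally with dense linear algebra (the paper's contribution concerns decentralized message-passing versions, which this toy deliberately ignores): a pseudoinverse Laplacian solve for node potentials, and the electric flow it induces on a unit-resistance triangle.

```python
import numpy as np

# Reference (centralized) computation on a toy graph: a triangle with
# unit resistances; inject 1 unit of current at node 0, extract at node 2.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3
B = np.zeros((n, len(edges)))            # signed node-edge incidence matrix
for e, (u, v) in enumerate(edges):
    B[u, e], B[v, e] = 1.0, -1.0
L = B @ B.T                              # graph Laplacian (unit conductances)
demand = np.array([1.0, 0.0, -1.0])      # current injections, sum to zero
potentials = np.linalg.pinv(L) @ demand  # solve the (singular) Laplacian system
flows = B.T @ potentials                 # edge flows = potential differences
print(flows)                             # 2/3 on edge (0, 2), 1/3 on the others
```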
Sep 20, 2013
by Nicholas Ruozzi; Sekhar Tatikonda
The max-product algorithm, a local message-passing scheme that attempts to compute the most probable assignment (MAP) of a given probability distribution, has been successfully employed as a method of approximate inference for applications arising in coding theory, computer vision, and machine learning. However, the max-product algorithm is not guaranteed to converge, and if it does, it is not guaranteed to recover the MAP assignment. Alternative convergent message-passing...
Source: http://arxiv.org/abs/1002.3239v3
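As a reminder of the basic scheme, here is a minimal max-product pass on a three-node chain with binary variables and toy potentials; on a tree like this the algorithm is exact, and the convergence and correctness issues the abstract raises only appear on graphs with cycles.

```python
import numpy as np

# Max-product on the chain x0 - x1 - x2 with binary states.
phi = [np.array([0.6, 0.4]),   # node potentials (toy values)
       np.array([0.5, 0.5]),
       np.array([0.3, 0.7])]
psi = np.array([[1.0, 0.5],    # shared edge potential psi(x_i, x_{i+1})
                [0.5, 1.0]])

# forward pass: each message maximizes over the sending variable
m01 = np.max(phi[0][:, None] * psi, axis=0)          # message x0 -> x1
m12 = np.max((phi[1] * m01)[:, None] * psi, axis=0)  # message x1 -> x2

# backtrack from the last node to recover the MAP assignment
x2 = int(np.argmax(phi[2] * m12))
x1 = int(np.argmax(phi[1] * m01 * psi[:, x2]))
x0 = int(np.argmax(phi[0] * psi[:, x1]))
print(x0, x1, x2)
```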
Sep 18, 2013
by Jian Ni; Sekhar Tatikonda
Inference of the network structure (e.g., routing topology) and dynamics (e.g., link performance) is an essential component in many network design and management tasks. In this paper we propose a new, general framework for analyzing and designing routing topology and link performance inference algorithms using ideas and tools from phylogenetic inference in evolutionary biology. The framework is applicable to a variety of measurement techniques. Based on the framework we introduce and develop...
Source: http://arxiv.org/abs/0809.0158v1
Jul 20, 2013
by Sekhar Tatikonda; Sanjoy Mitter
We introduce a general framework for treating channels with memory and feedback. First, we generalize Massey's concept of directed information and use it to characterize the feedback capacity of general channels. Second, we present coding results for Markov channels. This requires determining appropriate sufficient statistics at the encoder and decoder. Third, a dynamic programming framework for computing the capacity of Markov channels is presented. Fourth, it is shown that the average cost...
Source: http://arxiv.org/abs/cs/0609139v1
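For reference, Massey's directed information from an input block $X^N$ to an output block $Y^N$, the quantity the abstract generalizes, is defined by the standard sum of causal mutual information terms:

```latex
% Massey's directed information (standard definition)
I(X^N \to Y^N) \;=\; \sum_{n=1}^{N} I(X^n; Y_n \mid Y^{n-1})
```

Unlike the ordinary mutual information $I(X^N; Y^N)$, this sum only credits the influence of past and present inputs on the current output, which is why it is the natural quantity for channels with feedback.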
Sep 23, 2013
by Ramji Venkataramanan; Sekhar Tatikonda
We study a new class of codes for Gaussian multi-terminal source and channel coding. These codes are designed using the statistical framework of high-dimensional linear regression and are called Sparse Superposition or Sparse Regression codes. Codewords are linear combinations of subsets of columns of a design matrix. These codes were recently introduced by Barron and Joseph and shown to achieve the channel capacity of AWGN channels with computationally feasible decoding. They have also...
Source: http://arxiv.org/abs/1212.2125v1
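To illustrate the codebook structure, here is a hedged toy construction of a single sparse regression codeword (dimensions and scalings are illustrative, not the paper's): the design matrix is split into L sections of M columns each, the message selects one column per section, and the codeword is the superposition of the selected columns.

```python
import numpy as np

# Toy sparse regression codeword (illustrative dimensions).
rng = np.random.default_rng(0)
n, L, M = 32, 4, 8                       # block length, sections, columns/section
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, L * M))  # i.i.d. Gaussian design
picks = rng.integers(0, M, size=L)       # message: one column index per section
beta = np.zeros(L * M)
beta[np.arange(L) * M + picks] = 1.0     # one nonzero coefficient per section
codeword = A @ beta                      # superposition of the chosen columns
# the rate of such a codebook is (L log M) / n bits per channel use
```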
Aug 11, 2020
by Nicholas Ruozzi; Sekhar Tatikonda
Source: http://academictorrents.com/details/264e2cd52f86ee907b6b4f3d433b2c5e21cb6c9c
Sep 23, 2013
by Ramji Venkataramanan; Tuhin Sarkar; Sekhar Tatikonda
We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix, and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. The algorithm is shown to achieve the optimal distortion-rate function for i.i.d. Gaussian sources under the squared-error distortion...
Source: http://arxiv.org/abs/1212.1707v1
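The sequential choice the abstract describes can be sketched as follows; note that this toy replaces the paper's fixed, rate-determined step sizes with a simple matching-pursuit step, so it shows the shape of the algorithm rather than its exact analysis.

```python
import numpy as np

# Toy successive-approximation encoder over an L-section design matrix.
rng = np.random.default_rng(1)
n, L, M = 64, 8, 16
A = rng.normal(size=(n, L * M))
x = rng.normal(size=n)                    # i.i.d. Gaussian source sequence
residual = x.copy()
picks = []
for l in range(L):
    section = A[:, l * M:(l + 1) * M]
    section = section / np.linalg.norm(section, axis=0)  # unit-norm columns
    corr = section.T @ residual
    j = int(np.argmax(corr))              # column most aligned with the residual
    residual = residual - corr[j] * section[:, j]        # matching-pursuit update
    picks.append(j)
print(picks, np.linalg.norm(residual) / np.linalg.norm(x))
```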
Jul 20, 2013
by John Hartigan; David Pollard; Sekhar Tatikonda
The paper provides a simpler method for proving a delicate inequality that was used by Achlioptas and Naor to establish asymptotic concentration for chromatic numbers of Erdős-Rényi random graphs. The simplifications come from two new ideas. The first involves a sharpened form of a piece of statistical folklore regarding goodness-of-fit tests for two-way tables of Poisson counts under linear conditioning constraints. The second idea takes the form of a new inequality that controls the extreme...
Source: http://arxiv.org/abs/1107.3818v1
Sep 19, 2013
by Shaohua Yang; Aleksandar Kavcic; Sekhar Tatikonda
For a stationary additive Gaussian-noise channel with a rational noise power spectrum of finite order $L$, we derive two new results for the feedback capacity under an average channel input power constraint. First, we show that a very simple feedback-dependent Gauss-Markov source achieves the feedback capacity, and that Kalman-Bucy filtering is optimal for processing the feedback. Based on these results, we develop a new method for optimizing the channel inputs for achieving the Cover-Pombra...
Source: http://arxiv.org/abs/cs/0701099v1
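As a pointer to the filtering step, here is the scalar discrete-time Kalman filter for a Gauss-Markov state observed in Gaussian noise; it is the discrete-time cousin of the Kalman-Bucy filter the abstract says is optimal for processing the feedback (parameters are toy values, unrelated to the paper's channel).

```python
import numpy as np

a, q, r = 0.9, 0.1, 0.5          # state gain, process/observation noise variances
rng = np.random.default_rng(2)
x, xhat, p = 0.0, 0.0, 1.0       # true state, estimate, estimate error variance
for _ in range(100):
    x = a * x + rng.normal(0.0, np.sqrt(q))          # Gauss-Markov state evolution
    y = x + rng.normal(0.0, np.sqrt(r))              # noisy observation
    xhat, p = a * xhat, a * a * p + q                # predict
    k = p / (p + r)                                  # Kalman gain
    xhat, p = xhat + k * (y - xhat), (1.0 - k) * p   # update
print(xhat, p)
```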
Sep 18, 2013
by Giacomo Como; Serdar Yuksel; Sekhar Tatikonda
The error exponent of Markov channels with feedback is studied in the variable-length block-coding setting. Burnashev's classic result is extended, and a single-letter characterization of the reliability function of finite-state Markov channels is presented, under the assumption that the channel state is causally observed both at the transmitter and at the receiver side. Tools from stochastic control theory are used in order to treat channels with intersymbol interference. In particular the...
Source: http://arxiv.org/abs/0707.2014v1
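For context, the classic result being extended is Burnashev's reliability function for a discrete memoryless channel with noiseless feedback under variable-length coding (the standard statement, not this paper's Markov-channel generalization):

```latex
% Burnashev's reliability function for a DMC with feedback
E(R) = C_1\Bigl(1 - \frac{R}{C}\Bigr), \qquad 0 \le R \le C,
\qquad C_1 = \max_{x,\,x'} D\bigl(P(\cdot \mid x)\,\big\|\,P(\cdot \mid x')\bigr)
```

Here $C$ is the channel capacity and $D$ is the Kullback-Leibler divergence between the output distributions of two input letters.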
Sep 18, 2013
by Ramji Venkataramanan; Sekhar Tatikonda; Kannan Ramchandran
This paper considers a binary channel with deletions and insertions, where each input bit is transformed in one of the following ways: it is deleted with probability d, or an extra bit is added after it with probability i, or it is transmitted unmodified with probability 1-d-i. A computable lower bound on the capacity of this channel is derived. The transformation of the input sequence by the channel may be viewed in terms of runs as follows: some runs of the input sequence get shorter/longer,...
Source: http://arxiv.org/abs/1102.5112v4
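The channel transformation is simple to simulate. In the sketch below the extra bit is taken to be a copy of the transmitted bit (a duplication), which is one reading consistent with the run-length picture in the abstract; that choice is an assumption of this toy, not a claim about the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(3)

def insdel_channel(bits, d=0.1, i=0.1):
    # Each input bit is deleted w.p. d, transmitted with an extra copy
    # appended w.p. i, or transmitted unmodified w.p. 1 - d - i.
    out = []
    for b in bits:
        u = rng.random()
        if u < d:
            continue                     # deletion: the bit never arrives
        out.append(b)
        if u < d + i:
            out.append(b)                # insertion, modeled here as a duplicate
    return out

x = list(rng.integers(0, 2, size=20))
print(x)
print(insdel_channel(x))
```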
Sep 21, 2013
by Jialing Liu; Nicola Elia; Sekhar Tatikonda
In this paper, we propose capacity-achieving communication schemes for Gaussian finite-state Markov channels (FSMCs) subject to an average channel input power constraint, under the assumption that the transmitters can have access to delayed noiseless output feedback as well as instantaneous or delayed channel state information (CSI). We show that the proposed schemes reveal connections between feedback communication and feedback control.
Source: http://arxiv.org/abs/0808.2089v2
Sep 23, 2013
by Ramji Venkataramanan; Antony Joseph; Sekhar Tatikonda
We study a new class of codes for lossy compression with the squared-error distortion criterion, designed using the statistical framework of high-dimensional linear regression. Codewords are linear combinations of subsets of columns of a design matrix. Called a Sparse Superposition or Sparse Regression codebook, this structure is motivated by an analogous construction proposed recently by Barron and Joseph for communication over an AWGN channel. For i.i.d. Gaussian sources and minimum-distance...
Source: http://arxiv.org/abs/1202.0840v2
Jun 28, 2018
by Michael J. Kane; Bryan Lewis; Sekhar Tatikonda; Simon Urbanek
Linear regression models depend directly on the design matrix and its properties. Techniques that efficiently estimate model coefficients by partitioning rows of the design matrix are increasingly popular for large-scale problems because they fit well with modern parallel computing architectures. We propose a simple measure of concordance between a design matrix and a subset of its rows that estimates how well a subset captures the variance-covariance structure of a larger data set. We...
Topics: Statistics, Machine Learning
Source: http://arxiv.org/abs/1507.03285
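One simple way to see the idea (an illustrative proxy; the paper's actual concordance definition may differ) is to compare the Gram matrix of a row subset with that of the full design matrix:

```python
import numpy as np

# Toy concordance-style comparison between a design matrix and a
# uniformly sampled subset of its rows.
rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 5))                    # full design matrix
rows = rng.choice(1000, size=100, replace=False)  # sampled row subset
G_full = X.T @ X / X.shape[0]                     # full second-moment matrix
G_sub = X[rows].T @ X[rows] / len(rows)           # subset counterpart
score = 1.0 - np.linalg.norm(G_sub - G_full) / np.linalg.norm(G_full)
print(score)   # near 1 when the subset captures the covariance structure
```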