
What Is Distributed Source Coding?

Jean Marie Asta

In communication and information theory, distributed source coding (DSC) refers to the compression of multiple correlated information sources that cannot communicate with one another. In video coding, DSC makes it possible to swap the traditional roles of the encoder and decoder, which represents a conceptual shift in video processing. Because the correlation among the sources can be modeled with channel codes and exploited at the decoder, DSC can shift computational complexity from the encoder side to the decoder side. This makes it an appropriate framework for applications in which the sender is complexity-constrained, such as sensor networks and video compression.

In 1973, David Slepian and Jack K. Wolf proposed a theoretical bound for the lossless compression of correlated information sources, stated in terms of their entropies; it is now known as the Slepian-Wolf theorem or bound. Among other things, they showed that two separate, isolated sources can compress their data as efficiently as if they communicated directly with each other. In 1975, Thomas M. Cover extended this theorem to the case of more than two sources.
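
To get a feel for the gain this promises, consider a toy example (the numbers below are purely illustrative and do not come from Slepian and Wolf's paper): source X is a fair coin flip, and source Y is a copy of X that gets flipped with probability 0.1. Coding the two sources independently costs 2 bits per pair, while the Slepian-Wolf limit is only about 1.47 bits per pair, even though the encoders never communicate. The short Python sketch below works this out.

```python
from math import log2

def binary_entropy(p):
    """Entropy, in bits, of a coin that comes up heads with probability p."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

p_flip = 0.1            # probability that Y disagrees with X (illustrative value)
H_X = 1.0               # X is a fair coin flip
H_Y = 1.0               # Y is also uniform, by symmetry
H_XY = H_X + binary_entropy(p_flip)   # H(X, Y) = H(X) + H(Y | X)

print(f"Independent coding of X and Y: {H_X + H_Y:.3f} bits per pair")
print(f"Slepian-Wolf joint limit:      {H_XY:.3f} bits per pair")
```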

In distributed source coding, multiple dependent sources are encoded separately but decoded jointly. The Slepian-Wolf theorem, which represents the sources as two correlated random variables, assumes that two separate encoders observe correlated signals from different sources and do not communicate with one another. Each encoder transmits its compressed signal to a single receiver, a joint decoder that reconstructs both signals together. The theorem asks how low the combined encoding rate can be while the probability of a decoding error at the receiver still approaches zero, and the answer is the joint entropy of the two sources. As Slepian and Wolf proved in 1973, this combined rate is sufficient even when the correlated signals are encoded separately.
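
Stated more formally, using standard information-theoretic notation rather than the article's wording, the Slepian-Wolf bound says that error-free joint decoding is possible whenever the two separate encoding rates, written here as R_X and R_Y, satisfy the following conditions:

```latex
% Slepian-Wolf rate region for two correlated sources X and Y.
% R_X and R_Y are the rates of the two separate encoders, in bits per symbol;
% H(. | .) is conditional entropy and H(X, Y) is the joint entropy.
\begin{aligned}
  R_X       &\ge H(X \mid Y) \\
  R_Y       &\ge H(Y \mid X) \\
  R_X + R_Y &\ge H(X, Y)
\end{aligned}
```

The last inequality is the statement paraphrased above: the combined rate only needs to reach the joint entropy H(X, Y), exactly as if the two encoders had been able to cooperate.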

Although the theorem establishes that this performance is achievable in distributed source coding, the theoretical limit has not been reached, or even closely approached, in practical applications. Two other researchers, Ramchandran and Pradhan, have worked on how to approach this theoretical limit and demonstrate the practicality of the Slepian-Wolf theorem. They did so by providing a particular solution for the case in which the two encoded signals differ by at most a known maximum distance.
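
One way to make this concrete is syndrome-based coding, the general idea behind their approach. The Python sketch below is a minimal, hypothetical illustration rather than their actual construction: X is a random 3-bit word, the decoder's side information Y differs from X in at most one bit, and the encoder sends only a 2-bit syndrome instead of all 3 bits. The decoder then picks, from the small set of words sharing that syndrome, the one closest to Y.

```python
from itertools import product

# Parity-check matrix of the length-3 repetition code {000, 111}.
H = [(1, 1, 0),
     (0, 1, 1)]

def syndrome(x):
    """Encoder: compress the 3-bit word x down to its 2-bit syndrome H*x (mod 2)."""
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

def hamming(a, b):
    """Number of positions in which the bit words a and b differ."""
    return sum(ai != bi for ai, bi in zip(a, b))

def decode(s, y):
    """Decoder: among all 3-bit words with syndrome s, return the one closest to y."""
    coset = [x for x in product((0, 1), repeat=3) if syndrome(x) == s]
    return min(coset, key=lambda x: hamming(x, y))

# Exhaustive check: whenever the side information y is within one bit of x,
# the decoder reconstructs x exactly from the 2-bit syndrome and y.
for x in product((0, 1), repeat=3):
    for y in product((0, 1), repeat=3):
        if hamming(x, y) <= 1:
            assert decode(syndrome(x), y) == x
print("every 3-bit x recovered from a 2-bit syndrome plus side information")
```

Because the two words that share any given syndrome are far apart (they differ in all three positions), side information that is within one bit of X always singles out the right one, which is the sense in which a maximum separation between the signals makes the scheme work.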
