Tuesday 26 March 2013

FLAT mid2 question paper

1. Explain normal forms with an example.
2. What is a non-deterministic pushdown automaton? Construct an NPDA for palindromes over the input alphabet {a, b}.
3. What is a TM? Construct a TM for the language L = {a^n b^n}, where n > 1.
4. a) Explain the P and NP complexity classes of algorithms.
    b) Explain the Universal Turing Machine (UTM).

Saturday 23 March 2013

Multimedia

Multimedia is the holy grail of networking. It requires high bandwidth, so getting it to work over fixed connections is hard enough; even VHS-quality video over wireless is still a few years away.

Literally, multimedia is just two or more media. By that definition, even a printed book qualifies, since it contains two media: text and graphics (the figures). Nevertheless, when most people refer to multimedia, they generally mean the combination of two or more continuous media.

That is, media that have to be played during some well-defined time interval, usually with some user interaction. In practice, the two media are normally audio and video, that is, sound plus moving pictures.

However, many people often refer to pure audio, such as Internet telephony or Internet radio, as multimedia as well, which it clearly is not. A better term for such cases is streaming media, although real-time audio is commonly treated as multimedia too.

1. Introduction to Digital Audio

An audio (sound) wave is a one-dimensional acoustic (pressure) wave. When an acoustic wave enters the ear, the eardrum vibrates, causing the tiny bones of the inner ear to vibrate along with it, sending nerve pulses to the brain. These pulses are perceived as sound by the listener.

In a similar way, when an acoustic wave strikes a microphone, the microphone generates an electrical signal, representing the sound amplitude as a function of time. The representation, processing, storage, and transmission of such audio signals are a major part of the study of multimedia systems.

The frequency range of the human ear runs from 20 Hz to 20,000 Hz. The ear hears logarithmically, so the ratio of two sounds with powers A and B is conventionally expressed in dB (decibels) according to the formula

dB = 10 log10(A/B)

If we define the lower limit of audibility (a pressure of about 0.0003 dyne/cm2) for a 1-kHz sine wave as 0 dB, an ordinary conversation is about 50 dB and the pain threshold is about 120 dB, a dynamic range of a factor of 1 million.
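
As a quick sanity check on these figures, here is a small Python sketch (not part of the original notes; the helper names are our own) that evaluates the decibel formula:

import math

def power_ratio_to_db(a, b):
    """Express the ratio of two sound powers A and B in decibels."""
    return 10 * math.log10(a / b)

def db_to_power_ratio(db):
    """Inverse: the power ratio corresponding to a level of db decibels."""
    return 10 ** (db / 10)

# An ordinary conversation at about 50 dB carries 10^5 times the power of
# the 0 dB threshold of audibility.
print(db_to_power_ratio(50))     # 100000.0

# The 120 dB pain threshold is a power ratio of 10^12.  Expressed as an
# amplitude (pressure) ratio, which uses 20*log10 rather than 10*log10,
# 120 dB is a factor of 10^6 -- the "factor of 1 million" mentioned above.
print(db_to_power_ratio(120))    # 1000000000000.0  (10^12)
print(10 ** (120 / 20))          # 1000000.0        (10^6)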

The ear is surprisingly sensitive to sound variations lasting only a few milliseconds. The eye, in contrast, does not notice changes in light level that last only a few milliseconds. The result of this observation is that jitter of only a few milliseconds during a multimedia transmission affects the perceived sound quality more than it affects the perceived image quality.

Audio waves can be converted to digital form by an ADC (Analog-to-Digital Converter). An ADC takes an electrical voltage as input and generates a binary number as output. In Fig. 5-24(a) we see an example of a sine wave.

To represent this signal digitally, we can sample it every T seconds, as shown by the bar heights in Fig. 5-24(b). If a sound wave is not a pure sine wave but a linear superposition of sine waves whose highest frequency component is f, then the Nyquist theorem states that it is sufficient to take samples at a frequency of 2f.

Figure 5-24. (a) A sine wave. (b) Sampling the sine wave. (c) Quantizing the samples to 4 bits.
                                                                    
Digital samples are never exact. The samples of Fig. 5-24(c) allow only nine values, from -1.00 to +1.00 in steps of 0.25. An 8-bit sample would allow 256 distinct values. A 16-bit sample would allow 65,536 distinct values.

 The error introduced by the finite number of bits per sample is called the quantization noise. If it is too large, the ear detects it. Two well-known examples where sampled sound is used are the telephone and audio compact discs.
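
The following Python sketch (ours, not from the notes) mimics Fig. 5-24: it samples a sine wave above the Nyquist rate and quantizes the samples with 4, 8, and 16 bits, showing how the quantization noise shrinks as more bits are used. The exact quantizer in the figure differs slightly, so treat this only as an illustration:

import math

def quantize(samples, bits):
    """Round each sample in [-1.0, +1.0] to the nearest of 2**bits evenly spaced levels."""
    step = 2.0 / (2 ** bits - 1)          # spacing between adjacent levels
    return [round(s / step) * step for s in samples]

# One second of a 1-kHz sine wave sampled at 8000 samples/sec, comfortably
# above the Nyquist rate of 2 kHz for this signal.
freq, rate = 1000, 8000
samples = [math.sin(2 * math.pi * freq * n / rate) for n in range(rate)]

for bits in (4, 8, 16):
    quantized = quantize(samples, bits)
    mse = sum((s - q) ** 2 for s, q in zip(samples, quantized)) / len(samples)
    print(f"{bits:2d} bits: {2 ** bits:6d} levels, mean squared quantization error {mse:.2e}")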

Pulse code modulation, as used within the telephone system, uses 8-bit samples made 8000 times per second. In North America and Japan, 7 bits are for data and 1 is for control; in Europe all 8 bits are for data. This system gives a data rate of 56,000 bps or 64,000 bps. With only 8000 samples/sec, frequencies above 4 kHz are lost.

Audio CDs are digital with a sampling rate of 44,100 samples/sec, enough to capture frequencies up to 22,050 Hz, which is good enough for people, but bad for canine music lovers. The samples are 16 bits each and are linear over the range of amplitudes.

Note that 16-bit samples allow only 65,536 distinct values, even though the dynamic range of the ear is about 1 million when measured in steps of the smallest audible sound. Thus, using only 16 bits per sample introduces some quantization noise.

With 44,100 samples/sec of 16 bits each, an audio CD needs a bandwidth of 705.6 kbps for monaural and 1.411 Mbps for stereo. While this is lower than what video needs, it still takes almost a full T1 channel to transmit uncompressed CD quality stereo sound in real time.
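
The bit rates quoted above follow directly from the sampling parameters. A few lines of Python (our own arithmetic check) make the calculation explicit:

def bit_rate(samples_per_sec, bits_per_sample, channels=1):
    """Uncompressed digital-audio bit rate in bits per second."""
    return samples_per_sec * bits_per_sample * channels

# Telephone PCM: 8000 samples/sec, 8-bit samples.
print(bit_rate(8000, 7))         # 56000 bps (North America/Japan: 7 data bits)
print(bit_rate(8000, 8))         # 64000 bps (Europe: all 8 bits carry data)

# Audio CD: 44,100 samples/sec, 16-bit samples.
print(bit_rate(44100, 16))       # 705600 bps = 705.6 kbps (monaural)
print(bit_rate(44100, 16, 2))    # 1411200 bps = 1.411 Mbps (stereo)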

Music, of course, is just a special case of general audio, but an important one. Another important special case is speech. Human speech tends to be in the 600-Hz to 6000-Hz range. Speech is made up of vowels and consonants, which have different properties.

Vowels are produced when the vocal tract is unobstructed, producing resonances whose fundamental frequency depends on the size and shape of the vocal system and the position of the speaker's tongue and jaw. These sounds are almost periodic for intervals of about 30 msec.

Consonants are produced when the vocal tract is partially blocked. These sounds are less regular than vowels. Some speech generation and transmission systems make use of models of the vocal system to reduce speech to a few parameters, rather than just sampling the speech waveform.

2. Audio Compression

CD-quality audio requires a transmission bandwidth of 1.411 Mbps. Substantial compression is needed to make transmission over the Internet practical. For this reason, various audio compression algorithms have been developed.

Probably the most popular one is MPEG audio, which has three layers (variants), of which MP3 (MPEG audio layer 3) is the most powerful and best known. Large amounts of music in MP3 format are available on the Internet.

MP3 belongs to the audio portion of the MPEG video compression standard. Audio compression can be done in one of two ways. In waveform coding the signal is transformed mathematically by a Fourier transform into its frequency components.

The amplitude of each component is then encoded in a minimal way; the goal is to reproduce the waveform accurately at the other end in as few bits as possible. The other way, perceptual coding, exploits flaws in the human auditory system to encode a signal so that it sounds the same to a listener even if its waveform looks quite different; MP3 is based on perceptual coding. The key property of perceptual coding is that some sounds can mask other sounds.

Frequency masking is the ability of a loud sound in one frequency band to hide a softer sound in another frequency band that would have been audible in the absence of the loud sound. Masking can also happen in time: a soft sound played just after a loud one stops is inaudible for a short period, because the ear turns down its gain when the loud sound starts and takes a finite time to turn it back up. This effect is called temporal masking.

Figure 5-25. (a) The threshold of audibility as a function of frequency. (b) The masking effect.

In Experiment 1, a sine wave at some test frequency is played at gradually increasing power until the listener can just hear it; repeating this across the audible range yields the threshold of audibility as a function of frequency, shown in Fig. 5-25(a). Now consider Experiment 2: the computer runs Experiment 1 again, but this time with a constant-amplitude sine wave at, say, 150 Hz superimposed on the test frequency. The threshold of audibility for frequencies near 150 Hz is raised, as shown in Fig. 5-25(b).

The consequence of this new observation is that by keeping track of which signals are being masked by more powerful signals in nearby frequency bands, we can omit more and more frequencies in the encoded signal, saving bits.

In Fig. 5-25, the 125-Hz signal can be completely omitted from the output and no one will be able to hear the difference. Even after a powerful signal stops in some frequency band, knowledge of its temporal masking properties allows us to continue omitting the masked frequencies for some time interval while the ear recovers.

The essence of MP3 is to Fourier-transform the sound to get the power at each frequency and then transmit only the unmasked frequencies, encoding these in as few bits as possible. With this information as background, we can now see how the encoding is done.
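
As a toy illustration of this idea (our own sketch, not the real MP3 algorithm, which uses a filter bank and a frequency-dependent masking model), the following Python code Fourier-transforms a short signal and keeps only the components whose power exceeds a made-up masking threshold:

import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform -- slow, but fine for a toy example."""
    n = len(samples)
    return [sum(samples[k] * cmath.exp(-2j * cmath.pi * m * k / n) for k in range(n))
            for m in range(n)]

def keep_unmasked(spectrum, threshold):
    """Zero every component whose power falls below the masking threshold;
    only the surviving components would need to be encoded and sent."""
    return [c if abs(c) ** 2 >= threshold else 0 for c in spectrum]

rate = 400
# A loud 150-Hz tone plus a much weaker 125-Hz tone (cf. Fig. 5-25).
samples = [math.sin(2 * math.pi * 150 * n / rate) +
           0.01 * math.sin(2 * math.pi * 125 * n / rate) for n in range(rate)]

survivors = keep_unmasked(dft(samples), threshold=100.0)   # made-up threshold
kept = sum(1 for c in survivors if c != 0)
print(kept, "of", len(survivors), "components kept")        # only the loud tone survives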

The audio compression is done by sampling the waveform at 32 kHz, 44.1 kHz, or 48 kHz. Sampling can be done on one or two channels, in any of four configurations:

1. Monophonic (a single input stream).

2. Dual monophonic (e.g., an English and a Japanese soundtrack).

3. Disjoint stereo (each channel compressed separately).

4. Joint stereo (interchannel redundancy fully exploited).

First, the output bit rate is chosen. MP3 can compress a stereo rock 'n roll CD down to 96 kbps with little perceptible loss in quality, even for rock 'n roll fans with no hearing loss. For a piano concert, at least 128 kbps are needed.

These differ because the signal-to-noise ratio for rock 'n roll is much higher than for a piano concert. It is also possible to choose lower output rates and accept some loss in quality.

Next, the samples are split into frequency bands and a psychoacoustic model determines which bands are masked. The available bit budget is then divided among the bands, with more bits allocated to the bands with the most unmasked spectral power, fewer bits allocated to unmasked bands with less spectral power, and no bits allocated to masked bands.
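
A much simplified sketch of such an allocation (our own illustration with made-up band powers, not MP3's actual allocation loop) might look like this:

def allocate_bits(band_power, masked, total_bits):
    """Split a bit budget across bands in proportion to unmasked spectral power.
    Masked bands get nothing. (A toy stand-in for the real allocation loop.)"""
    audible = {b: p for b, p in band_power.items() if not masked[b]}
    total_power = sum(audible.values())
    return {b: (round(total_bits * p / total_power) if b in audible else 0)
            for b in band_power}

# Hypothetical per-band powers and masking flags for four bands.
band_power = {"0-4kHz": 40.0, "4-8kHz": 25.0, "8-12kHz": 10.0, "12-16kHz": 5.0}
masked     = {"0-4kHz": False, "4-8kHz": False, "8-12kHz": True, "12-16kHz": False}

print(allocate_bits(band_power, masked, total_bits=256))
# {'0-4kHz': 146, '4-8kHz': 91, '8-12kHz': 0, '12-16kHz': 18}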

Finally, the bits are encoded using Huffman encoding, which assigns short codes to numbers that appear frequently and long codes to those that occur infrequently. Various techniques are also used for noise reduction, antialiasing, and exploiting the interchannel redundancy.
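
For completeness, here is a minimal Huffman-coding sketch in Python (our own illustration; MP3 uses predefined Huffman tables rather than building them on the fly) showing how frequent values end up with short codes:

import heapq
from collections import Counter

def huffman_codes(values):
    """Build a Huffman code: frequent values get short codes, rare values long codes."""
    freq = Counter(values)
    # Heap entries: (total frequency, tie-breaker, {value: code-so-far}).
    heap = [(f, i, {v: ""}) for i, (v, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {v: "0" + code for v, code in left.items()}
        merged.update({v: "1" + code for v, code in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# Hypothetical stream of quantized values in which small values dominate.
stream = [0] * 50 + [1] * 25 + [2] * 15 + [3] * 10
print(huffman_codes(stream))   # 0 gets a 1-bit code; 2 and 3 get 3-bit codes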

ACN mid2 question paper


1. What do you mean by quality of service? What attributes can be used to describe the data flow?

2. What is DNS? What is its use? How does DNS work?

3. a) What are the applications of MANETs?

    b) Discuss the challenges or issues faced by mobile ad hoc networks.

4. Explain the design of a WMN (wireless mesh network).