Course title: Introduction to Digital Speech Processing
Course type: Elective
Instructor: 李琳山 (Lin-Shan Lee)
College: College of Electrical Engineering and Computer Science
Departments: Electrical Engineering, Computer Science
Exam date (Y.M.D): 2004.12.10
Exam duration (minutes): 120
Reward eligible: Yes
(If not explicitly indicated, no reward will be given.)
Exam questions:
Digital Speech Processing
December 10, 2004, 10:10-12:10
● OPEN EVERYTHING
● Except for technical terms, which may be written in English, all explanations must be in Chinese; answers not written in Chinese receive no credit.
● Total points: 120, Time Allocation: 1 point / minute
───────────────────────────────────────
1. (10)
(i) (5) What are voiced/unvoiced speech signals and their time-domain
waveform characteristics?
(ii) (5) What is the pitch in speech signals and how is it related to the
tones in Mandarin Chinese?
2. (20) Given an HMM λ = (A, B, π), an observation sequence Ō = o_1 o_2 ...
o_t ... o_T and a state sequence q̄ = q_1 q_2 ... q_t ... q_T,
define
   α_t(i) = Prob[o_1 o_2 ... o_t, q_t = i │ λ]
   β_t(i) = Prob[o_(t+1) o_(t+2) ... o_T │ q_t = i, λ]
(i) (5) What is Σ_(i=1)^N α_t(i) β_t(i)? Show your results.
(ii) (5) What is α_t(i) a_ij b_j(o_(t+1)) β_(t+1)(j)? Show your results.
(iii) (10) Formulate and describe the Viterbi algorithm to find the best
state sequence q̄* = q_1* q_2* ... q_t* ... q_T* giving the
highest probability Prob[q̄*, Ō │ λ].
Explain how it works and why backtracking is necessary.
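As a study aid, the recursion the question asks for can be sketched in NumPy for a discrete-observation HMM; the names `delta` and `psi` follow the usual textbook notation, and the backtracking step at the end shows why the argmax pointers must be stored:

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """Most likely state sequence for a discrete-observation HMM.

    A:   (N, N) transition matrix, A[i, j] = a_ij
    B:   (N, M) emission matrix,   B[j, k] = b_j(k)
    pi:  (N,)   initial state distribution
    obs: list of observation symbol indices o_1 ... o_T
    """
    N, T = A.shape[0], len(obs)
    delta = np.zeros((T, N))           # delta_t(j): best path prob ending in state j at time t
    psi = np.zeros((T, N), dtype=int)  # psi_t(j): best predecessor of state j (for backtracking)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A   # scores[i, j] = delta_{t-1}(i) * a_ij
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    # Backtracking: the best final state is known only at t = T, so the
    # path must be recovered backwards through the stored psi pointers.
    q = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        q.append(int(psi[t][q[-1]]))
    q.reverse()
    return q, float(delta[-1].max())
```

This is a minimal sketch (no log-domain scaling, which a practical implementation would need to avoid underflow for long T).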
3. (10) Explain and describe what you know about "dialogue modeling and
management".
4. (10) Explain and describe what you know about "Text-to-speech Synthesis".
5. (10) Write down the procedure for the LBG algorithm and discuss why and how it
is better than the K-means algorithm.
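For reference, the binary-splitting procedure the question refers to can be sketched as follows; the parameter names (`eps`, `iters`) and the empty-cell re-seeding rule are illustrative choices, not part of the original algorithm statement:

```python
import numpy as np

def lbg(data, target_size, eps=0.01, iters=20, seed=0):
    """LBG codebook design by binary splitting.

    Start from the single global centroid; at each stage split every
    codeword c into c(1+eps) and c(1-eps), then refine with K-means
    (Lloyd) iterations. Unlike plain K-means started from a random
    codebook of the final size, each stage begins from a good initial
    guess, so the result is much less sensitive to initialization.
    """
    rng = np.random.default_rng(seed)
    codebook = data.mean(axis=0, keepdims=True)   # stage 1: one centroid
    while len(codebook) < target_size:
        # Splitting step: double the codebook size.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):                    # K-means refinement
            d = ((data[:, None, :] - codebook[None]) ** 2).sum(-1)
            labels = d.argmin(axis=1)             # nearest-codeword assignment
            for k in range(len(codebook)):
                pts = data[labels == k]
                if len(pts):
                    codebook[k] = pts.mean(axis=0)
                else:                             # empty cell: re-seed from data
                    codebook[k] = data[rng.integers(len(data))]
    return codebook
```

A convergence threshold on the average distortion would normally replace the fixed `iters` count.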
6. (10) Explain the detailed principles and process for Katz smoothing.
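As a reminder of the form the question targets, the standard Katz back-off estimate for the bigram case can be written as (with the discount and back-off weight following Good-Turing; the cutoff k is typically small, e.g. 5):

```latex
\hat{P}_{\text{Katz}}(w_i \mid w_{i-1}) =
\begin{cases}
d_r \, \dfrac{C(w_{i-1} w_i)}{C(w_{i-1})}, & C(w_{i-1} w_i) = r > 0 \\[6pt]
\alpha(w_{i-1}) \, \hat{P}(w_i), & C(w_{i-1} w_i) = 0
\end{cases}
\qquad
r^{*} = (r+1)\,\frac{n_{r+1}}{n_r},
\quad
d_r \approx \frac{r^{*}}{r} \ \ (r \le k)
```

where n_r is the number of bigrams occurring exactly r times, and α(w_{i-1}) is chosen so that the conditional distribution sums to one.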
7. (10) What is the perplexity of a language source? What is the perplexity of
a language model with respect to a test corpus? How are they related to a
"virtual vocabulary"?
8. (10) Explain how the MAP principle can be used to find a word sequence
W̄ = w_1 w_2 ... w_n given an observation sequence
Ō = o_1 o_2 ... o_T, how the hidden Markov model and language
model can be used, and what the likelihood function and the prior
probability are.
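The decomposition the question points at is the standard Bayes-rule factorization of the MAP decoder:

```latex
\bar{W}^{*}
= \arg\max_{\bar{W}} P(\bar{W} \mid \bar{O})
= \arg\max_{\bar{W}} \frac{P(\bar{O} \mid \bar{W})\,P(\bar{W})}{P(\bar{O})}
= \arg\max_{\bar{W}} P(\bar{O} \mid \bar{W})\,P(\bar{W})
```

since P(Ō) does not depend on W̄; P(Ō │ W̄) is the likelihood computed from the acoustic HMMs, and P(W̄) is the prior given by the language model.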
9. (30) Write down anything you know about the following subjects that was
NOT mentioned in class. Do not write anything that was mentioned in class.
(i) (15) classification and regression tree (CART)
(ii) (15) search problem/algorithm for large vocabulary continuous speech
recognition