[Exam] 95-2 (Spring 2007), Lin-shan Lee (李琳山), Introduction to Digital Speech Processing, Midterm

Posted by: rod24574575 (天然呆)   2014-04-21 21:19:51
Course name: Introduction to Digital Speech Processing (数位语音处理概论)
Course type: Elective
Instructor: Lin-shan Lee (李琳山)
College: College of Electrical Engineering and Computer Science
Departments: Electrical Engineering; Computer Science and Information Engineering
Exam date (Y/M/D): 2007.05.15
Exam duration (minutes): 120
Reward points requested: Yes
(If not explicitly stated, no reward will be given.)
Exam content:
Digital Speech Processing, Midterm
May 15, 2007, 9:10-11:10
● OPEN EVERYTHING
● Except for technical terms, which may be written in English, all written explanations must be in Chinese; answers not written in Chinese receive no credit.
● Total points: 170
───────────────────────────────────────
1. (10) Describe what you know about the basic elements, operations and
relevant research issues of conversational interfaces or spoken dialogue
systems.
2. (10) Assume X̄ = (x_1, x_2)^t is a two-dimensional random vector with a
   bi-variate Gaussian distribution, a mean vector μ̄ = (μ_1, μ_2)^t and a
   co-variance matrix Σ. x_1, x_2 are two random variables and "t" means
   transpose. Discuss how the distribution of X̄ depends on μ̄ and Σ.
3. (20) Given an HMM λ = (A, B, π) with N states, an observation sequence
   Ō = o_1 o_2 ... o_t ... o_T and a state sequence
   q̄ = q_1 q_2 ... q_t ... q_T, define
       α_t(i) = Prob[o_1 o_2 ... o_t, q_t = i│λ]
       β_t(i) = Prob[o_(t+1) o_(t+2) ... o_T│q_t = i, λ]
   (a) (5) What is Σ_(i=1)^N α_t(i) β_t(i)? Show your results.
   (b) (5) What is α_t(i) β_t(i) / Σ_(i=1)^N [α_t(i) β_t(i)]? Show your
       results.
   (c) (5) What is α_t(i) a_ij b_j(o_(t+1)) β_(t+1)(j)? Show your results.
   (d) (10) Formulate and describe the Viterbi algorithm to find the best
       state sequence q̄* = q_1* q_2* ... q_t* ... q_T* giving the highest
       probability Prob[Ō, q̄*│λ]. Explain how it works and why backtracking
       is necessary (a reference sketch follows this question).
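   For reference, a minimal Viterbi sketch in Python (toy shapes assumed:
   A is N×N, B is N×M over a discrete observation alphabet; log probabilities
   are used for numerical stability). The stored back-pointers are exactly
   what makes the backtracking step possible:

       import numpy as np

       def viterbi(log_A, log_B, log_pi, obs):
           """Best state sequence q* maximizing Prob[O, q | lambda].
           log_A[i, j]: log transition prob i -> j; log_B[j, o]: log emission
           prob; log_pi[i]: log initial prob; obs: observation index sequence."""
           N, T = log_A.shape[0], len(obs)
           delta = np.full((T, N), -np.inf)   # best log-prob ending in state j at time t
           psi = np.zeros((T, N), dtype=int)  # back-pointer: best predecessor state
           delta[0] = log_pi + log_B[:, obs[0]]
           for t in range(1, T):
               for j in range(N):
                   scores = delta[t - 1] + log_A[:, j]
                   psi[t, j] = np.argmax(scores)
                   delta[t, j] = scores[psi[t, j]] + log_B[j, obs[t]]
           # Backtracking: the best final state is only known at t = T-1, so
           # earlier choices must be recovered from the stored back-pointers.
           q = [int(np.argmax(delta[T - 1]))]
           for t in range(T - 1, 0, -1):
               q.append(int(psi[t, q[-1]]))
           return q[::-1], float(np.max(delta[T - 1]))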
4. (10) What is the LBG algorithm and why is it better than the K-means
   algorithm?
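   For reference, a compact sketch of the LBG idea (the perturbation factor
   eps and the refinement loop count are illustrative assumptions): the
   codebook is grown by binary splitting from the global centroid, with
   K-means refinement after every split, which sidesteps K-means' sensitivity
   to a random initial choice of K centroids:

       import numpy as np

       def lbg(data, target_size, eps=0.01, iters=20):
           """Grow a codebook by binary splitting (LBG); target_size is
           naturally a power of two since every split doubles the codebook."""
           codebook = data.mean(axis=0, keepdims=True)   # start from the global centroid
           while len(codebook) < target_size:
               # Split every centroid into a slightly perturbed pair.
               codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
               for _ in range(iters):                    # K-means refinement after the split
                   d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
                   labels = d.argmin(axis=1)
                   for k in range(len(codebook)):
                       if np.any(labels == k):
                           codebook[k] = data[labels == k].mean(axis=0)
           return codebook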
5. (10) Explain why and how unseen triphones can be trained using decision
   trees.
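   For reference, a toy sketch of how a phonetic decision tree maps any
   triphone context, including an unseen one, to a trained leaf model (the
   tree, question, and model names below are hypothetical):

       from dataclasses import dataclass
       from typing import Callable, Optional

       @dataclass
       class Node:
           question: Optional[Callable[[str, str], bool]] = None  # phonetic yes/no question
           yes: Optional["Node"] = None
           no: Optional["Node"] = None
           tied_model: Optional[str] = None                       # set only at leaves

       def lookup(node: Node, left: str, right: str) -> str:
           """Route any triphone context, seen in training or not, to a trained leaf."""
           while node.tied_model is None:
               node = node.yes if node.question(left, right) else node.no
           return node.tied_model

       # Toy tree for models of phone "a": one question, two tied leaf models.
       tree = Node(question=lambda l, r: l in {"m", "n", "ng"},   # "is the left context a nasal?"
                   yes=Node(tied_model="a_nasal-left"),
                   no=Node(tied_model="a_other"))
       print(lookup(tree, "m", "t"))   # the unseen triphone m-a+t still gets a model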
6. (10) In acoustic modeling the concept of "senones" is very useful. Explain
   what a "senone" is and how it can be used.
7. (10) Explain the basic principles in selecting the voice units for a
language for hidden Markov modeling.
8. (10) Explain what a class-based language model is and why it is useful.
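   For reference, the usual bigram factorization behind a class-based model,
   where c(w) denotes the class of word w (a standard formulation, written in
   LaTeX, not quoted from the exam):

       P(w_i \mid w_{i-1}) \approx P(w_i \mid c(w_i)) \, P(c(w_i) \mid c(w_{i-1}))

   Because counts are pooled within each class, rare words inherit statistics
   from the other members of their class.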
9. (10) What is the perplexity of a language source? What is the perplexity of
a language model with respect to a corpus? How are they related to a
"virtual vocabulary"?
10. (10) Explain why the use of a window with finite length, w(n), n = 0, 1, 2,
... , L-1, is necessary for feature extraction in speech recognition.
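    For reference, a short framing sketch (the frame length and hop are
    typical values, not taken from the exam): speech is only quasi-stationary
    over tens of milliseconds, so features must be extracted from short
    windowed frames, and the tapered window reduces spectral leakage at the
    frame boundaries:

        import numpy as np

        def frames(signal, frame_len=400, hop=160):
            """Slice a waveform into overlapping frames and taper each with a
            Hamming window w(n), n = 0..L-1 (typical 25 ms / 10 ms at 16 kHz)."""
            w = np.hamming(frame_len)
            return np.array([signal[s:s + frame_len] * w
                             for s in range(0, len(signal) - frame_len + 1, hop)])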
11. (10) In feature extraction for speech recognition, after you obtain
12 MFCC parameters plus a short-time energy (a total of 13 parameters),
explain how to obtain the other 26 parameters and what they are.
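    For reference, a sketch of the standard way the remaining 26 parameters
    are obtained (the regression window half-width k=2 is a typical
    assumption): they are the first and second time derivatives (delta and
    delta-delta) of the 13 static parameters, giving 39 per frame:

        import numpy as np

        def deltas(feats, k=2):
            """Regression-based time derivatives of a (T, 13) feature matrix."""
            T = len(feats)
            padded = np.pad(feats, ((k, k), (0, 0)), mode="edge")
            denom = 2 * sum(i * i for i in range(1, k + 1))
            return np.array([
                sum(i * (padded[t + k + i] - padded[t + k - i])
                    for i in range(1, k + 1)) / denom
                for t in range(T)
            ])

        # 13 static (12 MFCC + energy) + 13 delta + 13 delta-delta = 39 per frame.
        static = np.random.randn(100, 13)      # placeholder features
        d1 = deltas(static)
        d2 = deltas(d1)
        full = np.hstack([static, d1, d2])     # shape (100, 39)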
12. (10) In large vocabulary continuous speech recognition, explain:
    (a) (5) What the "language model weight" is.
    (b) (5) Why the language model also functions as a penalty against
        inserting extra words.
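    For reference, one common log-domain decoding score (a standard
    formulation; the symbols α, β, and N(W) are as defined here, not taken
    from the exam), where α is the language model weight and β N(W), with
    N(W) the number of words in W, is the word insertion penalty term:

        \hat{W} = \arg\max_W \bigl[ \log P(O \mid W) + \alpha \log P(W) + \beta N(W) \bigr]

    Since every extra word also multiplies in another language model
    probability smaller than one, the language model term itself already
    penalizes insertions.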
13. (20) What is the maximum a posteriori (MAP) principle? How can it be used
    to integrate acoustic modeling and language modeling for large vocabulary
    speech recognition? Why and how can this be solved by a Viterbi algorithm
    over a series of lexicon trees?
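    For reference, the MAP decoding rule the question refers to, written out
    (standard formulation, in LaTeX):

        W^* = \arg\max_W P(W \mid O)
            = \arg\max_W \frac{P(O \mid W)\, P(W)}{P(O)}
            = \arg\max_W P(O \mid W)\, P(W)

    P(O│W) comes from the acoustic models and P(W) from the language model;
    P(O) does not depend on W and can be dropped from the maximization.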
14. (15) Under what kind of condition is a heuristic search admissible? Show
    or explain why.
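    For reference, in the usual A* setting this amounts to requiring that the
    heuristic never overestimate the true remaining cost (standard statement,
    in LaTeX):

        h(n) \le h^*(n) \quad \text{for every node } n,

    where h^*(n) is the exact cost from n to the goal.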
