[Exam] 101-1 (Fall 2012) 郭大维 Real-Time Systems Final Exam + Answers

Author: rod24574575 (天然呆)   2015-01-17 08:29:17
Course name: Real-Time Systems
Course type: Elective
Instructor: 郭大维
College: College of Electrical Engineering and Computer Science
Departments: Graduate Institute of CSIE, Graduate Institute of Networking and Multimedia
Exam date (Y/M/D): 2013.01.08
Time limit (minutes): 120
Exam content:
Fall 2012 Final Exam (Answer Sheet) RTS
Read each question over carefully several times. Answer all questions in the
space provided. The exam is two hours long. Total score = 115.
(1) Please define the following terminologies (15pts):
a. MLC Flash Memory
Ans: Flash memory in which each cell stores more than one bit of
     information.
b. Dynamic Wear Leveling
Ans: Only blocks containing invalid pages are recycled. Wear leveling
     aims to distribute erases evenly over blocks, since each block can
     sustain only a limited number of erase cycles.
c. Configuration Selection Problems
Ans: Given a set of configurations, choose a schedulable configuration.
d. The Harmonic Base of a Process Set (Hint: Division Graph)
Ans: The set of periods in the process set that are not divisible by
     any other period (i.e., the roots of the division graph).
e. The Wait Policy for Broadcast Commit in Optimistic Concurrency Control
Ans: When T reaches its validation phase, T waits for any conflicting
     higher-priority transaction H. If H commits, T is aborted;
     otherwise, T commits.
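Definition (d) can be checked mechanically; a minimal Python sketch (the example period set is illustrative, not from the exam):

```python
def harmonic_base(periods):
    """Return the periods that are not divisible by any other
    (distinct) period in the set -- the roots of the division graph."""
    return {p for p in periods
            if not any(q != p and p % q == 0 for q in periods)}

# Example: 4 and 8 are multiples of 2; 10 is a multiple of both 2 and 5,
# so only 2 and 5 remain in the harmonic base.
print(sorted(harmonic_base({2, 4, 8, 5, 10})))  # -> [2, 5]
```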
(2) Please order the Polling Server, the Deferrable Server, the Total Bandwidth
Server (TBS), and the Constant Utilization Server (CUS) in terms of the
response time to the scheduling of sporadic tasks, for modest workloads
(i.e., workloads close to but no larger than the server utilization). Which
one of the four is the best in terms of the response time to the scheduling
of sporadic tasks, for very heavy workloads (compared to the server
utilization)? Which one might be the worst in terms of the response time to
the scheduling of sporadic tasks, for light workloads (compared to the
server utilization)? You must provide an explanation to receive any credit.
(16pts)
Ans:
(A) Deferrable Server < Total Bandwidth Server, Constant Utilization Server
    < Polling Server, where a < b means a is better than b.
(B) Best for heavy workloads: the Total Bandwidth Server, because sporadic
    tasks still get serviced even when the demand exceeds the server's
    configured utilization.
(C) Worst for light workloads: the Polling Server, because the other three
    servers are demand-based, while a polling server serves requests only
    at its polling instants.
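The demand-driven behavior of the TBS in answer (B) comes from its deadline-assignment rule, d_k = max(r_k, d_(k-1)) + e_k/U_s: heavy demand only pushes deadlines further out instead of being rejected. A minimal Python sketch (the job parameters are illustrative):

```python
def tbs_deadlines(jobs, Us):
    """Total Bandwidth Server deadline assignment:
    d_k = max(r_k, d_{k-1}) + e_k / Us, with d_0 = 0.
    jobs: list of (release_time, execution_time) pairs, in arrival order."""
    deadlines = []
    d_prev = 0.0
    for r, e in jobs:
        d = max(r, d_prev) + e / Us
        deadlines.append(d)
        d_prev = d
    return deadlines

# Server utilization 0.25; two back-to-back jobs at t=0 show how extra
# demand simply stretches the assigned deadlines.
print(tbs_deadlines([(0, 1), (0, 1), (10, 2)], 0.25))  # -> [4.0, 8.0, 18.0]
```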
(3) Consider a control system with the following communication graph and system
requirements:
■ Sample x at 10 times per second. Then update u.
■ Sample y at 20 times per second. Then update u.
Let each of the functions f_x, f_y, and f_s take 2 time units to execute.
Please show the pseudo code, creating tasks by "Decomposition by
Critical Timing Constraints" and by "Decomposition by Centralizing
Concurrency Control". (12pts)
x ┌─┐ x'
─→│fx├─┐
└─┘ │
└→┌─┐ u
│fs├─→
┌→└─┘
┌─┐ │
─→│fy├─┘
y └─┘ y'
Ans:
(A) Process XS
      activated by timer;
      attribute period = 10, deadline = 10;
      x = sensor_x(); x' = f_x(x);
      rendezvous S;
    end XS
    and similarly Processes YS and S
(B) Process XYS  /* Define skip_y() */
      activated by timer;
      attribute period = 10, deadline = 10;
      x = sensor_x(); x' = f_x(x);
      if not skip_y() then y = sensor_y(); y' = f_y(y);
      u = f_s(x', y');
    end XYS
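A runnable Python rendering of Process XYS from answer (B); the sensor stubs, the stand-in bodies of f_x/f_y/f_s, and the tick-based skip_y() policy are illustrative assumptions, not part of the exam answer:

```python
tick = 0  # advanced by the timer on every activation

def sensor_x(): return 1.0   # stub sensor read
def sensor_y(): return 2.0   # stub sensor read
def f_x(x): return x * 10    # stand-in for the 2-time-unit function
def f_y(y): return y * 10
def f_s(x_p, y_p): return x_p + y_p

def skip_y():
    # Example policy: skip sampling y on odd ticks.
    return tick % 2 == 1

def xys_step(y_prev):
    """One activation of XYS: always sample x; sample y unless
    skip_y() says otherwise; then update u from the freshest values."""
    x_p = f_x(sensor_x())
    y_p = y_prev if skip_y() else f_y(sensor_y())
    u = f_s(x_p, y_p)
    return u, y_p

print(xys_step(0.0))  # tick = 0, so y is sampled: -> (30.0, 20.0)
```

Keeping both samples and the update of u inside one task is exactly what "centralizing concurrency control" buys: no rendezvous or locking is needed between x and y handling.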
(4) Consider real-time disk scheduling. Please answer the following questions.
You must provide explanations to receive any credit. (12pts)
(A) Is SCAN better than the Shortest Seek Time First algorithm (SSTF) in
meeting the deadlines of requests?
(B) Consider a policy that always services the request with the highest
priority p_i = f(d_i, b_i) = a*d_i + (1-a)*b_i, where a, d_i, and b_i
are a design factor, the deadline, and the time that the disk arm
takes to move from its current position to serve the request,
respectively. Suppose that we would like to improve this policy by
reducing the disk-arm movement, considering 3 requests at a time.
What would you do?
Ans:
(A) Yes. SCAN is better because SSTF might starve a request regardless of
its deadline.
(B) One way is to group every 3 requests in the queue, ordered by their
priorities, and then service each group in C-SCAN order.
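Answer (B) can be sketched in Python (the request list and the lower-number-is-more-urgent priority convention are illustrative assumptions):

```python
def grouped_cscan(requests):
    """Order requests by priority, take them three at a time, and
    service each group in C-SCAN (one ascending-track sweep) order.
    requests: list of (priority, track); lower priority value = more
    urgent. Returns the resulting service order as a list of tracks."""
    ordered = sorted(requests)  # most urgent first
    schedule = []
    for i in range(0, len(ordered), 3):
        group = ordered[i:i + 3]
        # Within a group, sweep in increasing track order.
        schedule.extend(track for _, track in sorted(group, key=lambda t: t[1]))
    return schedule

reqs = [(1, 90), (2, 10), (3, 50), (4, 70), (5, 20)]
print(grouped_cscan(reqs))  # -> [10, 50, 90, 20, 70]
```

Within each group the arm makes one sweep instead of chasing priorities back and forth, which is the intended reduction in arm movement.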
(5) Consider flash memory management. Please answer the following questions.
You must provide explanations to receive any credit. (24pts)
(A) Why does the write throughput of a flash-memory drive drop
    significantly after it has been used for some time?
(B) Why has the read/program disturb problem become more and more
    serious recently?
(C) FTL is a page-level address-translation mechanism. Why has it become
    less practical for current products?
(D) What is the main challenge in doing wear leveling?
(E) Why does a block-level mapping mechanism, such as NFTL, usually
    incur more erases (at run time) than a page-level mapping mechanism
    such as FTL does?
(F) Please explain the drain-disturb problem.
Ans:
(A) It is because garbage collection (block erasing) starts.
(B) It is because flash-memory chips have become less reliable, and more
    cells are connected by a bit line or a word line.
(C) It is because every page requires the mechanism to keep its mapping
    information, so the DRAM needed becomes enormous.
(D) It is to know which blocks have had fewer erases so far.
(E) It is because a block-level mapping mechanism usually still has free
    pages in a to-be-recycled block.
(F) Cells sharing a bit line with cells being programmed might lose
    their states, since electrons tunnel from the floating gate through
    the gate oxide to the drain.
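Answers (A) and (D) together suggest a garbage-collection victim-selection policy that folds in wear leveling; a minimal Python sketch (the greedy scoring rule is an illustrative assumption, not a specific published policy):

```python
def pick_victim(blocks):
    """Prefer the block with the most invalid pages (most space
    reclaimed per erase); among ties, prefer the block with the fewest
    erases so far, to spread wear.
    blocks: list of dicts with 'invalid_pages' and 'erase_count'."""
    return max(blocks, key=lambda b: (b['invalid_pages'], -b['erase_count']))

blocks = [
    {'id': 0, 'invalid_pages': 60, 'erase_count': 900},
    {'id': 1, 'invalid_pages': 60, 'erase_count': 100},  # same payoff, younger
    {'id': 2, 'invalid_pages': 10, 'erase_count': 5},
]
print(pick_victim(blocks)['id'])  # -> 1
```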
(6) What are the ACID properties of database systems? (8pts)
Ans: Atomicity, Consistency, Isolation, and Durability.
(7) Compare a main-memory database and a disk-based database. Give two
advantages of main-memory databases and one of their disadvantages. What
would be the main difference in the index designs for these two kinds of
databases? (16pts)
Ans:
(A) Advantages: predictable performance and high performance.
    Disadvantage: vulnerability to system failures (data in DRAM is
    volatile).
(B) A disk-based database must have a shallow, large-fanout index
    design (e.g., B+-trees) to minimize disk accesses.
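The "shallow and large-fanout" point in answer (B) is easy to quantify: the index height grows only as log base fanout of the key count. A small Python sketch (the fanout and key-count figures are illustrative):

```python
from math import ceil, log

def index_height(n_keys, fanout):
    """Levels needed by a large-fanout (B+-tree-like) index to cover
    n_keys entries, i.e. roughly the disk reads per lookup."""
    return max(1, ceil(log(n_keys, fanout)))

# A disk page holding ~200 (key, pointer) pairs keeps a million keys
# within 3 levels -- about 3 disk reads per lookup.
print(index_height(1_000_000, 200))  # -> 3
```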
(8) RWPCP improves PCP by providing read and write locks, where PCP only
supports exclusive locks. Please explain how RWPCP guarantees at most one
priority inversion to a real-time transaction that might write data
objects. Please also prove why there is at most one priority inversion
under 2VPCP. (12pts)
Ans:
(A) A read lock on a data object sets the RW priority ceiling to the
    highest priority of a potential writer, so that no lower-priority
    reader can introduce additional priority inversions to a potential
    writer.
(B) A high-priority transaction T_H is blocked when its priority is no
    higher than the RW priority ceiling of some data object locked by
    another transaction under 2VPCP. The first transaction that obtains
    such a lock to block T_H prevents any other transaction from
    locking any data object in a mode that could block T_H, so T_H is
    blocked at most once.
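The single-blocking argument rests on the priority-ceiling admission test shared by PCP and its RWPCP/2VPCP variants; a minimal Python sketch (the higher-number-is-higher-priority convention is an illustrative assumption):

```python
def can_lock(priority, ceilings_held_by_others):
    """Priority-ceiling admission test: a transaction may acquire a
    lock only if its priority is strictly higher than the ceilings of
    all objects currently locked by other transactions.  Higher number
    means higher priority here."""
    return all(priority > c for c in ceilings_held_by_others)

# T with priority 5 passes while all foreign ceilings are below 5 ...
print(can_lock(5, [3, 4]))  # -> True
# ... and is blocked once one foreign ceiling reaches 5 or more; that
# single blocker is the only source of priority inversion.
print(can_lock(5, [3, 6]))  # -> False
```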
