
Operating Systems: Internals and Design Principles (5th Edition), Answers to End-of-Chapter Exercises

CHAPTER 2 OPERATING SYSTEM OVERVIEW

Review Questions

2.1 Convenience: An operating system makes a computer more convenient to use. Efficiency: An operating system allows the computer system resources to be used in an efficient manner. Ability to evolve: An operating system should be constructed in such a way as to permit the effective development, testing, and introduction of new system functions without interfering with service.

2.5 The execution context, or process state, is the internal data by which the operating system is able to supervise and control the process. This internal information is separated from the process, because the operating system has information not permitted to the process. The context includes all of the information that the operating system needs to manage the process and that the processor needs to execute the process properly. The context includes the contents of the various processor registers, such as the program counter and data registers. It also includes information of use to the operating system, such as the priority of the process and whether the process is waiting for the completion of a particular I/O event.
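The contents of an execution context can be sketched as a simplified process control block; all field names here are illustrative assumptions, not taken from any real kernel:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ProcState(Enum):
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()

@dataclass
class ProcessContext:
    """Simplified, hypothetical sketch of the per-process state the OS keeps."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)  # data/address registers
    priority: int = 0                              # scheduling information
    state: ProcState = ProcState.READY
    pending_io: bool = False                       # waiting on an I/O event?

ctx = ProcessContext(pid=1, priority=5)
ctx.registers["r0"] = 42
print(ctx.state.name)  # READY
```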

Problems

2.1 The answers are the same for (a) and (b). Assume that although processor operations cannot overlap, I/O operations can.
1 job: TAT = NT; processor utilization = 50%.
2 jobs: TAT = NT; processor utilization = 100%.
4 jobs: TAT = (2N – 1)NT; processor utilization = 100%.

2.4 A system call is used by an application program to invoke a function provided by the operating system. Typically, the system call results in transfer to a system program that runs in kernel mode.

CHAPTER 3 PROCESS DESCRIPTION AND CONTROL

Review Questions

3.5 Swapping involves moving part or all of a process from main memory to disk. When none of the processes in main memory is in the Ready state, the operating system swaps one of the blocked processes out onto disk into a suspend queue, so that another process may be brought into main memory to execute.

3.10 The user mode has restrictions on the instructions that can be executed and the memory areas that can be accessed. This is to protect the operating system from damage or alteration. In kernel mode, the operating system does not have these restrictions, so that it can perform its tasks.

Problems

3.1 • Creation and deletion of both user and system processes. The processes in the system can execute concurrently for information sharing, computation speedup, modularity, and convenience. Concurrent execution requires a mechanism for process creation and deletion. The required resources are given to the process when it is created, or allocated to it while it is running. When the process terminates, the OS needs to reclaim any reusable resources.
• Suspension and resumption of processes. In process scheduling, the OS needs to change a process's state to waiting or ready when it is waiting for some resource. When the required resources become available, the OS needs to change its state to running to resume its execution.
• Provision of mechanisms for process synchronization. Cooperating processes may share data, and concurrent access to shared data may result in data inconsistency. The OS has to provide mechanisms for process synchronization to ensure the orderly execution of cooperating processes, so that data consistency is maintained.
• Provision of mechanisms for process communication. The processes executing under the OS may be either independent processes or cooperating processes. Cooperating processes must have the means to communicate with each other.
• Provision of mechanisms for deadlock handling. In a multiprogramming environment, several processes may compete for a finite number of resources. If a deadlock occurs, the waiting processes will never leave the waiting state again; resources are wasted and jobs are never completed.

3.3 Figure 9.3 shows the result for a single blocked queue. The figure readily generalizes to multiple blocked queues.

CHAPTER 4 THREADS, SMP, AND MICROKERNELS

Review Questions

4.2 Less state information is involved.

4.5 The address space, file resources, and execution privileges are examples.

4.6 1. Thread switching does not require kernel-mode privileges, because all of the thread management data structures are within the user address space of a single process. Therefore, the process does not switch to kernel mode to do thread management. This saves the overhead of two mode switches (user to kernel; kernel back to user).
2. Scheduling can be application specific. One application may benefit most from a simple round-robin scheduling algorithm, while another might benefit from a priority-based scheduling algorithm. The scheduling algorithm can be tailored to the application without disturbing the underlying OS scheduler.
3. ULTs can run on any operating system. No changes are required to the underlying kernel to support ULTs. The threads library is a set of application-level utilities shared by all applications.

4.7 1. In a typical operating system, many system calls are blocking. Thus, when a ULT executes a system call, not only is that thread blocked, but all of the threads within the process are blocked.
2. In a pure ULT strategy, a multithreaded application cannot take advantage of multiprocessing. A kernel assigns one process to only one processor at a time. Therefore, only a single thread within a process can execute at a time.

Problems

4.2 Because, with ULTs, the thread structure of a process is not visible to the operating system, which only schedules on the basis of processes.

CHAPTER 5

CONCURRENCY: MUTUAL

EXCLUSION AND SYNCHRONIZATION

Review Questions

5.1 Communication among processes, sharing of and competing for resources, synchronization of the activities of multiple processes, and allocation of processor time to processes.

5.9 A binary semaphore may only take on the values 0 and 1. A general semaphore may take on any integer value.

Problems

5.2 ABCDE; ABDCE; ABDEC; ADBCE; ADBEC; ADEBC; DEABC; DAEBC; DABEC; DABCE

5.5 Consider the case in which turn equals 0 and P(1) sets blocked[1] to true and then finds blocked[0] set to false. P(0) will then set blocked[0] to true, find turn = 0, and enter its critical section. P(1) will then assign 1 to turn and will also enter its critical section.
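The problem's flawed protocol has the shape of Hyman's incorrect mutual-exclusion algorithm; assuming that form (variable names are assumptions), the interleaving described above can be replayed deterministically by modeling each process as a generator that yields between atomic steps:

```python
def process(pid, turn, blocked, in_cs):
    # turn is a one-element list so both generators share it
    blocked[pid] = True
    yield
    while turn[0] != pid:
        while blocked[1 - pid]:
            yield
        yield
        turn[0] = pid
        yield
    in_cs[pid] = True  # enter critical section
    yield

turn, blocked, in_cs = [0], [False, False], [False, False]
p0 = process(0, turn, blocked, in_cs)
p1 = process(1, turn, blocked, in_cs)

# The interleaving from the answer above:
next(p1)  # P1 sets blocked[1] = True
next(p1)  # P1: turn != 1; finds blocked[0] False, exits the inner loop
next(p0)  # P0 sets blocked[0] = True
next(p0)  # P0: finds turn == 0 and enters its critical section
next(p1)  # P1 assigns 1 to turn
next(p1)  # P1: turn == 1, so it also enters its critical section

print(in_cs)  # [True, True] -- mutual exclusion is violated
```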

CHAPTER 6

CONCURRENCY: DEADLOCK AND

STARVATION

Review Questions

6.2 Mutual exclusion: only one process may use a resource at a time. Hold and wait: a process may hold allocated resources while awaiting assignment of others. No preemption: no resource can be forcibly removed from a process holding it.

6.3 The above three conditions, plus circular wait: a closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain.

Problems

6.4 a. The "still needs" matrix (Claim minus Allocation) is:
p1: 0 0 0 0
p2: 0 7 5 0
p3: 6 6 2 2
p4: 2 0 0 2
p5: 0 3 2 0

b. to d. Running the banker's algorithm, we see that the processes can finish in the order p1, p4, p5, p2, p3.

e. Change available to (2,0,0,0) and p3's row of "still needs" to (6,5,2,2). Now p1, p4, p5 can finish, but with available now (4,6,9,8) neither p2's nor p3's "still needs" can be satisfied. So it is not safe to grant p3's request.
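The safety check applied in parts b to e can be sketched generically. The matrices below are small hypothetical values for illustration, not the problem's data:

```python
def is_safe(available, still_needs, allocation):
    """Banker's algorithm safety check.

    Returns an order in which all processes can finish, or None if unsafe.
    """
    work = list(available)
    unfinished = set(range(len(allocation)))
    order = []
    while unfinished:
        for p in sorted(unfinished):
            if all(n <= w for n, w in zip(still_needs[p], work)):
                # p can run to completion and release its allocation
                work = [w + a for w, a in zip(work, allocation[p])]
                order.append(p)
                unfinished.remove(p)
                break
        else:
            return None  # no remaining process can finish: unsafe
    return order

# Hypothetical example (not the book's data): 3 processes, 2 resource types.
available   = [1, 1]
still_needs = [[1, 2], [1, 0], [2, 2]]
allocation  = [[1, 0], [0, 2], [1, 1]]
print(is_safe(available, still_needs, allocation))  # [1, 0, 2]
```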

6.5 1. W = (2 1 0 0)
2. Mark P3; W = (2 1 0 0) + (0 1 2 0) = (2 2 2 0)
3. Mark P2; W = (2 2 2 0) + (2 0 0 1) = (4 2 2 1)
4. Mark P1; no deadlock detected
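The detection algorithm traced above can be sketched as follows. The initial W (the Available vector) and the allocations added for P3 and P2 are taken from the steps above; P1's rows and all of the request vectors are assumptions for illustration:

```python
def detect_deadlock(available, allocation, request):
    """Deadlock detection: repeatedly mark any process whose outstanding
    request can be met from W, and add its allocation back to W."""
    w = list(available)
    marked = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for p in range(len(allocation)):
            if not marked[p] and all(r <= x for r, x in zip(request[p], w)):
                w = [x + a for x, a in zip(w, allocation[p])]
                marked[p] = True
                progress = True
    return [p for p in range(len(allocation)) if not marked[p]]  # deadlocked set

# W starts at Available = (2 1 0 0); P2 and P3 allocations match the
# W updates above; P1's rows and the request matrix are hypothetical.
available  = [2, 1, 0, 0]
allocation = [[1, 0, 1, 0],   # P1 (hypothetical)
              [2, 0, 0, 1],   # P2
              [0, 1, 2, 0]]   # P3
request    = [[3, 0, 0, 1],   # P1 (hypothetical)
              [0, 2, 0, 0],   # P2 (hypothetical)
              [0, 0, 0, 0]]   # P3 (hypothetical)
print(detect_deadlock(available, allocation, request))  # [] -> no deadlock
```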

CHAPTER 7

MEMORY MANAGEMENT

Review Questions

7.1 Relocation, protection, sharing, logical organization, physical organization.

7.7 A logical address is a reference to a memory location independent of the current assignment of data to memory; a translation must be made to a physical address before the memory access can be achieved. A relative address is a particular example of a logical address, in which the address is expressed as a location relative to some known point, usually the beginning of the program. A physical address, or absolute address, is an actual location in main memory.

Problems

7.6 a. The 40M block fits into the second hole, with a starting address of 80M. The 20M block fits into the first hole, with a starting address of 20M. The 10M block is placed at location 120M.

40M | 40M | 60M | 40M | 30M | 40M | 40M

b. The three starting addresses are 230M, 20M, and 160M, for the 40M, 20M, and 10M blocks, respectively.

40M | 60M | 60M | 40M | 30M | 40M | 40M

c. The three starting addresses are 80M, 120M, and 160M, for the 40M, 20M, and 10M blocks, respectively.
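Placement decisions like these can be computed mechanically. A sketch of first-fit and best-fit over a hypothetical free list; the hole addresses below are illustrative, chosen to echo the answers above, not taken from the book's figure:

```python
def first_fit(holes, size):
    """Return the start address of the first hole large enough, or None."""
    for start, length in holes:
        if length >= size:
            return start
    return None

def best_fit(holes, size):
    """Return the start address of the smallest hole that still fits."""
    fits = [(length, start) for start, length in holes if length >= size]
    return min(fits)[1] if fits else None

# Hypothetical free list: (start address, length) in MB.
holes = [(20, 20), (80, 60), (160, 30), (230, 40)]
print(first_fit(holes, 40))  # 80  (the first hole of at least 40M)
print(best_fit(holes, 40))   # 230 (the exact 40M fit beats the 60M hole)
```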

7.12 a. The number of bytes in the logical address space is (2^16 pages) × (2^10 bytes/page) = 2^26 bytes. Therefore, 26 bits are required for the logical address.

b. A frame is the same size as a page, 2^10 bytes.

c. The number of frames in main memory is (2^32 bytes of main memory)/(2^10 bytes/frame) = 2^22 frames. So 22 bits are needed to specify the frame.

d. There is one entry for each page in the logical address space. Therefore there are 2^16 entries.

e. In addition to the valid/invalid bit, 22 bits are needed to specify the frame location in main memory, for a total of 23 bits.
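The arithmetic in parts a through e can be checked directly:

```python
import math

page_size   = 2 ** 10   # bytes per page
num_pages   = 2 ** 16   # pages in the logical address space
main_memory = 2 ** 32   # bytes of physical memory

logical_bits = int(math.log2(num_pages * page_size))     # 16 + 10
frame_bits   = int(math.log2(main_memory // page_size))  # 32 - 10
entry_bits   = frame_bits + 1   # frame number plus the valid/invalid bit

print(logical_bits, frame_bits, num_pages, entry_bits)  # 26 22 65536 23
```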

40M | 40M | 60M | 40M | 30M | 40M | 40M

7.6 d. The three starting addresses are 80M, 230M, and 360M, for the 40M, 20M, and 10M blocks, respectively.

CHAPTER 8 VIRTUAL MEMORY

Review Questions

8.1 Simple paging: all the pages of a process must be in main memory for the process to run, unless overlays are used. Virtual memory paging: not all pages of a process need be in main memory frames for the process to run; pages may be read in as needed.

8.2 A phenomenon in virtual memory schemes, in which the processor spends most of its time swapping pieces rather than executing instructions.

Problems

8.1 a. Split the binary address into virtual page number and offset; use the VPN as an index into the page table; extract the page frame number; concatenate the offset to get the physical memory address.
b. (i) 1052 = 1024 + 28 maps to VPN 1 in PFN 7 (7 × 1024 + 28 = 7196). (ii) 2221 = 2 × 1024 + 173 maps to VPN 2: page fault. (iii) 5499 = 5 × 1024 + 379 maps to VPN 5 in PFN 0 (0 × 1024 + 379 = 379).
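The translations in part b can be checked with a minimal page-table lookup; only the three VPNs the problem exercises are filled in here:

```python
PAGE_SIZE = 1024  # 2**10 bytes

# Partial page table from the problem: VPN -> PFN (None = not resident).
page_table = {1: 7, 2: None, 5: 0}

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    pfn = page_table.get(vpn)
    if pfn is None:
        return None  # page fault
    return pfn * PAGE_SIZE + offset

print(translate(1052))  # 7196  (VPN 1 -> PFN 7)
print(translate(2221))  # None  (VPN 2 not resident: page fault)
print(translate(5499))  # 379   (VPN 5 -> PFN 0)
```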

8.4 a. PFN 3, since it was loaded longest ago, at time 20.
b. PFN 1, since it was referenced longest ago, at time 160.
c. Clear R in PFN 3 (oldest loaded), clear R in PFN 2 (next oldest loaded); the victim PFN is 0 since its R bit is 0.
d. Replace the page in PFN 3, since VPN 3 (in PFN 3) is used furthest in the future.

e. There are 6 faults, indicated by *:

Reference   VPN of pages in memory (LRU order)
(initial)   3 0 2 1
* 4         4 3 0 2
  0         0 4 3
  0         0 4 3
  0         0 4
* 2         2 0
* 4         4 2 0
  2         2 4 0
* 1         1 2 4
* 0         0 1 2 4
* 3         3 0 1 2
  2         2
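An LRU replacement policy like the one traced above can be sketched generically; the reference stream and preloaded pages below are illustrative, not the problem's:

```python
from collections import OrderedDict

def lru_faults(refs, frames, initial=()):
    """Count page faults under LRU; returns (faults, pages in LRU order)."""
    mem = OrderedDict((p, None) for p in initial)  # oldest first
    faults = 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)           # hit: becomes most recently used
        else:
            faults += 1
            if len(mem) >= frames:
                mem.popitem(last=False)  # evict the least recently used page
            mem[p] = None
    return faults, list(mem)[::-1]       # most recently used first

# Illustrative: 4 frames preloaded with pages 1, 2, 0, 3 (oldest first).
faults, order = lru_faults([4, 0, 0, 0, 2, 1], frames=4, initial=(1, 2, 0, 3))
print(faults, order)  # 2 [1, 2, 0, 4]
```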

CHAPTER 9 UNIPROCESSOR SCHEDULING

Review Questions

9.1 Long-term scheduling: the decision to add to the pool of processes to be executed. Medium-term scheduling: the decision to add to the number of processes that are partially or fully in main memory. Short-term scheduling: the decision as to which available process will be executed by the processor.

9.3 Turnaround time is the total time that a request spends in the system (waiting time plus service time). Response time is the elapsed time between the submission of a request and the time the response begins to appear as output.

Problems

9.1 Each square represents one time unit; the letter in the square refers to the currently running process.

FCFS         A A A B B B B B C C D D D D D E E E E E
RR, q = 1    A B A B C A B C B D B D E D E D E D E E
RR, q = 4    A A A B B B B C C B D D D D E E E E D E
SPN          A A A C C B B B B B D D D D D E E E E E
SRT          A A A C C B B B B B D D D D D E E E E E
HRRN         A A A B B B B B C C D D D D D E E E E E
FB, q = 1    A B A C B C A B B D B D E D E D E D E E
FB, q = 2^i  A B A A C B B C B B D D E D D E E D E E

Process            A      B      C      D      E    Mean
Ta                 0      1      3      9     12
Ts                 3      5      2      5      5

FCFS      Tf      3.00   8.00  10.00  15.00  20.00
          Tr      3.00   7.00   7.00   6.00   8.00   6.20
          Tr/Ts   1.00   1.40   3.50   1.20   1.60   1.74
RR q=1    Tf      6.00  11.00   8.00  18.00  20.00
          Tr      6.00  10.00   5.00   9.00   8.00   7.60
          Tr/Ts   2.00   2.00   2.50   1.80   1.60   1.98
RR q=4    Tf      3.00  10.00   9.00  19.00  20.00
          Tr      3.00   9.00   6.00  10.00   8.00   7.20
          Tr/Ts   1.00   1.80   3.00   2.00   1.60   1.88
SPN       Tf      3.00  10.00   5.00  15.00  20.00
          Tr      3.00   9.00   2.00   6.00   8.00   5.60
          Tr/Ts   1.00   1.80   1.00   1.20   1.60   1.32
SRT       Tf      3.00  10.00   5.00  15.00  20.00
          Tr      3.00   9.00   2.00   6.00   8.00   5.60
          Tr/Ts   1.00   1.80   1.00   1.20   1.60   1.32
HRRN      Tf      3.00   8.00  10.00  15.00  20.00
          Tr      3.00   7.00   7.00   6.00   8.00   6.20
          Tr/Ts   1.00   1.40   3.50   1.20   1.60   1.74
FB q=1    Tf      7.00  11.00   6.00  18.00  20.00
          Tr      7.00  10.00   3.00   9.00   8.00   7.40
          Tr/Ts   2.33   2.00   1.50   1.80   1.60   1.85
FB q=2^i  Tf      4.00  10.00   8.00  18.00  20.00
          Tr      4.00   9.00   5.00   9.00   8.00   7.00
          Tr/Ts   1.33   1.80   2.50   1.80   1.60   1.81
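The FCFS figures can be verified directly from the arrival and service times:

```python
# Arrival (Ta) and service (Ts) times for processes A..E, from the problem.
arrivals = [0, 1, 3, 9, 12]
services = [3, 5, 2, 5, 5]

t, finish = 0, []
for ta, ts in zip(arrivals, services):
    t = max(t, ta) + ts  # FCFS: run each process to completion, in order
    finish.append(t)

turnaround = [tf - ta for tf, ta in zip(finish, arrivals)]
ratios = [tr / ts for tr, ts in zip(turnaround, services)]
print(finish)                     # [3, 8, 10, 15, 20]
print(sum(turnaround) / 5)        # 6.2
print(round(sum(ratios) / 5, 2))  # 1.74
```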

9.16 a. Sequence with which processes will get 1 min of processor time:

Round  Jobs run (1 min each)  Elapsed time
1      A B C D E              5
2      A B C D E              10
3      A B C D E              15
4      A B D E                19
5      A B D E                23
6      A B D E                27
7      A B E                  30
8      A B E                  33
9      A B E                  36
10     A E                    38
11     A E                    40
12     A E                    42
13     A                      43
14     A                      44
15     A                      45

The turnaround time for each process: A = 45 min, B = 35 min, C = 13 min, D = 26 min, E = 42 min.
The average turnaround time is (45 + 35 + 13 + 26 + 42)/5 = 32.2 min.

b.
Priority  Job  Turnaround time
3         B    9
4         E    9 + 12 = 21
6         A    21 + 15 = 36
7         C    36 + 3 = 39
9         D    39 + 6 = 45

The average turnaround time is (9 + 21 + 36 + 39 + 45)/5 = 30 min.

c.
Job  Turnaround time
A    15
B    15 + 9 = 24
C    24 + 3 = 27
D    27 + 6 = 33
E    33 + 12 = 45

The average turnaround time is (15 + 24 + 27 + 33 + 45)/5 = 28.8 min.

d.
Running time  Job  Turnaround time
3             C    3
6             D    3 + 6 = 9
9             B    9 + 9 = 18
12            E    18 + 12 = 30
15            A    30 + 15 = 45

The average turnaround time is (3 + 9 + 18 + 30 + 45)/5 = 21 min.
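The round-robin schedule of part a can be verified with a 1-minute-quantum simulation; the service times (15, 9, 3, 6, 12 min) are those listed in part d:

```python
from collections import deque

# Jobs and their total processor time in minutes.
jobs = {"A": 15, "B": 9, "C": 3, "D": 6, "E": 12}

queue = deque(jobs)       # all jobs arrive at time 0, in order A..E
remaining = dict(jobs)
clock, finish = 0, {}
while queue:
    j = queue.popleft()
    clock += 1            # each job receives a 1-minute quantum
    remaining[j] -= 1
    if remaining[j] == 0:
        finish[j] = clock
    else:
        queue.append(j)

print(finish)  # {'C': 13, 'D': 26, 'B': 35, 'E': 42, 'A': 45}
print(sum(finish.values()) / len(finish))  # 32.2
```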

CHAPTER 10 MULTIPROCESSOR AND REAL-TIME SCHEDULING

Review Questions

10.1 Fine: parallelism inherent in a single instruction stream. Medium: parallel processing or multitasking within a single application. Coarse: multiprocessing of concurrent processes in a multiprogramming environment. Very coarse: distributed processing across network nodes to form a single computing environment. Independent: multiple unrelated processes.

10.4 A hard real-time task is one that must meet its deadline; otherwise it will cause undesirable damage or a fatal error to the system. A soft real-time task has an associated deadline that is desirable but not mandatory; it still makes sense to schedule and complete the task even if it has passed its deadline.

Problems

10.1 For fixed priority, we do the case in which the priority is A, B, C. Each square represents five time units; the letter in the square refers to the currently running process. The first row is fixed priority; the second row is earliest-deadline scheduling using completion deadlines.

Fixed priority:     A A B B A A C C A A B B A A C C A A
Earliest deadline:  A A B B A C C A C A A B B A A C C C A A

For fixed-priority scheduling, process C always misses its deadline.

10.4

(Timeline figure: s locked and then unlocked by T1; T2 runs; s locked and then unlocked by T3; shading distinguishes normal execution from execution in a critical section.)

Once T3 enters its critical section, it is assigned a priority higher than T1. When T3 leaves its critical section, it is preempted by T1.

CHAPTER 11

I/O MANAGEMENT AND DISK SCHEDULING

Review Questions

11.1 Programmed I/O: The processor issues an I/O command, on behalf of a process, to an I/O module; that process then busy-waits for the operation to be completed before proceeding. Interrupt-driven I/O: The processor issues an I/O command on behalf of a process, continues to execute subsequent instructions, and is interrupted by the I/O module when the latter has completed its work. The subsequent instructions may be in the same process, if it is not necessary for that process to wait for the completion of the I/O. Otherwise, the process is suspended pending the interrupt and other work is performed. Direct memory access (DMA): A DMA module controls the exchange of data between main memory and an I/O module. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred.

11.5 Seek time, rotational delay, access time.

Problems

11.1 If the calculation time exactly equals the I/O time (which is the most favorable situation), both the processor and the peripheral device running simultaneously will take half as long as if they ran separately. Formally, let C be the calculation time for the entire program and let T be the total I/O time required. Then the best possible running time with buffering is max(C, T), while the running time without buffering is C + T; and of course (C + T)/2 ≤ max(C, T) ≤ C + T. Source: [KNUT97].

11.3 Disk head is initially moving in the direction of decreasing track number:

FIFO                  SSTF                  SCAN                  C-SCAN
Track     Tracks      Track     Tracks      Track     Tracks      Track     Tracks
accessed  traversed   accessed  traversed   accessed  traversed   accessed  traversed
27        73          110       10          64        36          64        36
129       102         120       10          41        23          41        23
110       19          129       9           27        14          27        14
186       76          147       18          10        17          10        17
147       39          186       39          110       100         186       176
41        106         64        122         120       10          147       39
10        31          41        23          129       9           129       18
64        54          27        14          147       18          120       9
120       56          10        17          186       39          110       10
Average   61.8        Average   29.1        Average   29.6        Average   38.0

If the disk head is initially moving in the direction of increasing track number, only the SCAN and C-SCAN results change:

SCAN                  C-SCAN
Track     Tracks      Track     Tracks
accessed  traversed   accessed  traversed
110       10          110       10
120       10          120       10
129       9           129       9
147       18          147       18
186       39          186       39
64        122         10        176
41        23          27        17
27        14          41        14
10        17          64        23
Average   29.1        Average   35.1
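All four policies can be checked programmatically. A sketch assuming the head starts at track 100 (implied by the first FIFO traversal of 73 tracks to reach track 27) and that SCAN and C-SCAN reverse or wrap at the last pending request rather than the disk edge:

```python
def fifo(start, reqs):
    return list(reqs)

def sstf(start, reqs):
    pending, order, pos = list(reqs), [], start
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))  # shortest seek next
        order.append(nxt)
        pending.remove(nxt)
        pos = nxt
    return order

def scan_down(start, reqs):
    below = sorted(t for t in reqs if t < start)[::-1]
    above = sorted(t for t in reqs if t >= start)
    return below + above  # sweep down, then reverse and sweep up

def cscan_down(start, reqs):
    below = sorted(t for t in reqs if t < start)[::-1]
    above = sorted(t for t in reqs if t >= start)[::-1]
    return below + above  # sweep down, jump to the top, sweep down again

def avg_traversed(start, order):
    moves = [abs(order[0] - start)] + [abs(b - a) for a, b in zip(order, order[1:])]
    return sum(moves) / len(moves)

reqs, start = [27, 129, 110, 186, 147, 41, 10, 64, 120], 100
for name, algo in [("FIFO", fifo), ("SSTF", sstf),
                   ("SCAN", scan_down), ("C-SCAN", cscan_down)]:
    print(f"{name:6s} avg = {avg_traversed(start, algo(start, reqs)):.1f}")
# FIFO 61.8, SSTF 29.1, SCAN 29.6, C-SCAN 38.0
```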

CHAPTER 12 FILE MANAGEMENT

Review Questions

12.1 A field is the basic element of data containing a single value. A record is a collection of related fields that can be treated as a unit by some application program.

12.5 Pile: Data are collected in the order in which they arrive. Each record consists of one burst of data.
Sequential file: A fixed format is used for records. All records are of the same length, consisting of the same number of fixed-length fields in a particular order. Because the length and position of each field are known, only the values of fields need to be stored; the field name and length for each field are attributes of the file structure.
Indexed sequential file: The indexed sequential file maintains the key characteristic of the sequential file: records are organized in sequence based on a key field. Two features are added: an index to the file to support random access, and an overflow file. The index provides a lookup capability to reach quickly the vicinity of a desired record. The overflow file is similar to the log file used with a sequential file, but is integrated so that records in the overflow file are located by following a pointer from their predecessor record.
Indexed file: Records are accessed only through their indexes. The result is that there is now no restriction on the placement of records as long as a pointer in at least one index refers to that record. Furthermore, variable-length records can be employed.
Direct, or hashed, file: The direct file makes use of hashing on the key value.

Problems

12.1 Fixed blocking: F = ⌊B/R⌋, the largest integer less than or equal to B/R.

When records of variable length are packed into blocks, data for marking the record boundaries within the block has to be added to separate the records. When spanned records bridge block boundaries, some reference to the successor block is also needed. One possibility is a length indicator preceding each record. Another possibility is a special separator marker between records. In any case, we can assume that each record requires a marker, and we assume that the size of a marker is about equal to the size of a block pointer [WEID87]. For spanned blocking, a block pointer of size P to its successor block may be included in each block, so that the pieces of a spanned record can easily be retrieved. Then we have:

Variable-length spanned blocking: F = (B - P)/(R + P)

With unspanned variable-length blocking, an average of R/2 will be wasted because of the fitting problem, but no successor pointer is required:

Variable-length unspanned blocking: F = (B - R/2)/(R + P)

12.3 a. Indexed b. Indexed sequential c. Hashed or indexed
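The three blocking factors can be compared numerically; the values of B, R, and P below are illustrative assumptions, not from the text:

```python
import math

def fixed_blocking(B, R):
    """Records per block with fixed-length records."""
    return math.floor(B / R)

def spanned_blocking(B, R, P):
    """Average records per block, variable-length spanned: (B - P)/(R + P)."""
    return (B - P) / (R + P)

def unspanned_blocking(B, R, P):
    """Average records per block, variable-length unspanned: (B - R/2)/(R + P)."""
    return (B - R / 2) / (R + P)

# Illustrative values: 2048-byte blocks, 100-byte records, 4-byte pointers.
B, R, P = 2048, 100, 4
print(fixed_blocking(B, R))                   # 20
print(round(spanned_blocking(B, R, P), 2))    # 19.65
print(round(unspanned_blocking(B, R, P), 2))  # 19.21
```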
