1. The major difference between deadlock, starvation and race is this: in a deadlock, two or more jobs each hold a resource while waiting for a resource held by another, so none of them can ever finish. In starvation, one job is kept from executing because the resource it needs is always allocated to other jobs, even though the system as a whole keeps making progress. A race occurs when two or more jobs access the same resource at the same time and the result depends on which one happens to get there first.
2. Example of Deadlock: two people both grab the last unit of a product at the same time, and each refuses to let go until the other does, so neither can complete the purchase.
Example of Starvation: one person borrows a pen from his classmate and keeps it, so the classmate never gets to use his own pen.
Example of Race: two guys courting the same girlfriend at the same time; whoever gets there first wins.
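To make the deadlock example concrete, here is a minimal Java sketch (all names invented): two "buyer" threads each lock one object and then try to lock the other, so each ends up waiting forever for the lock the other holds.

public class DeadlockDemo {
    private static final Object productA = new Object();
    private static final Object productB = new Object();

    public static void main(String[] args) {
        // Buyer 1 grabs productA first, then wants productB.
        Thread buyer1 = new Thread(() -> {
            synchronized (productA) {
                sleep(100);                      // give the other buyer time to grab productB
                synchronized (productB) {
                    System.out.println("buyer1 got both");
                }
            }
        });
        // Buyer 2 grabs productB first, then wants productA -- the opposite order.
        Thread buyer2 = new Thread(() -> {
            synchronized (productB) {
                sleep(100);
                synchronized (productA) {
                    System.out.println("buyer2 got both");
                }
            }
        });
        buyer1.start();
        buyer2.start();
        // Neither println ever runs: each thread holds one lock and waits for the other.
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}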
3. Four necessary conditions for the deadlock from exercise #2:
if there is only one unit of the product and only one person can hold it at a time (mutual exclusion);
if each person keeps his grip on the product while waiting for the other to give it up (hold and wait);
if the product cannot be forcibly taken away from whoever is holding it (no preemption);
if each person is waiting on the other to release it, forming a cycle (circular wait).
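As a hedged follow-up sketch (names invented): if the circular-wait condition alone is broken, by making both buyers acquire the locks in one fixed order, the deadlock from the previous sketch can no longer occur, which shows that all four conditions really are necessary.

public class LockOrderingDemo {
    private static final Object productA = new Object();
    private static final Object productB = new Object();

    public static void main(String[] args) {
        // Both buyers acquire the locks in the same fixed order (A then B),
        // so a cycle of waiting can never form and no deadlock occurs.
        Runnable politeBuyer = () -> {
            synchronized (productA) {
                synchronized (productB) {
                    System.out.println(Thread.currentThread().getName() + " got both items");
                }
            }
        };
        new Thread(politeBuyer, "buyer1").start();
        new Thread(politeBuyer, "buyer2").start();
    }
}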
4.
5.
a. Deadlock should not happen because two traffic lights control access to the bridge. But when some motorists do not obey the lights, deadlock can occur, because there is only one lane on the bridge: cars entering from both ends meet head-on and neither side can move forward.
b. Deadlock can be detected when traffic backs up bumper to bumper at both ends of the bridge and no car is able to advance.
c. The way to prevent deadlock is to time the traffic lights correctly and have motorists obey them, so that only one direction of traffic is on the bridge at a time.
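As a rough software analogy for answer (c), the bridge can be treated as a mutually exclusive resource. In the hedged Java sketch below (names invented), every car must acquire a single permit before driving on, the software equivalent of an obeyed traffic signal, so two cars can never end up facing each other in the middle.

import java.util.concurrent.Semaphore;

public class OneLaneBridge {
    // One permit: only one car on the bridge at a time; "fair" so no direction starves.
    private static final Semaphore bridge = new Semaphore(1, true);

    static void cross(String car) throws InterruptedException {
        bridge.acquire();                 // wait for the "green light"
        try {
            System.out.println(car + " is crossing");
            Thread.sleep(50);             // time spent on the bridge
        } finally {
            bridge.release();             // leave the bridge so the other side can go
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            String north = "northbound-" + i, south = "southbound-" + i;
            new Thread(() -> { try { cross(north); } catch (InterruptedException e) {} }).start();
            new Thread(() -> { try { cross(south); } catch (InterruptedException e) {} }).start();
        }
    }
}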
Tuesday, January 15, 2008
Thursday, December 13, 2007
Assignment #3
Page 104
Question #4
What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?
- Thrashing is caused by under-allocating the minimum number of pages a process requires, forcing it to page-fault continuously. The system can detect thrashing by comparing CPU utilization with the degree of multiprogramming: if utilization falls as more processes are added, the processes are spending their time waiting on the pager rather than computing. Once detected, thrashing can be eliminated by reducing the degree of multiprogramming, for example by suspending or swapping out some processes.
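The sketch below illustrates that detection rule in Java. All names and thresholds are invented for illustration; it is only a toy model of the idea, not how any particular kernel implements it.

public class ThrashingMonitor {
    // Invented thresholds, purely illustrative.
    private static final double LOW_CPU_UTILIZATION = 0.40;   // 40%
    private static final double HIGH_PAGE_FAULT_RATE = 100.0; // faults per second

    // Returns a negative number to suspend jobs, a positive number to admit more, 0 to hold steady.
    static int adjustMultiprogramming(double cpuUtilization, double pageFaultRate, int currentJobs) {
        if (cpuUtilization < LOW_CPU_UTILIZATION && pageFaultRate > HIGH_PAGE_FAULT_RATE) {
            // Classic thrashing signature: the CPU is mostly idle because every job is waiting on the disk.
            return -Math.max(1, currentJobs / 4);   // suspend or swap out some jobs
        }
        if (cpuUtilization > 0.90 && pageFaultRate < HIGH_PAGE_FAULT_RATE) {
            return 1;                               // room for one more job
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(adjustMultiprogramming(0.25, 300.0, 12)); // thrashing: prints -3
        System.out.println(adjustMultiprogramming(0.95, 10.0, 12));  // healthy: prints 1
    }
}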
Page 56
Question 1-3
a. Multiprogramming
- Multiprogramming is the technique of keeping several programs in main memory at the same time, so that the CPU always has something useful to execute; combined with timesharing it lets a computer appear to do several things at once, creating logical parallelism. The operating system keeps several jobs in memory simultaneously, selects a job from the job pool, and starts executing it; when that job has to wait for an I/O operation, the CPU is switched to another job. The main idea is that the CPU is never idle.
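As a rough illustration of the "CPU switches to another job during I/O" idea, here is a small Java sketch (job names invented): one thread blocks as if waiting for I/O while another keeps the processor busy with pure computation, so the CPU does not sit idle.

public class MultiprogrammingSketch {
    public static void main(String[] args) throws InterruptedException {
        // Job A "waits for I/O" (simulated with sleep); while it is blocked,
        // the CPU is free to run job B, so the processor is not left idle.
        Thread jobA = new Thread(() -> {
            System.out.println("job A: issued an I/O request, now waiting");
            try { Thread.sleep(300); } catch (InterruptedException e) {}   // blocked on I/O
            System.out.println("job A: I/O finished, resuming");
        });
        Thread jobB = new Thread(() -> {
            long sum = 0;
            for (int i = 0; i < 50_000_000; i++) sum += i;                 // pure CPU work
            System.out.println("job B: finished computing, sum=" + sum);
        });
        jobA.start();
        jobB.start();
        jobA.join();
        jobB.join();
    }
}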
b. External Fragmentation:
- External Fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be effectively used. If too much external fragmentation occurs, the amount of usable memory is drastically reduced. Total memory space exists to satisfy a request, but it is not contiguous.
c. Internal Fragmentation:
-Internal fragmentation is the space wasted inside of allocated memory blocks because of restriction on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.
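To make the two kinds of waste concrete, here is a small Java sketch with made-up block and hole sizes: the first part computes internal fragmentation inside a fixed-size block, the second shows external fragmentation, where enough memory is free in total but no single hole is large enough to satisfy the request.

public class FragmentationSketch {
    public static void main(String[] args) {
        // Internal fragmentation: the allocator only hands out fixed 4 KiB blocks.
        int blockSize = 4096;
        int requested = 2600;
        int blocksUsed = (requested + blockSize - 1) / blockSize;          // round up
        int internalWaste = blocksUsed * blockSize - requested;
        System.out.println("internal fragmentation: " + internalWaste + " bytes wasted inside the block");

        // External fragmentation: 3000 bytes are free in total, but split into
        // scattered holes, so a contiguous 2048-byte request cannot be satisfied.
        int[] freeHoles = {1200, 900, 900};
        int totalFree = 0;
        int largestHole = 0;
        for (int hole : freeHoles) { totalFree += hole; largestHole = Math.max(largestHole, hole); }
        int request = 2048;
        System.out.println("total free = " + totalFree + " bytes, largest hole = " + largestHole
                + " bytes, request of " + request + " bytes fails: " + (largestHole < request));
    }
}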
2.) The advantages of fixed-partition memory allocation are as follows:
- It avoids wasting CPU idle time, since another job can run while one is waiting.
- The operating system support for it is easy to implement.
3.) The disadvantages of fixed-partition memory allocation are as follows:
- The degree of multiprogramming is fixed: only one job per partition.
- Main storage is wasted: some partitions are not used, and a job smaller than its partition wastes the rest of it.
Thursday, November 29, 2007
Assignment No.2
WINDOWS:
On the 32-bit Windows platform, JVM programs can only ever use up to about 1.5–1.6 GiB of memory in RAM per Java VM process. Allocating a heap size greater than this amount will either not work, or force paging and poor performance.
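One quick way to check what heap limit a particular JVM actually accepted is to print what the Runtime class reports. This is a generic sketch, not specific to any vendor's VM; running it as, say, java -Xmx1800m HeapLimitCheck on a 32-bit Windows JVM is a simple way to reproduce the limit described above, while the same command on a 32-bit Linux or Solaris JVM will usually start without complaint.

public class HeapLimitCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mib = 1024L * 1024L;
        // maxMemory() is the most the JVM will ever try to use for the heap (roughly the -Xmx value).
        System.out.println("max heap:   " + rt.maxMemory() / mib + " MiB");
        System.out.println("total heap: " + rt.totalMemory() / mib + " MiB (currently reserved)");
        System.out.println("free heap:  " + rt.freeMemory() / mib + " MiB (unused within total)");
    }
}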
From my own research on this issue (and taking into account Microsoft’s advice on KnowledgeBase about Win32 virtual memory allocation), this is what I understand about the limits of RAM usability, both for Win32 itself, and for 32-bit JVMs running on Windows. I haven’t researched if this limit applies for 64-bit Windows or JVMs, nor what Vista might be doing. Also, though the 4GiB limit is inherent to 32-bit machines, the 2GiB limit seems to be peculiar to Windows, and I’ve not seen it anywhere in Linux, Solaris or BSD: when unix runs on a 32-bit machine, there’s a 4GiB limit, but not 2GiB.
Any 32-bit processor has a hard limit of 2^32 = 4,294,967,296 distinct addresses, since that is the number of values a single 32-bit machine word can represent. On a byte-addressed computer like Intel IA-32, that equates to 4,294,967,296 bytes = 4,194,304 KiB = 4,096 MiB = 4 GiB.
Normal memory access techniques used by 32-bit Windows programs use standard linear byte addressing, and so are limited to 4GiB of addressable memory, whether it is real or virtual.
On Windows, the amount of this 4 GiB space that a process can actually use is halved to 2 GiB per process. Windows reserves the other 2 GiB of virtual address space in every process for the kernel and for speeding up paging. This is really dumb, because it means that a process cannot address more than 2 GiB of heap even if you have more than 2 GiB of actual RAM! Sort of like the DOS 640 KiB limit reborn!
To overcome this dumb design, Windows has a memory addressing scheme called the AWE API, which allows a process to allocate up to 3 GiB of memory and have that memory reside in physical RAM. To use it, the program must be specially written against the AWE API. There's also another virtual memory technology in Windows called PAE. This is not useful to application programs; it is how the Windows kernel can use more than 4 GiB of real memory to allocate physical memory to all the processes on the system. Each process is still limited to its own 4 GiB address space, with 2 GiB mapped to RAM (or 3 GiB, if the program uses the AWE API) and the rest having to be virtual (paged to disc). PAE just lets Windows keep more than one of these big processes in memory at once even if their combined total heap is more than 4 GiB (assuming, of course, that you have more than 4 GiB of RAM).
Both Sun and BEA have declined to use the AWE API in their Java VMs. This is probably because the AWE API does not provide a contiguous address space of 3 GiB, but rather breaks it at the 2 GiB mark, and the Java VM spec used to require a contiguously addressed heap (though this has changed in the JVMS 2nd Ed.…). So any Java VM running on Windows is still limited to less than 2 GiB per running program (actually only about 1.5 GiB is usable, because of further overhead for the JVM itself), at least so long as Win32 JVMs don't use the AWE API. I'm not sure how difficult it would be for Sun or BEA to change their JVMs to make use of the AWE API on Windows, but the fact that they haven't done so suggests to me that it wouldn't be easy. I have been unable to determine whether IBM's JVM uses it…
The only workable solution for using more than about 1.5–1.6 GiB per JVM process on a 32-bit host is not to run it on Windows (i.e. use Solaris, Linux or BSD). Real operating systems can let a process use most of the 4 GiB address space on a 32-bit machine without special programming tricks. Or you could switch to a 64-bit platform. Although there is a Windows for IA-64, I'm not sure about the availability of a 64-bit JVM for that platform.
It would be better to keep a smaller heap on Win32, and if you need more, consider re-engineering the program to use less memory anyway. If your program is genuinely memory bound and you can't get away from needing more than a 1.5 GiB heap, you could work around the Win32 memory limit by splitting your Java program into more than one process, each running in its own JVM with 1.5 GiB allocated to it, and have the processes communicate through an IPC mechanism such as JMS as needed. However, there's probably a lot more re-engineering work involved in that than in simply migrating away from Win32…
UNIX:
A reasonable approach that has been well established over the years is a layered system. Dijkstra's THE system (Fig. 1-25) was the first layered operating system. UNIX (Fig. 10-3) and Windows 2000 (Fig. 11-7) also have a layered structure, but the layering in both of them is more a way of trying to describe the system than a real guiding principle that was used in building the system.
For a new system, designers choosing to go this route should first very carefully choose the layers and define the functionality of each one. The bottom layer should always try to hide the worst idiosyncrasies of the hardware, as the HAL does in Fig. 11-7. Probably the next layer should handle interrupts, context switching, and the MMU, so above this level, the code is mostly machine independent. Above this, different designers will have different tastes (and biases). One possibility is to have layer 3 manage threads, including scheduling and interthread synchronization, as shown in Fig. 12-1. The idea here is that starting at layer 4 we have proper threads that are scheduled normally and synchronize using a standard mechanism (e.g., mutexes).
In layer 4 we might find the device drivers, each one running as a separate thread, with its own state, program counter, registers, etc., possibly (but not necessarily) within the kernel address space. Such a design can greatly simplify the I/O structure because when an interrupt occurs, it can be converted into an unlock on a mutex and a call to the scheduler to (potentially) schedule the newly readied thread that was blocked on the mutex. MINIX uses this approach, but in UNIX, Linux, and Windows 2000, the interrupt handlers run in a kind of no-man's land, rather than as proper threads that can be scheduled, suspended, etc. Since a huge amount of the complexity of any operating system is in the I/O, any technique for making it more tractable and encapsulated is worth considering.
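A hedged Java sketch of the driver-as-a-thread idea (all names invented): the driver thread blocks on a semaphore, and the simulated interrupt handler does nothing but release that semaphore, which is the "convert the interrupt into an unlock" trick the paragraph describes; the scheduler then runs the newly readied driver thread.

import java.util.concurrent.Semaphore;

public class DriverThreadSketch {
    // The driver sleeps on this semaphore until an "interrupt" signals that the device finished.
    private static final Semaphore deviceDone = new Semaphore(0);

    public static void main(String[] args) {
        Thread driver = new Thread(() -> {
            try {
                System.out.println("driver: started I/O, blocking until the interrupt");
                deviceDone.acquire();                       // blocked like any ordinary thread
                System.out.println("driver: interrupt received, completing the request");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        driver.start();

        // Simulated interrupt handler: do almost nothing, just wake the driver thread.
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) {}   // the device "works"
            deviceDone.release();                           // the unlock that readies the driver
        }).start();
    }
}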
Above layer 4, we would expect to find virtual memory, one or more file systems, and the system call handlers. If the virtual memory is at a lower level than the file systems, then the block cache can be paged out, allowing the virtual memory manager to dynamically determine how the real memory should be divided among user pages and kernel pages, including the cache. Windows 2000 works this way.
Wednesday, November 21, 2007
Assignment #1
OPERATING SYSTEM
1. The operating system (OS) can be considered the most important program that runs on a computer. Every general-purpose computer must have an operating system to provide a software platform on top of which other programs (the application software) can run. It is also the main control program of a computer: it schedules tasks, manages storage, and handles communication with peripherals. The central module of an operating system is the kernel. It is the part of the operating system that loads first, and it remains in main memory. Because it stays in memory, it is important for the kernel to be as small as possible while still providing all the essential services required by other parts of the operating system and by applications. Typically, the kernel is responsible for memory management, process and task management, and disk management.
In general, application software must be written to run on top of a particular operating system. Your choice of operating system therefore determines, to a great extent, the applications you can run. For PCs, the most popular operating systems are Windows 95/98, MS-DOS (Microsoft Disk Operating System) and OS/2, but others are available, such as Linux, BeOS…
For large systems, the operating system has even greater responsibilities and powers. It is like a traffic cop: it makes sure that different programs and users running at the same time do not interfere with each other. The operating system is also responsible for security, ensuring that unauthorized users do not access the system. From this point of view, operating systems can be classified as follows:
- Multi-user: allows two or more users to run programs at the same time. Some operating systems permit hundreds or even thousands of concurrent users.
- Multiprocessing: supports running a program on more than one CPU.
- Multitasking: allows more than one program to run concurrently.
- Multithreading: allows different parts of a single program to run concurrently (see the short Java sketch after this list).
- Real-time: a real-time operating system (RTOS) responds to input instantly. General-purpose operating systems, such as DOS and UNIX, are not real-time.
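As a minimal illustration of the multithreading item above (class and thread names invented), two parts of one Java program run concurrently as separate threads within a single process:

public class MultithreadingSketch {
    public static void main(String[] args) throws InterruptedException {
        // Two parts of the same program running concurrently.
        Thread wordCounter = new Thread(() ->
                System.out.println("counting words in the document..."));
        Thread spellChecker = new Thread(() ->
                System.out.println("spell-checking the document..."));

        wordCounter.start();
        spellChecker.start();
        wordCounter.join();
        spellChecker.join();
        System.out.println("both parts finished");
    }
}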
An OS is a 16-bit operating system if it processes 16 bits of data at once (e.g. DOS). Windows 98 and OS/2 Warp, on the other hand, are 32-bit operating systems because they can process 32 bits of data at once.
A network operating system (NOS) is an operating system that makes it possible for computers to be on a network and that manages the different aspects of the network. Some examples are Windows for Workgroups, Windows NT, AppleTalk, DECnet, and LANtastic…
2. What are two reasons why we would use six server computers in a regional bank instead of one supercomputer?
- We use six server computers because together they provide more memory than the supercomputer and can serve requests faster.
- The six server computers are also newer models than the supercomputer.