OPERATING SYSTEM - INTRODUCTION
Clustered System:
Like multiprocessor systems, clustered systems gather together multiple CPUs to accomplish computational work. Clustered systems are composed of two or more individual systems coupled together. Clustered computers share storage and are closely linked via a local-area network (LAN).

Clustering is usually used to provide high-availability service; that is, service will continue even if one or more systems in the cluster fails. A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others. If the monitored machine fails, the monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine.
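The monitor-and-take-over behavior described above can be sketched in a few lines of Python. This is an illustrative toy, not real cluster software; the node and function names are hypothetical.

```python
# Toy sketch of cluster failover (hypothetical names): a standby node
# checks the active node's heartbeat; if the heartbeat fails, the
# standby takes ownership and becomes the active server.

class Node:
    def __init__(self, name, active):
        self.name = name
        self.active = active
        self.alive = True      # stands in for a real heartbeat signal

def monitor_and_failover(active, standby, heartbeat_ok):
    """If the active node's heartbeat fails, promote the standby."""
    if not heartbeat_ok(active):
        active.active = False
        standby.active = True   # standby becomes the active server
    return standby.active

primary = Node("node-a", active=True)
backup = Node("node-b", active=False)

# Simulate a crash of the primary: its heartbeat check now fails.
primary.alive = False
took_over = monitor_and_failover(primary, backup, lambda n: n.alive)
print(took_over)  # True: the standby has taken over
```

In a real cluster the heartbeat check would be a network probe and takeover would include mounting the failed node's shared storage and restarting its applications.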

Clustering can be structured asymmetrically or symmetrically.

In asymmetric clustering, one machine is in hot-standby mode while the other is running the applications. The hot-standby host machine does nothing but monitor the active server. If that server fails, the hot-standby host becomes the active server.

In symmetric mode, two or more hosts are running applications and monitoring each other. This mode is more efficient, as it uses all of the available hardware. Other forms of clusters include parallel clusters and clustering over a wide-area network (WAN).

Operating-System Structure:
An operating system provides the environment within which programs are executed. One of the most important aspects of operating systems is the ability to multiprogram. Multiprogramming increases CPU utilization by organizing jobs. The operating system keeps several jobs in memory simultaneously. This set of jobs can be a subset of the jobs kept in the job pool, which contains all jobs that enter the system.
[Figure: Memory layout for a multiprogramming system]

The operating system picks and begins to execute one of the jobs in memory. Sometimes, the job may have to wait for some task, such as an I/O operation, to complete. In such a situation, the operating system simply switches to and executes another job. When that job needs to wait, the CPU is switched to another job, and so on. Eventually, the first job finishes waiting and gets the CPU back.
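The run-until-wait switching described above can be sketched as a toy loop. This is illustrative only; "bursts" is a hypothetical stand-in for the alternating CPU work and I/O waits of a real job.

```python
# Toy multiprogramming loop (illustrative): run a job until it blocks
# on I/O, then switch the CPU to another job that is ready to run.

def run_jobs(jobs):
    """jobs: list of (name, bursts), where each burst is one unit of
    CPU work followed by an implied I/O wait.
    Returns the order in which jobs received the CPU."""
    order = []
    ready = list(jobs)
    while ready:
        name, bursts = ready.pop(0)
        order.append(name)       # this job gets the CPU
        bursts = bursts[1:]      # it runs one CPU burst...
        if bursts:               # ...then waits for I/O; requeue it
            ready.append((name, bursts))
    return order

# Job A needs two CPU bursts (with an I/O wait between them); B needs one.
print(run_jobs([("A", ["cpu", "cpu"]), ("B", ["cpu"])]))
# ['A', 'B', 'A']: while A waits for I/O, the CPU runs B, then returns to A.
```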

Time-sharing is a logical extension of multiprogramming. In a time-sharing system, the CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running.

A time-shared operating system allows many users to share the computer simultaneously. As the system switches rapidly from one user to the next, each user is given the impression that the entire computer system is dedicated to their use, even though it is being shared among many users.
Time-sharing and multiprogramming require several jobs to be kept simultaneously in memory. Since in general main memory is too small to accommodate all jobs, the jobs are kept initially on the disk in the job pool. This pool consists of all processes residing on the disk awaiting allocation of main memory. If several jobs are ready to be brought into memory, and if there is not enough room for all of them, then the system must choose among them. This is called job scheduling.
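The job-scheduling decision above can be sketched with a simple admission policy. The first-come-first-served policy and the job names here are assumptions for illustration; real job schedulers weigh many more factors.

```python
# Sketch of job scheduling (hypothetical FCFS policy): from a
# disk-resident job pool, admit jobs into main memory while they fit;
# the rest stay on disk awaiting allocation of main memory.

def schedule_jobs(job_pool, memory_size):
    """job_pool: list of (name, memory_needed).
    Admit each job in arrival order if it fits in remaining memory."""
    in_memory, waiting, free = [], [], memory_size
    for name, need in job_pool:
        if need <= free:
            in_memory.append(name)
            free -= need
        else:
            waiting.append(name)   # stays on disk in the job pool
    return in_memory, waiting

pool = [("J1", 40), ("J2", 30), ("J3", 50), ("J4", 20)]
loaded, waiting = schedule_jobs(pool, memory_size=100)
print(loaded, waiting)  # ['J1', 'J2', 'J4'] ['J3']
```

J3 needs 50 units but only 30 remain after J1 and J2 are admitted, so it waits on disk while the smaller J4 is brought in.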

Proper memory management is required in the operating system, since several programs reside in memory at the same time.

When main memory holds several jobs that are ready to run at the same time, the system must choose which one gets the CPU. This is called CPU scheduling.
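One common CPU-scheduling policy, assumed here purely for illustration, is round-robin: each ready job runs for at most one time quantum, then moves to the back of the ready queue if it still has work left.

```python
from collections import deque

# Round-robin CPU scheduling sketch (one common policy, assumed here):
# each ready job runs for one time quantum, then is requeued if it
# still needs more CPU time.

def round_robin(jobs, quantum):
    """jobs: list of (name, remaining_time). Returns completion order."""
    queue = deque(jobs)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum          # run for one quantum (or less)
        if remaining > 0:
            queue.append((name, remaining))   # back of the ready queue
        else:
            finished.append(name)
    return finished

print(round_robin([("P1", 5), ("P2", 2), ("P3", 4)], quantum=2))
# ['P2', 'P3', 'P1']: shorter jobs finish earlier, no job starves.
```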

In a time-sharing system, the operating system must ensure reasonable response time, which is sometimes accomplished through swapping, where processes are swapped in and out of main memory to the disk.
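The swapping idea can be sketched as a make-room operation. The oldest-resident victim choice and the process names are assumptions for illustration; real systems pick swap victims far more carefully.

```python
# Swapping sketch (illustrative): when memory is full and a new process
# must come in, swap a resident process out to disk to make room.

def swap_in(in_memory, on_disk, incoming, capacity):
    """in_memory / on_disk: lists of process names; capacity is the
    maximum number of processes memory can hold. The oldest resident
    is swapped out if there is no free slot (assumed policy)."""
    if len(in_memory) >= capacity:
        victim = in_memory.pop(0)   # oldest resident is swapped out
        on_disk.append(victim)
    in_memory.append(incoming)      # new process is swapped in
    return in_memory, on_disk

memory, disk = ["A", "B"], []
memory, disk = swap_in(memory, disk, "C", capacity=2)
print(memory, disk)  # ['B', 'C'] ['A']
```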

The virtual-memory scheme enables users to run programs that are larger than actual memory.
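The way virtual memory lets a program exceed physical memory can be sketched with demand paging: pages are loaded into a small set of frames only when referenced. FIFO page replacement is assumed here for simplicity; the reference string is made up for illustration.

```python
from collections import deque

# Demand-paging sketch (illustrative): a program with 5 distinct pages
# runs in only 3 frames of physical memory. A page is brought in from
# disk only when referenced (a page fault); FIFO replacement is assumed.

def count_page_faults(references, num_frames):
    """references: sequence of page numbers touched by the program."""
    frames = deque()                # pages currently in physical memory
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1             # page fault: load page from disk
            if len(frames) == num_frames:
                frames.popleft()    # evict the oldest page (FIFO)
            frames.append(page)
    return faults

print(count_page_faults([0, 1, 2, 3, 0, 1, 4], num_frames=3))  # 7
```

Every reference here faults because the working set never fits in three frames; with more frames, or with a reference string that reuses recent pages, the fault count drops.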