UML Tutorial
UML is a standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems. The UML 1.0 specification draft was proposed to the Object Management Group (OMG) in January 1997, and the OMG continues its effort to make UML a truly industry standard. This tutorial gives a complete understanding of UML.
So UML can be described as a general-purpose visual modeling language to visualize, specify, construct, and document a software system. Although UML is generally used to model software systems, it is not limited to this boundary: it is also used to model non-software systems, such as the process flow in a manufacturing unit.
UML is not a programming language, but tools can be used to generate code in various languages from UML diagrams. UML has a direct relation with object-oriented analysis and design; after some standardization, UML became an OMG standard.
OO Analysis --> OO Design --> OO Implementation using OO languages
The three steps above can be described in detail as follows:
- During object-oriented analysis, the most important task is to identify objects and describe them in a proper way. If the objects are identified efficiently, the next job of design is easy. The objects should be identified with responsibilities, which are the functions performed by the object. Each and every object has some responsibilities to perform, and when these responsibilities collaborate, the purpose of the system is fulfilled.
- The second phase is object-oriented design. During this phase, emphasis is placed on the requirements and their fulfilment. In this stage the objects are collaborated according to their intended associations. After the associations are complete, the design is also complete.
- The third phase is object-oriented implementation. In this phase the design is implemented using object-oriented languages such as Java and C++, as sketched below.
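To make these three phases concrete, here is a minimal, hypothetical sketch in C++ (the Account and StatementPrinter classes and their responsibilities are invented for illustration): an object identified during analysis is given responsibilities, associated with a collaborator during design, and finally implemented in an OO language.

#include <iostream>
#include <string>

// Analysis: an "Account" object is identified, with two
// responsibilities: hold a balance and accept deposits.
class Account {
public:
    explicit Account(std::string owner) : owner_(std::move(owner)) {}
    void deposit(double amount) { balance_ += amount; }   // responsibility 1
    double balance() const { return balance_; }           // responsibility 2
    const std::string& owner() const { return owner_; }
private:
    std::string owner_;
    double balance_ = 0.0;
};

// Design: a collaborating object is associated with Account; the
// system's purpose is fulfilled when their responsibilities collaborate.
class StatementPrinter {
public:
    void print(const Account& a) const {
        std::cout << a.owner() << ": " << a.balance() << "\n";
    }
};

// Implementation: the design expressed in an OO language.
int main() {
    Account acct("Alice");
    acct.deposit(100.0);
    StatementPrinter().print(acct);   // collaboration between the objects
}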
Role of UML in OO design:
UML is a modeling language used to model software and non-software systems. Although UML is used for non-software systems, the emphasis is on modeling object-oriented software applications. Most of the UML diagrams discussed so far are used to model different aspects, such as static and dynamic, but whatever the aspect, the artifacts are nothing but objects.
If we look into the class diagram, object diagram, collaboration diagram, and interaction diagrams, all are basically designed based on the objects.
So the relation between OO design and UML is very important to understand. The OO design is transformed into UML diagrams according to the requirement. Before understanding UML in detail, the OO concepts should be learned properly. Once the OO analysis and design are done, the next step is very easy: the output of OO analysis and design serves as the input to the UML diagrams.
UML Overview:
UML is a general-purpose modeling language. It was initially started to capture the behavior of complex software and non-software systems, and it has now become an OMG standard.
UML provides elements and components to support the requirements of complex systems. UML follows the object-oriented concepts and methodology, so object-oriented systems are generally modeled using this pictorial language.
UML diagrams are drawn from different perspectives like design, implementation, deployment etc.
In conclusion, UML can be defined as a modeling language to capture the architectural, behavioral, and structural aspects of a system.
Objects are the key to this object-oriented world. The basic requirement of object-oriented analysis and design is to identify objects efficiently. After that, responsibilities are assigned to the objects. Once this task is complete, the design is done using the input from analysis.
UML has an important role to play in this OO analysis and design: the UML diagrams are used to model the design.
UML notations:
UML notations are the most important elements in modeling. Efficient and appropriate use of notations is very important for making a complete and meaningful model; the model is useless unless its purpose is depicted properly.
So learning notations should be emphasized from the very beginning. Different notations are available for things and relationships, and the UML diagrams are made using these notations. Extensibility is another important feature, which makes UML more powerful and flexible.
UML Diagrams:
Diagrams are the heart of UML. These diagrams are broadly categorized as structural and behavioral diagrams.
- Structural diagrams consist of static diagrams like the class diagram, object diagram etc.
- Behavioral diagrams consist of dynamic diagrams like the sequence diagram, collaboration diagram etc.
The static and dynamic nature of a system is visualized by using these diagrams.
Class diagrams:
Class diagrams are the most popular UML diagrams used by the object-oriented community. A class diagram describes the objects in a system and their relationships; its classes consist of attributes and functions (operations).
A single class diagram describes a specific aspect of the system, and the collection of class diagrams represents the whole system. Basically, the class diagram represents the static view of a system.
Class diagrams are the only UML diagrams that can be mapped directly to object-oriented languages, so they are widely used by the developer community; a sketch of this mapping follows.
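As an illustration of this direct mapping, consider a hypothetical class box Order with operations and a one-to-many association to a class Item (names invented for the example). A rough C++ rendering would be:

#include <string>
#include <vector>

class Item {                       // class box "Item"
public:
    std::string name;              // attribute
    double price = 0.0;            // attribute
};

class Order {                      // class box "Order"
public:
    void add(const Item& item) { items_.push_back(item); }   // operation
    double total() const {                                   // operation
        double sum = 0.0;
        for (const Item& i : items_) sum += i.price;
        return sum;
    }
private:
    std::vector<Item> items_;      // association: Order "1" --- "*" Item
};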
Object Diagram:
An object diagram is an instance of a class diagram, so its basic elements are similar to those of a class diagram. Object diagrams consist of objects and links, and capture an instance of the system at a particular moment.
Object diagrams are used for prototyping, reverse engineering and modeling practical scenarios.
Component Diagram:
Component diagrams are a special kind of UML diagram used to describe the static implementation view of a system. Component diagrams consist of physical components like libraries, files, folders etc.
This diagram is used from the implementation perspective. More than one component diagram is used to represent the entire system. Forward and reverse engineering techniques are used to make executables from component diagrams.
Deployment Diagram:
Deployment diagrams are used to describe the static deployment view of a system. These diagrams are mainly used by system engineers.
Deployment diagrams consist of nodes and their relationships. An efficient deployment diagram is an integral part of software application development.
Use Case Diagram:
A use case diagram is used to capture the dynamic nature of a system. It consists of use cases, actors, and their relationships. The use case diagram is used at a high level of design to capture the requirements of a system.
So it represents the system functionalities and their flow. Although use case diagrams are not a good candidate for forward and reverse engineering, they are still used, in a slightly different way, for such modeling.
Interaction Diagram:
Interaction diagrams are used for capturing the dynamic nature of a system. Sequence and collaboration diagrams are the interaction diagrams used for this purpose.
Sequence diagrams are used to capture the time ordering of message flow, and collaboration diagrams are used to understand the structural organization of the system. Generally, a set of sequence and collaboration diagrams is used to model an entire system.
State-chart Diagram:
Statechart diagrams are one of the five diagrams used for modeling the dynamic nature of a system. These diagrams are used to model the entire life cycle of an object. The activity diagram is a special kind of statechart diagram.
The state of an object is defined as the condition in which the object resides for a particular time; the object moves to other states when certain events occur. Statechart diagrams are also used for forward and reverse engineering.
Activity Diagram:
The activity diagram is another important diagram for describing dynamic behavior. An activity diagram consists of activities, links, relationships etc. It models all types of flows, like parallel, single, and concurrent.
An activity diagram describes the flow of control from one activity to another without any messages. These diagrams are used to model the high-level view of business requirements.
Overview:
UML 2.0 is a totally different dimension in the world of the Unified Modeling Language. It is more complex and extensive in nature.
The extent of documentation has also increased compared to the UML 1.5 version. UML 2.0 also added new features so that its usage can be more extensive.
UML 2.0 adds the definition of formal and completely defined semantics. This new possibility can be utilized for the development of models, and the corresponding systems can be generated from these models. But to utilize this new dimension, a considerable effort has to be made to acquire the knowledge.
New dimensions in UML2.0:
The structure and documentation of UML were completely revised in the latest version, UML 2.0. There are now two documents available that describe UML:
- UML 2.0 Infrastructure defines the basic constructs of the language on which UML is based. This part is not directly relevant to the users of UML; it is directed more towards the developers of modeling tools, so it is not in the scope of this tutorial.
- UML 2.0 Superstructure defines the user constructs of UML 2.0, meaning those elements of UML that users will work with at the immediate level. This is the main focus for the user community of UML.
This revision of UML was created to fulfil the goal of restructuring and refining UML so that usability, implementation, and adaptation are simplified.
The UML infrastructure is used to:
- Provide a reusable meta-language core. This is used to define UML itself.
- Provide mechanisms to adjust the language.
The UML superstructure is used to:
- Provide better support for component-based development.
- Improve constructs for the specification of architecture.
- Provide better options for the modeling of behaviour.
So the important point to note is the major divisions described above. These divisions are used to increase the usability of UML and define a clear understanding of its usage.
There is another dimension already proposed in this new version: a proposal for a completely new Object Constraint Language (OCL) and Diagram Interchange. All these features together form the complete UML 2.0 package.
It is very important to distinguish between the types of UML model. Different diagrams are used for different types of UML modeling. There are three important types of UML modeling:
Structural modeling:
Structural modeling captures the static features of a system. It consists of the following:
- Class diagrams
- Object diagrams
- Deployment diagrams
- Package diagrams
- Composite structure diagrams
- Component diagrams
The structural model represents the framework for the system, and this framework is the place where all other components exist. So the class diagram, component diagram, and deployment diagram are part of structural modeling. They all represent the elements and the mechanism to assemble them.
But the structural model never describes the dynamic behavior of the system. The class diagram is the most widely used structural diagram.
Behavioral Modeling:
The behavioral model describes the interactions in the system. It represents the interaction among the structural diagrams. Behavioral modeling shows the dynamic nature of the system. It consists of the following:
- Activity diagrams
- Interaction diagrams
- Use case diagrams
All the above show the dynamic sequence of flow in a system.
Architectural Modeling:
The architectural model represents the overall framework of the system. It contains both structural and behavioral elements of the system. The architectural model can be defined as the blueprint of the entire system. The package diagram comes under architectural modeling.
Modeling diagrams in UML2.0:
Modeling Interactions:
The interaction diagrams described in UML 2.0 are different from those in the earlier versions, but the basic concept remains the same. The major difference is the enhancements and additional features added to the diagrams in UML 2.0.
UML2.0 models object interaction in the following four different ways.
- A Sequence diagram is a time-dependent view of the interaction between objects to accomplish a behavioral goal of the system. The time sequence is similar to the earlier version of the sequence diagram. An interaction may be designed at any level of abstraction within the system design, from subsystem interactions down to the instance level.
- The Communication diagram is a new name added in UML 2.0. A Communication diagram is a structural view of the messaging between objects, taken from the Collaboration diagram concept of UML 1.4 and earlier versions. It can be defined as a modified version of the collaboration diagram.
- The Interaction Overview diagram is also a new addition in UML 2.0. An Interaction Overview diagram describes a high-level view of a group of interactions combined into a logical sequence, including the flow-control logic to navigate between the interactions.
- The Timing diagram is also added in UML 2.0. It is an optional diagram designed to specify the time constraints on messages sent and received in the course of an interaction.
From the above description, it is important to note that the purpose of all these diagrams is to model the sending and receiving of messages. The handling of these messages is internal to the objects, and since objects have options to receive and send messages, here comes another important aspect called the interface. These interfaces are responsible for accepting and sending messages to one another.
From the above discussion, it can be concluded that the interactions in UML 2.0 are described in a different way, and that is why the new diagram names have come into the picture. But if we analyze the new diagrams, it is clear that they are all based upon the interaction diagrams described in the earlier versions. The only difference is the additional features added in UML 2.0 to make the diagrams more efficient and purpose-oriented.
Modeling Collaborations:
As we have already discussed, collaboration is used to model common interactions between objects. To clarify, a collaboration is an interaction where a set of messages is handled by a set of objects having predefined roles.
The important point to note is the difference between the collaboration diagram of the earlier versions and that of UML 2.0. To distinguish it, the diagram has been renamed in UML 2.0: it is now called the Communication diagram.
Consequently, a collaboration is defined as a class with attributes (properties) and behavior (operations). Compartments on the collaboration class can also be user defined and may be used for interactions (Sequence diagrams) and structural elements (Composite Structure diagrams).
The figure below models the Observer design pattern as a collaboration between an object in the role of an observable item and any number of objects as the observers.
Modeling Communication:
The Communication diagram is slightly different from the collaboration diagrams of the earlier versions; we can say it is a scaled-back version of them. The distinguishing factor of the communication diagram is the link between objects.
This is a visual link, and it is missing in the sequence diagram, where only the messages passed between objects are shown even if there is no link between them.
The Communication diagram is used to prevent the modeler from making this mistake, by using an Object diagram format as the basis for messaging. Each object on a Communication diagram is called an object lifeline.
The message types in a Communication diagram are the same as in a Sequence diagram. A Communication diagram may model synchronous, asynchronous, return, lost, found, and object-creation messages.
The figure below shows an Object diagram with three objects and two links that form the basis for the Communication diagram.
Modeling an Interaction Overview:
In practical usage, a sequence diagram is used to model a single scenario, so a number of sequence diagrams are used to cover the entire application. While modeling a single scenario it is possible to lose sight of the total process, and this can introduce errors.
To solve this issue, the new Interaction Overview diagram combines the flow of control from an activity diagram with the messaging specification from the sequence diagram.
An activity diagram uses activities and object flows to describe a process. The Interaction Overview diagram uses interactions and interaction occurrences instead. The lifelines and messages found in Sequence diagrams appear only within the interactions or interaction occurrences. However, the lifelines (objects) that participate in the Interaction Overview diagram may be listed along with the diagram name.
The figure below shows an interaction overview diagram with decision diamonds, frames, and a termination point.
Modeling a Timing Diagram:
The name of this diagram itself describes its purpose: it deals with the timing of events over an object's entire lifecycle.
So a timing diagram can be defined as a special-purpose interaction diagram made to focus on the events of an object over its lifetime. It is basically a mixture of a state machine and an interaction diagram. The timing diagram uses the following timelines:
- State timeline
- General value timeline
A lifeline in a Timing diagram forms a rectangular space within the content area of a frame. It is typically aligned horizontally to read from left to right. Multiple lifelines may be stacked within the same frame to model the interaction between them.
Summary:
UML 2.0 is an enhanced version in which new features have been added to make it more usable and efficient. There are two major categories in UML 2.0: one is the UML superstructure, and the other is the UML infrastructure. Although the new diagrams are based on the old concepts, they still have additional features.
UML 2.0 offers four interaction diagrams: the Sequence diagram, Communication diagram, Interaction Overview diagram, and an optional Timing diagram. All four diagrams utilize the frame notation to enclose an interaction. The use of frames supports the reuse of interactions as interaction occurrences.
Pressman Presentations (Software Engineering)
- Chapter 1 - Software and Software Engineering
- Chapter 2 - Software Process (including SEI TR-24 excerpts)
- Chapter 3 - Prescriptive Process Models
- Chapter 5 - Software Engineering Practice
- Chapter 6 - System Engineering
- Chapter 7 - Requirements Engineering
- Chapter 8 - Analysis Modeling
- Chapter 9 - Design Engineering
- Chapter 10 - Architectural Design
- Chapter 11 - Component-Level Design
- Chapter 12 - User Interface Analysis and Design
- Chapter 13 - Software Testing Strategies
- Chapter 14 - Software Testing Techniques
- Chapter 15 - Software Product Metrics
- Chapter 21 - Project Management Concepts (Updated with slides on group dynamics)
- Chapter 22 - Process and Project Metrics
- Chapter 23 - Estimation for Software Projects
- Chapter 24 - Software Project Scheduling
- Chapter 25 - Risk Management
- Chapter 26 - Quality Management
- Chapter 27 - Change Management
SafeHome Resources
- SafeHome dialog excerpts from SEPA
- SEPA pages regarding SafeHome
- Overall SafeHome deployment diagram
- SafeHome preliminary screen layout (see pg 376 of SEPA 6/e for more detail)
- SafeHome control panel (see pg 193 and 231 of SEPA 6/e for more detail)
- Note that we use "on", "off", "reset", "away", "stay", "code" (new password assignment), and "panic" functions in the control panel
Bachelor's Degree in Software Engineering
- Software Engineering 2004 - Curriculum Guidelines (IEEE and ACM)
- Contrasting Computer Science and Software Engineering (Source: Monmouth University)
- (Sloan) Software Engineering Overview
- (Sloan) Computer Science Overview
- Proposal for BS in Software Engineering (2003)
- Software Engineering and ABET (2007)
- Software Engineering Education
- ABET Accreditation of a Software Engineering Program
- Creating an Accreditable Software Engineering Bachelor's Program
- Software Engineering Body of Knowledge (pdf)
- Where are the Software Engineers of Tomorrow
- Critical Need for Software Engineering Education
Software Requirements Engineering from PAK UNIV
Software Engineering II PowerPoint slides from Pakistan University
- Introduction
- History of Operating Systems
- Operating Systems Structure
- System Component
- Operating System Services
- System Calls and System Programs
- Layered Approach System Design
- Mechanism and Policy
- Process
- Definition of Process
- Process State
- Process Operations
- Process Control Block
- Threads
- Solaris-2 Operating Systems
- CPU/Process Scheduling
- Schedule Algorithm
- FCFS Scheduling
- Round Robin Scheduling
- SJF Scheduling
- SRT Scheduling
- Priority Scheduling
- Multilevel Queue Scheduling
- Multilevel Feedback Queue Scheduling
- Interprocess Communication
- Critical Section
- Mutual Exclusion
- Achieving Mutual Exclusion
- Semaphores
- Deadlock
- Necessary and Sufficient Deadlock Conditions
- Dealing with Deadlock Problem
- Deadlock Prevention
- Deadlock Avoidance
- Deadlock Detection
- Absolutely Important UNIX Commands
- References
Introduction
What is an Operating System?
The 1960's definition of an operating system is "the software that controls the hardware". Today, however, due to microcode we need a better definition: we see an operating system as the programs that make the hardware usable. In brief, an operating system is the set of programs that controls a computer. Some examples of operating systems are UNIX, Mach, MS-DOS, MS-Windows, Windows/NT, Chicago, OS/2, MacOS, VMS, MVS, and VM.

Controlling the computer involves software at several levels. We will differentiate kernel services, library services, and application-level services, all of which are part of the operating system. Processes run applications, which are linked together with libraries that perform standard services. The kernel supports the processes by providing a path to the peripheral devices; it responds to service calls from the processes and to interrupts from the devices. The core of the operating system is the kernel, a control program that functions in privileged state (an execution context that allows all hardware instructions to be executed), reacting to interrupts from external devices and to service requests and traps from processes. Generally, the kernel is a permanent resident of the computer: it creates and terminates processes and responds to their requests for service.
Operating Systems are resource managers. The main resource is computer hardware in the form of processors, storage, input/output devices, communication devices, and data. Some of the operating system functions are: implementing the user interface, sharing hardware among users, allowing users to share data among themselves, preventing users from interfering with one another, scheduling resources among users, facilitating input/output, recovering from errors, accounting for resource usage, facilitating parallel operations, organizing data for secure and rapid access, and handling network communications.
Objectives of Operating Systems
Modern operating systems generally have the following three major goals. Operating systems generally accomplish these goals by running processes in low privilege and providing service calls that invoke the operating system kernel in high-privilege state.
- To hide details of hardware by creating abstraction. An abstraction is software that hides lower-level details and provides a set of higher-level functions. An operating system transforms the physical world of devices, instructions, memory, and time into a virtual world that is the result of abstractions built by the operating system. There are several reasons for abstraction.
First, the code needed to control peripheral devices is not standardized. Operating systems provide subroutines called device drivers that perform operations on behalf of programs, for example input/output operations.
Second, the operating system introduces new functions as it abstracts the hardware. For instance, it introduces the file abstraction so that programs do not have to deal with disks.
Third, the operating system transforms the computer hardware into multiple virtual computers, each belonging to a different program. Each program that is running is called a process. Each process views the hardware through the lens of abstraction.
Fourth, the operating system can enforce security through abstraction.
- To allocate resources to processes (manage resources). An operating system controls how processes (the active agents) may access resources (passive entities).
- Provide a pleasant and effective user interface. The user interacts with the operating system through the user interface and is usually interested in the "look and feel" of the operating system. The most important components of the user interface are the command interpreter, the file system, on-line help, and application integration. The recent trend has been toward increasingly integrated graphical user interfaces that encompass the activities of multiple processes on networks of computers.
History of Operating Systems
Historically, operating systems have been tightly related to computer architecture, so it is a good idea to study the history of operating systems from the architecture of the computers on which they run. Operating systems have evolved through a number of distinct phases or generations, which correspond roughly to the decades.
The 1940's - First Generation
The earliest electronic digital computers had no operating systems. Machines of the time were so primitive that programs were often entered one bit at a time on rows of mechanical switches (plug boards). Programming languages were unknown (not even assembly languages). Operating systems were unheard of.

The 1950's - Second Generation
By the early 1950's, the routine had improved somewhat with the introduction of punch cards. The General Motors Research Laboratories implemented the first operating systems in the early 1950's for their IBM 701. The systems of the 50's generally ran one job at a time. These were called single-stream batch processing systems because programs and data were submitted in groups or batches.

The 1960's - Third Generation
The systems of the 1960's were also batch processing systems, but they were able to take better advantage of the computer's resources by running several jobs at once. So operating system designers developed the concept of multiprogramming, in which several jobs are in main memory at once; a processor is switched from job to job as needed to keep several jobs advancing while keeping the peripheral devices in use.

For example, on a system with no multiprogramming, when the current job paused to wait for an I/O operation to complete, the CPU simply sat idle until the I/O finished. The solution that evolved was to partition memory into several pieces, with a different job in each partition. While one job was waiting for I/O to complete, another job could be using the CPU.
Another major feature of third-generation operating systems was the technique called spooling (simultaneous peripheral operations on line). In spooling, a high-speed device like a disk is interposed between a running program and a low-speed device involved in the program's input/output. Instead of writing directly to a printer, for example, outputs are written to the disk. Programs can run to completion faster, and other programs can be initiated sooner; when the printer becomes available, the outputs can be printed.
Note that the spooling technique is much like thread being spun onto a spool so that it may later be unwound as needed.
Another feature present in this generation was the time-sharing technique, a variant of multiprogramming, in which each user has an on-line (i.e., directly connected) terminal. Because the user is present and interacting with the computer, the computer system must respond quickly to user requests; otherwise user productivity could suffer. Time-sharing systems were developed to multiprogram large numbers of simultaneous interactive users.
Fourth Generation
With the development of LSI (Large Scale Integration) circuits and chips, operating systems entered the personal computer and workstation age. Microprocessor technology evolved to the point that it became possible to build desktop computers as powerful as the mainframes of the 1970s. Two operating systems have dominated the personal computer scene: MS-DOS, written by Microsoft, Inc. for the IBM PC and other machines using the Intel 8088 CPU and its successors, and UNIX, which is dominant on the large personal computers using the Motorola 68000 CPU family.

System Components
Process Management
The operating system manages many kinds of activities, ranging from user programs to system programs like the printer spooler, name servers, file server, etc. Each of these activities is encapsulated in a process. A process includes the complete execution context (code, data, PC, registers, OS resources in use, etc.).

It is important to note that a process is not a program. A process is only one instance of a program in execution, and many processes can be running the same program. The five major activities of an operating system with regard to process management are:
- Creation and deletion of user and system processes.
- Suspension and resumption of processes.
- A mechanism for process synchronization.
- A mechanism for process communication.
- A mechanism for deadlock handling.
Main-Memory Management
Primary memory, or main memory, is a large array of words or bytes, each with its own address. Main memory provides storage that can be accessed directly by the CPU; that is to say, for a program to be executed, it must be in main memory.

The major activities of an operating system with regard to memory management are:
- Keep track of which parts of memory are currently being used and by whom.
- Decide which processes are loaded into memory when memory space becomes available.
- Allocate and de-allocate memory space as needed.
File Management
A file is a collection of related information defined by its creator. Computers can store files on disk (secondary storage), which provides long-term storage. Some examples of storage media are magnetic tape, magnetic disk, and optical disk. Each of these media has its own properties, like speed, capacity, data transfer rate, and access method.

A file system is normally organized into directories to ease its use. These directories may contain files and other directories.
The five major activities of an operating system with regard to file management are:
- The creation and deletion of files.
- The creation and deletion of directories.
- The support of primitives for manipulating files and directories.
- The mapping of files onto secondary storage.
- The backup of files on stable storage media.
I/O System Management
The I/O subsystem hides the peculiarities of specific hardware devices from the user. Only the device driver knows the peculiarities of the specific device to which it is assigned.

Secondary-Storage Management
Generally speaking, systems have several levels of storage, including primary storage, secondary storage, and cache storage. Instructions and data must be placed in primary storage or cache to be referenced by a running program. Because main memory is too small to accommodate all data and programs, and because its data are lost when power is lost, the computer system must provide secondary storage to back up main memory. Secondary storage consists of tapes, disks, and other media designed to hold information that will eventually be accessed in primary storage. Storage (primary, secondary, cache) is ordinarily divided into bytes or words consisting of a fixed number of bytes. Each location in storage has an address; the set of all addresses available to a program is called an address space.

The three major activities of an operating system with regard to secondary-storage management are:
- Managing the free space available on the secondary-storage device.
- Allocation of storage space when new files have to be written.
- Scheduling the requests for memory access.
Networking
A distributed system is a collection of processors that do not share memory, peripheral devices, or a clock. The processors communicate with one another through communication lines called a network. The communication-network design must consider routing and connection strategies, and the problems of contention and security.

Protection System
If a computer system has multiple users and allows the concurrent execution of multiple processes, then the various processes must be protected from one another's activities. Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system.

Command Interpreter System
A command interpreter is an interface between the operating system and the user. The user gives commands, which are executed by the operating system (usually by turning them into system calls). The main function of a command interpreter is to get and execute the next user-specified command. The command interpreter is usually not part of the kernel, since multiple command interpreters (shells, in UNIX terminology) may be supported by an operating system, and they do not really need to run in kernel mode. There are two main advantages to separating the command interpreter from the kernel:
- If we want to change the way the command interpreter looks, i.e., change its interface, we can do so only if the command interpreter is separate from the kernel, since we cannot modify kernel code.
- If the command interpreter is a part of the kernel, it is possible for a malicious process to gain access to parts of the kernel that it should not reach. To avoid this ugly scenario, it is advantageous to have the command interpreter separate from the kernel.
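A minimal sketch of such a user-level command interpreter, assuming a POSIX system (the prompt string and the exit built-in are invented for illustration):

#include <iostream>
#include <sstream>
#include <string>
#include <vector>
#include <cstdio>       // perror
#include <cstdlib>      // exit
#include <unistd.h>     // fork, execvp
#include <sys/wait.h>   // waitpid

int main() {
    std::string line;
    while (std::cout << "mysh> " && std::getline(std::cin, line)) {
        // Tokenize the command line on whitespace.
        std::istringstream iss(line);
        std::vector<std::string> words;
        for (std::string w; iss >> w; ) words.push_back(w);
        if (words.empty()) continue;
        if (words[0] == "exit") break;

        // Build the argv array expected by execvp.
        std::vector<char*> argv;
        for (auto& w : words) argv.push_back(w.data());
        argv.push_back(nullptr);

        pid_t pid = fork();                // system call: create child
        if (pid == 0) {
            execvp(argv[0], argv.data());  // system call: run the command
            std::perror("exec failed");    // reached only on error
            std::exit(1);
        }
        waitpid(pid, nullptr, 0);          // wait for the command to finish
    }
}

Note that everything here runs in user mode; the shell only asks the kernel for services through the fork, exec, and wait system calls.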
Operating Systems Services
Program Execution
The purpose of a computer system is to allow the user to execute programs, so the operating system provides an environment where the user can conveniently run programs. The user does not have to worry about memory allocation, multitasking, or the like; these things are taken care of by the operating system.

Running a program involves allocating and deallocating memory, and CPU scheduling in the case of multiple processes. These functions cannot be given to user-level programs, so user-level programs cannot help the user run programs independently, without help from the operating system.
I/O Operations
Each program requires input and produces output, which involves the use of I/O. The operating system hides from the user the details of the underlying hardware that performs the I/O; all the user sees is that the I/O has been performed, without any details. So the operating system, by providing I/O, makes it convenient for users to run programs.

For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.
File System Manipulation
The output of a program may need to be written into new files, or input taken from some files. The operating system provides this service, so the user does not have to worry about secondary storage management. The user gives a command for reading or writing to a file and sees the task accomplished. Thus the operating system makes it easier for user programs to accomplish their tasks.

This service involves secondary storage management. The speed of I/O that depends on secondary storage management is critical to the speed of many programs, so it is best relegated to the operating system rather than giving individual users control of it. It is not difficult for user-level programs to provide these services, but for the reasons mentioned above it is best if this service is left with the operating system.
Communications
There are instances where processes need to communicate with each other to exchange information, whether between processes running on the same computer or on different computers. By providing this service, the operating system relieves the user of the worry of passing messages between processes. In cases where messages need to be passed to processes on other computers through a network, this can be done by user programs. The user program may be customized to the specifics of the hardware through which the message transits and provide the service interface to the operating system.

Error Detection
An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation, the operating system constantly monitors the system for errors. This relieves the user of the worry of an error propagating to various parts of the system and causing malfunctions.

This service cannot be left to user programs, because it involves monitoring and, in some cases, altering areas of memory, deallocating the memory of a faulty process, or perhaps relinquishing the CPU of a process that goes into an infinite loop. These tasks are too critical to be handed over to user programs. A user program, if given these privileges, could interfere with the correct (normal) operation of the operating system.
System Calls and System Programs
System calls provide an interface between a process and the operating system. System calls allow user-level processes to request services from the operating system that the process itself is not allowed to perform. In handling the trap, the operating system enters kernel mode, where it has access to privileged instructions, and can perform the desired service on behalf of the user-level process. It is because of the critical nature of these operations that the operating system itself performs them every time they are needed. For example, for I/O a process makes a system call telling the operating system to read from or write to a particular area, and this request is satisfied by the operating system.
System programs provide basic functioning to users so that they do not need to write their own environment for program development (editors, compilers) and program execution (shells). In some sense, they are bundles of useful system calls.
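For instance, assuming a POSIX system, a process can request output through the write system call; the trap is handled in kernel mode, and the privileged I/O is performed on the process's behalf:

#include <unistd.h>   // write: thin user-level wrapper over the system call
#include <cstdio>     // perror

int main() {
    const char msg[] = "hello via a system call\n";
    // File descriptor 1 is standard output. The kernel validates the
    // descriptor and buffer, performs the I/O, and returns the count.
    ssize_t n = write(1, msg, sizeof(msg) - 1);
    if (n < 0) {
        std::perror("write");   // errno was set by the failed system call
        return 1;
    }
    return 0;
}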
Layered Approach Design
With a layered approach, the system is easier to debug and modify, because changes affect only limited portions of the code, and the programmer does not have to know the details of the other layers. Information is also kept only where it is needed and is accessible only in certain ways, so bugs affecting particular data are limited to a specific module or layer.
Mechanisms and Policies
The policy defines what is to be done, while the mechanism specifies how it is to be done. For instance, the timer construct for ensuring CPU protection is a mechanism; the decision of how long the timer is set for a particular user is a policy decision.
The separation of mechanism and policy is important for providing flexibility to a system. If the interface between mechanism and policy is well defined, a change of policy may affect only a few parameters; if the interface is vague or not well defined, a change of policy might involve much deeper changes to the system.
Once the policy has been decided, it gives the programmer the choice of using his or her own implementation. Also, the underlying implementation may be changed for a more efficient one without much trouble if the mechanism and policy are well defined. Specifically, separating these two provides flexibility in a variety of ways. First, the same mechanism can be used to implement a variety of policies, so changing the policy might not require the development of a new mechanism, but just a change in parameters for that mechanism from a library of mechanisms. Second, the mechanism can be changed, for example to increase its efficiency or to move to a new platform, without changing the overall policy.
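A small illustrative sketch of this separation (not from these notes; all names invented): a timer "mechanism" parameterized by a pluggable time-slice "policy", so policies can be swapped without touching the mechanism.

#include <functional>
#include <iostream>
#include <string>

// Policy: decides *how long* the time slice is for a given user.
using SlicePolicy = std::function<int(const std::string& user)>;

// Mechanism: decides *how* the timer is armed; unchanged across policies.
void arm_timer(const std::string& user, const SlicePolicy& policy) {
    int ms = policy(user);
    std::cout << "timer armed for " << user << ": " << ms << " ms\n";
    // ...here a real kernel would program the hardware timer...
}

int main() {
    SlicePolicy equal = [](const std::string&) { return 100; };
    SlicePolicy favor_admin = [](const std::string& u) {
        return u == "admin" ? 200 : 50;   // new policy, same mechanism
    };
    arm_timer("alice", equal);
    arm_timer("admin", favor_admin);
}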
Definition of Process
Definition
The term "process" was first used by the designers of the MULTICS in 1960's. Since then, the term process, used somewhat interchangeably with 'task' or 'job'. The process has been given many definitions for instance- A program in Execution.
- An asynchronous activity.
- The 'animated spirit' of a procedure in execution.
- The entity to which processors are assigned.
- The 'dispatchable' unit.
Now that we have agreed upon the definition of a process, the question is: what is the relation between a process and a program? Is it the same beast with a different name, so that when the beast is sleeping (not executing) it is called a program, and when it is executing it becomes a process? Well, to be very precise, a process is not the same as a program. In the following discussion we point out some of the differences between them.
A process is not the same as a program; a process is more than the program code. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. As we all know, a program is an algorithm expressed in some suitable notation (e.g., a programming language). Being passive, a program is only a part of a process. A process, on the other hand, includes:
- Current value of Program Counter (PC)
- Contents of the processors registers
- Value of the variables
- The process stack (SP), which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables.
- A data section that contains global variables.
In the process model, all software on the computer is organized into a number of sequential processes. A process includes the PC, registers, and variables. Conceptually, each process has its own virtual CPU; in reality, the CPU switches back and forth among processes. (This rapid switching back and forth is called multiprogramming.)
Process State
The process state consists of everything necessary to resume the process execution if it is somehow put aside temporarily. The process state consists of at least the following:
- Code for the program.
- Program's static data.
- Program's dynamic data.
- Program's procedure call stack.
- Contents of general purpose registers.
- Contents of program counter (PC)
- Contents of program status word (PSW).
- Operating system resources in use.
Process Operations
Process Creation
In general-purpose systems, some way is needed to create processes as needed during operation. There are four principal events that lead to process creation:
- System initialization.
- Execution of a process-creation system call by a running process.
- A user request to create a new process.
- Initialization of a batch job.
A process may create a new process by means of a create-process system call such as 'fork'. If it chooses to do so, the creating process is called the parent process and the created one is called the child process. Only one parent is needed to create a child process; note that unlike plants and animals, which use sexual reproduction, a process has only one parent. This creation of processes yields a hierarchical structure of processes like the one in the figure: each child has only one parent, but each parent may have many children. After the fork, the two processes, the parent and the child, have the same memory image, the same environment strings, and the same open files. Nevertheless, once the process is created, the parent and child each have their own distinct address space: if either process changes a word in its address space, the change is not visible to the other process (see the sketch below).
<Figure 3.2, p. 55, from Deitel>
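A minimal fork sketch, assuming a POSIX system, makes the distinct address spaces visible: the child's change to a variable is invisible to the parent.

#include <iostream>
#include <unistd.h>     // fork
#include <sys/wait.h>   // waitpid

int main() {
    int x = 0;
    pid_t pid = fork();            // duplicate the calling process
    if (pid == 0) {                // child: works on its own copy of x
        x = 42;
        std::cout << "child:  x = " << x << "\n";   // prints 42
        return 0;
    }
    waitpid(pid, nullptr, 0);      // parent waits for the child
    std::cout << "parent: x = " << x << "\n";       // still prints 0
}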
Following are some reasons for the creation of a process:
- User logs on.
- User starts a program.
- Operating systems creates process to provide service, e.g., to manage printer.
- Some program starts another process, e.g., Netscape calls xv to display a picture.
A process terminates when it finishes executing its last statement: its resources are returned to the system, it is purged from any system lists or tables, and its process control block (PCB) is erased, i.e., the PCB's memory space is returned to a free memory pool. A process may terminate for the following reasons:
- Normal Exit Most processes terminate because they have done their job. This call is exit in UNIX.
- Error Exit When a process discovers a fatal error. For example, a user tries to compile a program whose source file does not exist.
- Fatal Error An error caused by the process due to a bug in the program, for example executing an illegal instruction, referencing non-existent memory, or dividing by zero.
- Killed by another Process A process executes a system call telling the operating system to terminate some other process. In UNIX, this call is kill. In some systems, when a process terminates, all the processes it created are killed as well (UNIX does not work this way).
A process goes through a series of discrete process states.
- New State The process being created.
- Terminated State The process has finished execution.
- Blocked (waiting) State When a process blocks, it does so because logically it cannot continue, typically because it is waiting for input that is not yet available. Formally, a process is said to be blocked if it is waiting for some event to happen (such as an I/O completion) before it can proceed. In this state a process is unable to run until some external event happens.
- Running State A process is said to be running if it currently has the CPU, that is, it is actually using the CPU at that particular instant.
- Ready State A process is said to be ready if it could use a CPU if one were available. It is runnable but temporarily stopped to let another process run.
Process State Transitions
Following are the six possible transitions among the five states mentioned above (see figure; an executable sketch follows the list):
- Transition 1 occurs when a process discovers that it cannot continue. If a running process initiates an I/O operation before its allotted time expires, the running process voluntarily relinquishes the CPU.
This state transition is:
Block (process-name): Running → Block.
- Transition 2 occurs when the scheduler decides that the running process has run long enough and it is time to let another process have CPU time.
This state transition is:
Time-Run-Out (process-name): Running → Ready.
- Transition 3 occurs when all other processes have had their share and it is time for the first process to run again.
This state transition is:
Dispatch (process-name): Ready → Running.
- Transition 4 occurs when the external event for which a process was waiting (such as arrival of input) happens.
This state transition is:
Wakeup (process-name): Blocked → Ready.
- Transition 5 occurs when the process is created.
This state transition is:
Admitted (process-name): New → Ready.
- Transition 6 occurs when the process has finished execution.
This state transition is:
Exit (process-name): Running → Terminated.
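The five states and six transitions can be made executable as a small illustrative sketch; the state and event names mirror the lists above, and the code is not from any real kernel, where this logic is scattered across the scheduler and interrupt handlers.

#include <initializer_list>
#include <iostream>
#include <stdexcept>

enum class State { New, Ready, Running, Blocked, Terminated };
enum class Event { Admitted, Dispatch, TimeRunOut, Block, Wakeup, Exit };

// Returns the next state, enforcing exactly the six legal transitions.
State next(State s, Event e) {
    switch (e) {
        case Event::Admitted:   if (s == State::New)     return State::Ready;      break;
        case Event::Dispatch:   if (s == State::Ready)   return State::Running;    break;
        case Event::TimeRunOut: if (s == State::Running) return State::Ready;      break;
        case Event::Block:      if (s == State::Running) return State::Blocked;    break;
        case Event::Wakeup:     if (s == State::Blocked) return State::Ready;      break;
        case Event::Exit:       if (s == State::Running) return State::Terminated; break;
    }
    throw std::logic_error("illegal state transition");
}

int main() {
    State s = State::New;
    // New -> Ready -> Running -> Blocked -> Ready -> Running -> Terminated
    for (Event e : {Event::Admitted, Event::Dispatch, Event::Block,
                    Event::Wakeup, Event::Dispatch, Event::Exit})
        s = next(s, e);
    std::cout << "terminated: " << (s == State::Terminated) << "\n";
}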
Process Control Block
A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor. The PCB contains important information about the specific process, including the following (a hypothetical struct sketch follows the list):
- The current state of the process i.e., whether it is ready, running, waiting, or whatever.
- Unique identification of the process in order to track "which is which" information.
- A pointer to parent process.
- Similarly, a pointer to child process (if it exists).
- The priority of process (a part of CPU scheduling information).
- Pointers to locate memory of processes.
- A register save area.
- The processor it is running on.
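A hypothetical PCB could be sketched as the struct below; the field names and types are invented to mirror the list above and are not taken from any particular kernel.

#include <cstdint>

enum class ProcState { New, Ready, Running, Blocked, Terminated };

struct RegisterSave {            // register save area
    std::uint64_t pc;            // program counter
    std::uint64_t sp;            // stack pointer
    std::uint64_t gpr[16];       // general-purpose registers
};

struct PCB {
    ProcState state;             // ready, running, waiting, ...
    int pid;                     // unique identification of the process
    PCB* parent;                 // pointer to the parent process
    PCB* first_child;            // pointer to a child process, if any
    int priority;                // CPU scheduling information
    void* page_table;            // pointer to locate the process's memory
    RegisterSave regs;           // register save area
    int cpu;                     // the processor it is running on
};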
Threads
- Threads
- Processes Vs Threads
- Why Threads?
- User-Level Threads
- Kernel-Level Threads
- Advantages of Threads over Multiple Processes
- Disadvantages of Threads over Multiprocesses
- Application that Benefits from Threads
- Application that cannot benefit from Threads
- Resources used in Thread creation and Process Creation
- Context Switch
- Major Steps of Context Switching
- Action of Kernel to Context switch among threads
- Action of kernel to Context switch among processes
Threads
Despite the fact that a thread must execute within a process, a process and its associated threads are different concepts. Processes are used to group resources together; threads are the entities scheduled for execution on the CPU.

A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution. In many respects, threads are a popular way to improve applications through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (Running, Blocked, Ready, or Terminated). Each thread has its own stack, since a thread will generally call different procedures and thus have a different execution history. In an operating system that has a thread facility, the basic unit of CPU utilization is a thread. A thread has, or consists of, a program counter (PC), a register set, and a stack space. Threads are not independent of one another the way processes are; as a result, threads share with other threads their code section, data section, and OS resources (also known as a task), such as open files and signals.
Processes Vs Threads
As we mentioned earlier, in many respects threads operate in the same way as processes. Some of the similarities and differences are:

Similarities
- Like processes, threads share the CPU, and only one thread is active (running) at a time.
- Like processes, threads within a process execute sequentially.
- Like processes, threads can create children.
- And like processes, if one thread is blocked, another thread can run.
Differences
- Unlike processes, threads are not independent of one another.
- Unlike processes, all threads can access every address in the task.
- Unlike processes, threads are designed to assist one another. Note that processes might or might not assist one another, because processes may originate from different users.
Why Threads?
Following are some reasons why we use threads in designing operating systems (a small sketch follows the list):
- A process with multiple threads makes a great server, for example a printer server.
- Because threads can share common data, they do not need to use interprocess communication.
- Because of their very nature, threads can take advantage of multiprocessors.
- Threads only need a stack and storage for registers, so they are cheap to create.
- Threads use very few resources of the operating system in which they are working: they do not need a new address space, global data, program code, or operating system resources.
- Context switching is fast when working with threads, because only the PC, SP, and registers have to be saved and/or restored.
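The shared-data point can be illustrated with standard C++ threads (which on most systems map onto kernel-level threads): two threads in one process update the same counter directly, with a mutex instead of any interprocess communication.

#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;   // data section shared by all threads of the process
std::mutex m;      // protects the shared counter

void work() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(m);   // mutual exclusion
        ++counter;                             // no IPC needed
    }
}

int main() {
    std::thread t1(work), t2(work);   // two streams in one address space
    t1.join();
    t2.join();
    std::cout << "counter = " << counter << "\n";   // 200000
}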
User-Level Threads
User-level threads are implemented in user-level libraries, rather than via system calls, so thread switching does not need to call the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.

Advantages:
The most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads. Some other advantages are:
- User-level threads do not require modification to the operating system.
- Simple Representation: Each thread is represented simply by a PC, registers, a stack, and a small control block, all stored in the user process's address space.
- Simple Management: Creating a thread, switching between threads, and synchronizing between threads can all be done without intervention of the kernel.
- Fast and Efficient: Thread switching is not much more expensive than a procedure call.
Disadvantages:
- There is a lack of coordination between threads and the operating system kernel; therefore, a process as a whole gets one time slice, irrespective of whether the process has one thread or 1000 threads within it. It is up to each thread to relinquish control to the other threads.
- User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process will block in the kernel even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.
Kernel-Level Threads
In this method, the kernel knows about and manages the threads. No runtime system is needed in this case. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system. In addition, the kernel also maintains the traditional process table to keep track of processes. The operating system kernel provides system calls to create and manage threads.

The general structure of a kernel-level thread implementation is:
<DIAGRAM>
Advantages:
- Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
- Kernel-level threads are especially good for applications that frequently block.
Disadvantages:
- Kernel-level threads are slow and inefficient. For instance, thread operations are hundreds of times slower than those of user-level threads.
- Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread to maintain information about threads. As a result there is significant overhead and increased kernel complexity.
Advantages of Threads over Multiple Processes
- Context Switching Threads are very inexpensive to create and destroy, and they are inexpensive to represent. For example, they require space to store the PC, the SP, and the general-purpose registers, but they do not require space for memory information, information about open files or I/O devices in use, etc. With so little context, it is much faster to switch between threads; in other words, a context switch is relatively easy using threads.
- Sharing Threads allow the sharing of a lot of resources that cannot be shared between processes, for example sharing the code section, the data section, and operating system resources like open files.
Disadvantages of Threads over Multiprocesses
- Blocking The major disadvantage is that if the kernel is single-threaded, a system call by one thread will block the whole process, and the CPU may be idle during the blocking period.
- Security Since there is extensive sharing among threads, there is a potential security problem. It is quite possible that one thread overwrites the stack of another thread (or damages shared data), although it is very unlikely, since threads are meant to cooperate on a single task.
Application that Benefits from Threads
A proxy server satisfying the requests of a number of computers on a LAN would benefit from a multi-threaded process. In general, any program that has to do more than one task at a time could benefit from multitasking. For example, a program that reads input, processes it, and writes output could have three threads, one for each task.

Application that cannot Benefit from Threads
Any sequential process that cannot be divided into parallel tasks will not benefit from threads, as the tasks would block until the previous one completes. For example, a program that displays the time of day would not benefit from multiple threads.

Resources used in Thread Creation and Process Creation
The creation of a new process differs from that of a thread mainly in the fact that all the shared resources of a thread must be provided explicitly for each process. So even though two processes may be running the same piece of code, they need to have their own copies of the code in main memory to be able to run. Two processes also do not share other resources with each other. This makes the creation of a new process very costly compared to that of a new thread.
Context Switch
To give each process on a multiprogrammed machine a fair share of the CPU, a hardware clock generates interrupts periodically. This allows the operating system to schedule all processes in main memory (using a scheduling algorithm) to run on the CPU in turn. Each time a clock interrupt occurs, the interrupt handler checks how much time the current running process has used. If it has used up its entire time slice, the CPU scheduling algorithm (in the kernel) picks a different process to run. Each switch of the CPU from one process to another is called a context switch.
Major Steps of Context Switching
- The values of the CPU registers are saved in the process table of the process that was running just before the clock interrupt occurred.
- The registers are loaded from the process picked by the CPU scheduler to run next.
Action of Kernel to Context Switch Among Threads
The threads share a lot of resources with other peer threads belonging to the same process, so a context switch among threads of the same process is easy. It involves a switch of the register set, the program counter, and the stack. It is relatively easy for the kernel to accomplish this task.
Action of Kernel to Context Switch Among Processes
Context switches among processes are expensive. Before a process can be switched, its process control block (PCB) must be saved by the operating system. The PCB consists of the following information:
- The process state.
- The program counter, PC.
- The values of the different registers.
- The CPU scheduling information for the process.
- Memory management information regarding the process.
- Possible accounting information for this process.
- I/O status information of the process.
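The exact layout of a PCB is kernel-specific; as a rough illustration, the hypothetical C struct below simply mirrors the fields listed above (all names are made up):

/* A hypothetical process control block mirroring the fields listed
   above; real kernels (e.g., Linux's task_struct) are far larger. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    enum proc_state state;        /* the process state               */
    unsigned long   pc;           /* saved program counter           */
    unsigned long   regs[16];     /* saved general-purpose registers */
    int             priority;     /* CPU scheduling information      */
    void           *page_table;   /* memory-management information   */
    unsigned long   cpu_time;     /* accounting information          */
    int             open_fds[16]; /* I/O status: open descriptors    */
};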
Solaris-2 Operating Systems
- Introduction
- At user-level
- At Intermediate-level
- At kernel-level
Introduction
The Solaris-2 operating system supports:
- threads at the user-level.
- threads at the kernel-level.
- symmetric multiprocessing and
- real-time scheduling.
At user-level
- The user-level threads are supported by a library for thread creation and scheduling; the kernel knows nothing of these threads.
- These user-level threads are supported by lightweight processes (LWPs). Each LWP is connected to exactly one kernel-level thread, whereas a user-level thread is independent of the kernel.
- Many user-level threads may perform one task. These threads may be scheduled and switched among LWPs without intervention of the kernel.
- User-level threads are extremely efficient because no kernel context switch is needed to block one thread and start another running.
- A user-level thread needs only a stack and a program counter. Absolutely no kernel resources are required.
- Since the kernel is not involved in scheduling these user-level threads, switching among user-level threads are fast and efficient.
At Intermediate-level
The lightweight processes (LWPs) are located between the user-level threads and the kernel-level threads. These LWPs serve as "virtual CPUs" on which user-level threads can run. Each task contains at least one LWP. The user-level threads are multiplexed on the LWPs of the process.
Resource needs of LWP
An LWP contains a process control block (PCB) with register data, accounting information and memory information. Therefore, switching between LWPs requires quite a bit of work and LWPs are relatively slow as compared to user-level threads.
At kernel-level
The standard kernel-level threads execute all operations within the kernel. There is a kernel-level thread for each LWP, and there are also some threads that run only on the kernel's behalf and have no associated LWP, for example, a thread to service disk requests. By request, a kernel-level thread can be pinned to a processor (CPU). The kernel-level threads are scheduled by the kernel's scheduler.
In modern Solaris-2 a task no longer must block just because a kernel-level thread blocks; the processor (CPU) is free to run another thread.
Resource needs of Kernel-level Thread
A kernel-level thread has only a small data structure and a stack. Switching between kernel threads does not require changing memory access information, and therefore kernel-level threads are relatively fast and efficient.
CPU/Process Scheduling
When more than one process is runnable, the operating system must decide which one to run first. The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.
Goals of Scheduling (objectives)
In this section we try to answer the following question: what should the scheduler try to achieve? Many objectives must be considered in the design of a scheduling discipline. In particular, a scheduler should consider fairness, efficiency, response time, turnaround time, throughput, etc. Some of these goals depend on the system one is using, for example a batch system, an interactive system, or a real-time system, but there are also some goals that are desirable in all systems.
General Goals
A little thought will show that some of these goals are contradictory. It can be shown that any scheduling algorithm that favors some class of jobs hurts another class of jobs. The amount of CPU time available is finite, after all.
Fairness
Fairness is important under all circumstances. A scheduler makes sure that each process gets its fair share of the CPU and no process can suffer indefinite postponement. Note that giving equivalent or equal time is not fair. Think of safety control and payroll at a nuclear plant.
Policy Enforcement
The scheduler has to make sure that system's policy is enforced. For example, if the local policy is safety then the safety control processes must be able to run whenever they want to, even if it means delay in payroll processes.
Efficiency
The scheduler should keep the system (or in particular the CPU) busy one hundred percent of the time when possible. If the CPU and all the input/output devices can be kept running all the time, more work gets done per second than if some components are idle.
Response Time
A scheduler should minimize the response time for interactive users.
Turnaround
A scheduler should minimize the time batch users must wait for output.
Throughput
A scheduler should maximize the number of jobs processed per unit time.
Preemptive Vs Nonpreemptive Scheduling
Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts.
Nonpreemptive Scheduling
A scheduling discipline is nonpreemptive if, once a process has been given the CPU, the CPU cannot be taken away from that process.
Following are some characteristics of nonpreemptive scheduling
- In a nonpreemptive system, short jobs are made to wait by longer jobs, but the overall treatment of all processes is fair.
- In a nonpreemptive system, response times are more predictable because incoming high-priority jobs cannot displace waiting jobs.
- In nonpreemptive scheduling, a scheduler executes jobs in the following two situations:
- When a process switches from running state to the waiting state.
- When a process terminates.
Preemptive Scheduling
A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU can be taken away from it.
The strategy of allowing processes that are logically runnable to be temporarily suspended is called preemptive scheduling, and it is in contrast to the "run to completion" method.
Scheduling Algorithms
CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.
Following are some scheduling algorithms we will study:
- FCFS Scheduling.
- Round Robin Scheduling.
- SJF Scheduling.
- SRT Scheduling.
- Priority Scheduling.
- Multilevel Queue Scheduling.
- Multilevel Feedback Queue Scheduling.
First-Come-First-Served (FCFS) Scheduling
Other names of this algorithm are:
- First-In-First-Out (FIFO)
- Run-to-Completion
- Run-Until-Done
FCFS is more predictable than most other schemes, since jobs are serviced strictly in their order of arrival. The FCFS scheme is not useful in scheduling interactive users because it cannot guarantee good response time. The code for FCFS scheduling is simple to write and understand. One of the major drawbacks of this scheme is that the average waiting time is often quite long.
The First-Come-First-Served algorithm is rarely used as a master scheme in modern operating systems but it is often embedded within other schemes.
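Because all waiting under FCFS comes from the bursts that arrived earlier, the average waiting time is easy to compute. A small sketch in C, using made-up burst times for jobs that all arrive at time 0:

#include <stdio.h>

int main(void) {
    /* hypothetical CPU burst times, in order of arrival */
    int burst[] = {24, 3, 3};
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        total += wait;       /* job i waits for all earlier bursts */
        wait += burst[i];
    }
    printf("average waiting time = %.2f\n", (double)total / n);
    return 0;
}

With bursts of 24, 3, and 3, the waits are 0, 24, and 27, an average of 17; serving the short jobs first would cut this sharply, which is exactly the argument for SJF below.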
Round Robin Scheduling
One of the oldest, simplest, fairest and most widely used algorithm is round robin (RR).
In the round robin scheduling, processes are dispatched in a FIFO manner but are given a limited amount of CPU time called a time-slice or a quantum.
If a process does not complete before its CPU-time expires, the CPU is preempted and given to the next process waiting in a queue. The preempted process is then placed at the back of the ready list.
Round Robin Scheduling is preemptive (at the end of time-slice) therefore it is effective in time-sharing environments in which the system needs to guarantee reasonable response times for interactive users.
The only interesting issue with the round robin scheme is the length of the quantum. Setting the quantum too short causes too many context switches and lowers CPU efficiency. On the other hand, setting the quantum too long may cause poor response time and approximates FCFS.
In any event, the average waiting time under round robin scheduling is often quite long.
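A sketch of how waiting times fall out of round robin, again for jobs that all arrive at time 0; the quantum and burst values are made up:

#include <stdio.h>

#define N 3

int main(void) {
    int burst[N]  = {24, 3, 3};    /* hypothetical burst times */
    int remain[N] = {24, 3, 3};
    int finish[N] = {0};
    int quantum = 4, time = 0, done = 0;

    while (done < N) {                 /* cycle through the ready list */
        for (int i = 0; i < N; i++) {
            if (remain[i] == 0) continue;
            int slice = remain[i] < quantum ? remain[i] : quantum;
            time += slice;
            remain[i] -= slice;
            if (remain[i] == 0) { finish[i] = time; done++; }
        }
    }
    for (int i = 0; i < N; i++)        /* waiting = turnaround - burst */
        printf("job %d waited %d\n", i, finish[i] - burst[i]);
    return 0;
}

With a quantum of 4, the same 24/3/3 workload yields waits of 6, 4, and 7, an average of about 5.7, which is why round robin treats short interactive jobs so much better than FCFS.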
Shortest-Job-First (SJF) Scheduling
Other name of this algorithm is Shortest-Process-Next (SPN).
Shortest-Job-First (SJF) is a non-preemptive discipline in which the waiting job (or process) with the smallest estimated run-time-to-completion is run next. In other words, when the CPU is available, it is assigned to the process that has the smallest next CPU burst.
The SJF scheduling is especially appropriate for batch jobs for which the run times are known in advance. Since the SJF scheduling algorithm gives the minimum average waiting time for a given set of processes, it is provably optimal with respect to that measure.
The SJF algorithm favors short jobs (or processes) at the expense of longer ones.
The obvious problem with SJF scheme is that it requires precise knowledge of how long a job or process will run, and this information is not usually available.
The best the SJF algorithm can do is to rely on user estimates of run times.
- In the production environment where the same jobs run regularly, it may be possible to provide reasonable estimates of run time, based on the past performance of the process. But in the development environment users rarely know how their program will execute.
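For jobs that all arrive at once, SJF reduces to sorting by estimated burst time and then proceeding as in FCFS. A sketch with made-up estimates:

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {6, 8, 7, 3};   /* hypothetical estimated run times */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total = 0;

    qsort(burst, n, sizeof burst[0], cmp);   /* shortest job first */
    for (int i = 0; i < n; i++) {
        total += wait;            /* each job waits for shorter jobs */
        wait += burst[i];
    }
    printf("average waiting time = %.2f\n", (double)total / n);
    return 0;
}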
Shortest-Remaining-Time (SRT) Scheduling
- The SRT is the preemptive counterpart of SJF and is useful in time-sharing environments.
- In SRT scheduling, the process with the smallest estimated run-time to completion is run next, including new arrivals.
- In the SJF scheme, once a job begins executing, it runs to completion.
- In the SRT scheme, a running process may be preempted by a new arrival with a shorter estimated run-time.
- The algorithm SRT has higher overhead than its counterpart SJF.
- The SRT must keep track of the elapsed time of the running process and must handle occasional preemptions.
- In this scheme, small arriving processes will run almost immediately. However, longer jobs have an even longer mean waiting time.
Priority Scheduling
An SJF algorithm is simply a priority algorithm where the priority is the inverse of the (predicted) next CPU burst. That is, the longer the CPU burst, the lower the priority and vice versa.
Priority can be defined either internally or externally. Internally defined priorities use some measurable quantities or qualities to compute priority of a process.
Examples of internally defined priorities are:
- Time limits.
- Memory requirements.
- File requirements, for example, the number of open files.
- CPU versus I/O requirements.
Externally defined priorities are set by criteria outside the operating system, such as:
- The importance of the process.
- The type or amount of funds being paid for computer use.
- The department sponsoring the work.
- Politics.
- A preemptive priority algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
- A non-preemptive priority algorithm will simply put the new process at the head of the ready queue.
Multilevel Queue Scheduling
A multilevel queue scheduling algorithm partitions the ready queue into several separate queues, for instance:
Fig 5.6 - pp. 138 in Sinha
In multilevel queue scheduling, processes are permanently assigned to one queue, based on some property of the process, such as:
- Memory size
- Process priority
- Process type
Each queue has its own scheduling algorithm, and queues may be scheduled preemptively or non-preemptively.
Possibility I
If each queue has absolute priority over lower-priority queues, then no process in a lower-priority queue can run unless the higher-priority queues are all empty.
For example, in the above figure no process in the batch queue could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty.
Possibility II
If there is a time slice between the queues, then each queue gets a certain portion of the CPU time, which it can then schedule among the processes in its queue. For instance:
- 80% of the CPU time to foreground queue using RR.
- 20% of the CPU time to background queue using FCFS.
Multilevel Feedback Queue Scheduling
The algorithm chooses the process with the highest priority from the occupied queues and runs that process either preemptively or non-preemptively. If the process uses too much CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. Note that this form of aging prevents starvation.
Example:
Figure 5.7 pp. 140 in Sinha
- A process entering the ready queue is placed in queue 0.
- If it does not finish within 8 milliseconds, it is moved to the tail of queue 1.
- If it does not complete there, it is preempted and placed into queue 2.
- Processes in queue 2 run on an FCFS basis, but only when queue 0 and queue 1 are empty (see the sketch below).
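A much-simplified sketch of the three-queue scheme above, assuming all jobs arrive together and ignoring later arrivals; the burst times are made up:

#include <stdio.h>

#define N 3

int main(void) {
    int remain[N] = {5, 20, 40};   /* hypothetical burst times (ms) */
    int quantum[2] = {8, 16};      /* quanta for queue 0 and queue 1 */

    for (int q = 0; q < 2; q++)    /* queues 0 and 1: one quantum each */
        for (int i = 0; i < N; i++)
            if (remain[i] > 0) {
                int slice = remain[i] < quantum[q] ? remain[i] : quantum[q];
                remain[i] -= slice;
                printf("queue %d: job %d runs %d ms\n", q, i, slice);
            }

    for (int i = 0; i < N; i++)    /* queue 2: FCFS, run to completion */
        if (remain[i] > 0) {
            printf("queue 2: job %d runs %d ms\n", i, remain[i]);
            remain[i] = 0;
        }
    return 0;
}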
Interprocess Communication
Race Conditions
In operating systems, processes that are working together often share some common storage (main memory, files, etc.) that each process can read and write. When two or more processes read or write shared data and the final result depends on who runs precisely when, we have a race condition. Concurrently executing threads that share data need to synchronize their operations in order to avoid race conditions on that data; only one 'customer' thread at a time should be allowed to examine and update the shared variable.
Race conditions are also possible inside operating systems themselves. If the ready queue is implemented as a linked list, and the ready queue is being manipulated during the handling of an interrupt, then interrupts must be disabled to prevent another interrupt before the first one completes. If interrupts are not disabled, the linked list could become corrupt.
Critical Section
How to avoid race conditions?
The key to preventing trouble involving shared storage is to find some way to prohibit more than one process from reading and writing the shared data simultaneously. That part of the program where the shared memory is accessed is called the critical section. To avoid race conditions and flawed results, one must identify the code in the critical sections of each thread. The characteristic properties of code that forms a critical section are:
- Code that references one or more variables in a "read-update-write" fashion while any of those variables is possibly being altered by another thread.
- Code that alters one or more variables that are possibly being referenced in a "read-update-write" fashion by another thread.
- Code that uses a data structure while any part of it is possibly being altered by another thread.
- Code that alters any part of a data structure while it is possibly in use by another thread.
Mutual Exclusion
A way of making sure that if one process is using a shared modifiable data, the other processes will be excluded from doing the same thing.
Formally, while one process is executing on a shared variable, all other processes desiring to do so at the same moment should be kept waiting; when that process has finished with the shared variable, one of the processes waiting to do so should be allowed to proceed. In this fashion, each process executing on the shared data (variables) excludes all others from doing so simultaneously. This is called mutual exclusion.
Note that mutual exclusion needs to be enforced only when processes access shared modifiable data; when processes are performing operations that do not conflict with one another they should be allowed to proceed concurrently.
Mutual Exclusion Conditions
If we could arrange matters such that no two processes were ever in their critical sections simultaneously, we could avoid race conditions. We need four conditions to hold to have a good solution for the critical section problem (mutual exclusion):
- No two processes may be simultaneously inside their critical sections.
- No assumptions are made about relative speeds of processes or the number of CPUs.
- No process running outside its critical section should block other processes.
- No process should have to wait arbitrarily long to enter its critical section.
Proposals for Achieving Mutual Exclusion
Problem
When one process is updating shared modifiable data in its critical section, no other process should be allowed to enter its critical section.
Proposal 1 -Disabling Interrupts (Hardware Solution)
Each process disables all interrupts just after entering its critical section and re-enables all interrupts just before leaving it. With interrupts turned off the CPU cannot be switched to another process. Hence, no other process can enter its critical section, and mutual exclusion is achieved.
Conclusion
Disabling interrupts is sometimes a useful technique within the kernel of an operating system, but it is not appropriate as a general mutual exclusion mechanism for user processes. The reason is that it is unwise to give user processes the power to turn off interrupts.
Proposal 2 - Lock Variable (Software Solution)
In this solution, we consider a single, shared lock variable, initially 0. When a process wants to enter its critical section, it first tests the lock. If the lock is 0, the process sets it to 1 and then enters the critical section. If the lock is already 1, the process just waits until the lock variable becomes 0. Thus, a 0 means that no process is in its critical section, and a 1 means hold your horses, some process is in its critical section.
Conclusion
The flaw in this proposal can best be explained by example. Suppose process A sees that the lock is 0. Before it can set the lock to 1, another process B is scheduled, runs, and sets the lock to 1. When process A runs again, it will also set the lock to 1, and the two processes will be in their critical sections simultaneously.
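The root of the flaw is that testing the lock and setting it are two separate steps. Modern hardware closes this gap with an atomic test-and-set instruction; a sketch using C11 atomics (a mechanism not discussed in the text above):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

/* The broken proposal, 'if (lock == 0) lock = 1;', can be interleaved.
   atomic_flag_test_and_set performs the test and the set as one
   indivisible action, so only one thread can win. */
void enter_critical(void) {
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy-wait: lock was already 1 */
}

void leave_critical(void) {
    atomic_flag_clear(&lock);   /* set the lock back to 0 */
}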
Proposal 3 - Strict Alternation
In this proposed solution, the integer variable 'turn' keeps track of whose turn it is to enter the critical section. Initially, process A inspects turn, finds it to be 0, and enters its critical section. Process B also finds it to be 0 and therefore sits in a loop continually testing 'turn' to see when it becomes 1. Continuously testing a variable waiting for some value to appear is called busy-waiting.
Conclusion
Taking turns is not a good idea when one of the processes is much slower than the other. Suppose process 0 finishes its critical section quickly, so both processes are now in their noncritical sections. This situation violates condition 3 mentioned above: a process not in its critical section can block the other from entering.
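A sketch of strict alternation for two processes, 0 and 1; each busy-waits until the shared 'turn' variable names it. This is illustrative only; real C code would need atomics or barriers rather than volatile:

/* Shared between process 0 and process 1. */
volatile int turn = 0;

void process(int me) {             /* me is 0 or 1 */
    for (;;) {
        while (turn != me)
            ;                      /* busy-wait: not our turn */
        /* critical section */
        turn = 1 - me;             /* hand the turn to the other */
        /* noncritical section: if this runs long, the other process,
           after one pass, is locked out even though the critical
           section is free, which is the violation described above */
    }
}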
Using System Calls 'sleep' and 'wakeup'
Basically, what the above-mentioned solutions do is this: when a process wants to enter its critical section, it checks to see if entry is allowed. If it is not, the process goes into a tight loop and waits (i.e., starts busy waiting) until it is allowed to enter. This approach wastes CPU time.
Now we look at an interprocess communication primitive pair: sleep and wakeup.
- Sleep
- It is a system call that causes the caller to block, that is, be suspended until some other process wakes it up.
- Wakeup
- It is a system call that wakes up the process.
- Both 'sleep' and 'wakeup' system calls have one parameter that represents a memory address used to match up 'sleeps' and 'wakeups'.
The Bounded Buffer Producers and Consumers
The bounded-buffer producer-consumer problem assumes that there is a fixed buffer size, i.e., a finite number of slots is available.
Statement
To suspend the producers when the buffer is full, to suspend the consumers when the buffer is empty, and to make sure that only one process at a time manipulates a buffer so there are no race conditions or lost updates.
As an example of how the sleep and wakeup system calls are used, consider the producer-consumer problem, also known as the bounded-buffer problem.
Two processes share a common, fixed-size (bounded) buffer. The producer puts information into the buffer and the consumer takes information out.
Trouble arises when:
- The producer wants to put a new item in the buffer, but the buffer is already full. Solution: the producer goes to sleep, to be awakened when the consumer has removed data.
- The consumer wants to remove an item from the buffer, but the buffer is already empty. Solution: the consumer goes to sleep until the producer puts some data in the buffer and wakes the consumer up.
Semaphores
E.W. Dijkstra (1965) abstracted the key notion of mutual exclusion in his concepts of semaphores.
Definition
A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and an initialization operation. Binary semaphores can assume only the value 0 or the value 1; counting semaphores (also called general semaphores) can assume any nonnegative value.
The P (or wait or sleep or down) operation on semaphore S, written as P(S) or wait(S), operates as follows:
P(S): IF S > 0
THEN S := S - 1
ELSE (wait on S)
The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal (S), operates as follows:
V(S): IF (one or more processes are waiting on S)
THEN (let one of these processes proceed)
ELSE S := S +1
Operations P and V are done as single, indivisible, atomic actions. It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed. Mutual exclusion on the semaphore, S, is enforced within P(S) and V(S).
If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed. The other processes will be kept waiting, but the implementation of P and V guarantees that processes will not suffer indefinite postponement.
Semaphores solve the lost-wakeup problem.
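POSIX exposes counting semaphores whose sem_wait and sem_post play the roles of P and V. A minimal mutual-exclusion sketch, assuming a POSIX system:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t s;
int shared = 0;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);   /* P(S): block until S > 0, then decrement */
        shared++;       /* critical section */
        sem_post(&s);   /* V(S): increment, waking a waiter if any */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&s, 0, 1);            /* binary semaphore, initially 1 */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d\n", shared);   /* always 200000 */
    return 0;
}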
Producer-Consumer Problem Using Semaphores
The solution to the producer-consumer problem uses three semaphores, namely full, empty, and mutex.
The semaphore 'full' is used for counting the number of slots in the buffer that are full, 'empty' for counting the number of slots that are empty, and 'mutex' to make sure that the producer and consumer do not access the modifiable shared sections of the buffer simultaneously.
Initialization
- Set full buffer slots to 0, i.e., semaphore full = 0.
- Set empty buffer slots to N, i.e., semaphore empty = N.
- For controlling access to the critical section, set mutex to 1, i.e., semaphore mutex = 1.
Producer ( )
WHILE (true)
    produce-Item ( );
    P (empty);
    P (mutex);
    enter-Item ( );
    V (mutex);
    V (full);

Consumer ( )
WHILE (true)
    P (full);
    P (mutex);
    remove-Item ( );
    V (mutex);
    V (empty);
    consume-Item (Item);
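The pseudocode maps directly onto POSIX semaphores. A runnable sketch, with a made-up buffer size and item count:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 8                        /* hypothetical number of slots */

int buffer[N];
int in = 0, out = 0;
sem_t empty, full, mutex;

static void *producer(void *arg) {
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty);          /* P(empty): wait for a free slot   */
        sem_wait(&mutex);          /* P(mutex): enter critical section */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);          /* V(mutex) */
        sem_post(&full);           /* V(full): one more filled slot    */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&full);           /* P(full): wait for a filled slot  */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);          /* V(empty): one more free slot     */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N);        /* all N slots start empty */
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}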
Deadlock
“Crises and deadlocks when they occur have at least this advantage, that they force us to think.” - Jawaharlal Nehru (1889-1964), Indian political leader
A deadlock is a situation in which a set of processes is blocked because each process holds a resource and is waiting for a resource held by another process in the set. The resources may be either physical or logical. Examples of physical resources are printers, tape drives, memory space, and CPU cycles. Examples of logical resources are files, semaphores, and monitors.
The simplest example of deadlock is where process 1 has been allocated the non-shareable resource A, say a tape drive, and process 2 has been allocated the non-shareable resource B, say a printer. Now, if it turns out that process 1 needs resource B (the printer) to proceed and process 2 needs resource A (the tape drive) to proceed, and these are the only two processes in the system, each blocks the other and all useful work in the system stops. This situation is termed deadlock. The system is in a deadlock state because each process holds a resource being requested by the other process, and neither process is willing to release the resource it holds.
Preemptable and Nonpreemptable Resources
Resources come in two flavors: preemptable and nonpreemptable. A preemptable resource is one that can be taken away from the process with no ill effects. Memory is an example of a preemptable resource. On the other hand, a nonpreemptable resource is one that cannot be taken away from a process without causing ill effects. For example, a CD recorder is not preemptable at an arbitrary moment.
Reallocating resources can resolve deadlocks that involve preemptable resources. Deadlocks that involve nonpreemptable resources are difficult to deal with.
Necessary and Sufficient Deadlock Conditions
Coffman (1971) identified four conditions that must hold simultaneously for there to be a deadlock.
1. Mutual Exclusion Condition - The resources involved are non-shareable.
Explanation: At least one resource must be held in a non-shareable mode, that is, only one process at a time claims exclusive control of the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2. Hold and Wait Condition - A requesting process already holds resources while waiting for requested resources.
Explanation: There must exist a process that is holding a resource already allocated to it while waiting for additional resources that are currently being held by other processes.
3. No-Preemption Condition - Resources already allocated to a process cannot be preempted.
Explanation: Resources cannot be forcibly removed from the processes holding them; they are released only voluntarily, after the process has used them to completion.
4. Circular Wait Condition - The processes in the system form a circular list or chain where each process in the list is waiting for a resource held by the next process in the list.
As a real-world illustration, consider a traffic jam at an intersection, where all four conditions hold:
- Mutual exclusion applies, since only one vehicle can be on a section of the street at a time.
- Hold-and-wait applies, since each vehicle is occupying a section of the street and waiting to move on to the next section.
- No-preemption applies, since a section of the street occupied by a vehicle cannot be taken away from it.
- Circular wait applies, since each vehicle is waiting for the next vehicle to move; that is, each vehicle in the jam is waiting for a section of street held by the next vehicle.
It is not possible to have a deadlock involving only a single process. A deadlock involves a circular "hold-and-wait" condition between two or more processes, so one process cannot hold a resource yet be waiting for another resource that it itself is holding. In addition, by this definition deadlock is not possible between two threads in a process, because it is the process that holds resources, not the thread; that is, each thread has access to the resources held by the process.
Dealing with Deadlock Problem
In general, there are four strategies of dealing with deadlock problem:
- 1. The Ostrich Approach
- Just ignore the deadlock problem altogether.
- 2. Deadlock Detection and Recovery
- Detect deadlock and, when it occurs, take steps to recover.
- 3. Deadlock Avoidance
- Avoid deadlock by careful resource scheduling.
- 4. Deadlock Prevention
- Prevent deadlock by resource scheduling so as to negate at least one of the four conditions.
Now we consider each strategy in order of decreasing severity.
Deadlock Prevention
- Elimination of the "Mutual Exclusion" Condition
The mutual exclusion condition must hold for non-shareable resources. That is, several processes cannot simultaneously share a single resource. This condition is difficult to eliminate because some resources, such as the tape drive and printer, are inherently non-shareable. Note that shareable resources, like a read-only file, do not require mutually exclusive access and thus cannot be involved in a deadlock.
- Elimination of the "Hold and Wait" Condition
There are two possibilities for elimination of the second condition. The first alternative is that a process be granted all of the resources it needs at once, prior to execution. The second alternative is to disallow a process from requesting resources whenever it has previously allocated resources. The first strategy requires that all of the resources a process will need be requested at once. The system must grant resources on an "all or none" basis. If the complete set of resources needed by a process is not currently available, then the process must wait until the complete set is available. While the process waits, however, it may not hold any resources, so the "wait for" condition is denied and deadlocks simply cannot occur. This strategy can lead to a serious waste of resources. For example, a program requiring ten tape drives must request and receive all ten drives before it begins executing. If the program needs only one tape drive to begin execution and does not need the remaining tape drives for several hours, then substantial computer resources (9 tape drives) will sit idle for several hours. This strategy can also cause indefinite postponement (starvation), since not all the required resources may become available at once.
- Elimination of the "No-Preemption" Condition
The no-preemption condition can be alleviated by forcing a process waiting for a resource that cannot immediately be allocated to relinquish all of its currently held resources, so that other processes may use them to finish. Suppose a system does allow processes to hold resources while requesting additional resources. Consider what happens when a request cannot be satisfied: a process holds resources a second process may need in order to proceed, while the second process may hold the resources needed by the first process. This is a deadlock. This strategy therefore requires that when a process holding some resources is denied a request for additional resources, it must release its held resources and, if necessary, request them again together with the additional resources. Implementation of this strategy effectively denies the "no-preemption" condition.
High Cost: When a process releases resources, it may lose all its work to that point. One serious consequence of this strategy is the possibility of indefinite postponement (starvation): a process might be held off indefinitely as it repeatedly requests and releases the same resources.
- Elimination of the "Circular Wait" Condition
The last condition, the circular wait, can be denied by imposing a total ordering on all of the resource types and then forcing all processes to request the resources in that order (increasing or decreasing). This strategy imposes a total ordering of all resource types, and requires that each process requests resources in numerical order of enumeration. With this rule, the resource allocation graph can never have a cycle.
For example, provide a global numbering of all the resources, as shown:
1 | ≡ | Card reader |
2 | ≡ | Printer |
3 | ≡ | Plotter |
4 | ≡ | Tape drive |
5 | ≡ | Card punch |
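Under this numbering, a process that needs both the printer (2) and the tape drive (4) must always acquire 2 before 4. A sketch with mutexes standing in for the devices (the function and array names are hypothetical):

#include <pthread.h>

/* One lock per numbered resource; indices follow the global numbering
   above (1 = card reader, 2 = printer, ..., 5 = card punch). */
pthread_mutex_t resource[6] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

/* Every process acquires in increasing numerical order, so a cycle
   (circular wait) can never form. */
void use_printer_and_tape(void) {
    pthread_mutex_lock(&resource[2]);   /* printer first (lower number) */
    pthread_mutex_lock(&resource[4]);   /* then the tape drive */
    /* ... use both devices ... */
    pthread_mutex_unlock(&resource[4]);
    pthread_mutex_unlock(&resource[2]);
}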
Deadlock Avoidance
If the necessary conditions for a deadlock are in place, it is still possible to avoid deadlock by being careful when resources are allocated. Perhaps the most famous deadlock avoidance algorithm, due to Dijkstra [1965], is the Banker's algorithm, so named because the process is analogous to that used by a banker in deciding if a loan can be safely made.
Banker’s Algorithm
In this analogy:
Customers | ≡ | processes |
Units | ≡ | resources, say, tape drives |
Banker | ≡ | operating system |

Customers | Used | Max |
---|---|---|
A | 0 | 6 |
B | 0 | 5 |
C | 0 | 4 |
D | 0 | 7 |
Available Units = 10
Fig. 1
In the above figure, we see four customers, each of whom has been granted a number of credit units. The banker reserved only 10 units rather than 22 units to service them. At a certain moment, the situation becomes:
Customers | Used | Max |
---|---|---|
A | 1 | 6 |
B | 1 | 5 |
C | 2 | 4 |
D | 4 | 7 |
Available Units = 2
Fig. 2
Safe State: The key to a state being safe is that there is at least one way for all customers to finish. The state of figure 2 is safe because with 2 units left, the banker can delay any request except C's, thus letting C finish and release all four of its units. With four units in hand, the banker can let either D or B have the necessary units, and so on.
Unsafe State: Consider what would happen if a request from B for one more unit were granted in figure 2 above. We would have the following situation:
Customers | Used | Max |
---|---|---|
A | 1 | 6 |
B | 2 | 5 |
C | 2 | 4 |
D | 4 | 7 |
Available Units = 1
Fig. 3
This state is unsafe: if all the customers, namely A, B, C, and D, asked for their maximum loans, the banker could not satisfy any of them, and we would have a deadlock.
Important Note: It is important to note that an unsafe state does not imply the existence, or even the eventual existence, of a deadlock. What an unsafe state does imply is simply that some unfortunate sequence of events might lead to a deadlock.
The Banker's algorithm is thus to consider each request as it occurs and see if granting it leads to a safe state. If it does, the request is granted; otherwise, it is postponed until later. Habermann [1969] has shown that execution of the algorithm has complexity proportional to N^2, where N is the number of processes; since the algorithm is executed each time a resource request occurs, the overhead is significant.
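A sketch of the safety test at the heart of the Banker's algorithm, specialized to the single resource type of the figures above; it checks whether some order lets every customer finish:

#include <stdio.h>
#include <stdbool.h>

#define N 4   /* customers A, B, C, D */

/* Returns true if some order lets every customer finish. */
bool is_safe(int used[N], int max[N], int available) {
    bool done[N] = {false};
    for (int finished = 0; finished < N; ) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            /* customer i can finish if its remaining need fits */
            if (!done[i] && max[i] - used[i] <= available) {
                available += used[i];   /* finishes, returns its units */
                done[i] = true;
                finished++;
                progress = true;
            }
        }
        if (!progress) return false;    /* nobody can finish: unsafe */
    }
    return true;
}

int main(void) {
    int max[N]   = {6, 5, 4, 7};
    int used2[N] = {1, 1, 2, 4};   /* Fig. 2, 2 units available */
    int used3[N] = {1, 2, 2, 4};   /* Fig. 3, 1 unit available  */
    printf("Fig. 2: %s\n", is_safe(used2, max, 2) ? "safe" : "unsafe");
    printf("Fig. 3: %s\n", is_safe(used3, max, 1) ? "safe" : "unsafe");
    return 0;
}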
Deadlock Detection
Deadlock detection is the process of actually determining that a deadlock exists and identifying the processes and resources involved in the deadlock.
The basic idea is to check allocations against resource availability for all possible allocation sequences to determine if the system is in a deadlocked state. Of course, the deadlock detection algorithm is only half of this strategy. Once a deadlock is detected, there needs to be a way to recover. Several alternatives exist:
- Temporarily take resources away from deadlocked processes.
- Back off a process to some checkpoint, allowing preemption of a needed resource, and restart the process from the checkpoint later.
- Successively kill processes until the system is deadlock-free.
Absolutely Important UNIX Commands
cat f | List contents of file |
cat f1 f2 >f3 | Concatenates f1(file 1) & f2(file 2) into f3(file 3) |
cd | returns you to your home or main directory |
cd / | takes you to the root, as far up (to the left) as far as possible |
cd D | moves you down (right in the pathname) into the directory D |
cd .. | moves you up (left in pathname) a directory; likewise, |
cd ../../.. | moves you up (left in the pathname) 3 directory levels |
chmod ### f | changes protections on f. The order is: you|group|universe (rwxrwxrwx). In a long listing, a leading d marks a directory and a - marks a file. Each digit is the sum of read=4, write=2, execute=1. So, to set the protections on the directory directoryname to you rwx, group r-x, universe r--, you would enter: chmod 754 directoryname |
clear | to clear screen |
compress filename | compresses the file filename and puts a .Z extension on it. To uncompress it, type uncompress filename.Z |
cp f1 f2 | Copy file f1 into f2 |
cp -r D1 D2 | copies the directory D1 and renames it D2 |
^-c (ctrl-c) | to kill a running process |
^-d (ctrl-d) | to close an open window |
df | gives disk usage |
diff f1 f2 | Lists file differences |
dig host | domain name, IP address, and alias information for the given host. |
dosdir | does a "dir" (~ls in UNIX) on a DOS floppy in the disk drive |
dosread f | reads the file f from a DOS floppy to your computer account |
doswrite f | writes the file f from your computer account to a DOS floppy |
du | lists all subdirectories and their sizes (in blocks?) and total directory size (in blocks?) (takes a long time) |
du -a | lists all files and their sizes (in blocks?) in present directory and total directory size (in blocks?) (takes a long time) |
du -s | lists overall directory size (in blocks?) (long but clean) |
env | shows current environment set-up |
find | Searches the named directory and its sub-directories for files. Most frequently called like this: find ./ -name "t*" -print, which searches the current directory (and all of its sub-directories) for any files that begin with the letter "t" and then prints them out. If you are looking for a specific filename, then replace "t*" with "filename", and "find" will print out all incidences of this file. |
finger user@host | (e.g., finger johndoe@ksu.edu fingers johndoe at Kent State University) |
ftp machinename | establishes an ftp link with machinename |
gzip filename | compresses filename, producing a file with a .gz extension |
gunzip filename | decompresses files created by gzip, compress or pack |
ispell f | Interactively checks the spelling of the file f, giving logical alternatives to the misspelled words. Type "?" to get help. "ispell" can be accessed from the command line, and also through emacs with M-x ispell-buffer. |
kill -9 -1 | (from a remotely logged-in site) kills all running processes (essentially forces a logout); *not to be used unless nothing else works* |
kill -9 process-id# | kills the running process with the given id |
lpq | shows UNIX print queue |
lpr f | prints the file f |
lprm job# | removes job from the printer queue |
ls | shows listing of files in present directory |
ls -a | shows listing of all files in present directory |
ls -l | shows long listing of files in present directory |
ls -la | more | shows long listing of all files in present directory |
man command | shows help on a specific command. |
mkdir D | creates a new directory called D |
more f | views the contents of the file f without making changes to it, one screen at a time. Hit q to quit more. |
mv f1 f2 | Rename file f1 as f2 |
mv f1 D | moves the file f1 into the directory D |
nslookup host | domain name, IP address, and alias information for the given host. e.g., nslookup www.kent.edu gives related data for www.kent.edu |
passwd | to change your password (takes an hour or so to take effect on all machines) |
ping host | to test if the host is up and running. |
pwd | present working directory |
ps | Shows processes running |
ps -flu | Shows detailed description of processes running |
pquota | Shows printer quota |
quota -v | Shows current disk usage and limits. |
rlogin machinename | allows you to remotely log in to another machine on which you have access privileges |
rm f | Delete (removes) the file f. |
rm -i f | prompts for confirmation before removing the file f |
rmdir D | Delete (removes) the empty directory D |
rm -r D | removes the directory D and its contents; use with caution |
sort f | Alphabetically sort f. |
talk user@machinename | establishes an e-talk session with user@machinename |
tar | combines multiple files into one or vice-versa |
telnet machinename | allows you to remotely log in to another machine on which you have access privileges |
uncompress filename.Z | uncompresses filename.Z |
users | shows who's logged in on the machine |
vi filename | opens the file called filename in the vi text editor |
who | Shows who is currently logged on the system. |
whoami | shows username of person logged in that window |
whois domain_name | lists the domain registration record, e.g., whois kent.edu will produce the domain record for kent.edu |
* | wild card character representing any number of characters |
date | shows the time and date |
date -u | shows greenwich mean time |
. | a shortcut that stands for your current location in a pathname, e.g., cp /path/to/file . copies the file to the directory you are in |
.. | refers to the parent directory in any command, e.g., mv filename .. or cd .. |
pwd | shows where you are in the pathway |
? | wild card character representing exactly one character; can be used in succession |
~ | abbreviation for your home directory, e.g., ls ~ lists files in your home directory without moving there |
zip | best compression for IBM files. |
References
- Deitel, H. M., "Operating Systems", 2nd ed., Addison-Wesley, Reading, MA, 1992.
- Finkel, R. A., "An Operating Systems Vade Mecum", 2nd ed., Prentice-Hall, Englewood Cliffs, NJ, 1988.
- Goscinski, A., "Distributed Operating Systems: The Logical Design".
- Hartley, S. J., "Operating Systems Programming".
- Krakowiak, S., "Principles of Operating Systems", MIT Press, 1988.
- Lane, M. G. and Mooney, J. D., "A Practical Approach to Operating Systems", Boyd and Fraser, 1988.
- Milenkovic, M., "Operating Systems: Concept and Design", 2nd ed., McGraw-Hill, 1992.
- Silberschatz, A. and Peterson, J. L., "Operating System Concepts", 1983.
- Silberschatz, A. and Galvin, P. B., "Operating System Concepts", 4th ed., Addison-Wesley, Reading, MA, 1994.
- Sinha, P. K., "Distributed Operating Systems".
- Singhal, M. and Shivaratri, N., "Advanced Concepts in Operating Systems".
- Stallings, W., "Operating Systems", Macmillan, New York, 1992.
- Tanenbaum, A. S., "Modern Operating Systems", Prentice-Hall, Englewood Cliffs, NJ, 1992.
- Tanenbaum, A. S., "Operating Systems: Design and Implementation".
- Tel, G., "Introduction to Distributed Algorithms".