Friday 18 March 2016

Software Engineering

UML Tutorial



UML is a standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems.
UML is standardized by the Object Management Group (OMG); the UML 1.0 specification draft was proposed to the OMG in January 1997.
This tutorial gives a complete introduction to UML.

The OMG continues its efforts to make UML a true industry standard.
  • UML stands for Unified Modeling Language.
  • UML is different from common programming languages such as C++, Java, and COBOL.
  • UML is a pictorial language used to make software blueprints.
So UML can be described as a general-purpose visual modeling language for visualizing, specifying, constructing, and documenting software systems. Although UML is generally used to model software systems, it is not limited to this boundary; it is also used to model non-software systems, such as the process flow in a manufacturing unit.
UML is not a programming language, but tools can use UML diagrams to generate code in various languages. UML has a direct relation with object-oriented analysis and design. After standardization, UML became an OMG standard.

Because UML describes real-world systems, it is very important to build a conceptual model and then proceed gradually. The conceptual model of UML can be mastered by learning the following three major elements:
  • UML building blocks
  • Rules to connect the building blocks
  • Common mechanisms of UML
This chapter describes all the UML building blocks. The building blocks of UML can be defined as:
  • Things
  • Relationships
  • Diagrams

(1) Things:

Things are the most important building blocks of UML. Things can be:
  • Structural
  • Behavioral
  • Grouping
  • Annotational

Structural things:

The Structural things define the static part of the model. They represent physical and conceptual elements. Following are the brief descriptions of the structural things.

Class:

A class represents a set of objects having similar responsibilities.
class

Interface:

An interface defines a set of operations that specify the responsibility of a class.
Interface

Collaboration:

A collaboration defines an interaction between elements.
Collaboration

Use case:

A use case represents a set of actions performed by a system to achieve a specific goal.
Use case

Component:

A component describes a physical part of a system.
Component

Node:

A node can be defined as a physical element that exists at run time.
Node

Behavioral things:

Behavioral things represent the dynamic parts of a UML model. The behavioral things are:

Interaction:

An interaction is a behavior consisting of a group of messages exchanged among elements to accomplish a specific task.
Interaction

State machine:

A state machine is useful when the state of an object in its life cycle is important. It defines the sequence of states an object goes through in response to events. Events are external factors responsible for state changes.
State machine

Grouping things:

Grouping things can be defined as a mechanism to group elements of a UML model together. There is only one grouping thing available:

Package:

A package is the only grouping thing available; it gathers structural and behavioral things.
Package

Annotational things:

Annotational things are a mechanism for capturing remarks, descriptions, and comments about UML model elements. The note is the only annotational thing available.

Note:

A note is used to render comments, constraints, etc., of a UML element.
Note

(2) Relationships:

Relationships are another important building block of UML. A relationship shows how elements are associated with each other, and this association describes the functionality of an application.
There are four kinds of relationships available.

Dependency:

Dependency is a relationship between two things in which a change in one element also affects the other.
Dependency

Association:

An association is basically a set of links that connects the elements of a UML model. It also describes how many objects take part in the relationship.
Association

Generalization:

Generalization is a relationship that connects a specialized element with a generalized element. It basically describes the inheritance relationship in the world of objects.
Generalization

Realization:

Realization is a relationship in which two elements are connected: one element describes a responsibility that is not implemented, and the other element implements it. This relationship typically exists in the case of interfaces.
Realization
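Realization maps naturally to code as an interface and an implementing class. A minimal Python sketch, with hypothetical names chosen only for illustration:

```python
from abc import ABC, abstractmethod

# Printer declares a responsibility without implementing it;
# TextPrinter realizes (implements) that responsibility.
class Printer(ABC):
    @abstractmethod
    def print_document(self, text: str) -> str:
        """Declared but not implemented."""

class TextPrinter(Printer):
    def print_document(self, text: str) -> str:
        return "printing: " + text
```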

Any real-world system is used by different kinds of users: developers, testers, business people, analysts, and many more. So before a system is designed, its architecture is considered from different perspectives. The most important part is to visualize the system from the perspectives of its different viewers. The better we understand the system, the better we can build it.
UML plays an important role in defining different perspectives of a system. These perspectives are:
  • Design
  • Implementation
  • Process
  • Deployment
At the centre is the Use Case view, which connects the other four. A use case represents the functionality of the system, so the other perspectives are connected to the use case view.
  • Design of a system consists of classes, interfaces, and collaborations. UML provides the class diagram and object diagram to support this perspective.
  • Implementation defines the components assembled together to make a complete physical system. The UML component diagram supports this perspective.
  • Process defines the flow of the system, so the same elements used in Design are also used to support this perspective.
  • Deployment represents the physical nodes of the system that form the hardware. The UML deployment diagram supports this perspective.

    UML is popular for its diagrammatic notations. We know that UML is for visualizing, specifying, constructing, and documenting the components of software and non-software systems. Here, visualization is the most important part and needs to be understood and remembered by heart.
    UML notations are the most important elements in modeling. Efficient and appropriate use of notations is very important for making a complete and meaningful model. The model is useless unless its purpose is depicted properly.
    So learning notations should be emphasized from the very beginning. Different notations are available for things and relationships. And the UML diagrams are made using the notations of things and relationships. Extensibility is another important feature which makes UML more powerful and flexible.
    This chapter describes the basic UML notations in more detail. It is an extension of the UML building blocks section discussed in the previous chapter.

Structural Things:

  • Graphical notations for structural things are the most widely used in UML. They are considered the nouns of UML models. The structural things are:
    • Classes
    • Interface
    • Collaboration
    • Use case
    • Active classes
    • Components
    • Nodes

Class Notation:

  • A UML class is represented by the diagram shown below, which is divided into four sections.
    • The top section is used to name the class.
    • The second section shows the attributes of the class.
    • The third section describes the operations performed by the class.
    • The fourth section is optional and shows any additional components.
    Class Notation
    Classes are used to represent objects. An object can be anything having properties and responsibilities.
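The compartments of the class notation correspond directly to elements of code. A Python sketch, with a hypothetical Customer class whose members are chosen only to mirror the notation:

```python
class Customer:                      # top section: class name
    def __init__(self, name: str, customer_id: int):
        # second section: attributes
        self.name = name
        self.customer_id = customer_id

    # third section: operations
    def rename(self, new_name: str) -> None:
        self.name = new_name
```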

Object Notation:

  • An object is represented in the same way as a class. The only difference is the name, which is underlined as shown below.
    Object Notation
    An object is an actual instance of a class, so it has the same usage as the class.

Interface Notation:

  • An interface is represented by a circle as shown below. It has a name, which is generally written below the circle.
    Interface Notation
    An interface is used to describe functionality without implementation. It is like a template where you declare different functions, not their implementations. When a class implements the interface, it also implements the functionality as per the requirement.

Collaboration Notation:

  • A collaboration is represented by a dashed ellipse as shown below. It has a name written inside the ellipse.
    Collaboration Notation
    A collaboration represents responsibilities, which are generally grouped together.

Use case Notation:

  • A use case is represented as an ellipse with a name inside it. It may contain additional responsibilities.
    Use case Notation
    Use case is used to capture high level functionalities of a system.

Actor Notation:

  • An actor is an internal or external entity that interacts with the system.
    Actor Notation
    An actor is used in a use case diagram to describe these internal or external entities.

Initial State Notation:

  • Initial state is defined to show the start of a process. This notation is used in almost all diagrams.
    Initial state Notation
    The usage of Initial State Notation is to show the starting point of a process.

Final State Notation:

  • Final state is used to show the end of a process. This notation is also used in almost all diagrams to describe the end.
    Final state Notation
    The usage of Final State Notation is to show the termination point of a process.

Active class Notation:

  • An active class looks similar to a class but is drawn with a thicker border. Active classes are generally used to describe the concurrent behaviour of a system.
    Active class Notation
    An active class is used to represent concurrency in a system.

Component Notation:

  • A component in UML is shown as below with a name inside. Additional elements can be added wherever required.
    Component Notation
    Component is used to represent any part of a system for which UML diagrams are made.

Node Notation:

  • A node in UML is represented by a three-dimensional box, as shown below, with a name. A node represents a physical component of the system.
    Node Notation
    A node is used to represent a physical part of a system, such as a server or a network.

Behavioral Things:

  • Dynamic parts are among the most important elements in UML. UML has a set of powerful features to represent the dynamic part of software and non-software systems. These features include interactions and state machines.
    Interactions can be of two types:
    • Sequential (Represented by sequence diagram)
    • Collaborative (Represented by collaboration diagram)

Interaction Notation:

  • An interaction is basically a message exchange between two UML components. The following diagram represents the different notations used in an interaction.
    Interaction Notation
    Interaction is used to represent communication among the components of a system.

State machine Notation:

  • State machine describes the different states of a component in its life cycle. The notations are described in the following diagram.
    State machine Notation
    State machine is used to describe different states of a system component. The state can be active, idle or any other depending upon the situation.
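A state machine can be sketched in code as a table of event-driven transitions. The states and events below are hypothetical, chosen only for illustration:

```python
# A minimal state-machine sketch: a dictionary of
# (current state, event) -> next state transitions.
class Machine:
    TRANSITIONS = {
        ("idle", "start"): "active",
        ("active", "finish"): "done",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event: str) -> str:
        # an event is the external factor that triggers a state change;
        # unknown events leave the state unchanged
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state
```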

Grouping Things:

  • Organizing a UML model is one of the most important aspects of design. In UML, there is only one element available for grouping: the package.

Package Notation:

  • The package notation is shown below; it is used to wrap the components of a system.
    Package Notation

Annotation Things:

  • In any diagram, the explanation of the different elements and their functionalities is very important. UML provides the note notation to support this requirement.

Note Notation:

  • The note notation is shown below; notes are used to provide necessary information about a system.
    Note Notation

Relationships

  • A model is not complete unless the relationships between elements are described properly. Relationships give a proper meaning to a UML model. The following are the different types of relationships available in UML.
    • Dependency
    • Association
    • Generalization
    • Extensibility

Dependency Notation:

  • Dependency is an important aspect of UML elements. It describes the dependent elements and the direction of the dependency.
    Dependency is represented by a dotted arrow, as shown below. The arrowhead points to the independent element, and the tail is attached to the dependent element.
    Dependency Notation
    Dependency is used to represent a dependency between two elements of a system.
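In code, a dependency usually appears as one class calling another. A sketch with hypothetical classes: ReportGenerator (the dependent element) uses DateFormatter (the independent element), so a change to DateFormatter's interface would affect ReportGenerator.

```python
class DateFormatter:
    @staticmethod
    def iso(year: int, month: int, day: int) -> str:
        return "%04d-%02d-%02d" % (year, month, day)

class ReportGenerator:
    def header(self, year: int, month: int, day: int) -> str:
        # this call is the dependency on DateFormatter
        return "Report for " + DateFormatter.iso(year, month, day)
```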

Association Notation:

  • Association describes how the elements in a UML diagram are associated. In simple words, it describes how many elements take part in an interaction.
    Association is represented by a solid line, with or without arrows on the ends. The two ends represent the two associated elements, as shown below. The multiplicity is also mentioned at the ends (1, *, etc.) to show how many objects are associated.
    Association Notation
    Association is used to represent the relationship between two elements of a system.
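A 1..* association can be sketched in code as one object holding a collection of linked objects. The Department and Employee names below are hypothetical:

```python
class Employee:
    def __init__(self, name: str):
        self.name = name

class Department:
    def __init__(self, title: str):
        self.title = title
        self.employees = []      # the "*" (many) end of the association

    def hire(self, employee: Employee) -> None:
        self.employees.append(employee)
```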

Generalization Notation:

  • Generalization describes the inheritance relationship of the object-oriented world. It is a parent-child relationship.
    Generalization is represented by an arrow with a hollow arrowhead, as shown below. The arrowhead points to the parent element, and the other end is attached to the child element.
    Generalization Notation
    Generalization is used to describe parent-child relationship of two elements of a system.

Extensibility Notation:

  • All languages, whether programming or modeling, have some mechanism to extend their capabilities, such as syntax and semantics. UML provides the following mechanisms for extensibility:
    • Stereotypes (represent new elements)
    • Tagged values (represent new attributes)
    • Constraints (represent boundaries)
    Extensibility Notation
    Extensibility notations are used to enhance the power of the language. They are basically additional elements used to represent extra behavior of the system, behavior not covered by the standard notations.

UML Class Diagram

Overview:

The class diagram is a static diagram: it represents the static view of an application. Class diagrams are used not only for visualizing, describing, and documenting different aspects of a system, but also for constructing the executable code of the software application.
A class diagram describes the attributes and operations of a class and also the constraints imposed on the system. Class diagrams are widely used in the modelling of object-oriented systems because they are the only UML diagrams that can be mapped directly to object-oriented languages.
A class diagram shows a collection of classes, interfaces, associations, collaborations, and constraints. It is also known as a structural diagram.

Purpose:

The purpose of the class diagram is to model the static view of an application. Class diagrams are the only diagrams that can be directly mapped to object-oriented languages, and are thus widely used at construction time.
UML diagrams like the activity diagram and sequence diagram can only show the sequence flow of the application; the class diagram is different, which makes it the most popular UML diagram in the coder community.
So the purpose of the class diagram can be summarized as:
  • Analysis and design of the static view of an application.
  • Describe responsibilities of a system.
  • Base for component and deployment diagrams.
  • Forward and reverse engineering.

How to draw Class Diagram?

Class diagrams are the most popular UML diagrams used for the construction of software applications, so it is very important to learn the procedure for drawing them.
Class diagrams have many properties to consider while drawing, but here the diagram will be considered from a top-level view.
A class diagram is basically a graphical representation of the static view of the system and represents different aspects of the application, so a collection of class diagrams represents the whole system.
The following points should be remembered while drawing a class diagram:
  • The name of the class diagram should be meaningful to describe the aspect of the system.
  • Each element and their relationships should be identified in advance.
  • Responsibility (attributes and methods) of each class should be clearly identified.
  • For each class, a minimum number of properties should be specified, because unnecessary properties make the diagram complicated.
  • Use notes whenever required to describe some aspect of the diagram, because at the end of the drawing it should be understandable to the developer/coder.
  • Finally, before making the final version, the diagram should be drawn on plain paper and reworked as many times as necessary to make it correct.
The following diagram is an example of an Order system in an application; it describes one particular aspect of the entire application.
  • First, Order and Customer are identified as two elements of the system. They have a one-to-many relationship, because a customer can have multiple orders.
  • The Order class is kept abstract, and it has two concrete subclasses (an inheritance relationship): SpecialOrder and NormalOrder.
  • The two subclasses have all the properties of the Order class. In addition, they have additional functions such as dispatch() and receive().
So the following class diagram has been drawn considering all the points mentioned above:
UML Class Diagram
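The order-system structure described above can be sketched in Python. Order is abstract, SpecialOrder and NormalOrder inherit from it, and Customer holds a one-to-many association to orders; the attribute and method bodies are illustrative placeholders, not taken from the diagram.

```python
from abc import ABC

class Order(ABC):
    def __init__(self, order_id: int):
        self.order_id = order_id

class SpecialOrder(Order):
    # functions added on top of what Order provides
    def dispatch(self) -> str:
        return "special order %d dispatched" % self.order_id

    def receive(self) -> str:
        return "special order %d received" % self.order_id

class NormalOrder(Order):
    def dispatch(self) -> str:
        return "normal order %d dispatched" % self.order_id

    def receive(self) -> str:
        return "normal order %d received" % self.order_id

class Customer:
    def __init__(self, name: str):
        self.name = name
        self.orders = []         # one customer, many orders
```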

Where to use Class Diagrams?

The class diagram is a static diagram used to model the static view of a system. The static view describes the vocabulary of the system.
The class diagram is also considered the foundation for component and deployment diagrams. Class diagrams are used not only to visualize the static view of a system but also to construct the executable code for forward and reverse engineering of any system.
Generally, UML diagrams are not directly mapped to any object-oriented programming language, but the class diagram is an exception.
The class diagram clearly shows the mapping to object-oriented languages such as Java and C++, so in practice the class diagram is generally used for construction purposes.
So in a brief, class diagrams are used for:
  • Describing the static view of the system.
  • Showing the collaboration among the elements of the static view.
  • Describing the functionalities performed by the system.
  • Construction of software applications using object oriented languages.
    UML Use Case Diagram

    Overview:

    To model a system, the most important aspect is to capture its dynamic behaviour: the behaviour of the system when it is running/operating.
    Static behaviour alone is not sufficient to model a system; dynamic behaviour is more important. In UML, five diagrams are available to model the dynamic nature of a system, and the use case diagram is one of them. Since the use case diagram is dynamic in nature, there must be some internal or external factors driving the interaction.
    These internal and external agents are known as actors. Use case diagrams consist of actors, use cases, and their relationships. The diagram is used to model a system or subsystem of an application. A single use case diagram captures a particular functionality of a system.
    So to model the entire system, a number of use case diagrams are used.

    Purpose:

    The purpose of a use case diagram is to capture the dynamic aspect of a system. But this definition is too generic,
    because the other four diagrams (activity, sequence, collaboration, and statechart) have the same purpose. So we will look at some specific purposes that distinguish it from the other four diagrams.
    Use case diagrams are used to gather the requirements of a system, including internal and external influences. These requirements are mostly design requirements. So when a system is analyzed to gather its functionalities, use cases are prepared and actors are identified.
    When this initial task is complete, use case diagrams are modelled to present the outside view.
    So in brief, the purposes of use case diagrams can be as follows:
    • Used to gather requirements of a system.
    • Used to get an outside view of a system.
    • Identify external and internal factors influencing the system.
    • Show the interaction between the requirements and the actors.

    How to draw Use Case Diagram?

    Use case diagrams are used for high-level requirement analysis of a system. When the requirements of a system are analyzed, the functionalities are captured in use cases.
    So we can say that use cases are nothing but the system's functionalities written in an organized manner. The second thing relevant to use cases is the actors. An actor can be defined as something that interacts with the system.
    An actor can be a human user, an internal application, or an external application. So, in brief, when we plan to draw a use case diagram, we should have the following items identified:
    • Functionalities to be represented as use cases
    • Actors
    • Relationships among the use cases and actors.
    Use case diagrams are drawn to capture the functional requirements of a system. After identifying the items above, we follow these guidelines to draw an effective use case diagram:
    • The name of a use case is very important. The name should be chosen so that it identifies the functionality performed.
    • Give suitable names to the actors.
    • Show relationships and dependencies clearly in the diagram.
    • Do not try to include all types of relationships, because the main purpose of the diagram is to identify requirements.
    • Use notes whenever required to clarify important points.
    The following is a sample use case diagram representing an order management system. Looking at the diagram, we find three use cases (Order, SpecialOrder, and NormalOrder) and one actor, the customer.
    The SpecialOrder and NormalOrder use cases are extended from the Order use case, so they have an extend relationship. Another important point is to identify the system boundary, which is shown in the picture. The actor Customer lies outside the system, as it is an external user of the system.
    UML Use Case Diagram

    Where to use Use Case Diagrams?

    As already discussed, there are five diagrams in UML for modeling the dynamic view of a system, and each has a specific purpose. These specific purposes are different angles on a running system.
    So to understand the dynamics of a system, we need to use different types of diagrams. The use case diagram is one of them, and its specific purpose is to gather system requirements and actors.
    Use case diagrams specify the events of a system and their flows, but they never describe how those events are implemented. A use case diagram can be imagined as a black box where only the input, the output, and the function of the black box are known.
    These diagrams are used at a very high level of design. This high-level design is then refined again and again to get a complete and practical picture of the system. A well-structured use case also describes the precondition, postcondition, and exceptions, and these extra elements are used to make test cases when performing testing.
    Although use cases are not a natural candidate for forward and reverse engineering, they are still used for both, in a slightly different way.
    In forward engineering, use case diagrams are used to make test cases; in reverse engineering, use cases are used to prepare the requirement details from the existing application.
    So the following are the places where use case diagrams are used:
    • Requirement analysis and high level design.
    • Model the context of a system.
    • Reverse engineering.
    • Forward engineering.

      UML Activity Diagram

      Overview:

      The activity diagram is another important diagram in UML for describing the dynamic aspects of a system.
      An activity diagram is basically a flowchart representing the flow from one activity to another. An activity can be described as an operation of the system.
      So the control flow is drawn from one operation to another. This flow can be sequential, branched, or concurrent. Activity diagrams deal with all types of flow control using elements such as fork and join.

      Purpose:

      The basic purpose of activity diagrams is similar to that of the other four diagrams: to capture the dynamic behaviour of the system. The other four diagrams show message flow from one object to another, but the activity diagram shows the flow from one activity to another.
      An activity is a particular operation of the system. Activity diagrams are used not only for visualizing the dynamic nature of a system, but also for constructing the executable system using forward and reverse engineering techniques. The only thing missing from the activity diagram is the message part:
      it does not show any message flow from one activity to another. The activity diagram is sometimes considered a flowchart; although it looks like one, it is not. It shows different kinds of flow: parallel, branched, concurrent, and single.
      So the purposes can be described as:
      • Draw the activity flow of a system.
      • Describe the sequence from one activity to another.
      • Describe the parallel, branched and concurrent flow of the system.

      How to draw Activity Diagram?

      Activity diagrams are mainly used as flowcharts consisting of the activities performed by the system. But an activity diagram is not exactly a flowchart, as it has some additional capabilities, including branching, parallel flow, and swimlanes.
      Before drawing an activity diagram, we must have a clear understanding of its elements. The main element of an activity diagram is the activity itself: a function performed by the system. After identifying the activities, we need to understand how they are associated with constraints and conditions.
      So before drawing an activity diagram we should identify the following elements:
      • Activities
      • Association
      • Conditions
      • Constraints
      Once the above parameters are identified, we need to make a mental layout of the entire flow. This mental layout is then transformed into an activity diagram.
      The following is an example activity diagram for the order management system. In the diagram, four activities are identified, and they are associated with conditions. One important point should be clearly understood: an activity diagram cannot be matched exactly with code. The activity diagram is made to understand the flow of activities and is mainly used by business users.
      The following diagram is drawn with the four main activities:
      • Send order by the customer
      • Receipt of the order
      • Confirm order
      • Dispatch order
      After receiving an order request, condition checks are performed to determine whether it is a normal or special order. After the type of order is identified, the dispatch activity is performed, and that marks the termination of the process.
      UML Activity Diagram
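The activity flow above can be sketched as a short function: receive the order, branch on a condition check (special vs. normal), then dispatch. The step names are illustrative stand-ins for the four activities, not taken from the diagram.

```python
def process_order(is_special: bool) -> list:
    steps = ["send order", "receive order"]
    if is_special:               # branch: condition check on order type
        steps.append("confirm special order")
    else:
        steps.append("confirm normal order")
    steps.append("dispatch order")   # termination of the process
    return steps
```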

      Where to use Activity Diagrams?

      The basic usage of the activity diagram is similar to that of the other four UML diagrams. Its specific usage is to model the control flow from one activity to another; this control flow does not include messages.
      The activity diagram is suitable for modeling the activity flow of a system. An application can have multiple systems; the activity diagram also captures these systems and describes the flow from one system to another. This specific usage is not available in other diagrams. These systems can be databases, external queues, or any other systems.
      Now we will look at the practical applications of the activity diagram. From the above discussion it is clear that an activity diagram is drawn at a very high level, so it gives a high-level view of a system. This high-level view is mainly for business users or any other non-technical person.
      This diagram is used to model activities, which are nothing but business requirements. So the diagram has more impact on business understanding than on implementation details.
      Following are the main usages of activity diagram:
      • Modeling work flow by using activities.
      • Modeling business requirements.
      • High level understanding of the system's functionalities.
      • Investigating business requirements at a later stage.

        UML Interaction Diagram


        Overview:


        From the name Interaction, it is clear that this diagram is used to describe some type of interaction among the different elements in the model. This interaction is part of the dynamic behaviour of the system.

        This interactive behaviour is represented in UML by two diagrams, known as the Sequence diagram and the Collaboration diagram. The basic purposes of both diagrams are similar.

        The sequence diagram emphasizes the time sequence of messages, and the collaboration diagram emphasizes the structural organization of the objects that send and receive messages.

        Purpose:

        The purpose of interaction diagrams is to visualize the interactive behaviour of a system. Visualizing interaction is a difficult task, so the solution is to use different types of models to capture the different aspects of the interaction.
        That is why sequence and collaboration diagrams are both used: they capture the dynamic nature of the system from different angles.
        So the purposes of interaction diagrams can be described as:
        • To capture dynamic behaviour of a system.
        • To describe the message flow in the system.
        • To describe structural organization of the objects.
        • To describe interaction among objects.

        How to draw Interaction Diagram?

        As already discussed, the purpose of interaction diagrams is to capture the dynamic aspect of a system. To do so, we need to understand what a dynamic aspect is and how it is visualized. A dynamic aspect can be defined as a snapshot of the running system at a particular moment.
        There are two types of interaction diagrams in UML: the sequence diagram and the collaboration diagram. The sequence diagram captures the time sequence of message flow from one object to another, and the collaboration diagram describes the organization of the objects taking part in the message flow.
        The following things must be clearly identified before drawing an interaction diagram:
        • Objects taking part in the interaction.
        • Message flows among the objects.
        • The sequence in which the messages are flowing.
        • Object organization.
        Following are two interaction diagrams modeling order management system. The first diagram is a sequence diagram and the second is a collaboration diagram.

        The Sequence Diagram:

        The sequence diagram has four objects (Customer, Order, SpecialOrder and NormalOrder).
        The diagram shows the message sequence for the SpecialOrder object; the same applies to the NormalOrder object. It is important to understand the time sequence of the message flows. A message flow is nothing but a method call on an object.
        The first call is sendOrder(), a method of the Order object. The next call is confirm(), a method of the SpecialOrder object, and the last call is dispatch(), also a method of the SpecialOrder object. So the diagram mainly describes the method calls from one object to another, and this is also the actual scenario when the system is running.
        UML Sequence Diagram
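        The message sequence above can be sketched in code. This is a minimal sketch: the class and method names follow the diagram, but the method bodies are hypothetical.

```python
class Order:
    # sendOrder() is the first message, received by the Order object
    def sendOrder(self):
        return "order sent"

class SpecialOrder(Order):
    # confirm() and dispatch() are the later messages, received by SpecialOrder
    def confirm(self):
        return "order confirmed"

    def dispatch(self):
        return "order dispatched"

order = SpecialOrder()
# The time ordering of the calls mirrors the vertical axis of the diagram:
trace = [order.sendOrder(), order.confirm(), order.dispatch()]
print(trace)  # ['order sent', 'order confirmed', 'order dispatched']
```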

        The Collaboration Diagram:

        The second interaction diagram is the collaboration diagram. It shows the object organization as shown below. In a collaboration diagram the method-call sequence is indicated by a numbering technique: the numbers indicate the order in which the methods are called. The same order management system is used to describe the collaboration diagram.
        The method calls are similar to those of the sequence diagram. The difference is that the sequence diagram does not describe the object organization, whereas the collaboration diagram shows it.
        To choose between the two diagrams, the main emphasis is placed on the type of requirement: if the time sequence is important, the sequence diagram is used; if the organization is important, the collaboration diagram is used.
        UML Collaboration Diagram
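        The numbering technique can be illustrated with a small sketch that records each message with its sequence number as it flows across a link. The link structure and message labels here are hypothetical, following the order management example.

```python
# Each tuple is (sequence number, sender, receiver, message), mirroring the
# numbered labels drawn on the links of a collaboration diagram.
links = [
    (1, "Customer", "Order", "sendOrder()"),
    (2, "Order", "SpecialOrder", "confirm()"),
    (3, "Order", "SpecialOrder", "dispatch()"),
]

# The structural organization is just the set of object pairs that share a link:
structure = {(sender, receiver) for _, sender, receiver, _ in links}
print(sorted(structure))  # [('Customer', 'Order'), ('Order', 'SpecialOrder')]
```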

        Where to use Interaction Diagrams?

        We have already discussed that interaction diagrams are used to describe the dynamic nature of a system. Now we will look into the practical scenarios where these diagrams are used. To understand the practical application we need to understand the basic nature of the sequence and collaboration diagrams.
        The main purposes of both diagrams are similar, as both capture the dynamic behaviour of a system, but their specific purposes are important to clarify and understand.
        Sequence diagrams are used to capture the order of messages flowing from one object to another, and collaboration diagrams are used to describe the structural organization of the objects taking part in the interaction. A single diagram is not sufficient to describe the dynamic aspect of an entire system, so a set of diagrams is used to capture it as a whole.
        Interaction diagrams are used when we want to understand the message flow and the structural organization. Message flow means the sequence of control flow from one object to another, and structural organization means the visual organization of the elements in a system.
        In brief, the following are the usages of interaction diagrams:
        • To model flow of control by time sequence.
        • To model flow of control by structural organizations.
        • For forward engineering.
        • For reverse engineering.

(3) UML Diagrams:

UML diagrams are the ultimate output of the entire discussion. All the elements and relationships are used to make a complete UML diagram, and the diagram represents a system.
The visual effect of the UML diagram is the most important part of the entire process. All the other elements are used to make it a complete one.
UML includes the following nine diagrams, and the details are described in the following chapters.
  1. Class diagram
  2. Object diagram
  3. Use case diagram
  4. Sequence diagram
  5. Collaboration diagram
  6. Activity diagram
  7. Statechart diagram
  8. Deployment diagram
  9. Component diagram
We will discuss all these diagrams in subsequent chapters of this tutorial.

Goals of UML:

A picture is worth a thousand words, and this absolutely fits the discussion of UML. Object oriented concepts were introduced much earlier than UML, and at that time there were no standard methodologies to organize and consolidate object oriented development. At that point UML came into the picture.
There are a number of goals for developing UML, but the most important is to define a general purpose modeling language which all modelers can use, and which is simple to understand and use.
UML diagrams are made not only for developers but also for business users, common people and anybody interested in understanding the system. The system can be software or non-software. So it must be clear that UML is not a development method; rather, it accompanies processes to make a successful system.
In conclusion, the goal of UML can be defined as a simple modeling mechanism to model all possible practical systems in today's complex environment.

A conceptual model of UML:

To understand the conceptual model of UML, we first need to clarify: What is a conceptual model? And why is a conceptual model required at all?
  • A conceptual model can be defined as a model which is made of concepts and their relationships.
  • A conceptual model is the first step before drawing a UML diagram. It helps to understand the entities in the real world and how they interact with each other.
As UML describes real-time systems, it is very important to make a conceptual model first and then proceed gradually. The conceptual model of UML can be mastered by learning the following three major elements:
  • UML building blocks
  • Rules to connect the building blocks
  • Common mechanisms of UML

Object oriented concepts:

UML can be described as the successor of object oriented analysis and design.
An object contains both data and the methods that control that data. The data represents the state of the object. A class describes an object, and classes also form hierarchies to model the real-world system. The hierarchy is represented as inheritance, and classes can also be associated in different manners as per the requirement.
Objects are the real-world entities that exist around us, and the basic concepts such as abstraction, encapsulation, inheritance and polymorphism can all be represented using UML.
So UML is powerful enough to represent all the concepts that exist in object oriented analysis and design. UML diagrams are representations of object oriented concepts only. So before learning UML, it is important to understand OO concepts in detail.
Following are some fundamental concepts of object oriented world:
  • Objects: Objects represent an entity and are the basic building block.
  • Class: A class is the blueprint of an object.
  • Abstraction: Abstraction represents the behavior of a real-world entity.
  • Encapsulation: Encapsulation is the mechanism of binding the data together and hiding it from the outside world.
  • Inheritance: Inheritance is the mechanism of making new classes from existing ones.
  • Polymorphism: Polymorphism is the mechanism by which an entity can exist in different forms.
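The fundamental concepts above can be illustrated with a short sketch. The class names and behaviours here are hypothetical, chosen only to demonstrate each concept:

```python
class Shape:
    """Abstraction: only the relevant behaviour of a real-world entity."""
    def __init__(self, name):
        self._name = name          # encapsulation: state kept inside the object

    def area(self):
        raise NotImplementedError

class Square(Shape):               # inheritance: a new class from an existing one
    def __init__(self, side):
        super().__init__("square")
        self._side = side

    def area(self):                # polymorphism: same message, its own form
        return self._side * self._side

class Circle(Shape):
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius

    def area(self):
        return 3.14159 * self._radius ** 2

shapes = [Square(2), Circle(1)]    # objects: instances of the classes
print([round(s.area(), 2) for s in shapes])  # [4, 3.14]
```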

OO Analysis and Design

Object oriented analysis can be defined as investigation, and to be more specific it is the investigation of objects. Design means the collaboration of the identified objects.
So it is important to understand the OO analysis and design concepts. The most important purpose of OO analysis is to identify the objects of the system to be designed; this analysis is also done for an existing system. An efficient analysis is only possible when we are able to start thinking in a way where objects can be identified. After identifying the objects, their relationships are identified, and finally the design is produced.
So the purpose of OO analysis and design can be described as:
  • Identifying the objects of a system.
  • Identify their relationships.
  • Make a design which can be converted to executables using OO languages.
There are three basic steps where the OO concepts are applied and implemented. The steps can be defined as
OO Analysis --> OO Design --> OO implementation using OO languages
Now the above three points can be described in detail:
  • During object oriented analysis the most important purpose is to identify objects and describe them in a proper way. If these objects are identified efficiently, the subsequent design work is easy. The objects should be identified with responsibilities. Responsibilities are the functions performed by the object. Each and every object has some responsibilities to perform, and when these responsibilities collaborate, the purpose of the system is fulfilled.
  • The second phase is object oriented design. During this phase emphasis is given to the requirements and their fulfilment. In this stage the objects are collaborated according to their intended associations. After the association is complete, the design is also complete.
  • The third phase is object oriented implementation. In this phase the design is implemented using object oriented languages like Java, C++ etc.
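The idea of identifying objects together with their responsibilities and collaborations can be sketched as follows. The objects and responsibilities are hypothetical, in the spirit of a CRC-card exercise:

```python
# A CRC-card-style record produced during OO analysis: each identified object
# with its responsibilities and the objects it collaborates with.
crc_cards = {
    "Customer": {"responsibilities": ["place order"], "collaborators": ["Order"]},
    "Order": {"responsibilities": ["confirm", "dispatch"], "collaborators": []},
}

# Design then wires the collaborations: every collaborator must itself be an
# identified object, otherwise the analysis is incomplete.
for card in crc_cards.values():
    for collaborator in card["collaborators"]:
        assert collaborator in crc_cards
print("analysis consistent")
```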

Role of UML in OO design:

UML is a modeling language used to model software and non-software systems. Although UML is used for non-software systems, the emphasis is on modeling object oriented software applications. Most of the UML diagrams discussed so far are used to model different aspects such as static and dynamic. Whatever the aspect, the artifacts are nothing but objects.
If we look into the class diagram, object diagram, collaboration diagram and interaction diagrams, all are basically designed based on objects.
So the relation between OO design and UML is very important to understand. The OO design is transformed into UML diagrams according to the requirement. Before understanding UML in detail, the OO concepts should be learned properly. Once the OO analysis and design is done, the next step is very easy: the output of OO analysis and design is the input to the UML diagrams.

UML Overview:

UML is a general purpose modeling language. It was initially developed to capture the behavior of complex software and non-software systems, and it has now become an OMG standard.
UML provides elements and components to support the requirements of complex systems. UML follows object oriented concepts and methodology, so object oriented systems are generally modeled using this pictorial language.
UML diagrams are drawn from different perspectives such as design, implementation and deployment.
In conclusion, UML can be defined as a modeling language to capture the architectural, behavioral and structural aspects of a system.
Objects are the key to this object oriented world. The basic requirement of object oriented analysis and design is to identify objects efficiently. After that, responsibilities are assigned to the objects. Once this task is complete, the design is done using the input from the analysis.
UML has an important role to play in this OO analysis and design: UML diagrams are used to model the design.

UML notations:

UML notations are the most important elements in modeling. Efficient and appropriate use of notations is very important for making a complete and meaningful model. The model is useless unless its purpose is depicted properly.
So learning notations should be emphasized from the very beginning. Different notations are available for things and relationships. And the UML diagrams are made using the notations of things and relationships. Extensibility is another important feature which makes UML more powerful and flexible.

UML Diagrams:

Diagrams are the heart of UML. These diagrams are broadly categorized as structural and behavioral diagrams.
  • Structural diagrams consist of static diagrams such as the class diagram, object diagram etc.
  • Behavioral diagrams consist of dynamic diagrams such as the sequence diagram, collaboration diagram etc.
The static and dynamic nature of a system is visualized by using these diagrams.

Class diagrams:

Class diagrams are the most popular UML diagrams used by the object oriented community. They describe the objects in a system and their relationships. A class in a class diagram consists of attributes and functions.
A single class diagram describes a specific aspect of the system, and the collection of class diagrams represents the whole system. Basically, the class diagram represents the static view of a system.
Class diagrams are the only UML diagrams which can be mapped directly to object oriented languages, so they are widely used by the developer community.
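This direct mapping can be illustrated: a class box with attributes and operations translates almost mechanically into code. The Account class, its attributes and its operations below are hypothetical:

```python
# Class diagram box:             Code:
# +----------------------+
# | Account              |
# +----------------------+
# | - owner: str         |       attributes -> instance fields
# | - balance: float     |
# +----------------------+
# | + deposit(amount)    |       operations -> methods
# | + get_balance()      |
# +----------------------+
class Account:
    def __init__(self, owner):
        self._owner = owner      # '-' in the diagram means private
        self._balance = 0.0

    def deposit(self, amount):   # '+' in the diagram means public
        self._balance += amount

    def get_balance(self):
        return self._balance

acct = Account("Alice")
acct.deposit(100.0)
print(acct.get_balance())  # 100.0
```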

Object Diagram:

An object diagram is an instance of a class diagram, so its basic elements are similar to those of a class diagram. Object diagrams consist of objects and links. An object diagram captures an instance of the system at a particular moment.
Object diagrams are used for prototyping, reverse engineering and modeling practical scenarios.
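The relationship between the two diagrams can be sketched: the class diagram defines the types, while the object diagram is one snapshot of instances and links at a particular moment. The Department/Employee example here is hypothetical:

```python
class Department:
    def __init__(self, name):
        self.name = name
        self.members = []        # link: Department --- Employee

class Employee:
    def __init__(self, name):
        self.name = name

# Object-diagram snapshot at one particular moment:
#   sales: Department --- alice: Employee
sales = Department("Sales")
alice = Employee("Alice")
sales.members.append(alice)
print([m.name for m in sales.members])  # ['Alice']
```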

Component Diagram:

Component diagrams are a special kind of UML diagram used to describe the static implementation view of a system. Component diagrams consist of physical components such as libraries, files, folders etc.
This diagram is used from the implementation perspective. More than one component diagram is used to represent the entire system. Forward and reverse engineering techniques are used to make executables from component diagrams.

Deployment Diagram:

Deployment diagrams are used to describe the static deployment view of a system. These diagrams are mainly used by system engineers.
Deployment diagrams consist of nodes and their relationships. An efficient deployment diagram is an integral part of software application development.

Use Case Diagram:

The use case diagram is used to capture the dynamic nature of a system. It consists of use cases, actors and their relationships. The use case diagram is used at a high level of design to capture the requirements of a system.
So it represents the system functionalities and their flow. Although use case diagrams are not a good candidate for forward and reverse engineering, they are still used in a slightly different way to model a system.

Interaction Diagram:

Interaction diagrams are used for capturing the dynamic nature of a system. Sequence and collaboration diagrams are the interaction diagrams used for this purpose.
Sequence diagrams are used to capture the time ordering of message flow, and collaboration diagrams are used to understand the structural organization of the system. Generally, a set of sequence and collaboration diagrams is used to model an entire system.

State-chart Diagram:

Statechart diagrams are one of the five diagrams used for modeling the dynamic nature of a system. They are used to model the entire life cycle of an object. The activity diagram is a special kind of statechart diagram.
A state of an object is defined as the condition in which the object resides for a particular time, and the object moves to another state when some event occurs. State-chart diagrams are also used for forward and reverse engineering.
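An object's life cycle as a statechart can be sketched with a small state machine. The order states and events below are hypothetical:

```python
# States and transitions of a hypothetical Order object:
#   Placed --confirm--> Confirmed --dispatch--> Dispatched
transitions = {
    ("Placed", "confirm"): "Confirmed",
    ("Confirmed", "dispatch"): "Dispatched",
}

def fire(state, event):
    # An event only moves the object if a transition is defined for it;
    # otherwise the object stays in its current state.
    return transitions.get((state, event), state)

state = "Placed"                 # initial state
for event in ["dispatch", "confirm", "dispatch"]:
    state = fire(state, event)
print(state)  # Dispatched
```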

Activity Diagram:

The activity diagram is another important diagram used to describe dynamic behavior. It consists of activities, links, relationships etc. It models all types of flows such as parallel, single and concurrent.
The activity diagram describes the flow of control from one activity to another without any messages. These diagrams are used to model a high level view of business requirements.

Overview:

UML 2.0 is a totally different dimension in the world of the Unified Modeling Language. It is more complex and extensive in nature.
The extent of the documentation has also increased compared to the UML 1.5 version. UML 2.0 also added new features so that its usage can be more extensive.
UML 2.0 adds the definition of formal and completely defined semantics. This new possibility can be utilized for the development of models, and the corresponding systems can be generated from these models. But to utilize this new dimension, a considerable effort has to be made to acquire the knowledge.

New dimensions in UML2.0:

The structure and documentation of UML was completely revised in the latest version of UML2.0. There are now two documents available that describe UML:
  • UML 2.0 Infrastructure defines the basic constructs of the language on which UML is based. This section is not directly relevant to the users of UML. This is directed more towards the developers of modeling tools. So this area is not in the scope of this tutorial.
  • UML 2.0 Superstructure defines the user constructs of UML 2.0. It means those elements of UML that users will use at the immediate level. So this is the main focus for the user community of UML.
This revision of UML was created to fulfil a goal to restructure and refine UML so that usability, implementation, and adaptation are simplified.
The UML infrastructure is used to:
  • Provide a reusable meta-language core. This is used to define UML itself.
  • Provide mechanisms to adjust the language.
The UML superstructure is used to:
  • Provide better support for component-based development.
  • Improve constructs for the specification of architecture.
  • Provide better options for the modeling of behaviour.
So the important point to note is the major division described above. These divisions are used to increase the usability of UML and define a clear understanding of its usage.
There is another dimension which has already been proposed in this new version: a proposal for a completely new Object Constraint Language (OCL) and Diagram Interchange. These features together form the complete UML 2.0 package.
It is very important to distinguish between the types of UML models. Different diagrams are used for different types of UML modeling. There are three important types of UML modeling:

Structural modeling:

Structural modeling captures the static features of a system. They consist of the followings:
  • Class diagrams
  • Object diagrams
  • Deployment diagrams
  • Package diagrams
  • Composite structure diagram
  • Component diagram
The structural model represents the framework for the system, and this framework is the place where all other components exist. So the class diagram, component diagram and deployment diagram are part of structural modeling. They all represent the elements and the mechanism to assemble them.
But the structural model never describes the dynamic behavior of the system. The class diagram is the most widely used structural diagram.

Behavioral Modeling:

The behavioral model describes the interactions in the system. It represents the interaction among the structural diagrams. Behavioral modeling shows the dynamic nature of the system. It consists of the following:
  • Activity diagrams
  • Interaction diagrams
  • Use case diagrams
All the above show the dynamic sequence of flow in a system.

Architectural Modeling:

The architectural model represents the overall framework of the system. It contains both structural and behavioral elements of the system. The architectural model can be defined as the blueprint of the entire system. The package diagram comes under architectural modeling.

Modeling diagrams in UML2.0:

Modeling Interactions:

The interaction diagrams described in UML 2.0 are different from those in the earlier versions, but the basic concept remains the same. The major difference is the enhancements and additional features added to the diagrams in UML 2.0.
UML 2.0 models object interaction in the following four different ways.
  • Sequence diagram is a time dependent view of the interaction between objects to accomplish a behavioral goal of the system. The time sequence is similar to the earlier version of sequence diagram. An interaction may be designed at any level of abstraction within the system design, from subsystem interactions to instance-level.
  • Communication diagram is a new name added in UML2.0. A Communication diagram is a structural view of the messaging between objects, taken from the Collaboration diagram concept of UML 1.4 and earlier versions. This can be defined as a modified version of collaboration diagram.
  • Interaction Overview diagram is also a new addition in UML2.0. An Interaction Overview diagram describes a high-level view of a group of interactions combined into a logic sequence, including flow-control logic to navigate between the interactions.
  • Timing diagram is also added in UML2.0. It is an optional diagram designed to specify the time constraints on messages sent and received in the course of an interaction.
From the above description, it is important to note that the purpose of all the diagrams is to model the sending and receiving of messages. The handling of these messages is internal to the objects, so the objects also need a way to receive and send messages, and here comes another important concept: the interface. Interfaces are responsible for accepting and sending messages to one another.
So it can be concluded that the interactions in UML 2.0 are described in a different way, and that is why the new diagram names have come into the picture. But if we analyze the new diagrams, it is clear that they are all created based upon the interaction diagrams described in the earlier versions. The only difference is the additional features added in UML 2.0 to make the diagrams more efficient and purpose oriented.

Modeling Collaborations:

As we have already discussed, collaboration is used to model common interactions between objects. To clarify, we can say that a collaboration is an interaction where a set of messages is handled by a set of objects having predefined roles.
The important point to note is the difference between the collaboration diagram of earlier versions and that of UML 2.0. To make this distinction, the diagram has been renamed in UML 2.0: it is now called the Communication diagram.
Consequently, a collaboration is defined as a class with attributes (properties) and behavior (operations). Compartments on the collaboration class can also be user defined and may be used for interactions (Sequence diagrams) and structural elements (Composite Structure diagrams).
The figure below models the Observer design pattern as a collaboration between an object in the role of an observable item and any number of objects as the observers.
Collaboration diagram
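The Observer collaboration named above can be sketched in code. This is a minimal version with hypothetical class names, where one observable item notifies any number of observers of a state change:

```python
class ObservableItem:
    def __init__(self):
        self._observers = []     # any number of objects in the observer role

    def attach(self, observer):
        self._observers.append(observer)

    def change(self, value):
        # Notify every attached observer of the state change
        for observer in self._observers:
            observer.update(value)

class Observer:
    def __init__(self):
        self.seen = None

    def update(self, value):     # the role's predefined operation
        self.seen = value

item = ObservableItem()
a, b = Observer(), Observer()
item.attach(a)
item.attach(b)
item.change(42)
print(a.seen, b.seen)  # 42 42
```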

Modeling Communication:

The Communication diagram is slightly different from the collaboration diagrams of the earlier versions; we can say it is a scaled-back version. The distinguishing factor of the communication diagram is the link between objects.
This is a visual link, and it is missing in the sequence diagram, where only the messages passed between objects are shown even if there is no link between them.
The Communication diagram prevents the modeler from making this mistake by using an Object diagram format as the basis for messaging. Each object on a Communication diagram is called an object lifeline.
The message types in a Communication diagram are the same as in a Sequence diagram: a Communication diagram may model synchronous, asynchronous, return, lost, found, and object-creation messages.
The figure below shows an Object diagram with three objects and two links that form the basis for the Communication diagram.
Communication diagram

Modeling an Interaction Overview:

In practical usage, a sequence diagram models a single scenario, so a number of sequence diagrams are needed to cover the entire application. While modeling a single scenario it is possible to lose sight of the total process, and this can introduce errors.
To solve this issue, the new Interaction Overview diagram combines the flow of control from an activity diagram with the messaging specification from the sequence diagram.
The activity diagram uses activities and object flows to describe a process. The Interaction Overview diagram uses interactions and interaction occurrences. The lifelines and messages found in Sequence diagrams appear only within the interactions or interaction occurrences. However, the lifelines (objects) that participate in the Interaction Overview diagram may be listed along with the diagram name.
The figure below shows an Interaction Overview diagram with decision diamonds, frames and termination points.
Interaction diagram

Modeling a Timing Diagram:

The name of this diagram itself describes its purpose: it deals with the timing of events over an object's entire lifecycle.
So a timing diagram can be defined as a special-purpose interaction diagram focused on the events of an object over its lifetime. It is basically a mixture of the state machine and the interaction diagram. The timing diagram uses the following time lines:
  • State time line
  • General value time line
A lifeline in a Timing diagram forms a rectangular space within the content area of a frame. It is typically aligned horizontally to read from left to right. Multiple lifelines may be stacked within the same frame to model the interaction between them.
Timing diagram

Summary:

UML 2.0 is an enhanced version in which new features have been added to make it more usable and efficient. There are two major categories in UML 2.0: one is the UML superstructure and the other is the UML infrastructure. Although the new diagrams are based on the old concepts, they have additional features.
UML 2.0 offers four interaction diagrams: the Sequence diagram, Communication diagram, Interaction Overview diagram, and an optional Timing diagram. All four diagrams utilize the frame notation to enclose an interaction. The use of frames supports the reuse of interactions as interaction occurrences.

 

Units 5 6 7 8: Click the link below (Password Encrypted)

User Interface design, Testing, RMMM 

Pressman Presentations (Software Engineering)

  • Chapter 1 - Software and Software Engineering
  • Chapter 2 - Software Process (including SEI TR-24 excerpts)
  • Chapter 3 - Prescriptive Process Models
  • Chapter 5 - Software Engineering Practice
  • Chapter 6 - System Engineering
  • Chapter 7 - Requirements Engineering
  • Chapter 8 - Analysis Modeling
  • Chapter 9 - Design Engineering
  • Chapter 10 - Architectural Design
  • Chapter 11 - Component-Level Design
  • Chapter 12 - User Interface Analysis and Design
  • Chapter 13 - Software Testing Strategies
  • Chapter 14 - Software Testing Techniques
  • Chapter 15 - Software Product Metrics
  • Chapter 21 - Project Management Concepts (Updated with slides on group dynamics)
  • Chapter 22 - Process and Project Metrics
  • Chapter 23 - Estimation for Software Projects
  • Chapter 24 - Software Project Scheduling
  • Chapter 25 - Risk Management
  • Chapter 26 - Quality Management
  • Chapter 27 - Change Management

Resources: Click on Resources to download SafeHome (Password)

  • SafeHome dialog excerpts from SEPA
  • SEPA pages regarding SafeHome
  • Overall SafeHome deployment diagram
  • SafeHome preliminary screen layout (see pg 376 of SEPA 6/e for more detail)
  • SafeHome control panel (see pg 193 and 231 of SEPA 6/e for more detail)
      • Note that we use "on", "off", "reset", "away", "stay", "code" (new password assignment), and "panic" functions in the control panel


Bachelor's Degree in Software Engineering


Software Engineering I from PAKISTAN UNIV 
  1. Software Engineering I - CS504 Power Point Slides Lecture 01.ppt
  2. Software Engineering I - CS504 Power Point Slides Lecture 02.ppt
  3. Software Engineering I - CS504 Power Point Slides Lecture 03.ppt
  4. Software Engineering I - CS504 Power Point Slides Lecture 04.ppt
  5. Software Engineering I - CS504 Power Point Slides Lecture 05.ppt
  6. Software Engineering I - CS504 Power Point Slides Lecture 06.ppt
  7. Software Engineering I - CS504 Power Point Slides Lecture 07.ppt
  8. Software Engineering I - CS504 Power Point Slides Lecture 08.ppt
  9. Software Engineering I - CS504 Power Point Slides Lecture 09.ppt
  10. Software Engineering I - CS504 Power Point Slides Lecture 10.ppt
  11. Software Engineering I - CS504 Power Point Slides Lecture 11.ppt
  12. Software Engineering I - CS504 Power Point Slides Lecture 12.ppt
  13. Software Engineering I - CS504 Power Point Slides Lecture 13.ppt
  14. Software Engineering I - CS504 Power Point Slides Lecture 14.ppt
  15. Software Engineering I - CS504 Power Point Slides Lecture 15.ppt
  16. Software Engineering I - CS504 Power Point Slides Lecture 16.ppt
  17. Software Engineering I - CS504 Power Point Slides Lecture 17.ppt
  18. Software Engineering I - CS504 Power Point Slides Lecture 18.ppt
  19. Software Engineering I - CS504 Power Point Slides Lecture 19.ppt
  20. Software Engineering I - CS504 Power Point Slides Lecture 20.ppt
  21. Software Engineering I - CS504 Power Point Slides Lecture 21.ppt
  22. Software Engineering I - CS504 Power Point Slides Lecture 22.ppt
  23. Software Engineering I - CS504 Power Point Slides Lecture 23.ppt
  24. Software Engineering I - CS504 Power Point Slides Lecture 24.ppt
  25. Software Engineering I - CS504 Power Point Slides Lecture 25.ppt
  26. Software Engineering I - CS504 Power Point Slides Lecture 26.ppt
  27. Software Engineering I - CS504 Power Point Slides Lecture 27.ppt
  28. Software Engineering I - CS504 Power Point Slides Lecture 28.ppt
  29. Software Engineering I - CS504 Power Point Slides Lecture 29.ppt
  30. Software Engineering I - CS504 Power Point Slides Lecture 30.ppt
  31. Software Engineering I - CS504 Power Point Slides Lecture 31.ppt
  32. Software Engineering I - CS504 Power Point Slides Lecture 32.ppt
  33. Software Engineering I - CS504 Power Point Slides Lecture 33.ppt
  34. Software Engineering I - CS504 Power Point Slides Lecture 34.ppt
  35. Software Engineering I - CS504 Power Point Slides Lecture 35.ppt
  36. Software Engineering I - CS504 Power Point Slides Lecture 36.ppt
  37. Software Engineering I - CS504 Power Point Slides Lecture 37.ppt
  38. Software Engineering I - CS504 Power Point Slides Lecture 38.ppt
  39. Software Engineering I - CS504 Power Point Slides Lecture 39.ppt
  40. Software Engineering I - CS504 Power Point Slides Lecture 40.ppt
  41. Software Engineering I - CS504 Power Point Slides Lecture 41.ppt
  42. Software Engineering I - CS504 Power Point Slides Lecture 42.ppt
  43. Software Engineering I - CS504 Power Point Slides Lecture 43.ppt
  44. Software Engineering I - CS504 Power Point Slides Lecture 44.ppt
  45. Software Engineering I - CS504 Power Point Slides Lecture 45.ppt

Software Requirements Engineering from PAK UNIV
  1. Software Requirement Engineering - CS708 Power Point Slides lecture-01.ppt
  2. Software Requirement Engineering - CS708 Power Point Slides lecture-02.ppt
  3. Software Requirement Engineering - CS708 Power Point Slides lecture-03.ppt
  4. Software Requirement Engineering - CS708 Power Point Slides lecture-04.ppt
  5. Software Requirement Engineering - CS708 Power Point Slides lecture-05.ppt
  6. Software Requirement Engineering - CS708 Power Point Slides lecture-06.ppt
  7. Software Requirement Engineering - CS708 Power Point Slides lecture-07.ppt
  8. Software Requirement Engineering - CS708 Power Point Slides lecture-08.ppt
  9. Software Requirement Engineering - CS708 Power Point Slides lecture-09.ppt
  10. Software Requirement Engineering - CS708 Power Point Slides lecture-10.ppt
  11. Software Requirement Engineering - CS708 Power Point Slides lecture-11.ppt
  12. Software Requirement Engineering - CS708 Power Point Slides lecture-12.ppt
  13. Software Requirement Engineering - CS708 Power Point Slides lecture-13.ppt
  14. Software Requirement Engineering - CS708 Power Point Slides lecture-14.ppt
  15. Software Requirement Engineering - CS708 Power Point Slides lecture-15.ppt
  16. Software Requirement Engineering - CS708 Power Point Slides lecture-16.ppt
  17. Software Requirement Engineering - CS708 Power Point Slides lecture-17.ppt
  18. Software Requirement Engineering - CS708 Power Point Slides lecture-18.ppt
  19. Software Requirement Engineering - CS708 Power Point Slides lecture-19.ppt
  20. Software Requirement Engineering - CS708 Power Point Slides lecture-20.ppt
  21. Software Requirement Engineering - CS708 Power Point Slides lecture-21.ppt
  22. Software Requirement Engineering - CS708 Power Point Slides lecture-22.ppt
  23. Software Requirement Engineering - CS708 Power Point Slides lecture-23.ppt
  24. Software Requirement Engineering - CS708 Power Point Slides lecture-24.ppt
  25. Software Requirement Engineering - CS708 Power Point Slides lecture-25.ppt
  26. Software Requirement Engineering - CS708 Power Point Slides lecture-26.ppt
  27. Software Requirement Engineering - CS708 Power Point Slides lecture-27.ppt
  28. Software Requirement Engineering - CS708 Power Point Slides lecture-28.ppt
  29. Software Requirement Engineering - CS708 Power Point Slides lecture-29.ppt
  30. Software Requirement Engineering - CS708 Power Point Slides lecture-30.ppt
  31. Software Requirement Engineering - CS708 Power Point Slides lecture-31.ppt
  32. Software Requirement Engineering - CS708 Power Point Slides lecture-32.ppt
  33. Software Requirement Engineering - CS708 Power Point Slides lecture-33.ppt
  34. Software Requirement Engineering - CS708 Power Point Slides lecture-34.ppt
  35. Software Requirement Engineering - CS708 Power Point Slides lecture-35.ppt
  36. Software Requirement Engineering - CS708 Power Point Slides lecture-36.ppt
  37. Software Requirement Engineering - CS708 Power Point Slides lecture-37.ppt
  38. Software Requirement Engineering - CS708 Power Point Slides lecture-38.ppt
  39. Software Requirement Engineering - CS708 Power Point Slides lecture-39.ppt
  40. Software Requirement Engineering - CS708 Power Point Slides lecture-40.ppt
  41. Software Requirement Engineering - CS708 Power Point Slides lecture-41.ppt
  42. Software Requirement Engineering - CS708 Power Point Slides lecture-42.ppt
  43. Software Requirement Engineering - CS708 Power Point Slides lecture-43.ppt
  44. Software Requirement Engineering - CS708 Power Point Slides lecture-44.ppt
  45. Software Requirement Engineering - CS708 Power Point Slides lecture-45.ppt
Software Engineering II PowerPoint Slides from Pakistan University
  1. Software Engineering II - CS605 Power Point Slides Lecture 01.ppt
  2. Software Engineering II - CS605 Power Point Slides Lecture 02.ppt
  3. Software Engineering II - CS605 Power Point Slides Lecture 03.ppt
  4. Software Engineering II - CS605 Power Point Slides Lecture 04.ppt
  5. Software Engineering II - CS605 Power Point Slides Lecture 05.ppt
  6. Software Engineering II - CS605 Power Point Slides Lecture 06.ppt
  7. Software Engineering II - CS605 Power Point Slides Lecture 07.ppt
  8. Software Engineering II - CS605 Power Point Slides Lecture 08.ppt
  9. Software Engineering II - CS605 Power Point Slides Lecture 09.ppt
  10. Software Engineering II - CS605 Power Point Slides Lecture 10.ppt
  11. Software Engineering II - CS605 Power Point Slides Lecture 11.ppt
  12. Software Engineering II - CS605 Power Point Slides Lecture 12.ppt
  13. Software Engineering II - CS605 Power Point Slides Lecture 14.ppt
  14. Software Engineering II - CS605 Power Point Slides Lecture 15.ppt
  15. Software Engineering II - CS605 Power Point Slides Lecture 16.ppt
  16. Software Engineering II - CS605 Power Point Slides Lecture 17.ppt
  17. Software Engineering II - CS605 Power Point Slides Lecture 18.ppt
  18. Software Engineering II - CS605 Power Point Slides Lecture 19.ppt
  19. Software Engineering II - CS605 Power Point Slides Lecture 20.ppt
  20. Software Engineering II - CS605 Power Point Slides Lecture 21.ppt
  21. Software Engineering II - CS605 Power Point Slides Lecture 22.ppt
  22. Software Engineering II - CS605 Power Point Slides Lecture 23.ppt
  23. Software Engineering II - CS605 Power Point Slides Lecture 24.ppt
  24. Software Engineering II - CS605 Power Point Slides Lecture 25.ppt
  25. Software Engineering II - CS605 Power Point Slides Lecture 26.ppt
  26. Software Engineering II - CS605 Power Point Slides Lecture 27.ppt
  27. Software Engineering II - CS605 Power Point Slides Lecture 28.ppt
  28. Software Engineering II - CS605 Power Point Slides Lecture 29.ppt
  29. Software Engineering II - CS605 Power Point Slides Lecture 30.ppt
  30. Software Engineering II - CS605 Power Point Slides Lecture 31.ppt
  31. Software Engineering II - CS605 Power Point Slides Lecture 32.ppt
  32. Software Engineering II - CS605 Power Point Slides Lecture 33.ppt
  33. Software Engineering II - CS605 Power Point Slides Lecture 34.ppt
  34. Software Engineering II - CS605 Power Point Slides Lecture 35.ppt
  35. Software Engineering II - CS605 Power Point Slides Lecture 36.ppt
  36. Software Engineering II - CS605 Power Point Slides Lecture 37.ppt
  37. Software Engineering II - CS605 Power Point Slides Lecture 38.ppt
  38. Software Engineering II - CS605 Power Point Slides Lecture 39.ppt
  39. Software Engineering II - CS605 Power Point Slides Lecture 40.ppt
  40. Software Engineering II - CS605 Power Point Slides Lecture 41.ppt
  41. Software Engineering II - CS605 Power Point Slides Lecture 42.ppt
  42. Software Engineering II - CS605 Power Point Slides Lecture 43.ppt
  43. Software Engineering II - CS605 Power Point Slides Lecture 44.ppt
  44. Software Engineering II - CS605 Power Point Slides Lecture 45.ppt


Operating Systems Lecture Notes
  1. Introduction
  2. History of Operating Systems
  3. Operating Systems Structure
    • System Component
    • Operating System Services
    • System Calls and System Programs
    • Layered Approach System Design
    • Mechanism and Policy
  4. Process
    • Definition of Process  
    • Process State
    • Process Operations
    • Process Control Block
  5. Threads
  6. Solaris-2 Operating Systems
  7. CPU/Process Scheduling
  8. Schedule Algorithm
    • FCFS Scheduling
    • Round Robin Scheduling
    • SJF Scheduling
    • SRT Scheduling
    • Priority Scheduling
    • Multilevel Queue Scheduling
    • Multilevel Feedback Queue Scheduling
  9. Interprocess Communication
    • Critical Section
    • Mutual Exclusion
    • Achieving Mutual Exclusion
    • Semaphores
  10. Deadlock
    • Necessary and Sufficient Deadlock Conditions
    • Dealing with Deadlock Problem
      • Deadlock Prevention
      • Deadlock Avoidance
      • Deadlock Detection
  11. Absolutely Important UNIX Commands
  12. References

 Introduction




What is an Operating System?

The 1960's definition of an operating system is "the software that controls the hardware". However, today, due to microcode, we need a better definition. We see an operating system as the programs that make the hardware usable. In brief, an operating system is the set of programs that controls a computer. Some examples of operating systems are UNIX, Mach, MS-DOS, MS-Windows, Windows/NT, Chicago, OS/2, MacOS, VMS, MVS, and VM.
Controlling the computer involves software at several levels. We will differentiate kernel services, library services, and application-level services, all of which are part of the operating system. Processes run applications, which are linked together with libraries that perform standard services. The core of the operating system is the kernel, a control program that functions in privileged state (an execution context that allows all hardware instructions to be executed). The kernel supports the processes by providing a path to the peripheral devices, reacting to interrupts from external devices and to service requests and traps from processes. Generally, the kernel is a permanent resident of the computer; it creates and terminates processes and responds to their requests for service.
Operating Systems are resource managers. The main resource is computer hardware in the form of processors, storage, input/output devices, communication devices, and data. Some of the operating system functions are: implementing the user interface, sharing hardware among users, allowing users to share data among themselves, preventing users from interfering with one another, scheduling resources among users, facilitating input/output, recovering from errors, accounting for resource usage, facilitating parallel operations, organizing data for secure and rapid access, and handling network communications.

Objectives of Operating Systems

Modern operating systems generally have the following three major goals. Operating systems generally accomplish these goals by running processes in low privilege and providing service calls that invoke the operating system kernel in high-privilege state.
  • To hide details of hardware by creating abstraction. An abstraction is software that hides lower-level details and provides a set of higher-level functions. An operating system transforms the physical world of devices, instructions, memory, and time into a virtual world that is the result of abstractions built by the operating system. There are several reasons for abstraction. First, the code needed to control peripheral devices is not standardized; operating systems provide subroutines called device drivers that perform operations on behalf of programs, for example input/output operations. Second, the operating system introduces new functions as it abstracts the hardware; for instance, it introduces the file abstraction so that programs do not have to deal with disks. Third, the operating system transforms the computer hardware into multiple virtual computers, each belonging to a different program; each program that is running is called a process, and each process views the hardware through the lens of abstraction. Fourth, the operating system can enforce security through abstraction.
  • To allocate resources to processes (manage resources). An operating system controls how processes (the active agents) may access resources (passive entities).
  • To provide a pleasant and effective user interface. The user interacts with the operating system through the user interface and is usually interested in the "look and feel" of the operating system. The most important components of the user interface are the command interpreter, the file system, on-line help, and application integration. The recent trend has been toward increasingly integrated graphical user interfaces that encompass the activities of multiple processes on networks of computers.
One can view an operating system from two points of view: as a resource manager and as an extended machine. From the resource manager point of view, operating systems manage the different parts of the system efficiently; from the extended machine point of view, operating systems provide users with a virtual machine that is more convenient to use. Structurally, operating systems can be designed as a monolithic system, a hierarchy of layers, a virtual machine system, an exokernel, or using the client-server model. The basic concepts of operating systems are processes, memory management, I/O management, the file system, and security.

History of Operating Systems




Historically, operating systems have been tightly related to computer architecture, so it is a good idea to study the history of operating systems in terms of the architecture of the computers on which they run.
Operating systems have evolved through a number of distinct phases or generations, which correspond roughly to the decades.

The 1940's - First Generations

The earliest electronic digital computers had no operating systems. Machines of the time were so primitive that programs were often entered one bit at a time on rows of mechanical switches (plug boards). Programming languages were unknown (not even assembly languages). Operating systems were unheard of.

The 1950's - Second Generation

By the early 1950's, the routine had improved somewhat with the introduction of punch cards. The General Motors Research Laboratories implemented the first operating system in the early 1950's for their IBM 701. The systems of the 50's generally ran one job at a time. These were called single-stream batch processing systems because programs and data were submitted in groups or batches.

The 1960's - Third Generation

The systems of the 1960's were also batch processing systems, but they were able to take better advantage of the computer's resources by running several jobs at once. So operating system designers developed the concept of multiprogramming, in which several jobs are in main memory at once; a processor is switched from job to job as needed to keep several jobs advancing while keeping the peripheral devices in use.
For example, on a system with no multiprogramming, when the current job paused to wait for an I/O operation to complete, the CPU simply sat idle until the I/O finished. The solution that evolved was to partition memory into several pieces, with a different job in each partition. While one job was waiting for I/O to complete, another job could be using the CPU.
Another major feature of third-generation operating systems was the technique called spooling (simultaneous peripheral operations on line). In spooling, a high-speed device like a disk is interposed between a running program and a low-speed device involved in the program's input/output. Instead of writing directly to a printer, for example, output is written to the disk; programs can run to completion faster, other programs can be initiated sooner, and when the printer becomes available the output can be printed.
Note that the spooling technique is much like thread being spun onto a spool so that it may later be unwound as needed.
Another feature present in this generation was the time-sharing technique, a variant of multiprogramming, in which each user has an on-line (i.e., directly connected) terminal. Because the user is present and interacting with the computer, the computer system must respond quickly to user requests, otherwise user productivity could suffer. Time-sharing systems were developed to multiprogram a large number of simultaneous interactive users.

Fourth Generation

With the development of LSI (Large Scale Integration) circuits and chips, operating systems entered the personal computer and workstation age. Microprocessor technology evolved to the point that it became possible to build desktop computers as powerful as the mainframes of the 1970s. Two operating systems have dominated the personal computer scene: MS-DOS, written by Microsoft, Inc. for the IBM PC and other machines using the Intel 8088 CPU and its successors, and UNIX, which is dominant on the larger personal computers using the Motorola 68000 CPU family.

System Components

Even though not all systems have the same structure, many modern operating systems share the goal of supporting the following types of system components.

 Process Management

The operating system manages many kinds of activities ranging from user programs to system programs like printer spooler, name servers, file server etc. Each of these activities is encapsulated in a process. A process includes the complete execution context (code, data, PC, registers, OS resources in use etc.).
It is important to note that a process is not a program. A process is only ONE instance of a program in execution; many processes can be running the same program. The five major activities of an operating system in regard to process management are
  • Creation and deletion of user and system processes.
  • Suspension and resumption of processes.
  • A mechanism for process synchronization.
  • A mechanism for process communication.
  • A mechanism for deadlock handling.

 Main-Memory Management

Primary memory (main memory) is a large array of words or bytes. Each word or byte has its own address. Main memory provides storage that can be accessed directly by the CPU; that is to say, for a program to be executed, it must be in main memory.
The major activities of an operating system in regard to memory management are:
  • Keep track of which parts of memory are currently being used and by whom.
  • Decide which processes are loaded into memory when memory space becomes available.
  • Allocate and de-allocate memory space as needed.

 File Management

A file is a collection of related information defined by its creator. Computers can store files on disk (secondary storage), which provides long-term storage. Some examples of storage media are magnetic tape, magnetic disk, and optical disk. Each of these media has its own properties, like speed, capacity, data transfer rate, and access method.
Files are normally organized into directories to ease their use. These directories may contain files and other directories.
The five major activities of an operating system in regard to file management are
  1. The creation and deletion of files.
  2. The creation and deletion of directories.
  3. The support of primitives for manipulating files and directories.
  4. The mapping of files onto secondary storage.
  5. The backup of files on stable storage media.

 I/O System Management

The I/O subsystem hides the peculiarities of specific hardware devices from the user. Only the device driver knows the peculiarities of the specific device to which it is assigned.

 Secondary-Storage Management

Generally speaking, systems have several levels of storage, including primary storage, secondary storage, and cache storage. Instructions and data must be placed in primary storage or cache to be referenced by a running program. Because main memory is too small to accommodate all data and programs, and because its data are lost when power is lost, the computer system must provide secondary storage to back up main memory. Secondary storage consists of tapes, disks, and other media designed to hold information that will eventually be accessed in primary storage. Storage at each level (primary, secondary, cache) is ordinarily divided into bytes or words consisting of a fixed number of bytes. Each location in storage has an address; the set of all addresses available to a program is called an address space.
The three major activities of an operating system in regard to secondary storage management are:
  1. Managing the free space available on the secondary-storage device.
  2. Allocation of storage space when new files have to be written.
  3. Scheduling the requests for memory access.

 Networking

A distributed system is a collection of processors that do not share memory, peripheral devices, or a clock. The processors communicate with one another through communication lines called a network. The communication-network design must consider routing and connection strategies, and the problems of contention and security.

 Protection System

If a computer system has multiple users and allows the concurrent execution of multiple processes, then the various processes must be protected from one another's activities. Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system.

 Command Interpreter System

A command interpreter is an interface of the operating system with the user. The user gives commands which are executed by the operating system (usually by turning them into system calls). The main function of a command interpreter is to get and execute the next user-specified command. The command interpreter is usually not part of the kernel, since multiple command interpreters (shells, in UNIX terminology) may be supported by an operating system, and they do not really need to run in kernel mode. There are two main advantages to separating the command interpreter from the kernel.
  1. If we want to change the way the command interpreter looks, i.e., change its interface, we can do that only if the command interpreter is separate from the kernel, since we cannot modify the kernel's code.
  2. If the command interpreter is a part of the kernel, it is possible for a malicious process to gain access to parts of the kernel that it should not have. To avoid this ugly scenario it is advantageous to have the command interpreter separate from the kernel.

Operating Systems Services

Following are the five services provided by an operating system for the convenience of users.

Program Execution

The purpose of a computer system is to allow the user to execute programs, so the operating system provides an environment where the user can conveniently run programs. The user does not have to worry about memory allocation, multitasking, or anything of the sort; these things are taken care of by the operating system.
Running a program involves allocating and deallocating memory, and CPU scheduling in the case of multiprocessing. These functions cannot be given to user-level programs, so user-level programs cannot help the user run programs independently without help from the operating system.

 I/O Operations

Each program requires input and produces output, which involves the use of I/O. The operating system hides from the user the details of the underlying I/O hardware; all the user sees is that the I/O has been performed, without any details. By providing I/O, the operating system makes it convenient for users to run programs.
For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.

File System Manipulation

The output of a program may need to be written to new files, or input taken from existing files. The operating system provides this service, so the user does not have to worry about secondary storage management. The user gives a command for reading or writing to a file and sees his or her task accomplished. Thus the operating system makes it easier for user programs to accomplish their tasks.
This service involves secondary storage management. The speed of I/O that depends on secondary storage management is critical to the speed of many programs, and hence it is best left to the operating system rather than giving individual users control of it. It is not difficult for user-level programs to provide these services, but for the above-mentioned reasons it is best if this service is left with the operating system.

 Communications

There are instances where processes need to communicate with each other to exchange information, whether between processes running on the same computer or on different computers. By providing this service, the operating system relieves the user of the worry of passing messages between processes. In cases where messages need to be passed to processes on other computers through a network, this can be done by user programs; the user program may be customized to the specifics of the hardware through which the message transits and provides the service interface to the operating system.

 Error Detection

An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation, the operating system constantly monitors the system to detect errors. This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctions.
This service cannot be allowed to be handled by user programs, because it involves monitoring and in some cases altering areas of memory, deallocating the memory of a faulty process, or perhaps relinquishing the CPU of a process that goes into an infinite loop. These tasks are too critical to be handed over to user programs. A user program given these privileges could interfere with the correct (normal) operation of the operating system.

System Calls and System Programs


System calls provide an interface between a process and the operating system. System calls allow user-level processes to request services from the operating system which the process itself is not allowed to perform. In handling the trap, the operating system enters kernel mode, where it has access to privileged instructions, and can perform the desired service on behalf of the user-level process. It is because of the critical nature of these operations that the operating system itself performs them every time they are needed. For example, for I/O a process makes a system call telling the operating system to read or write a particular area, and this request is satisfied by the operating system.
System programs provide basic functionality to users so that they do not need to write their own environment for program development (editors, compilers) and program execution (shells). In some sense, they are bundles of useful system calls.

Layered Approach Design


In this case the system is easier to debug and modify, because changes affect only limited portions of the code, and the programmer does not have to know the details of the other layers. Information is also kept only where it is needed and is accessible only in certain ways, so bugs affecting that data are limited to a specific module or layer.

Mechanisms and Policies

 

The policy specifies what is to be done, while the mechanism specifies how it is to be done. For instance, the timer construct for ensuring CPU protection is a mechanism; the decision of how long the timer is set for a particular user is a policy decision.
The separation of mechanism and policy is important for the flexibility of a system. If the interface between mechanism and policy is well defined, a change of policy may affect only a few parameters. On the other hand, if the interface between the two is vague or not well defined, a change might involve much deeper changes to the system.
Once the policy has been decided, the programmer has the choice of implementation, and the underlying implementation may later be changed for a more efficient one without much trouble if mechanism and policy are well separated. Specifically, separating the two provides flexibility in two ways. First, the same mechanism can be used to implement a variety of policies, so changing the policy might not require the development of a new mechanism, just a change in the parameters given to an existing mechanism from a library of mechanisms. Second, the mechanism can be changed, for example to increase its efficiency or to move to a new platform, without changing the overall policy.

Definition of Process   

The notion of process is central to the understanding of operating systems. There are quite a few definitions presented in the literature, but no "perfect" definition has yet appeared.

 Definition

The term "process" was first used by the designers of MULTICS in the 1960's. Since then, the term process has been used somewhat interchangeably with 'task' or 'job'. The process has been given many definitions, for instance:
  • A program in execution.
  • An asynchronous activity.
  • The 'animated spirit' of a procedure in execution.
  • The entity to which processors are assigned.
  • The 'dispatchable' unit.
and many more definitions have been given. As we can see, there is no universally agreed-upon definition, but "program in execution" seems to be the most frequently used, and this is the concept we will use in the present study of operating systems.
Now that we have agreed upon the definition of a process, the question is: what is the relation between a process and a program? Is it the same beast under a different name, called a program when it is sleeping (not executing) and a process when it is executing? To be precise, a process is not the same as a program. In the following discussion we point out some of the differences.
A process is more than program code. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. A program is an algorithm expressed in some suitable notation (e.g., a programming language). Being passive, a program is only part of a process. A process, on the other hand, includes:
  • Current value of Program Counter (PC)
  • Contents of the processors registers
  • Value of the variables
  • The process stack (SP) which typically contains temporary data such as subroutine parameter, return address, and temporary variables.
  • A data section that contains global variables.
A process is the unit of work in a system.
In the process model, all software on the computer is organized into a number of sequential processes. A process includes the PC, registers, and variables. Conceptually, each process has its own virtual CPU; in reality, the CPU switches back and forth among processes. (The rapid switching back and forth is called multiprogramming.)

Process State

 The process state consists of everything necessary to resume the process's execution if it is somehow put aside temporarily. The process state consists of at least the following:

  • Code for the program.
  • Program's static data.
  • Program's dynamic data.
  • Program's procedure call stack.
  • Contents of general purpose registers.
  • Contents of program counter (PC)
  • Contents of program status word (PSW).
  • Operating Systems resource in use.

Process Operations

 Process Creation

In general-purpose systems, some way is needed to create processes as needed during operation. There are four principal events that lead to process creation.
  • System initialization.
  • Execution of a process Creation System calls by a running process.
  • A user request to create a new process.
  • Initialization of a batch job.
     
Foreground processes interact with users. Background processes stay in the background sleeping, but spring to life to handle activity such as email, web pages, printing, and so on. Background processes are called daemons.

A process may create a new process via a process-creation system call such as 'fork', which creates an exact clone of the calling process. When it does so, the creating process is called the parent process and the created one is called the child process. Only one parent is needed to create a child process; note that, unlike plants and animals that use sexual reproduction, a process has only one parent. This creation of processes yields a hierarchical structure of processes like the one in the figure. Notice that each child has only one parent, but each parent may have many children. After the fork, the two processes, the parent and the child, have the same memory image, the same environment strings, and the same open files. However, each has its own distinct address space: if either process changes a word in its address space, the change is not visible to the other process.
                                        <Figure 3.2, p. 55, from Deitel>
Following are some reasons for creation of a process
  • User logs on.
  • User starts a program.
  • Operating systems creates process to provide service, e.g., to manage printer.
  • Some program starts another process, e.g., Netscape calls xv to display a picture.
 Process Termination
A process terminates when it finishes executing its last statement. Its resources are returned to the system, it is purged from any system lists or tables, and its process control block (PCB) is erased, i.e., the PCB's memory space is returned to a free memory pool. Processes terminate usually for one of the following reasons:
  • Normal Exit    Most processes terminate because they have done their job. This call is exit in UNIX.
  • Error Exit    The process discovers a fatal error. For example, a user tries to compile a program file that does not exist.
  • Fatal Error    An error caused by the process due to a bug in the program, for example executing an illegal instruction, referencing non-existent memory, or dividing by zero.
  • Killed by Another Process    A process executes a system call telling the operating system to terminate some other process. In UNIX, this call is kill. In some systems, when a process is killed, all the processes it created are killed as well (UNIX does not work this way).
 Process States

A process goes through a series of discrete process states.
  • New State    The process being created.
  • Terminated State    The process has finished execution.
  • Blocked (waiting) State    When a process blocks, it does so because logically it cannot continue, typically because it is waiting for input that is not yet available. Formally, a process is said to be blocked if it is waiting for some event to happen (such as an I/O completion) before it can proceed. In this state a process is unable to run until some external event happens.
  • Running State    A process is said to be running if it currently has the CPU, that is, it is actually using the CPU at that particular instant.
  • Ready State    A process is said to be ready if it could use a CPU if one were available. It is runnable but temporarily stopped to let another process run.
Logically, the 'Running' and 'Ready' states are similar. In both cases the process is willing to run, only in the case of 'Ready' state, there is temporarily no CPU available for it. The 'Blocked' state is different from the 'Running' and 'Ready' states in that the process cannot run, even if the CPU is available.

Process State Transitions

Following are the six possible transitions among the five states mentioned above.

FIGURE
  • Transition 1 occurs when a process discovers that it cannot continue. If the running process initiates an I/O operation before its allotted time expires, it voluntarily relinquishes the CPU.

    This state transition is:

            Block (process-name): Running → Blocked.
     
  • Transition 2 occurs when the scheduler decides that the running process has run long enough and it is time to let another process have CPU time.

    This state transition is:

    Time-Run-Out (process-name): Running → Ready.
     
  • Transition 3 occurs when all other processes have had their share and it is time for the first process to run again.

    This state transition is:

        Dispatch (process-name): Ready → Running.
     
  • Transition 4 occurs when the external event for which a process was waiting (such as arrival of input) happens.

    This state transition is:

        Wakeup (process-name): Blocked → Ready.
     
  • Transition 5 occurs when the process is created.

    This state transition is:

        Admitted (process-name): New → Ready.
     
  • Transition 6 occurs when the process has finished execution.

    This state transition is:

        Exit (process-name): Running → Terminated.
FIGURE from Notes
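The six legal transitions can be captured as a small lookup table; a minimal Python sketch (state and event names follow the text above; the function name is an illustration):

```python
# The six legal transitions among the five process states.
TRANSITIONS = {
    ("Running", "Block"): "Blocked",        # transition 1
    ("Running", "Time-Run-Out"): "Ready",   # transition 2
    ("Ready", "Dispatch"): "Running",       # transition 3
    ("Blocked", "Wakeup"): "Ready",         # transition 4
    ("New", "Admitted"): "Ready",           # transition 5
    ("Running", "Exit"): "Terminated",      # transition 6
}

def transition(state, event):
    """Return the next state if `event` is legal in `state`, else raise."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition {event!r} from {state!r}")
```

Note that only six of the twenty possible state pairs appear in the table; for example, a Blocked process can never move directly to Running, it must be woken up into Ready first.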

Process Control Block


A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor. The PCB contains important information about the specific process including
  • The current state of the process i.e., whether it is ready, running, waiting, or whatever.
  • Unique identification of the process in order to track "which is which" information.
  • A pointer to parent process.
  • Similarly, a pointer to child process (if it exists).
  • The priority of process (a part of CPU scheduling information).
  • Pointers to locate memory of processes.
  • A register save area.
  • The processor it is running on.
The PCB serves as a central store that allows the operating system to locate all the key information about a process.
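A PCB can be sketched as a plain data structure; the field names below are illustrative, not taken from any particular operating system:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """A sketch of a process control block; field names are illustrative."""
    pid: int                                       # unique identification
    state: str = "New"                             # current process state
    priority: int = 0                              # scheduling information
    parent: Optional["PCB"] = None                 # pointer to parent
    children: list = field(default_factory=list)   # pointers to children
    registers: dict = field(default_factory=dict)  # register save area

init = PCB(pid=1, state="Running")
child = PCB(pid=2, parent=init)                    # new child starts as "New"
init.children.append(child)
```

The parent and child pointers are what give the operating system the process hierarchy described earlier.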

Threads




  • Threads
  • Processes Vs Threads
  • Why Threads?
  • User-Level Threads
  • Kernel-Level Threads
  • Advantages of Threads over Multiple Processes
  • Disadvantages of Threads over Multiprocesses
  • Application that Benefits from Threads
  • Application that cannot benefit from Threads
  • Resources used in Thread creation and Process Creation
  • Context Switch
  • Major Steps of Context Switching
  • Action of Kernel to Context switch among threads
  • Action of kernel to Context switch among processes

Threads

Despite the fact that a thread must execute within a process, the process and its associated threads are different concepts: processes are used to group resources together, while threads are the entities scheduled for execution on the CPU.
A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution, and in many respects they are a popular way to improve applications through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (Running, Blocked, Ready or Terminated). Each thread has its own stack, since a thread will generally call different procedures and thus have a different execution history. In an operating system with a thread facility, the basic unit of CPU utilization is a thread. A thread has, or consists of, a program counter (PC), a register set, and a stack space. Threads are not independent of one another the way processes are; as a result, a thread shares with its peer threads the code section, the data section, and OS resources such as open files and signals. A process together with these shared resources is also known as a task.
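The sharing described above can be seen in a short Python sketch: several threads run the same code, each on its own stack, yet all write into one shared data section (the variable names are arbitrary):

```python
import threading

results = []                      # shared data section, visible to all threads

def worker(tid):
    # Each thread has its own stack and PC, but `results` is shared.
    results.append(tid * tid)     # CPython's list.append is thread-safe

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                      # wait for every thread to terminate
```

No interprocess communication is needed: because the threads live in one address space, each thread's result lands directly in the shared list.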

Processes Vs Threads

As mentioned earlier, in many respects threads operate in the same way as processes. Some of the similarities and differences are:
Similarities
  • Like processes, threads share the CPU, and only one thread is active (running) at a time.
  • Like processes, threads within a process execute sequentially.
  • Like processes, a thread can create children.
  • And like a process, if one thread is blocked, another thread can run.
Differences
  • Unlike processes, threads are not independent of one another.
  • Unlike processes, all threads can access every address in the task.
  • Unlike processes, threads are designed to assist one another. Note that processes may or may not assist one another, because processes may originate from different users.

Why Threads?

Following are some reasons why we use threads in designing operating systems.
  1. A process with multiple threads makes a great server, for example a printer server.
  2. Because threads can share common data, they do not need to use interprocess communication.
  3. Because of their very nature, threads can take advantage of multiprocessors.
Threads are cheap in the sense that
  1. They need only a stack and storage for registers; therefore, threads are cheap to create.
  2. Threads use very little resources of an operating system in which they are working. That is, threads do not need new address space, global data, program code or operating system resources.
  3. Context switching is fast when working with threads, because only the PC, SP and registers have to be saved and/or restored.
But this cheapness does not come free - the biggest drawback is that there is no protection between threads.

User-Level Threads

User-level threads are implemented in user-level libraries, rather than via system calls, so thread switching does not need to call the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
Advantages:
The most obvious advantage of this technique is that a user-level threads package can be implemented on an Operating System that does not support threads. Some other advantages are
  • User-level threads do not require modifications to the operating system.
  • Simple Representation:
        Each thread is represented simply by a PC, registers, stack and a small control block, all stored in the user process address space.
  • Simple Management:
        This simply means that creating a thread, switching between threads and synchronization between threads can all be done without intervention of the kernel.
  • Fast and Efficient:
        Thread switching is not much more expensive than a procedure call.
Disadvantages:
  • There is a lack of coordination between threads and the operating system kernel. Therefore, the process as a whole gets one time slice, irrespective of whether it contains one thread or 1000 threads. It is up to each thread to relinquish control to the other threads.
  • User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process will block in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.

Kernel-Level Threads

In this method, the kernel knows about and manages the threads. No runtime system is needed in this case. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system. In addition, the kernel also maintains the traditional process table to keep track of processes. The operating system kernel provides system calls to create and manage threads.

The implementation of general structure of kernel-level thread is

                            <DIAGRAM>
Advantages:
  • Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
  • Kernel-level threads are especially good for applications that frequently block.
Disadvantages:
  • Kernel-level threads are slow and inefficient. For instance, thread operations are hundreds of times slower than those of user-level threads.
  • Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread to maintain information about it. As a result, there is significant overhead and increased kernel complexity.

Advantages of Threads over Multiple Processes

  • Context Switching    Threads are very inexpensive to create and destroy, and they are inexpensive to represent. For example, they require space to store the PC, the SP, and the general-purpose registers, but they do not require space to store memory information, information about open files or I/O devices in use, etc. With so little context, it is much faster to switch between threads; in other words, a context switch using threads is relatively easy.
  • Sharing    Threads allow the sharing of many resources that cannot be shared between processes, for example, the code section, the data section, and operating system resources such as open files.

Disadvantages of Threads over Multiprocesses

  • Blocking     The major disadvantage is that if the kernel is single-threaded, a system call by one thread will block the whole process, and the CPU may be idle during the blocking period.
  • Security    Since there is extensive sharing among threads, there is a potential security problem: it is quite possible that one thread overwrites the stack of another thread (or damages shared data), although this is very unlikely since threads are meant to cooperate on a single task.

Application that Benefits from Threads

A proxy server satisfying the requests of a number of computers on a LAN would benefit from being a multi-threaded process. In general, any program that has to do more than one task at a time could benefit from multitasking. For example, a program that reads input, processes it, and writes output could have three threads, one for each task.

Application that cannot Benefit from Threads

Any sequential process that cannot be divided into parallel tasks will not benefit from threads, as each task would block until the previous one completes. For example, a program that displays the time of day would not benefit from multiple threads.

Resources used in Thread Creation and Process Creation

When a new thread is created it shares its code section, data section and operating system resources like open files with other threads. But it is allocated its own stack, register set and a program counter.
The creation of a new process differs from that of a thread mainly in the fact that all the shared resources of a thread are needed explicitly for each process. So though two processes may be running the same piece of code they need to have their own copy of the code in the main memory to be able to run. Two processes also do not share other resources with each other. This makes the creation of a new process very costly compared to that of a new thread.

Context Switch

To give each process on a multiprogrammed machine a fair share of the CPU, a hardware clock generates interrupts periodically. This allows the operating system to schedule all processes in main memory (using scheduling algorithm) to run on the CPU at equal intervals. Each time a clock interrupt occurs, the interrupt handler checks how much time the current running process has used. If it has used up its entire time slice, then the CPU scheduling algorithm (in kernel) picks a different process to run. Each switch of the CPU from one process to another is called a context switch.

Major Steps of Context Switching

  • The values of the CPU registers are saved in the process table of the process that was running just before the clock interrupt occurred.
  • The registers are loaded from the process picked by the CPU scheduler to run next.
In a multiprogrammed uniprocessor computing system, context switches occur frequently enough that all processes appear to be running concurrently. If a process has more than one thread, the Operating System can use the context switching technique to schedule the threads so they appear to execute in parallel. This is the case if threads are implemented at the kernel level. Threads can also be implemented entirely at the user level in run-time libraries. Since in this case no thread scheduling is provided by the Operating System, it is the responsibility of the programmer to yield the CPU frequently enough in each thread so all threads in the process can make progress.

Action of Kernel to Context Switch Among Threads

Threads share a lot of resources with the peer threads belonging to the same process, so a context switch among threads of the same process is easy. It involves switching the register set, the program counter and the stack, and it is relatively easy for the kernel to accomplish.

Action of kernel to Context Switch Among Processes

Context switches among processes are expensive. Before a process can be switched its process control block (PCB) must be saved by the operating system. The PCB consists of the following information:
  • The process state.
  • The program counter, PC.
  • The values of the different registers.
  • The CPU scheduling information for the process.
  • Memory management information regarding the process.
  • Possible accounting information for this process.
  • I/O status information of the process.
When the PCB of the currently executing process is saved the operating system loads the PCB of the next process that has to be run on CPU. This is a heavy task and it takes a lot of time.

Solaris-2 Operating Systems


  • Introduction
  • At user-level
  • At Intermediate-level
  • At kernel-level

 Introduction

The Solaris-2 operating system supports:
  • threads at the user-level.
  • threads at the kernel-level.
  • symmetric multiprocessing and
  • real-time scheduling.
The entire thread system in Solaris is depicted in following figure.

At user-level

  • The user-level threads are supported by a library for creation and scheduling; the kernel knows nothing of these threads.
  • These user-level threads are supported by lightweight processes (LWPs). Each LWP is connected to exactly one kernel-level thread, while the user-level threads themselves are independent of the kernel.
  • Many user-level threads may perform one task. These threads may be scheduled and switched among LWPs without intervention of the kernel.
  • User-level threads are extremely efficient because no context switch is needed to block one thread and start another running.
Resource needs of User-level Threads
  • A user thread needs a stack and a program counter. Absolutely no kernel resources are required.
  • Since the kernel is not involved in scheduling these user-level threads, switching among user-level threads is fast and efficient.

At Intermediate-level

The lightweight processes (LWPs) are located between the user-level threads and the kernel-level threads. These LWPs serve as "virtual CPUs" on which user threads can run. Each task contains at least one LWP.
The user-level threads are multiplexed on the LWPs of the process.
Resource needs of LWP   
An LWP contains a process control block (PCB) with register data, accounting information and memory information. Therefore, switching between LWPs requires quite a bit of work and LWPs are relatively slow as compared to user-level threads.

At kernel-level

The standard kernel-level threads execute all operations within the kernel. There is a kernel-level thread for each LWP, and there are also some threads that run only on the kernel's behalf and have no associated LWP, for example, a thread to service disk requests. By request, a kernel-level thread can be pinned to a processor (CPU); see the rightmost thread in the figure. The kernel-level threads are scheduled by the kernel's scheduler.
SEE the diagram in NOTES
In modern Solaris-2, a task no longer must block just because one of its kernel-level threads blocks; the processor (CPU) is free to run another thread.
Resource needs of Kernel-level Thread
A kernel thread has only a small data structure and a stack. Switching between kernel threads does not require changing memory access information; therefore, kernel-level threads are relatively fast and efficient.

CPU/Process Scheduling

The assignment of physical processors to processes allows processors to accomplish work. The problem of determining when processors should be assigned and to which processes is called processor scheduling or CPU scheduling.
When more than one process is runnable, the operating system must decide which one to run first. The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.

Goals of Scheduling (objectives)

In this section we try to answer the following question: what does the scheduler try to achieve?
Many objectives must be considered in the design of a scheduling discipline. In particular, a scheduler should consider fairness, efficiency, response time, turnaround time, throughput, etc. Some of these goals depend on the system one is using, for example a batch system, interactive system or real-time system, but there are also some goals that are desirable in all systems.

 General Goals

Fairness
        Fairness is important under all circumstances. A scheduler makes sure that each process gets its fair share of the CPU and that no process suffers indefinite postponement. Note that giving equivalent or equal time is not necessarily fair; think of safety control versus payroll at a nuclear plant.
Policy Enforcement 
        The scheduler has to make sure that system's policy is enforced. For example, if the local policy is safety then the safety control processes must be able to run whenever they want to, even if it means delay in payroll processes.
Efficiency 
        The scheduler should keep the system (or in particular the CPU) busy 100 percent of the time when possible. If the CPU and all the input/output devices can be kept running all the time, more work gets done per second than if some components are idle.
Response Time 
        A scheduler should minimize the response time for interactive user.
Turnaround
        A scheduler should minimize the time batch users must wait for an output.
Throughput
        A scheduler should maximize the number of jobs processed per unit time.
A little thought will show that some of these goals are contradictory. It can be shown  that any scheduling algorithm that favors some class of jobs hurts another class of jobs. The amount of CPU time available is finite, after all.


Preemptive Vs Nonpreemptive Scheduling

The Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts.
Nonpreemptive Scheduling
A scheduling discipline is nonpreemptive if, once a process has been given the CPU, the CPU cannot be taken away from that process.
Following are some characteristics of nonpreemptive scheduling
  1. In nonpreemptive system, short jobs are made to wait by longer jobs but the overall treatment of all processes is fair.
  2. In nonpreemptive system, response times are more predictable because incoming high priority jobs can not displace waiting jobs.
  3. In nonpreemptive scheduling, a scheduler executes jobs in the following two situations.
    1. When a process switches from running state to the waiting state.
    2. When a process terminates.
Preemptive Scheduling
A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU can be taken away from it.
The strategy of allowing processes that are logically runable to be temporarily suspended is called Preemptive Scheduling and it is contrast to the "run to completion" method.

Scheduling Algorithms

 CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.

Following are some scheduling algorithms we will study
  • FCFS Scheduling.
  • Round Robin Scheduling.
  • SJF Scheduling.
  •  SRT Scheduling.
  • Priority Scheduling.
  • Multilevel Queue Scheduling.
  • Multilevel Feedback Queue Scheduling.

First-Come-First-Served (FCFS) Scheduling





Other names of this algorithm are:
  • First-In-First-Out (FIFO)
  • Run-to-Completion
  • Run-Until-Done
First-Come-First-Served is perhaps the simplest scheduling algorithm. Processes are dispatched according to their arrival time on the ready queue. Being a nonpreemptive discipline, once a process has the CPU, it runs to completion. FCFS scheduling is fair in the formal or human sense of fairness, but it is unfair in the sense that long jobs make short jobs wait and unimportant jobs make important jobs wait.
FCFS is more predictable than most other schemes. The FCFS scheme is not useful in scheduling interactive users because it cannot guarantee good response time. The code for FCFS scheduling is simple to write and understand. One major drawback of this scheme is that the average waiting time is often quite long.
The First-Come-First-Served algorithm is rarely used as a master scheme in modern operating systems, but it is often embedded within other schemes.
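The long-average-wait drawback can be seen numerically; a minimal Python sketch that computes FCFS waiting times for jobs that all arrive at time 0 (the burst values are made up for illustration):

```python
def fcfs_waits(bursts):
    """Per-job waiting time when jobs run strictly in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # a job waits while everything before it runs
        elapsed += burst
    return waits

# One long job ahead of two short ones makes the short jobs wait a long time:
waits = fcfs_waits([24, 3, 3])  # waiting times [0, 24, 27], average 17
```

Had the two short jobs run first, the waiting times would have been far lower, which is exactly the unfairness to short jobs described above.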

Round Robin Scheduling


One of the oldest, simplest, fairest and most widely used algorithm is round robin (RR).
In the round robin scheduling, processes are dispatched in a FIFO manner but are given a limited amount of CPU time called a time-slice or a quantum.
If a process does not complete before its CPU-time expires, the CPU is preempted and given to the next process waiting in a queue. The preempted process is then placed at the back of the ready list.
Round Robin Scheduling is preemptive (at the end of time-slice) therefore it is effective in time-sharing environments in which the system needs to guarantee reasonable response times for interactive users.
The only interesting issue with the round robin scheme is the length of the quantum. Setting the quantum too short causes too many context switches and lowers CPU efficiency. On the other hand, setting the quantum too long may cause poor response time and approximates FCFS.
In any event, the average waiting time under round robin scheduling is often quite long.
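The preempt-and-requeue behaviour can be sketched with a FIFO ready list in Python; the burst values and the quantum of 4 are arbitrary choices for illustration:

```python
from collections import deque

def round_robin_waits(bursts, quantum):
    """Waiting time per process under round robin; all arrive at time 0."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))      # FIFO ready list
    clock = 0
    finish = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run for at most one quantum
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # preempted: back of the ready list
        else:
            finish[i] = clock
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = round_robin_waits([24, 3, 3], quantum=4)  # [6, 4, 7]
```

Compared with FCFS on the same jobs, the short jobs no longer sit behind the long one for its entire burst, so their waiting times drop sharply.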

Shortest-Job-First (SJF) Scheduling


Other name of this algorithm is Shortest-Process-Next (SPN).
Shortest-Job-First (SJF) is a non-preemptive discipline in which the waiting job (or process) with the smallest estimated run-time-to-completion is run next. In other words, when the CPU is available, it is assigned to the process that has the smallest next CPU burst.
The SJF scheduling is especially appropriate for batch jobs for which the run times are known in advance. Since the SJF scheduling algorithm gives the minimum average waiting time for a given set of processes, it is probably optimal.
The SJF algorithm favors short jobs (or processes) at the expense of longer ones.
The obvious problem with SJF scheme is that it requires precise knowledge of how long a job or process will run, and this information is not usually available.
The best the SJF algorithm can do is to rely on user estimates of run times.
    In the production environment where the same jobs run regularly, it may be possible to provide reasonable estimate of run time, based on the past performance of the process. But in the development environment users rarely know how their program will execute.
Like FCFS, SJF is non-preemptive; therefore, it is not useful in a timesharing environment in which reasonable response time must be guaranteed.
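Running jobs shortest-first can be sketched in a few lines of Python; the burst values are made up, and all jobs are assumed to arrive at time 0:

```python
def sjf_waits(bursts):
    """Waiting time per job when jobs (all at time 0) run shortest-first."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits = [0] * len(bursts)
    elapsed = 0
    for i in order:
        waits[i] = elapsed          # this job waited for all shorter jobs
        elapsed += bursts[i]
    return waits

waits = sjf_waits([6, 8, 7, 3])     # [3, 16, 9, 0], average 7
```

The same four jobs run in arrival order would average a wait of (0 + 6 + 14 + 21) / 4 = 10.25, so shortest-first lowers the average wait, at the cost of the longest job waiting the most.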

Shortest-Remaining-Time (SRT) Scheduling


  • The SRT is the preemptive counterpart of SJF and is useful in time-sharing environments.
  • In SRT scheduling, the process with the smallest estimated run-time to completion is run next, including new arrivals.
  • In the SJF scheme, once a job begins executing, it runs to completion.
  • In the SRT scheme, a running process may be preempted by a newly arriving process with a shorter estimated run-time.
  • The algorithm SRT has higher overhead than its counterpart SJF.
  • The SRT must keep track of the elapsed time of the running process and must handle occasional preemptions.
  • In this scheme, newly arriving small processes will run almost immediately. However, longer jobs have an even longer mean waiting time.
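A coarse Python sketch of SRT, simulating one time unit per step (the arrival and burst values are made up; a real scheduler would be event-driven, not unit-stepped):

```python
def srt_finish_times(jobs):
    """jobs: list of (arrival, burst). Finish time per job under SRT."""
    remaining = [burst for _, burst in jobs]
    finish = [None] * len(jobs)
    clock = 0
    while any(r > 0 for r in remaining):
        ready = [i for i, (arrival, _) in enumerate(jobs)
                 if arrival <= clock and remaining[i] > 0]
        if not ready:
            clock += 1                      # CPU idle: nothing has arrived yet
            continue
        i = min(ready, key=lambda j: remaining[j])  # smallest remaining time
        remaining[i] -= 1                   # run the chosen job one time unit
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
    return finish

finish = srt_finish_times([(0, 8), (1, 4), (2, 9), (3, 5)])
```

Here the second job, arriving at time 1 with a burst of 4, preempts the first job (which still has 7 units remaining) and finishes at time 5, illustrating how new short arrivals run almost immediately.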

Priority Scheduling

The basic idea is straightforward: each process is assigned a priority, and the runnable process with the highest priority is allowed to run. Equal-priority processes are scheduled in FCFS order. The Shortest-Job-First (SJF) algorithm is a special case of the general priority scheduling algorithm.
An SJF algorithm is simply a priority algorithm where the priority is the inverse of the (predicted) next CPU burst. That is, the longer the CPU burst, the lower the priority and vice versa.
Priority can be defined either internally or externally. Internally defined priorities use some measurable quantities or qualities to compute priority of a process.
Examples of Internal priorities are
  • Time limits.
  • Memory requirements.
  • File requirements,
        for example, number of open files.
  • CPU Vs I/O requirements.
Externally defined priorities are set by criteria that are external to the operating system, such as
  • The importance of process.
  • Type or amount of funds being paid for computer use.
  • The department sponsoring the work.
  • Politics.
Priority scheduling can be either preemptive or non preemptive
  • A preemptive priority algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
  • A non-preemptive priority algorithm will simply put the new process at the head of the ready queue.
A major problem with priority scheduling is indefinite blocking or starvation. A solution to the problem of indefinite blockage of the low-priority process is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long period of time.
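Aging can be sketched in a few lines of Python; the priority values and the decrement of 1 per scheduling pass are arbitrary choices (lower number = higher priority):

```python
def pick_next(ready):
    """Choose the highest-priority process (lowest number) and age the rest,
    so a low-priority process cannot be postponed indefinitely."""
    chosen = min(ready, key=ready.get)
    for pid in ready:
        if pid != chosen:
            ready[pid] -= 1          # aging: waiting processes gain urgency
    return chosen

# "B" starts far behind "A", but aging lets it run eventually:
ready = {"A": 1, "B": 10}
picks = [pick_next(ready) for _ in range(11)]
```

Without the aging step, "B" would starve forever behind "A"; with it, "B" is guaranteed to be chosen after a bounded number of passes.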

Multilevel Queue Scheduling


A multilevel queue scheduling algorithm partitions the ready queue into several separate queues, for instance
Fig 5.6 - pp. 138 in Sinha
In multilevel queue scheduling, processes are permanently assigned to one queue, based on some property of the process, such as
  • Memory size
  • Process priority
  • Process type
The algorithm chooses the process from the occupied queue that has the highest priority, and runs that process either
  • Preemptive or
  • Non-preemptively
Each queue has its own scheduling algorithm or policy.

Possibility I 
    If each queue has absolute priority over lower-priority queues, then no process in a lower-priority queue can run unless the higher-priority queues are all empty.
For example, in the above figure, no process in the batch queue could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty.

Possibility II 
    If there is a time slice between the queues then each queue gets a certain amount of CPU times, which it can then schedule among the processes in its queue. For instance;
  • 80% of the CPU time to foreground queue using RR.
  • 20% of the CPU time to background queue using FCFS.
Since processes do not move between queues, this policy has the advantage of low scheduling overhead, but it is inflexible.

Multilevel Feedback Queue Scheduling

The multilevel feedback queue scheduling algorithm allows a process to move between queues. It uses many ready queues and associates a different priority with each queue.
The algorithm chooses the process with the highest priority from the occupied queues and runs that process either preemptively or non-preemptively. If the process uses too much CPU time, it will be moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. Note that this form of aging prevents starvation.
Example:
Figure 5.7 pp. 140 in Sinha
  • A process entering the ready queue is placed in queue 0.
  • If it does not finish within 8 milliseconds time, it is moved to the tail of queue 1.
  • If it does not complete, it is preempted and placed into queue 2.
  • Processes in queue 2 run on an FCFS basis, only when queue 0 and queue 1 are empty.

Interprocess Communication


Since processes frequently need to communicate with other processes, there is a need for well-structured communication, without using interrupts, among processes.

 Race Conditions

In operating systems, processes that are working together often share some common storage (main memory, a file, etc.) that each process can read and write. When two or more processes read or write some shared data and the final result depends on who runs precisely when, we have a race condition. Concurrently executing threads that share data need to synchronize their operations and processing in order to avoid race conditions on the shared data. Only one ‘customer’ thread at a time should be allowed to examine and update the shared variable.
Race conditions are also possible inside operating systems themselves. If the ready queue is implemented as a linked list and is being manipulated during the handling of an interrupt, then interrupts must be disabled to prevent another interrupt from arriving before the first one completes. If interrupts are not disabled, the linked list could become corrupt.

Critical Section

How to avoid race conditions?

 


The key to preventing trouble involving shared storage is to find some way to prohibit more than one process from reading and writing the shared data simultaneously. That part of the program where the shared memory is accessed is called the Critical Section. To avoid race conditions and flawed results, one must identify the code in the Critical Sections of each thread. The characteristic properties of code that forms a Critical Section are
  • Code that references one or more variables in a “read-update-write” fashion while any of those variables is possibly being altered by another thread.
  • Code that alters one or more variables that are possibly being referenced in “read-update-write” fashion by another thread.
  • Code that uses a data structure while any part of it is possibly being altered by another thread.
  • Code that alters any part of a data structure while it is possibly in use by another thread.
     
Here, the important point is that when one process is executing shared modifiable data in its critical section, no other process is allowed to execute in its critical section. Thus, the execution of critical sections by the processes is mutually exclusive in time.

Mutual Exclusion

 A way of making sure that if one process is using shared modifiable data, the other processes will be excluded from doing the same thing.

Formally, while one process accesses the shared variable, all other processes desiring to do so at the same moment should be kept waiting; when that process has finished accessing the shared variable, one of the processes waiting to do so should be allowed to proceed. In this fashion, each process accessing the shared data (variables) excludes all others from doing so simultaneously. This is called Mutual Exclusion.
Note that mutual exclusion needs to be enforced only when processes access shared modifiable data - when processes are performing operations that do not conflict with one another they should be allowed to proceed concurrently.

 Mutual Exclusion Conditions

If we could arrange matters such that no two processes were ever in their critical sections simultaneously, we could avoid race conditions. We need four conditions to hold to have a good solution for the critical section problem (mutual exclusion).
  • No two processes may be inside their critical sections at the same moment.
  • No assumptions are made about relative speeds of processes or the number of CPUs.
  • No process outside its critical section should block other processes.
  • No process should wait arbitrarily long to enter its critical section.

Proposals for Achieving Mutual Exclusion

The mutual exclusion problem is to devise a pre-protocol (or entry protocol) and a post-protocol (or exit protocol) that keep two or more threads from being in their critical sections at the same time. Tanenbaum examines several proposals for the critical-section (mutual exclusion) problem.
Problem
When one process is updating shared modifiable data in its critical section, no other process should be allowed to enter its critical section.

 Proposal 1 -Disabling Interrupts (Hardware Solution)

Each process disables all interrupts just after entering its critical section and re-enables them just before leaving it. With interrupts turned off, the CPU cannot be switched to another process. Hence, no other process will enter its critical section, and mutual exclusion is achieved.
Conclusion: Disabling interrupts is sometimes a useful technique within the kernel of an operating system, but it is not appropriate as a general mutual exclusion mechanism for user processes. The reason is that it is unwise to give user processes the power to turn off interrupts.

 Proposal 2 - Lock Variable (Software Solution)

In this solution, we consider a single, shared lock variable, initially 0. When a process wants to enter its critical section, it first tests the lock. If the lock is 0, the process sets it to 1 and then enters the critical section. If the lock is already 1, the process just waits until the lock variable becomes 0. Thus, a 0 means that no process is in its critical section, and a 1 means hold your horses - some process is in its critical section.
Conclusion: The flaw in this proposal is best explained by example. Suppose process A sees that the lock is 0. Before it can set the lock to 1, another process B is scheduled, runs, and sets the lock to 1. When process A runs again, it will also set the lock to 1, and two processes will be in their critical sections simultaneously.
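The fatal interleaving can be replayed step by step in a short Python sketch (a single-threaded simulation we wrote for illustration; the variable names are ours). Process A tests the lock, is "preempted" before setting it, process B tests and sets it, and then A, acting on its stale observation, sets it too.

```python
lock = 0            # 0 = free, 1 = some process is in its critical section

a_sees = lock       # process A tests the lock: sees 0 ...
                    # ... but is preempted before it can set the lock

b_sees = lock       # process B now tests the lock: also sees 0
lock = 1            # B sets the lock and enters its critical section
b_inside = (b_sees == 0)

a_inside = False
if a_sees == 0:     # A resumes, still believing the lock was free
    lock = 1        # A sets the lock and enters as well
    a_inside = True

print(a_inside and b_inside)  # True: both are inside at once
```

The root cause is that the test ("is the lock 0?") and the set ("make it 1") are two separate steps that can be split by a context switch; fixing it requires making test-and-set atomic.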

 Proposal 3 - Strict Alternation

In this proposed solution, the integer variable 'turn' keeps track of whose turn it is to enter the critical section. Initially, process A inspects 'turn', finds it to be 0, and enters its critical section. Process B also finds it to be 0 and sits in a loop continually testing 'turn' to see when it becomes 1. Continuously testing a variable while waiting for some value to appear is called Busy-Waiting.
Conclusion: Taking turns is not a good idea when one of the processes is much slower than the other. Suppose process 0 finishes its critical section quickly, so both processes are now in their noncritical sections, with 'turn' set to 1. If process 0 wants to enter its critical section again, it must wait until the slow process 1 has entered and left its critical section, even though the critical section is free. This situation violates the above-mentioned condition 3: a process outside its critical section is blocking another process.
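Strict alternation and its busy-waiting flaw can be sketched in Python (an illustration of ours; `enter_region` is a hypothetical name, and the spin count is capped so the demo terminates instead of looping forever).

```python
turn = 0    # whose turn it is to enter the critical section

def enter_region(process, spin_limit=1000):
    """Busy-wait until it is `process`'s turn. Returns the number of
    wasted spins, capped at spin_limit so the demo terminates."""
    spins = 0
    while turn != process and spins < spin_limit:
        spins += 1
    return spins

first = enter_region(0)   # it is process 0's turn: enters with no spinning
turn = 1                  # process 0 leaves, handing the turn to process 1

# Process 1 lingers in its noncritical section, so process 0 cannot
# re-enter even though the critical section is free: pure busy waiting.
second = enter_region(0)
print(first, second)  # 0 1000
```

The second call burns its entire spin budget without ever entering, which is exactly the violation of condition 3 described above.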

Using Systems calls 'sleep' and 'wakeup'

Basically, what the above solutions do is this: when a process wants to enter its critical section, it checks to see if entry is allowed. If it is not, the process goes into a tight loop and waits (i.e., starts busy waiting) until it is allowed to enter. This approach wastes CPU time.
Now look at a pair of interprocess communication primitives: sleep and wakeup.
  • Sleep
    • It is a system call that causes the caller to block, that is, be suspended until some other process wakes it up.
  • Wakeup
    • It is a system call that wakes up the process.
 
Both 'sleep' and 'wakeup' system calls have one parameter that represents a memory address used to match up 'sleeps' with 'wakeups'.

  The Bounded Buffer Producers and Consumers

The bounded-buffer producers and consumers problem assumes that there is a fixed buffer size, i.e., a finite number of slots is available.
Statement
To suspend the producers when the buffer is full, to suspend the consumers when the buffer is empty, and to make sure that only one process at a time manipulates a buffer so there are no race conditions or lost updates.
As an example of how the sleep and wakeup system calls are used, consider the producer-consumer problem, also known as the bounded-buffer problem.
Two processes share a common, fixed-size (bounded) buffer. The producer puts information into the buffer and the consumer takes information out.
Trouble arises when
  1. The producer wants to put a new data in the buffer, but buffer is already full.
    Solution: Producer goes to sleep and to be awakened when the consumer has removed data.
  2. The consumer wants to remove data from the buffer, but the buffer is already empty.
    Solution: Consumer goes to sleep until the producer puts some data in buffer and wakes consumer up.
Conclusion: This approach also leads to the same race conditions we have seen in earlier approaches. A race condition can occur because access to 'count' is unconstrained. The essence of the problem is that a wakeup call, sent to a process that is not (yet) sleeping, is lost.
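The lost-wakeup scenario can be modeled in a few lines of Python (a toy single-threaded simulation of the interleaving, written for illustration; these are not real system calls).

```python
asleep = False      # is the consumer blocked?

def wakeup():
    global asleep
    if asleep:
        asleep = False      # wake the sleeping process
    # else: nobody is asleep, so the wakeup is simply discarded

def sleep():
    global asleep
    asleep = True           # block until some future wakeup arrives

# The consumer reads count == 0 and decides to sleep, but is preempted
# just before calling sleep(). The producer inserts an item and calls
# wakeup() -- which finds nobody asleep, so the signal is lost.
wakeup()
sleep()                     # consumer finally sleeps; the wakeup is gone

print(asleep)  # True: the consumer sleeps forever
```

Because nothing records a wakeup sent to a non-sleeping process, the later sleep can never be undone; semaphores, introduced next, fix this by counting pending signals.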

Semaphores

 E.W. Dijkstra (1965) abstracted the key notion of mutual exclusion in his concept of semaphores.

Definition: A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and an initialization operation called 'semaphore initialize'.
Binary semaphores can assume only the values 0 and 1; counting semaphores (also called general semaphores) can assume any nonnegative value.

The P (or wait or sleep or down) operation on semaphore S, written as P(S) or wait(S), operates as follows:
P(S):   IF   S  >  0
                 THEN  S :=  S - 1
                 ELSE   (wait on S)

The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal (S), operates as follows:
V(S):   IF  (one or more processes are waiting on S)
                THEN (let one of these processes proceed)
                ELSE   S := S +1

Operations P and V are done as single, indivisible, atomic actions. It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed. Mutual exclusion on the semaphore S is enforced within P(S) and V(S).
If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed. The other processes will be kept waiting, but the implementation of P and V guarantees that processes will not suffer indefinite postponement.
Semaphores solve the lost-wakeup problem.
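Python's `threading.Semaphore` provides exactly these operations (`acquire` plays the role of P, `release` the role of V). A short sketch shows why the lost-wakeup problem disappears: a V issued before anyone waits is remembered in the count rather than discarded.

```python
import threading

s = threading.Semaphore(0)   # counting semaphore, initially 0

s.release()                  # V(S): nobody is waiting, so S becomes 1.
                             # Unlike a wakeup call, the signal is stored.
ok = s.acquire(timeout=1)    # P(S): S > 0, so decrement and proceed at once
print(ok)  # True: the earlier V was not lost
```

Had `s.release()` been a plain wakeup sent to a non-sleeping process, the later `acquire` would have blocked forever; the semaphore's counter is what preserves the signal.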

Producer-Consumer Problem Using Semaphores

The solution to the producer-consumer problem uses three semaphores, namely full, empty and mutex.
The semaphore 'full' is used for counting the number of slots in the buffer that are full, 'empty' for counting the number of slots that are empty, and 'mutex' to make sure that the producer and consumer do not access the modifiable shared section of the buffer simultaneously.

Initialization

  • Set full buffer slots to 0.
        i.e., semaphore Full = 0.
  • Set empty buffer slots to N.
        i.e., semaphore empty = N.
  • To control access to the critical section, set mutex to 1.
        i.e., semaphore mutex = 1.
Producer ( )
WHILE (true)
        produce-Item ( );
        P (empty);
        P (mutex);
        enter-Item ( );
        V (mutex);
        V (full);

Consumer ( )
WHILE (true)
        P (full);
        P (mutex);
        remove-Item ( );
        V (mutex);
        V (empty);
        consume-Item (Item);
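The pseudocode above maps directly onto Python's `threading.Semaphore`. The sketch below is our illustration (a `deque` stands in for the bounded buffer; `N` and `ITEMS` are arbitrary choices): one producer and one consumer move 100 items through a 5-slot buffer with no lost updates.

```python
import threading
from collections import deque

N = 5                            # buffer capacity
buffer = deque()
mutex = threading.Semaphore(1)   # guards the buffer (critical section)
empty = threading.Semaphore(N)   # counts empty slots
full = threading.Semaphore(0)    # counts full slots
ITEMS = 100
consumed = []

def producer():
    for item in range(ITEMS):    # produce-Item
        empty.acquire()          # P(empty): wait for a free slot
        mutex.acquire()          # P(mutex): enter critical section
        buffer.append(item)      # enter-Item
        mutex.release()          # V(mutex)
        full.release()           # V(full): one more full slot

def consumer():
    for _ in range(ITEMS):
        full.acquire()           # P(full): wait for an item
        mutex.acquire()          # P(mutex)
        consumed.append(buffer.popleft())  # remove-Item
        mutex.release()          # V(mutex)
        empty.release()          # V(empty): one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(ITEMS)))  # True: nothing lost or duplicated
```

Note the P(empty)-before-P(mutex) order in the producer: acquiring mutex first and then blocking on empty while holding it would deadlock against the consumer.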
 

Deadlock

“Crises and deadlocks when they occur have at least this advantage, that they force us to think.”- Jawaharlal Nehru (1889 - 1964) Indian political leader
A set of processes is in a deadlock state if each process in the set is waiting for an event that can be caused only by another process in the set. In other words, each member of the set of deadlocked processes is waiting for a resource that can be released only by a deadlocked process. None of the processes can run, none of them can release any resources, and none of them can be awakened. It is important to note that the number of processes and the number and kind of resources possessed and requested are unimportant.
The resources may be either physical or logical. Examples of physical resources are Printers, Tape Drives, Memory Space, and CPU Cycles. Examples of logical resources are Files, Semaphores, and Monitors.
The simplest example of deadlock is where process 1 has been allocated non-shareable resource A, say, a tape drive, and process 2 has been allocated non-shareable resource B, say, a printer. Now, if it turns out that process 1 needs resource B (the printer) to proceed and process 2 needs resource A (the tape drive) to proceed, and these are the only two processes in the system, each blocks the other and all useful work in the system stops. This situation is termed deadlock. The system is in a deadlock state because each process holds a resource being requested by the other process, and neither process is willing to release the resource it holds.
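The tape-drive/printer scenario can be reproduced with two Python threads (an illustrative sketch of ours; truly deadlocked threads would block forever, so the sketch uses `acquire` timeouts to escape and report). The events guarantee each thread holds its own resource before requesting the other's.

```python
import threading

tape = threading.Lock()       # resource A (the tape drive)
printer = threading.Lock()    # resource B (the printer)
holds_a = threading.Event()
holds_b = threading.Event()
outcome = {}

def process1():
    with tape:                    # hold A ...
        holds_a.set()
        holds_b.wait()            # ... until process 2 surely holds B
        outcome["p1"] = printer.acquire(timeout=0.5)   # now wait for B

def process2():
    with printer:                 # hold B ...
        holds_b.set()
        holds_a.wait()            # ... until process 1 surely holds A
        outcome["p2"] = tape.acquire(timeout=0.5)      # now wait for A

t1 = threading.Thread(target=process1)
t2 = threading.Thread(target=process2)
t1.start(); t2.start(); t1.join(); t2.join()

# Circular wait: each process keeps holding what the other needs, so
# the two requests cannot both succeed (normally both time out).
print(outcome["p1"] and outcome["p2"])  # False
```

With no timeouts this program would hang at the two inner `acquire` calls, which is the deadlock itself.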

 Preemptable and Nonpreemptable Resources

Resources come in two flavors: preemptable and nonpreemptable. A preemptable resource is one that can be taken away from the process with no ill effects. Memory is an example of a preemptable resource. On the other hand, a nonpreemptable resource is one that cannot be taken away from a process without causing ill effects. A CD recorder, for example, is not preemptable at an arbitrary moment.
Reallocating resources can resolve deadlocks that involve preemptable resources. Deadlocks that involve nonpreemptable resources are difficult to deal with.

 

Necessary and Sufficient Deadlock Conditions

Coffman (1971) identified four (4) conditions that must hold simultaneously for there to be a deadlock.

1. Mutual Exclusion Condition
The resources involved are non-shareable.
Explanation: At least one resource must be held in a non-shareable mode; that is, only one process at a time claims exclusive control of the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.

2. Hold and Wait Condition
A requesting process already holds resources while waiting for the requested resources.
Explanation: There must exist a process that is holding a resource already allocated to it while waiting for additional resources that are currently being held by other processes.

3. No-Preemption Condition
Resources already allocated to a process cannot be preempted.
Explanation: Resources cannot be removed from a process; they are used to completion or released voluntarily by the process holding them.
 
4. Circular Wait Condition
The processes in the system form a circular list or chain where each process in the list is waiting for a resource held by the next process in the list.

As an example, consider the traffic deadlock in the following figure


Consider each section of the street as a resource.
  1. Mutual exclusion condition applies, since only one vehicle can be on a section of the street at a time.
  2. Hold-and-wait condition applies, since each vehicle is occupying a section of the street, and waiting to move on to the next section of the street.
  3. No-preemption condition applies, since a section of the street that is occupied by a vehicle cannot be taken away from it.
  4. Circular wait condition applies, since each vehicle is waiting on the next vehicle to move. That is, each vehicle in the traffic is waiting for a section of street held by the next vehicle in the traffic.
The simple rule to avoid traffic deadlock is that a vehicle should only enter an intersection if it is assured that it will not have to stop inside the intersection.
It is not possible to have a deadlock involving only one single process. Deadlock involves a circular “hold-and-wait” condition between two or more processes, so “one” process cannot hold a resource yet be waiting for another resource that it itself holds. In addition, deadlock is not possible between two threads in a process in this model, because it is the process that holds resources, not the thread; that is, each thread has access to the resources held by the process.

Dealing with Deadlock Problem

 In general, there are four strategies of dealing with deadlock problem:

1. The Ostrich Approach
Just ignore the deadlock problem altogether.
2. Deadlock Detection and Recovery
Detect deadlock and, when it occurs, take steps to recover.
3. Deadlock Avoidance
Avoid deadlock by careful resource scheduling.
4. Deadlock Prevention
Prevent deadlock by resource scheduling so as to negate at least one of the four conditions.

Now we consider each strategy in order of decreasing severity.

Deadlock Prevention

Havender in his pioneering work showed that since all four of the conditions are necessary for deadlock to occur, it follows that deadlock might be prevented by denying any one of the conditions.

 Elimination of “Mutual Exclusion” Condition
  • The mutual exclusion condition must hold for non-shareable resources. That is, several processes cannot simultaneously share a single resource. This condition is difficult to eliminate because some resources, such as the tape drive and printer, are inherently non-shareable. Note that shareable resources, like a read-only file, do not require mutually exclusive access and thus cannot be involved in a deadlock.
 Elimination of “Hold and Wait” Condition
  • There are two possibilities for elimination of the second condition. The first alternative is that a process be granted all of the resources it needs at once, prior to execution. The second alternative is to disallow a process from requesting resources whenever it has previously allocated resources. This strategy requires that all of the resources a process will need be requested at once. The system must grant resources on an “all or none” basis. If the complete set of resources needed by a process is not currently available, then the process must wait until the complete set is available. While the process waits, however, it may not hold any resources; thus the “wait for” condition is denied and deadlocks simply cannot occur. This strategy can lead to serious waste of resources. For example, a program requiring ten tape drives must request and receive all ten drives before it begins executing. If the program needs only one tape drive to begin execution and does not need the remaining tape drives for several hours, then substantial computer resources (nine tape drives) will sit idle for several hours. This strategy can also cause indefinite postponement (starvation), since not all the required resources may become available at once.
 Elimination of “No-preemption” Condition
  • The no-preemption condition can be alleviated by forcing a process waiting for a resource that cannot immediately be allocated to relinquish all of its currently held resources, so that other processes may use them to finish. Suppose a system does allow processes to hold resources while requesting additional resources. Consider what happens when a request cannot be satisfied: a process holds resources that a second process may need in order to proceed, while the second process may hold the resources needed by the first process. This is a deadlock. This strategy requires that when a process holding some resources is denied a request for additional resources, it must release its held resources and, if necessary, request them again together with the additional resources. Implementation of this strategy effectively denies the “no-preemption” condition.
    High Cost   When a process releases resources, it may lose all its work to that point. One serious consequence of this strategy is the possibility of indefinite postponement (starvation): a process might be held off indefinitely as it repeatedly requests and releases the same resources.
 Elimination of “Circular Wait” Condition
  • The circular wait condition can be denied by imposing a total ordering on all resource types and requiring that every process request resources in increasing order of that enumeration: a process holding a resource of a given rank may only request resources of higher rank. With such a linear ordering, no cycle of waiting processes can form, so deadlock cannot occur. The drawback is that resources must be acquired in the prescribed order rather than in the order the program actually needs them, which can tie up resources unnecessarily.
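One way to realize such an ordering in code is to rank the resources and always acquire them in increasing rank, whichever order the caller names them. The Python sketch below is our illustration (the resource names and ranks are hypothetical); even though the two jobs list the resources in opposite orders, both end up locking them in the same order, so no circular wait can form.

```python
import threading

# Hypothetical ranked resources: lower rank must be acquired first.
rank = {"tape": 1, "printer": 2}
locks = {name: threading.Lock() for name in rank}

def acquire_in_order(names):
    # Sorting by rank means every process takes "tape" before
    # "printer", regardless of the order it asked for them.
    for name in sorted(names, key=rank.get):
        locks[name].acquire()

def release_all(names):
    for name in names:
        locks[name].release()

done = []

def job(label, wanted):
    acquire_in_order(wanted)     # both jobs lock tape, then printer
    done.append(label)
    release_all(wanted)

t1 = threading.Thread(target=job, args=("p1", ["tape", "printer"]))
t2 = threading.Thread(target=job, args=("p2", ["printer", "tape"]))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(done))  # ['p1', 'p2']: both finish, no deadlock
```

Compare this with the tape/printer deadlock earlier: the only change is that requests are reordered by rank before being issued.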
