Wednesday 5 November 2014

SE

Introduction to Software Engineering 

Instructor: Steven A. Demurjian

Textbook: C++ Programming with Design Patterns Revealed, by Tomasz Müldner. Addison-Wesley.
Textbook 2: ANSI C Library Reference Guide

CSE230 Introductory Overheads (PPT)
FUNDAMENTALS OF SOFTWARE ENGINEERING OVERHEADS
SOFTWARE ARCHITECTURES 
UML: UNIFIED MODELING LANGUAGE 
SOFTWARE SPECIFICATION
  • Reading - The Specification Process: PDF

  • Overheads:  PDF
ASPECT-ORIENTED SOFTWARE DESIGN AND PROGRAMMING
 Overheads (PPT)

Course Description

An introduction and exploration of concepts and issues related to large-scale software systems development. Areas of exploration include technical complexities, organizational issues, and communication techniques for large-scale development. Students participate in teams emulating industrial development. The projects cover the principal system development life-cycle phases, from requirements analysis through software design to final implementation. Issues relating to real-time control systems, human factors, reliability, performance, operating costs, maintainability, and others are addressed and resolved in a reasonable manner. By Professor Christopher D

Resources

There will not be a specific text for this course, but several useful texts are worth considering if you are looking to expand your library:
  • Sommerville, Software Engineering, 8th Edition, Addison-Wesley, 2007.
  • Hunt and Thomas, The Pragmatic Programmer, Addison-Wesley, 2000.
  • Beck, Extreme Programming Explained: Embrace Change, Addison-Wesley, 1999.
  • Gamma, Helm, Johnson, and Vlissides, Design Patterns, Addison-Wesley, 1995.
    (Often referred to as the "Gang of Four" book)
  • Czarnecki and Eisenecker, Generative Programming, Addison-Wesley, 2001.
  • Meyer, Object-Oriented Software Construction, 2nd Ed., 1997.
  • Booch, Object-Oriented Analysis and Design, 2nd Ed., 1994.
Discussion Topics
Lecture Material
Course Overview
(slides: in ppt format)
Software Requirements: Overview and Motivation
(slides: in ppt format)
Project Descriptions from Traffic, Building, Island, and Infrastructure Teams
Software Requirements: Perspective and Definition
(slides: in ppt format)
Project Requirements Outlines from Traffic, Building, Island, and Infrastructure Teams
Software Requirements: Processes I
(slides: in ppt format)
Teams' Requirements Definition Documents
Software Requirements: Processes II
(slides: in ppt format)
Software Requirements: Products
(slides: in ppt format)
Software Architecture: Introduction
(slides: in ppt format)
Real World Requirements Example
(slides: in ppt format)
Software Requirements: Basic Methods I
(slides: in ppt format)

Software Requirements: Basic Methods II
(slides: in ppt format)

Software Requirements: Complex Models and Reviews
(slides: in ppt format)
Software Architecture: Specification I
(slides: in ppt format)
Software Architecture: Specification II
(slides: in ppt format)

Software Architecture: Design I
(slides: in ppt format)
Software Architecture: Specification III
(slides: in ppt format)

Software Architecture: Design II
(slides: in ppt format)
Software Architecture: Design III
(slides: in ppt format)


Course Summary: Review of Software Engineering Requirements and Architecture
(slides: in ppt format)




All Subjects: Object-Oriented Analysis and Design, UML, DS, DLD, DAA

Really awesome for programming lovers. I think that if someone is in computer science but says no to programming, then what are they doing in computer science?

SLIDES:

Lecture Video Segment Link To Source Code

Orientation

-- General Orientation Orientation.ppt

Advanced Programming

1a Basics basics.ppt
1b Derived Data Types derived.ppt

Concepts in Object-Oriented Design

2a Introduction intro.ppt
2b UML - Based Notation UMLNotation.pdf

Classes

3a Introduction classIntro.ppt
3b Constructors and Destructors constAndDest.ppt
3c Access Specifiers specifiers.ppt
3d Copy Constructors copyConst.ppt
3e Static Members statMembers.ppt
3f Friends friends.ppt
3g Class Data Members and Initializers initial.ppt
3h This and Pointer to Members thisPointer.ppt
3i Member Qualifiers qualifiers.ppt

Stream Input/Output Basics

4a Basics streamBasics.ppt
4b Manipulators manipulators.ppt
4c File IO streamFiles.ppt

Abstract Data Type

5a Fraction Example fractionIE.ppt

Operator Overloading

6a Member Functions memberFunctions.ppt
6b Friend Functions & Special Forms specialForms.ppt, formsExample.html

    Inheritance

    7a Part I inheritanceI.ppt
    7b Part II inheritanceII.ppt

    Virtual Functions

    8a Introduction virtualFunctions.ppt
    8b Polymorphism polymorphism.ppt

    Exception Handling

    9a Introduction excepHandling.ppt

    Templates

    10a Introduction templates.ppt, templateExamples.html

    Standard Template Library

    11a Introduction stanTempLibrary.ppt

    History of Graphical User Interfaces

    12a Introduction historyOfGUIs.ppt

    Introduction To Visual Programming

    13a Introduction

    QT Designer

    14a demo  

    Boolean Algebra

    15a Introduction booleanAlg.ppt
    15b Standard Forms standardForms.ppt

    Digital Logic

    16a Introduction digitalLogic.ppt

    Analysis of Algorithms

    17a Introduction using Insertion Sort insertionSort.ppt
    17b Growth Rates growthRates.ppt
    17c Big-Oh bigOh.ppt
    17d Big-Omega and Big-Theta bigOhbigTheta.ppt
    17e Statement Analysis statementAnalysis.ppt

    Elementary Data Structures

    18a Array Lists arrayLists.ppt
    18b Linked Lists linkedLists.ppt

    Recursion

    19a Review of Recursion recursion.ppt

    Abstract Data Types

    20a Stacks stacks.ppt
    20b Queues queues.ppt
    20c Polynomials polynomials.ppt
    20d Sparse Matrices sparseMatrices.ppt
    20e Trees trees.ppt

    Elementary Sorting Algorithms

    21a Sorting sorting.ppt

    Quicksort

    22a Algorithm algorithm.ppt
    22b Analysis analysis.ppt

    Mergesort

    23a Introduction mergeSort.ppt

    Ethics

    24a Ethics ethics.ppt

    OS

    CPU SCHEDULING

    This is an attempt to put together common CPU scheduling questions from various sources for the convenience of computer science students.


    1.
    Consider N processes sharing the CPU in a round-robin fashion (N>=2). Assume that each context switch takes S msec and that each time quantum is Q msec. For simplicity, assume that processes never block on any event and simply switch between the CPU and the ready queue.
    In the following your answers should be functions of N, S and T.

    a) Find the maximum value of Q such that no process will ever go more than T msec without being scheduled (measured from the start of one of its quanta to the start of its next).


    Time taken by one process per round = quantum Q + context switch S = Q + S


    Max wait time, T = N(Q+S)


    T = NQ+NS


    Q = (T-NS)/N

    b) Find the maximum value of Q such that no process will ever go more than T msecs between executing instructions on the CPU?

    Max wait time, T = N(Q+S) - Q. The process executes its last instruction at the very end of its own quantum, so its own quantum Q does not count toward the gap between executing instructions.


    T = NQ+NS-Q


    T = Q(N-1)+NS


    Q = (T-NS)/(N-1)
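Both formulas can be sanity-checked by replaying the round-robin timeline in code. This is a sketch under the question's assumptions (no blocking, every turn costs one context switch S followed by a full quantum Q); the function name and the sample values N = 4, S = 2, T = 100 are mine:

```python
def max_gap(n, q, s, rounds=4):
    """Longest time any process spends off the CPU between two of its
    turns (the part-b notion of waiting), over `rounds` full rounds."""
    t = 0.0
    last_end = [0.0] * n          # when each process last left the CPU
    worst = 0.0
    for _ in range(rounds):
        for i in range(n):
            t += s                # context switch into process i
            worst = max(worst, t - last_end[i])
            t += q                # process i runs its full quantum
            last_end[i] = t
    return worst

n, s, t_max = 4, 2.0, 100.0
q_b = (t_max - n * s) / (n - 1)      # part (b): Q = (T - NS)/(N - 1)
assert abs(max_gap(n, q_b, s) - t_max) < 1e-6

q_a = (t_max - n * s) / n            # part (a): Q = (T - NS)/N
assert abs(n * (q_a + s) - t_max) < 1e-6   # full start-to-start cycle is T
```

With N = 4 and S = 2, part (b) gives Q ≈ 30.67 msec, and the simulated worst-case gap between a process's turns comes out to T = 100 msec, as the formula predicts.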


    2.
    Suppose that there are two processes, PH and PL, running in a system. Each process is single-threaded. The operating system’s scheduler is preemptive and uses round-robin scheduling with a quantum of q time units.

    The scheduler supports two priority levels, HIGH and LOW. Processes at LOW priority will run only if there are no runnable HIGH priority processes. Process PH is a HIGH priority process.

    It behaves as described in the following pseudo-code:

    while (TRUE) do
        compute for tc time units
        block for tb time units to wait for a resource
    end while

    That is, if this process were the only one running in the system, it would alternate between running for tc units of time and blocking for tb units of time. Assume that tc is less than q.

    Process PL is a low priority process. This process runs forever, doing nothing but computation. That is, it never blocks waiting for a resource.

    a.
    For what percentage of the time will the low priority process PL be running in this system?
    Express your answer in terms of tb and tc.


    tb/(tb + tc)



    b.
    Repeat part (a), but this time under the assumption that there are two HIGH priority processes (PH1 and PH2) and one LOW priority process (PL). Assume that each HIGH priority process waits for a different resource. Again, express your answer in terms of tb and tc. Your answer should be correct for all tb > 0 and all 0 < tc < q.

    (tb − tc)/(tb + tc) if tc < tb

    0 if tc ≥ tb
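The two answers fit one pattern: over each cycle of length tb + tc, the HIGH-priority processes together demand n·tc of CPU and PL gets whatever is left. A hedged sketch of that pattern (the function name is mine, and extending beyond n_high = 2 is my own extrapolation, not part of the original answer):

```python
def pl_share(tb, tc, n_high=1):
    """Fraction of the CPU left for the LOW-priority process when
    n_high identical HIGH-priority processes each compute for tc and
    then block for tb (assumes tc < q, so bursts are never preempted)."""
    return max(0.0, (tb - (n_high - 1) * tc) / (tb + tc))

assert pl_share(3.0, 1.0, n_high=1) == 3.0 / 4.0   # part (a): tb/(tb+tc)
assert pl_share(3.0, 1.0, n_high=2) == 2.0 / 4.0   # part (b), tc < tb
assert pl_share(1.0, 2.0, n_high=2) == 0.0         # part (b), tc >= tb
```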

    ------------------------------------------------------------------------------------------------------------

    Suppose a processor uses a prioritized round robin scheduling policy. New processes
    are assigned an initial quantum of length q. Whenever a process uses its entire quantum without blocking, its new quantum is set to twice its current quantum. If a process blocks before its quantum expires, its new quantum is reset to q. For the purposes of this question, assume that
    every process requires a finite total amount of CPU time.

    (a) Suppose the scheduler gives higher priority to processes that have larger quanta. Is starvation possible in this system? Why or why not?
    No, starvation is not possible. Because we assume that a process will terminate, the worst
    that can happen is that a CPU bound process will continue to execute until it completes.
    When it finishes, one of the lower priority processes will execute. Because the I/O bound
    processes will sit on the low priority queue, they will eventually make it to the head of the
    queue and will not starve.
    (b) Suppose instead that the scheduler gives higher priority to processes that have smaller quanta. Is starvation possible in this system? Why or why not?
    Yes, starvation is possible. Suppose a CPU bound process runs on the processor, uses its
    entire quantum, and has its quantum doubled. Suppose a steady stream of I/O bound processes
    enter the system. Since they will always have a lower quantum and will be selected for
    execution before the process with the doubled quantum, they will starve the original process.
    _______________________________________________________________
    I have just invented a new scheduling algorithm that I claim gives the highest

    priority to processes that have just entered the system, but is fair to all processes. The algorithm
    works like this:
    There are two queues, one for new processes and one for old processes. When
    a process enters the system, it is put at the end of the new queue. After 2 milliseconds on the new queue, whether a process has been scheduled or not, it is moved to the end of the old queue.
    When it is time to schedule a process, the system schedules the process at the head of one of
    the queues, alternating between the two queues. Each process runs to completion before the next process is scheduled. Assume that processes enter the system at random times and that most processes take much longer than 2 milliseconds to execute.
    (a) Does this algorithm give the highest priority to new processes? Explain your answer.

    This algorithm does not give the highest priority to new processes. There are several reasons.
    First, even if executing processes took much less than 2 milliseconds to execute, the scheduler
    would alternate between “new” processes and “old” processes, giving equal priority to both.
    Second, given that executing processes take much longer than 2 milliseconds to execute, most
    “new” processes will not get scheduled during their 2 milliseconds on the ”new” queue and
    will then drop to the bottom of the “old” queue without ever having been given any priority.
    Now they have to wait for processes older than them to execute, and processes newer than
    them to execute.
    (b) Is this algorithm starvation free? Explain your answer.
    Yes. Because the scheduler alternates between the two queues, every job that is not executed
    from the “new” queue eventually ends up in the “old” queue and from there, eventually
    reaches the head of the queue and is executed.
    (c) Discuss whether this algorithm is fair to all processes. By “fair” we mean every process has a wait time approximately equal to the average wait time, assuming all processes have close to the same execution time.
    No, it is not fair to all processes. Some lucky “new” processes will get to execute when they
    reach the top of the “new” queue, while some will drop to the bottom of the “old” queue and
    have to wait much longer to execute. These unlucky processes will have to wait for all of
    the processes older than them to complete, and will have to wait for many processes younger
    than them to complete as well.
    ____________________________________________________________________________________________
    Can any of the three scheduling schemes (FCFS, SRTF, or RR) result in

    starvation? If so, how might you fix this?

    Yes. The SRTF algorithm can result in starvation of long tasks if short tasks keep arriving.
    The FCFS algorithm will result in starvation only if some thread runs forever. Finally, RR
    does not result in starvation, since all threads get some of the CPU. To fix SRTF, we might
    add some priority mechanism to make sure that threads that have waited a long time
    without running get at least some of the CPU now and then (priority increments when
    threads don't get to run). The multi-level scheduler is a good approximation to SRTF that
    can be designed to avoid starvation (the result wouldn't be quite SRTF any more).
    To fix FCFS, we would need to find some way of preempting threads that have run forever,
    possibly by giving some CPU to other threads. Effectively, this would add elements of RR
    or multi-level scheduling into FCFS (it wouldn’t be quite FCFS any more).

    _____________________________________________________________

    Describe the differences among short-term, medium-term, and long-term scheduling.

    Ans:
    Short-term (CPU scheduler) – selects from the jobs in memory those that are ready to execute and allocates the CPU to one of them.
    Medium-term – used especially in time-sharing systems as an intermediate scheduling level. A swapping scheme is implemented to remove partially run programs from memory and reinstate them later to continue where they left off.
    Long-term (job scheduler) – determines which jobs are brought into memory for processing.
    The primary difference is the frequency of their execution. The short-term scheduler must select a new process quite often; the long-term scheduler runs much less often, since it handles placing jobs in the system and may wait a while for a job to finish before admitting another one.
    ____________________________________________________

    Some systems simply treat the readers writers problems as critical section problems and hence the implementation simply use P and V. What requirement of the Readers Writers problem does this implementation not satisfy?

    Readers cannot read concurrently.
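To see what the P/V-only implementation loses, here is a sketch of the classic first readers–writers solution (Python, with a semaphore standing in for the single P/V lock): the first reader takes the data semaphore and the last reader releases it, so readers in between overlap instead of queuing one at a time. Class and method names are mine.

```python
import threading

class RWLock:
    """First readers-writers pattern: readers share, writers exclude.
    A single P/V semaphore around the data would serialize readers;
    here only the first/last reader touch the data semaphore."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()       # protects _readers
        self._room = threading.Semaphore(1)  # the P/V lock on the data

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._room.acquire()   # first reader locks out writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._room.release()   # last reader lets writers back in

    def acquire_write(self):
        self._room.acquire()

    def release_write(self):
        self._room.release()

rw = RWLock()
rw.acquire_read()
rw.acquire_read()   # a second reader enters concurrently -- no blocking
assert rw._room.acquire(blocking=False) is False  # a writer would block
rw.release_read()
rw.release_read()
assert rw._room.acquire(blocking=False) is True   # writers may enter now
rw._room.release()
```

Note this sketch inherits the classic weakness of the "first" solution: a steady stream of readers can starve writers.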

    ______________________________________________________________
    Assume that 3 processes all with requirements of 1 second of CPU time each and

    no I/O arrive at the same time.


    a) What will be the average response time (i.e., average time to completion) for the processes under FIFO scheduling?

    Answer: 2 seconds



    Explanation:

    Time for completion for process A = 1

    Time for completion for process  B = 2

    Time for completion for process C = 3

    Average time for completion = (1+2+3)/3 = 2
    b) Answer part "a" for Round Robin (RR) scheduling, assuming a timeslice of 0.1 sec and no overhead for context switches (i.e., context switches are free).

    Answer: 2.9 seconds

    Explanation:


    Time for completion for process A = 2.8

    Time for completion for process B = 2.9

    Time for completion for process C = 3.0

    Average time for completion = (2.8 + 2.9 + 3.0)/3 = 2.9

    c) Answer part "a" for Shortest Job First (SJF) scheduling.
    Answer: 2 seconds
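Parts (a)–(c) can be reproduced with a minimal scheduler sketch (zero-cost context switches, all jobs arriving at time 0; the helper name is mine). With identical burst times SJF degenerates to FIFO, which is why part (c) also gives 2 seconds:

```python
def completion_times(bursts, quantum):
    """Round-robin completion times; quantum=float('inf') reduces to FIFO."""
    remaining = list(bursts)
    done = [None] * len(bursts)
    t = 0.0
    while any(d is None for d in done):
        for i in range(len(bursts)):
            if done[i] is not None:
                continue
            run = min(quantum, remaining[i])   # run one slice (or finish)
            t += run
            remaining[i] -= run
            if remaining[i] <= 1e-12:          # tolerate float residue
                done[i] = t
    return done

fifo = completion_times([1.0, 1.0, 1.0], float('inf'))
assert fifo == [1.0, 2.0, 3.0]             # part (a): average 2 s
rr = completion_times([1.0, 1.0, 1.0], 0.1)
assert abs(sum(rr) / 3 - 2.9) < 1e-6       # part (b): average 2.9 s
```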
    d) Multilevel Feedback Queue Scheduling (MFQS) is a fairly good, general CPU scheduling algorithm, but it can lead to starvation under certain circumstances. Briefly describe how starvation can occur using MFQS and how to modify MFQS so that starvation can be avoided.
    Long jobs on low-priority queues can starve if a continuous stream of short jobs keeps the high-priority queues full.
    Soln: hold a lottery among the QUEUES, weighted in favor of short queues,
    OR
    implement aging, so that jobs that remain on low-priority queues for a long time are promoted.
    e) What advantage is there in having different time-quantum (i.e. timeslice) sizes on different levels of the MFQS approach?
    1) different time quanta help differentiate between
    long and short jobs
    2) for long jobs, short quanta mean unnecessary context
    switches (so different time quanta improve throughput)
    3) Introduces more fairness


    _______________________________________________________________________________________________

    Consider a system with two processes, P1 and P2. Process P1 is running and process P2 is ready to
    run. The operating system uses preemptive round robin scheduling. Consider the situation in which
    P1’s scheduling quantum is about to expire, and P2 will be selected to run next.
    List the sequence of events that will occur in this situation, in the order in which they will occur.
    Create your list by choosing from the events shown below, using the event numbers to identify events.
    For example, a valid answer might be: “7,5,4,8,5,2”. If an event occurs more than once, you may
    include it more than once in your list.
    1. P1’s user-level state is saved into a trap frame
    2. P2’s user-level state is saved into a trap frame
    3. a timer interrupt occurs
    4. the operating system’s scheduler runs
    5. there is a context switch from thread T1 (from process P1) to thread T2 (from process P2).
    6. P1’s user-level state is restored from a trap frame
    7. P2’s user-level state is restored from a trap frame
    8. a “system call” instruction is executed
    9. thread T2 (for process P2) is created
    10. thread T1 (from process P1) is destroyed
    Write your answer here:
    3,1,4,5,7

    _____________________________________________________________________________

    Suppose that the operating system is running a round-robin scheduler with a 50 msec time quantum. There are three processes with the following characteristics:
    • Process A runs for 60 msec, blocks for 100 msec, runs for 10 msec and terminates.
    • Process B runs for 70 msec, blocks for 40 msec, runs for 20 msec, and terminates.
    • Process C runs for 20 msec, blocks for 80 msec, runs for 60 msec, and terminates.
    Process A enters the system at time 0. Process B enters at time 10 msec. Process C enters at time 20 msec. Trace the evolution of the system. You should ignore the time required for a context switch. The time required for process P to block is the actual clock time between the time that P blocks and the time that it unblocks, regardless of anything else that is happening.

    Answer:

    Time    Running process     Events
    0-50        A                  B enters at time 10.  C enters at time 20.
    50-100      B
    100-120     C                  C blocks until time 200.
    120-130     A                  A blocks until time 230.
    130-150     B                  B blocks until time 190
    150-190     Idle               B unblocks at time 190
    190-210     B                  C unblocks at time 200. B terminates at time 210
    210-260     C                  A unblocks at time 230.
    260-270     A                  A terminates at time 270.
    270-280     C                  C terminates at time 280.
     
    _______________________________________________________________________________________________

    Consider the following preemptive priority-scheduling algorithm based on dynamically changing priorities. Larger priority numbers imply higher priority. When a process is waiting for the CPU (in the ready queue, but not running), its priority changes at a rate
    alpha; when it is running, the priority changes at a rate beta. All processes are given a priority of 0 when they enter the ready queue for the first time. The parameters alpha and beta can be set to give many different scheduling algorithms.
    (a) What is the algorithm that results from beta > alpha > 0?
    (b) What is the algorithm that results from alpha < beta < 0?
    • (a) Since processes start at 0, any processes that have been in the system (either running or waiting) have higher priority. Therefore, new processes go to the back of the queue. When a process runs, its priority keeps increasing at the rate of beta, which is more of an increase than for the processes in the ready queue. Therefore, every time the process has timer runout, it goes to the front of the ready queue and gets dispatched again. This is the equivalent of FCFS.
    • (b) This time, any new processes entering the system have higher priority than any old ones, since priority starts at zero and then becomes negative when the process waits or runs. New processes go in at the front of the queue. When a process runs or waits, its priority decreases, with the waiting processes decreasing faster than the running process. This is a LIFO (last-in-first-out) algorithm.  
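Both cases can be checked with a discrete-time sketch of this scheduler (1-unit ticks; the function name and sample workloads are mine). With beta > alpha > 0 the completion order matches arrival order (FCFS); with alpha < beta < 0 it reverses (LIFO):

```python
def schedule_order(arrivals, bursts, alpha, beta, dt=1):
    """Discrete-time sketch: priority grows at rate alpha while a
    process waits in the ready queue and at rate beta while it runs;
    larger numbers win and the scheduler is preemptive at every tick.
    Returns the order in which processes complete."""
    n = len(arrivals)
    remaining = list(bursts)
    prio = [0.0] * n              # every process enters at priority 0
    order, t = [], 0
    while len(order) < n:
        ready = [i for i in range(n)
                 if arrivals[i] <= t and remaining[i] > 0]
        if not ready:
            t += dt
            continue
        run = max(ready, key=lambda i: prio[i])   # highest priority runs
        for i in ready:
            prio[i] += (beta if i == run else alpha) * dt
        remaining[run] -= dt
        if remaining[run] == 0:
            order.append(run)
        t += dt
    return order

# beta > alpha > 0 behaves like FCFS; alpha < beta < 0 behaves like LIFO.
assert schedule_order([0, 1, 2], [3, 3, 3], alpha=1, beta=2) == [0, 1, 2]
assert schedule_order([0, 1, 2], [3, 3, 3], alpha=-2, beta=-1) == [2, 1, 0]
```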

    Digital Logic and Design (Very Important)

     

    DS

    Data Structures (DS) – linked lists, trees, etc.

    1. Introduction
    2. Programming Strategies
    3. Data Structures
    4. Searching
    5. Complexity
    6. Queues
    7. Sorting
    8. Searching Revisited
    9. Dynamic Algorithms
    10. Graphs
    11. Huffman Encoding
    12. FFT
    13. Hard or Intractable Problems