You can emulate us in our passion for acquiring new knowledge, and disseminate it to as many people as you can.
You have great potential. Work with dedication and get ready for the challenges of your career!
Dedicated to our students, who bring glory to us and our faculty.
The V-model is a model in which execution of processes happens in a sequential manner in a V-shape. It is also known as the Verification and Validation model. The V-model is an extension of the waterfall model and is based on the association of a testing phase with each corresponding development stage. So there are Verification phases on one side of the 'V' and Validation phases on the other side. The Coding phase joins the two sides of the V-model.
Verification Phases
Following are the Verification phases in V-Model:
·Requirement Analysis: This phase involves detailed communication with the customer to understand their expectations and exact requirements. This is a very important activity and needs to be managed well, as most customers are not sure about what exactly they need. The acceptance test design planning is done at this stage, as business requirements can be used as an input for acceptance testing.
·System Design: System design comprises understanding and detailing the complete hardware and communication setup for the product under development. The system test plan is developed based on the system design. Doing this at an earlier stage leaves more time for actual test execution later.
·Architectural Design: Architectural specifications are understood and designed in this phase. Usually more than one technical approach is proposed, and the final decision is taken based on technical and financial feasibility. The system design is broken down further into modules taking up different functionality. This is also referred to as High Level Design (HLD).
·Module Design: It is important that the design is compatible with the other modules in the system architecture and with the other external systems. Unit tests are an essential part of any development process and help eliminate the maximum number of faults and errors at a very early stage. Unit tests can be designed at this stage based on the internal module designs.
Coding Phase
The actual coding of the system modules designed in the design phase is taken up in the Coding phase. The most suitable programming language is decided based on the system and architectural requirements. The coding is performed based on the coding guidelines and standards. The code goes through numerous code reviews and is optimized for best performance before the final build is checked into the repository.
Validation Phases
Following are the Validation phases in V-Model:
·Unit Testing: Unit tests designed in the module design phase are executed on the code during this validation phase. Unit testing is testing at the code level and helps eliminate bugs at an early stage, though not all defects can be uncovered by unit testing.
·Integration Testing: Integration testing is associated with the architectural design phase. Integration tests are performed to test the coexistence and communication of the internal modules within the system.
·System Testing: System testing is directly associated with the system design phase. System tests check the entire system functionality and the communication of the system under development with external systems. Most of the software and hardware compatibility issues can be uncovered during system test execution.
·Acceptance Testing: Acceptance testing is associated with the business requirement analysis phase and involves testing the product in the user environment. Acceptance tests uncover compatibility issues with the other systems available in the user environment. They also uncover non-functional issues such as load and performance defects in the actual user environment.
Following are the general steps to be followed to perform the experiments in the Advanced Network Technologies Virtual Lab.
Read the theory about the experiment
View the simulation provided for a chosen, related problem
Take the self evaluation to judge your understanding (optional, but recommended)
Solve the given list of exercises
Experiment Specific Instructions
In the theory part we have learned how to work with NS2. In this section we learn how to make your system ready so that you can work with Network Simulator 2. NS2 is open-source software; it can be downloaded from the Internet and installed.
Basic Requirements:
A computer with access to the Internet
Minimum 512 MB RAM
Operating system: Linux(Ubuntu 10.04)
ns-2.34 package
gcc-4.4.3
make tools
The following instructions for downloading and installing ns2 are for a system with:
Operating System: Linux (Ubuntu 10.04)
ns2: 2.34
gcc: 4.4.3
Some of the known problems that can be faced during installation, as well as their solutions, are discussed in one of the sections below.
The steps for installation should ideally be applicable to other versions and/or configurations of Linux as well. Any other problem that might arise would require further troubleshooting.
Downloading ns-2.34
To download ns2, go to http://www.isi.edu/nsnam/ns/ns-build.html. Here you can download the ns all-in-one package, or you can download the packages separately.
Let's learn how to download the packages separately.
First go to http://www.isi.edu/nsnam/ns/ns-build.html.
Then download Tcl and Tk from http://www.tcl.tk/software/tcltk/downloadnow84.tml [note: the versions of Tcl and Tk must be the same].
Note: there are a few other things to be downloaded, but that depends upon your requirements. For example, if an error occurs due to the absence of a package, then you need to identify the error and download the required package.
Installation
Here we will install the packages separately.
All the files will be in compressed (.tar.gz) format, so first you need to extract them. The command to extract a file is:
tar -xzvf <file_name>
For example, to extract the Tcl package, type: tar -xzvf tcl8.4.19.tar.gz (use the actual name of the downloaded archive).
To unzip all the files together use the following command:
for ifile in `ls *.tar.gz`
do
tar -xzvf $ifile
done
Next, we will install Tcl. The commands required are:
cd tcl8.4.19
ls
cd unix
./configure
make
sudo make install
Install Tk:
cd tk8.4.19
ls
cd unix
./configure
make
sudo make install
Install OTcl:
cd otcl-1.13
./configure --with-tcl=../tcl8.4.19 # note: while configuring we need to specify the path of tcl
make
sudo make install
Install Tclcl-1.19:
cd tclcl-1.19
./configure --with-tcl=../tcl8.4.19 #note-while configuring we need to specify the path of tcl
make
sudo make install
Install ns-2.34:
cd ns-2.34
./configure --with-tcl=../tcl8.4.19
make
sudo make install
Install NAM:
cd nam-1.14
./configure --with-tcl=../tcl8.4.19
make
sudo make install
Install xgraph:
cd xgraph-12.1
./configure
make
sudo make install
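Once all the packages above are installed, a quick sanity check is to start the ns interpreter and issue a trivial Tcl command. This assumes the ns binary was installed into a directory on your PATH (e.g. /usr/local/bin); otherwise give the full path to the binary:
ns
% puts "hello ns2"
hello ns2
% exit
If the interpreter starts and prints the string, the core of ns-2.34 is working; problems with shared libraries or Tk will typically show up at this point (see the known problems below).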
Probable problems that could appear while installing the packages, and their solutions
1. Tk was installed properly but it failed to run somehow.
How to identify this:
After installing tk8.4.19, try to run the script tk8.4.19/unix/wish from the terminal. A small window will open; close it. If no error message appears in the terminal, then the Tk installation is successful and Tk is working properly. This can also be verified after installing nam, by trying to run nam. The error message would be something like:
nam:
[code omitted because of length]
: no event type or button # or keysym
while executing
"bind Listbox <MouseWheel> {
%W yview scroll [expr {- (%D / 120) * 4}] units
}"
invoked from within
"if {[tk windowingsystem] eq "classic" || [tk windowingsystem] eq "aqua"} {
bind Listbox <MouseWheel> {
%W yview scroll [expr {- (%D)}] units
}
bind Li..."
Solution:
If you get error messages, then download the patch files tk-8.4.18-tkBind.patch and tk-8.4-lastevent.patch from http://bugs.gentoo.org/show_bug.cgi?id=225999. Then copy those files into the Tk directory. Now apply the patches using the following commands:
patch -p1 < tk-8.4.18-tkBind.patch
patch -p1 < tk-8.4-lastevent.patch
If you fail to apply the patches, then open the patch file, check the name of the file to be patched, and make the relevant modifications to that file manually.
Note: in a patch file, the contents of the original file are shown with a minus (-) sign at the beginning, while the modified contents begin with a plus (+) sign.
After applying the patches, install Tk from the beginning again and verify whether 'wish' runs properly (as indicated above).
2. Problem while running 'make' for OTcl
otcl.o: In function `OTclDispatch':
/home/barun/Desktop/ns2/otcl-1.13/otcl.c:495: undefined reference to `__stack_chk_fail_local'
otcl.o: In function `Otcl_Init':
/home/barun/Desktop/ns2/otcl-1.13/otcl.c:2284: undefined reference to `__stack_chk_fail_local'
ld: libotcl.so: hidden symbol `__stack_chk_fail_local' isn't defined
ld: final link failed: Nonrepresentable section on output
make: *** [libotcl.so] Error 1
Solution:
Go to the 'configure' file. At line no. 5516,
SHLIB_LD="ld -shared"
change the above to
SHLIB_LD="gcc -shared"
For further information you can go to http://nsnam.isi.edu/nsnam/index.php/User_Information
3. Problem while running 'sudo make install' for ns-2.34
ns: error while loading shared libraries: libotcl.so: cannot open shared object file: No such file or directory
Solution:
We need to set the following environment variables, and store them in the ~/.bashrc file.
OTCL_LIB=/your/path/ns-2.34/otcl-1.13
NS2_LIB=/your/path/ns-2.34/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$OTCL_LIB:$NS2_LIB
Now open a new terminal, and type ns. This should now work without any error.
4. Problem while running 'make' for nam
xwd.c:87:29: error: X11/Xmu/WinUtil.h: No such file or directory
make: *** [xwd.o] Error 1
Solution:
Install the package libxmu-dev. Then run
./configure
make clean
make
sudo make install
5. Problem while running 'make' for xgraph
/usr/include/stdio.h:651: note: expected ‘size_t * __restrict__’ but argument is of type ‘char *’
dialog.c:780: error: too few arguments to function ‘getline’
dialog.c: In function ‘getline’:
dialog.c:899: error: argument ‘lptr’ doesn’t match prototype
/usr/include/stdio.h:651: error: prototype declaration
dialog.c:899: error: number of arguments doesn’t match prototype
/usr/include/stdio.h:651: error: prototype declaration
make: *** [dialog.o] Error 1
Solution:
Copy and unzip the patch file xgraph_12.1-12.diff into the xgraph-12.1 directory, then apply it:
patch < xgraph_12.1-12.diff
After applying the patch, if you get any problem with the configure.in file, such as:
configure.in:3: version mismatch. This is Automake 1.11,
configure.in:3: but the definition used by this AM_INIT_AUTOMAKE
configure.in:3: comes from Automake 1.11.1. You should recreate
configure.in:3: aclocal.m4 with aclocal and run automake again.
then go to the configure.in file and add 'AC_LOCAL' in the first line.
After completing this experiment you will be able to:
Identify ambiguities, inconsistencies and incompleteness from a requirements specification
Identify and state functional requirements
Identify and state non-functional requirements
Time Required
Around 3.00 hours
After completing this experiment, you will be able to:
Understand the basic idea of the open-source network simulator NS2, and how to download, install and work with NS2 using Tcl programming.
Define the different agents and their applications, such as TCP, FTP over TCP, UDP, CBR and CBR over UDP.
Identify and solve NS2 installation errors.
Introduction
The network simulator is a discrete event, packet-level simulator. It covers a very large number of applications and protocols for different network types, consisting of different network elements and traffic models. The network simulator is a package of tools that simulates the behavior of networks: creating network topologies, logging events that happen under any load, analyzing those events and understanding the network. The main aim of our first experiment is to learn how to use the network simulator, to get acquainted with the simulated objects, to understand the operations of network simulation, and to analyze the behavior of the simulation objects using the network simulator.
Platform required to run network simulator
Unix and Unix-like systems
Linux (use Fedora or Ubuntu versions)
FreeBSD
SunOS/Solaris
Windows 95/98/NT/2000/XP
Backend Environment of Network Simulator
Network Simulator is mainly based on two languages: C++ and OTcl. OTcl is the object-oriented version of the Tool Command Language. The network simulator is a bank of different network and protocol objects. C++ helps in the following ways:
It helps to increase the efficiency of simulation.
It is used to provide details of the protocols and their operation.
It is used to reduce packet and event processing time.
OTcl helps in the following ways:
With the help of OTcl we can describe different network topologies.
It helps us to specify the protocols and their applications.
It allows fast development.
Tcl is compatible with many platforms and is flexible for integration.
Tcl is very easy to use and is available for free.
Basics of Tcl Programming (w.r.t. ns2)
Before we get into the program we should consider the following things:
Initialization and termination aspects of the network simulator.
Defining the network nodes, links, queues and topology.
Defining the agents and their applications.
Network Animator (NAM).
Tracing.
Initialization
To start a new simulator we write
set ns [new Simulator]
From the above command we get that a variable ns is being initialized using the set command. Here the code [new Simulator] is an instantiation of the class Simulator, which uses the reserved word 'new'. So we can call all the methods present inside the class Simulator using the variable ns.
Creating the output files
#To create the trace files we write
set tracefile1 [open out.tr w]
$ns trace-all $tracefile1

#To create the nam files we write
set namfile1 [open out.nam w]
$ns namtrace-all $namfile1
In the above we create an output trace file out.tr and a NAM visualization file out.nam. In the Tcl script they are not referred to by the file names declared, but by the handles opened for them, tracefile1 and namfile1 respectively. Lines which start with '#' are comments. The line that opens the file 'out.tr' declares it with 'w', meaning it is opened for writing. The next line uses the simulator method trace-all, with which we trace all the events in a particular format.
The termination of the program is done using a 'finish' procedure:
# Defining the 'finish' procedure
proc finish {} {
    global ns tracefile1 namfile1
    $ns flush-trace
    close $tracefile1
    close $namfile1
    exec nam out.nam &
    exit 0
}
In the above, the word 'proc' is used to declare a procedure called 'finish'. The word 'global' is used to tell which variables defined outside the procedure are being used inside it. 'flush-trace' is a simulator method that dumps the traces to the respective files. The command 'close' is used to close the trace files, and the command 'exec' is used to launch the NAM visualization. The command 'exit' closes the application and returns 0, as zero (0) is the default value for a clean exit.
In ns we end the program by calling the 'finish' procedure:
#end the program
$ns at 125.0 "finish"
Thus the entire operation ends at 125 seconds. To begin the simulation we will use the command:
#start the simulation process
$ns run
Defining nodes, links, queues and topology
The way to create a node is:
set n0 [$ns node]
In the above we created a node that is pointed to by the variable n0. When referring to the node in the script we use $n0. Similarly we create another node n2. Now we will set up a link between the two nodes:
$ns duplex-link $n0 $n2 10Mb 10ms DropTail
So we are creating a bi-directional link between n0 and n2 with a capacity of 10Mb/sec and a propagation delay of 10ms.
In NS, an output queue of a node is implemented as part of the link whose input is that node, in order to handle overflow at the queue. If the buffer capacity of the output queue is exceeded, then the last packet to arrive is dropped; here we use the 'DropTail' option. Many other options such as RED (Random Early Discard), FQ (Fair Queuing), DRR (Deficit Round Robin) and SFQ (Stochastic Fair Queuing) are available.
Now we will define the buffer capacity of the queue related to the above link:
#Set queue size of the link
$ns queue-limit $n0 $n2 20
So, if we summarize the above three things, we get:
#create nodes
set n0 [$ns node]
set n1 [$ns node]
set n2 [$ns node]
set n3 [$ns node]
set n4 [$ns node]
set n5 [$ns node]

#create links between the nodes
$ns duplex-link $n0 $n2 10Mb 10ms DropTail
$ns duplex-link $n1 $n2 10Mb 10ms DropTail
$ns simplex-link $n2 $n3 0.3Mb 100ms DropTail
$ns simplex-link $n3 $n2 0.3Mb 100ms DropTail
$ns duplex-link $n3 $n4 0.5Mb 40ms DropTail
$ns duplex-link $n3 $n5 0.5Mb 40ms DropTail

#set queue-size of the link (n2-n3) to 20
$ns queue-limit $n2 $n3 20
Agents and applications
TCP
TCP is a dynamic, reliable congestion control protocol which is used to provide reliable transport of packets from one host to another host by sending acknowledgements on proper transfer or loss of packets. Thus TCP requires bi-directional links in order for the acknowledgements to return to the source.
Now we will show how to set up a TCP connection between two nodes:
#setting a tcp connection
set tcp [new Agent/TCP]
$ns attach-agent $n0 $tcp
set sink [new Agent/TCPSink]
$ns attach-agent $n4 $sink
$ns connect $tcp $sink
$tcp set fid_ 1
$tcp set packetSize_ 552
The command 'set tcp [new Agent/TCP]' gives a pointer called 'tcp' which refers to the TCP agent, an object of ns. Then the command '$ns attach-agent $n0 $tcp' defines the source node of the TCP connection. Next, the command 'set sink [new Agent/TCPSink]' defines the destination of TCP by a pointer called sink. The next command '$ns attach-agent $n4 $sink' defines the destination node as n4. Next, the command '$ns connect $tcp $sink' makes the TCP connection between the source and the destination, i.e. n0 and n4. When we have several flows, such as TCP and UDP, in a network, we mark these flows using the command '$tcp set fid_ 1' so that we can identify them. In the last line we set the packet size of TCP to 552, while the default packet size of TCP is 1000.
FTP over TCP
File Transfer Protocol (FTP) is a standard mechanism provided by the Internet for transferring files from one host to another. This is one of the most common tasks expected of a network or an internetwork. FTP differs from other client-server applications in that it establishes two connections between the client and the server: one connection is used for data transfer and the other is used for control information. FTP uses the services of TCP. The well-known port 21 is used for the control connection and port 20 is used for data transfer.
Here we will learn how to run an FTP connection over TCP:
#Initiating FTP over TCP
set ftp [new Application/FTP]
$ftp attach-agent $tcp
In the above, the command 'set ftp [new Application/FTP]' gives a pointer called 'ftp' which refers to the FTP application. Next, we attach the ftp application to the tcp agent, as FTP uses the services of TCP.
UDP
The User Datagram Protocol is one of the main protocols of the Internet protocol suite. UDP helps a host to send messages in the form of datagrams to another host on an Internet Protocol network without any requirement for setting up a transmission channel. UDP provides an unreliable service, and the datagrams may arrive out of order, appear duplicated, or go missing without notice. UDP assumes that error checking and correction is either not necessary or is performed in the application, avoiding the overhead of such processing at the network interface level. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for delayed packets, which may not be an option in a real-time system.
Now we will learn how to create a UDP connection in the network simulator:
# setup a UDP connection
set udp [new Agent/UDP]
$ns attach-agent $n1 $udp
set null [new Agent/Null]
$ns attach-agent $n5 $null
$ns connect $udp $null
$udp set fid_ 2
Similarly, the command 'set udp [new Agent/UDP]' gives a pointer called 'udp' which refers to the UDP agent, an object of ns. Then the command '$ns attach-agent $n1 $udp' defines the source node of the UDP connection. Next, the command 'set null [new Agent/Null]' defines the destination of UDP by a pointer called null. The next command '$ns attach-agent $n5 $null' defines the destination node as n5. Next, the command '$ns connect $udp $null' makes the UDP connection between the source and the destination, i.e. n1 and n5. When we have several flows, such as TCP and UDP, in a network, we mark these flows using the command '$udp set fid_ 2' so that we can identify them.
Constant Bit Rate(CBR)
Constant Bit Rate (CBR) is a term used in telecommunications relating to quality of service. When referring to codecs, constant bit rate encoding means that the rate at which a codec's output data should be consumed is constant. CBR is useful for streaming multimedia content on limited-capacity channels, since it is the maximum bit rate that matters, not the average, so CBR would be used to take advantage of all of the capacity. CBR would not be the optimal choice for storage, as it would not allocate enough data for complex sections (resulting in degraded quality) while wasting data on simple sections.
CBR over UDP Connection
#setup cbr over udp
set cbr [new Application/Traffic/CBR]
$cbr attach-agent $udp
$cbr set packetSize_ 1000
$cbr set rate_ 0.01Mb
$cbr set random_ false
In the above we define a CBR connection over the UDP connection. We have already defined the UDP source and the UDP agent, in the same way as for TCP. The command '$cbr set rate_ 0.01Mb' sets the transmission rate of the CBR traffic (alternatively, the time interval between packet transmissions can be specified instead of the rate). Next, with the command '$cbr set random_ false' we turn random noise in the CBR inter-packet times off; it can be turned on with '$cbr set random_ 1'. We can set the packet size using '$cbr set packetSize_ <packetsize>', where the packet size is given in bytes.
Scheduling Events
In ns, the Tcl script defines how to schedule the events, or in other words at what time each event will start and stop. This is done using a command of the form:
$ns at <time> "<event>"
So here in our program we will schedule the ftp and cbr applications:
# scheduling the events
$ns at 0.1 "$cbr start"
$ns at 1.0 "$ftp start"
$ns at 124.0 "$ftp stop"
$ns at 124.5 "$cbr stop"
Network Animator(NAM)
When we run the above program in ns, we can visualize the network in NAM. But instead of giving random positions to the nodes, we can give suitable initial positions to the nodes and form a suitable topology. So, in our program we can give positions to the nodes in NAM in the following way:
#Give position to the nodes in NAM
$ns duplex-link-op $n0 $n2 orient right-down
$ns duplex-link-op $n1 $n2 orient right-up
$ns simplex-link-op $n2 $n3 orient right
$ns simplex-link-op $n3 $n2 orient left
$ns duplex-link-op $n3 $n4 orient right-up
$ns duplex-link-op $n3 $n5 orient right-down
We can also define the colors of the cbr and tcp packets for identification in NAM. For this we use the following commands:
#Marking the flows
$ns color 1 Blue
$ns color 2 Red
To view the network animation we run the command: nam out.nam (the 'finish' procedure above already does this via exec).
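For reference, the fragments above can be put together into a single script. The following is a minimal consolidated sketch, using the same node names, link parameters and agents discussed in this section; it is illustrative, not the exact lab script:
# consolidated sketch of the wired example above
set ns [new Simulator]

set tracefile1 [open out.tr w]
$ns trace-all $tracefile1
set namfile1 [open out.nam w]
$ns namtrace-all $namfile1

proc finish {} {
    global ns tracefile1 namfile1
    $ns flush-trace
    close $tracefile1
    close $namfile1
    exec nam out.nam &
    exit 0
}

# nodes and links
set n0 [$ns node]
set n1 [$ns node]
set n2 [$ns node]
set n3 [$ns node]
set n4 [$ns node]
set n5 [$ns node]
$ns duplex-link $n0 $n2 10Mb 10ms DropTail
$ns duplex-link $n1 $n2 10Mb 10ms DropTail
$ns simplex-link $n2 $n3 0.3Mb 100ms DropTail
$ns simplex-link $n3 $n2 0.3Mb 100ms DropTail
$ns duplex-link $n3 $n4 0.5Mb 40ms DropTail
$ns duplex-link $n3 $n5 0.5Mb 40ms DropTail
$ns queue-limit $n2 $n3 20

# NAM layout and flow colors
$ns duplex-link-op $n0 $n2 orient right-down
$ns duplex-link-op $n1 $n2 orient right-up
$ns simplex-link-op $n2 $n3 orient right
$ns simplex-link-op $n3 $n2 orient left
$ns duplex-link-op $n3 $n4 orient right-up
$ns duplex-link-op $n3 $n5 orient right-down
$ns color 1 Blue
$ns color 2 Red

# TCP flow from n0 to n4 carrying FTP
set tcp [new Agent/TCP]
$ns attach-agent $n0 $tcp
set sink [new Agent/TCPSink]
$ns attach-agent $n4 $sink
$ns connect $tcp $sink
$tcp set fid_ 1
$tcp set packetSize_ 552
set ftp [new Application/FTP]
$ftp attach-agent $tcp

# UDP flow from n1 to n5 carrying CBR
set udp [new Agent/UDP]
$ns attach-agent $n1 $udp
set null [new Agent/Null]
$ns attach-agent $n5 $null
$ns connect $udp $null
$udp set fid_ 2
set cbr [new Application/Traffic/CBR]
$cbr attach-agent $udp
$cbr set packetSize_ 1000
$cbr set rate_ 0.01Mb
$cbr set random_ false

# schedule the events and run
$ns at 0.1 "$cbr start"
$ns at 1.0 "$ftp start"
$ns at 124.0 "$ftp stop"
$ns at 124.5 "$cbr stop"
$ns at 125.0 "finish"
$ns run
Save it as, for example, example1.tcl (any file name will do) and run it with 'ns example1.tcl'; it should produce out.tr and out.nam and then launch NAM via the 'finish' procedure.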
Tracing
Tracing Objects
NS simulation can produce a visualization trace as well as an ASCII file corresponding to the events that are registered in the network. While tracing, ns inserts four objects: EnqT, DeqT, RecvT and DrpT. EnqT registers information regarding a packet that arrives and is queued at the input queue of the link. When a packet overflows the queue, the information about the dropped packet is registered in DrpT. DeqT holds the information about the packet that is dequeued instantly. RecvT holds the information about the packet that has been received instantly.
Structure of Trace files
The first field is the event. It has four possible symbols: '+', '-', 'r' and 'd'. These correspond respectively to enqueued, dequeued, received and dropped.
The second field gives the time at which the event occurs.
The third field gives the input node of the link at which the event occurs.
The fourth field gives the output node of the link at which the event occurs.
The fifth field shows the packet type, i.e. whether the packet is UDP or TCP.
The sixth field gives the packet size.
The seventh field gives information about some flags.
The eighth field is the flow id (fid) of IPv6 that a user can set for each flow in a Tcl script. It is also used for specifying the color of the flow in the NAM display.
The ninth field is the source address.
The tenth field is the destination address.
The eleventh field is the network layer protocol's packet sequence number.
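To make the layout concrete, a line from a trace file looks something like the following (the values here are illustrative, not taken from any particular run):
r 1.84471 2 3 cbr 1000 ------- 2 1.0 5.0 157 207
Reading it against the fields above: the event is 'r' (received) at time 1.84471, on the link from node 2 to node 3; the packet is of type cbr with size 1000 bytes; '-------' is the flags field; the flow id is 2; the source and destination addresses are 1.0 and 5.0 (node.port); and 157 is the sequence number. ns also appends a final field (here 207), the unique packet id, after the fields listed above.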
NS2 Scenarios Generator (NSG) is a Tcl script generator tool used to generate Tcl scripts automatically. NSG is a Java-based tool that runs on any platform and can generate Tcl scripts for wired as well as wireless scenarios for Network Simulator 2. The procedure to execute these Tcl scripts on NS-2 is the same as for manually written Tcl scripts.
Some of the main features of NS2 Scenarios Generator (NSG) are as mentioned below:
(1) Creating Wired and Wireless nodes just by drag and drop.
(2) Creating Simplex and Duplex links for Wired network.
(3) Creating Grid, Random and Chain topologies.
(4) Creating TCP and UDP agents. Also supports TCP Tahoe, TCP Reno, TCP New-Reno and TCP Vegas.
(5) Supports Ad Hoc routing protocols such as DSDV, AODV, DSR and TORA.
(6) Supports FTP and CBR applications.
(7) Supports node mobility.
(8) Setting the packet size, the start and end times of the simulation, and, in the case of wireless networks, the transmission range and interference range, etc.
(9) Setting other network parameters, such as bandwidth, for wireless scenarios.
If you still want to download NSG1 (the previous version of NSG), it can be found on the NSG homepage.
To execute NSG you need to install Java 6.0. NSG does not require any installation: just double-click on the jar file to launch NSG. If it does not work, please see the instructions provided on the homepage of NSG. For more information on NSG and its previous versions, please visit the NSG homepage.
Launch NSG2:
To execute NSG2, you have to install Java 6.0 first. You can download Java 6.0 from http://java.sun.com/. For the details of the Java 6.0 installation, please refer to the Sun Java site.
NSG2 doesn't need to be installed on your computer. You just download it and launch it from a terminal. In fact, on an Ubuntu 12.04 system you can simply double-click NSG2.jar and NSG2 will launch automatically. If it doesn't launch, you can launch NSG2 with the following instructions:
Open a terminal.
Change directory into the folder where NSG2.1.jar is copied.
Run the command: java -jar NSG2.1.jar
The FOAF ("Friend of a Friend") project is a
community driven effort to define an RDF vocabulary for expressing metadata about
people,
and their interests, relationships and activities. Founded by Dan Brickley and Libby
Miller,
FOAF is an open community-lead initiative which is tackling head-on the wider Semantic
Web
goal of creating a machine processable web of data. Achieving this goal quickly requires
a
network-effect that will rapidly yield a mass of data. Network effects mean people.
It seems
a fairly safe bet that any early Semantic Web successes are going to be riding on
the back
of people-centric applications. Indeed, arguably everything interesting that we might
want
to describe on the Semantic Web was created by or involves people in some form or
another.
And FOAF is all about people.
FOAF facilitates the creation of the Semantic Web equivalent of the archetypal personal homepage: my name is Leigh, this is a picture of me, I'm interested in XML, and here are some links to my friends. And just like the HTML version, FOAF documents can be linked together to form a web of data, with well-defined semantics.
Being an RDF application means that FOAF can claim the usual benefits of being easily harvested and aggregated. And like all RDF vocabularies it can be easily combined with other vocabularies, allowing the capture of a very rich set of metadata. This tutorial introduces the basic terms of the FOAF vocabulary, illustrating them with a number of examples. The article concludes with a brief review of the more interesting FOAF applications and considers some other uses for the data.
The FOAF Vocabulary
Like any well-behaved vocabulary, FOAF publishes both its schema and specification at its namespace URI: http://xmlns.com/foaf/0.1. The documentation is thorough and includes definitions of all classes and properties defined in the associated RDF schema. While the schema is embedded in the XHTML specification, it can also be accessed directly.
Rather than cover the whole vocabulary, this article will focus on two of the most commonly used classes it defines: Person and Image. The remaining definitions cover the description of documents, projects, groups, and organizations; consult the specification for more information. The community also has a lively mailing list, IRC channel, and project wiki which serve as invaluable sources of additional information and discussion.
Care has been taken in the schema to ensure that, where appropriate, the FOAF classes have been related to their equivalents in other ontologies. This allows FOAF data to be immediately processable by applications built to understand these ontologies, while allowing the FOAF project to defer the definition of more complex concepts, e.g. geographical metadata, to other communities.
For example, while all of the FOAF classes are tied to a definition in WordNet, via an RDF view of that data, the Person class is additionally tied to other schemas describing contact and geographic related data.
Personal Metadata
The Person class is the core of the FOAF vocabulary. A simple example will illustrate its basic usage:
In other words, there is a person, with the name "Peter Parker",
who has an email address of "peter.parker@dailybugle.com".
Publishing data containing plain-text email addresses is just asking for trouble; to avoid this FOAF defines another property, foaf:mbox_sha1sum, whose value is a SHA1-encoded email address, complete with the mailto: URI scheme prefix. The FOAF project wiki has a handy reference page pointing to a number of different ways of generating a SHA1 sum. The end result of applying this algorithm is a string unique to a given email address (or mailbox). The next example demonstrates the use of this and several other new properties that further describe Peter Parker.
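The original listing is again missing here; a reconstructed sketch using the properties discussed below might look like this (the namespace declarations from the previous sketch are assumed, and the sha1sum value is a placeholder rather than a real hash):
<foaf:Person>
  <foaf:name>Peter Parker</foaf:name>
  <foaf:title>Mr</foaf:title>
  <foaf:givenname>Peter</foaf:givenname>
  <foaf:family_name>Parker</foaf:family_name>
  <foaf:gender>male</foaf:gender>
  <foaf:mbox_sha1sum>0000000000000000000000000000000000000000</foaf:mbox_sha1sum>
  <foaf:homepage rdf:resource="http://www.peterparker.com/"/>
  <foaf:weblog rdf:resource="http://www.peterparker.com/blog/"/>
</foaf:Person>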
This is a slightly richer description of Peter Parker, including some granularity in the markup of his name through the use of foaf:title, foaf:givenname, and foaf:family_name. We also now know that Peter Parker is male (foaf:gender) and has both a homepage (foaf:homepage) and a weblog (foaf:weblog).
Identifying Marks
Keen-eyed RDF enthusiasts will already have noticed that neither of these examples assigns a URI to the resource called Peter Parker, i.e. there is no rdf:about attribute on the foaf:Person resource:
<foaf:Person rdf:about="..uri to identify peter..."/>
That's because there is still some debate around both the social and technical implications of assigning URIs to people. Which URI identifies you? Who assigns these URIs? What problems are associated with having multiple URIs (assigned by different people) for the same person? Side-stepping this potential minefield, FOAF borrows the concept of an "inverse functional property" (IFP) from OWL, the Web Ontology Language. An inverse functional property is simply a property whose value uniquely identifies a resource.
The FOAF schema defines several inverse functional properties, including foaf:mbox, foaf:mbox_sha1sum, and foaf:homepage; consult the schema documentation for the complete list. An application harvesting FOAF data can, on encountering two resources that have the same values for an inverse functional property, safely merge the description of each and the relations of which they are part. This process, often referred to as "smushing", must be carried out when aggregating FOAF data to ensure that data about different resources is correctly merged.
As an example consider the following RDF fragment:
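The fragment itself is missing from this copy; a reconstruction of the kind of data being discussed, with two separate descriptions sharing the same (placeholder) foaf:mbox_sha1sum value, would be:
<foaf:Person>
  <foaf:name>Peter Parker</foaf:name>
  <foaf:mbox_sha1sum>0000000000000000000000000000000000000000</foaf:mbox_sha1sum>
</foaf:Person>

<foaf:Person>
  <foaf:name>Spiderman</foaf:name>
  <foaf:mbox_sha1sum>0000000000000000000000000000000000000000</foaf:mbox_sha1sum>
</foaf:Person>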
Applying our knowledge that foaf:mbox_sha1sum is an inverse functional property, we can merge the descriptions together to discover that these statements actually describe a single person. Spiderman is unmasked! While perfectly valid, this may not be desirable in all circumstances, and it flags the importance of FOAF aggregators recording the source (provenance) of their data. This allows incorrect and potentially malicious data to be identified and isolated.
Before moving on, it's worth noting that while FOAF defines the email address properties (foaf:mbox_sha1sum and foaf:mbox) as uniquely identifying a person, this is not the same thing as saying that all email addresses are owned by a unique person. What the FOAF schema claims is that any email address used in a foaf:mbox (or encoded as a foaf:mbox_sha1sum) property uniquely identifies a person. If it doesn't, then it's not a suitable value for that property.
It's Who You Know
Having captured some basic metadata about Peter Parker, it's time to go a step further and begin describing his relationships with others. The foaf:knows property is used to assert that there is some relationship between two people. Precisely what this relationship is, and whether it's reciprocal (i.e. if you know me, do I automatically know you?), is deliberately left undefined.
For obvious reasons, modeling interpersonal relationships can be a tricky business. The FOAF project has therefore taken the prudent step of simply allowing a relationship to be defined without additional qualification. It is up to other communities (and vocabularies) to further define different types of relationships.
Using foaf:knows is simple: one foaf:Person foaf:knows another. The following example shows two alternative ways of writing this using the RDF/XML syntax. The first uses a cross-reference to a person defined in the same document (using rdf:nodeID), while the second describes the foaf:Person "in situ" within the foaf:knows property. The end result is the same: Peter Parker knows both Aunt May and Harry Osborn.
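The example markup is not preserved here; a reconstructed sketch of the two forms just described, including the rdfs:seeAlso link explained next (the rdfs namespace declaration is assumed and the seeAlso URL is hypothetical), might be:
<foaf:Person rdf:nodeID="may">
  <foaf:name>Aunt May</foaf:name>
</foaf:Person>

<foaf:Person>
  <foaf:name>Peter Parker</foaf:name>
  <!-- first form: cross-reference to a person defined elsewhere in the document -->
  <foaf:knows rdf:nodeID="may"/>
  <!-- second form: the person described "in situ" within foaf:knows -->
  <foaf:knows>
    <foaf:Person>
      <foaf:name>Harry Osborn</foaf:name>
      <rdfs:seeAlso rdf:resource="http://harryosborn.example.org/foaf.rdf"/>
    </foaf:Person>
  </foaf:knows>
</foaf:Person>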
The other thing to notice is that, in addition to the foaf:knows relationship between Peter and Harry, a link has also been introduced to Harry's own FOAF document, using the rdfs:seeAlso property. Defined by the RDF Schema specification, the rdfs:seeAlso property indicates a resource that may contain additional information about its associated resource. In this case it's being used to point to Harry Osborn's own FOAF description.
It's through the use of the rdfs:seeAlso property that FOAF can be used to build a web of machine-processable metadata; rdfs:seeAlso is to RDF what the anchor element is to HTML. Applications can be written to spider (or "scutter", in the FOAF community's terminology) these RDF hyperlinks to build a database of FOAF data.
Finer-grained Relationships
The loose definition of foaf:knows won't fit all applications, particularly those geared to capturing information about complex social and business networks. However, this doesn't mean that FOAF is unsuitable for such purposes; indeed FOAF has the potential to be an open interchange format used by many different social networking applications. The expectation is that additional vocabularies will be created to refine the general FOAF knows relationship to create something more specific. The correct way to achieve this is to declare new sub-properties of foaf:knows. Stepping outside of FOAF for a moment, we can briefly demonstrate one example of this using the relationship schema created by Eric Vitiello.
The relationship schema defines a number of sub-properties of foaf:knows, including parentOf, siblingOf, friendOf, etc. The following example uses these properties to make some clearer statements about the relationships between Peter Parker and some of his contemporaries:
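The listing is not preserved in this copy; a reconstructed sketch using one of those sub-properties follows. The rel: prefix stands for the relationship schema's namespace, whose exact URI should be taken from that schema itself rather than from this sketch:
<foaf:Person>
  <foaf:name>Peter Parker</foaf:name>
  <rel:friendOf>
    <foaf:Person>
      <foaf:name>Harry Osborn</foaf:name>
    </foaf:Person>
  </rel:friendOf>
</foaf:Person>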
While it is possible to model quite fine-grained relationships using this method, the most interesting applications will be those that can infer relationships between people based on other metadata. For example, have they collaborated on the same project, worked for the same company, or been pictured together in the same image? Which brings us to the other commonly used FOAF class, Image.
Image is Everything
Digital cameras being all the rage these days, it's not surprising that many people are interested in capturing metadata about their pictures. FOAF provides for this use case in several ways. First, using the foaf:depiction property we can make a statement that says "this person (Resource) is shown in this image". FOAF also supports an inverse of this property (foaf:depicts) that allows us to make statements of the form: "this image is a picture of this Resource". The following example illustrates both of these properties.
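The markup is again missing from this copy; a reconstruction consistent with the description that follows (the second image URI and the photo title are placeholders, and the Dublin Core namespace declaration is assumed) might be:
<foaf:Person>
  <foaf:name>Peter Parker</foaf:name>
  <foaf:depiction rdf:resource="http://www.peterparker.com/peter.jpg"/>
</foaf:Person>

<foaf:Image rdf:about="http://www.peterparker.com/photos/spidey.jpg">
  <dc:title>Spiderman and the Green Goblin</dc:title>
  <foaf:depicts>
    <foaf:Person><foaf:name>Spiderman</foaf:name></foaf:Person>
  </foaf:depicts>
  <foaf:depicts>
    <foaf:Person><foaf:name>Green Goblin</foaf:name></foaf:Person>
  </foaf:depicts>
  <foaf:maker>
    <foaf:Person><foaf:name>Peter Parker</foaf:name></foaf:Person>
  </foaf:maker>
</foaf:Image>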
This RDF instance says that the image at http://www.peterparker.com/peter.jpg is a picture of Peter Parker. It also defines a foaf:Image resource, i.e. an image which can be found at a specific URI, that depicts both Spiderman and the Green Goblin. Elements from the Dublin Core namespace are often added to FOAF documents to title images, documents, etc.
Notice also that Peter Parker is defined as the author of the image using the foaf:maker property, which is used to relate a resource to its creator. The dc:creator term isn't used here due to some issues with its loose definition.
Publishing FOAF Data
Having created an RDF document containing FOAF terms and copied it to the Web, the next step is to link the new information into the existing web of FOAF data. There are several ways of doing this:
through foaf:knows -- ensuring that people who know you link to your FOAF data via an rdfs:seeAlso link will make the data discoverable
through the FOAF Bulletin Board -- a wiki page that links to dozens of FOAF files. FOAF harvesters generally include the RDF view of this page as one of their starting locations.
through auto-discovery -- the FOAF project has defined a means to link to a FOAF document from an HTML page using the link element; several tools now support this mechanism
Having covered the basics of the FOAF vocabulary and published some data, it's time to consider what applications are out there making use of it.
FOAF Applications
The FOAF application most immediately useful to the owner of a freshly published FOAF description is Morten Frederikson's FOAF Explorer, which can generate an HTML view of FOAF data, complete with referenced images and links to other data. For example, here is a view of my FOAF description.
FOAF Explorer provides an effective way to browse the network of FOAF data. With the addition of a Javascript bookmarklet to perform auto-discovery, it's easy to jump from a blog posting to a description of that person and their interests.
However, the most elegant way to browse the relationships to be found in the network of FOAF data is by using Jim Ley's foafnaut: an SVG application that provides a neat visualization of foaf:knows relationships. Here's the foafnaut view starting from my description.
There are a number of other interesting FOAF applications. plink is a social networking site. foafbot and whwhwhwh are IRC bots that provide conversational interfaces onto FOAF data. Libby Miller's codepiction experiments demonstrate a novel way to explore FOAF image metadata.
Beyond these initial developments, FOAF has potential in many other areas. For example, painful web site registrations can become a thing of the past: just indicate the location of your FOAF description. Throw in the relationships and FOAF can be used as an interchange format between social networking sites, building an open infrastructure that allows end users to retain control over their own data.
As an example, consider ecommerce sites like Amazon, which have become successful because of their high levels of personalization. Getting the most from these sites involves a learning process where they discover your interests, either through explicit preference setting or by adapting product suggestions based on a purchase history. Using FOAF there's the potential to capture this information once, in a form that can be used by not one site, but many. The user could then move freely between systems.
To summarize, FOAF is a lively project exploring the role that person-centric metadata can play in delivering on the promise of the Semantic Web. Whether or not this grand vision is ultimately realized, FOAF has many immediate applications. And what's more, it's fun.
Tech pundit Tim O’Reilly had just tried the new Google Photos app, and he was amazed by the depth of its artificial intelligence.
O’Reilly was standing a few feet from Google CEO and co-founder Larry
Page this past May, at a small cocktail reception for the press at the annual Google I/O conference—the centerpiece of the company’s year. Google had unveiled its personal photos app earlier in the day, and O’Reilly
marveled that if he typed something like “gravestone” into the search
box, the app could find a photo of his uncle’s grave, taken so long ago.
The app uses an increasingly powerful form of artificial intelligence called deep learning.
By analyzing thousands of photos of gravestones, this AI technology can
learn to identify a gravestone it has never seen before. The same goes
for cats and dogs, trees and clouds, flowers and food.
The Google Photos search engine isn’t perfect. But its accuracy is
enormously impressive—so impressive that O’Reilly couldn’t understand
why Google didn’t sell access to its AI engine via the Internet, cloud-computing style,
letting others drive their apps with the same machine learning. That
could be Google’s real money-maker, he said. After all, Google also uses
this AI engine to recognize spoken words, translate from one language to another, improve Internet search results, and more. The rest of the world could turn this tech towards so many other tasks, from ad targeting to computer security.
Well, this morning, Google took O’Reilly’s idea further than even he
expected. It’s not selling access to its deep learning engine. It’s open
sourcing that engine, freely sharing the underlying code with the world
at large. This software is called TensorFlow,
and in literally giving the technology away, Google believes it can
accelerate the evolution of AI. Through open source, outsiders can help
improve on Google’s technology and, yes, return these improvements back
to Google.
“What we’re hoping is that the community adopts this as a good way of
expressing machine learning algorithms of lots of different types, and
also contributes to building and improving [TensorFlow] in lots of
different and interesting ways,” says Jeff Dean, one of Google’s most important engineers and a key player in the rise of its deep learning tech.
In recent years, other companies and researchers have also made huge strides in this area of AI, including Facebook, Microsoft, and Twitter. And some have already open sourced software that’s similar to TensorFlow. This includes Torch—a system originally built by researchers in Switzerland—as well as systems like Caffe and Theano.
But Google’s move is significant. That’s because Google’s AI engine is
regarded by some as the world’s most advanced—and because, well, it’s
Google.
“This is really interesting,” says Chris Nicholson, who runs a deep learning startup called Skymind.
“Google is five to seven years ahead of the rest of the world. If they
open source their tools, this can make everybody else better at machine
learning.”
To be sure, Google isn’t giving away all its secrets. At the moment,
the company is only open sourcing part of this AI engine. It’s sharing
only some of the algorithms that run atop the engine. And it’s not
sharing access to the remarkably advanced hardware infrastructure
that drives this engine (that would certainly come with a price tag).
But Google is giving away at least some of its most important data
center software, and that’s not something it has typically done in the
past.
Google became the Internet’s most dominant force in large part because of the uniquely powerful software and hardware it built inside its computer data centers—software
and hardware that could help run all its online services, that could
juggle traffic and data from an unprecedented number of people across
the globe. And typically, it didn’t share its designs with the rest of
the world until it had moved on to other designs. Even then, it merely
shared research papers describing its tech. The company didn’t open
source its code. That’s how it kept an advantage.
With TensorFlow, however, the company has changed tack, freely
sharing some of its newest—and, indeed, most important—software. Yes,
Google open sources parts of its Android mobile operating system and so
many other smaller software projects. But this is different. In
releasing TensorFlow, Google is open sourcing software that sits at the
heart of its empire. “It’s a pretty big shift,” says Dean, who helped
build so much of the company’s groundbreaking data center software,
including the Google File System, MapReduce, and BigTable.
Open Algorithms
Deep learning relies on neural networks—systems
that approximate the web of neurons in the human brain. Basically, you
feed these networks vast amounts of data, and they learn to perform a
task. Feed them myriad photos of breakfast, lunch, and dinner, and they
can learn to recognize a meal. Feed them spoken words, and they can learn to recognize what you say. Feed them some old movie dialogue, and they can learn to carry on a conversation—not a perfect conversation, but a pretty good conversation.
Typically, Google trains these neural nets using a vast array of
machines equipped with GPU chips—computer processors that were
originally built to render graphics for games and other highly visual
applications, but have also proven quite adept at deep learning. GPUs are good at processing lots of little bits of data in parallel, and that’s what deep learning requires.
But after they’ve been trained—when it’s time to put them into
action—these neural nets run in different ways. They often run on
traditional computer processors inside the data center, and in some
cases, they can run on mobile phones. The Google Translate app
is one mobile example. It can run entirely on a phone—without
connecting to a data center across the ‘net—letting you translate
foreign text into your native language even when you don’t have a good
wireless signal. You can, say, point the app at a German street sign,
and it will instantly translate into English.
TensorFlow is a way of building and running these neural
networks—both at the training stage and the execution stage. It’s a set
of software libraries—a bunch of code—that you can slip into any
application so that it too can learn tasks like image recognition,
speech recognition, and language translation.
Google built the underlying TensorFlow software with the C++ programming language. But in developing applications for this AI engine, coders can use either C++ or Python,
the most popular language among deep learning researchers. The hope,
however, is that outsiders will expand the tool to other languages,
including Google Go, Java, and perhaps even Javascript, so that coders have more ways of building apps.
According to Dean, TensorFlow is well suited not only to deep learning, but to other forms of AI, including reinforcement learning and logistic regression.
This was not the case with Google’s previous system, DistBelief.
DistBelief was pretty good at deep learning—it helped win the
all-important Large Scale Visual Recognition Challenge in 2014—but Dean says that TensorFlow is twice as fast.
In open sourcing the tool, Google will also provide some sample
neural networking models and algorithms, including models for
recognizing photographs, identifying handwritten numbers, and analyzing
text. “We’ll give you all the algorithms you need to train those models
on public data sets,” Dean says.
The rub is that Google is not yet open sourcing a version of
TensorFlow that lets you train models across a vast array of machines.
The initial open source version only runs on a single computer. This
computer can include many GPUs, but it’s a single computer nonetheless.
“Google is still keeping an advantage,” Nicholson says. “To build true
enterprise applications, you need to analyze data at scale.” But at the
execution stage, the open source incarnation of TensorFlow will run on
phones as well as desktops and laptops, and Google indicates that the
company may eventually open source a version that runs across hundreds
of machines.
A Change in Philosophy
Why this apparent change in Google philosophy—this decision to open
source TensorFlow after spending so many years keeping important code to
itself? Part of it is that the machine learning community generally
operates in this way. Deep learning originated with academics who openly
shared their ideas, and many of them now work at Google—including
University of Toronto professor Geoff Hinton, the godfather of deep learning.
But Dean also says that TensorFlow was built at a very different time from tools like MapReduce and GFS and BigTable and Dremel and Spanner and Borg. The open source movement—where Internet companies share so many of their tools in order to accelerate the rate of development—has
picked up considerable speed over the past decade. Google now builds
software with an eye towards open source. Many of those earlier tools,
Dean explains, were too closely tied to the Google infrastructure. It
didn’t really make sense to open source them.
“They were not developed with open sourcing in mind. They had a lot
of tendrils into existing systems at Google and it would have been hard
to sever those tendrils,” Dean says. “With TensorFlow, when we started
to develop it, we kind of looked at ourselves and said: ‘Hey, maybe we
should open source this.'”
That said, TensorFlow is still tied, in some ways, to the internal
Google infrastructure, according to Google engineer Rajat Monga. This is
why Google hasn’t open sourced all of TensorFlow, he explains. As
Nicholson points out, you can also bet that Google is holding code back
because the company wants to maintain an advantage. But it’s telling—and
rather significant—that Google has open sourced as much as it has.
Feedback Loop
Google has not handed the open source project to an independent third
party, as many others have done in open sourcing major software. Google
itself will manage the project at the new Tensorflow.org website. But
it has shared the code under what’s called an Apache 2 license,
meaning anyone is free to use the code as they please. “Our licensing
terms should convince the community that this really is an open
product,” Dean says.
Certainly, the move will win Google some goodwill among the world’s
software developers. But more importantly, it will feed new projects.
According to Dean, you can think of TensorFlow as combining the best of
Torch and Caffe and Theano. Like Torch and Theano, he says, it’s good
for quickly spinning up research projects, and like Caffe, it’s good for
pushing those research projects into the real world.
Others may disagree. According to many in the community, DeepMind, a
notable deep learning startup now owned by Google, continues to use
Torch—even though it has long had access to TensorFlow and DistBelief.
But at the very least, an open source TensorFlow gives the community
more options. And that’s a good thing.
“A fair bit of the advancement in deep learning in the past three or
four years has been helped by these kinds of libraries, which help
researchers focus on their models. They don’t have to worry as much
about underlying software engineering,” says Jimmy Ba, a PhD student at
the University of Toronto who specializes in deep learning, studying
under Geoff Hinton.
Even with TensorFlow in hand, building a deep learning app still
requires some serious craft. But this too may change in the years to
come. As Dean points out, a Google deep-learning open source project and
a Google deep-learning cloud service aren’t mutually exclusive. Tim
O’Reilly’s big idea may still happen.
But in the short term, Google is merely interested in sharing the code.
As Dean says, this will help the company improve this code. But at the
same time, says Monga, it will also help improve machine learning as a
whole, breeding all sorts of new ideas. And, well, these too will find
their way back into Google. “Any advances in machine learning,” he says,
“will be advances for us as well.”
Correction: This story has been updated to correctly show the Torch framework was originally developed by researchers in Switzerland.
Saturday, 28 January 2017
NS2 PROGRAM TUTORIAL FOR WIRELESS TOPOLOGY
In this section, you are going to learn to use the mobile wireless simulation model available in ns. The section consists of two parts. In the first subsection, we discuss how to create and run a simple 2-node wireless network simulation. In the second subsection, we will extend our program to create a relatively more complex wireless scenario.
1. Creating a simple wireless scenario
We are going to simulate a very simple 2-node wireless scenario. The
topology consists of two mobile nodes, node_(0) and node_(1). The mobile
nodes move about within an area whose boundary is defined in this
example as 500m X 500m. The nodes start out initially at two opposite
ends of the boundary. Then they move towards each other in the first
half of the simulation and then move away from each other in the second half. A TCP connection is set up between the two mobile nodes. Packets are exchanged between the nodes as they come within hearing range of one another. As they move away, packets start getting dropped.
Just as with any other ns simulation, we begin by creating a Tcl script for the wireless simulation. We will call this file simple-wireless.tcl. If you want to download a copy of simple-wireless.tcl, click here.
A mobile node consists of network components like the Link Layer (LL), the Interface Queue (IfQ), the MAC layer, and the wireless channel over which nodes transmit and receive signals.
At the beginning of a wireless simulation, we need to define the type of each of these network components. Additionally, we need to define other parameters like the type of antenna, the radio-propagation model, the type of ad-hoc routing protocol used by the mobile nodes, etc. See the comments in the code below for a brief description of each variable defined. The array used to define these variables, val(), is not global as it used to be in earlier wireless scripts. We begin our script simple-wireless.tcl with a list of these different parameters, as follows:
# ======================================================================
# Define options
# ======================================================================
set val(chan) Channel/WirelessChannel ;# channel type
set val(prop) Propagation/TwoRayGround ;# radio-propagation model
set val(ant) Antenna/OmniAntenna ;# Antenna type
set val(ll) LL ;# Link layer type
set val(ifq) Queue/DropTail/PriQueue ;# Interface queue type
set val(ifqlen) 50 ;# max packet in ifq
set val(netif) Phy/WirelessPhy ;# network interface type
set val(mac) Mac/802_11 ;# MAC type
set val(rp) DSDV ;# ad-hoc routing protocol
set val(nn) 2 ;# number of mobilenodes
Next we go to the main part of the program and start by creating an instance of the simulator,
set ns_ [new Simulator]
Then set up trace support by opening the file simple.tr and calling the procedure trace-all {} as follows:
set tracefd [open simple.tr w]
$ns_ trace-all $tracefd
Next, create a topology object that keeps track of the movements of mobile nodes within the topological boundary.
set topo [new Topography]
We had earlier mentioned that mobile nodes move within a topology of
500m X 500m. We provide the topography object with x and y co-ordinates
of the boundary (x=500, y=500):
$topo load_flatgrid 500 500
The topography is broken up into grids and the default value of the grid
resolution is 1. A different value can be passed as a third parameter to
load_flatgrid {} above.
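For instance, a coarser grid could be requested as follows (the resolution value of 2 is chosen here purely for illustration):
$topo load_flatgrid 500 500 2 ;# optional third argument: grid resolution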
Next we create the object God, as follows:
create-god $val(nn)
Quoted from CMU document on god, "God (General Operations Director) is
the object that is used to store global information about the state of
the environment, network or nodes that an omniscient observer would have,
but that should not be made known to any participant in the
simulation." Currently, the God object stores the total number of mobile
nodes and a table of the shortest number of hops required to reach from one
node to another. The next hop information is normally loaded into god
object from movement pattern files, before simulation begins, since
calculating this on the fly during simulation runs can be quite time
consuming. However, in order to keep this example simple we avoid using
movement pattern files and thus do not provide God with next hop
information. The usage of movement pattern files and feeding of next hop
info to God shall be shown in the example in the next sub-section.
The procedure create-god is defined in ~ns/tcl/mobility/com.tcl,
which allows only a single global instance of the God object to be
created during a simulation. In addition to its evaluation
functionalities, the God object is called internally by MAC objects in
mobile nodes. So even though we may not utilize God for evaluation
purposes (as in this example), we still need to create it.
Next, we create mobile nodes. The node creation APIs have been revised
and here we shall be using the new APIs to create mobile nodes.
IMPORTANT NOTE: The new APIs are not available with ns2.1b5 release.
Download the daily snapshot version if the next release (2.1b6 upwards)
is not as yet available.
First, we need to configure nodes before we can create them. Node
configuration API may consist of defining the type of addressing
(flat/hierarchical etc), the type of adhoc routing protocol, Link Layer,
MAC layer, IfQ etc. The configuration API can be defined as follows:
(parameter examples)
# $ns_ node-config -addressingType flat or hierarchical or expanded
# -adhocRouting DSDV or DSR or TORA
# -llType LL
# -macType Mac/802_11
# -propType "Propagation/TwoRayGround"
# -ifqType "Queue/DropTail/PriQueue"
# -ifqLen 50
# -phyType "Phy/WirelessPhy"
# -antType "Antenna/OmniAntenna"
# -channelType "Channel/WirelessChannel"
# -topoInstance $topo
# -energyModel "EnergyModel"
# -initialEnergy (in Joules)
# -rxPower (in W)
# -txPower (in W)
# -agentTrace ON or OFF
# -routerTrace ON or OFF
# -macTrace ON or OFF
# -movementTrace ON or OFF
All default values for these options are NULL except addressingType, which defaults to flat.
We are going to use the default value of flat addressing, and we will turn
on only AgentTrace and RouterTrace; you can experiment by turning all of
the traces on. Agent traces are marked with AGT, router traces with RTR
and MAC traces with MAC in their 5th field. MovementTrace, when turned on,
records the movement of the mobilenodes; such trace lines are marked
with M in their 2nd field.
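For reference, a node-config call consistent with the val() entries defined at the top of this script would look roughly as follows (this is a sketch, not copied from the original file; the exact call in your copy of simple-wireless.tcl may differ slightly):
$ns_ node-config -adhocRouting $val(rp) \
                 -llType $val(ll) \
                 -macType $val(mac) \
                 -ifqType $val(ifq) \
                 -ifqLen $val(ifqlen) \
                 -antType $val(ant) \
                 -propType $val(prop) \
                 -phyType $val(netif) \
                 -channelType $val(chan) \
                 -topoInstance $topo \
                 -agentTrace ON \
                 -routerTrace ON \
                 -macTrace OFF \
                 -movementTrace OFF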
The mobilenodes are then created as follows:
for {set i 0} {$i < $val(nn) } {incr i} {
set node_($i) [$ns_ node ]
$node_($i) random-motion 0 ;# disable random motion
}
Random motion for the nodes is disabled here, as we are going to provide
explicit node position and movement (speed and direction) directives next.
Now that we have created mobilenodes, we need to give them a position to start with,
#
# Provide initial (X,Y, for now Z=0) co-ordinates for node_(0) and node_(1)
#
$node_(0) set X_ 5.0
$node_(0) set Y_ 2.0
$node_(0) set Z_ 0.0
$node_(1) set X_ 390.0
$node_(1) set Y_ 385.0
$node_(1) set Z_ 0.0
Node0 has a starting position of (5,2) while Node1 starts off at location (390,385).
Next produce some node movements,
#
# Node_(1) starts to move towards node_(0)
#
$ns_ at 50.0 "$node_(1) setdest 25.0 20.0 15.0"
$ns_ at 10.0 "$node_(0) setdest 20.0 18.0 1.0"
# Node_(1) then starts to move away from node_(0)
$ns_ at 100.0 "$node_(1) setdest 490.0 480.0 15.0"
$ns_ at 50.0 "$node_(1) setdest 25.0 20.0 15.0" means at time 50.0s,
node1 starts to move towards the destination (x=25,y=20) at a speed of
15m/s. This API is used to change direction and speed of movement of the
mobilenodes.
Next, set up a traffic flow between the two nodes as follows:
# TCP connections between node_(0) and node_(1)
set tcp [new Agent/TCP]
$tcp set class_ 2
set sink [new Agent/TCPSink]
$ns_ attach-agent $node_(0) $tcp
$ns_ attach-agent $node_(1) $sink
$ns_ connect $tcp $sink
set ftp [new Application/FTP]
$ftp attach-agent $tcp
$ns_ at 10.0 "$ftp start"
This sets up a TCP connection between the two nodes, with a TCP source on node0.
Then we need to define the stop time, when the simulation ends, and tell
the mobilenodes to reset, which actually resets their internal network
components,
#
# Tell nodes when the simulation ends
#
for {set i 0} {$i < $val(nn) } {incr i} {
$ns_ at 150.0 "$node_($i) reset";
}
$ns_ at 150.0001 "stop"
$ns_ at 150.0002 "puts \"NS EXITING...\" ; $ns_ halt"
proc stop {} {
global ns_ tracefd
$ns_ flush-trace ;# flush any buffered trace records before closing
close $tracefd
}
At time 150.0s the nodes are reset; the procedure stop{} is then called
(at 150.0001s) to flush the traces and close the trace file, and
"$ns_ halt" is called at 150.0002s, a little after the nodes have been
reset. Finally, the commands to start the simulation:
puts "Starting Simulation..."
$ns_ run
Save the file simple-wireless.tcl and run the simulation in the usual way (type at the prompt: "ns simple-wireless.tcl").
At the end of the simulation run, trace-output file simple.tr is
created. As we have turned on the AgentTrace and RouterTrace we see DSDV
routing messages and TCP pkts being received and sent by Router and
Agent objects in nodes _0_ and _1_. Note that all wireless traces start
with WL in their first field. See Chapter 15 of the ns documentation for
details on wireless trace. We see TCP flow starting at 10.0s from node0.
Initially both nodes are far apart and thus TCP pkts are dropped by
node0 as it cannot hear node1. Around 81.0s the routing info
begins to be exchanged between both the nodes and around 100.0s we see
the first TCP pkt being received by the Agent at node1 which then sends
an ACK back to node0 and the TCP connection is setup. However as node1
starts to move away from node0, the connection breaks down again around
time 116.0s. Pkts start getting dropped as the nodes move away from one
another.
2. Using node-movement/traffic-pattern files and other features in wireless simulations
As an extension to the previous sub-section, we are going to simulate a
simple multi-hop wireless scenario consisting of 3 mobile nodes. As
before, the mobile nodes move within the boundaries of a defined
topology. However, the node movements for this example shall be read from
a node-movement file called scen-3-test. scen-3-test
defines random node movements for the 3 mobile nodes within a topology
of 670m X 670m. This file is available as a part of the ns distribution
and can be found, along with other node-movement files, under directory ~ns/tcl/mobility/scene.
Random node movement files like scen-3-test can be generated using
CMU's node-movement generator "setdest". Details on generation of node
movement files are covered in the "Generating traffic-connection and node-movement files for large wireless scenarios" section of this tutorial.
In addition to node-movements, traffic flows that are setup between the
mobile nodes, are also read from a traffic-pattern file called
cbr-3-test. cbr-3-test is also available under ~ns/tcl/mobility/scene.
Random CBR and TCP flows are setup between the 3 mobile nodes and data
packets are sent, forwarded or received by nodes within hearing range of
one another. See cbr-3-test to find out more about the traffic flows
that are setup. These traffic-pattern files can also be generated using
CMU's TCP/CBR traffic generator script. More about this is discussed in the "Generating traffic-connection and node-movement files for large wireless scenarios" section of this tutorial.
We shall make changes to the script simple-wireless.tcl that we created in the section above, and shall call the resulting file wireless1.tcl.
In addition to the variables (LL, MAC, antenna etc) that were declared
at the beginning of the script, we now define some more parameters like
the connection-pattern and node-movement file, x and y values for the
topology boundary, a seed value for the random-number generator, time
for the simulation to stop, for convenience. They are listed as follows:
set val(chan) Channel/WirelessChannel
set val(prop) Propagation/TwoRayGround
set val(netif) Phy/WirelessPhy
set val(mac) Mac/802_11
set val(ifq) Queue/DropTail/PriQueue
set val(ll) LL
set val(ant) Antenna/OmniAntenna
set val(x) 670 ;# X dimension of the topography
set val(y) 670 ;# Y dimension of the topography
set val(ifqlen) 50 ;# max packet in ifq
set val(seed) 0.0
set val(adhocRouting) DSR
set val(nn) 3 ;# how many nodes are simulated
set val(cp) "../mobility/scene/cbr-3-test"
set val(sc) "../mobility/scene/scen-3-test"
set val(stop) 2000.0 ;# simulation time
The number of mobile nodes is changed to 3. Also, we use DSR (Dynamic
Source Routing) as the ad-hoc routing protocol in place of DSDV
(Destination-Sequenced Distance Vector).
After creation of ns_, the simulator instance, open a file
(wireless1-out.tr) for wireless traces. Also we are going to set up nam
traces.
set tracefd [open wireless1-out.tr w] ;# for wireless traces
$ns_ trace-all $tracefd
set namtrace [open wireless1-out.nam w] ;# for nam tracing
$ns_ namtrace-all-wireless $namtrace $val(x) $val(y)
Next (after the mobile nodes have been created), source the
connection-pattern and node-movement files that were defined earlier as
val(cp) and val(sc) respectively.
#
# Define traffic (connection pattern) model
#
puts "Loading connection pattern..."
source $val(cp)
#
# Define node movement model
#
puts "Loading scenario file..."
source $val(sc)
In node-movement file scen-3-test, we see node-movement commands like
$ns_ at 50.000000000000 "$node_(2) setdest 369.463244915743 \
170.519203111152 3.371785899154"
This, as described in the earlier sub-section, means that at time 50s, node 2
starts to move towards destination (369.5, 170.5) at a speed of 3.37m/s.
We also see other lines like
$god_ set-dist 1 2 2
These command lines are used to load the god object with the
shortest-hop information; this one means the shortest path between node 1
and node 2 is 2 hops. By providing this information in advance, the god
object is spared from calculating shortest distances between nodes during
the simulation run, which can be quite time-consuming.
The setdest program (see Generating traffic-connection and node-movement files for large wireless scenarios)
generates movement pattern files using the random waypoint algorithm.
The node-movement files generated using setdest (like scen-3-test)
already include lines like above to load the god object with the
appropriate information at the appropriate time.
A program called calcdest
(~ns/indep-utilities/cmu-scen-gen/setdest/calcdest) can be used to
annotate movement pattern files generated by other means with the lines
of god information. calcdest makes several assumptions about the format
of the lines in the input movement pattern file which will cause it to
fail if the file is not formatted properly. If calcdest rejects a
movement pattern file you have created, the easiest way to format it
properly is often to load it into ad-hockey and then save it out again.
If ad-hockey can read your input correctly, its output will be properly
formatted for calcdest.
Both setdest and calcdest calculate the shortest number of hops between
nodes based on the nominal radio range, ignoring any effects that might
be introduced by the propagation model in an actual simulation. The
nominal range is either provided as an argument to the programs, or
extracted from the header in node-movement pattern files.
The path length information provided to god was used by CMU's Monarch
Project to analyze the path length optimality of ad hoc network routing
protocols, and so was printed out as part of the CMUTrace output for
each packet.
Other uses that CMU has found for the information are:
Characterizing the rate of topology change in a movement pattern.
Identifying the frequency and size of partitions.
Experimenting with the behavior
of the routing protocols if the god information is used to provide
them with "perfect" neighbor information at zero cost.
Next add the following lines to provide the initial position of the nodes
in nam. Note, however, that only node movements can currently be seen in
nam; dumping of traffic data, and thus visualization of data pkt movements
in nam for wireless scenarios, is still not supported (future work).
# Define node initial position in nam
for {set i 0} {$i < $val(nn)} {incr i} {
# 20 defines the node size in nam, must adjust it according to your
# scenario size.
# The function must be called after mobility model is defined
$ns_ initial_node_pos $node_($i) 20
}
Next add informative headers for the CMUTrace file, just before the line "$ns_ run":
puts $tracefd "M 0.0 nn $val(nn) x $val(x) y $val(y) rp $val(adhocRouting)"
Save the file wireless1.tcl. Make sure the connection-pattern and
node-movement files exist under the directories as declared above.
Run the script by typing at the prompt:
ns wireless1.tcl
On completion of the run, CMUTrace output file "wireless1-out.tr" and
nam output file "wireless1-out.nam" are created. Running
wireless1-out.nam we see the three mobile nodes moving in nam window.
However as mentioned earlier no traffic flow can be seen (not supported
as yet). For a variety of coarse and fine-grained trace outputs, turn
on/off AgentTrace, RouterTrace, MacTrace and MovementTrace as shown
earlier in the script. From the CMUTrace output we find nodes 0 and 2
are out of range and so cannot hear one another. Node1 is in range with
nodes 0 and 2 and can communicate with both of them. Thus all pkts
destined for nodes 0 and 2 are routed through node 1.
3. Creating random traffic-pattern for wireless scenarios.
Random traffic connections of TCP and CBR can be setup between mobile
nodes using a traffic-scenario generator script. This traffic generator
script is available under ~ns/indep-utils/cmu-scen-gen and is called
cbrgen.tcl. It can be used to create CBR and TCP traffic connections
between wireless mobile nodes. In order to create a traffic-connection
file, we need to define the type of traffic connection (CBR or TCP), the
number of nodes and maximum number of connections to be setup between
them, a random seed and, in the case of CBR connections, a rate whose
inverse value is used to compute the interval time between the CBR pkts.
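The general form of the command line (a sketch based on the options that cbrgen.tcl accepts) is:
ns cbrgen.tcl [-type cbr|tcp] [-nn nodes] [-seed seed] [-mc connections] [-rate rate]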
The start times for the TCP/CBR connections are randomly generated with a
maximum value set at 180.0s. Go through the tcl script cbrgen.tcl for
the details of the traffic-generator implementation.
For example, let us try to create a CBR connection file between 10
nodes, having a maximum of 8 connections, with a seed value of 1.0 and a
rate of 4.0.
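At the prompt, one would type something like the following (a sketch using the options above, with the generator's output redirected into cbr-10-test):
ns cbrgen.tcl -type cbr -nn 10 -seed 1.0 -mc 8 -rate 4.0 > cbr-10-test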
From cbr-10-test file (into which the output of the generator is
redirected) thus created, one of the cbr connections looks like the
following:
#
# 2 connecting to 3 at time 82.557023746220864
#
set udp_(0) [new Agent/UDP]
$ns_ attach-agent $node_(2) $udp_(0)
set null_(0) [new Agent/Null]
$ns_ attach-agent $node_(3) $null_(0)
set cbr_(0) [new Application/Traffic/CBR]
$cbr_(0) set packetSize_ 512
$cbr_(0) set interval_ 0.25
$cbr_(0) set random_ 1
$cbr_(0) set maxpkts_ 10000
$cbr_(0) attach-agent $udp_(0)
$ns_ connect $udp_(0) $null_(0)
$ns_ at 82.557023746220864 "$cbr_(0) start"
Thus a UDP connection is set up between nodes 2 and 3. The total number of
UDP sources (chosen from among the 10 nodes) and the total number of
connections set up are indicated as 5 and 8 respectively at the end of the
file cbr-10-test.
Similarly, TCP connection files can be created by setting the connection type to tcp.
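An example command would look something like the following (the node count of 25 matches the file name tcp-25-test; the seed and maximum-connection values here are assumed for illustration):
ns cbrgen.tcl -type tcp -nn 25 -seed 1.0 -mc 8 > tcp-25-test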
A typical connection from tcp-25-test looks like the following:
#
# 5 connecting to 7 at time 163.0399642433226
#
set tcp_(1) [$ns_ create-connection TCP $node_(5) TCPSink $node_(7) 0]
$tcp_(1) set window_ 32
$tcp_(1) set packetSize_ 512
set ftp_(1) [$tcp_(1) attach-source FTP]
$ns_ at 163.0399642433226 "$ftp_(1) start"
4. Creating node-movements for wireless scenarios
The node-movement generator is available under
~ns/indep-utils/cmu-scen-gen/setdest directory and consists of
setdest{.cc,.h} and Makefile. CMU's version of setdest used the
system-dependent /dev/random and made calls to the library function
initstate() for generating random numbers. That was replaced with a more
portable random number generator (class RNG) available in ns. In order to
compile the revised setdest.cc, do the following:
1. Go to the ns directory and run "configure" (you have probably done that
already while building ns). This creates a makefile for setdest.
2. Go to indep-utils/cmu-scen-gen/setdest. Run "make", which first
creates a stand-alone object file for ~ns/rng.cc (the stand-alone
version does not use the TclCL libraries) and then creates the executable
setdest.
Let's say we want to create a node-movement scenario consisting of 20
nodes moving with a maximum speed of 10.0m/s and an average pause between
movements of 2s. We want the simulation to stop after 200s, and the
topology boundary is defined as 500 X 500.
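Assuming the original (version 1) setdest option names, the command line would be roughly as follows (the option values here simply follow the scenario just described):
./setdest -n 20 -p 2.0 -s 10.0 -t 200 -x 500 -y 500 > scen-20-test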
The output is written to stdout by default. We redirect the output to
file scen-20-test. The file begins with the initial position of the
nodes and goes on to define the node movements.
$ns_ at 2.000000000000 "$node_(0) setdest 90.441179033457 44.896095544010 1.373556960010"
This line from scen-20-test defines that node_(0) at time 2.0s starts to
move toward destination (90.44, 44.89) at a speed of 1.37m/s. These
command lines can be used to change direction and speed of movement of
mobile nodes.
Directives for GOD are present as well in node-movement file. The
General Operations Director (GOD) object is used to store global
information about the state of the environment, network, or nodes that
an omniscient observer would have, but that should not be made known to
any participant in the simulation.
Currently, the god object is used only to store an array of the shortest
number of hops required to reach from one node to another. The god
object does not calculate this on the fly during simulation runs, since
it can be quite time consuming. The information is loaded into the god
object from the movement pattern file where lines of the form
$ns_ at 899.642 "$god_ set-dist 23 46 2"
are used to load the god object with the knowledge that the shortest
path between node 23 and node 46 changed to 2 hops at time 899.642.
The setdest program generates node-movement files using the random
waypoint algorithm. These files already include the lines to load the
god object with the appropriate information at the appropriate time.
Another program calcdest (also available in
~ns/indep-utils/cmu-scen-gen/setdest) can be used to annotate movement
pattern files generated by other means with the lines of god
information. calcdest makes several assumptions about the format of the
lines in the input movement pattern file which will cause it to fail if
the file is not formatted properly. If calcdest rejects a movement
pattern file you have created, the easiest way to format it properly is
often to load it into ad-hockey and then save it out again. If ad-hockey
can read your input correctly, its output will be properly formatted
for calcdest.
Both calcdest and setdest calculate the shortest number of hops between
nodes based on the nominal radio range, ignoring any effects that might
be introduced by the propagation model in an actual simulation. The
nominal range is either provided as an argument to the programs, or
extracted from the header on the movement pattern file.
The path length information was used by the Monarch Project to analyze
the path length optimality of ad hoc network routing protocols, and so
was printed out as part of the CMUTrace output for each packet.
Other uses that CMU found for the information:
Characterizing the rate of topology change in a movement pattern.
Identifying the frequency and size of partitions.
Experimenting with the behavior of the routing protocols if the god
information is used to provide them with "perfect" neighbor information
at zero cost.
At the end of the node-movement file, summary information is listed: the
number of unreachable destinations, the total number of route and
connectivity changes for the mobile nodes, and the same information for
each individual mobile node.
The revised (more portable) version of setdest (the files revised are
setdest{.cc,.h}, ~ns/rng{.cc,.h} and ~ns/Makefile.in) should be available
from the latest ns distribution.