[Ns-developers] ns-3 and future development
Mathieu.Lacage at sophia.inria.fr
Sat Jan 14 10:35:18 PST 2006
Tom Henderson wrote:
> and available under GPL. There are three main reasons for considering
> using pieces of gtnets:
> - gtnets has already worked out the parallelization issues
I think there are 3 issues to solve for parallelization:
1) a core time synchronization library and message exchange library
2) make the simulator use the core synchronization library
3) deployment of simulation scenarios over a parallel system (a
cluster or an smp system) in a way which is transparent to the user and
which does not require any changes to purely serialized scenarios.
As I understand it, gtnets solves 2). 1) is solved by the rtikit
library. 3) is not solved in gtnets. Unless I am grossly mistaken, the
license of rtikit is _not_ compatible with the gpl. More generally, it
does not seem to be freely distributed: you have to ask for it by email
(which I did, but I never got any reply, so I have never been able to get
access to its source code). This means that we will probably need to
re-implement a functional clone of rtikit, whichever core is used as a
basis for a future simulator.
The way gtnets solves 2) is interesting because it is known to work: I
have tried to design the yans simulation APIs such that following the
ideas put forward by gtnets is really easy.
Finally, I think 3) is critical to make parallel simulations usable. I
plan to work on a prototype of a solution of this problem. It would
involve the use of python and its native xml-rpc mechanism to perform
deployment. I could outline a more detailed technical proposal in a
later email if there is interest in it.
> - gtnets has Qt support already defined
I see. This is nice.
> - gtnets is further along
I think you mean: "gtnets has more working/useful models". Am I right ?
> I like the splitting of simulator core (event handling and stuff
> unrelated to network-specific objects) from the rest of the source.
> I am not sure yet whether I prefer that compiled output of examples/
> directory lands in bin/examples rather than just in the examples/
> directory itself. I have found convenient sometimes that one can just
> work in a single directory by i) editing the source file or script,
> ii) run make in that directory, iii) try the executable,
> iv) go back to step i) ...
> Regarding directory organization, I might suggest a few changes.
> python/ (or whatever this becomes)
> (no change so far...)
> move src/samples to examples/, and bin/src/samples to bin/examples
I don't have any opinion about this. The directory structure is something
I have changed a lot recently and which I do not mind changing more.
> output data for given test should land in some subdirectory prefixed
> by the testname; e.g., bin/examples/testname-output/ for a file defined
> as testname
I am not sure. I have tried to design something a bit more elaborate in
the current codebase. I designed the code such that each Host has a
directory of its own which it uses as its filesystem root. So, for
example, if you instantiate an apache webserver on Host A and Host B,
each instance of the webserver would look for its configuration files
in /etc relative to the Host filesystem root. Log files fit neatly in
this model: they are stored in /log/interface-name relative to the host
filesystem root. This model has many advantages but what I like about it
is that it might make it very natural/easy to deal with the integration
of user-level socket-based applications.
> under src/, the subdirectories should probably consider channels/, and
> a lot of the current subdirectories (os-model, posix, thread, common)
> could perhaps be combined somehow and put into a node directory.
I don't have a strong opinion on this. Just a data point: Thread is not
dependent on anything but the core simulator so, it might be annoying to
put it together with the others.
> There could be an interfaces/ directory containing ethernet/
> arp/ 80211/ etc.
yes, I thought about this. It would be nice. It would probably make a
lot of sense because the network interface API is reasonably
self-contained and each interface implementation should not depend on
anything but the simulator core and the common packet API.
> Also, could be a stack/ directory to contain ipv4, ipv6, posix, and
> other more researchy stacks that might be devised in the future. This
> would also perhaps match better with Sam Jansen's network simulation
> cradle work.
I am reluctant here but I have trouble explaining why. I think it is
related to the fact that I fail to see what kind of API each "stack"
should implement. i.e., I do not see any equivalent of the "network
interface" API for "stacks".
> 2. Core
> One might consider making the API from this core accessible via some
> single header file (e.g., "core.h"), which could include all of the
> simulator/ .h files.
please, please, no. Large-scale development projects need to focus on
minimizing dependencies between the various modules. Part of this task
involves headers: each header should be "minimal and self contained".
That is, it should not contain anything but what is required to compile
an empty .cc file which includes it. The headers I have produced for now
in yans try to adhere to this rule. From this rule, it follows that each
.cc file needs to include all the .h which contain definitions it needs.
> One difference between this core and ns-2 core is that in yans,
> Events are consumed by a wrapper to an arbitrary method (wrapper
> performs garbage collection and memory management and executes the
> method), whereas in ns-2 they are consumed by a subclass derived from
> class Handler.
> - In ns-2, this leads to the convention that class Handler, which
> provides pure virtual handle() method that is called when the event
> executes, is subclassed when needed (and such classes are declared
> friend to classes that use the Handlers), and the pure virtual handle()
> method is tailored by the specific Handlers to consume specific events.
> Class Event is a wrapper around Handler, defined as a node in a linked
> list.
> - In yans, the method consuming the event can be any method in scope
> (possibly with arguments). The Event subclass is simple, providing
> a pure virtual notify() that consumes the event. At this point,
> Event and ns-2 Handler are basically the same (with the yans
> Event list being maintained instead by the Simulator object).
> The departure from ns-2 way of doing things is that two layers of
> templates are defined on top of Event. Events are scheduled with
> a make_event(&method) call, which instantiates a callback event.
> The callback event (also defined by template) calls the method
> referenced via its function pointer, and does garbage collection
> on itself if needed. The net effect seems to be an avoidance
> of needing to declare friend classes of objects for the purpose of
> consuming simulation events-- arbitrary methods instead can be used
> for that purpose and scheduled by referencing the consuming method()
> by function pointer. This seems like a benefit to me.
yes !!! This is exactly the point. I might copy/paste your email in the
documentation when I write it :)
> Some other thoughts on this core:
> There also should be some thought about how to partition the library
> so as to just rebuild and relink smaller chunks upon recompilation.
> Timer classes are missing.
Is the following not simple enough ?
Simulation::insert_at_s (10.0, make_event (&MyClass::timeout, my_class));
Or maybe you are looking for some other bit of functionality ? Something
like:
// do stuff
wait (10.0); // wait 10 seconds
// do other stuff
This can be done in the Thread model with a call to Thread::wait_s.
> We need to resolve the issue of internal vs external random number
> generator. Ideally, I like the idea of built-in RNG with also
> ability to use other RNGs, with a public API obscuring this library
> choice. Random numbers seem to be something that falls into scope
> of core.
Will add to TODO.
> 4. Hosts (Node)
> Note: I prefer to call these "Nodes" for two reasons-- past convention
> in ns, and also because host often implies "end host" as opposed to
> other nodes in the network such as routers, proxies, firewalls, etc.
> You seem to mean it in the end-host sense but I wonder whether Host then
> should inherit from a common base Node class.
Sure, I don't have any strong opinion here. More generally, I think lots
of classes/methods need renaming. I welcome suggestions on other pieces
of the code :)
> I like the posix_socket support.
It is just a stub. There is no functional code in it for now unfortunately.
> One thing that will be needed is a way to navigate pointers to reach
> objects on given nodes, or to reach nodes themselves. Computing
> propagation delay is a good example of this-- it relies on a low-level
> object such as an interface or channel to be able to find a "Position"
> object of a distant node. The distant node must be accessible by
> functions such as "IP address to node", and there probably needs
> to be some support for efficient lookups (caches, hashes) of such
> pointers to avoid walking lists as much as possible. One such
> list I have found to be essential in ns-2 and gtnets is a static list
> of nodes.
I would like to avoid this as much as possible: parallel simulations
will hate you if you do this.
> I would not rely on assembly code at this stage (under
> fiber-context-x86-linux-gcc.c). I think that consideration
> has to be given to more than small set of Linux/x86 distributions--
> the ns user base demands this.
I welcome suggestions on how to do what this file does without assembly
:) Some unix systems provide the makecontext POSIX function which could
be used but it is not present on osx for example. A lot of other unixes
do not offer it either so I don't have any solution to offer.
If the question is "can we have similar assembly code for other widely
used platforms ?", the answer is: if you ask for it, you will get it if
you give me access to such a platform. I have access to x86-64-linux-gcc
so, I will probably do it whenever there is a request for it.
ppc-osx-gcc will be harder since no one around has an apple box (I will
be happy with ssh access).
> 5. Tracing
> Overall, I like the libpcap output. I would opt for a more general
> tracing framework either in parallel to what you have or from which
> libpcap traces could be later generated. In my experience, I like
> to be able to trace events that are not conventional pcap events,
> such as "Router SPF computation" or packet drops in the stack due to
> e.g. checksum errors. Maybe we call these event logs instead of
> traces. (e.g., a .log file). These event logs could be generated
> per-host, or could be some common master event log for the simulator.
> Probably we want both capabilities.
probably. I have not given much thought to this though :/
Maybe it should be related to the tracing mechanism you describe below.
> The core should be instrumented to output statistics on its performance
> (wall time elapsed, number of events processed, max event queue length,
> etc.) Simulator should output this by default to stdout and if run in
> quiet mode should dump to some file like testname-output/testname.log
> or .stat
will add to TODO
> For trace output organization, it could land in a subdirectory
> named by the node name, as you have it (e.g., client/eth0) or else
> in the top level testname output directory. Upon thinking about it,
> I probably prefer the way you have it, except that I don't think they
> need to land in an additional logs/ subdirectory.
I don't have any strong opinion on this. This is a one-liner change so,
I don't mind doing it :)
> Nodes should also have a facility for counters and statistics.
will add this to the TODO.
> Minor nit-- tracefiles should not have file permissions of 700. Also
ah. probably, yes.
> the timestamp shows up as a negative time under tcpdump, but under
> ethereal it looks OK.
I noticed this too. Looks like a bug.
> 6. User interface
> I like the overall structure of the test scripts, how they are built
> up from Channels and Interfaces to Hosts then Applications.
> Note that there is a nice facility for command-line argument parsing in
> gtnets that is useful for a C++-based simulator.
thanks a lot for your comments,