marko [Fri, 20 Jul 2007 09:22:26 +0000 (09:22 +0000)]
Keep all annotation objects in a single list (annotation_list),
instead of having three separate lists for text, rectangle and
oval objects.
In the configuration file, deprecate the text, rectangle, and oval object
classes, and replace them with a single annotation class. The
type of annotation objects can be determined via proc nodeType.
Add an "xxx xxx xxx" assert in textConfigApply in a suspicious branch.
Remove the request for "raising" canvas objects tagged as "menuBubble"
in proc raiseAll, since it appears to be unused.
marko [Mon, 7 May 2007 23:09:07 +0000 (23:09 +0000)]
Refactor node instantiation procedures for hub and lanswitch nodes
to reflect recent changes in exec.tcl, as well as kernel-level
differences between 4.11 and 7.0 netgraph virtualization model.
marko [Mon, 7 May 2007 23:06:06 +0000 (23:06 +0000)]
Refactor the mechanism for creating netgraph-based pseudo
interfaces (ng_iface and ng_eiface) so that:
a) we don't need a specialized version of the ngctl userland utility;
b) in FreeBSD 7.0, netgraph nodes are not renamed when
interfaces are assigned to other vimages.
Introduce a helper array "ngnodemap" which provides name mapping
between kernel view of netgraph space, and IMUNES view of node
naming.
When creating links, do not insert a ng_pipe node between the
endpoints, given that ng_pipe is not yet ported to FreeBSD 7.0.
Instead, endpoint nodes are connected back to back, which means
that currently we will be able to construct topologies, but not
emulate link properties and impairments.
Use vimageCleanup instead of cleanupCfg, since it seems that the
latter is defunct, at least on FreeBSD 7.0.
marko [Wed, 2 May 2007 11:36:50 +0000 (11:36 +0000)]
Wipe out mbuf / cluster usage monitoring routines. Accounting and
imposing limits on mbuf usage is quite different on FreeBSD -CURRENT
from what it was on 4.11, so don't mess with those details at the
moment.
Do not show odd grid lines when zoom is set to 50% or lower.
Refactor TopoGen procedures so that they can operate on already
existing nodes.
Implement utilities for connecting selected nodes in a chain, star,
cycle or clique topology. The functions are accessible when
holding the right button over a selected node.
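For illustration, the link sets for these four topologies can be derived from the selected node list alone. The following is a minimal sketch of that logic; the function names and the plain-tuple representation are assumptions for this example, not the actual TopoGen code (which is written in Tcl):

```python
from itertools import combinations

# Hypothetical helpers: given an ordered list of selected node IDs,
# compute which unordered pairs of nodes should be linked.

def chain_links(nodes):
    # consecutive pairs: n0-n1, n1-n2, ...
    return list(zip(nodes, nodes[1:]))

def cycle_links(nodes):
    # a chain plus a closing link from the last node back to the first
    closing = [(nodes[-1], nodes[0])] if len(nodes) > 2 else []
    return chain_links(nodes) + closing

def star_links(nodes):
    # the first selected node acts as the hub
    hub, *leaves = nodes
    return [(hub, leaf) for leaf in leaves]

def clique_links(nodes):
    # every unordered pair of nodes is linked
    return list(combinations(nodes, 2))
```

A clique over n nodes thus produces n*(n-1)/2 links, which is why clique generation only makes sense for small selections.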
After moving a ng_iface interface to another vimage, "touch" it by
doing a no-op ifconfig on it. This was an old hack that allowed
the kernel to rename the corresponding netgraph node, so we need this
for running IMUNES with kernels older than Nov 23 2006, when the
renaming problem was fixed in the kernel.
In effect this and previous commit by Miljenko back out revision
1.36 of exec.tcl.
- Implement a procedure and GUI hooks for selecting adjacent nodes;
- Display a grid in the canvas;
- Change the cursor to a "watch" icon during undo / redo / delete
operations;
- Link color and "thickness" can now be configured on individual
link basis;
- Extensive (yet not complete) indentation cleanup - we should use
modulo 4 tab stops exclusively;
- Enclose "expr" expressions in braces, per the suggestion in the
  manual pages, for performance improvement (though it seems that no
  improvement can actually be observed);
- Remove the "Configure remote hosts" menu, given that we are
considering different approaches for executing remote experiments.
The "nexec" and related procedures are left untouched for now;
- Adjust default window size to cover the entire default canvas
surface, while it should still fit into 1024x768 displays.
ana [Wed, 17 Jan 2007 20:28:32 +0000 (20:28 +0000)]
Added support for multiple custom configurations for each node.
Configuration reading is backwards compatible with old configurations (.imn).
marko [Thu, 23 Nov 2006 11:52:24 +0000 (11:52 +0000)]
Given that now the kernel automatically renames netgraph interfaces
when moved from one vimage to another, remove unneeded ifconfig calls
that previously performed this job.
Bug found by: Ivan Babic
marko [Mon, 6 Nov 2006 11:13:43 +0000 (11:13 +0000)]
Implement a peer-to-peer membership daemon to be used for certain state
synchronization in future distributed / decentralized IMUNES operation.
The daemon will try to connect to remote peer(s) specified as command-line
arguments at invocation time, and form an ad-hoc peer-to-peer overlay
network with all nodes reachable via its peers.
Each node in the peer-to-peer structure is uniquely identified by its
IPv4 address. The daemon will try to maintain a small number of direct
peerings (between two and four) between random nodes in the overlay, thus
forming a well-connected mesh over time. Each node maintains full routing
information to all other nodes, basically in the same way as BGP does,
except that instead of AS numbers we use node IDs (IP addresses) to
construct path vectors. Once the routing state converges, no topology
information needs to be exchanged, except for periodic keepalives used to
verify that direct peerings are active. Hence, in steady state the
protocol is unlikely to consume any measurable network bandwidth or
CPU time.
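The BGP-like path-vector scheme described above can be sketched in a few lines. This is an illustrative model only; the function name, arguments, and shortest-path tie-breaking rule are assumptions for the example, not the daemon's actual implementation:

```python
# Each route advertisement carries the full path of node IDs
# (IPv4 addresses) it has traversed, BGP-style. Seeing our own ID
# in the path means the advertisement has looped back to us.

def accept_route(my_id, dest, path, routing_table):
    """Install a route to `dest` via `path`, unless it would loop or
    an equally short / shorter path is already known. Returns True
    if the routing table was updated (i.e. the route should be
    re-advertised to our peers)."""
    if my_id in path:
        return False        # our own ID in the path vector -> loop, discard
    known = routing_table.get(dest)
    if known is not None and len(known) <= len(path):
        return False        # keep the existing path, it is no longer
    routing_table[dest] = path
    return True
```

Because loop prevention falls out of the path vector itself, no global coordination is needed: each node applies this check locally as advertisements flood through the overlay.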
Besides maintaining the topology / reachability state, the daemon provides
a simple facility for nodes to announce arbitrary attributes associated
with their IDs. The attributes will be distributed by flooding the
overlay network with new state. Only an attribute set with a version
number greater than the currently stored one will be propagated through
the overlay, thus preventing endless looping. Hence, the originating
node is responsible for bumping its attribute version number each time it
attempts to broadcast a new set of attributes. In the future this part
of the protocol might need to be enhanced so that only incremental /
partial updates would need to be sent.
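The version-gated flooding rule can be sketched as follows; the function and parameter names here are assumptions for illustration, not the daemon's real interface:

```python
# Flooding with a per-originator version number: an update is stored
# and re-flooded only if its version exceeds what we already hold,
# which is what stops copies from circulating forever.

def handle_attribute_update(node_id, version, attribs, store, peers, sender):
    """Process an attribute announcement originated by `node_id`.
    Returns the list of peers the update should be re-flooded to
    (everyone except the peer it arrived from), or [] if stale."""
    cur_version, _ = store.get(node_id, (-1, None))
    if version <= cur_version:
        return []           # stale or duplicate -> drop, do not re-flood
    store[node_id] = (version, attribs)
    return [p for p in peers if p != sender]
```

Note that a duplicate arriving over a second overlay path is silently absorbed, so each node re-floods a given version at most once.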
An application can directly interface with this "daemon" by observing
global variables "active_hosts" and "dead_hosts" which will be updated
dynamically. For each active host, host_attrib_tbl($host_id) should
hold the most recent attributes, if any. If the need arises, notification
hooks can / should be placed in the ProcessAnnounce, ProcessWithraw and
ProcessAttributes procedures.
The framework was tested on our ad-hoc cluster with 1032 virtual nodes
mapped to 8 physical Pentium-4 machines. After a relatively long initial
synchronization period (around 20 minutes, mostly CPU-bound) joins
and leaves to the overlay are processed and propagated to all members
virtually instantaneously. However, in sporadic cases topology changes
can lead to short periods of oscillation lasting up to 10 - 20
seconds, but those oscillations are typically observable only on a
limited set of nodes.
My initial impression is that the protocol should work fine for overlays
of up to several hundred nodes in size, at which point we should
investigate alternative options for maintaining the overlay coherence.
miljenko [Tue, 17 Jan 2006 12:08:12 +0000 (12:08 +0000)]
Restored the "animateCursor" proc; it is needed in exec.tcl/statline.
In VMware, without the animateCursor call in proc statline, the status line is
blank during experiment startup/shutdown.
In ActiveState ActiveTcl the animateCursor call does not seem to be needed ?!