Commit 16787f14 authored Oct 30, 2014 by Jonathan Jenkins

clean up best practices doc

parent f25d26a7

2 changed files with 112 additions and 311 deletions:
codes/lp-io.h (+1, -3)
doc/codes-best-practices.tex (+111, -308)
codes/lp-io.h
View file @ 16787f14
...
@@ -21,9 +21,7 @@ int lp_io_prepare(char *directory, int flags, lp_io_handle* handle, MPI_Comm com
/* to be called within LPs to store a block of data */
int lp_io_write(tw_lpid gid, char *identifier, int size, void *buffer);

/* undo the immediately preceding write for the given LP */
int lp_io_write_rev(tw_lpid gid, char *identifier);

/* to be called (collectively) after tw_run() to flush data to disk */
...
doc/codes-best-practices.tex
View file @ 16787f14
...
@@ -31,6 +31,7 @@
\usepackage{color}
\usepackage{listing}
\usepackage{listings}
\usepackage{verbatim}
\lstset{%
frame=single,
...
@@ -244,20 +245,6 @@ model. This can help simplify reverse computation by breaking complex
operations into smaller, easier to understand (and reverse) event units with
deterministic ordering.

\subsection{Protecting data structures}

ROSS operates by exchanging events between LPs. If an LP is sending
...
@@ -283,141 +270,94 @@ headers. If the definitions are placed in a header then it makes it
possible for those event and state structs to be used as an ad-hoc interface
between LPs of different types.

Section~\ref{sec:completion} will describe alternative mechanisms for
exchanging information between different LP types.

\subsection{Techniques for exchanging information and completion events
across LP types}
\label{sec:completion}

TODO: fill this in. Send events into an LP using a C function API that calls
event\_new under the covers. Indicate completion back to the calling LP by
either delivering an opaque message back to the calling LP (that was passed
in by the caller in a void* argument), or by providing an API function for
the second LP type to use to call back (show examples of both).
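The opaque completion-message technique described above can be sketched in plain C. All names here (nic_state, nic_start_op, nic_finish_op) are hypothetical stand-ins for illustration, not CODES API calls:

```c
#include <assert.h>
#include <string.h>

/* Sketch of the "opaque completion message" pattern: the caller prepares its
 * own completion event and hands it to the lower-level LP as an opaque blob;
 * the lower-level LP never inspects it and simply delivers it back verbatim
 * when the operation finishes (in a real model, via an event send). */

typedef struct {
    char opaque[64];   /* caller-owned completion event, stored verbatim */
    int  opaque_size;
    int  in_flight;    /* is an operation pending? */
} nic_state;

/* caller side: start an operation, passing the completion event as void* */
void nic_start_op(nic_state *nic, const void *completion_ev, int size)
{
    assert(size <= (int)sizeof(nic->opaque));
    memcpy(nic->opaque, completion_ev, size);  /* stash, don't interpret */
    nic->opaque_size = size;
    nic->in_flight = 1;
}

/* callee side: on completion, hand the stored blob back unchanged;
 * returns the size delivered, or 0 if nothing was pending */
int nic_finish_op(nic_state *nic, void *completion_ev_out)
{
    if (!nic->in_flight) return 0;
    memcpy(completion_ev_out, nic->opaque, nic->opaque_size);
    nic->in_flight = 0;
    return nic->opaque_size;
}
```

The key design point is that the callee treats the completion event purely as bytes, so the two LP types need not share event struct definitions.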
\section{CODES: common utilities}

\subsection{codes\_mapping}
\label{sec:mapping}

TODO: pull in Misbah's codes-mapping documentation.
\subsection{modelnet}

Modelnet is a network abstraction layer for use in CODES models. It provides a
consistent API that can be used to send messages between nodes using a variety
of different network transport models. Note that modelnet requires the use of
the codes-mapping API, described in the previous section.

modelnet can be found in the codes-net repository. See the example program for
general usage.
\subsection{lp-io}

% TODO: flesh out further
lp-io is a simple API for storing modest-sized simulation results (not
continuous traces). It handles reverse computation and avoids doing any disk
I/O until the simulation is complete. All data is written with collective I/O
into a unified output directory. lp-io is mostly useful for cases in which you
would like each LP instance to report statistics, but for scalability and data
management reasons those results should be aggregated into a single file
rather than producing a separate file per LP. It is not recommended that lp-io
be used for data-intensive, streaming output.

The API for lp-io can be found in codes/lp-io.h.

% TODO: look at ross/IO code and determine how it relates to this.
\subsection{codes-workload generator}

% TODO: fill in further
codes-workload is an abstraction layer for feeding I/O and network workloads
into a simulation. It supports multiple back-ends for generating I/O and
network events; data could come from a trace file, from Darshan, or from a
synthetic description.

This component is under active development right now and not complete yet. If
you are interested in using it, a minimal example of the I/O API can be seen in
the codes-workload-dump utility and in
tests/workload/codes-workload-test-cn-lp.c.

The API for the workload generator can be found in codes/codes-(nw-)workload.h.

\subsection{codes\_event\_new}

Defined in codes/codes.h, codes\_event\_new is a small convenience wrapper to
tw\_event\_new that errors out if an event exceeds the global end timestamp for
ROSS. The assumption is that CODES models will normally run to a completion
condition rather than until simulation time runs out; see a later section for
more information on this approach.
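The check described above can be sketched as follows; the type and variable names are stand-ins for illustration, not the actual codes.h implementation:

```c
#include <assert.h>

/* Minimal sketch of the end-of-simulation guard that codes_event_new is
 * described as performing before creating an event. tw_stime and
 * g_tw_ts_end are stand-ins for the ROSS declarations. */

typedef double tw_stime;

static tw_stime g_tw_ts_end = 1000.0;  /* global end timestamp (stand-in) */

/* returns 1 if an event scheduled 'offset' after 'now' lands before the
 * global end timestamp, 0 if it would exceed it (the wrapper errors out) */
int event_fits_before_end(tw_stime offset, tw_stime now)
{
    return (now + offset) < g_tw_ts_end;
}
```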
\section{CODES/ROSS: general tips and tricks}
\subsection{Event magic numbers}

Put magic numbers at the top of each event struct and check them in the event
handler. This makes sure that you don't accidentally send the wrong event type
to an LP, and aids debugging.
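A minimal sketch of this convention, with illustrative names and an arbitrary constant:

```c
#include <assert.h>

enum { SVR_MAGIC = 0x5e77e4 };   /* arbitrary per-LP-type constant */

typedef struct {
    int magic;        /* first field of every event struct, checked on receipt */
    int payload;
} svr_msg;

/* construct a well-formed event for this LP type */
svr_msg svr_msg_init(int payload)
{
    svr_msg m = { SVR_MAGIC, payload };
    return m;
}

/* event handler entry: refuse events that were not built for this LP type */
int svr_handle(const svr_msg *m)
{
    assert(m->magic == SVR_MAGIC);  /* catches mis-routed event types early */
    return m->payload + 1;
}
```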
\subsection{Avoiding event timestamp ties}

Event timestamp ties in ROSS occur when two or more events have the same
timestamp. These have a variety of unintended consequences, the most
significant of which is hampering both reproducibility and determinism in
simulations. To avoid this, use codes\_local\_latency for events with small or
zero time deltas to add some random noise. codes\_local\_latency must be
reversed, so use codes\_local\_latency\_reverse in reverse event handlers.

One example of this usage is exchanging events between LPs without really
consuming significant time (for example, to transfer information from a server
to its locally attached network card). It is tempting to use a timestamp of 0,
but this would cause timestamp ties in ROSS. Use of codes\_local\_latency for
timing of local event transitions in this case can be thought of as bus
overhead or context switch overhead.
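The idea behind the noise can be sketched as below. The bounds and the rand()-based generator are illustrative only; the real codes\_local\_latency draws from ROSS's reversible RNG, which is why the matching reverse call is required in reverse handlers:

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch: instead of a zero time delta (which creates ties), draw a tiny
 * positive random delay to use as the event offset. */

static const double MIN_LATENCY = 0.5;   /* stand-in bounds, e.g. ns */
static const double MAX_LATENCY = 1.0;

double local_latency_sketch(void)
{
    double u = rand() / (double)RAND_MAX;          /* uniform in [0,1] */
    return MIN_LATENCY + u * (MAX_LATENCY - MIN_LATENCY);
}
```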
\subsection{Organizing event structures}

Since a single event structure contains data for all of the different types of
events processed by the LP, use a type enum + unions as an organizational
strategy. This keeps the event size down and makes it a little clearer what
variables are used by which event types.
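A sketch of this layout, with illustrative field names:

```c
#include <assert.h>

/* One event struct per LP type: a type tag plus a union holding the fields
 * for each event type, so unrelated fields don't inflate every event. */

enum svr_event_type { KICKOFF, REQ, ACK, LOCAL };

typedef struct {
    enum svr_event_type type;  /* selects which union member is valid */
    union {
        struct { unsigned long src; int payload_sz; } req;
        struct { unsigned long src; } ack;
        struct { double start_ts; } kickoff;
    } u;
} svr_msg;

/* accessors can assert the tag, documenting which fields each type uses */
int msg_payload(const svr_msg *m)
{
    assert(m->type == REQ);    /* only REQ carries a payload size */
    return m->u.req.payload_sz;
}
```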
\subsection{Validating across simulation modes}

During development, you should do test runs with serial, parallel conservative,
and parallel optimistic runs to make sure that you get consistent results.
These modes stress different aspects of the model.
\subsection{Working with floating-point data}
...
@@ -429,20 +369,20 @@ structure and perform assignment on rollback.
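The pattern referenced in the hunk context above (save the floating-point value in the event structure and perform assignment on rollback) can be sketched as follows; the names are illustrative, not from the example model:

```c
#include <assert.h>

/* Because incrementally undoing floating-point arithmetic can drift, the
 * forward handler stashes the old value in the event and the reverse
 * handler assigns it back, restoring the state bit-for-bit. */

typedef struct { double running_avg; int n; } svr_state;
typedef struct { double saved_avg; } svr_msg;  /* space reserved per event */

void handle_sample(svr_state *s, svr_msg *m, double sample)
{
    m->saved_avg = s->running_avg;                 /* stash before update */
    s->running_avg = (s->running_avg * s->n + sample) / (s->n + 1);
    s->n++;
}

void handle_sample_rev(svr_state *s, const svr_msg *m)
{
    s->running_avg = m->saved_avg;                 /* exact restore */
    s->n--;
}
```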
\subsection{How to complete a simulation}

Most core ROSS examples are designed to intentionally hit the end timestamp
for the simulation (i.e. they are modeling a continuous, steady state system).
This isn't necessarily true for other models. Quite simply, set
g\_tw\_ts\_end to an arbitrarily large number when running simulations that
have a well-defined end-point in terms of events processed.
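As a sketch of running to a completion condition under an arbitrarily large end timestamp (names and numbers below are illustrative, not from the example model):

```c
#include <assert.h>

/* The end timestamp is set far beyond any plausible event time; the model
 * simply stops generating new events once its workload (here, a fixed
 * request count) is done, so the event queue drains and the run ends. */

typedef struct { int reqs_done; int reqs_total; double now; } svr_state;

static const double TS_END = 1.0e18;  /* "arbitrarily large" end timestamp */

/* handle one acknowledgment; returns 1 if another request should be issued,
 * 0 once the workload is complete (no further events are generated) */
int handle_ack(svr_state *s)
{
    s->reqs_done++;
    s->now += 1.0;                     /* some per-request simulated time */
    return s->reqs_done < s->reqs_total;
}
```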
\begin{comment}
ROSS takes care of this
\subsection{Kicking off a simulation}
\label{sec_kickoff}

TODO: fill this in. Each LP needs to send an event to itself at the
beginning of the simulation (explain why). We usually skew these with
random numbers to help break ties right off the bat (explain why).
\end{comment}
\subsection{Handling non-trivial event dependencies}
...
@@ -509,7 +449,7 @@ section(s).
\item prefer placing state in event structure to LP state structure
\begin{enumerate}
\item simplifies reverse computation -- less persistent state
\item NOTE: tradeoff with previous point - consider efficiency vs.\ complexity
\end{enumerate}
...
@@ -528,7 +468,8 @@ section(s).
TODO: Standardize the namings for codes configuration, mapping, and model-net.

An example model representing most of the functionality present in CODES is
available in doc/example. In this scenario, we have a certain number of
storage servers, identified through indices $0,\ldots,n-1$ where each server
has a network interface card (NIC) associated with it. The servers exchange
messages with their neighboring
...
@@ -542,36 +483,15 @@ to concurrent messages being sent.
The model is relatively simple to simulate through the usage of ROSS. There are
two distinct LP types in the simulation: the server and the NIC. Refer to
example.c for data structure definitions. The server LPs are in charge of
issuing/acknowledging the messages, while the NIC LPs (implemented via CODES's
model-net) transmit the data and inform their corresponding servers upon
completion. This LP decomposition strategy is generally preferred for
ROSS-based simulations: have single-purpose, simple LPs representing logical
system components.
\begin{figure}
\begin{lstlisting}[caption=Server state and event message struct, label=snippet1]
struct svr_state
{
    int msg_sent_count;     /* requests sent */
    int msg_recvd_count;    /* requests recvd */
    int local_recvd_count;  /* number of local messages received */
    tw_stime start_ts;      /* time that we started sending requests */
};

struct svr_msg
{
    enum svr_event svr_event_type;
    tw_lpid src;            /* source of this request or ack */
    int incremented_flag;   /* helper for reverse computation */
};
\end{lstlisting}
\end{figure}
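The incremented\_flag field above is reserved as a "helper for reverse computation". A hedged sketch (not the actual example.c code) of how such a flag is typically used:

```c
#include <assert.h>

/* The forward handler records whether it incremented a counter, so the
 * reverse handler can undo exactly what the forward handler did, even when
 * the state change was conditional. */

typedef struct { int msg_recvd_count; } state_t;
typedef struct { int incremented_flag; } msg_t;

void handle_req(state_t *s, msg_t *m, int duplicate)
{
    if (!duplicate) {                /* conditional state change */
        s->msg_recvd_count++;
        m->incremented_flag = 1;     /* remember what we did */
    } else {
        m->incremented_flag = 0;
    }
}

void handle_req_rev(state_t *s, const msg_t *m)
{
    if (m->incremented_flag)         /* undo only if forward incremented */
        s->msg_recvd_count--;
}
```

Because ROSS rolls events back in exactly the reverse of the order they were applied, each reverse handler only has to undo its own event's effects.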
In this program, CODES is used in the following four ways: to provide
configuration utilities for the program (example.conf), to logically separate
and provide lookup functionality for multiple LP types, to automate LP
placement on KPs/PEs, and to simplify/modularize the underlying network
structure. The \codesconfig{} API is used for the first use-case, the
\codesmapping{} API is used for
...
@@ -581,53 +501,23 @@ ROSS-specific information.
\subsection{\codesconfig{}}

Listing~\ref{snippet2} shows a stripped version of example.conf (see the file
for comments). The configuration format allows categories, and optionally
subgroups within the category, of key-value pairs for configuration. The
LPGROUPS category defines the LP configuration. The PARAMS category is
currently used for \codesmodelnet{} and ROSS-specific parameters. For
instance, the \texttt{message\_size} field defines the maximum event size used
in ROSS for memory management. Of course, user-defined categories can be used
as well, which are used in this case to define the rounds of communication and
the size of each message.
\begin{figure}
\begin{lstlisting}[caption=example configuration file for CODES LP mapping, label=snippet2]
LPGROUPS
{
   SERVERS
   {
      repetitions="16";
      server="1";
      modelnet_simplenet="1";
   }
}
PARAMS
{
   packet_size="512";
   message_size="256";
   modelnet="simplenet";
   net_startup_ns="1.5";
   net_bw_mbps="20000";
}
server_pings
{
   num_reqs="5";
   payload_sz="4096";
}
\end{lstlisting}
\end{figure}
\subsection{\codesmapping{}}
\label{subsec:codes_mapping}

The \codesmapping{} API transparently maps user LPs to global LP IDs and MPI
ranks (aka ROSS PEs). The LP type and count can be specified through
\codesconfig{}. In this section, we focus on the \codesmapping{} API as well as
configuration. Multiple LP types are specified in a single LP group (there can
also be multiple LP groups in a config file).
In Listing~\ref{snippet2}, there is 1 server LP and 1
\texttt{modelnet\_simplenet} LP type in a group and this combination is
repeated
...
@@ -645,48 +535,10 @@ level LPs (e.g., the servers). Specifically, each NIC is mapped in a one-to-one
manner with the calling LP through the calling LP's group name, repetition
number, and number within the repetition.

After the initialization function calls of ROSS (\texttt{tw\_init}), the
configuration file can be loaded in the example program (see the main function
in example.c). Each LP type must register itself using
\texttt{lp\_type\_register} before setting up the mapping. Figure~\ref{snippet4}
shows an example of how the server LP registers itself.
\begin{figure}
\begin{lstlisting}[caption=CODES mapping function calls in example program, label=snippet3]
int main(int argc, char **argv)
{
    .....
    /* ROSS initialization function calls */
    tw_opt_add(app_opt);
    tw_init(&argc, &argv);

    /* loading the config file of codes-mapping */
    configuration_load(argv[2], MPI_COMM_WORLD, &config);

    /* Setup the model-net parameters specified in the config file */
    net_id = model_net_set_params();

    /* register the server LP type (model-net LP type is registered
     * internally in model_net_set_params()) */
    svr_add_lp_type();

    /* Now setup codes mapping */
    codes_mapping_setup();

    /* query codes mapping API */
    num_servers = codes_mapping_get_group_reps("MODELNET_GRP")
                  * codes_mapping_get_lp_count("MODELNET_GRP", "server");
    .....
}
\end{lstlisting}
\end{figure}
\begin{figure}
\begin{lstlisting}[caption=Registering an LP type, label=snippet4]
static void svr_add_lp_type()
{
    lp_type_register("server", svr_get_lp_type());
}
\end{lstlisting}
\end{figure}
The \codesmapping{} API provides ways to query information like the number of
LPs of a particular LP type, the group to which an LP type belongs, repetitions
in the
...
@@ -702,85 +554,31 @@ maintains a count of the number of remote messages it has sent and received as
well as the number of local completion messages.

For the server event message, we have four message types KICKOFF, REQ, ACK and
LOCAL. With a KICKOFF event, each LP sends a message to itself to begin the
simulation proper. To avoid event ties, we add a small noise using
codes\_local\_latency. The ``REQ'' message is sent by a server to its
neighboring server and when received, the neighboring server sends back a
message of type ``ACK''.

TODO: Add magic numbers in the example file to demonstrate the magic number
best practice.
\begin{figure}
\begin{lstlisting}[caption=Event handler of the server LP type., label=snippet5]
static void svr_event(svr_state * ns, tw_bf * b, svr_msg * m, tw_lp * lp)
{
    switch (m->svr_event_type)
    {
        case REQ:
            ...
        case ACK:
            ...
        case KICKOFF:
            ...
        case LOCAL:
            ...
        default:
            printf("\nInvalid message type %d ", m->svr_event_type);
            assert(0);
            break;
    }
}
\end{lstlisting}
\end{figure}
\subsection{\codesmodelnet{}}

\codesmodelnet{} is an abstraction layer that allows models to send messages
across components using different network transports. This is a consistent API
that can send messages across either torus, dragonfly, or simplenet network
models without changing the higher level model code.
In the CODES example, we use \emph{simple-net} as the underlying plug-in for
\codesmodelnet{}. The simple-net parameters are specified by the user in the
example.conf config file and loaded via model\_net\_configure.
\codesmodelnet{} assumes that the caller already knows what LP it wants to
deliver the message to (e.g.\ by using the codes-mapping API) and how large the
simulated message is. It carries two types of events: (1) a remote event to be
delivered to a higher level model LP (in the example, the \codesmodelnet{} LPs
carry the remote event to the server LPs) and (2) a local event to be delivered
to the caller once the message has been transmitted from the node (in the
example, a local completion message is delivered to the server LP once the
\codesmodelnet{} LP sends the message). Figure~\ref{snippet6} shows how the
server LP sends messages to the neighboring server using the model-net LP.
\begin{figure}
\begin{lstlisting}[caption=Example code snippet showing data transfer through model-net API, label=snippet6]
static void handle_kickoff_event(svr_state * ns, tw_bf * b,
                                 svr_msg * m, tw_lp * lp)
{
    ......
    /* record when transfers started on this server */
    ns->start_ts = tw_now(lp);

    /* each server sends a request to the next highest server */
    int dest_id = (lp->gid + offset) % (num_servers*2 + num_routers);

    /* model-net needs to know about (1) higher-level destination LP which
     * is a neighboring server in this case (2) struct and size of remote
     * message and (3) struct and size of local message (a local message
     * can be null) */
    model_net_event(net_id, "test", dest_id, PAYLOAD_SZ,
                    sizeof(svr_msg), (const void*)m_remote,
                    sizeof(svr_msg), (const void*)m_local, lp);
    ns->msg_sent_count++;
    .....
}
\end{lstlisting}
\end{figure}
\subsection{Reverse computation}

ROSS has the capability for optimistic parallel simulation, but instead of
...
@@ -792,7 +590,7 @@ functionality to reverse the LP state, given the event to be reversed. ROSS
makes this simpler in that events will always be rolled back in exactly the
order they were applied. Note that ROSS also has both serial and parallel
conservative modes, so reverse computation may not be necessary if the
simulation is not compute- or memory-intensive.

For our example program, recall the ``forward'' event handlers. They perform the
following:
...
@@ -835,13 +633,17 @@ event handlers are buggy).
\section{TODO}

\begin{itemize}
\item reference to ROSS user's guide, airport model, etc.
\item techniques for exchanging events across LP types (API tips)
\item add codes-mapping overview
\item add more content on reverse computation. Specifically, development
strategies using it, tips on testing, common issues that come up, etc.
\item put a pdf or latex2html version of this document on the codes web page
when it's ready
\end{itemize}
\begin{comment}
==== SCRATCH MATERIAL ====
\begin{figure}
\begin{lstlisting}[caption=Example code snippet., label=snippet-example]
for (i=0; i<n; i++) {
...
@@ -854,5 +656,6 @@ for (i=0; i<n; i++) {
Figure~\ref{fig:snippet-example} shows an example of how to show a code
snippet in latex. We can use this format as needed throughout the document.
\end{comment}

\end{document}