Patent Summary 1340592

Third-party information liability disclaimer

Some of the information on this website has been provided by external sources. The Government of Canada assumes no responsibility concerning the accuracy, currency or reliability of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Availability of the Abstract and Claims

Any differences between the text and the image of the Claims and the Abstract depend on when the document was published. The text of the Claims and of the Abstract is displayed:

  • when the application is open to public inspection;
  • when the patent is issued (grant).
(12) Patent: (11) CA 1340592
(21) Application Number: 1340592
(54) French Title: LOGICIEL PERMETTANT DE COMMUNIQUER DE L'INFORMATION A UN UTILISATEUR POUR UN GRAND NOMBRE D'ORDINATEURS
(54) English Title: SOFTWARE AGENT USED TO PROVIDE INFORMATION TO A USER FOR A PLURALITY OF COMPUTER
Status: Expired and beyond the period of reversal
Bibliographic Data
(51) International Patent Classification (IPC):
(72) Inventors:
  • WATSON, RALPH THOMAS (United States of America)
  • PACKARD, BARBARA B. (United States of America)
  • STEARNS, GLENN (United States of America)
(73) Owners:
  • HEWLETT-PACKARD COMPANY
(71) Applicants:
  • HEWLETT-PACKARD COMPANY (United States of America)
(74) Agent: MARKS & CLERK
(74) Associate agent:
(45) Issued: 1999-06-08
(22) Filing Date: 1989-03-21
Licence available: N/A
Dedicated to the Public: N/A
(25) Language of the documents filed: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. / Country or Territory / Date
225,115 (United States of America) 1988-07-27

Abstracts

English Abstract


A computing system is presented which includes an
application object, a computer based training instruction
object ("INSTRUCTION object") and an agent engine. The
INSTRUCTION object runs concurrently with the application
object. The application object includes a first action
processor and a first command processor. The first
action processor receives messages which indicate
syntactic actions taken by the user and generates
semantic commands based on the syntactic actions. The
first command processor receives the semantic commands
from the first action processor and executes the semantic
commands.
The INSTRUCTION object receives input from a user
through syntactic actions and displays information on a
monitor. The information instructs a user as to
operation of the first application. The INSTRUCTION
object may include an INSTRUCTION action processor and an
INSTRUCTION command processor. The INSTRUCTION action
processor receives messages which indicate syntactic
actions taken by the user and generates semantic commands
based on the syntactic actions. The INSTRUCTION command
processor receives the semantic commands from the
INSTRUCTION action processor and executes the semantic
commands.
The agent, running a task language program, sends
semantic commands to the INSTRUCTION object which direct
the INSTRUCTION object as to what information to display.
The agent also monitors the application object and the
INSTRUCTION object, intercepting semantic commands before
they are executed.

Revendications

Note: Claims are shown in the official language in which they were submitted.


THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE
PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. In a computing system having a monitor, and an agent
process, a method for providing a demonstration, the
method comprising the steps of:
(a) sending an interrogation message from the agent
process to a first process, the interrogation message
requesting the first process to send to the agent process
information about a graphical interface element displayed
in a window of the first process;
(b) accessing, by the first process, the
information about the graphical interface element
requested in step (a);
(c) sending, from the first process to the agent
process, the information about the graphical interface
element accessed in step (b); and
(d) performing the demonstration by the agent
process, wherein the demonstration includes the substep
of manipulating the graphical interface element displayed
in the window of the first process.
2. A method as in claim 1 wherein the interrogation
message includes an identity of the graphical interface
element and the information about the graphical interface
element includes a location of the graphical interface
element within the window.
3. A method as in claim 1 wherein the interrogation
message includes an identity of the graphical interface
element and the information about the graphical interface
element includes status information about a second
process represented by the graphical interface element.
4. A method as in claim 1 wherein the interrogation
message includes a location of the graphical interface
element and the information about the graphical interface

element includes an identity of the graphical interface
element.
5. A method as in claim 1 wherein step (d) includes the
substeps:
(d.1) sending messages from the agent process to
an INSTRUCTION process which instruct the INSTRUCTION
process to display conversational data; and
(d.2) displaying by the INSTRUCTION process the
conversational data on the monitor.

Description

Note: Descriptions are shown in the official language in which they were submitted.


A SOFTWARE AGENT USED TO PROVIDE INSTRUCTION TO A USER
FOR A PLURALITY OF COMPUTER APPLICATIONS
Background
This application is a divisional application of
Canadian patent application Serial No. 594,334 filed
March 21, 1989.
The present invention relates to computer-based
training (CBT) and particularly to the use of a software
agent to provide instruction to a user.
CBT is provided with many commercially available
applications to either complement or replace manuals,
written tutorials and other traditional instruction
materials. CBT can be interactive and often models the
role of a human mentor, providing specific feedback in
response to a user's performance during a training
session.
Generally there are two ways CBT is implemented. In
"Simulated CBT" the application is simulated by a CBT
program. In "Concurrent CBT" the application is run
concurrently with a CBT program.
As applications become more sophisticated and make
greater utilization of graphics, it is more difficult to
use simulated CBT. This is because complexity in an
application generally requires complexity in a program
which simulates the application.
Concurrent CBT may often be much less complex than
simulated CBT because in concurrent CBT the application
itself provides its own interface and functionality
during a training session. In a concurrent CBT session,
a CBT program generally will initiate the application and
act as a "shell" around the application.
During the concurrent CBT session, the CBT will open
the application and control it to bring the application
to a known, desired state. Using its own routines CBT
will deliver instructional text and graphics to a user
through "windows" which are drawn on top of the

application. The text and graphics explain the
application concepts and prompt for user response. The
CBT monitors user input to determine whether the user has
responded appropriately, and monitors the display screen
to determine when the application has finished processing
input. Then the CBT can advance the training based on
the user's response.
Typically, the CBT controls and monitors the
activities of an application at a syntactic level. What
is meant herein by "syntactic level" is the action a user
makes, such as keystrokes or movements of a mouse, in
order to interact with an application. For example, in a
CBT acting at a syntactic level where an application is
controlled with a keyboard and mouse and where it outputs
to a CRT monitoring device, the CBT would be able to
detect key and mouse input, as well as the status of
pixels on the CRT. This level of interaction is referred
to as "syntactic" because at this level the computer does
not semantically interpret the intent associated with the
actions.
Summary of the Invention
In accordance with the preferred embodiments of the
present invention a computing system is presented which
includes an application object, a computer based training
instruction object ("INSTRUCTION object") and an agent
engine. The INSTRUCTION object runs concurrently with
the application object. The application object includes
a first action processor and a first command processor.
The first action processor receives messages which
indicate syntactic actions taken by the user and
generates semantic commands based on the syntactic
actions. The first command processor receives the
semantic commands from the first action processor and
executes the semantic commands.
The INSTRUCTION object receives input from a user
through syntactic actions and displays information on a

monitor. The information instructs a user as to
operation of the first application. The INSTRUCTION
object, in the preferred embodiment, includes an
INSTRUCTION action processor and an INSTRUCTION command
processor. The INSTRUCTION action processor receives
messages which indicate syntactic actions taken by the
user and generates semantic commands based on the
syntactic actions. The INSTRUCTION command processor
receives the semantic commands from the INSTRUCTION
action processor and executes the semantic commands.
The agent, running a task language program, sends
semantic commands to the INSTRUCTION object which direct
the INSTRUCTION object as to what information to display.
The agent also monitors the application object and the
INSTRUCTION object, intercepting semantic commands before
they are executed.
An object of an aspect of this invention is as
follows:
In a computing system having a monitor, and an agent
process, a method for providing a demonstration, the
method comprising the steps of:
(a) sending an interrogation message from the agent
process to a first process, the interrogation message
requesting the first process to send to the agent process
information about a graphical interface element displayed
in a window of the first process;
(b) accessing, by the first process, the
information about the graphical interface element
requested in step (a);
(c) sending, from the first process to the agent
process, the information about the graphical interface
element accessed in step (b); and
(d) performing the demonstration by the agent
process, wherein the demonstration includes the substep
of manipulating the graphical interface element displayed
in the window of the first process.

Brief Description of the Drawings
Figure 1 is a block diagram which shows the
interaction between an application, an agent environment
and a help environment.
Figure 2 is a block diagram which shows how a task
language file is generated and executed in accordance
with the preferred embodiment of the present invention.
Figure 3 is a block diagram of the application shown
in Figure 1 in accordance with a preferred embodiment of
the present invention.
Figure 4 is a block diagram showing data flow
through the application shown in Figure 1 in accordance
with a preferred embodiment of the present invention.
Figure 5 is a diagram of a compiler in accordance
with a preferred embodiment of the present invention.
Figure 6 shows a computer, monitor, keyboard, and
mouse in accordance with the preferred embodiment of the
present invention.
Figure 7 shows a top view of the mouse shown in
Figure 6.
Figure 8 shows data flow within the compiler shown
in Figure 5.
Figure 9 is a block diagram which shows the addition
of an application which performs computer based training
to the application and agent environment shown in Figure
1.
Figures 10-17 show what is displayed on the monitor
shown in Figure 6, as a result of the execution of a task
by the system shown in Figure 9.
Description of the Preferred Embodiment
Figure 1 is a block diagram of a computing system in
accordance with a preferred embodiment of the present
invention. A user 111 communicates with the computing
system through a software environment 112. Software
environment 112 may be, for instance, Microsoft Windows,
a program sold by Microsoft Corporation, having a
business address at 16011 NE 36th Way, Redmond, WA 98073-
9717. Software environment 112 interacts with an
application 100. Messages containing information
describing user actions are sent to application 100 by
software environment 112. In the preferred embodiment
the messages containing user actions are standard
messages sent by Microsoft Windows. Application 100
includes an action processor 101 which converts syntactic
user actions to a single semantic command. For example,
action processor 101 observes and collects user actions,
e.g., the clicks and movement of a mouse used by a user.
Once the user actions conform to the syntax of a command,
a semantic command is generated. There are multiple ways
user actions may be used to generate a single semantic
command. The ways the semantic command is generated by a
user may differ, but the execution of the semantic
command is always the same. Action processor 101 is able
to syntactically interpret the many ways a user can build
a particular semantic command. In addition to syntactic
user actions, action processor 101 also processes other
messages which come to application 100. Some
messages will result in a semantic command being
generated; others will be dealt with entirely by action
processor 101.
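The following sketch is illustrative only and is not part of the patent text; it models, in Python, how an action processor of the kind just described might collect syntactic actions and emit a single semantic command once a complete command syntax is recognized. The class, action and command names are invented for the example.

    # Illustrative model of an action processor: syntactic actions in,
    # one semantic command out.  Names are hypothetical, not from the patent.
    from dataclasses import dataclass

    @dataclass
    class SemanticCommand:
        name: str
        args: tuple

    class ActionProcessor:
        def __init__(self):
            self._pending = []          # syntactic actions collected so far

        def handle_action(self, action):
            """Collect one syntactic action; return a SemanticCommand when
            the collected actions complete a command, else None."""
            self._pending.append(action)
            # Two different syntactic routes to the same semantic OPEN command:
            if self._pending[-2:] == [("click", "Fred"), ("click", "Fred")]:
                self._pending.clear()
                return SemanticCommand("OPEN", ("FOLDER", "Fred"))
            if self._pending[-2:] == [("menu", "File"), ("menu", "Open")]:
                self._pending.clear()
                return SemanticCommand("OPEN", ("FOLDER", "Fred"))
            return None                 # still an incomplete syntax

    if __name__ == "__main__":
        ap = ActionProcessor()
        print(ap.handle_action(("click", "Fred")))   # None
        print(ap.handle_action(("click", "Fred")))   # OPEN, built by double click
        print(ap.handle_action(("menu", "File")))    # None
        print(ap.handle_action(("menu", "Open")))    # the same OPEN, built from the menu

However many syntactic routes exist, the semantic command that comes out, and therefore its execution, is the same.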
Application 100 also includes a command processor
102 which executes semantic commands. Command processor
102 receives semantic commands in internal form (internal
form is discussed more fully below) and returns an error
if a command cannot be executed.
Application 100 and software environment 112
interact with help environment 119 at the level of the
interface between software environment 112 and
application 100. Help environment 119 includes a help
application 103, which utilizes a help text 104. Help
environment 119 also includes help tools 105 which are
used to generate help text 104.

Software environment 112 also interacts with an
agent environment 118. Agent environment 118 includes an
agent task 107 and an agent engine 108.
Agent engine 108 interacts with application 100 at
five different conceptual categories, in order to perform
five functions. Agent engine 108 interacts with action
processor 101 through a data channel 113 for the purpose
of interrogation. Agent engine 108 interacts between
action processor 101 and command processor 102 through a
data channel 114 for the purpose of monitoring the
activities of application 100. Agent engine 108
interacts with command processor 102 through a data
channel 115 for the purpose of having commands executed
by application 100. Agent engine 108 interacts with
command processor 102 through a data channel 116 for the
purpose of handling errors in the processing of a command
within application 100. Agent engine 108 interacts with
command processor 102 through a data channel 117 for the
purpose of recording execution of application 100 and
receiving notification of the completion of a command.
In the preferred embodiment of the present
invention, commands may be represented in four ways: (1)
in task language form, stored as keywords and parameters;
(2) in pcode form, which are binary codes in external
form with additional header interpreted by agent engine
108; (3) in external form, which are binary data
understood by application 100 and which are passed
between agent engine 108 and application 100; and (4) in
internal form, as binary commands which are executed
within application 100. The four ways of representing
commands are further described in Appendix A attached
hereto.
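Purely as an illustration, and with encodings invented for the example (the patent's actual layouts are described in the appendices, not here), the following Python sketch models the four representations as a conversion chain: task-language text, pcode carrying a class header, an external form handed to the application, and an internal form executed inside the application.

    # Hypothetical model of the four command representations.
    import json

    def task_language_to_pcode(line, class_name):
        """(1) task language text -> (2) pcode: a header plus an external form."""
        keyword, *params = line.split()
        external = json.dumps({"cmd": keyword, "params": params}).encode()   # (3)
        header = {"class": class_name, "length": len(external)}
        return {"header": header, "external": external}

    def external_to_internal(external):
        """(3) external form -> (4) internal form understood only by the application."""
        decoded = json.loads(external)
        return (decoded["cmd"].upper(), tuple(decoded["params"]))

    if __name__ == "__main__":
        pcode = task_language_to_pcode('open "Fred"', class_name="FOLDER")
        print(pcode["header"])                          # routing data read by the agent engine
        print(external_to_internal(pcode["external"]))  # what the application would execute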
Figure 2 shows a block diagram of how the overall
agent system functions. A task language file 131 is a
file containing task language. Task language is the text
form of commands that describe an application's
functionality. Task language is comprised of class

dependent commands and class independent commands. Class
dependent commands are commands which are to be performed
by an application. In Figure 2, just one application,
application 100 is shown; however, agent engine 108 may
interact with many applications.
In the preferred embodiment of the present
invention, data files to be operated on by applications
are referenced by the use of objects. Each object
contains a reference to a data file and a reference to an
application. Those objects which refer to the same
application are said to be members of the same class.
Each application executes a different set of commands.
Class dependent commands therefore differ from
application to application.
Agent engine 108 executes class independent commands
which are commands understood by agent engine 108. Class
independent commands are executed by agent engine 108,
not by an application.
Task language file 131 is used by a class
independent parser 122 to prepare a pcode file 121. In
preparing pcode file 121, independent parser 122 calls
class dependent parsers 123, 124, etc. As will be
further described below, a class dependent parser is a
parser which generates class dependent commands which are
encapsulated in pcode form. Agent engine 108 extracts
the commands in their external form from the pcode form
and forwards these commands to the appropriate
application. A class field within the pcode indicates
which application is to receive a particular class
dependent command. Class independent parser 122 is a
parser which generates pcodes which are executed by agent
engine 108.
Task language file 131 may be prepared by user 111
with an agent task editor 132. Alternately, task
language file may be prepared by use of a class
independent recorder 125 which utilizes class dependent
recorders 126, 127, etc. Generally, a recorder

records the commands of applications for later playback.
When the computing system is in record mode, agent task
editor 132 receives input from applications, such as
shown application 100, which detail what actions agent
engine 108 and the applications take. Applications
communicate to agent task editor 132 through an
application program interface (API) 130. Agent task
editor 132, forwards data to class independent recorder
125 when the computing system is in record mode, and to
task language file 131 when agent task editor is being
used by user 111.
Class independent recorder 125 receives the
information and builds task language file 131. When
class independent recorder 125 detects that agent task
editor 132 is forwarding information about an action
taken by an application, class independent recorder calls
the class dependent recorder for that application, which
then generates the task language form for that action.
Class independent recorder 125 generates the task
language form for actions taken by agent engine 108.
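As a sketch only (the dispatch rule, class names and storage below are assumptions, not the patent's code), the recording path just described can be modelled as a class independent recorder that builds the task language file, delegating actions taken by an application to that application's class dependent recorder and rendering agent actions itself.

    # Hypothetical model of record mode: building a task language file.

    class ClassDependentRecorder:
        """One per application class; turns that application's actions
        into task language text."""
        def __init__(self, class_name):
            self.class_name = class_name

        def to_task_language(self, action):
            return f'{action} "{self.class_name}"'

    class ClassIndependentRecorder:
        def __init__(self, dependent_recorders):
            self.dependent = dependent_recorders   # keyed by application class
            self.task_file = []                    # lines of task language

        def record(self, source, action):
            if source == "agent":
                # actions taken by the agent engine are rendered directly
                self.task_file.append(action)
            else:
                # actions taken by an application go to its dependent recorder
                self.task_file.append(self.dependent[source].to_task_language(action))

    if __name__ == "__main__":
        rec = ClassIndependentRecorder({"OFFICE": ClassDependentRecorder("NewWave Office")})
        rec.record("agent", 'focus on office "NewWave Office"')
        rec.record("OFFICE", "select instruction")
        print("\n".join(rec.task_file))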
When executing pcode file 121, agent engine 108
reads each pcode command and determines whether the pcode
command contains a class independent command to be
executed by agent engine 108, or a class dependent
command to be executed by an application. If the pcode
command contains a class independent command, agent
engine 108 executes the command. If the pcode command
contains a class dependent command, agent engine 108
determines by the pcode command the application which is
to receive the command. Agent engine 108 then extracts a
class dependent command in external form, embedded within
the pcode. This class dependent command is then sent to
the application. For instance, if the class dependent
command is for application 100, the class dependent
command is sent to application 100. Within application
100 a translate to internal processor 128 is used to

translate the class dependent command--sent in external
form-- to the command's internal form.
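As an illustration under assumptions (the pcode layout and routing keys are invented), this Python sketch models the execution loop just described: each pcode command is either a class independent command executed by the agent engine itself, or a class dependent command whose class field names the application that receives the embedded external form.

    # Hypothetical model of agent engine execution of a pcode file.

    def run_pcode(pcode_file, applications, execute_independent):
        """pcode_file: list of dicts; applications: class name -> callable that
        accepts a command in external form (the application then translates
        it to internal form and executes it)."""
        for cmd in pcode_file:
            if cmd["class"] == "INDEPENDENT":
                execute_independent(cmd["body"])     # run by the agent engine itself
            else:
                target = applications[cmd["class"]]  # class field selects the application
                target(cmd["external"])              # external form sent to that application

    if __name__ == "__main__":
        log = []
        apps = {"OFFICE": lambda ext: log.append(("OFFICE internal form", ext.upper()))}
        pcode = [
            {"class": "INDEPENDENT", "body": "set command on"},
            {"class": "OFFICE", "external": 'select folder "Fred"'},
        ]
        run_pcode(pcode, apps, execute_independent=lambda b: log.append(("agent engine", b)))
        print(log)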
In the interactions between agent engine 108 and
application 100, API 130 is used. API 130 is a set of
functions and messages for accessing agent engine 108 and
other facilities.
When the system is in record mode, translate to
internal processor 128 translates commands from agent
engine 108 and feeds them to command processor 102
through a command interface component 146 shown in Figure
3. A translate to external processor 129 receives
commands in internal form that have been executed by
command processor 102. The commands are received through
return interface component 147, shown in Figure 3.
Translate to external processor 129 translates the
commands in internal form to commands in external form.
The commands in external form are then transferred
through API 130 to task editor 132.
Figure 3 shows in more detail the architecture of
application 100 in the preferred embodiment of the
present invention. Application 100 includes a user
action interface component 145 which interacts with
software environment 112 and command interface component
146 which communicates with both action processor 101 and
command processor 102. As shown both action processor
101 and command processor 102 access application data
144. A return interface component 147 is responsive to
command processor 102 and returns control back to
software environment 112. Translate to external
processor 129 is shown to interact with return interface
component 147. Return interface component 147 is only
called when application 100 is in playback mode or record
mode. These modes are more fully described below.
Return interface component 147 indicates to agent engine
108 that a command has been executed by application 100
and application 100 is ready for the next command.

Also included in application 100 are a modal dialog
box processor 148 and an error dialog box component 149.
Both these interact with software environment 112 to
control the display of dialog boxes which communicate
with a user 111.
Some applications are able to operate in more than
one window at a time. When this is done a modeless user
action interface component, a modeless action processor,
and a modeless command interface component is added for
each window, beyond the first, in which an application
operates. For example, in application 100 is shown a
modeless user action interface component 141, a modeless
action processor 142 and a modeless command interface
component 143.
Figure 4 shows data flow within application 100.
Messages to application 100 are received by user action
interface component 145. For certain types of messages--
i.e., messages from help application 103-- user action
interface 145 causes application 100 to return
immediately. Otherwise the message is forwarded to a
playback message test component 150.
If the message is for playback of commands which
have been produced either by recording or parsing, the
message is sent to translate to internal processor 128
which translates a command within the message from
external form to internal form. The command is then
forwarded to command interface component 146.
If the message is not a playback message the message
is sent to action processor 101 to, for example,
syntactically interpret a user's action which caused the
generation of the message. If there is no semantic
command generated by action processor 101, or produced by
translate to internal processor 128, playback message test component
150 causes application 100 to return. If there is a
semantic command generated the command is forwarded to
command interface component 146.

If agent engine 108 is monitoring execution of
commands by application 100, command interface component
146 sends any data received to translate to external
processor 129 which translates commands to external form
and transfers the commands to agent engine 108. Command
interface component also forwards data to a modal dialog
box test component 152.
If the forwarded data contains a request for a
dialog box, modal dialog box test component 152 sends the
data to modal dialog box processor 148 for processing.
Otherwise modal dialog box test component 152 sends the
data to command test component 151.
If the data contains a command, command test
component 151 sends the command to command processor 102
for execution. Command test component 151 sends the data
to return interface component 147.
If agent engine 108 is recording commands, return
interface component 147 sends the data to translate to
external processor 129 for translation to external form
and transfer to agent engine 108 via return interface
component 147. Return interface component returns until
the next message is received.
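The routing just described can be condensed into the following illustrative Python sketch; it is a simplification, not the patent's code, and the message fields and component callables are invented.

    # Hypothetical condensation of the data flow of Figure 4.

    def handle_message(msg, app, monitoring=False, recording=False):
        if msg.get("from") == "help":
            return None                                       # returned immediately
        if msg.get("playback"):
            command = app["translate_to_internal"](msg["external"])
        else:
            command = app["action_processor"](msg["action"])  # may be None
            if command is None:
                return None                                   # no semantic command produced
        if monitoring:                                        # agent monitors execution
            app["notify_agent"](app["translate_to_external"](command))
        if command.get("dialog"):
            return app["modal_dialog_processor"](command)
        result = app["command_processor"](command)
        if recording:                                         # agent records execution
            app["notify_agent"](app["translate_to_external"](command))
        return result

    if __name__ == "__main__":
        app = {
            "translate_to_internal": lambda ext: {"name": ext, "dialog": False},
            "translate_to_external": lambda cmd: cmd["name"],
            "action_processor": lambda act: {"name": "OPEN", "dialog": False}
                                            if act == "double_click" else None,
            "modal_dialog_processor": lambda cmd: "dialog shown",
            "command_processor": lambda cmd: "executed " + cmd["name"],
            "notify_agent": print,
        }
        print(handle_message({"action": "double_click"}, app, monitoring=True))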
In Figure 5, data flow through a task language
compiler 120 is shown. Task language compiler 120 is
used to generate pcode file 121 from task language
written by a user. A task language file 131 includes
commands written by a user. In the preferred embodiment
of the present invention, the task language is written in
accordance with the Agent Task Language Guidelines
included as Appendix B to this Specification.
Task language compiler 120 is a two pass compiler.
In the first pass the routines used include an input
stream processor 164, an expression parser 166, a class
independent parser 122, a save file buffer 171, second
pass routines 174, and class dependent parsers, of which
are shown class dependent parser 123, a class dependent

parser 167 and a class dependent parser 168. As a result
of the first pass a temporary file 176 is created.
Class independent parser 122 parses the class
independent task language commands. Each application
which runs on the system also has special commands which
it executes. For each application, therefore, a separate
class dependent parser is developed. This parser is able
to parse commands to be executed by the application for
which it is developed. Class dependent parsers may be
added to or deleted from task language compiler 120 as
applications are added to or deleted from the system.
In addition a CBT parser 125 is shown. CBT parser
125 is used to parse code generated to be run by agent
engine 108 when performing CBT.
When compiling begins, class independent parser 122
requests a token from input stream processor 164. Input
stream processor 164 scans task language file 131 and
produces the token. Class independent parser 122 then
does one of several things. Class independent parser 122
may generate pcode to be sent to save file buffer 171.
If class independent parser 122 expects the next token to
be an expression, class independent parser 122 will call
routine MakeExpression() which calls expression parser
166. Expression parser 166 requests tokens from input
stream processor 164 until the expression is complete.
Expression parser 166 then generates pcode to be sent to
file buffer 171 and then to be saved in temporary file
176. Additionally, expression parser 166 generates an
expression token which is returned to input stream
processor 164. Input stream processor 164 delivers this
expression to independent parser 122 when it is requested
by independent parser 122.
As a result of a FOCUS command, a particular class
dependent parser will have priority. Therefore, in its
parsing loop, class independent scanner 122a will call
the class dependent parser for the application which
currently has the focus. The class dependent parser will

request tokens from input stream processor 164 until it
has received a class dependent command which the semantic
routines called by class dependent parser convert to
external command form, or until the class dependent
parser determines that it cannot parse the expressions
that it has received. If the class dependent parser
encounters an expression, it may invoke expression parser
166 using the call MakeExpression(). If the class
dependent parser is unable to parse the tokens it
receives, the class dependent parser returns an error and
the class independent parser will attempt to parse the
tokens.
A FOCUS OFF command will result in independent
parser 122 immediately parsing all commands without
sending them to a dependent parser. When a string of
class independent commands are being parsed, this can
avoid the needless running of dependent parser software,
thus saving computing time required to compile the task
language.
CBT compiler directives result in a CBT compiler
flag being "on" or "off". The CBT compiler flag
determines whether CBT parser 125 is available to be
called to parse commands. Precedence for parsing is as
described below.
Commands will first be sent to any class dependent
parser which has focus. If there is no class dependent
parser with focus, or if the class dependent parser with
focus is unable to parse the command, the command will
then be sent to CBT parser 125 for parsing if the CBT
compiler flag is "on". If the CBT compiler flag is "off"
or if CBT parser 125 is unable to parse the command, the
command will be parsed by class independent parser 122.
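A minimal sketch of that precedence, with the parser interfaces invented for the example (each parser here simply returns None when it cannot parse), is:

    # Hypothetical model of the parsing precedence described above.

    def parse_command(tokens, focused_parser, cbt_parser, independent_parser, cbt_flag):
        """Each parser returns its output, or None if it cannot parse the tokens."""
        if focused_parser is not None:
            result = focused_parser(tokens)
            if result is not None:
                return result
        if cbt_flag:
            result = cbt_parser(tokens)
            if result is not None:
                return result
        return independent_parser(tokens)        # class independent parser is last

    if __name__ == "__main__":
        office = lambda t: ("OFFICE external command", t) if t[0] == "select" else None
        cbt    = lambda t: ("CBT pcode", t) if t[0] == "point" else None
        indep  = lambda t: ("class independent pcode", t)
        print(parse_command(["select", "folder"], office, cbt, indep, cbt_flag=True))
        print(parse_command(["point", "to", "center"], office, cbt, indep, cbt_flag=True))
        print(parse_command(["while", "flag"], office, cbt, indep, cbt_flag=True))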
Figure 6 shows a computer 18 on which INSTRUCTION
objects may be run. Also shown are a monitor 14, a mouse
20 and a keyboard 19. Figure 7 shows mouse 20 to include
a button 27 and a button 28.

In Figure 8 is shown data flow between independent
parser 122 and dependent parsers of which dependent
parser 123 and dependent parser 124 are shown. In order
to focus the discussion on the relationship between
parsers, calls to expression parser 166 by scanner 122a
are not taken into account in the discussion of Figure 8.
Also, CBT parser 125 and a dependent scanner 125a
for CBT parser 125 are shown. When the CBT flag is "on"
precedence for parsing commands is a class dependent
parser with focus, then CBT parser and finally class
independent parser 122 as stated above. In the following
discussion, for simplicity of explanation, it is assumed
that the CBT flag is off.
When independent parser 122 is ready for a token,
independent parser 122 calls a scanner routine 122a.
Scanner 122a checks if there is a focus on an
application. If there is not a focus on an application,
scanner 122a calls input stream processor 164 which
returns to scanner 122a a token. Scanner 122a returns
the token to independent parser 122.
If there is a focus on an application, the dependent
parser for the application has precedence and is called.
For instance, when focus is on the application for parser
123, parser 123 calls scanner 122a through a dependent
scanner 123a. Scanner 122a checks its state and
determines that it is being called by a dependent parser,
so it does not recursively call another dependent parser.
Scanner 122a calls input stream processor 164 which
returns to scanner 122a a token. Scanner 122a returns
the token to dependent parser 123 through dependent
scanner 123a. Although the present implementation of the
present invention includes dependent scanner 123a, in
other implementations dependent scanner 123a may be
eliminated and parser 123 may call scanner 122a directly.
Dependent parser 123 will continue to request tokens
through dependent scanner 123a as long as dependent
parser 123 is able to parse the tokens it receives. With

these tokens dependent parser will call semantic routines
which will generate class dependent external commands
embedded in pcode. When dependent parser 123 is unable
to parse a token it receives, dependent parser will
return to scanner 122a an error. Scanner 122a then calls
input stream processor 164 and receives from input stream
processor 164 the token which dependent parser 123 was
unable to parse. This token is returned to independent
parser 122. Independent parser 122 parses the token and
calls semantic routines to generate pcode for execution
by agent engine 108. The next time independent parser
122 requests a token from scanner 122a, scanner 122a will
again call dependent parser 123 until there is a FOCUS
OFF command or until there is a focus on another
application.
When the focus is on the application for dependent
parser 124, scanner 122a will call dependent parser 124.
Dependent parser 124 calls a dependent scanner 124a and
operates similarly to dependent parser 123.
Save file buffer 171, shown in Figure 5, receives
pcode from class independent parser 122 and from
expression parser 166, and receives external command
forms embedded in pcode from class dependent parsers.
Save file buffer 171 stores this information in a
temporary file 176. Second pass routines 174 takes the
pcode and external command forms stored in temporary file
176 and performs housekeeping, e.g., fixes addresses
etc., in order to generate pcode file 121.
In Figure 9 application 100 is shown to be included
in, for example, an object "NewWave Office". A window
300 is the user interface for object "NewWave Office".
For the purpose of instructing a user into how to use
object "NewWave Office" an INSTRUCTION application 200
runs simultaneously to application 100. INSTRUCTION
application 200 is included within an INSTRUCTION object.
INSTRUCTION application 200 is designed similar to other
applications on the system. INSTRUCTION application 200

has an action processor 201 and a command processor 202
as shown.
Agent engine 108 interacts with INSTRUCTION
application 200 as with other applications on the system.
For instance agent engine 108 interacts with action
processor 201 through a data channel 213 for the purpose
of interrogation. Agent engine 108 interacts between
action processor 201 and command processor 202 through a
data channel 214 for the purpose of monitoring the
activities of application 200. Agent engine 108
interacts with command processor 202 through a data
channel 215 for the purpose of having commands executed
by INSTRUCTION application 200. Agent engine 108
interacts with command processor 202 through a data
channel 216 for the purpose of handling errors in the
processing of a command within INSTRUCTION application
200. Agent engine 108 interacts with command processor
202 through a data channel 217 for the purpose of
recording execution of INSTRUCTION application 200 and
receiving notification of the completion of a command.
INSTRUCTION application 200 interacts with a user
concurrent with the execution of application 100, for
instance by displaying dialog boxes such as a window 302.
INSTRUCTION application 200 also may communicate by use
of other means such as voice.
Figures 10-17 illustrate a brief CBT session. In
the session a user is instructed on how to open a folder
"Fred" represented by an icon 301, shown in Figure 10.
Also shown in Figure 10 is an icon 304 which represents
agent engine 108. An icon 309 represents an object
"Lesson Task" which includes the compiled pcode version
of the task language file, shown in Table 1 below. The
compiled pcode version of the task language file is run
by agent engine 108. If object Lesson Task is opened
before compilation, the source code for the pcode version
of the task language: file may be edited.

An icon 305 represents an object called "Lesson
Instruction", which includes data called conversational
data, and INSTRUCTION application 200 which contains
library routines which display the data to a user.
Object "Lesson Instruction" displays the data when
instructed to do so by agent engine 108. A cursor 303
controlled by mouse 20 is shown pointing to icon 309.
With the cursor 303 over icon 309, a user may select
object "Lesson Task" by pressing button 27. At this
point, icon 309 will be highlighted, as shown in Figure
11. A shadow 307 of icon 309 will follow cursor 303.
When shadow 307 is placed over icon 305 and button 27 is
released icon 309 will disappear from window 300, as
shown in Figure 12, and agent engine 108 will begin to run
the task language program included in object "Lesson
Task". An example of source for the task language
program included in Lesson Task is given in Table 1
below:

Table 1
1   task
2       cbt on
3       OPEN# = 1
4       SELECT# = 211
5       focus on office "NewWave Office"
6           select instruction "Lesson Instruction"
7           open
8       focus on instruction "Lesson Instruction"
9           show_window 1
10      on command do process_button
11      button_flag# = 0
12      set command on
13      while button_flag# = 0
14          wait
15      endwhile
16      set command off
17      hide_window 1
18      show_window 2
19      on command do process_open
20      open_flag# = 0
21      set command on
22      while open_flag# = 0
23          wait
24      endwhile
25      set command off
26      hide_window 2
27      show_window 3
28      on command do process_button
29      button_flag# = 0
30      set command on
31      while button_flag# = 0
32          wait
33      endwhile
34      set command off
35      hide_window 3
36  end task
37
38  procedure process_button
39      if sys_cmdclass() = "INSTRUCTION"
40          button_flag# = 1
41      endif
42      ignore
43  endproc
44
45  procedure process_open
46      if sys_cmdclass() = "INSTRUCTION"
47          do demo
48          open_flag# = 1
49          ignore
50      else
51          cmd# = sys_command()
52          if cmd# = SELECT#
53              class# = sys_commandparm(1,0)
54              title# = sys_commandparm(1,len(class#))
55              execute
56          else
57              if cmd# = OPEN# and class# = "FOLDER" and title# = "Fred"
58                  open_flag# = 1
59                  execute
60              else
61                  ignore
62              endif
63          endif
64      endif
65  endproc
66
67
68  procedure demo
69      focus on office "NewWave Office"
70      object_region# = where_is("FOLDER", "Fred")
71      point to center (object_region#)
72      double click left button
73      pause 5
74  endproc
The task language in Table 1 is compiled by task
language compiler 120, using class independent parser
122, a class dependent parser for object "NewWave
Office", a class dependent parser for object "Lesson
Instruction" and a CBT parser 125. For instance the
"focus" command in line 5 is parsed by class independent
parser 122, the "select instruction" command in line 6 is
parsed by the class dependent parser for object "NewWave
Office", the command "show window" in line 9 is parsed by
the class dependent parser for object "Lesson
Instruction" and the command "point to center" in line 71
is parsed by CBT parser 125.
Line 1 of the code in Table 1 contains the word
"task" because the first line of every task language
program contains the instruction "task". Similarly line
36 of the code in Table 1 contains the word "end task"
indicating the last instruction in the task program.
The instruction in line 2 turns the CBT compiler
flag "on". The instructions in lines 3 and 4 set
variables. The instruction in line 5 places the focus on
object "NewWave Office". The instructions in lines 6 and
7 will be sent from agent engine 108 to object "NewWave
Office". These instructions when executed by object
"NewWave Office" will cause the object "Lesson
Instruction" to be selected and opened. When object
Lesson Instruction is opened, it runs INSTRUCTION
application 200. The selection of "Lesson Instruction"
is indicated by icon 305 being highlighted as shown in
Figure 13.

The instruction in line 8 places the focus on the
object "Lesson Instruction". The instruction in line 9,
when executed by agent 108, will be sent by agent 108 to
object "Lesson Instruction". When executed by object
"Lesson Instruction" the instruction will cause window
302 to be opened on top of window 300 as shown in Figure
14. Window 302 instructs the user as to how to open an
object. When the user is done reading window 302, he
places cursor 303 over a button 308, using mouse 20, and
clicks button 27. Essentially, then, agent engine 108
waits for the user to select button 308, which is
labelled "Continue". Agent engine 108 will cause every
other command to be ignored. Agent engine 108 is able to
do this by monitoring the applications executed by other
running objects, and intercepting commands before they
are executed. The code required for this is described
below.
The instructions in lines 10-14 are executed by agent
engine 108 while the user reads window 302. The
instruction in line 10 defines a monitoring procedure.
In line 11, the variable "button_flag#" is cleared to
"0". In line 12 the instruction "set command on" turns
monitoring on. When monitoring is turned on, the
procedure "process_button" will be performed upon the
user performing any command. This corresponds to any
command being sent along a data path between an action
processor and a command processor in an application,
e.g., along data path 114 between action processor 101
and command processor 102 of application 100. This is
called a command trap. The command trap is able to be
generated because agent engine 108 is able to monitor
"NewWave Office" application 100 and INSTRUCTION
application 200. When agent engine produces a command
trap, commands sent from action processor 101 to command
processor 102, and commands sent from action processor
201 to command processor 202 are intercepted by agent
engine 108. Thus both object "NewWave Office" and

"Lesson 1" are: monitored by agent engine 108. Semantic
commands from both objects are intercepted before
execution and result in a command trap.
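The command trap can be pictured with the following illustrative Python sketch; it is a model of the behaviour described above, not the patent's implementation, and the names and return values are invented.

    # Hypothetical model of a command trap set by "set command on".

    class AgentEngine:
        def __init__(self):
            self.monitoring = False
            self.monitor_proc = None          # e.g. process_button or process_open

        def on_command(self, proc):
            self.monitor_proc = proc

        def trap(self, cmd_class, command, execute):
            """Called on the path between an action processor and its command
            processor; the monitor procedure decides execute versus ignore."""
            if self.monitoring and self.monitor_proc is not None:
                if self.monitor_proc(cmd_class, command) == "ignore":
                    return None               # command never reaches the command processor
            return execute(command)

    if __name__ == "__main__":
        engine = AgentEngine()
        button_flag = []

        def process_button(cmd_class, command):
            if cmd_class == "INSTRUCTION":
                button_flag.append(1)         # stand-in for button_flag# = 1
            return "ignore"                   # the trapped command itself is discarded

        engine.on_command(process_button)
        engine.monitoring = True              # "set command on"
        engine.trap("OFFICE", "SELECT", execute=print)         # ignored, flag unchanged
        engine.trap("INSTRUCTION", "CONTINUE", execute=print)  # ignored, flag now set
        print(bool(button_flag))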
The procedure "process button" is shown in lines 38-
43 of Table 1. The function "sys cmdclass()" returns the
class string of the object who received the command from
the user. In line 39, the object is INSTRUCTION
application 2Ci0, that is, if the procedure process button
is called becamse the user has placed cursor 303 over
button 308 andl clicked button 27, then function
"sys cmdclass()" will return the string "INSTRUCTION" and
the variable "button_flag#" shall be sent to "1" in line
40. On the other hand, in any other object receives a
command from the user--eg., if the user places cursor 303
in window 300 and clicks button 27--the class string of
the object shall be returned and shall not be equal to
"INSTRUCTION" and tree variable "button-flag#" will not be
set to "1".
In line 42, the "ignore" command indicates that
regardless of which object (i.e., whether object "NewWave
Office" or "Lesson 1") returned the command from the
user, the command itself is to be ignored. That is,
whether the command came from "NewWave Office"
application 100 or INSTRUCTION application 200, the
command is not returned to command processor 102 or
command processor 202 for further processing. Instead, a
NULL command is returned by setting the command to
API_NO_CMD.
After turning monitoring on, in line 12, agent
engine executes the instruction in line 13 and enters a
while loop (instructions 13-15) waiting for a user to
select button 308 with cursor 303. While in this loop,
when any command is generated in an application,
procedure "process_button" is run. Upon completion of
procedure "process_button", if button_flag# = 0, agent
engine remains in the loop defined by the instructions in
lines 13-15. If button_flag# = 1, agent engine 108

continues execution of the program with the instruction
in line 16.
In line 16, monitoring is turned "off". The
instructions in lines 17 and 18 are sent by agent engine
108 to object "Lesson Instruction" for execution by
INSTRUCTION application 200. The instructions when
executed cause INSTRUCTION application 200 to remove
window 302 from display 14 and to display a window 311 as
shown in Figure 15. The instruction in line 19, executed
by agent engine 108, redefines the monitor procedure to
be the procedure "process_open".
In line 20, the variable "open_flag#" is set to "0".
In line 21 the instruction "set command on" turns the
monitoring "on", resulting in agent engine 108
intercepting commands before they are executed by any
object running on the system.
The instructions in lines 22-24 instruct agent
engine 108 to wait until variable "open_flag#" is set to
"1" in procedure "process_open" before proceeding with
execution of the program. When a command is generated in
an application, as a result of user action, procedure
"process_open" is run. Upon completion, if open_flag# =
0, agent engine remains in the loop defined by the
instructions in lines 22-24. If open_flag# = 1, agent
engine 108 continues execution of the program with the
instruction in line 25.
The procedure "process_open" is given in lines 45-
65. If, in line 46, function "sys_cmdclass()" returns
"INSTRUCTION", this indicates that it was INSTRUCTION
application 200 that was attempting to execute a command.
This means that the user has selected button 314
requesting a demonstration. Therefore, in line 47, the
procedure "demo" is called. In line 48, variable
"open_flag#" is set to 1. In line 49, agent engine 108
is instructed to ignore the command that is being
monitored.

The proce:dure "demo" given in lines 68-74 shows a
user how folds:r "Fred" may be opened. The instruction in
line 69 places the focus on object "NewWave Office". The
interrogation function "where_is("FOLDER", "Fred")" asks
the object "Ne:wWave Office" where in its display, i.e.,
window 300, is. the (OLDER Fred. When agent engine 108
executes this instruction, agent engine sends an
API_INTERROGATE_MSG message to object "NewWave Office"
which responds with coordinates of the location of FOLDER
Fred in window 300.
In the instruction in line 70, variable
"object_region#" is set to the value returned by the function
"where_is("FOLDER", "Fred")". The instructions in line
71 and 72 produce a sequence of user action level
messages to be sent, through user action interface, to
object "NewWave Office" for execution. The instruction
in line 71 causes cursor 303 to move to point to the
center of folder Fred, as shown in Figure 16. In Figure
16 the motion of cursor 303 is represented by cursor 303
being shown at the starting point and the ending point of
a path of movement 310. In line 72, a user action
equivalent to a double click on button 27 of mouse 20 is
sent to object "NewWave Office" to be processed. In line
73, a five second pause allows a user a chance to reflect
over what he has seen.
In general, there are three types of interrogation
functions. The function "where_is()" has as parameters
the identification of an object class and title, and asks
for the rectangular regions within the display on monitor
14 of an icon representing the object. The function
"whats at()" has as a parameter a point on the display
and asks for the identity of any object that is
represented by an icon at that point. The function
"status()" has as a parameter the identification of an
object and asks for the status of the object, e.g., does
the object have focus, or is there an open window for the
object. Instructions for each of these functions, when

executed by agent engine 108, result in an
API_INTERROGATE_MSG message being sent from agent engine
108 requesting the information from the object which has
focus.
The use of interrogation functions allows great
flexibility for demo programs. Using an interrogation
function a demo program is able to locate an object,
determine what objects reside at particular locations and
to determine the status of an object. Using these
interrogation functions allows a demo program to execute
a demo even though the demo program does not initially
know the location, identity and/or status of objects used
in the demo.
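As an illustration only, the three interrogation functions can be modelled in Python as below; the display data, coordinates and message shape are invented, and only the function names (where_is, whats_at, status) and the API_INTERROGATE_MSG message come from the text.

    # Hypothetical model of where_is(), whats_at() and status() interrogation.

    OFFICE_DISPLAY = {
        ("FOLDER", "Fred"): {"region": (40, 60, 72, 92), "open": False, "focus": False},
    }

    def api_interrogate(query):
        """Stand-in for an application answering an interrogation message."""
        if query["kind"] == "where_is":
            return OFFICE_DISPLAY[(query["class"], query["title"])]["region"]
        if query["kind"] == "whats_at":
            x, y = query["point"]
            for (cls, title), info in OFFICE_DISPLAY.items():
                left, top, right, bottom = info["region"]
                if left <= x <= right and top <= y <= bottom:
                    return (cls, title)
            return None
        if query["kind"] == "status":
            return OFFICE_DISPLAY[(query["class"], query["title"])]
        raise ValueError(query["kind"])

    if __name__ == "__main__":
        print(api_interrogate({"kind": "where_is", "class": "FOLDER", "title": "Fred"}))
        print(api_interrogate({"kind": "whats_at", "point": (50, 70)}))
        print(api_interrogate({"kind": "status", "class": "FOLDER", "title": "Fred"}))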
If a user attempts to open folder Fred rather than
receive a demo, the instructions in lines 51-64 of
"process_open" will cause agent engine 108 to monitor
commands generated by "NewWave Office" application 100
before allowing execution of the commands. A command
generated by the user is intercepted when it is sent from
action processor 101 to command processor 102.
The instructions in lines 51-64 allow a user to open
folder Fred in a number of ways. For instance, the user
may place cursor 303 over folder Fred and double click
button 27 or the user may select folder Fred and select
the command "open" from a pull down menu. Regardless of
the way the user chooses to open folder "Fred", two
commands must occur in sequence. First, the select
command must be returned to command processor 102 with
the user having selected folder "Fred". When this
occurs, class# is assigned the value "FOLDER" in line 53,
title# is assigned "FRED" in line 54, and the select
command is executed in line 55. Second, an open command
must be returned to command processor 102 for execution.
When an open command is returned and if class# = FOLDER
and title# = FRED, then the open_flag# will be set to 1
in line 58 and the open command will be executed in line
59.

All other commands are ignored by agent engine 108.
When the user is successful in opening folder "Fred",
window 319 will appear as shown in Figure 17.
In line 25, monitoring is turned off. The
instructions in lines 26 and 27 are sent by agent engine
108 to object "Lesson Instruction" for execution. Upon
execution, window 311 is removed from display 14 and a
window 316 appears as shown in Figure 17. The
instruction in line 28 redefines the monitor procedure to
be procedure "process_button" so that procedure
"process_button" is performed upon the user performing
any command, but only when monitoring is on.
In line 29, the variable "button_flag#" is set to
"0". In line 30, the instruction "set command on"
results in monitoring being turned on so that a command
trap is produced at the generation of a command in any
application.
Agent 108 loops through the instructions in lines
31-33, as long as button_flag# remains at 0. At
instruction 32, agent engine waits for a command to be
intercepted from any application. When a command is
intercepted from INSTRUCTION application 200,
button_flag# is set to 1 and agent engine 108 escapes the
loop and proceeds with the execution of the instruction
in line 34.
In line 34, monitoring is turned off. The
instruction in line 35 is sent by agent engine 108 to
object "Lesson Instruction" for execution. Upon
execution, window 316 is removed from display 14.
Appendix A contains an Introduction to API 130
(Programmer's Guide Chapter 4).
Appendix B contains guidelines for developing agent
task language. (Agent Task Language Guidelines).
Appendix C contains a description of Task Language
Internals.

Appendix D contains a description of
API_INTERROGATE_MSG.
Appendix E contains a paper entitled "Extensible
Agent Task Language"
Appendix F contains a Hewlett Packard internal
document entitled: "Tools and Strategies for
Implementing Computer-Based Training in the New Wave".
Appendix G contains an article to appear in the HP
Journal entitled "Computer-Based Training Facility
("Cousteau")".

APPENDIX A
Introduction to the API
Overview of the API

What is the API and what does it do for you?

The Application Program Interface (API) enables you as an application
developer to readily incorporate advanced features into your software. Where
user interfaces allow a user to communicate with applications, a program
interface allows an application program to interface with other facilities on the
system. Specifically, the API serves as an interface between your application
and these three facilities, as shown in Figure 4-1:

  • The Agent, which provides task automation;
  • The Help Facility, used for comprehensive on-line assistance; and
  • Computer Based Training (CBT), which enables built-in product training
    for your software users.

The API provides you with a set of functions and messages for accessing these
facilities. The API is readily implemented by structuring your application code
appropriately to include "hooks" for accessing the API. To make it easier to
incorporate the API, segments of code for using the API, referred to as
components, have been worked out and are supplied for you. You can plug
these components directly into your application and need only make sure that
the variable names match.
[Figure 4-1. The API: interface between the Application and the API Facilities
(the Agent with Task Language, the Help Facility with Help Text, and CBT with
Lessons).]

Design Philosophy for API Applications

Designing an application that uses the API facilities is no more difficult than
developing any other application with comparable features. By keeping the
design philosophy in mind, it should, in fact, be easier because of the tools that
are provided. There are four basic things you must do when designing an API
application:

  • Define a set of commands to describe the functions a user performs in your
    application. This is called the Task Language.
  • Separate the interpretation of user actions from the actual execution of the
    commands derived from those actions.
  • Structure your application so that it has the following categories of support:
    1. playback of tasks;
    2. the recording of tasks;
    3. interrogation, that is, for help, Agent, or CBT requests;
    4. monitoring of user actions and commands for computer based training;
    5. error handling within tasks.
  • Incorporate function calls into your code for interfacing with the API.

Chapter Organization

The chapter is divided into the following sections:

  • The API Facilities
  • The API Modes
  • Messages to Your Application
  • The Task Language
  • The API Architecture
  • Message Handling in API Applications
  • How Your Application Responds to the Different Modes
  • API Function Summary
  • API Component Summary

The API Facilities

The concept behind the API is to give your application program the capability
to access the API Facilities. Windows can be thought of as a "message delivery
system", since it delivers all external messages to your application. Continuing
the analogy, the application architecture can be thought of as a "message
routing system" and the API itself is a "message monitoring system". The
architecture (that is, the structure of your application) ensures that messages
get routed to the appropriate part of your code or to external API procedures,
as required. The API monitors messages and their resulting commands at
different interface points within your application.

The Agent

The Agent is the name of the service that provides task automation. In the HP
NewWave environment, a user can record any series of commands and save the
series for later re-use, i.e., playback. A recorded series of commands is referred
to as an Agent Task. The Agent acts on the user's behalf to record or perform
Agent Tasks. The Agent does more than memorize keystrokes; it structures
tasks in terms of the commands in the Task Language that you design for your
application. This means that the result of a user's actions is recorded and not
the intermediate steps that cause the command.

To record a task, the user selects "Start Record" from the API-provided "Task"
menu. As the user goes through the steps in the task, your application must
translate the user actions into commands, execute them, and pass them on to
the Agent via the API for recording. The user indicates the end of the task by
selecting "End Record" from the menu and must then name the task for later
recall.

At the same time as the commands are being recorded, an "Agent Task" is
created. The user can open the task and perform any desired editing. Tasks can
include control statements and interrogation functions that are not possible
through direct recording of commands. A user can also, if desired, create a task
by typing commands directly into the task.

To have the stored task performed, the user selects "Perform Task" from the
menu and specifies the task to be executed. The Agent retrieves each stored
command and passes them back to your application via the API for execution,
attached to Playback messages.
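As a sketch under assumptions (the Agent's storage format and the API calls involved are not shown in this chapter), the round trip just described looks roughly like this in Python: in record mode the application executes each command and also hands it to the Agent; in playback the Agent returns the stored commands one at a time for the application to execute.

    # Hypothetical round trip: record an Agent Task, then play it back.

    class Agent:
        def __init__(self):
            self.tasks = {}

        def record(self, name, commands):
            self.tasks[name] = list(commands)     # "End Record" names the task

        def playback(self, name, execute):
            for command in self.tasks[name]:      # each stored command is passed back
                execute(command)

    if __name__ == "__main__":
        agent = Agent()
        executed = []
        # Record mode: the application translates user actions into commands,
        # executes them, and passes them to the Agent for recording.
        recorded = ['select folder "Fred"', "open"]
        for cmd in recorded:
            executed.append(cmd)
        agent.record("open fred", recorded)
        # Playback mode: the Agent sends each stored command back for execution.
        agent.playback("open fred", execute=executed.append)
        print(executed)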

The Help Facility

With the NewWave Help Facility, you can readily provide your users with
extensive on-line assistance. The API-supplied "Help" menu lets the users
access a list of help topics or receive context-sensitive help by clicking on a
specific item.

Setting up a help text file is straightforward and requires no programming. The
help author can make use of automatic indexing and can enable the user to
jump from topic to topic. Since adding help text requires no additional coding
of the application, the help author can operate independently of the application
developer, for the most part. Help text for new commands can be readily
added.

By selecting "Help Topics" from the menu, the user can browse through an index
of informational topics that you (or a help author) provide for the application.
When the user selects the topic, the API passes the information on to the Help
Facility.

Context-sensitive help refers to a help message that is provided directly for a
single item and should correspond to the user's current situation. For context-
sensitive help, the user selects "Screen Help" from the menu and the application
passes the request on to the API, which informs the Help Facility accordingly.
The cursor is changed into a question mark (?). With the question mark as a
cursor, the user clicks on the item in question and the Help Facility provides
the appropriate help text. Context-sensitive help can be provided for all menu
commands or for specific parts of the application window.

Computer Based Training (CBT)

NewWave provides the building blocks for you (or a lesson author) to create
innovative lessons for your users. You basically set up a lesson (which is the
equivalent of an Agent Task) and instruct the API to "monitor" the user's
responses in the lessons. For example, you can write a task to open a window,
display text to ask the user to create a document, and detect successful
completion of this activity.

Other tools will be available in the future to enable you to include additional
graphics, including animation, in the lessons.
The API Modes

As mentioned earlier, an application that makes use of the API must have the
following categories of support:

1. task playback;
2. the recording of tasks;
3. interrogation, for example, returning context-sensitive help to the Help
   Facility;
4. monitoring of user actions for computer based training;
5. error handling within tasks.

These are accomplished by structuring your application to operate according to
the current mode in effect. There are five modes of operation for the API:

1. Playback Mode
2. Record Mode
3. Intercept Mode (Help)
4. Monitor Mode (CBT)
5. Error Mode

Playback refers to the execution of an Agent Task by your application working
in tandem with the API. In Playback Mode, your application is sent commands
from the Agent Task and must execute them.

Record is the mode in which the user performs actions and your application
translates them into commands to be stored in an Agent Task.

Intercept is set while the user requests context-sensitive help. For example,
suppose the user wants more information concerning an item on the bottom left
corner of your window. The user selects "Screen Help", then moves the
"question mark" cursor to the item and clicks the mouse.

Monitor Mode is used by the Computer Based Training (CBT) facility. In
Monitor Mode, the commands a user generates are passed to CBT for approval
before the command is executed. Thus, the CBT task can prevent the user from
performing unwanted commands.

Error Mode is used to pass errors inside your application to the Agent instead of
the user. It is tied to the "ON ERROR DO.." statement, which is a class
independent command in the Task Language. In this mode, the task is notified
about an error rather than having the error displayed in an Error Message Box.

The modes are set by "MODE_ON_FLAGS" that are passed in the
API_SET_MODE_FLAGS_MSG message.
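A minimal sketch of how an application might react to that message is shown
below; it assumes the new flag value arrives in lParam of the
API_SET_MODE_FLAGS_MSG and is simply stored in the global APIModeFlags
variable used by the mode test macros described later in this chapter. The exact
packaging of the flags should be taken from the Programmer Reference Manual.

    /* Sketch only: remember the mode flags delivered with
       API_SET_MODE_FLAGS_MSG so the mode test macros (APIRecordOn,
       APIPlaybackOn, ...) reflect the current state.  Treating lParam as
       the new flag value is an assumption.                               */
    case API_SET_MODE_FLAGS_MSG:
        APIModeFlags = (DWORD)lParam;   /* modes now in effect            */
        break;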
Messages to Your Application

In order to understand how your application interacts with the API, consider
Figure 4-2. All interaction between your application and the user and the rest
of the environment comes to your application in the form of messages from the
Windows messaging system. The messages may be the result of activity within
Windows itself, the OMF, the user, or the API.

Windows housekeeping messages are used for general activities such as painting
windows. To be a good Windows citizen you must process these messages
according to Windows guidelines.

OMF messages are used for object management tasks, such as CREATE_OMF,
OPEN, or TERMINATE. Your application can respond to other objects by
processing these messages.

User action messages are mouse movements, mouse clicks, menu selections,
keystrokes, etc. The user does something to cause Windows to send you these
messages. Some of these messages are meaningful to your application, but not
necessarily all of them.

Processing API Messages

All API messages are generated as a result of your application calling API
functions. API messages may be the result of a user request, such as selecting
"Record a Task"; an Agent request, such as requesting the playback of an agent
task involving the application; or a Help request, such as requesting help
information on a certain location within the application window.

Note that the API sends messages to your application via Windows. Your
application communicates with the API by calling an API function. The API
can send your application three types of messages:

- API_PLAYBACK_MSG,
- API_SET_MODE_FLAGS_MSG,
- API_INTERROGATE_MSG.

The API_PLAYBACK_MSG supplies the command to the application prior to
execution by the application.

API_SET_MODE_FLAGS_MSG changes the mode bits in the APIModeFlags
in order to change the flow of control within your application.

API_INTERROGATE_MSG enables Help, the Agent or CBT to request
certain information from the application, for example context-sensitive help.
Some types of requests are common to all object classes, while other types of
interrogation are specific to a single object class. The common requests are
called Class Independent Interrogation Functions. Currently these include
context-sensitive help requests, where the application returns a help context
number to allow the help system to display the appropriate text.
Figure 4-2. Message Types Received by the Application
Designing a Task Language

What is a Task Language?

In order for your application to interact with the API and the user, it is
necessary for you to design a task language. A task language is the text form of
the commands that describe the application's functionality. The task language
for the HPSHAPE program is shown below. Although simpler than the task
language for most applications, it does demonstrate the basic concepts.

    RECTANGLE
    ELLIPSE
    TRIANGLE
    STAR
    CLEAR
    MINIMIZE
    MAXIMIZE
    RESTORE
    CLOSE

An application's task language is comprised of class dependent commands,
which represent functions that can be performed by your application, such as
TRIANGLE. The Agent provides class independent commands, which are
functions such as flow control, IF, FOR, DO, etc. that are interpreted by the
Agent itself.

Note that many class dependent commands are common across applications,
for example, the CLOSE command. These are still executed by your
application and not the Agent, and so are defined as class dependent.

As discussed earlier, there are two ways of creating tasks:

1. by recording commands generated by the user's actions, and
2. by typing commands directly into an Agent Task.

Tasks are compiled to produce the external form of the command used by the
Agent.

Designing Your Commands

The Agent is only concerned with the commands themselves, not with the way
the user arrives at the commands. Your application must be able to exclude any
non-essential activity by the user in the process of creating a command. It is
possible to arrive at the same command through more than one means. For
example, a mouse click in a menu, a keyboard accelerator, or the keyboard
interface to the menu could all be used to "close" an application, but the Agent
only records that the application was "closed", without recording the
keystrokes.

The Agent simply takes the command as interpreted by your application. To
the Agent, a command is simply a token that is given to it to store in Record
Mode and that it must return when in Playback Mode.
As the application designer, it is up to you to design your task commands. The
task commands are not visible to the user until you implement a compiled
version of your task language. Task command design has many long term
ramifications, so you need to consider your design carefully. Here are some
general guidelines for designing the commands in your Task Language:

1. Eliminate any ambiguity in your commands. You want your users to know
   the exact meaning of the commands.

2. Be sure that your command set is all-inclusive with regard to the
   functionality of your application. Everything that your application can do,
   your task language should be able to replicate.

3. Use the same terms that appear in your pull-down menus. Different
   terminology would be confusing to the user.

4. Use a simple vocabulary. The creation of Agent tasks should not be limited
   to "power users". The broader your base of task users, the more successful
   your application product will be.

Chapter 7, "Agent Task Language", furnishes a complete list of guidelines. If
your application has a similar function to those described in the guidelines, use
the command format recommended.
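As a purely illustrative example of what such a design can look like in code, the
sketch below assigns internal command numbers to the HPSHAPE-style keywords
listed earlier. The enum name and the specific values are assumptions, not part
of HPSHAPE; the point is only that each task-language keyword maps to exactly
one unambiguous command code.

    /* Hypothetical internal command codes for an HPSHAPE-like task language.
       The values are illustrative; Appendix A of the Programmer Reference
       Manual describes the recommended numbering ranges.                   */
    enum ShapeCommand {
        CMD_NO_CMD    = 0,       /* no command generated                    */
        CMD_RECTANGLE = 0x8001,  /* RECTANGLE                               */
        CMD_ELLIPSE   = 0x8002,  /* ELLIPSE                                 */
        CMD_TRIANGLE  = 0x8003,  /* TRIANGLE                                */
        CMD_STAR      = 0x8004,  /* STAR                                    */
        CMD_CLEAR     = 0x8005,  /* CLEAR                                   */
        CMD_MINIMIZE  = 0x8006,  /* MINIMIZE                                */
        CMD_MAXIMIZE  = 0x8007,  /* MAXIMIZE                                */
        CMD_RESTORE   = 0x8008,  /* RESTORE                                 */
        CMD_CLOSE     = 0x8009   /* CLOSE                                   */
    };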
How Does Your Application Use Commands?

At the highest conceptual level, commands are what an application can do. This
may include commands that may be given from pull-down menus, from pressing
buttons in dialog boxes, or from typing or mouse operations in the client area.

Commands may be represented in four ways: as keywords in the Task
Language form; as binary external commands that are stored by a task and
passed to an application for execution; as binary internal commands that are
used within an application during command execution; and as pcodes, which are
similar to the external form but with a special header used by the Agent. These
are shown in Figure 4-3, "Different Forms of a Command". The transformation
of commands from one form to another is shown in Figure 4-4.

The Task Language command form is the form of the command that is
displayed in the Task window.

The external form is the version that is passed to the API. It consists of binary
data understood by the application only. Although the external form is passed
to the API, it is essentially private to your application and is not interpreted by
the API or any other application. (However, your CBT tasks do need to
interpret the external form.) When an Agent Task is compiled, an external form
command is created from each class dependent command.

Note: The external form contains the command number and the parameters for the
command and must have no run-time dependent data such as memory pointers.
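To make that constraint concrete, here is a minimal sketch of an external
command record for the hypothetical HPSHAPE-style commands above. The
structure name and fields are assumptions for illustration; the real layout is
private to each application, the only firm rules being that it starts from a
command number and carries its parameters by value.

    /* Hypothetical external command layout: a command number followed by
       value parameters only.  No pointers, handles, or other run-time data
       may appear here, because the record is stored in an Agent Task and
       replayed later.                                                       */
    typedef struct {
        unsigned wCmd;        /* command number, e.g. 0x8001 for RECTANGLE   */
        unsigned wSize;       /* a made-up value parameter                   */
    } EXTSHAPECMD;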
The internal form, on the other hand, is used totally within your application.
You can represent these commands internally in whatever form is convenient
for you, as long as you have the necessary processors to translate back and forth
between the internal and external forms.

Note also that it is recommended that you give your internal commands values
that correspond to the ranges in Appendix A of the "HP NewWave
Environment: Programmer Reference Manual". This will enable you to take
advantage of those API macros that are based on the range coding.

The pcode command form refers to the binary version of the external command
form that is stored in the Agent task, along with some instructions to allow the
Agent to replay the command.
Figure 4-3. Different Forms of a Command (the Task Language form "RECTANGLE" shown with its corresponding pcode, external, and internal forms)
Figure 4-4. Transformations of Commands (recording: the internal form is translated to external form and passed to the Recorder; playback: the pcode form from the Agent Engine is translated back to internal form; compiling: the Task Language form is parsed into pcode form)
The Application Architecture

Separating User Actions and Commands

In conventional applications, you generally test to determine which action a
user performed on the screen. Then you usually execute the action in the same
part of your code. When you design an API application, an intermediate step is
required. You must match the user's action to a defined command from the
Task Language and then execute it. The cleanest way to do this in your code is
to have one procedure dedicated to interpretation of user actions into
commands and a second procedure devoted to actually executing the
commands. The procedure that interprets the actions is referred to as the
Action Processor; the procedure that executes the commands is called the
Command Processor.

Figure 4-5 shows how the Action Processor and Command Processor within
your application interact with Windows and the API facilities. The important
concept here is that the Action Processor and Command Processor are major
elements separating the interpretation of actions and the execution of
commands, with calls to the API between the processors. User actions, API
messages, OMF messages, Windows housekeeping, and help requests all come
into your application in the form of messages. They may be intercepted initially
by the API (in the case of help requests or menu selections) or may pass
through to the main body of your application. After the Action Processor,
commands may be intercepted by the API or they may go directly to the
Command Processor.
Figure 4-5. How the Action Processor and Command Processor Interact with Windows and the API
Structuring Your Code

An overview of the components and processors used in your application is
shown in the block diagram in Figure 4-6. As mentioned earlier, in order to
make use of the API, it is necessary for you to structure your code to
accommodate calls to the API. In the block diagram, the rectangular shapes are
used to show components and the square shapes indicate processors. (Note
that there are special meanings for the terms "component" and "processor"
here, as explained below.)

The most important reason to write an application according to the
Architecture is that it guarantees that the application will pass information to
the API at the correct points. It will always generate commands before they are
needed by the Command Processor. It will always set the correct mode before
the application depends on that mode being set. However, it is up to you, the
developer, to choose how much of this architecture you adhere to. Just note
that whether or not you need to deviate from the architecture, your application
must always do API related processing in the order required by the
architecture.

A flow chart showing the recommended architecture is shown in Figure 4-7.
This is the basic application architecture. The flow chart actually represents the
logic that is required in the main window procedure. In subsequent sections,
you can see how the logic applies to the major API modes. These sections take
a closer look at the conversations that take place between your application and
the API. Notice that the order of communication is very important. Certain
events must happen only after other events. The predefined application
architecture ensures that the events take place in the right order.
Figure 4-6. Application Architecture Block Diagram (showing the Action Processor, Modeless Dialog Box Action Processor, Modal Dialog Box Processor, Command Processor, and the Translate to Internal/External Processors)
Figure 4-7. Application Architecture Flow Chart (Receive Message from Windows; User Action Interface Component; Playback Message Test Component routing to the Action Processor or, if playback, the Translate to Internal Processor; Command Interface Component translating to external form if CBT; Modal Dialog Box Test Component and Modal Dialog Box Processor; Command Test Component and Command Processor; Return Interface Component translating the command if recording; Return to WinMain)
Functions and Macros

The smallest building blocks of the Application Architecture are the API
functions and macros. These are used in playing back commands, providing
help to the user, and recording commands after they have been executed. The
functions are direct calls to the API. The macros are tests for modes or other
conditions.

Components

The functions and macros comprise components. The term component refers to
a segment of program code supplied by HP. All you need to do with a
component is copy it verbatim into your program in the appropriate location
and make sure that the variable names match up with the ones you are using.
The logic, including all the appropriate calls, has been worked out for you. A list
of the components currently available and the global variables they use is
provided at the end of this chapter.

For example, there is a component called API Initialization Processing that is
required before other API calls are made. Its implementation in the
HPSHAPE code is as follows:

    if ( !APIInit( (LPAPIHND)&lhAPI, hWnd, lhInst, lhOMF,
                   (LPSTR)szAppHelpFile,
                   (LPSTR)szObjTitle, API_NO_MODE ) ) {
        if ( APIError(lhAPI, API_NO_MODE) >= API_START_API_FATAL_ERR )
            return(CO_ERROR);
    }
    if ( !APIInitMenu(lhAPI, GetMenu(hWnd), API_TASK_MENU | API_HELP_MENU,
                      API_NO_MODE ) ) {
        if ( NoteAPIError(lhAPI) >= API_START_API_FATAL_ERR ) {
            APITerm(lhAPI, hWnd, lhInst, lhOMF, API_NO_MODE);
            return(CO_ERROR);
        }
    }

In short, this initializes the API, gets an API handle, and checks for errors. It
then requests display of the Task and Help menus on the application menu bar,
as well as checking for other errors. This component is explained in more detail
at the end of the chapter. As long as your application uses the same variables as
in this example, you can simply plug this component into your application in the
appropriate place.
Processors

A processor has the same defined purpose within all API applications. However,
the specific functions that a processor performs are unique to the application, so
that you, the developer, are responsible for writing the processor code. There
are four processors required for all API applications:

- Action Processor
- Command Processor
- Translate to Internal Processor
- Translate to External Processor

There are other processors that may be required if you make use of dialog
boxes in your application:

- Modal Dialog Box Processor
- Modeless Dialog Box Action Processor

Modal dialog boxes are displayed when a menu item selects a command that
requires additional parameters from the user. The user must supply the
information or cancel the command before proceeding. Modeless dialog boxes
are also displayed by a menu item but allow the user to continue with other
commands before selecting a button within the modeless dialog box.

The modal dialog box requires an immediate answer and thus can be thought of
as occupying a specific location in your main logic. The modeless dialog box has
a fair degree of independence from the main logic of your application and has
its own window procedure and Action Processor. The logic of the modeless
dialog box must be in the same recommended structure as all NewWave
applications, although it does use the same Command Processor as the rest of
your application rather than having its own Command Processor.

The block diagram in Figure 4-6 shows how a modeless dialog box fits into the
application architecture. Note that there can be multiple modeless dialog boxes
within an application; they fit in the same way as the one shown in the figure.
The Action Processor

The Action Processor is the part of your program that interprets the user's
actions and translates them into internal commands. It takes the following
messages as input:

- User Action Messages
- API Messages
- OMF Messages
- Windows Housekeeping Messages

Its main purpose is to handle the Windows user action messages, derive the
internal command that results from the user's actions, and return that
command to the window procedure where it will eventually be executed. The
Action Processor must also be able to answer API Interrogation Messages, set
modes according to API Set Mode Messages, and take care of OMF and
Windows housekeeping requests.

In interpreting user actions, the Action Processor observes the clicks and
movements of the mouse and waits until a meaningful command has been
generated. The Action Processor is responsible for handling the different ways
in which a user can build the same command and for generating an appropriate
command that can be understood by the Command Processor. In HPSHAPE,
for example, when the user chooses a menu item, such as "Triangle" from the
Edit Menu, a command is generated. The Action Processor then builds the
command "New Shape Triangle".

The Action Processor is a giant switch statement based on messages of all types
(including messages from Windows, OMF, and API that are not related to user
actions). Some messages will result in a command being generated; others will
be dealt with entirely in the Action Processor (such as WM_PAINT). A typical
Action Processor could have the form shown below.

    switch (message) {
        case WM_PAINT:
            { ... }
        case WM_COMMAND:
            { ... }
        case WM_CLOSE:
            { ... }
        /* ... other Windows and OMF messages ... */
        case API_INTERROGATE_MSG:
            { ... }
        case API_SET_MODE_FLAGS_MSG:
            { ... }
        default:
            { ... }
    }
The Command Processor

The purpose of the Command Processor is to execute internal commands that
are passed to it. The only input to the Command Processor is a command in
the internal format, and the only return value is an error if the command
cannot be executed. The Command Processor is not concerned with the source
of the commands; its only purpose is to execute them. It handles all possible
functions that a user might wish to perform. The Command Processor is a giant
switch statement based on the task language command set. The format of the
Command Processor in HPSHAPE is shown below:

    switch (applCommand) {
        case MINIMIZE_WINDOW:
            { }
        case MAXIMIZE_WINDOW:
            { }
        case RESTORE_WINDOW:
            { }
        case CLOSE_WINDOW:
            { }
        case NEW_SHAPE:
            { }
        default:
            { }
    }

The Translate to Internal Processor

The Translate to Internal Processor translates the external format of the
command (used by the API) to the internal format (used by your application).
Note that in some cases, the internal format may be the same as the external
format.

The Translate to External Processor

The Translate to External Processor is responsible for translating a command
from internal format to external format. It takes an internal command as input
and returns an external command. The command that the Action Processor
builds may have internal information such as pointers, array indices, etc. For
the API to record the command it must be in external format. All parameters
for the external format of the command must be value rather than reference
parameters.
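The following sketch shows what such a processor might look like for the
hypothetical shape commands used earlier. The structure names and fields
(INTSHAPECMD, EXTSHAPECMD, SHAPEDATA) are assumptions for illustration
only; the essential point is that reference data, here a pointer to a shape
record, is flattened into plain values before the command leaves the
application.

    /* Hypothetical external form: command number plus value parameters.    */
    typedef struct { unsigned wCmd; unsigned wSize; } EXTSHAPECMD;

    /* Hypothetical internal form: may carry run-time data such as a pointer
       to the application's own shape record.                               */
    typedef struct { int cx; int cy; } SHAPEDATA;
    typedef struct {
        unsigned   wCmd;      /* internal command number                    */
        SHAPEDATA *pShape;    /* pointer valid only inside the application  */
    } INTSHAPECMD;

    /* Translate an internal command to its external (recordable) form by
       copying values out of the referenced data.                           */
    void TranslateToExternal(const INTSHAPECMD *pInt, EXTSHAPECMD *pExt)
    {
        pExt->wCmd  = pInt->wCmd;                           /* command no.  */
        pExt->wSize = pInt->pShape ? (unsigned)pInt->pShape->cx : 0;
    }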
Modal Dialog Box Processor
The Modal ,Dialog Box Processor is called whenever a command requiring an
immediate user response into a modal dialog box has been generated. Note
than there a:re two commands involved with a call to the Modal Dialog Box
Pnxessor:
1,. a dialog command causing the modal dialog box to be displayed
2, a command generated by the,user'a response to the modal dialog box.
The Modal ;Dialog Box P'rooessor is used to display the appropriate modal
dialog box and derive a command to be returned based on the user's response.
'Ih~e user fills in the modal dialog box and presses a button. This forms a
new
command with user's entry as a data parameter. The Command Processor then
executes the command. The input to the Modal Dialog Box Processor is the
cornmand requesting the modal dialog box, and the return value is the new
command derived from the user's response.
Note that in some cases you may want to use the modal dialog box to determine
a parameter when recording. In playback, the modal dialog box does not need
to !x axessed since the parameter is already known. In other cases) you may
want to enter an Agent Task command to put up a modal dialog box to prompt
for user input during playback.
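A minimal sketch of such a processor follows, assuming a hypothetical
CMD_ASK_SIZE dialog command that asks the user for a size and then produces a
CMD_SET_SIZE command carrying the entered value. The command names, the
AskUserForSize helper, and the simplified internal command structure are all
invented for illustration; only the overall shape (dialog command in, derived
command out, or API_NO_CMD on cancel) follows the description above.

    #include <windows.h>

    /* Names below are invented for this sketch. */
    #define API_NO_CMD    0
    #define CMD_ASK_SIZE  0x8010   /* dialog command: prompt for a size     */
    #define CMD_SET_SIZE  0x8011   /* derived command: apply the size       */

    typedef struct { unsigned wCmd; unsigned wSize; } INTSHAPECMD;

    BOOL AskUserForSize(HWND hWnd, unsigned *pSize);    /* shows the dialog */

    /* Display the dialog for the incoming dialog command and derive the
       command to execute from the user's response.                         */
    void ModalDialogBoxProcessor(HWND hWnd, INTSHAPECMD *pIntCmd)
    {
        unsigned size;

        if (pIntCmd->wCmd != CMD_ASK_SIZE)
            return;                                 /* not a dialog command */

        if (AskUserForSize(hWnd, &size)) {
            pIntCmd->wCmd  = CMD_SET_SIZE;          /* derived command      */
            pIntCmd->wSize = size;                  /* user's entry as data */
        } else {
            pIntCmd->wCmd = API_NO_CMD;             /* user cancelled       */
        }
    }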
Modeless Dialog Box Processing

The window procedure for modeless dialog boxes is structured much the same
as the main window procedure, with its own Action Processor. It calls the same
Command Processor as the main window procedure. Commands are generated
by the modeless dialog box when a button is selected, or some other operation
changes the state of the application.
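A rough sketch of such a dialog procedure is shown below. The procedure name,
control ID, and command names are invented; the point is that the dialog has a
small Action Processor of its own but hands any resulting command to the same
Command Processor used by the main window. The API initialization and
interface calls that a real NewWave modeless dialog also needs (such as
APIModelessDlgInit) are omitted from this sketch.

    #include <windows.h>

    /* Invented names for illustration only. */
    #define API_NO_CMD     0
    #define CMD_CLEAR      0x8005
    #define IDD_CLEAR_BTN  101

    typedef struct { unsigned wCmd; } INTSHAPECMD;
    void CommandProcessor(HWND hWnd, INTSHAPECMD *pCmd);   /* shared        */

    /* Modeless dialog procedure: interprets its own user actions and passes
       any derived command to the shared Command Processor.                 */
    BOOL FAR PASCAL ClearDlgProc(HWND hDlg, unsigned message,
                                 WORD wParam, LONG lParam)
    {
        INTSHAPECMD intCmd;
        intCmd.wCmd = API_NO_CMD;
        (void)lParam;

        switch (message) {
        case WM_COMMAND:
            if (wParam == IDD_CLEAR_BTN)      /* button forms a command     */
                intCmd.wCmd = CMD_CLEAR;
            break;
        default:
            return FALSE;                     /* not handled here           */
        }

        if (intCmd.wCmd != API_NO_CMD)
            CommandProcessor(GetParent(hDlg), &intCmd);
        return TRUE;
    }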
Message Handling in API Applications

The Window Procedure Components

All API applications follow the same basic logic in processing messages. The
logic is shown in the flow chart in Figure 4-7. The window procedure in the
HPLAYOUT sample application consists of the following API components:

- User Action Interface Component
- Playback Message Test Component
- Command Interface Component
- Modal Dialog Box Test Component
- Command Test Component
- Return Interface Component

A typical window procedure is shown on the pages that follow. Note that in
your application, there will be some variations. You may use different variable
names. You might pull pieces of the code out into separate procedures. The
important point is that you maintain the same basic structure with the hooks
into the API.

The following window procedure example comes from the sample application
HPLAYOUT.
    /*******************************************************************/
    /* LayoutWndProc                                                   */
    /*                                                                 */
    /* Main procedure to handle all messages.                          */
    /*******************************************************************/
    long FAR PASCAL LayoutWndProc(hWnd, message, wParam, lParam)
    HWND     hWnd;
    unsigned message;
    WORD     wParam;
    LONG     lParam;
    {
        APICMDSTRUCT extCmd;
        INTCMDSTRUCT intCmd;
        APIERRTYPE   applRtn;

        /* Pre-filter call (identifier partly illegible in the original
           listing); returns immediately if the message was fully handled. */
        if ( !LocateFilter(hWnd, message, wParam, lParam, &applRtn) )
            return(applRtn);

        /************* USER ACTION INTERFACE COMPONENT ****************/
        if ( APIInterceptOn(APIModeFlags) || APIHaveMenu(message, wParam) )
            APIUserActionInterface( hAPI, hWnd, (LPAPIUNSIGNED)&message,
                                    wParam, lParam, API_NO_MODE);
        applRtn = (APIERRTYPE)0;
        if ( APIHaveMessage( message )) {
            intCmd.wCmd = API_NO_CMD;
        /********* END OF USER ACTION INTERFACE COMPONENT *************/

        /************* PLAYBACK MESSAGE TEST COMPONENT ****************/
            if (APIPlaybackMsg(message))
                TranslateToInternalProcessor(message, wParam, lParam, &intCmd);
            else
                ActionProcessor(hWnd, message, wParam, lParam, &intCmd,
                                &applRtn);
            if (APIHaveCommand(intCmd.wCmd)) {
                applErr = API_NO_ERR;
        /********* END OF PLAYBACK MESSAGE TEST COMPONENT *************/

        /************* COMMAND INTERFACE COMPONENT ********************/
                if (APIMonitorOn(APIModeFlags)) {
                    TranslateToExternalProcessor(&intCmd, &extCmd);
                    APICommandInterface( hAPI, (LPAPICMDSTRUCT)&extCmd,
                                         API_NO_MODE );
                    if (extCmd.wCmd == API_NO_CMD)
                        intCmd.wCmd = API_NO_CMD;
                }
        /********* END OF COMMAND INTERFACE COMPONENT *****************/
        /* ... the remaining components of the window procedure (Modal
           Dialog Box Test, Command Test, and Return Interface) follow
           here; they are shown individually later in this chapter ...    */
In the following paragraphs, the window procedure is broken down into its
components, and the API functions that comprise the components are
described. As mentioned earlier, the application architecture can be thought of
as a message routing system. The descriptions indicate the messages and
commands flowing in and out of each component.

A summary of all API functions and macros is provided at the end of this
chapter. (Refer to Chapter 2 of the "Programmer Reference Manual" for
detailed descriptions of the API functions and macros, Chapter 5 for
descriptions of the messages, and Appendix A for supplemental API
information.)

User Action Interface Component

The purpose of the User Action Interface Component is to test whether the user
is accessing one of the API facilities. If the user is requesting Help for an item
in a pull-down menu, then the rest of the window procedure is bypassed and the
application is set into Intercept Mode. For example, if the user has selected one
of the API menu items or is using context-sensitive help, the API deals with the
message and no further action from the application is required.

Note: The API reserves certain values to provide help for the system menu and
non-client areas, and for the API menus. Thus, the application must not use
menu item IDs less than 100 or greater than 0xDFFF.

All incoming messages pass through the User Action Interface Component on
to the Playback Message Test Component, except for the menu selection and
context-sensitive help messages, which are handled by the User Action
Interface. The code for the User Action Interface Component follows:

    /************* USER ACTION INTERFACE COMPONENT ****************/
    if (APIInterceptOn(APIModeFlags) || APIHaveMenu(message, wParam))
        APIUserActionInterface(hAPI, hWnd, (LPAPIUNSIGNED)&message,
                               wParam, lParam, API_NO_MODE);
    applRtn = (APIERRTYPE)0L;
    if (APIHaveMessage(message)) {
        intCmd.wCmd = API_NO_CMD;
        applErr = API_NO_ERR;
    /********* END OF USER ACTION INTERFACE COMPONENT *************/

where APIInterceptOn tests to see if the API is set to intercept all messages to
the application;

APIHaveMenu tests to see if the user just made a selection from any of the API
menus (Help or Task);

APIUserActionInterface passes the message on to the API, if either test is
passed. The API may process the message if it is a context-sensitive help or
API menu selection message;

APIHaveMessage tests to see if the API processed the message. If this is the
case (that is, the message is set to zero), then there is no need for your
application to see the message and the program will return from the window
procedure.

Playback Message Test Component

The Playback Message Test Component comes next in the window procedure
and is used to test if the message to be processed is a playback message. If it is
a playback message, then it is passed to the Translate to Internal Processor,
where the attached handle is used to retrieve the command, and the command
is translated to internal format so that it can be executed. If it is not a playback
message, it is passed to the Action Processor. After one of the two processors
has been called, there is a test made to see if a command (internal format) has
been generated. If there was a Windows housekeeping or OMF message, then
it may have been fully processed within the Action Processor and the command
variable will be null, indicating that no further processing is required. The code
for the Playback Message Test Component follows:

    /************* PLAYBACK MESSAGE TEST COMPONENT ****************/
    if (APIPlaybackMsg(message))
        TranslateToInternalProcessor(message, wParam, lParam, &intCmd);
    else
        ActionProcessor(hWnd, message, wParam, lParam, &intCmd,
                        &applRtn);
    if (APIHaveCommand(intCmd.wCmd)) {
        applErr = API_NO_ERR;
    /********* END OF PLAYBACK MESSAGE TEST COMPONENT *************/

where APIPlaybackMsg is the test to see if it is a playback message;

TranslateToInternalProcessor is the call to the application's Translate to
Internal Processor;

ActionProcessor is the call to the application's Action Processor; and

APIHaveCommand tests to see if a command is ready for processing by the
application's Command Processor.
Command Interface Component

The purpose of the Command Interface Component is to pass the external form
of the command to the Agent. It receives an internal command as input. If the
application is in Monitor Mode, the internal form is passed to the Translate to
External Processor, and then the returned external command is passed to the
Agent via the APICommandInterface function. The Command Interface
Component uses the following code:

    /************* COMMAND INTERFACE COMPONENT ********************/
    if (APIMonitorOn(APIModeFlags)) {
        TranslateToExternalProcessor(&intCmd, &extCmd);
        APICommandInterface( hAPI, (LPAPICMDSTRUCT)&extCmd,
                             API_NO_MODE );
        if (extCmd.wCmd == API_NO_CMD)
            intCmd.wCmd = API_NO_CMD;
    }
    /********* END OF COMMAND INTERFACE COMPONENT *****************/

where APIMonitorOn is used to test if the application is in Monitor Mode. If
Monitor is on, then the command needs to be translated to external form and
passed on to the Agent;

TranslateToExternalProcessor is the call to the application's Translate to
External Processor; and

APICommandInterface is the function that passes the external form of the
command on to the Agent.

Monitor Mode allows CBT to examine the command before it has been
executed, and to cancel the command if it is not desirable.
Modal Dialog Box Test Component

If the user has performed an action that requires a modal dialog box, then it
may be necessary to provide additional processing of the message, since the
modal dialog box may be used to form a new command based on the user's
response. (IMPORTANT: Note that not all applications require modal dialog
boxes. If your application does not use modal dialog boxes, then you can omit
the Modal Dialog Box Test Component. HPSHAPE, for example, does not use
modal dialog boxes.) The purpose of the Modal Dialog Box Test Component is
to test whether a dialog command has been formed and then to call the Modal
Dialog Box Processor.
The Modal Dialog Box Test Component receives an internal command as input,
which it passes to the Modal Dialog Box Processor if the command requires a
dialog. If so, the same or a different command may be returned, as described
above. The code for the Modal Dialog Box Test Component follows:

    /************* MODAL DIALOG BOX TEST COMPONENT ****************/
    if (APIHaveDialogCommand(intCmd.wCmd))
        ModalDialogBoxProcessor(hWnd, &intCmd);
    /********* END OF MODAL DIALOG BOX TEST COMPONENT *************/

where APIHaveDialogCommand is used to check if a dialog command has been
formed; and

ModalDialogBoxProcessor is the call to the Modal Dialog Box Processor
procedure that you write to handle dialog commands.
Command Test Component

The purpose of the Command Test Component is to test if a command has been
generated and, if so, to pass the command on to the Command Processor for
execution. It receives an internal command as input. If the command is not
API_NO_CMD, the Command Processor executes the command and only
returns a value if there is a problem. The Command Test Component uses the
following code:

    /************* COMMAND TEST COMPONENT *************************/
    if (APIHaveCommand(intCmd.wCmd))
        CommandProcessor( hWnd, message, wParam, lParam, &intCmd,
                          &applErr);
    /********* END OF COMMAND TEST COMPONENT **********************/

where APIHaveCommand tests to see if there is a command. Note that
APIHaveCommand was also used at the end of the Playback Message Test
Component. (It is possible that after a command has been formed in the
Playback Message Test Component, further processing caused the command to
be cancelled, for example, if the user pressed the "Cancel" button in a modal
dialog box.)

CommandProcessor is the call to the Command Processor procedure that you
write to execute commands.
Return Interface Component

The Return Interface is only called if the application is in Playback Mode or
Record Mode. The purpose of the Return Interface Component is to tell the
Agent that the command is complete and has been executed and that the
application is ready for the next command. The Return Interface Component
receives an internal command as input and an error value if there was a
problem. If the application is in Record Mode, then the command must be
translated to its external format and recorded as part of the Agent Task. If
there was a problem, then the error must be passed on to the API via the
APIReturnInterface function. The code for the Return Interface Component is
as follows:

    /************* RETURN INTERFACE COMPONENT *********************/
    if (APIPlaybackOn(APIModeFlags) || APIRecordOn(APIModeFlags)) {
        if (APIRecordOn(APIModeFlags)) {
            TranslateToExternalProcessor(&intCmd, &extCmd);
            APIRecordInterface( hAPI, (LPAPICMDSTRUCT)&extCmd,
                                API_NO_MODE );
        }
        APIReturnInterface( hAPI, applErr, API_NO_MODE );
    }
            }   /* End if (APIHaveCommand) */
        }       /* End if (APIHaveMessage) */
        return(applRtn);
    /********* END OF RETURN INTERFACE COMPONENT ******************/

where APIPlaybackOn is used to check if Playback Mode is on;

APIRecordOn is used twice in the logic to test if Record Mode is on. The first
time it is used in combination with APIPlaybackOn to see if the
APIReturnInterface call is necessary. The second time it is used to see if the
command needs to be translated and recorded.

TranslateToExternalProcessor is the call to the application's Translate to
External Processor (which you write). The command needs to be in external
format in order to be recorded;

APIRecordInterface is used to record a command after it has been performed.

APIReturnInterface informs the Agent that the command is complete and that
the application is ready to receive or build the next command. It also informs
the Agent about any errors. If an error has occurred in Playback Mode, the
Agent stops the execution and displays an error message.
How Your Application Responds to the Different Modes

The API sends your application an API_SET_MODE_FLAGS_MSG whenever
it wants your application to change the global variable, APIModeFlags.
APIModeFlags indicates which mode your application is in: "Record Mode",
"Playback Mode", "Monitor Mode" or "Intercept Mode". Because NewWave
applications are continuously communicating with the API, your application
must perform API-related processing in a very specific order. The following
paragraphs describe how your application interacts with the API during the
four major modes.
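For orientation, the fragment below sketches how code can branch on those
modes using the mode test macros summarized later in Table 4-3. The empty
branches are placeholders only, and the way the flags variable is passed follows
the component listings in this chapter; it is a sketch, not a prescribed pattern.

    /* Illustrative only: APIModeFlags is the global updated in response
       to API_SET_MODE_FLAGS_MSG.                                           */
    if (APIPlaybackOn(APIModeFlags)) {
        /* commands arrive via API_PLAYBACK_MSG and must be executed        */
    } else if (APIRecordOn(APIModeFlags)) {
        /* executed commands must also be translated and recorded           */
    }
    if (APIMonitorOn(APIModeFlags)) {
        /* commands go to CBT for approval before execution                 */
    }
    if (APIInterceptOn(APIModeFlags)) {
        /* all messages are routed to the API (context-sensitive help)      */
    }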
What Happens During Monitor Mode?

In Monitor (CBT) Mode, the end user is performing actions according to a
lesson and CBT is observing the actions. Monitor Mode is turned on when your
application receives the API_SET_MODE_FLAGS_MSG with the variable,
lParam, set to turn Monitor Mode on. The flow of control during Monitor
Mode is shown in Figure 4-8.

In the Playback Message Test Component, the flow takes the path to the Action
Processor, since the user's actions need to be interpreted. It is possible that the
Action Processor can handle the command so that there is no need to send the
command down to the Command Processor. Such a case would be a Windows
or OMF message. If this happened, the flow would take the optional path from
the Playback Message Test Component to return to WinMain.

Once a command has been formed in the Action Processor, it is translated to
external form and passed via the API to the CBT task, which may cancel the
command. If the command is approved by CBT, normal processing continues.

Since some user actions in Monitor Mode involve responses to questions in
modal dialog boxes, it may be necessary to access the Modal Dialog Box
Processor. While in the modal dialog box, your application may allow the user
to escape, in which case the command returned would be "CANCEL".
There is additional monitoring within the dialog procedure to allow CBT to
control which command the dialog box produces.

In normal situations, however, a command will have been built up, necessitating
the Command Processor to be accessed for execution of the command. Since
Monitor Mode does not require either recording or the return interface, flow
goes straight through the Return Interface Component.
Figure 4-8. Flow of Control during Monitor Mode
What Happens During Playback Mode?

In Playback Mode, the API sends commands to your application via
API_PLAYBACK_MSG with a handle to a location in global memory where
the command is stored. To process this message, your application must retrieve
the command from global memory, then process the command. For example,
when the Agent runs an Agent task, it tells the API to send your application
each command from the task in this way. The API places a command in global
memory, then posts an API_PLAYBACK_MSG to your application. When
your application calls the Return Interface Component, the API will post the
next playback command, and so on until the task is completed.

The flow of control during Playback is shown in Figure 4-9. Control falls
through the User Action Interface since Help is not involved. The Playback
Message Test is passed, and the external form of the command is then
translated by the Translate to Internal Processor.

The Command Interface is bypassed, since only Monitor Mode uses it. There is
a possibility that the command may involve a modal dialog box, so that there is
an optional path to the Modal Dialog Box Processor. As in Monitor Mode,
your application may permit the user to escape from the modal dialog box, in
which case the command returned would be "CANCEL".

If the command has not been cancelled in the modal dialog box, then the
Command Processor is accessed to execute the command. You then inform the
API that the command has been executed via APIReturnInterface, and control
then returns to WinMain.

Note the order of the processing. After receiving the message from the API,
your application had to process that message. Meanwhile, the API waited for
the response. The API could only tell the Agent to continue executing the task
after your application had called APIReturnInterface.

Note that while in Playback Mode, some messages will not be playback
messages, for example, paint messages. These messages are handled in the
usual way by the Action Processor.
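As a rough sketch of the retrieval step described above, the fragment below
treats lParam of the playback message as the global-memory handle, locks it,
and copies the stored external command before converting it to internal form.
Whether the handle really arrives in lParam, and the command structure names,
are assumptions here; the Programmer Reference Manual's description of
API_PLAYBACK_MSG is the authority.

    #include <windows.h>

    /* Hypothetical command layouts for this sketch only. */
    typedef struct { unsigned wCmd; unsigned wSize; } EXTCMD;
    typedef struct { unsigned wCmd; unsigned wSize; } INTCMD;

    /* Sketch of a Translate to Internal Processor for playback messages:
       the command was placed in global memory by the API, so lock the
       handle, copy the external form out, and convert it to internal form. */
    void TranslateToInternalProcessor(unsigned message, WORD wParam,
                                      LONG lParam, INTCMD *pIntCmd)
    {
        HGLOBAL     hCmd = (HGLOBAL)lParam;   /* assumption: handle in lParam */
        EXTCMD FAR *pExt;

        (void)message;
        (void)wParam;

        pExt = (EXTCMD FAR *)GlobalLock(hCmd);
        if (pExt != NULL) {
            pIntCmd->wCmd  = pExt->wCmd;      /* here internal == external    */
            pIntCmd->wSize = pExt->wSize;
            GlobalUnlock(hCmd);
        }
    }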
Figure 4-9. Flow of Control During Playback Mode
What Happens During Record Mode?

Record Mode is switched on when the user selects "Start Recording" from the
API Task Menu. Like the other modes, Record Mode informs your application
by sending it the API_SET_MODE_FLAGS_MSG with the variable, lParam, set
appropriately. The flow of control during Record Mode is shown in
Figure 4-10.

Once in Record Mode, the flow bypasses the User Action Interface. Since the
message represents a current user action rather than a command to be played
back, the Action Processor is accessed from the Playback Message Test
Component. In the Action Processor, your application interprets the user
action and derives a command. If the action involves a modal dialog box, the
Modal Dialog Box Processor will be called. Unless the command is cancelled by
the user in the modal dialog box, the Command Processor will be accessed from
the Command Test Component in order to execute the command. From the
Return Interface Component, the Translate to External Processor must be
called to put the command in external form for recording. After that, the
Record Interface is called to actually record the command. Finally, your
application needs to call the Return Interface to inform the API that your
application is ready for the next command.
Figure 4-10. Flow of Control during Record Mode
What Happens During Intercept Mode?

Intercept Mode is turned on when the user requests screen help.

There are two types of Help requests that a user can make:

1. for information on a menu item, and
2. for information within the application's window.

When Help is requested from a menu item, the message is taken care of in the
User Action Interface Component, which routes the message directly to the
Help Facility, as shown in Figure 4-11.

Requesting Help from within the application window is a little bit more
complicated. The Help Facility responds by supplying the help text. The help
text selection corresponds to the screen coordinates at the location in the
application where the user requested help. To translate the mouse position into
a help message number, the API sends your application an
API_INTERROGATE_MSG, which must be handled by your Action Processor.
Your application must respond by returning the index number of the
corresponding help text. The HPSHAPE program has a function called
InterrogateFromAPI that does this processing. Since it is a simple program, it
always returns the index number for its general screen help text. More complex
applications may choose to use the mouse position and the current state (for
example, using a selected item) to produce a more specific help message. The
flow chart is shown in Figure 4-12.
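The fragment below sketches that idea in its simplest form, along the lines of
HPSHAPE's InterrogateFromAPI: whatever the mouse position, the same general
help context number is returned. The function signature and the context number
are assumptions for illustration; a real application would map the coordinates
(and possibly the current selection) to different help topics.

    /* Minimal sketch of context-sensitive help interrogation handling.
       The signature and constant are illustrative only.                    */
    #define HELP_CONTEXT_GENERAL  1     /* index of the general screen help */

    unsigned InterrogateFromAPI(int xMouse, int yMouse)
    {
        /* A simple application ignores the position and always answers
           with its general help topic; a richer one would test which part
           of the window (or which selected item) the coordinates fall on.  */
        (void)xMouse;
        (void)yMouse;
        return HELP_CONTEXT_GENERAL;
    }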
Figure 4-11. Flow of Control During Intercept Mode - Menu Selection
Figure 4-12. Flow of Control During Intercept Mode - Interrogate Message
API Function Summary Tables

The API functions and macros fall into four general categories:

- API Interface Functions
- API Have Test Macros
- API Mode Test Macros
- Miscellaneous API Functions and Macros

The API Interface Functions are used to pass information to the API as well as
to perform some type of initialization or termination. The API Have Test
Macros test to see if a particular entity is present. The API Mode Test Macros
check the current mode of the API. Miscellaneous API Functions and Macros is
the category for everything else.

The API functions are summarized in the following tables. For more detail,
refer to Chapter 2 in the "Programmer Reference Manual".
Table 4-1. API Interface Functions

Function/Macro              Description

APICommandInterface         Used when a command has been generated in the main
                            application or a modeless dialog box. Passes the external
                            form of the command to the Agent.

APIDlgCommandInterface      Used when a command comes from a modal dialog box. It
                            passes the external form of the command to be executed
                            to the Agent.

APIDlgHelpInterface         Used to provide access to Help from a modal dialog box
                            executed before APIReady is called.

APIDlgInit                  Performs initialization when a modal dialog box is opened.
                            It passes the modal dialog box information to the API
                            environment.

APIDlgTerm                  Used to terminate a modal dialog box session and API
                            interaction.

APIDlgUserActionInterface   Passes a user action (derived from a dialog box) to the
                            Agent; used by CBT and Help.

APIEnableMenuItem           Enables, disables or grays a menu item while still allowing
                            Help to access that item.
Table 4-1. API Interface Functions (cont.)

Function/Macro              Description

APIError                    Returns the error after an API function has been called.

APIErrorInterface           When an error is detected by the application,
                            APIErrorInterface signals to the Agent that an error
                            occurred.

APIInit                     Initializes the API data structures and help.

APIInitMenu                 Adds Task and/or Help menus to an application menu.

APIModelessDlgInit          Performs initialization when a modeless dialog box is
                            opened. It passes the modeless dialog box information to
                            the Agent.

APIModelessDlgTerm          Used to terminate modeless dialog box and API interaction.

APINotReady                 Notifies the Agent that the application is not ready to
                            receive messages from the API.

APIReady                    Informs the Agent that the application is ready to receive
                            API messages (such as set modes or playback).

APIRecordInterface          Passes the external form of a command to the Agent for
                            recording. Called after the command has been executed.

APIReturnInterface          Tells the Agent that the command is complete and that the
                            application is ready to receive or build the next command.
                            Informs the Agent about any errors.

APITerm                     Signals termination of the use of the API.

APIUserActionInterface      Responds to API menu selections. Passes all messages to
                            the API so that Help or CBT may act on them.
Table 4-2. API Have Test Macros

Function/Macro              Description

APIHaveButton               Tests if an API button has been activated (e.g., Help
                            button).

APIHaveCommand              Tests if the user's action(s) have formed a command.

APIHaveDialogCommand        Tests if the user's action(s) form a dialog box command.

APIHaveMenu                 Tests if an API menu (Task, Help, etc.) has been selected.

APIHaveMessage              Determines if a message has been processed and nullified by
                            the API. If not, then the message requires further
                            processing by the application.

APIPlaybackMsg              Tests for a playback message. If true, the application
                            should call the Translate to Internal Processor to generate
                            an internal command.

Table 4-3. API Mode Test Macros

Function/Macro              Description

APIErrorOn                  Signals the application that error capturing has been set by
                            an "ON ERROR DO .." statement within an Agent task. Errors
                            will be handled by the Agent task and do not require any
                            further reporting by the application.

APIInterceptOn              Indicates that all messages should be intercepted by
                            APIUserActionInterface.

APIMonitorOn                Tests if the application is in Monitor Mode, in which case
                            commands are passed to the API and may be nullified before
                            they are passed on to the Command Processor.

APINoWindowOn               Tests whether the application is to run without visible
                            windows.

APIPlaybackOn               Tests if the application is in Playback Mode.

APIRecordOn                 Tests if the application is in Record Mode.
Table 4-4. Miscellaneous API Functions and Macros

Function/Macro              Description

APIChangeCaption            Changes the caption displayed by facilities like Help.

APIGetAPIVersion            Returns the current API part number and version.

APIMessageBox               Creates and displays a window containing an
                            application-supplied message and caption, plus some
                            combination of pre-defined icons and push buttons.

APILoadAccelerators         Loads the API Accelerators to support the keyboard
                            accelerators for the Task and Help menu items.
API Component Summary

This section provides descriptions of the API components currently available.
As long as your application uses the same variables as in these examples, you
can simply plug the components into your application in the prescribed places.
The following global variables are used in these examples:

(Table of global variables used by the components.)
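User Action Interface Component

(This listing repeats the component shown earlier in the chapter; the variable
names are reconstructed from that earlier listing and may differ slightly from
your application.)

    if (APIInterceptOn(APIModeFlags) || APIHaveMenu(message, wParam))
        APIUserActionInterface(hAPI, hWnd, (LPAPIUNSIGNED)&message,
                               wParam, lParam, API_NO_MODE);
    applRtn = (APIERRTYPE)0L;
    if (APIHaveMessage(message)) {
        intCmd.wCmd = API_NO_CMD;
        applErr = API_NO_ERR;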
The User Action Interface Component is the first component in your window
procedure. APIInterceptOn tests if the application is in Intercept Mode, i.e., all
messages are to be routed to the API. If this is the case, then the API wants to
see all user actions and must be alerted through APIUserActionInterface.
APIHaveMenu tests if the user made a selection from an API menu, in which
case the API would also want to see the user's actions, although the
APIIntercept mode may not have been turned on. After these tests, the return
value is initialized to 0.

At the end of the component, it is necessary to test if a message has been
produced that needs to be processed (by means of APIHaveMessage). If there
is no message, then there is no reason for further processing in the window
procedure and control returns back to WinMain.

Playback MsssaQs Tsst Component
it tAIlIttyb~ckllslt~tsup))
Tronelotetolntornotlreeoosertrsaolt, vlar~, tlora~, iintt~d);
else
Aetion~rocettorthYnd, ~sspt) rlor~r, 11er1e, tintCnd) iepplittn);
ff tAIINIVtco~ndtfntC~d.rhd))
opplErr ~ A11 110 Eitl;
The Playback Massage Test Contponatt is the second component in windorov
procedure. Its purpose is to test if a playback message has been recxived fmm
the Agent. If so, the message noels to be translated to its internal form (by
means of the Translate to Internal Processor). If not a playback message) it
must be routed to the Action Prot~essor where it can be interpreted. Either
processor may result in a command to be passed to the Command Processor.
APIHavet:ommand is used to test if a command has been generated) with no
further protxssing required if there is no command.
Command Interests Component
if tA1111onitoronfAllllodtfltlt)) i
TronelotttoExternallroetstortifntted, io><tCed);
ACIConin~ndlnttrfvcethAPl, tIIAIICIbftitUCT)ittttld, A11 1101100E);
if ttxtCmcl.rC~d ~~ AIt 110_C110)
intC~nd.rC~d ~ All 110 CIID;
The Command Interface Component is the third component in the window procedure. At this point
in the window procedure, a command has been formed and is in its internal form. If the
application is in monitor mode (CBT), then the external form of the command must be passed on
to the API (by way of the Translate to External Processor). The purpose of this is to check, in
the case of CBT, that an appropriate command has been formed and, if not, to cancel it. If the
command has been canceled (set to API_NO_CMD), it is necessary to reset the internal command
variable as well so that the command processor will not be called.
Modal Dialog Test Component
if (APIHaveDialogCommand(intCmd.wCmd))
    ModalDialogBoxProcessor(hWnd, &intCmd);
The Modal Dialog Test Component comes next in the window procedure, if the application has any
modal dialog boxes. If the command is in the range of dialog commands, the Modal Dialog Box
Processor is called to display the modal dialog box and get the user input. When the Modal
Dialog Box Processor returns, a command for the Command Processor will have been generated
(or API_NO_CMD if the user cancelled the modal dialog box).
Command Test Component
if (APIHaveCommand(intCmd.wCmd))
    CommandProcessor(hWnd, message, wParam, lParam, &intCmd, &applErr);
The Command Test Component is the fifth component in the window procedure. APIHaveCommand
checks to see if there is a complete, uncancelled command at this point. If there is, then the
Command Processor is called in order to execute the command.
Return Interface Component
if (APIPlaybackOn(APIModeFlags) || APIRecordOn(APIModeFlags)) {
    if (APIRecordOn(APIModeFlags)) {
        TranslateToExternalProcessor(&intCmd, &extCmd);
        APIRecordInterface(hAPI, (LPAPICMDSTRUCT)&extCmd, API_NO_MODE);
    }
    APIReturnInterface(hAPI, applErr, API_NO_MODE);
}
The Return Interface Component is the final component in the window procedure.
If the application is either recording a new task or playing back an existing task or CBT
lesson, then it is necessary to call APIReturnInterface to let the API know that this command
has been completed and that the application is ready for the next command.
If the application is in record mode, then in addition the command must be sent to the Agent
to be recorded. This is done by first translating the command to its external form and then
calling APIRecordInterface.
API Initialization Component
if ( !APIInit( (LPAPIHANDLE)&hAPI, hWnd, hInst, hOMF,
               (LPSTR)szAppHelpFile,
               (LPSTR)szOMFTitle, API_NO_MODE ) )
    if ( APIError(hAPI, API_NO_MODE) >= API_START_API_FATAL_ERR )
        return(CO_ERROR);
if ( !APIInitMenu(hAPI, GetMenu(hWnd), API_TASK_MENU | API_HELP_MENU,
                  API_NO_MODE ) )
    if ( NoteAPIError(hAPI) >= API_START_API_FATAL_ERR ) {
        APITerm(hAPI, hWnd, hInst, hOMF, API_NO_MODE);
        return(CO_ERROR);
    }
The API Initialization Component must be called by an application before any API calls can be
made (although the mode test macros may be used before this call). This component is used to
initialize the API. APIInit returns to the application a handle to the API. This call is
normally followed by a call to APIInitMenu, which will add the Task and Help menus to the given
menu bar. This component should be placed in the Action Processor during handling of the
CREATE_OMF message. APIInitMenu should be called once for each window that requires API menus.
API Termination Component
APITerm(hAPI, hWnd, hInst, hOMF, API_NO_MODE);
APITerm is called in the Action Processor while handling the TERMINATE message, before OMF Term
is called. No calls to the API may be made after APITerm.
API Ready Component
APIReady(hAPI, API_NO_MODE);
APIReady is called in the Action Processor while handling an OPEN or
WARM_START message. This call will cause the API to set up the
APIModeFlags and start sending playback messages to the application.
API Not Ready Component
APINotReady(hAPI, API_NO_MODE);
APINotReady is called in the Command Processor while handling an API_CLOSE_WINDOW_CDCMD
command. It is paired with the API Ready Component in the OPEN message handling. It should be
preceded by the API Return Interface Component to allow the close message to be correctly
recorded. The API Mode Flags are turned off with this call, and no more playback messages will
be sent to the application.
Modal Dialog Box User Action Interface Component
if (APIInterceptOn(APIModeFlags) || APIHaveButton(message, wParam))
    APIDlgUserActionInterface( hAPI, ABOUTBOX, hDlg,
                               (LPAPIUNSIGNED)&message, wParam, lParam,
                               API_NO_MODE );
This component is placed at the start of each modal dialog box procedure. It permits the API
buttons (i.e., Help) to be trapped, and also allows the API to see all messages while in
intercept mode. If the message has been handled by the API, it sets the message to zero.
Modal Dialog Box Initialization Component
switch (message) {
    case WM_INITDIALOG:
        if ( APIPlaybackOn(APIModeFlags)
             || APIRecordOn(APIModeFlags)
             || APIMonitorOn(APIModeFlags) )
            APIDlgInit(hAPI, ABOUTBOX, hDlg, API_NO_MODE);
        break;
}
Called when the modal dialog box procedure receives the WM_INITDIALOG message. This informs the
API that the modal dialog box is ready.
Modeless Dialog Box Return Interface Component
if (APIPlaybackOn(APIModeFlags) || APIRecordOn(APIModeFlags))
{
    if (APIRecordOn(APIModeFlags)) {
        TranslateToExternalProcessor(&intCmd, &extCmd);
        APIRecordInterface(hAPI, (LPAPICMDSTRUCT)&extCmd, API_NO_MODE);
    }
    APIReturnInterface( hAPI, applErr, API_NO_MODE );
}
This component is placed after the command processor in the window procedure for a modeless
dialog box. It will record the command if in record mode, and call APIReturnInterface to tell
the API that the command has been processed and another message may be played back.
Modeless Dialog Box Initialization Component
switch (message) {
    case WM_INITDIALOG:
        if ( APIPlaybackOn(APIModeFlags)
             || APIRecordOn(APIModeFlags)
             || APIMonitorOn(APIModeFlags) )
            APIModelessDlgInit(hAPI, DLG_MODELESS, hDlg, API_NO_MODE);
        break;
}
Placed in the modeless Action Processor while handling the WM_INITDIALOG message, this
component informs the API that the modeless dialog box is now ready.
Modeless Dialog Box Termination Component
switch (message) {
    case WM_DESTROY:
        if ( APIPlaybackOn(APIModeFlags)
             || APIRecordOn(APIModeFlags)
             || APIMonitorOn(APIModeFlags) )
            APIModelessDlgTerm(hAPI, DLG_MODELESS, hDlg, API_NO_MODE);
        break;
}
Placed in the modeless dialog box Action Processor, this handles the WM_DESTROY message by
informing the API that the box is no longer displayed.
Error Message Box Component
if ( APIErrorOn(APIModeFlags) )
    APIErrorInterface(hAPI, error, API_NO_MODE);
else
    APIMessageBox( hAPI, nHelpNo, hWnd, (LPSTR)szMsg,
                   (LPSTR)szCaption, MB_OK | MB_ICONEXCLAMATION );
applErr = error;   /* set error number for APIReturnInterface */
This component is called whenever an application wishes to report an error to the user. It
allows the API to trap the error without displaying a message box.

Modal Dialog Box Command Interface Component
if (APIMonitorOn(APIModeFlags)) {
    TranslateToExternalProcessor(&intDlgCmd, &extDlgCmd);
    APIDlgCommandInterface( hAPI, (LPAPICMDSTRUCT)&extDlgCmd,
                            API_NO_MODE );
    if (extDlgCmd.wCmd == API_NO_CMD)
        intDlgCmd.wCmd = API_NO_CMD;
}
Called in a modal dialog box procedure whenever a button has been selected and a command
generated. This allows CBT to monitor the generated command, and to cancel the command if CBT
wishes.
Modal Dialog Box Termination Component
if (APIHaveCommand(intCmd.wCmd)) {
    if ( APIPlaybackOn(APIModeFlags)
         || APIRecordOn(APIModeFlags)
         || APIMonitorOn(APIModeFlags) )
        APIDlgTerm(hAPI, DIALOG_ID, hDlg, API_NO_MODE);
    EndDialog(hDlg, TRUE);
}
Called when any pushbutton has been selected that will remove the modal dialog box, and placed
after the Modal Dialog Box Command Interface Component.
Modeless Dialog Box User Action Interface Component
if ( APIInterceptOn(APIModeFlags) || APIHaveButton(message, wParam) )
    APIDlgUserActionInterface( hAPI, ID_OF_DLGBOX, hDlg,
                               (LPAPIUNSIGNED)&message,
                               wParam, lParam, API_NO_MODE );
if (APIHaveMessage(message)) {
    /* Process the message */
This component is placed at the start of each modeless dialog box procedure. It permits the API
buttons (i.e., Help) to be trapped, and also allows the API to see all messages while in
intercept mode. If the message has been handled by the API, it sets the message to zero, and
APIHaveMessage will return FALSE.
Modeless Dialog Box Command Interface Component
if (APIMonitorOn(APIModeFlags)) {
    TranslateToExternalProcessor(&intCmd, &extCmd);
    APICommandInterface(hAPI, (LPAPICMDSTRUCT)&extCmd, API_NO_MODE);
    if (extCmd.wCmd == API_NO_CMD)
        intCmd.wCmd = API_NO_CMD;
}
This component is placed in the main window procedure for a modeless dialog
box after the modeless action processor. It allows CBT to monitor the
command generated by the modeless action processor, and to cancel this
command if it wishes.

APPENDIX B
TASK ORGANIZATION
2
An Agent Task script is a set of commands in Task Language form which is then compiled to an
external form as defined in the API section of the Programmers' Reference Manual. This external
form is executed at runtime either by the Agent (Class Independent commands) or an application
(Class Dependent commands) through an API message. API messages are also described in the
Programmers' Reference Manual. A script may have up to two sections: Main and Procedure. The
following examples illustrate the organization. The specific commands are explained in detail
in later sections of this document.
2.1 TASK MAIN SECTION
The Main section of a task is the only one which is required. It consists of a list of commands
bracketed by the TASK and ENDTASK commands. Task execution begins with the first command in
this section, so it essentially directs the flow control of the task script. The following is a
simple example of a task which makes a copy of a folder on the Desktop.
Example
TASK 'A simple example
FOCUS OFFICE_WINDOW "NewWave Office"
SELECT FOLDER "Orders"
MAKE_COPY
ENDTASK

Purpose and Overview

Another potential user of Task Language is an "Intelligent Agent". The addition of AI to
NewWave would necessitate this user model.

power user model
The chosen model for Task Language is the power user. The language will be appropriate for
constructing large automated tasks, quite possibly involving several applications. Such tasks,
while executing, may receive user input and display information through conversational windows
designed by the task writer and controlled by the Task Language script. Most of the
functionality of well-known programming languages is present. More will be added in future
NewWave releases. However, our goal is that the relationship of the user's interactive actions
to the Task Language commands will be apparent from the syntax of the language. In particular,
the syntax of Task Language commands which are recorded will be meaningful enough to serve as a
learning aid to users who wish to explore the more advanced features of NewWave Agent Tasks.
We note that the creation and performing of an Agent Task is a three stage process:
1. The Task Language script is created using the Agent Task Editor. Commands are entered in the
Task Language form. Alternatively, the script is created as the Agent remembers user actions
while in Record mode.
2. The script is compiled into a binary form by the task compiler. The compiler consists of
several sub-parsers: a class independent parser plus class dependent parsers, i.e., a parser
for each object class which is to be accessed during the execution of this particular Agent
Task. The binary output of the compiler is known as the external form of a command. The
external form is described in other documentation.
3. At run time, the Agent dispatches the binary instructions to the appropriate object.
This document concentrates primarily on the first stage.
Besides the command set, the Task Language supports several data types, user and system
variables, arithmetic and logical expressions, and functions. These are described in later
sections.
NOTE
The syntax of these commands does not have to be finalized for VAB Wave. It is presented here
both for your review and suggestions and as examples of the guidelines set forth in the
following sections. The Class Independent command syntax will be released to VABs with the
caveat that some syntax may change. We expect any such changes to be minor and feel that VABs
would be better served if alerted early on to Class Independent command keywords to avoid
conflicts. However, we must live with the style guidelines as released to our VABs.
2.2 PROCEDURE SECTION
Following the main body is an optional procedure section consisting of one or more procedures.
Each procedure consists of a list of commands bracketed by PROCEDURE and ENDPROC commands.
These commands may be executed from elsewhere in the task script with a DO command. We can
expand the simple example in the previous section:
Example
TASK 'A simple example with a procedure
FOCUS OFFICE_WINDOW "NewWave Office"
TITLE# = "August Orders"
DO COPY_FOLDER
TITLE# = "September Orders"
DO COPY_FOLDER
ENDTASK
'The procedure section starts here
PROCEDURE COPY_FOLDER
SELECT FOLDER TITLE#
MAKE_COPY
ENDPROC
Note the use of TITLE# in this example. TITLE# is a Task Language variable. Variables are
discussed in Section 7.

TASK LANGUAGE COMMANDS
3
This section discusses the general syntax of Task Language commands. Task Language commands
form the "meat" of a task. They are of two types: Class Independent commands and Class
Dependent commands. In NewWave the classes of objects installed, as well as the objects in the
user's workspace, will vary from system to system. To provide task automation across object
classes, each application must support a unique set of commands peculiar to its own feature
set. And the Task Language compiler must be able to accept or reject a script command based on
which object classes are installed on the system as well as the format of the command itself.
To do this the compiler uses multiple parsers. The Class Independent parser is present on all
systems. It handles parse and semantic routines and generates the binary format for the Class
Independent commands. In addition, each application provides a parser, in the form of a dynamic
library, which does the same thing for all the Class Dependent commands which it supports. This
is necessary for several reasons:
. Only the application knows the external form (binary) of the commands it supports.
. Only the application knows the Task Language script commands it supports.
. Two applications may support different flavors of the same command. For example, SELECT in
the context of a word processor may not mean the same as in the context of a spreadsheet.
Obviously there will be much overlap in Class Dependent parsers. Hewlett-Packard will assist
VABs in the development of their parsers with documentation, library routines, and source code
templates.
3.1 BASIC SYNTAX
Both Class Independent and Class Dependent commands will need to follow the same syntax
guidelines. Task Language commands are based on English imperative sentence structure, which
consists of a verb followed by zero or more objects. Sometimes the verb is preceded by a
conditional clause introduced by a keyword such as IF or WHILE. Such a command may be referred
to as a statement. In this document, Task Language commands and Task Language statements may be
considered interchangeable. In the examples which follow, items in brackets ( [ ... ] ) are
optional. Items in italics represent user supplied values.
In general, a Task Language command will have the format:
<command keyword> [parameters] ...
The parts of a command are separated by one or more spaces. The line end terminates the
command. (See Continuation Character.) Blank spaces at the beginning of a line are ignored. The
command keyword is a verb form. It will parallel a user action, for example selecting a menu
item. (See Keyword Identifiers.) Some command keywords define user actions which are not menu
selections. Actions which are common across many applications have their verb forms defined in
this document (e.g. MOVE_TO, COPY_TO). Applications should use these defined verb forms so that
the Task Language appears as consistent as possible to users. Parameters, when present, modify
the verb. It is conceivable that the parameter syntax of a command keyword may vary across
applications. For example, the objects of a move or copy verb might be different in a word
processor or spreadsheet context than in the Desktop. Many commands will
have no parameters, consisting only of the keyword verb. Optional keywords, or noisewords, may
be added to the command to improve readability.
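For example, a command may consist of the keyword alone or of the keyword followed by
parameters. The two commands below appear elsewhere in this document and are repeated here only
to illustrate the general format.
Example
MAKE_COPY
SELECT FOLDER "Orders"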
3.2 KEYWORD IDENTIFIERS
Keywords and noisewords consist of any alphanumeric characters plus the underline character
"_". The first character must be a letter. These identifiers are not case sensitive, e.g.
TITLE, Title, and title are all equivalent. When the keyword is a command keyword, it should be
as close as possible to any menu selections that the user would make when accomplishing the
same action interactively. If the selection consists of a phrase of two or more words, the
command keyword should contain those words with the spaces replaced by the underline character,
e.g. CREATE_A_NEW. If the command keyword does not represent a menu selection, it should
describe the user action in English, e.g. MOVE_TO, CHANGE_TITLE, etc. If the command keyword
for the action has been predefined in this document, the application should use it.
3.3 NOISEWORDS
The Guidelines recommend limited use of noisewords. They should be used only in those
situations where their addition significantly improves the readability of the command and makes
it more English-like. Noisewords are frequently prepositions such as OF or TO and can be useful
immediately following the command keyword. They should be used judiciously since they can
inflate the size of the parse tables and parse routines. Do not insert a noiseword in a command
if it would make the corresponding spoken or written sentence sound stilted.
Examples
The simpler command
SELECT <classname> "<title>"
is preferable to the more verbose form
SELECT <classname> [WITH] [TITLE] "<title>"
The noisewords WITH and TITLE are not necessary and would be unusual in the spoken or written
sentence. However, in the case
CHANGE_TITLE [OF] <classname> "<old title>"
             [TO] "<new title>"
OF and TO make the command read like an English sentence and are recommended. If in doubt,
leave the noiseword out. Noisewords are always recorded.
3.4 PARAMETERS
A parameter may be a keyword, user supplied data, or a keyword followed by data. Parameters may
be required or optional. Ordinarily, they are not separated by commas. When the command
parameter is data, it must be a recognized Task Language data type such as integer or string.
But such parameters may be Task Language variables or expressions as well as constants. A
parameter which might create ambiguity in the command should be preceded by a keyword. The
following two Desktop commands illustrate optional and required keyword parameters.
TEXTUAL_VIEW [SORTED] BY CLASS
                      BY TITLE
                      BY DATE
CHANGE_ATTRIBUTES [TITLE "<title>"]
                  [PUBLIC ON]
                  [PUBLIC OFF]
                  [COMMENTS "<comments>"]
In the examples above, either BY CLASS, BY TITLE, or BY DATE must appear as a keyword
parameter. The parameters "<title>" and "<comments>" are string expressions. The keyword SORTED
in the TEXTUAL_VIEW command is a noiseword added to improve readability, while TITLE and
COMMENTS in the CHANGE_ATTRIBUTES command are necessary keywords to resolve ambiguity. Optional
keyworded parameters should not be order dependent unless the order is needed for readability.
If the same parameter appears more than once in a script command, the compiler should display
an error message.
3.5 DIALOG COMMANDS
There are two types of commands which simulate the action of dialog boxes. The task writer may
choose to have the dialog box displayed and receive user input as the task is executed.
Alternatively, the information which the user would input to the dialog box in interactive mode
may be defined as command parameters. In this case, there is no user interaction at execution
time.
The command verb should map closely to the interactive action. Frequently, it will correspond
to a selection on a pull-down menu, e.g. INSTALLATION.
3.5.1 Modal Dialog Boxes
Modal dialog boxes are terminated by a user action such as pressing the OK button. When a modal
dialog box is open, the rest of the application is effectively locked out until it is closed.
Each such box maps to a single task language command. All checkboxes, listboxes, editboxes etc.
are expressed as keyworded parameters. They will be optional in the cases where user input is
not necessary, and they should not be order dependent. If a modal dialog box is an auto-close
box with more than one action button (e.g. closing with SAVE or DISCARD), each button is a
parameter. Again, the keywords should reflect the user actions. Parameters which are omitted
assume their current values. Table 1.1 summarizes the syntax for dialog box controls.
Example
SET_USER [NAME "<name>"]
         [[WITH] TIMEZONE "<timezone>"]
is an example of a modal dialog command. Note that it is not necessary for the user to act on
every item in this dialog box, hence the parameters are optional.
Certain dialog controls have buttons which change the state of other controls in the dialog
box. Such buttons are also represented as keyworded parameters, e.g. DEFAULTS, CLEAR. Other
parameters which then appear in the same command will be taken as exceptions to the stipulated
state. When present, a parameter of this type should precede the controls it affects. DEFAULTS
type buttons in subdialog boxes have separate keyworded parameters, e.g. TAB_SETTINGS DEFAULTS.
Example
CELL_SETTINGS DEFAULTS CENTER
illustrates this with a spreadsheet command. The user wants all the default cell settings
except for justification.
3.5.2 Dialog Command Parameters
Table 1.1 summarizes dialog box controls as they map to parameters in dialog box commands.
Table 1.1. Dialog Box Controls Syntax

Control           Parameter                              Example
Checkbox          <keyword> {ON | OFF}                   CHANGE_ATTRIBUTES PUBLIC ON
Listbox           Selection is a parameter of the        SET_USER TIMEZONE "-7h"
                  action command
Editbox           Content is a parameter of the          SET_USER NAME "Jon"
                  action command                         LIST_TO_PRINT 2 COPIES DEVICE "LPT1"
Radiobutton       <keyword>                              CONFIGURE DEVICE PORTRAIT
Single action     None                                   SET_USER box
Multiple actions  <keyword> for each button              In a word processor, CLOSE may be
                                                         presented with a dialog box which
                                                         allows the user to SAVE or DISCARD
                                                         his changes
3.5.3 Interactive Modal Dialog Commands
Commands which will bring up a modal dialog box to display a message or to obtain information
from the user should have a '?' appended to the command keyword. For example,
ABOUT?
results in a modal dialog box which contains information on the current object. The task will
resume when the user clicks on OK. This command corresponds to the user selecting About... on
the File menu.
When parameters appear in interactive dialog box commands, the values will be filled in when
the dialog box is opened for user input. This feature is useful when setting up default values.
3.5.4 Recording in Modal Dialog Boxes
In Record mode, the values of all modal dialog box parameters are recorded. The effect is then
to record the entire state of the object. Therefore, DEFAULTS type parameters are not recorded.
Later, the user may edit the task to delete any unwanted parameters.
Applies to Modal dialog boxes only.
3.5.5 Modeless Dialog Boxes
Modeless dialog boxes differ from modal in that they remain open until explicitly closed by the
user. Frequently, they may be moved. The user may continue to work in other parts of the
application. These boxes have specific Task Language commands to open and close them. Each
action within a modeless dialog box corresponds to a command rather than a parameter as in a
modal box. The action command verb must be specific enough to avoid ambiguity with commands not
related to the dialog box. Such an unambiguous command implies activation of the dialog box
window. (See discussion of the ACTIVATE command in the next section.) A Task Language command
verb which opens a modeless dialog box should be the same as the menu selection which opens the
box interactively. Opening a modeless dialog box implicitly makes it the current active window.
Each such command should include an optional keyword parameter QUIET. When present, this
parameter suppresses display of the dialog box while still allowing the state-modifying dialog
box commands to be executed. Only those controls which result in actions or state changes are
represented by commands. For example, if the interactive action consists of the user selecting
an item from a listbox and pushing a control button, the command would consist of the control
button verb with the listbox selection as a parameter. The selection itself would not be a
command unless it caused a persistent change of state in the dialog box. A non-persistent state
change is represented as a parameter of the action command.
Commands which pertain to the Desktop Show Links... option on the Items menu can be used to
illustrate several points discussed in this section. The following example is identical to a
NewWave user selecting the Show Links... option in the Desktop Items menu, completing the
dialog box and pressing Done. The user has previously selected an item on his Desktop.
Example
'Opens the modeless dialog box and makes it the active window
SHOW_LINKS
'The parameter DOCUMENT "August Orders" is a listbox selection
'It is not a persistent change of state hence not a command
OPEN_PARENT DOCUMENT "August Orders"    'User presses OPEN
'Illustrates the use of a variable as a parameter
a# = "New Orders!"
OPEN_PARENT DOCUMENT a#
CLOSE    'User presses DONE
3.5.6 Recording in Modeless Dialog Boxes
In Record mode, only the commands which are executed by a user action are recorded. Note the
difference from modal dialog boxes.
3.6 PARAMETER LISTS
If the parameters form a list of elements of the same type then the list elements should be
separated by commas. The lists may be of either definite or indefinite length.
Example 1
MANAGE_TOOLS PUT_IN_OFFICE_WINDOW <classname> "<title>" [, <classname> "<title>"]...
3.7 USER NAMES
Certain commands, e.g. LABEL and DEFINEWINDOW, require a user defined identifier as a
parameter. These identifiers have the same construction as keywords. In fact, keywords are
acceptable. The compiler will be context-sensitive to this situation.
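For illustration, the identifiers MYWINDOW and DO_WORK below are user defined names supplied as
parameters; both are taken from the larger example in Section 6.6.
Example
DEFINEWINDOW MYWINDOW ( (10,50), 200, 250 ) POPUP "Task Window"
LABEL DO_WORK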
3.8 COMMENTS
Comments are introduced using the single-quote character, '. Unless the character is within a
string literal, the compiler will ignore all characters on a line following a single-quote
character. Since blank lines are also ignored by the compiler, they may be inserted to improve
readability.
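A brief illustration follows; the command line itself is arbitrary and is shown only to place
the trailing comment in context.
Example
'This entire line is a comment and is ignored by the compiler
SELECT FOLDER "Orders"    'a trailing comment; the command before it is still compiled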
3.9 CONTINUATION CHARACTER
The Task Language is a line-oriented command language. A carriage return terminates a command.
However, commands may be continued on successive lines by using the ampersand, '&', as a
continuation character. If the last nonblank character on a line is the ampersand, the compiler
will suppress the carriage return and continue parsing the command from the next line. An
exception occurs if the ampersand is in a string literal.
Example
'The continuation character can improve readability and allow
'the use of long strings
CHANGE_OWN_ATTRIBUTES PUBLIC OFF &
COMMENTS "We are inserting a rather long comment string which" + &
"won't fit on one line so we use concatenation and" + &
"continuation."

CLASS INDEPENDENT COMMANDS
4
Class Independent commands are parsed by the class independent parser and executed by the Agent
itself at runtime, independent of any application object which may be open. Most Class
Independent commands either manipulate task conversational windows or handle flow control of
the task. All Class Independent commands are described in the section on the command syntax.
4.1 CONVERSATIONAL WINDOW COMMANDS
The task conversational window is a feature provided by Task Language to enable the task to
communicate with the user. With these commands, the task writer can design and display a window
on the screen. He can output information to the user in the window. Or he can put an editbox or
pushbutton in the window to get input from the user. Some commonly used window designs are
available in the MESSAGE and INPUT commands. Some examples of the functionality provided by
conversational window commands are listed below:
Examples
DEFINEWINDOW   defines a task conversational window
OPENWINDOW     opens a previously defined conversational window
CLOSEWINDOW    closes a conversational window
TITLEWINDOW    changes the caption bar
CLEARWINDOW    clears a conversational window
OUTPUT         outputs text to a conversational window
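A minimal sketch of how these commands might be combined is shown below; the window name, the
region, and the message text are arbitrary, and the complete example in Section 6.6 shows the
same commands in a full task.
Example
DEFINEWINDOW MYWINDOW ( (10,50), 200, 250 ) POPUP "Task Window"
OPENWINDOW MYWINDOW
OUTPUT "Copying the August folders..."
CLOSEWINDOW MYWINDOW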
4.2 FLOW CONTROL AND EXECUTION
The execution sequence of task commands may be controlled by the conditional execution and
looping capabilities. Frequently executed command sequences may be placed in a procedure. Task
data may be stored in variables.
Examples
IF..ELSE..ENDIF       conditional execution
WHILE..ENDWHILE       looping
PROCEDURE..ENDPROC    defines a procedure or subroutine
DO               executes a procedure
RETURN           exits a procedure, returning to the command following the DO statement
GOTO             unconditional jump
<var> = <expr>   assignment
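A short sketch combining these constructs follows. The variable name, loop bound, and procedure
name are arbitrary, and it is assumed here that WHILE takes a logical expression in the same way
IF does; only operators shown elsewhere in this document (=, <>, +) are used.
Example
count# = 0
WHILE count# <> 3
DO COPY_FOLDER
count# = count# + 1
ENDWHILE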
4.3 FOCUS COMMAND
The FOCUS command needs additional discussion since it results in both compile time and run
time actions. The syntax is
FOCUS [ON] <classname> "<title string>"
FOCUS OFF
where <classname> refers to the class of object (e.g. DOCUMENT, FOLDER) and "<title string>" is
the title of the specific object referenced. When a class of objects is installed, its
classname is added to those recognized by the parsers.
When a task is executed, the majority of the commands will result in the Agent sending a
message to the object which currently has the focus. The parameters of this message comprise a
command which will direct the object to change its state. At run time, the FOCUS command tells
the Agent which object is the target of subsequent command messages. At compile time it has
another role. It controls selection of the class parser which will parse Class Dependent
commands and generate the external command form.
Commands are compiled sequentially in the order received. However, the order in which commands
are actually executed at run time will seldom, if ever, be completely sequential. Conditional
execution (IF, WHILE), jumps (GOTO), procedure execution (DO), user variables, etc. virtually
guarantee that there is no way to make a determination at compile time which object will have
the focus at runtime. The FOCUS command sets a compile time focus. In effect, it determines
which Class Dependent parser will parse the commands following it. The command
FOCUS DOCUMENT "Orders Report"
will cause all Class Dependent commands to be parsed by the Document parser until another FOCUS
command is encountered. If the class and title parameters are missing, only class independent
commands will be accepted by the parsers until another FOCUS statement is encountered. The main
effect of this command is to reduce compilation time.
The following example illustrates the compile time and run time behavior of the FOCUS command.
It displays a conversational window asking the user if he wishes to see his spreadsheet. If so,
it opens it and calls a procedure which executes some spreadsheet commands.
Example
TASK 'this task illustrates the FOCUS command
'Get user input via a message box with YES and NO pushbuttons
MESSAGE a# "Do you want to see your spreadsheet?" YESNO
'If a# = 1, user pressed YES
'If a# = 2, user pressed NO
IF a# = 1
'direct commands to desktop
FOCUS OFFICE_WINDOW "NewWave Office"
title# = "Your Spreadsheet"
SELECT SPREADSHEET title#
OPEN
DO SS_STUFF
ENDIF
'Focus again on the Desktop to be safe at run time
'Note that without the FOCUS command, commands will be parsed by the
'OFFICE_WINDOW parser, but, at run time, commands will be sent to the object
'which has the focus at return from SS_STUFF OR no object will have focus,
'depending on the value of a#.
FOCUS OFFICE_WINDOW "NewWave Office"
< more commands >
ENDTASK
PROCEDURE SS_STUFF
'Set compile and run time focus
'Note that without the following FOCUS command, commands will be parsed
'by the parser which has the focus at the ENDTASK command, but, at run time,
'commands will be sent to the OFFICE_WINDOW object, NewWave Office.
FOCUS SPREADSHEET title#
< spreadsheet commands >
CLOSE
RETURN
ENDPROC 'run time focus is still on the spreadsheet
4.4 INTERRUPT COMMANDS
Interrupt commands are available which enable the task to take action if a system variable is
modified by the Agent. For example
ON ERROR DO ERRORPROC
will cause the routine ERRORPROC to be executed if the Agent detects a task execution error.
See Appendix A.
4.5 COMMAND PRECEDENCE
Class Dependent commands have precedence over Class Independent commands. If a FOCUS command is
in effect, the command is passed to the class parser specified by it. If that parser returns an
error, the command is passed to the Class Independent parser, which will either parse it or
cause a compile error to be displayed. However, to minimize user confusion, developers should
avoid conflicts with Class Independent command verbs.
CLASS DEPENDENT COMMANDS
5
A Class Dependent command sends a run time message to an application which will cause it to
change its state. Both the Task Language form (which the task writer enters) and the external
form (the task compiler output which the Agent sends to the application) are defined by the
application. Many Class Dependent commands are common to most, if not all, applications. It is
important that the Task Language form of these look as similar as possible. Command id numbers
found across classes are predefined in NWAPI.H. The command keyword is the literal define with
API_ and _CDCMD removed. These predefined values are provided for the convenience of
application developers. Some of the more common Class Dependent commands are discussed in this
section. Information about commands may also be found in the NWAPI.H file.
Reiterating, if a command matches a menu command, its keyword should match the command on that
menu.
5.1 SELECTION COMMANDS
In NewWave all applications will support some form of selection, although the action will vary
depending on the application. This section describes selection commands for common types of
applications. Developers should use the most appropriate model as a guide when defining their
selection commands.
The concept of selection is described in the User Interface Design Rules. Interactively,
selection usually reflects keyboard input or some form of mouse click by the user. It can be
either absolute or relative. Selecting the folder named "Orders" on your Desktop, or cells A1
through B6 in a spreadsheet, is an absolute selection, whereas selecting the next two words in
a document is relative. In the second case, the portion of the document selected clearly
depends on where the selection starts. The commands must make this differentiation. Task
Language defines both absolute and relative selection. Not all objects will implement both
modes - those that do will provide a user toggle to allow recording in either mode. Application
developers should define which mode is the default for recording their commands. It is
important to understand the distinction between absolute and relative when specifying Task
Language commands. The primary difference is in the specification of the command parameters.
5.1.1 General Syntax
This section describes the general syntax of selection commands. Later sections include
examples of selection command syntax for some common object types.
5.1.1.1 SELECT
The SELECT command is the selection of a single element or a range of elements. Any previous
selection is deselected. The corresponding user action is a mouse selection. The syntax is:
SELECT <parm-1> [ TO <parm-2> ]
The format of the <parm> depends on the application. The TO clause is used to designate the
selection of a range or a number of items.
5.1.1.2 DISJOINT_SELECT
The DISJOINT_SELECT command is the selection of a single element or a range of elements.
Previous selections are not affected by this command. The corresponding user action is a mouse
selection with either the shift key or control key depressed, or the equivalent keyboard
action, depending on the application. The parameters are the same as those in the SELECT
command.
DISJOINT_SELECT <parm-1> [ TO <parm-2> ]
5.1.1.3 ADJUST_SELECTION
The ADJUST_SELECTION command changes the set of items or modifies the boundaries of the range
of the currently selected items. If the current selections are disjoint, the most recent
selection will be adjusted. The general syntax is
ADJUST_SELECTION [<handle-parm>] [ TO ] <location-parm>
The form of <location-parm> determines if the selection is absolute or relative. Some
applications such as imaging or graphics require an additional parameter (<handle-parm>) to
indicate the location through which the area is to be adjusted.
5.1.1.4 DESELECT
The DESELECT command removes all current selections. It has no parameters.
5.1.1.5 SELECT_ALL
The SELECT_ALL command selects the entire object, e.g. all items in the current window, the
entire spreadsheet, document, table, etc. It has no parameters.
5.1.1.6 Keyboard Commands
Cursor key commands are relative selection commands. They are most frequently used in the
context of a two dimensional object such as a spreadsheet or a table.
LEFT [<int>]
RIGHT [<int>]
UP [<int>]
DOWN [<int>]
<int> indicates the number of cursor movements; if omitted, one is assumed. The item under the
cursor will be selected. Previous selections are deselected. In some cases these commands form
the parameter syntax for a relative ADJUST_SELECTION. For example,
ADJUST_SELECTION RIGHT 3
extends the current selection three units to the right. The parameters are relative to the
anchor point of the current selection.
Certain keyboard actions are intrinsically absolute, e.g. pressing the Ctrl Home or Ctrl End
keys. Commands which reflect such actions are defined to be absolute in nature; any selection
which occurs as a by-product of these commands is considered absolute.
5.1.1.7 Other Relative Commands
Certain object types are linear in nature. For example, a document is essentially a character
stream. Relative movement is accomplished by positional commands
NEXT <parm> [<int>]
PREVIOUS <parm> [<int>]
<parm> is some unit, expressed as a keyword, which makes sense in the context of the object
(e.g. WORD, LINE, etc.) and <int> indicates the number of units from the current selection
point. Previous selections are deselected. In most cases, the result of these commands will be
an empty selection.
Since the parameters and syntax of selection commands vary widely with the object, we will
refine the preceding definitions with examples of representative object types.
5.1.1.8 Recording Repeated Relative Commands
When two identical or related commands are recorded, the application may, at its discretion,
overwrite the first command and increment the [<int>] parameter. For example,
LEFT
LEFT
may be replaced by
LEFT 2
The user should be able to turn off this overwriting if desired. Certain procedures such as CBT
or animation may need the multiple command mode.
5.1.2 Container Objects
For container objects such as folders or the Desktop, all selections are absolute. The items
are selected from icons or lists, and their parametric representation in the corresponding Task
Language command is the class name keyword and title string. The following commands are
supported:
Container Object Selection Syntax
SELECT <classname> "<title>"
DISJOINT_SELECT <classname> "<title>"
DESELECT
SELECT_ALL
A DISJOINT_SELECT is accomplished interactively by clicking on an item while holding down the
Shift key. In container objects, this command is a toggle. A previously selected item will be
deselected while an unselected item is selected. No change is made to the select status of the
other items in the container.
Example
The command
SELECT FOLDER "August Orders"
represents a mouse click on the icon representing "August Orders", which will be highlighted.
Any items previously selected will be deselected. The sequence
SELECT_ALL
DISJOINT_SELECT FOLDER "August Orders"
results in the selection of all icons in the container except the "August Orders" folder.
5.1.3 Table Objects
Table objects are two dimensional objects such as database tables and spreadsheets. They
support both relative and absolute selection. In spreadsheets selection granularity is based on
cells. At least one cell will always be selected. Absolute selection parameters are cell
references or cell ranges. A user-defined name for a cell range may also be used as a selection
parameter. Database tables behave similarly, but a "cell" is really a field in a record and the
absolute selection parameter is expressed in terms of row and column. In table objects, the
keyboard commands, used for relative selection, may be used both as commands or as parameters
of the ADJUST_SELECTION command.
To set the context, we give a brief review of interactive selection in table objects. A cell is
selected by clicking on it with the mouse or using the cursor keys until the desired cell is
highlighted. Continuing to hold the button down and moving the mouse over the table, or holding
the shift key down and moving the cursor keys, will highlight (select) a rectangular area, a
range of cells. The original cell is the anchor cell while the cell diagonally opposite the
anchor is the movable cell. If only one cell is selected, it is both anchor and movable.
Adjustments to the selected area will be along the sides of the rectangle which intersect at
the movable cell. At the end of the adjustment, the new movable cell will be the cell
diagonally opposite the anchor. The anchor will not have changed.
Selection Syntax
SELECT "<cell parameter>" [TO "<cell parameter>"]
DISJOINT_SELECT "<cell parameter>" [TO "<cell parameter>"]
ADJUST_SELECTION [TO] "<cell parameter>"
                      <keyboard command>
SELECT_ALL
KEYBOARD COMMANDS
LEFT [<int>]
RIGHT [<int>]
UP [<int>]
DOWN [<int>]
For spreadsheets "<cell parameter>" is a string expression containing a cell location, e.g.
"C10", or a cell range, e.g. "J1...M10". For database tables the absolute selection parameter
is expressed in terms of row and column using the following syntax forms.
SELECT [ROW <int>] [COLUMN {"<fieldname>" | <int>}]
Note that the column can be expressed either as a field name string or a numeric. If the ROW
parameter is omitted, the column selection is across all rows. If the COLUMN parameter is
omitted, the row selection is across all columns. The following syntax defines selection of a
rectangular area.
SELECT ROW <int> COLUMN {"<fieldname>" | <int>} TO ROW <int> COLUMN {"<fieldname>" | <int>}
Note that in this case all row and column parameters must be present.
If the command is in relative mode, the keyboard commands are used as parameters. Recall the
syntax
SELECT <parm-1> [ TO <parm-2> ]
In relative mode <parm-1> is relative to the current anchor point. (In spreadsheet objects, at
least one cell is always selected.) However, <parm-2> is relative to <parm-1>. This also
applies to the DISJOINT_SELECT command.
Examples
LEFT 3
selects the third cell or field to the left of the current selection, which is now deselected.
The sequence
FOCUS SPREADSHEET "September Orders"
SELECT "A1...C3"
DISJOINT_SELECT "C14"
ADJUST_SELECTION DOWN 4
selects the cells from A1 through C3 plus D4 through D8. The sequence
FOCUS SPREADSHEET "September Orders"
SELECT "A1...C3"
DISJOINT_SELECT DOWN 4 RIGHT 4 TO DOWN 4
also selects the cells from A1 through C3 plus D4 through D8.
5.1.4 Document Objects
A document may be considered in the context of a stream of characters, essentially a one
dimensional object. Selection begins with a point called the edit point and ends with the
cursor position. It may be empty (cursor and edit point coincide), forward or backward,
relative or absolute. When a selection is adjusted, it is from the edit point to the new cursor
position. A document may not necessarily support disjoint selection. The absolute selection
parameters consist of a page number, possibly optional, with horizontal and vertical offsets
relative to the page. The offsets are in some local linear measurement unit. Possible units
include inches, millimeters, line number and character on the line, etc. Relative selection
parameters are in syntactic units such as CHARACTER, WORD, LINE, PARAGRAPH, etc.
Selection Syntax
SELECT [PAGE <page number>] <x offset>, <y offset>
       [TO [PAGE <page number>] <x offset>, <y offset>]
ADJUST_SELECTION [TO] [PAGE <page number>] <x offset>, <y offset>
                      <keyboard command>
SELECT_ALL
Keyboard Command Examples
NEXT <syntactic unit> [<int>]
PREVIOUS <syntactic unit> [<int>]
A relative command has the effect of deselecting any current selection and moving the cursor
and edit point to coincide at the parametric units from the previous cursor position. It
results in an empty selection.
Examples
FOCUS DOCUMENT "August Report"
'Move the cursor and edit point together, nothing is selected
SELECT PAGE 2 1 INCH, 4 INCH
'Select the next two paragraphs
ADJUST_SELECTION NEXT PARAGRAPH 2
'Deselect the last word
ADJUST_SELECTION PREVIOUS WORD
5.1.5 Image and Graphics Objects
Absolute selection parameters for image and graphics objects are expressed in terms of
coordinates on a grid. The ADJUST_SELECTION command requires the additional handle parameter to
specify the side or sides of the selection area which will be moved. DISJOINT_SELECT is not
supported in image objects. In a graphics object it acts as a toggle, deselecting an area if
previously selected. Relative selection is supported for the ADJUST_SELECTION command.
Selection Syntax
SELECT <x>,<y> [ TO <x>,<y> ]
DISJOINT_SELECT <x>,<y> [ TO <x>,<y> ]
ADJUST_SELECTION absolute
ADJUST_SELECTION {TOP_LEFT | TOP_RIGHT | BOTTOM_RIGHT | BOTTOM_LEFT} [TO] <x>,<y>
ADJUST_SELECTION {TOP | BOTTOM} [TO] <y>
ADJUST_SELECTION {LEFT_SIDE | RIGHT_SIDE} [TO] <x>
ADJUST_SELECTION relative
ADJUST_SELECTION {LEFT | RIGHT} [<int>] {UP | DOWN} [<int>]
DESELECT
SELECT_ALL
Examples
'10,10 to 50,100 will be selected
SELECT 10,10 TO 50,100
'Top left of 10,10 becomes the anchor point
ADJUST_SELECTION BOTTOM_RIGHT TO 30,70
'New selection area is 10,10 to 30,70
5.1.6 View Selection
A View is selected in the coordinates of the object which contains it. Selecting any portion of
the area which contains the View will select the entire View. User-defined marks may be
associated with a View to simplify the Task Language command.
5.2 OBJECT MANIPULATION COMMANDS
Many applications will support some form of direct manipulation of objects. The supported
syntax is defined below. The MOVE_TO and COPY_TO commands may refer to an object in either
opened or iconic form. The operation is performed on the current selection. The WITHIN clause
refers to an open object.
Object Manipulation Syntax
MOVE_TO <classname> "<title>" [ WITHIN <classname> "<title>" ]
MOVE_TO "<coordinates>" [ WITHIN <classname> "<title>" ]
COPY_TO <classname> "<title>" [ WITHIN <classname> "<title>" ]
COPY_TO "<coordinates>" [ WITHIN <classname> "<title>" ]
Again, "<coordinates>" is expressed in the context of the object, e.g. a spreadsheet cell, a
graphics grid point, etc.
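An illustrative sketch follows; the object titles are arbitrary, and since MOVE_TO and COPY_TO
operate on the current selection, a SELECT precedes the copy.
Example
SELECT FOLDER "August Orders"
COPY_TO FOLDER "Archive" WITHIN OFFICE_WINDOW "NewWave Office"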
5.3 DATA MANIPULATION
Applications which nipport direct muiipulation of data should use the same
verbs and parameter syntax
as in the object manipulation command: whenever possible. In addition, the
follo~ring commands cause
the application to move dst>t sad objects to/from the Clipboard.
Clipboard Commands
COPY   copies the selection to the Clipboard
CUT    deletes the selection and places it on the Clipboard
PASTE  inserts the Clipboard contents at the selected location
SHARE  makes a reference to the selected item available to other objects
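A brief sketch of Clipboard use in a task follows; the document titles are arbitrary, and the
selection commands an application actually supports may vary.
Example
FOCUS DOCUMENT "August Report"
SELECT_ALL
COPY
FOCUS DOCUMENT "September Report"
PASTE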
5.4 WINDOW MANIPULATION
These commands alter the state of the current window. They are applicable only to windows in
which the user could perform the action interactively.
Window Commands
ACTIVATE       sets the current active window
ADJUST_WINDOW  changes the size and/or location of the current window
MAXIMIZE       maximizes the current window size to full screen
MINIMIZE       changes the current window to iconic form
RESTORE        restores the iconic or maximized current window to its previous size
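For illustration, a task might shrink the current active window to an icon and later restore
it; the sequence below is a minimal sketch and assumes the target window has already been made
current.
Example
MINIMIZE
RESTORE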
5.5 MISCELLANEOUS COMMANDS
Certain commands will be supported by most applications. Refer to the User Interface Design
Rules for further discussion.
5.6 REFERENCING SUBORDINATE WINDOWS
Most applications will need to support Task Language commands which reference subordinate
windows. These windows may be either the main window of the application, subwindows, MDI
windows, or modeless dialog boxes; in fact, any window except modal dialog boxes. The Task
Language must identify the target window. For example, if an object has two subordinate windows
visible, the script must be able to specify which (or possibly neither) an ADJUST_WINDOW
command is to move or size. All window manipulation commands fall into this category. Many
applications will find a similar situation with other commands such as the Clipboard CUT, COPY,
and PASTE. Task Language uses the Class Dependent ACTIVATE command plus a window identifier to
set the current active window. The following guidelines apply for referencing subordinate
windows:
1. If the subordinate window is a modeless dialog box which is opened by the task via a command
verb, use the same keyword as the window identifier, e.g. ATTRIBUTES to identify the Desktop
dialog box.
2. If the subordinate window is an application child window which is user created or has a
caption bar which is user created, use a string parameter as the window identifier, e.g. the
date windows in the Agent's Desk Calendar.
3. If the command can be applied to the main window of the application as well as one or more
subordinate windows, the window identifier parameter is optional. The absence of this parameter
indicates the main window of the application.
5.6.1 ACTIVATE Command
The ACTIVATE command sets the current active window. All applications must support it if they
have subordinate windows. Commands following an ACTIVATE command which are directed to a
particular window will be sent to that window until the current window is changed, either
explicitly with another ACTIVATE or implicitly. The syntax is
ACTIVATE [<window identifier>]
The <window identifier> parameter is used to designate the window, as in the case of a modeless
dialog box. If this parameter is not present, the main window of the application is assumed.
However, the keyword parameter MAIN may also be used to designate the application main window.
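For example, the pair of commands below follows the guidelines above; ATTRIBUTES is the keyword
suggested for the Desktop's modeless attributes dialog box, and MAIN returns the focus to the
application main window.
Example
ACTIVATE ATTRIBUTES   'direct subsequent commands to the attributes dialog box
ACTIVATE MAIN         'return to the application main window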
5.6.2 Implicit Window Activation
Certain commands carry implicit window activation. When they are executed, the window acted
upon becomes the current active window. An ACTIVATE command may be used as well, but it is
redundant. The following command types imply activation.
1. Any command which opens a subordinate window activates that window. For example, if a
command opens a modeless dialog box, that box is then the current active window.
2. Any command which is completely unambiguous in the window it references implies an
activation of that window. For example, the CHANGE_ATTRIBUTES Desktop command is only
applicable to the CHANGE_ATTRIBUTES dialog box and therefore would imply activation of it.
5.6.3 ADJUST_WINDOW Command
The ADJUST_WINDOW command is used to change both the size and location of the selected window.
The window must be of a type such that the user could perform these operations on it in
interactive mode, e.g. the main window of an object or a modeless dialog box. A single command
is used since it is difficult to resize a window without moving it. The syntax is
ADJUST_WINDOW [TO <width> [BY] <height>]
              [AT <point-location>]
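For example, the command below resizes and moves the current window; the dimensions and the
point location are arbitrary illustrative values.
Example
ADJUST_WINDOW TO 400 BY 300 AT (50,50)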
5.7 KEYWORDS AND PARAMETERS
This section summarizes some of the rules for parameters.
1. The keyword parameters ON and OFF are used to define the state of checkboxes in dialog boxes
and of toggle type menu commands.
2. When referencing an object by its class and title use the sequence
<classname> "<title>"
where <classname> is a keyword identifier referring to the class of object (e.g. SPREADSHEET)
and "<title>" is a string parameter containing the title of the specific object. Do not use
noisewords such as WITH TITLE in this context. They add to verboseness and do not improve
readability.
3. When referencing locations, use a parameter of data type point. If needed, the parameter may
be prefaced with the keyword AT, optionally to improve readability or required to disambiguate.
4. OF, TO, etc. may be used as noisewords after the command verb if doing so makes the command
more grammatically correct in the context of an English sentence.
CHANGE_TITLE [OF] DOCUMENT "Orders" TO "August Orders"
5. The keyword QUIET is an optional parameter in commands which open modeless dialog boxes.
When present, the box is invisible during task execution although the commands changing its
state will be executed. Otherwise, by default, the box is displayed.
6. The keywords should be identical whenever possible to the selection the user chooses when
performing the same action interactively. If the selection contains more than one word, the
keyword phrase should replace the blanks with the underline character.

DATA TYPES AND FORMATS
6
Data is used in commands to control their specific actions. Task Language supports five types
of data. These are integers, real numbers, strings, points, and regions.
6.1 INTEGER CONSTANTS
Integer constants are represented by strings of digits (0,1,2,3,4,5,6,7,8,9). Integer constants
may begin with a unary minus operator (-), e.g.,
Example
1
10743
0
-12
6.2 REAL CONSTANTS
Real constants are represented in Task Language by zero or more digits that are followed by a
single decimal point (.) and another string of digits. Scientific notation is also supported.
Below are some examples of real constants.
Example
23.5
0.0
-1056.12345
-0.14322
.234
1.23E-9
During parsing, all real constants are stored in double precision format. However, the Agent
can handle both single and double precision real constants as command parameters at run time.
6.3 STRING CONSTANTS
String constants are represented by strings of printable characters (not control codes,
ALT-characters, or escape) that are enclosed in quotes as follows:
Example
"This is a string"
"a"
"COST = $23.00"
"Where? Here..."
The only character that cannot be in a string constant is the quote, because the quote would be
mistaken for a string terminator. However, a quote can be represented in a string constant by
repeating the quote character. The CHR function may also be used. The commands
OUTPUT "This is a quote "" in a string."
and
OUTPUT "This is a quote " + CHR(22) + " in a string."
would both display the following in a conversational window.
This is a quote " in a string.
6.4 POINT CONSTANTS
Points are used to identify locations on the computer screen. Point constants
have the format (xy), where
x and y are integers or 0 and are in unit: of screen coordinate: The exact
metric or granularity may be
context dependent; however, it is not row) column. See the SCREEN cotnmaad in
Appendix A for a
discussion of physical and logical screen coordinates.
Example
(o,o)
(639,349)
(50,S0)
(12,3)
The EDITBOX and PUSHBUTTON Task Language commands have point data type parameters.
6.5 REGION CONSTANTS
Regions are used to identify rectangular areas on the computer screen. Region constants have the format
((x,y),w,h), where x, y, w, and h are all positive integers or 0 and are in units of screen coordinates. (x,y) is
a point constant and represents the upper-left corner point of the region while w and h represent the
width and height of the region. Below are two examples of region constants.
Example
((0,0), 640, 350)
((20,20), 50, 100)

The DEFINEWINDOW command, which defines a user conversational window, has a region type parameter.
Example
DEFINEWINDOW MYWINDOW ((10,50), 200, 250) POPUP "Test Window"
6.6 AGENT INTERNAL DATA TYPES
Certain commands cause the Agent to store a value into a variable (either system or user) that is not one
of the standard data types. This data is needed by the Agent to monitor some aspect of the task.
Variables containing non-standard data types should not be modified by the Task Language script, and
the data in them cannot be displayed with Task Language commands. To illustrate, the PUSHBUTTON
command causes a button id to be stored in a user variable. Using that variable later as a parameter in
the TESTBUTTON function call will return the state of that button, i.e. 1 if pushed, else 0.

Example
TASK 'illustrates the PUSHBUTTON command and internal data types
DEFINEWINDOW MYWINDOW ((10,50), 200, 250) POPUP "Task Window"
OPENWINDOW MYWINDOW
'The actions start here
LABEL DO_WORK
< action commands >
'See if the user is ready to quit
CLEARWINDOW MYWINDOW 'clear the window and home the cursor
OUTPUT "Do you want to stop now?"
'Display the pushbuttons
PUSHBUTTON MYWINDOW go# "Continue" AT (5, 25)
PUSHBUTTON MYWINDOW stop# "Quit" AT (50, 25)
'go# and stop# contain the button ids for the pushbuttons. Button ids
'are an internal agent data type
WAIT 'for user to push a button
IF TESTBUTTON( go# ) <> 0
GOTO DO_WORK
ENDIF
CLOSEWINDOW MYWINDOW
ENDTASK
Further explanations of internal data types are found in the descriptions of
the commands using them.

VARIABLES
7
Variables are used to store data which can then be accessed by a task at a later time. Variable identifiers
are identical to keyword identifiers with the exception that the last character is a pound sign, "#".
Variables assume the type of the data stored into them.
7.1 USER VARIABLES
User variables are defined by the user to hold data needed to execute the task
correctly. Data is stored to
a variable by an assignment statement.
Example
text# = "hello world!"
The user variable text# now has the value "hello world!" and is of type string. If, later in the task,
text# is assigned a numeric value, its type will change accordingly. Since variable types can change, they
are not initialized or typed at the start of a task. Accessing data in a variable before a value has been
assigned to it results in a run-time error.
Certain commands, e.g. INPUT and MESSAGE, take a user variable as a parameter. Information obtained
Information obtained
as a result of the execution of these commands is stored in the variable and
is then available to the task.

EXPRESSIONS
8
The Task Language supports arithmetic, string, and logical expressions. A command which specifies a user
supplied literal as a parameter should also accept an expression as that parameter.
SELECT DOCUMENT "August Orders"
and
month# = "August"
SELECT DOCUMENT month# + " Orders"
will result in the same action at task execution time.
Variables may be used as terms in expressions. Since the type of the data in a variable cannot be
determined at compile time, the validity of the data in expressions will be checked by the Agent at run-time.
8.1 ARITHMETIC AND STRING EXPRESSIONS
The following are valid terms in an expression:
- a data constant
- a variable
- an expression enclosed in parentheses
Operations involving both real and integer terms are supported; the result will be a real. Although both
single and double precision reals are supported, the result of any expression involving reals will be double
precision.
Arithmetic Operators
+    addition
-    subtraction
*    multiplication
/    division

String Operator
+    concatenation

Design Philosophy for API Applications
Designing an application that uses the API facilities is no more difficult than
developing any other application with comparable features. By keeping the
design philosophy in mind, it should, in fact, be easier because of the tools that
are provided. There are four basic things you must do when designing an API
application:
- Define a set of commands to describe the functions a user performs in your
application. This is called the Task Language.
- Separate the interpretation of user actions from the actual execution of the
commands derived from those actions.
- Structure your application so that it has the following categories of support:
1. playback of tasks;
2. the recording of tasks;
3. interrogation, that is, for help, Agent, or CBT requests;
4. monitoring of user actions and commands for computer based training;
5. error handling within tasks.
- Incorporate function calls into your code for interfacing with the API.
Chapter Organization
The chapter is divided into the following sections:
- The API Facilities
- The API Modes
- Messages to Your Application
- The Task Language
- The API Architecture
- Message Handling in API Applications
- How Your Application Responds to the Different Modes
- API Function Summary
- API Component Summary

8.2 POINTS AND REGIONS
Operations on point and region constants are not supported. However, any integer element of a point or
region constant may be replaced by a numeric expression, e.g.,
(10, 2*a# + 1)
((a#, b#), c#, d#)
8.3 RELATIONAL EXPRESSIONS
Two valid expressions of the same type may be joined by a relational operator to form a relational
expression. Relational expressions may be used as the conditional in IF and WHILE statements.
Relational Operators
=    equal
<    less than
>    greater than
<=   less than or equal
>=   greater than or equal
<>   not equal
Strings are compared character by character using an appropriate collating sequence as the comparison
basis. The comparison terminates with the first instance of non-identical characters or the end of one of
the strings. If the comparison reaches a terminating character, the relation is "equal to" if both strings
are the same length; else the longer string is considered greater than the shorter. The comparison is case
sensitive. If compared, "hello" will be greater than "HELLO". The result of a relational expression is not
presently a recognized data type.
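For illustration only, a minimal C sketch of the comparison rule just described, assuming a plain ASCII collating sequence (the engine's actual collating sequence may differ):

    /* Returns <0, 0, or >0 for "less than", "equal to", "greater than":
       compare character by character, stop at the first mismatch or at the
       end of either string; if one string is a prefix of the other, the
       longer string is the greater. Case sensitive. */
    int tl_strcmp(const char *a, const char *b)
    {
        while (*a && *b && *a == *b) {
            a++;
            b++;
        }
        return (unsigned char)*a - (unsigned char)*b;
    }

Under ASCII, tl_strcmp("hello", "HELLO") is positive, matching the "hello" greater than "HELLO" example above.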
8.4 LOGICAL EXPRESSIONS
Relational expressions may be joined by AND or OR, or prefixed by NOT to form logical expressions.
Logical expressions may also be used as conditionals.

FUNCTIONS
Task Language functions work very much like functions found in conventional programming languages.
They are used to provide information to the Agent Task and, as such, return a value which may be
assigned to a variable. The type of the function is the data type of the value it returns. If the function
type is appropriate, the function itself may be used as a parameter for a Task Language command. If the
type is numeric or string, the function may be used as a term in an expression.
A function may have relaxed type restrictions on some parameters (e.g. real or integer, numeric or
string). The return value may be similarly relaxed.
Optional parameters are permitted but should follow any required parameters.
9.1 BASIC SYNTAX
A function has the format:
<function name>( [param1 [, param2] . . . ] )
where <function name> is a keyword identifier. Note that function parameters, unlike command
parameters, are positional and separated by commas. A function parameter may be an expression or
another function.
9.2 CLASS INDEPENDENT FUNCTIONS
Class Independent functions can be used across applications. When the function type is appropriate they
can be used in both Class Independent and Class Dependent commands. Most Class Independent functions
are used for data manipulation or conversion, although functions are provided to interrogate elements in
conversational windows.
9.2.1 Data Manipulation Functions
Table 1.2 contains a synopsis of the data manipulation functions implemented for NewWave.

CAUTION
The functionality of the preceding functions will be provided although the names and parameter
structure may change.
9.2.2 Interrogation Functions
A task may need access to data within an application. For example, the execution flow of a task may
depend on the value of a cell in a spreadsheet. The Clipboard interrogation functions make the data on
the Clipboard available to the task. The same data is available to the application via the CUT, COPY, and
PASTE commands.
Clipboard Functions
GetClipboardData() returns a string containing the data on the Clipboard. If the Clipboard is
empty, it returns the null string. GetClipboardData has no parameters.
PutClipboardData() puts the string referenced by the string parameter onto the Clipboard. It
returns a non-zero value if the operation succeeded, zero if it failed.
These definitions are VERY, VERY preliminary.
9.3 CLASS DEPENDENT FUNCTIONS
Class Dependent functions are used for two purposes:
1. To provide information or data of a type that is very application specific.
2. To interrogate an object to receive information on its state, e.g. request the contents of an
element in a dialog box.
Class Dependent functions may be used in the Class Dependent commands of that particular application.
< example should go here . . . >

TASK LANGUAGE INTERNALS
APPENDIX C
The creation and performance of an Agent Task is a three stage process:
1. The Ascii script is entered using the Task Editor. Alternatively, the
script is created as the
Agent remembers user actions while in learn mode.
2. The script is translated into a binary form which is readable by the Agent
Interpretive Engine.
3. The Engine interprets the binary script and dispatches the instructions to
the appropriate
target.
This section discusses the second stage, translation of the Ascii script to a
form readable by the Engine.
?.1 COMPILATION PROCESS
An Ascii script will be compiled (i.e. translated into binary format) from
within an open Task Editor
object upon demand of the user. As a default, a modified Task Editor object
may be compiled when it is
closed. Compilation is not permitted during the Learn process. A script must
be free from syntax error
before an executable binary file is created.
The full compilation process was selected for runtime speed and space
efficiency. Incremental or
automatic compilation was rejected in favor of full compilation on demand
since programmers tend not to
create source files linearly. It is expected that users will frequently build
new script files from pieces of
existing ones.
Compilation is a two stage process.
1. Pass 1. The first pass reads the Ascii script and generates binary P-code
records. For short
scripts, these are kept in a memory buffer. Large scripts will be written to a
temporary file. If
syntax errors occur, the compilation is terminated at the end of pass 1.
2. Pass 2. During the second pass the object file is created. The header
record is written. The
P-code records are read from the temporary file (or buffer) and instruction
records referencing
labels are fixed up. These include jump and procedure call records. The
records are written to
the object file. The P-code records are followed by data information records.
The data
information records are used for debugging and will not be used unless the
task is executing in
debug mode.

?.2 OBJECT FILE FORMAT
Successful compilation of a task creates a binary object file. An object file consists of three main parts: a
fixed length header record, the binary P-code records which will be executed by the Agent interpretive
engine, and assorted data tables. The data tables are useful for debugging but are not used in the actual
task execution. The object file format is shown in Figure 1.
Figure 1. Object File Format
HEADER RECORD
CODE SECTION
DATA TABLES
?.2.1 Header Record
The header record is a fixed length of 128 bytes. It contains information on the locations and sizes of the
rest of the file. Its contents are described in Table 1.

Table 1. Header Record Information

Offset  Size       Description
0H      8 bytes    Ascii string containing version of compiler
8H      2 bytes    Integer containing page size of code
AH      2 bytes    Integer containing number of pages of code
CH      2 bytes    Integer containing number of task variables
EH      2 bytes    Integer containing total size of array tables
10H     4 bytes    Long integer containing byte offset of line number/code table
14H     2 bytes    Integer containing length of line number/code table
16H     4 bytes    Long integer containing byte offset of variable names info
1AH     2 bytes    Integer containing length of variable names info
1CH     100 bytes  Reserved for future use
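As an illustration only, a C declaration matching the layout in Table 1, assuming 2-byte ints, 4-byte longs, and byte packing as in the 16-bit Windows environment (the field and structure names are invented here, not taken from the spec):

    typedef struct {
        char version[8];        /* 00H: Ascii compiler version string          */
        int  pagesize;          /* 08H: page size of code                      */
        int  numpages;          /* 0AH: number of pages of code                */
        int  numvars;           /* 0CH: number of task variables               */
        int  arraysize;         /* 0EH: total size of array tables             */
        long linetable_offset;  /* 10H: byte offset of line number/code table  */
        int  linetable_length;  /* 14H: length of line number/code table       */
        long nametable_offset;  /* 16H: byte offset of variable names info     */
        int  nametable_length;  /* 1AH: length of variable names info          */
        char reserved[100];     /* 1CH: reserved for future use                */
    } HEADERRECORD;             /* 128 bytes total                             */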
?.2.2 Code Section
The code section of the object file consists of variable length, binary P-Code records. These are executed
at run time by the Agent. A more detailed description of P-Codes will be found in the next section.
Pointers to locations in the code are maintained as Addresses. Addresses consist of a page number and an
offset into that page, thus identifying the start of an instruction. The Agent will request a page of code
at a time. Page size is tentatively set at 512 bytes. P-Code instructions will not cross page boundaries.
?.2.3 Data Tables
The code section is followed by data tables. These are not needed in the actual execution of the task.
They are included as debugging aids. A brief description follows.
1. Line Number/Address Pairs map the line number in the Ascii source file to the address in the
object file of the P-Code instruction which corresponds to the beginning of the line. This will
allow breakpoints to be set symbolically and execution monitoring.
2. User Names. The User Names table contains information which maps the Ascii name in the
source file into the runtime data structures. There are two types of user names, variable and
label. The latter includes procedure and function names. During debugging, this table will be
searched linearly for a match with the name. In this situation, I don't think search speed is
critical, and I don't anticipate the table being really big.
The formats of the records are shown in the following figures.

Figure 2. User Variables Record Format
length (1 byte)
type = 0 (1 byte)
runtime index (2 bytes)
Ascii Name (variable length)

Figure 3. User Labels Record Format
length (1 byte)
type = 1 (1 byte)
address of code (4 bytes)
Ascii Name (variable length)

?.3 COMPILE TIME ENVIRONMENT
The Task Editor Compiler uses the YACC Parser Generator. Therefore, its structure will be somewhat
determined by YACC. This section is still somewhat tentative. There are a number of details to be
worked out. I think the prototype developed by Tom Watson will serve as a good skeleton, and the
algorithms and data structures are based on that. Briefly, YACC produces a parse routine which is named
yyparse (cute, huh?) and a bunch of tables which I haven't completely deciphered yet. Summarizing
the interface of yyparse with the outside world during the parsing, and ignoring errors:
1. yyparse requests a token by calling the scanner routine, which must be called yylex.
2. yylex returns either an integer which is a token type, or, if the character does not map into
any recognized token type, it returns the Ascii character itself. Token types are declared in the
file which is input to YACC.
3. If the semantic processing of a token requires more information, a value is assigned to the
integer yylval which yyparse takes to be the value of that token. What it actually represents
is token dependent and under the control of the writer of the semantic routines. It frequently
is an index to a table of values or structures.
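A heavily simplified sketch of that interface follows; it is not the actual compiler source, and the token names, buffer handling, and use of the token length as yylval are all placeholders invented for illustration:

    #include <ctype.h>

    /* In the real compiler, token types are declared in the YACC input file. */
    enum { T_IDENTIFIER = 258, T_INTEGER = 259 };

    static const char *src;   /* current position in the Ascii script          */
    int yylval;               /* value of the current token (e.g. a table index) */

    void yylex_init(const char *text) { src = text; }

    int yylex(void)
    {
        while (*src == ' ' || *src == '\t') src++;            /* skip blanks   */
        if (isdigit((unsigned char)*src)) {                    /* integer const */
            yylval = 0;
            while (isdigit((unsigned char)*src))
                yylval = yylval * 10 + (*src++ - '0');
            return T_INTEGER;
        }
        if (isalpha((unsigned char)*src)) {                    /* identifier    */
            const char *start = src;
            while (isalnum((unsigned char)*src) || *src == '_' || *src == '#')
                src++;
            yylval = (int)(src - start);   /* stand-in for an Identifier Buffer index */
            return T_IDENTIFIER;
        }
        return *src ? *src++ : 0;          /* otherwise return the Ascii character */
    }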
?.3.1 Data Files
The compiler requires several data files to set its environment.
7.3.1.1 KEYWORD FILE. The keyword file is an Ascii file containing a list of all the keywords
recognized by the compiler. There is a Domain Independent keyword file, and it is expected that most
application domains will have a keyword file as well. Within the file, keywords are separated by CRLF.
?.3.1.2 DICTIONARY FILE. The Task Editor dictionary is contained in the file AGTSK000.DAD
(directory to be determined, currently \OMF\agent). This is an Ascii file with records separated by
CRLF. The records and fields are variable length and blank terminated. Blanks within fields must be
enclosed within quoted strings. The type of the record is contained in the first byte. So far, only one type
has been defined. When applications are installed in a NewWave system, provision must be made to add
the necessary records to this file. The format of the record used by the Task Editor to access domain
dependent information is shown in Table 2.

Table 2. AGTSK000.DAD Domain Information Record

Field  Description
1      Record type - defined as '0'
2      Domain keyword - used in FOCUS, SELECT
3      Domain prefix for domain dependent commands
4      Fully qualified filename of .EXE file of domain dynamic library
5      Name of dynamic library entry procedure
6      Object classname. This is enclosed in double quotes and must be as it
       appears on the OMF Property List.
?.3.2 Scanner Data Structures
This section defines the major data structures used by yylex.
?.3.2.1 STRINGTABLE. The string table is a dynamically allocated buffer which holds the Ascii of all
identifiers and string literals. Names are fetched via an index into this buffer. Once an identifier is
inserted, it is not removed. Keywords are included. The entries are contiguous, null-terminated strings.
?.3.2.2 IDENTIFIER BUFFER. The Identifier Buffer is an array of type IDENTIFIER. It holds
information on all identifiers encountered in compilation. This is the Spellings buffer of the prototype,
expanded. System defined identifiers, e.g. keywords, are entered into the buffer at initialization. I
haven't decided whether it should be static or dynamic, probably the latter.
typedef struct {
    int type;    /* of identifier. Currently defined types are:
                    keyword, class symbol, class prefix, user symbol,
                    user variable, label */
    int name;    /* index into the string table */
    int value;   /* additional information depending on type,
                    possibly an index into another table */
    int link;    /* to next entry with same hash value */
} IDENTIFIER;
Each keyword has a unique type. Other types include user variable, class symbol, class prefix, etc. (This
structure may expand.)
?.3.2.3 HASH BUFFER. The hash buffer is a fixed size integer array. If an element is non-zero, it is
an index into the Spellings Buffer of the first identifier which hashed to that value.
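A minimal C sketch of the lookup this scheme implies; the hash function, buffer sizes, and the convention that identifier index 0 means "none" are assumptions made for illustration, not part of the spec:

    #include <string.h>

    typedef struct {      /* reduced form of IDENTIFIER from ?.3.2.2 */
        int type;
        int name;         /* index into the string table             */
        int value;
        int link;         /* next entry with the same hash, 0 = none */
    } IDENT;

    #define HASHSIZE 211                 /* size is an assumption            */

    static IDENT idbuf[1000];            /* Identifier (Spellings) Buffer    */
    static char  strtab[8000];           /* contiguous null-terminated names */
    static int   hashbuf[HASHSIZE];      /* 0 = empty slot                   */

    static unsigned hash(const char *s)  /* hash function is an assumption   */
    {
        unsigned h = 0;
        while (*s) h = h * 31 + (unsigned char)*s++;
        return h % HASHSIZE;
    }

    /* Returns the identifier index, or 0 if the name is not present. */
    int lookup(const char *name)
    {
        int i = hashbuf[hash(name)];
        while (i != 0 && strcmp(&strtab[idbuf[i].name], name) != 0)
            i = idbuf[i].link;
        return i;
    }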

?.3.2.4 CLASS INFORMATION. The class information array holds information about the domains
which was found in AGTSK000.DAD. The structure of an element is:
typedef struct {
    HANDLE hLib;      /* to the domain dynamic library             */
    int value;        /* Identifier Buffer index of class symbol   */
    int cmndprefix;   /* Identifier Buffer index of command prefix */
    int libname;      /* string table index of library .EXE file   */
    int libproc;      /* string table index of library procedure   */
    int classname;    /* string table index of classname string    */
} CLASS;
?.3.2.5 ADDRESS. References to locations of P-Code instructions are kept in data type ADDRESS.
typedef struct {
    int page;
    int offset;
} ADDRESS;
?.3.2.6 ADDITIONAL TABLES. Other tables include arrays for numeric and string constants. Their
structure and use haven't been decided.
?.3.3 Labels and Flow Control
All labels have an entry in the Label Table. The Label Table is an array of type ADDRESS. Labels can be
declared (e.g. procedures, user labels) or implied as in an IF statement. The first time a label is
encountered, it receives an entry in the Label Table. If the label has an associated identifier, the Label
Table index is assigned to the value field and the type becomes label (except when it is a keyword. . .). When
the address is known, it is assigned to the Label Table entry.
Frequently, a P-Code instruction referencing a label whose address is not yet known is generated. When
this happens, the Label Table index is put into the address field, and the length word of the P-Code
instruction is made negative. During Pass 2 of the compiler, this field will be fixed up with the correct
address.
?.3.3.1 IF STATEMENT. IF statements are handled by the If Stack. The If Stack is an integer array of
indices into the Label Table. I expect to choose some reasonable level of nesting and give an error if it is
exceeded. When an IF statement is encountered, a Label Table entry is created and its index pushed on
the stack. The necessary JUMP P-Code can be generated. If an ENDIF is found, the address is assigned to
the top entry and it is popped. If an ELSE pops up, the address is assigned to the top IF and it is popped.
A new entry is created and its index pushed on the stack. The pushed value is negative to indicate ELSE.
When the ENDIF comes along, the address is assigned and the entry popped.
Other flow control statements will be handled similarly.
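As a sketch only, the If Stack bookkeeping described above might be coded as follows; the nesting limit, routine names, and the omission of the actual P-Code emission and Pass 2 fixup are assumptions made for brevity:

    #define MAXNEST 32                 /* nesting limit is an assumption       */

    static int ifstack[MAXNEST];       /* indices into the Label Table         */
    static int iftop = -1;
    static int labels_used;

    static int new_label(void) { return ++labels_used; }  /* new Label Table entry */

    void on_if(void)
    {
        /* create a Label Table entry for the forward jump and push its index;
           the JUMP_IF P-Code referencing it is emitted with a negative length
           word and fixed up in Pass 2 once the address is known               */
        ifstack[++iftop] = new_label();
    }

    void on_else(void)
    {
        /* the current address resolves the pending IF label; replace the top
           entry with a new one, negated to record that an ELSE was seen       */
        ifstack[iftop] = -new_label();
    }

    void on_endif(void)
    {
        /* the current address resolves whichever label is on top; pop it      */
        iftop--;
    }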
?.3.3.2 IF AND WHILE.

?.3.4 Major Procedures
?.3.5 Scanner
?.3.6 Parser
NOTE
We anticipate using the public domain version of YACC. We have not
really checked out the limits of this parser generator.

?.4 RUN-TIME ENVIRONMENT
The Agent Interpretive Engine (run-time environment) is a simple stack machine. The basic components
are:
- Stack. The Stack is a dynamically allocated buffer (probably from Local Heap). It is used
mostly for expression evaluation, but it will also hold the return address for procedure calls. All
data items except strings will be put on the stack by value. Strings will be pushed by reference.
- Static Data Structures. These tables are set up at task initialization. They include user
variables, system variables, array structures, and interrupt state tables.
- Stringtable. This dynamically allocated buffer holds the current values of string variables.
- IP. The IP (instruction pointer) contains the address of the next sequential P-Code instruction
to be executed.
These are described in greater detail in the section on data structures. When task execution commences,
the engine does the following:
1. Send message to the task object to open the object file and send
information from the header record.
2. On the basis of the header information, set up the static data structures,
allocate the stack and
stringtable, etc.
3. Set top of stack to 0.
4. Send message to the task object to send the first page of code
instructions.
5. Set the IP to 0 and call FETCH to get the first instruction.
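Purely as an illustration of the start-up sequence above, condensed into C with placeholder routine names (these are not the real engine entry points, and the messaging to the task object is stubbed out):

    typedef struct { int page; int offset; } ADDRESS;

    static ADDRESS ip;                 /* instruction pointer                   */
    static int     tos;                /* top of stack                          */

    static void request_header(void)        { /* 1: ask task object for the header record  */ }
    static void setup_static_tables(void)   { /* 2: variables, stack, stringtable, etc.     */ }
    static void request_code_page(int page) { (void)page; /* 4: ask for a page of code      */ }
    static void fetch_and_execute(void)     { /* 5: FETCH the next P-Code and execute it    */ }

    void engine_start(void)
    {
        request_header();              /* step 1 */
        setup_static_tables();         /* step 2 */
        tos = 0;                       /* step 3: set top of stack to 0         */
        request_code_page(0);          /* step 4: first page of code            */
        ip.page = 0;
        ip.offset = 0;                 /* step 5: IP = 0, then fetch the first instruction */
        fetch_and_execute();
    }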
?.4.1 Run-time Data Structures
?.4.1.1 STACKTOKEN. Data items on the stack are in the form of the structure
STACKTOKEN. All
types except strings are pushed on by value.
typedef struct {
    int type;
    union {
        int ival;
        float fval;
        long lval;
        double dval;
        REGION rval;
        POINT pval;
        int stringindex;    /* index into string buffer */
    } value;
} STACKTOKEN;

?.4.1.2 STRINGTABLE. The Stringtable is a dynamically allocated buffer which holds the values of the
string variables and string values currently on the stack. Each entry is of the form:
typedef struct {
    int length;
    char string[];
} STRINGTABLEENTRY;
The length is the length in bytes of the entry and includes the length word itself. The string is null
terminated. If length is > 0, the entry is allocated; if < 0, the entry is free.
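A minimal sketch of how an allocator could walk the table using the sign convention above; the buffer size, routine name, and first-fit policy are assumptions for illustration only:

    #include <string.h>

    static char stringtable_buf[4096];  /* the Stringtable storage             */
    static int  stringtable_bytes;      /* bytes currently laid out in entries */

    /* Returns the offset of the first free entry at least 'need' bytes long
       (length word + string + null), or -1 if no free entry is large enough. */
    int find_free_entry(int need)
    {
        int off = 0;
        while (off < stringtable_bytes) {
            int len;
            memcpy(&len, &stringtable_buf[off], sizeof len);  /* entry length word */
            {
                int size = len < 0 ? -len : len;   /* physical size of the entry   */
                if (len < 0 && size >= need)
                    return off;                    /* reuse this free entry        */
                off += size;
            }
        }
        return -1;                                 /* caller must grow the buffer  */
    }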
?.4.1.3 VARIABLE TABLE. The Variable Table is an array of type VARITEM which holds information
and values of user and system variables. Its structure is quite similar to STACKTOKEN. The size of the
Variable Table is known after compilation.
typedef struct {
    int type;               /* of the data item                    */
    union {
        int ival;           /* integer                             */
        long lval;          /* long integer                        */
        float fval;         /* single precision real               */
        double dval;        /* double precision real               */
        REGION rval;        /* region variable                     */
        POINT pval;         /* point variable                      */
        struct {
            int type;       /* of the array                        */
            int size;       /* number of items in the array        */
            int index;      /* byte offset of start in array table */
        } array;
        int stringindex;    /* of the entry in StringTable         */
    };
} VARITEM;
?.4.1.4 ADDRESS.
References to locations of P-Code instructions are kept in data type
ADDRESS.
typedef struct {
    int page;
    int offset;
} ADDRESS;
?.4.1.5 ARRAY TABLE. The size of the array table is known at compile time. It is a byte array.
?.4.1.6 INTERRUPT TABLE. The interrupt table holds information on the state of the interrupt
conditions, addresses of interrupt procedures, etc. The structure will be defined later.

?.4.1.7 TO BE DEFINED. Among the things to be defined are breakpoint and other debugging tables. I
have also not included various flags and pointers (TOS, IP, etc.).
?.4.2 Run-time Procedures
?.5 DEBUGGING AIDS

?.6 P-CODE INSTRUCTION SET
The Agent Interpretive Engine performs a task by fetching and executing P-Code instructions. The
generic P-Code record format is shown in Figure 4.
Figure 4. Binary P-Code Record Format
length word, 2 bytes
P-Code Id, 2 bytes
optional parameters, variable length
Field Description
Length      contains the number of bytes in the record including the length word. A
            record with no parameters will have a length of 4.
P-Code Id   is the numeric opcode of the instruction.
Parameters  are any parameters which the instruction requires. The type and length are
            instruction-dependent. Parameter fields should be defined as fixed length.
            Strings are null-terminated.
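For illustration, the generic record described above could be declared in C roughly as follows (2-byte ints, byte packing, and the structure name are assumptions; the flexible array member requires C99):

    typedef struct {
        int  length;    /* total bytes in the record, including this word    */
        int  pcode_id;  /* numeric opcode                                    */
        char params[];  /* optional, instruction-dependent parameters        */
    } PCODEREC;         /* a record with no parameters has length == 4       */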
The currently defined instructions are summarized in the following table. It will be updated as required.
Note that A refers to the item on top of stack and B refers to the item immediately below A. The IP
(instruction pointer) is a data item of type ADDRESS which contains the page number and offset of the
instruction.

Table 3. Summary of P-Code Instructions

Mnemonic   Id #  Action
PUSH       *     Puts the contents of a data item on top of stack as A.
POP        *     Stores A into a data item and removes it from the stack.
PUSHI      *     Puts a value on top of stack as A.
PUSH_ARR   *     Puts the contents of an array element on top of stack as A.
COMPARE    *     Compares A and B, pops A and B, and sets A to reflect the result.
EXCHANGE   *     Exchanges A and B.
DUP        *     Pushes a copy of A on top of stack.
JUMP       *     Moves the IP to a specified value.
JUMP_IF    *     JUMP if the value of A meets a specified condition. A is popped from the stack.
CALL       *     Pushes the contents of the IP on the stack and replaces it with the address of a procedure.
RETURN     *     Replaces the contents of the IP with the value of A. A is popped.
SET_INT    *     Sets or clears an interrupt condition, optionally saving the address of a bound procedure.
ADD        *     Pops A and B, puts A + B on top of stack.
SUB        *     Pops A and B, puts A - B on top of stack.
MUL        *     Pops A and B, puts A * B on top of stack.
DIV        *     Pops A and B, puts A / B on top of stack.
COMMAND    *     Sends the parameters to the object with the focus for execution.
FOCUS      *     Gives a specified object the focus.
FUNCTION   *     Executes a system or Domain Independent function. The result is left in A.
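As an illustration of the stack semantics in Table 3 only, a highly simplified dispatch sketch for a few instructions; the opcode values, integer-only arithmetic, fixed stack size, and parameter decoding are assumptions made here for brevity:

    enum { OP_PUSHI = 1, OP_ADD, OP_SUB, OP_JUMP, OP_RETURN };  /* assumed ids */

    typedef struct { int page; int offset; } ADDRESS;

    static int stack[256];
    static int top = -1;                       /* index of A                   */
    static ADDRESS ip;

    static void push(int v) { stack[++top] = v; }
    static int  pop(void)   { return stack[top--]; }

    void execute(int opcode, const int *params)   /* params from the record    */
    {
        int a, b;
        switch (opcode) {
        case OP_PUSHI:  push(params[0]); break;              /* value onto stack */
        case OP_ADD:    a = pop(); b = pop(); push(a + b); break;
        case OP_SUB:    a = pop(); b = pop(); push(a - b); break;   /* A - B    */
        case OP_JUMP:   ip.page = params[0]; ip.offset = params[1]; break;
        case OP_RETURN: ip.offset = pop(); ip.page = pop(); break;  /* simplified */
        default: break;                                      /* other opcodes... */
        }
    }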

CALL
Transfers control to a user defined procedure
Syntax
CALL Address
Fields
Length       8 bytes
P-Code Id
Parameters
Address      of procedure. Type is ADDRESS.
Algorithm

COMMAND
Sends a command message to the object with the Focus
Syntax
COMMAND Domain Id, Command Length,
Command Parameters
Fields
Length variable
P-Code Id
Parameters
Domain Id        Integer indicating class of object recognizing this command
Command Length Integer containing length of length word, command,
and parameters
Parameters variable length and type, command dependent
Figure 5. Structure of P-Code COMMAND
parameters (optional)
domain P-code
command length
domain id (class)

COMMAND
Algorithm

COMPARE
Compares A and B and sets top of stack accordingly
Syntax
COMPARE
Fields
Length 4 bytes
P-Code Id
Parameters None
Algorithm

DUP
Pushes a copy of A on the stack
Syntax
DUP
Fields
Length       4 bytes
P-Code Id
Parameters   None
Algorithm

EXCHANGE
Exchanges A and B on the stack
Syntax
EXCHANGE
Fields
Length 4 bytes
P-Code Id
Parameters None
Algorithm

FOCUS
Gives a specified object the focus
Syntax
FOCUS Title, Classname
Fields
Length 85 bytes
P-Code Id
Pa ramete rs
Title        of the object. Null terminated string; field length is
             15 characters.
Classname    of the object. Null terminated string; field length is
             65 characters.
Algorithm

JUMP
Move the IP to a specified value
Syntax
JUMP Address
Fields
Length 8 bytes
P-Code Id
Parameters
Address New value of IP. Type is ADDRESS.
Algorithm

JUMP_IF
JUMP if A meets a specified condition
Syntax
JUMP_IF Condition, Address
Fields
Length 10 bytes.
P-Code Id
Parameters
Condition Integer indicating condition A must meet to execute
the JUMP
Address New value of IP. Type is ADDRESS.
Algorithm

POP
Stores the top of stack into a data item.
Syntax
POP Data Item
Fields
Length 6 bytes
P-Code Id
Parameters
Data Item Integer index into the Variable Table
Algorithm

PUSH
Puts the contents of a data item on top of stack
Syntax
PUSH Data Item
Fields
Length 6 bytes
P-Code Id
Parameters
Data Item    Integer index into the Variable Table
Algorithm

PUSHI
Puts a value on top of stack
Syntax
PUSHI Data Item
Fields
Length 14 bytes
P-Code Id
Parameters
Data Item to be pushed. Type is VARITEM.
Algorithm

Table of Contents
Section ?
TASK LANGUAGE INTERNALS
?.1 Compilation Process
?.2 Object File Format
?.2.1 Header Record
?.2.2 Code Section
?.2.3 Data Tables
?.3 Compile Time Environment
?.3.1 Data Files
?.3.1.1 Keyword File
?.3.1.2 Dictionary File
?.3.2 Scanner Data Structures
?.3.2.1 StringTable
?.3.2.2 Identifier Buffer
?.3.2.3 Hash Buffer
?.3.2.4 Class Information
?.3.2.5 Address
?.3.2.6 Additional Tables
?.3.3 Labels and Flow Control
?.3.3.1 IF Statement
?.3.3.2 IF and WHILE
?.3.4 Major Procedures
?.3.5 Scanner
?.3.6 Parser
?.4 Run-time Environment
?.4.1 Run-time Data Structures
?.4.1.1 StackToken
?.4.1.2 Stringtable
?.4.1.3 Variable Table
?.4.1.4 Address
?.4.1.5 Array Table
?.4.1.6 Interrupt Table
?.4.1.7 To Be Defined
?.4.2 Run-time Procedures
?.5 Debugging Aids
?.6 P-Code Instruction Set
CALL
COMMAND
COMPARE
DUP
EXCHANGE
FOCUS
JUMP
JUMP_IF
POP
PUSH
PUSHI

API Message Summary Tables
The API Messages described in this chapter are defined in the NWAPI.H
file. They can be summarized as shown in the following table:
Table 5-1. API Messages

Message                  Description
API_INTERROGATE_MSG      To allow the Agent to send for read-only
                         information from the application so that the
                         information returned is available within the
                         Agent Task.
API_PLAYBACK_MSG         To POST the external form of the command to
                         the application so that it can be translated to
                         the internal form and executed by the
                         application.
API_SET_MODE_FLAGS_MSG   To SEND a message to change the mode bits in
                         the APIModeFlags so that the flow of the
                         control inside the application will be changed.
APPENDIX D
CHAPTER 5 - API Messages

API_INTERROGATE_MSG
Purpose - To allow the Agent to send for read-only information from the
application so that the information returned is available within the Agent
Task or used by other components.
The Agent thus SENDs the message API_INTERROGATE_MSG. The
application looks up the necessary information and passes it back to the
Agent. The Agent will then put that information into variables inside the
task or will provide it to HELP. The Interrogate message can have many
types, as defined in the CLASS INDEPENDENT INTERROGATION
FUNCTIONS (refer to Section A-9), e.g., interrogation for HELP, CBT,
or the Agent.
Parameters - wParam is a number that falls within the range coding of the API and
Class Independent Interrogation Functions.
lParam is a global handle to a global buffer, containing information
relevant to the interrogation function being called.
Return Value - The return value is dependent on the Class Independent Interrogation
Function.
API_WHATS_INSERTABLE_AT_FN
Purpose - This message allows a caller to find out if there are any
insertable objects at the specified point. An insertable object is one that
supports the OMF_INSERT message.
Parameters - The LOWORD of lParam is a handle to global memory
allocated with GMEM_LOWER. The memory is (MAXCLASSNAME
+ MAXTITLE + 2) bytes in size.
MAXCLASSNAME (defined in NWOMF.H) is the maximum length of
the PROP_CLASSNAME property. MAXTITLE (also defined in
NWOMF.H) is the maximum length of the PROP_TITLE property. The
two extra bytes are for NULLs.
HIWORD is not used.
The receiver of this message should lock the handle that was passed and
cast the resulting long pointer to an LPPOINT. This point is a screen
coordinate position.

Return Value - If there is an insertable object at the point that was
passed in the receiver's window, it should copy into the shared memory
that object's null terminated PROP_CLASSNAME string immediately
followed by that object's null terminated PROP_TITLE string. The two
strings should be packed into the memory (e.g., "Folder\0June Sales\0"
where "\0" indicates the NULL character).
If there is no insertable object at this point in the receiver's window, it
should copy a NULL string into the memory.
The handle should be unlocked before returning.
The return value from the call should be -1L to indicate success and 0L to
indicate problems. If the receiver does not support insertable children, it
is permitted to return 0L without altering shared memory.
Special Notes - It is possible that the screen coordinates passed may
specify a point that is not currently in the receiver's client area. If the
receiver has a scrollable window, this is not necessarily an error. The
receiver should map screen coordinates to its logical space when
searching for an insertable child.
API_WHO_ARE_YOU_FN
Purpose - To allow an open object to supply its classname and title to the
caller. This need only be handled if the object supports the
OMF_INSERT message.
Parameters - The LOWORD of lParam is a handle to global memory
allocated with GMEM_LOWER. The memory is (MAXCLASSNAME
+ MAXTITLE + 2) bytes in size.
MAXCLASSNAME (defined in NWOMF.H) is the maximum length of
the PROP_CLASSNAME property. MAXTITLE (also defined in
NWOMF.H) is the maximum length of the PROP_TITLE property. The
two extra bytes are for NULLs.
HIWORD is not used.

Return Value - The receiver of this message should lock the handle that
was passed and copy the null terminated PROP_CLASSNAME string
immediately followed by the null terminated PROP_TITLE string. The
two strings should be packed into memory (e.g., "Desktop\0NewWave\0"
where "\0" indicates the NULL character).
The return value from the call should be -1L to indicate success and 0L to
indicate a problem.
API_WHERE_IS_FN
Purpose - To allow the caller to find the position of a child object of the
receiver in screen coordinates. This message need only be supported by
container objects that return -1L in response to the
API_WHATS_INSERTABLE_AT_FN message.
Parameters - The LOWORD of lParam is a handle to global memory
allocated with GMEM_LOWER. The memory is (MAXCLASSNAME
+ MAXTITLE + 2) bytes in size.
MAXCLASSNAME (defined in NWOMF.H) is the maximum length of
the PROP_CLASSNAME property. MAXTITLE (also defined in
NWOMF.H) is the maximum length of the PROP_TITLE property. The
two extra bytes are for NULLs.
HIWORD is not used.
The receiver of this message should lock the handle that was passed and
read the PROP_CLASSNAME string followed by the PROP_TITLE
string. The two strings are packed into memory (e.g.,
"Desktop\0NewWave\0" where "\0" indicates the NULL character).
Return Value - The memory pointer should be cast into a long pointer to
a POINT structure. If the above object does exist as a child of the
receiver, the position of its representation (view or icon) should be
written into shared memory. The value -1 should be written into the
word following the point structure in shared memory. If the object does
not exist, point.x and point.y should be set to zero and the value zero
should be written into the word following the point structure in shared
memory.
The handle should be unlocked before returning.

The return value from the call should be -1L to indicate success and 0L to
indicate a problem.
Special Notes - If the receiver of this message has a scrollable window,
and the child object in question does not happen to be visible at the
moment, this is NOT an error. The return region, in screen coordinates,
should reflect the "correct" position of the child object.
The returned position should be an x,y pair that would have meaning to
the object if it received an OMF_INSERT message with these values in
the x,y fields of the OMFINSERTSTRUCT. For example, the Desktop
responds to this message by returning the exact center of the child
object's icon.
API_RENDER_HELP_FN
During Help processing with the question mark, the point in the
application display coordinates will be passed in the lParam of this
interrogation function. The application will determine the Help ID and
pass this back as the return value for this message.
This value will be used to access the help text by Builder and allows Help
on user interface elements managed by the application.

API_PLAYBACK_MSG
Purpose - To POST the external form of the command to the application so that it
can be translated to the internal form and executed by the application.
Parameters - wParam is ignored (= 0).
lParam has a LOWORD that is a global handle to a global buffer,
containing the external form of the command.
Return Value - The return value is TRUE if the message has been accepted.

API_SET_MODE_FLAGS_MSG
Purpose - This message requests an application to change the mode bits in the
APIModeFlags so that the flow of the control inside the application will
be changed.
Parameters - wParam is either API_SET_MODE_OFF_FLAG or
API_SET_MODE_ON_FLAG.
lParam is a string of bits that you process in your application. If wParam
is in the ON mode, the application should perform a bitwise "OR" with
the existing APIModeFlags. If wParam is OFF, then the application
should "AND" lParam and the existing APIModeFlags.
Return Value - The return value is TRUE if the message has been accepted.
Example
    case API_SET_MODE_FLAGS_MSG:
        if (wParam == API_SET_MODE_ON_FLAG)
            APIModeFlags = APIModeFlags | lParam;
        else
            APIModeFlags = APIModeFlags & lParam;
        break;

APPENDIX E
Extensible Agent Task Language
Barbara Packard
Chuck Whelan
INTRODUCTION
The Agent Task Language is a set of procedural commands which provide users access to the task
automation functionality of NewWave. Scripts can be written which will create, delete, modify, and
otherwise manipulate NewWave objects. The scripts are processed by an interpretive engine, which is part
of the Agent object. In the NewWave environment, each task is a separate object with associated data
files. Tasks will function across and within object classes and be supported by all NewWave applications.
When the user opens a task, he sees the contents of the file containing the Task Language commands,
available for editing and compilation. When he drags a task to the Agent icon, the associated binary
P-Code file is executed. Task Language commands have a verb/object syntax:
<command keyword> [ parameter ] . . .
The parameter of a command may be either a keyword or a literal. Commands are line-oriented, but a
continuation character is available to extend a command across the line boundary. A primary concern in
the language definition was the mapping of the interactive user interface to the Task Language
commands. To make the scripts as readable as possible, we wanted to have the command keywords reflect
user actions. For example, if an action was accomplished interactively by clicking on a menu item such as
CUT, the corresponding Task Language command would contain that menu item as its command keyword
verb. The parameter type is command dependent, but numeric, string and keyword command parameters
can be used.
User Requirements
Agent Tasks will be created and executed by users whose expertise will vary widely. The spectrum will
vary from
- the casual NewWave user who records a few of his actions within an object, saves them as a task
which he executes to repeat his actions, to
- the power user who constructs complicated automated tasks, frequently for other users, which
execute for considerable time without user intervention.
The novice, using Task Language as a macro recorder, quite possibly may never look at the Task Language
form of the task which has been recorded. He will require ease of use and high performance. The power
user will demand a language with at least the power of command languages in existing applications such
as Excel or DBase III Plus.
Our chosen model for Task Language is the power user. The language is appropriate for constructing
large automated tasks which involve several applications. We have provided a conversational window
facility, designed by the task writer and controlled by the Task Language script, which enables the task to
receive user input and display information. Other features include variables, functions, task subroutines,
control statements such as conditionals and loops, and numeric, string, and logical expressions. Note that a
command parameter which is defined as a literal may also be an expression of the same type.

We expect that in time many casual users will move toward the power user model. The language should
be designed to facilitate this. Toward this end, the Agent Task Recorder facility has a built-in Watch
feature. The user can see the Task Language command which was recorded as a result of his action.
Obviously, recorded tasks will not contain the advanced programming features listed above, but the
relationship of the user's interactive actions to the Task Language commands will be apparent from the
syntax of the command. In particular, the syntax of Task Language commands will be meaningful enough
to serve as a learning aid to users who wish to explore the more advanced features of NewWave Agent
Tasks. Here we have another reason for the close mapping of the command keywords to the interactive
user action.
System Requirements
The system requirements, to automate a task which spans applications, have a somewhat different
perspective. A Task Language statement is either a control statement (examples include variable
assignment, procedure call, loops) or an action command (such as CLOSE, CUT, PASTE) to a particular
object. The former is independent of the current active object and can be executed by the Agent
interpretive engine, but the latter will be sent to an object of a particular application class and executed
by it. Commands are not identical across applications; many are very class specific. For example, most
applications support some form of SELECT. But the object of the selection, which translates to the
parameter of the Task Language command, will vary widely depending on the object class. In a document
one would SELECT a range of text, in a spreadsheet a cell or range of cells. However, in the Office
Window the selection is an icon, i.e. another object with a class and title. The NewWave open
architecture specification mandates the dynamic installation and removal of application classes and leads
to a unique configuration on each system. Task Language commands for a NewWave application written
by an ISV must also be supported. It is impossible for the Agent engine to keep track of the command set
and syntax supported by each application currently active on the system. The Agent engine should not
interpret the contents of a command it sends to an application in playback. It is equally impractical to
have each application class parse its commands at execution time, returning similar syntax error messages,
or handling variables or expressions as parameters. The solution is a Task Language parser module and
recorder template for each application class. The parser converts ASCII Task Language commands to the
external command form recognized by the application. The recorder template converts the external
command to ASCII Task Language commands during the task recording. These are installed into the task
automation process when the application is installed into NewWave. As applications are added to and
removed from the system, the set of Task Language commands accepted by the compiler and created by
the recorder is customized accordingly.
Application Developer Requirements
If developers of NewWave applications need to provide parser and recorder modules for task automation,
we must supply tools and guidelines to make their job as simple as possible. We must separate out the
components which are common to all applications and provide code for these in libraries, as well as
providing source templates for typical semantic routines. Since we wish to have the Task Language
commands appear as one programming language to the user, we must provide guidelines and examples of
appropriate syntax for commands which are the same or similar across applications.
THE TASK LANGUAGE COMPILER
Our first design decision was to compile Task Language scripts to a binary P-Code format for execution
rather than interpreting the ASCII commands at runtime. There were several reasons for this:

1. The binary format is more compact, particularly for long tasks.
2. A standardized binary format is more suitable for execution by applications in the Windows
environment.
3. Syntax and other obvious errors can be flagged and fixed at compile time.
4. Non-sequential instructions such as loops and procedure calls can be handled efficiently.
5. Functions, variables, and expressions can be preprocessed and handled in a standard manner.
In the future we will add runtime debugging aids such as source line execution monitoring to minimize the
disadvantage to the developer of separating the development and execution environments.
.~ a result) the Task Language compiler a a two pass compiler. The first pan
follows the general
compiler model of scanner) ~parxr and semantics It receives as input the ASCII
Taak Lan;uage script)
p~arus it, and generates binary P-Code records which are written to a
temporary file. The second pass
fixes up instructions which reference addresses which were unknown when the P-
Code was initially
generated.
Object File Format
Successful compilation of a task creates a binary object file. An object file consists of two main parts: a
fixed length header record and the binary P-code records which will be executed by the Agent
interpretive engine. As a future enhancement, data tables to be used for debugging will be added as a
third section. The object file format is shown in Figure 1.

Figure 1. Object File Format
HEADER RECORD
CODE SECTION
DATA TABLES <to be added>
The header record is a fixed length. It contains the version id of the compiler as well as information such
as the number of variables, conversational windows, and pages of code in the task.
The code section of the object file consists of the variable length, binary P-Code records which are
executed at run time by the Agent engine. Many P-Code instructions are similar to high level assembly
language. Pointers to locations in the code are maintained as Addresses. Addresses consist of a page
number and an offset into that page, thus identifying the start of an instruction. Page size is a fixed
length. P-Code instructions do not cross page boundaries; however, a continuation P-Code is available.
THE P-CODE RECORD. The Agent Interpretive Engine performs a task by fetching and executing the
P-Code instructions. The generic record format is shown in Figure 2.

Figure 2. Binary P-Code Record Format
length word
P-Code Id word
optional parameters, variable length
Field Description
Length      contains the number of bytes in the record including the length word. A
            record with no parameters will have a length of 4.
P-Code Id   is the numeric opcode of the instruction.
Parameters  are any parameters which the instruction requires. The type and length are
            instruction-dependent. Parameters of type string are null-terminated.
THE COMMAND P-CODE. As mentioned earlier, most P-Code instructions result in an action command
sent to a particular application object. Figure 3 illustrates the P-Code format for a command. The
parameters of the P-Code, except for the integer class id word, comprise the external command form
which will be sent to the application.
Figure 3. Structure of P-Code COMMAND
P-Code length word
Command P-Code
class id
command length word
command id
parameter (optional)

Parameters
Class Id          Integer indicating class of object recognizing this command, task dependent
Command Length    Integer containing length of length word, command id word, and parameters
Command Id        Set by application
Parameters        variable length and type, command dependent
At runtime, the Agent engine strips the first three words and sends the remainder, the external command,
to the application. The Agent engine requires the length word; the remainder of the structure is designed
by the application. However, applications are strongly urged to use the format illustrated.
RUN-TIME ENVIRONMENT
The Agent Interpretive Engine is implemented as a simple stack machine. Variable assignments, function and procedure calls, and expression evaluations are all stack operations. When a task starts up, the Agent initializes its data structures using information in the task header record. It then makes a Windows intrinsic call to receive a Windows TIMER message at regular intervals. Each TIMER triggers a P-Code fetch and execution. The Agent relinquishes control between instructions, thus allowing tasks to conform to the same execution guidelines as other NewWave objects. P-Codes are fetched from the current page, which is retained in memory. Pages are procured as needed. If the P-Code is a command, the Agent checks the class id to determine if the instruction class matches the class of the object which currently has focus. If so, it posts an API_PLAYBACK_MSG to the object with the command as a parameter. No more P-Code instructions are executed until it receives an API_RETURN_MSG.
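A schematic C sketch of one TIMER tick follows, reusing the record structures sketched earlier. The helper routines and the PCODE_COMMAND value are hypothetical; only the API_PLAYBACK_MSG / API_RETURN_MSG handshake comes from the description above.

    /* One TIMER tick of the Agent engine (schematic).  All helpers are
     * hypothetical stand-ins for engine internals. */
    extern struct PCodeRecord *fetch_next_pcode(void);      /* from current page */
    extern void execute_stack_op(struct PCodeRecord *pc);   /* stack machine op  */
    extern unsigned short class_of_focus_object(void);
    extern void post_playback_msg(void *external_command);  /* API_PLAYBACK_MSG  */

    #define PCODE_COMMAND 0x0010        /* assumed opcode for a command P-Code */

    static int waiting_for_return = 0;  /* cleared when API_RETURN_MSG arrives */

    void agent_on_timer(void)
    {
        struct PCodeRecord *pc;
        struct CommandPCode *cmd;

        if (waiting_for_return)         /* previous command still executing */
            return;

        pc = fetch_next_pcode();
        if (pc->pcode_id != PCODE_COMMAND) {
            execute_stack_op(pc);       /* assignment, call, expression, jump... */
            return;
        }

        cmd = (struct CommandPCode *)pc;
        /* Dispatch only if the instruction class matches the focus object. */
        if (cmd->class_id == class_of_focus_object()) {
            post_playback_msg(EXTERNAL_COMMAND(cmd));
            waiting_for_return = 1;
        }
    }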
THE TASK LANGUAGE PARSERS
To implement the modularization and customization of Task Language, we designed a system of multiple parsers. The main compiler contains two parsers: the top level, or Class Independent, parser and the Expression parser, which handles functions and expressions of numeric, string and logical type. Each application has a parser module which parses its class dependent Task Language commands. This module also includes semantic routines which convert the parsed command to the external command form. The parser modules are in the form of Windows dynamic libraries and are accessed from the Class Independent parser through the Windows LoadLibrary intrinsic. The application's installation file identifies the library file, the application classname, and the names of its parse routines by adding them to its OMF Property List as a property PROP_AGENTTASKINFO. The Task Language Compiler enumerates all applications with this property. It is then aware of all available classes of Task Language commands. Again, this can be unique to each system configuration. However, it loads only the libraries of those classes requested by the task script.
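The sketch below shows how a class dependent parser module might be bound at compile time. LoadLibraryA and GetProcAddress are the Windows intrinsics referred to above; the library name, routine name, and ParseProc signature are assumptions standing in for what an application's PROP_AGENTTASKINFO entry would supply, and error handling is simplified.

    #include <windows.h>

    /* Assumed signature for a class dependent parse routine. */
    typedef int (CALLBACK *ParseProc)(void *compiler_state);

    /* Bind to a class dependent parser module named in PROP_AGENTTASKINFO,
     * e.g. load_class_parser("DOCPARSE.DLL", "DocParse"). */
    static ParseProc load_class_parser(const char *lib_name, const char *proc_name)
    {
        HINSTANCE hLib = LoadLibraryA(lib_name);   /* ANSI form of LoadLibrary */
        if (hLib == NULL)                          /* simplified error check   */
            return NULL;
        return (ParseProc)GetProcAddress(hLib, proc_name);
    }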
Components
The following sections briefly describe the various components of the parsers. Figure 4 shows a data flow diagram of their interaction.

PARSER ROUTINES. The current parser modules have been created using the YACC
Parser Generator.
YACC was developed at Bell Labs and may be purchased, together with its source
files, at a nominal price
from various software houses. It is also available from public domain sources. There is nothing to preclude a developer from substituting his customized parse routine in place of a YACC generated one.
KEYWORD FILE. The recommended Class Dependent parser model stores its command keywords in a file which is read into a table during the initialization process. The token number of each keyword depends on its position in the file. This permits a certain amount of command localization without reconstructing the parser.
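A small sketch of this position-based numbering, under assumed limits and file format (one keyword per line; the Nth line becomes token BASE_TOKEN + N):

    #include <stdio.h>
    #include <string.h>

    #define MAX_KEYWORDS    256
    #define MAX_KEYWORD_LEN 32
    #define BASE_TOKEN      1000            /* assumed first token number */

    static char keywords[MAX_KEYWORDS][MAX_KEYWORD_LEN];
    static int  keyword_count = 0;

    /* Read one keyword per line; its line position fixes its token number. */
    void load_keyword_file(const char *path)
    {
        FILE *f = fopen(path, "r");
        char line[MAX_KEYWORD_LEN];

        if (f == NULL)
            return;
        while (keyword_count < MAX_KEYWORDS && fgets(line, sizeof line, f)) {
            line[strcspn(line, "\r\n")] = '\0';
            strcpy(keywords[keyword_count++], line);
        }
        fclose(f);
    }

    /* Token number of a keyword, or -1 if it is not in the table. */
    int keyword_token(const char *word)
    {
        int i;
        for (i = 0; i < keyword_count; i++)
            if (strcmp(keywords[i], word) == 0)
                return BASE_TOKEN + i;
        return -1;
    }

Swapping in a localized keyword file changes only the table contents; the token numbers, and therefore the parser, stay the same.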
SCANNER ROUTINE. The scanner was developed in-house and provides the only
access to the Task
Language source file. All parser modules must call it for token input. The scanner returns token types and indices to appropriate tables where the values are stored. If a parser module uses a different token representation, it may modify its scanner to retrieve the value at this point and continue processing.
EXPRESSION PARSER. The Expression Parser is available to the Class Independent and Class Dependent parsers. It is activated by a semantic call during the parse of a command. It processes tokens until it finds one which is not part of the expression. The associated semantic routines generate P-Codes to evaluate the expression and place the result on the engine run-time stack. The Expression Parser then sets the state of the scanner so that the next token request (by a command parser) will return a token type expression which will satisfy the conditions of the parse. Note that there is no requirement that Class Dependent parsers use the Expression Parser.
SEMANTIC ROUTINES. Since the structure of the external command form is known only to the relevant application, the semantic routines must be the responsibility of the application developer. However, we have provided a library of routines to perform functions such as initialization, buffer management, and cleanup. Also, there are routines which handle the semantic processing of expressions when they occur as command parameters. Using this library will greatly simplify the implementation of the semantics. The output of the semantic routines is returned to the compiler in a buffer and then written to the object file.
The FOCUS Command
The compiler directs the source processing to the appropriate parser through the FOCUS command. This command needs additional discussion since it results in both compile time and run time actions. The syntax is

    FOCUS [ON] <classname> "<title string>"
    FOCUS OFF

where <classname> refers to the class of object (for example DOCUMENT or FOLDER) as recognized by the Task Language parsers, and "<title string>" is the title of the specific object referenced. At installation time this classname is added to the OMF PROP_AGENTTASKINFO property of the class. When a task is compiled, the compiler adds the classnames to its list of keywords recognized by the scanner, and through the scanner by the Class Dependent parsers as well.
When a task is executed, the majority of the commands will result in the Agent sending a message to the object which currently has the focus. The parameters of this message comprise a command which will direct the object to change its state. At run time, the FOCUS command tells the Agent which object is the target of subsequent command messages. At compile time it has another role. It controls selection of the class parser which will parse Class Dependent commands and generate the external command. Commands are compiled sequentially in the order received. However, the order in which commands are actually executed at run time will seldom, if ever, be completely sequential. The inclusion of conditional execution (IF, WHILE), jumps (GOTO), procedure execution (DO), or user variables in a task virtually guarantees that there is no way to determine at compile time which object will have the focus at runtime.
The FOCUS command sets a compile time focus. In effect, it determines which Class Dependent parser will parse the commands following it. The command

    FOCUS DOCUMENT "Orders Report"

will cause all Class Dependent commands to be parsed by the Document parser until another FOCUS command is encountered. If the class and title parameters are missing, only class independent commands will be accepted by the parsers until another FOCUS statement is encountered. The main effect of this command is to reduce compilation time, since a syntax error returned by a Class Dependent parser will cause the command to be reprocessed by the Class Independent parser.
THE TASK LANGUAGE RECORDERS
Task Recording provides tb~e ability to monitor run-time event: and recompose
them to produce a
reusable task in the ASCII 7,'adc Langua'e format. The focal point of the
recording process is the Class
Independent recorder. This module is a Windows dynamic library which is loaded
only during a recording
section. It receives all external commands from the went engine while
recording is active.
The recorder first determines if the received command is specific to a clse.
If it is not, the command is
immediately converted to its ASCII Talc Lan~ua~e form. If the command is class-
specific, the librarg
will either pmvide Default Ikpendent Recording or will in turn pass the
external command to a separate
Clan Dependent recorder module. In either case) the completed Task Lan~ua~e
text is used to build a
source file of ASCII task commands.
Default Dependent Recording
The majority of class-specific external commands are handled wholly within the Class Independent recorder. This module uses ASCII Recorder Template Files to provide the necessary information to do default dependent recording. These files provide formatting information so that a multiplicity of external commands can each be recomposed into Task Language without the need of invoking class-specific executable code.
Recorder Template formatting strings are patterned after the 'C' programming language printf control strings. They describe how a given external command's parameters are to be interpreted and formatted into compilable Task Language text. They can include information on the order of parameters in the external command, the size of each parameter, the data type of the parameter, and how the parameter is to be formatted in the task language line. Templates also support arrays of parameters, enumerated parameters, optional parameters, and optional fields in the Task Language form. Comment fields, ignored by the recorder but useful for documentation, may be included as well.
Template file information for a particular class is read into memory when a FOCUS command for that class is first received during a recording session. As external commands are passed to the recorder at run-time, they are then formatted into Task Language.

Default Recording Example

TEMPLATE FORMATTING STRING
    "%v %1d COPIES TO DEVICE %2s"    ; this is the 13th template

COMMAND DEFINITION
    103 13 "LIST"

The example shows an actual template and command definition from the NewWave Office Recorder Template File. The COMMAND DEFINITION specifies that when external command 103 is received (and NewWave Office has the focus), the 13th template in the file is to be used with the verb "LIST". The TEMPLATE definition specifies that the first parameter in the external command is a decimal integer and the second parameter is a string.
The external command form of the example is shown in Figure 5.

    | 15 | 103 | 2 | L a s e r J e t \0 |
      length word | Command Id word | number of copies parameter | device name parameter

Figure 5. LIST External Command Format

From the external command shown in Figure 5, Default Dependent recording will produce

    LIST 2 COPIES TO DEVICE "LaserJet"
Class Dependent Recorders
If there are cases which cannot be handled by template files, an application may provide its own Class Dependent recorder. The Class Independent recorder will pass the external commands that it cannot handle by default recording on to the Class Dependent recorder for the class with the current FOCUS.
Extensibility
All recorders, including the Class Independent recorder, are written as Windows dynamic libraries that are loaded only when needed with the Windows intrinsic LoadLibrary. All recorders must also support a common programmatic interface, so that the interactions between the Independent recorder and any Class Dependent recorder are identical.
The developer of a new application can implement recording by producing the ASCII Recorder Template File and, if necessary, by developing the application's own separately linked dynamic library. The filenames are declared in its PROP_AGENTTASKINFO property in the application's installation file. Now all the running application needs is to get the FOCUS during a recording session, and away it goes.
ACKNOWLEDGMENTS
<to be added . . . >

Table of Contents

Introduction
User Requirements
System Requirements
Application Developer Requirements
The Task Language Compiler
    Object File Format
        The P-Code Record
        The Command P-Code
    Run-time Environment
The Task Language Parsers
    Components
        Parser Routines
        Keyword File
        Scanner Routine
        Expression Parser
        Semantic Routines
    The FOCUS Command
The Task Language Recorders
    Default Dependent Recording
    Class Dependent Recorders
    Extensibility
Acknowledgments

Learning Products Center
Investigation Summary

Tools and Strategies for Implementing Computer-Based Training in the New Wave

prepared by
Tom Watson

UNDERSTANDING COMPUTER-BASED TRAINING IN NEW WAVE
The Vision of Computer-Based Training in New Wave
Congratulations: You are the proud new owner of HP Vectra Office, including the operating environment and a host of software that we currently refer to as New Wave. Much to your surprise, you find very little documentation on either the WorkTop or the applications you have purchased. What you find is a small booklet for each that explains installation, and how to run training that is on a floppy disc. It is called Computer-Based Training (CBT for short).
With the straight-forward instructions and easy-to-use
installation programs, you manage to install the WorkTop,
applications and CBT onto your Vectra. At this point, what you
see on your Vectra's display is quite different from anything you
have previously seen. Along the top of the screen are words that
look like some sort of menu: File, Edit, and so on. Immediately
under these are several little pictures that seem to represent
what your system can do. There is a filing cabinet, an in- and
out-tray, etc.
A box in the center of the screen contains text that explains
what you are looking at.
[Screen illustration: a training window pops up on top of the Worktop. It reads: "Welcome to the HP-Office Worktop for your very first time! You are currently reading from a training window that pops up on top of the Worktop and other Office applications to teach you how to use them. Press Enter for more training." (SLIDE003)]

You easily locate the ENTER key and press it. Another window
with new text replaces the previous one. You read it. As you
progress from window to window you are at first given a guided
tour of the WorkTop. Oftentimes, a pointer extends from the
window to an object on the WorkTop to accompany an explanation of
that object. For instance, it points to a graphic image that
resembles a piece of paper and explains that this is a document.
The window does not always appear in the same location, either. This way it can talk about objects that are all over the screen. Sometimes the training actually performs a sequence of operations on the WorkTop, as if you were doing them yourself. It selects the document and moves it to another spot on the WorkTop. At the same time, it explains that your WorkTop is like your desk and you can arrange documents, reports, and other objects in any way you like to best suit your needs.
It all looks pretty intuitive. The training extends a pointer to an object that resembles a file folder, and tells you that is exactly what it is! Then you are asked to move the document over to the folder. First, you roll your mouse around the surface of your desk to move the mouse pointer on top of the document. The coordination to do this seems a bit tough to get at first. After pointing to the document, you press the left mouse button as the training window tells you. The document highlights and you know you got it! When you release the mouse button the training reminds you that you must keep the button depressed after selecting the document, and then drag the document to the folder. It lets you try again.
All in all, the training is like having someone sit beside
you and hand-hold you through learning. You are not just told how
to use Office. You actually see tasks performed in the
application and often get a chance to try them yourself. Best of
all, the CBT is much more effective than a written tutorial
because it is smart enough to catch you when you make a mistake.
It is forgiving and lets you try again. It usually has some idea
of what you did wrong and re-explains what you should do
accordingly. As a result you invest less of your own precious
time becoming proficient in the use of Office.
The Vision is Almost a Reality
The concept of training, as explained above, is nothing new. Learning Products Center (LPC) has had the experience of producing four such products. These training modules teach HP Access/150, Executive Memomaker, Executive Memomaker with graphics, and Drawing Gallery. These applications are not taught in the context of an integrated office environment, but they are taught in the context of hand-holding the user through tasks in the application.
The newness of MS-Windows limits the availability of tools that can be used to produce such training. Although tools exist for producing this kind of training for MS-DOS applications like the four HP applications (above), these tools will not work for producing MS-Windows based training. In the investigation summary that follows, you are introduced to the different ways LPC can develop this kind of training. We will also reflect on LPC's experience in similar projects that can light the way into New Wave.
The Average LPC Training Author
In choosing or developing a tool for New Wave authoring (writing CBT), one must take into consideration several key factors. You have seen one of them: what the tool must be capable of in order to produce the end result already described. We will visit the specifics of its capabilities in due time.
[Illustration: "The Vision is Almost a Reality" -- screen shots of HP Access CBT, EMM CBT, Drawing Gallery CBT, and New Wave CBT.]
Another factor that bears equal importance is who will be
using the tool and how usable the tool is for them. Let us
examine this rare breed, the LPC Training Author.
LPC authors are user advocates. They are learning
specialists and masters of instructional design. They are not
true programmers, although they do understand simple programming
logic, storyboarding and the concept of a flow chart. They have
even developed training in a programming environment, but require
assistance in programming tricky logic or anything close to the
machine (e.g., monitoring mouse motion).
If LPC is to be productive in authoring New Wave CBT, the authoring tool it uses must work the way its authors think. Even though a language such as Microsoft C or Pascal has every capability required to achieve our training vision, the authors would require extensive training and assistance in order to work in such a medium. A couple of alternatives are introduced later in this report.
A programming language, or any development tool, is more than
a set of capabilities/features. It is a convenient way for
thinking out a certain domain of problems and writing their
solutions. We should not strive to turn our authors into
programmers. We must seek a tool that allows them to work more as
instructional designers. This is not to rule out any efforts that
authors might make to advance their programming abilities. These
efforts can only help us all.
New Wave Computer-Based Training Should Be Concurrent
Another important factor is the means by which the authoring
tool achieves its end result. There are two basic methods of
developing CBT for computer applications. Simulation involves
"faking" the application. For instance, in our CBT on HP
Access/150, a series of screens are painted to give the student
the illusion o! running HP Access. However, at no time is the
student actually using HP Access. This was not difficult to
implement in HP Access because all of the application's screens
are composed o! text and no graphics. HP Access's screens are
also very forms oriented.
Another method uses the application for which training is
designed. In CBT on Executive Memomaker/Vectra and Drawing
Gallery/Vectra, the application runs at the same time as the
training. The user is hand-held by the training to carry out
tasks in the application, itself, and not a simulation of the
application. This method is known as concurrent training.
Each of these two methods has its own pros and cons.
Simulation of a graphics-based application tends to be more
difficult than simulation of a text-based application. Painting
the application's screens is not difficult for either one. The
screens can be saved from the application with a screen dump utility.

[Illustration: Simulated Training -- the CBT fakes the application; Concurrent Training -- the CBT uses the application itself.]

Even if a utility is not readily available, they are
simple to create. Of course, a graphics screen dump requires
orders of magnitude more storage space than one of text. There is
also the issue of supporting different display types with
graphics. This is a trivial matter with text.
But the most difficult aspect of simulating a graphics-based
application is simulating its features. Had we simulated Drawing
Gallery for its CBT, we would have had to simulate the user
interface, as well as creating, deleting, copying, enlarging,
stretching and moving both a square shape and the word "hello."
This would have meant re-writing a subset of a very complex
application. Instead, it was much easier to write concurrent
training for Drawing Gallery and let Gallery perform its own
features. New Wave, being a completely graphics-based
environment, would be very difficult to simulate.
Another problem with simulation is reflecting changes in the
application during development. LPC's strategy for delivering
learning products in the New Wave recommends CBT for an
introduction to the Worktop and each New Wave application. These
training programs would be bundled with the products. Due to this
requirement the development of training must keep pace with the
development of the products so that they are completed together.
Concurrent training trusts the application to implement its
own features. Therefore, changes in the application during
development would not need to be mimicked in the code of the
training as they would if it were simulated. Of course, both
types of training would need to be updated instructionally, and
concurrent training would call for comparatively small changes in
its interaction with the application. The concurrent model is the
natural choice for training that must be developed and completed
in the same time frame as the application.
Tight Integration to Applications is a Must
Although the development of concurrent CBT does not require
extensive work to simulate the application, the process of
coordinating application tasks can be complicated. The complexity
of developing concurrent CBT is greatly reduced if the application
and training are tightly integrated. The authoring tool and the
applications must be designed to work together in order to achieve
such integration.
The Shelley (TM) authoring tool that LPC used to develop EMM and Drawing Gallery CBT does not permit tight integration, because
the tool and the applications are not designed to communicate with
one another. This lack of integration presents the greatest
obstacle in coordination of tasks to be performed by the
applications.
For instance, when EMM CBT requires that a task be performed in EMM, it must relinquish control of the processor to EMM until

the task has been completed. With no integration between the training and application, it is difficult to detect completion of tasks so that control of the processor can be regained by the training at the proper time. Our current CBT monitors screen output of the application to determine what the application is doing.
An example of this is when the CBT directs EMM to home the cursor to the beginning of a memo. The CBT detects completion of this task by looking for the cursor in the upper-leftmost position on the display (the home position). Unfortunately, visual cues do not always reflect the internal state of the application. This is such a case, because EMM's order of operations to complete this task is:
(1) Position the cursor in the home position, and
(2) Refresh the display.
It is the refresh operation that presents a problem. If the training intends to,
(1) Let EMM home the cursor,
(2) Regain control and display a message that instructs the student to do something in EMM, and
(3) Give control back to EMM so the student can do what he was asked,
here is what happens. EMM, having been given control and instructions to home the cursor, performs the steps to home in order. First, it positions the cursor in the home position. Now, the training detects that the cursor is in the home position and assumes that EMM has completed the task. The training resumes control, while EMM is waiting to perform the second step of home, which is to refresh the display. The training, having control, displays instructions for the student's next task in EMM and gives control back to EMM so the student can complete the task. However, as soon as EMM regains control it completes the home task by refreshing the display. This promptly erases the instructions displayed by the training. The student had no time to read them and is now in EMM with no idea what is expected of him.
This problem was eventually solved by letting EMM have adequate time to complete the refresh after the cursor was detected in the home position. This is not an ideal solution because it does not work for the coordination of all tasks in EMM.
Visual cues cannot always be used. To give an example of this, EMM CBT detects completion of loading a memo by reading the screen until the last character of the memo has appeared. This technique has a limitation. The training must know what the memo looks like so that it knows what the last character is. The

training must also know where to expect the character on the
display. In Windows, this prerequisite is difficult to meet since
a number of variables (i.e., window size and position, text
font used, and display hardware/resolution) affect where the last
character appears.
A more integrated approach is to monitor messages from the
application that indicate completion of tasks and other status
information. This integration must be designed into the
application and/or operating environment with CBT in mind. It is
our intent to do this in the New Wave. If EMM were designed for integrated CBT, it would send a message to the CBT, indicating
when the load is complete. This message would be sent after the
refresh and any other disruptive steps were completed.
Visual cues are also difficult to use with a graphics-based
application. This is the case in Drawing Gallery to the point
that LPC actually employs an integrated approach in several
instances.
The problem is detecting activation and grabbing
handles/edges of objects (a square and the word "hello"). The CBT
asks the user to:
(1) Click on an object to activate it,
(2) Grab the handle of an active object to enlarge and to
stretch it,
(3) Grab the edge of an active object to move it.
The CBT must recognize successful (and unsuccessful) completion of
all of these operations by the user.
One way it might have done this without tight integration
would have been to look for changes on the display that signal
completion. This would have been extremely difficult. It would
have required reading graphics memory for the changes that
coincide with these operations. Of course, graphics memory is
mapped differently for different display types and resolutions.
Instead, for these operations an integrated approach is used.
In order to achieve this, a communications channel is established
between the training and application via two PCAIOS interrupt
calls.
PCAIOS is a routine that is called by Drawing Gallery prior
to loading its own code. PCAIOS implements most of the AIOS interrupts of the HP150 in Vectra's interrupt structure. It was designed to facilitate porting of Drawing Gallery, and other applications, from the HP150 to Vectra. PCAIOS also implements two interrupt calls for CBT. These are SET_CBT_FLAG and GET_CBT_FLAG.

In Drawing Gallery CBT, SET_CBT_FLAG is called to clear the flag before each request of the user to manipulate an object in the application. The application, in turn, calls SET_CBT_FLAG to indicate what type of action has occurred after the user has manipulated an object. There are three values used to indicate (1) activation of an object, (2) grabbing the handle of an active object, and (3) grabbing the edge of an active object. Finally, the CBT calls GET_CBT_FLAG to retrieve the flag. If the flag is zero, the user failed to manipulate any object. Otherwise, the value of the flag is used to determine whether the user performed the correct object manipulation.
If this example appears simple, it is because most often the best solution is a simple one.
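A rough sketch of the CBT side of this handshake is shown below. int86() is the standard DOS C interrupt call, but the PCAIOS interrupt number, register usage, and function codes shown are placeholders; the real values are not given in this report.

    #include <dos.h>

    #define PCAIOS_INT      0x60   /* assumed PCAIOS software interrupt */
    #define FN_SET_CBT_FLAG 0x01   /* assumed function codes            */
    #define FN_GET_CBT_FLAG 0x02

    void set_cbt_flag(int value)          /* CBT clears with value 0 */
    {
        union REGS r;
        r.x.ax = FN_SET_CBT_FLAG;
        r.x.bx = value;
        int86(PCAIOS_INT, &r, &r);
    }

    int get_cbt_flag(void)                /* 0 = none, 1/2/3 = object action */
    {
        union REGS r;
        r.x.ax = FN_GET_CBT_FLAG;
        int86(PCAIOS_INT, &r, &r);
        return r.x.bx;
    }

The CBT would call set_cbt_flag(0) before each request, then poll get_cbt_flag() to learn whether the student activated an object (1), grabbed a handle (2), or grabbed an edge (3).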
[Illustration: Integration in Drawing Gallery CBT -- the CBT clears the flag via PC-AIOS (SET_CBT_FLAG 0), Drawing Gallery sets it to 1, 2, or 3 as the user manipulates objects, and the CBT reads it back with GET_CBT_FLAG.]

Integrated Control
Integration can be subdivided into monitoring the application
and controlling the application. The examples presented have all illustrated monitoring. Tight integration is also critical to CBT in New Wave for control.
CBT often needs to control the application in the same sense that the user would control it. In EMM CBT, control is required to change the current state of EMM so that lessons do not always have to start from square one. For example, a format file must be loaded at the onset of each lesson. It serves no instructional purpose to have the student perform this load each time. Instead, the CBT automatically controls EMM to load the format file by "ghosting" keystrokes through to EMM. In general, CBT in New Wave should be capable of ghosting all user actions (keyboard, mouse, touch, etc.), and these actions should be received by the application as if they were performed by the user.
Such control can also be used for demonstration. An
application task can be presented, automatically, with annotation
by the CBT. For example, CBT can teach the student how to create
a pie chart by doing it for the student the first time, and
explaining what is being done in windows that overlay the
application screen.
Although ghosting benefits CBT, it is not a very integrated
approach. Consider what happens when the CBT must select a
certain pull-down menu option in a hypothetical New Wave
application. The rudimentary approach is to ghost mouse moves and
button clicks that make the selection. This approach is not only
tedious. It does not work if the application window is positioned
differently, because the menu now appears in a different place and
the mouse moves are no longer correct.
A more integrated approach is to have the CBT send high level
commands to the application. In the menu example, the CBT simply
sends the application a message that commands it to make the given
selection. This approach requires that the New Wave applications
identify all commands that they are capable of processing. Of
course, this more integrated approach should not supplant ghosting
user actions. There are instances where each is more valuable.
When a New Wave application receives either a ghosted user
action or a command, it should provide visual feedback so that the
student can see what is happening. CBT should be capable of
either slowing down or speeding up the application, as required.
If the application is commanded to change its state, rather than
for demonstration, then the CBT should be capable of suspending
visual feedback. In this case flickering selections and mouse
moves would be disruptive and should be hidden from the student.
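To make the contrast concrete, the sketch below shows both styles of control under Windows. PostMessage, the WM_LBUTTON messages, and RegisterWindowMessageA are standard Windows calls; the "NW_SEMANTIC_COMMAND" message name and the CMD_OPEN_FOLDER code are invented for illustration.

    #include <windows.h>

    #define CMD_OPEN_FOLDER 42   /* hypothetical semantic command code */

    /* Ghosting: replay a raw mouse click at a screen position.  Fragile,
     * because the coordinates depend on window placement. */
    void ghost_click(HWND hwndApp, int x, int y)
    {
        LPARAM pos = MAKELPARAM(x, y);
        PostMessage(hwndApp, WM_LBUTTONDOWN, MK_LBUTTON, pos);
        PostMessage(hwndApp, WM_LBUTTONUP, 0, pos);
    }

    /* Semantic control: ask the application to perform the command itself,
     * independent of window position or screen resolution, e.g.
     * send_semantic_command(hwndApp, CMD_OPEN_FOLDER). */
    void send_semantic_command(HWND hwndApp, WPARAM command)
    {
        UINT msg = RegisterWindowMessageA("NW_SEMANTIC_COMMAND");
        PostMessage(hwndApp, msg, command, 0);
    }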

Journaling - A Special Case
Apple Computer, Inc. is experiencing success with a special
type of computer-based training that is synchronized with audio.
This type of training is inexpensive to deliver, because the audio is delivered on a cassette tape. It is also simple to develop with the proper tool, a journaling program.
A journaling program is to a computer application as a tape recorder is to voice. A journaling program runs behind the scenes on the computer. It allows the course developer to perform tasks in applications, while all actions are recorded for later playback. When the scenario is replayed, timing and all visual feedback are identical to when the scenario was recorded.
Using a journaling program and a tape recorder, a course
developer can create training by demonstrating tasks on the
computer while a commentator simultaneously explains them into the
tape recorder. After a training module is created, the audio tape is processed by adding music and effects.
The end result is a non-interactive demonstration. The student reads brief written instructions to prepare the tape in a cassette player and run the training program in a computer. The first screen of the training instructs the student to play the tape at this point, and wait until the tape has instructed him to proceed in the training. He then plays the tape, which instructs him to press a key or mouse button when he hears the audible tone from the tape. This synchronizes the audio with the training program.
From this point until the end of the training, the student
relaxes, watches and listens as the free running lesson exercises
features of the system and application, and explains them audibly.
This training can come across as very polished.
Presenting Conceptual Models
Other special cases are animation and graphics. These
features are not necessary for demonstrating application features
in concurrent training. However, they are often useful for
presenting conceptual information to the user. This is an example
of the adage, "a picture is worth a thousand words."
In HP Access/150 CBT, a conceptual model of data flow is presented with both graphics and animation. Graphic images of a
PC and a 3000 mainframe are shown, with animated data moving along
paths between them. This conceptual model is presented to give
the user an idea of how Access works.
Human factors research dictates that when a user learns a
product, he will develop his own conceptual model of how that
product works, whether it is accurate or not. It is the CBT
author's goal to reinforce the correct conceptual model. Graphics

and animation are appropriate extensions to an authoring system for doing so.

[Illustration: "Graphics and animation are often useful for presenting conceptual models" -- an HP 3000 and a PC with PC disc running HP Access, with animated DATA moving along the paths between them.]
Looking Towards the Future
Technologies such as CD-ROM and interactive video disc are quickly becoming affordable and will likely enhance office systems of the near future. Our current vision of CBT in New Wave does not call for these technologies, but our design should not lock them out. An authoring tool for New Wave CBT should at least provide extensibility to work these technologies in when they become desirable.

DESIGNING INTEGRATED CBT INTO NEW WAVE
How Integration Can Be Achieved in New Wave
Two methods for integration have been examined. In one, the
application communicates directly with the CBT. The CBT is
capable of sending messages to command the application. The
application sends messages to the CBT, indicating the completion
of tasks and other status information. This method provides very
tight integration, but requires that the application be aware of
CBT's presence. It might also lead to inconsistent approaches for
integration in different applications.
[Illustration: Architectures for Integrated CBT -- Application Direct: the application communicates directly with the CBT; the application must be aware of CBT; duplication of effort; does not enforce consistency.]
The less restrictive approach takes advantage of an operating
environment that incorporates a message interface. This approach
is based upon the assumption that within the standard
communications between the application and the operating system
lies all of the information that CBT needs. The message interface
provides a standard for communications between all applications
and the operating system. It also is the vehicle into which CBT
can tap to intercept communications.

[Illustration: Architectures for Integrated CBT -- Message Interface: the application is unaware of CBT; built into the New Wave system architecture; consistent approach.]
MS-Windows is Well Suited for Integration
Central to the MS-Windows environment is a message interface.
Windows manages all input, output and coordination of concurrent
applications by carefully designed messages that are sent between
Windows and its applications. The message interface can be used
by separate applications to communicate with each other, also. In
this way, CBT can communicate with other applications. In order
for applications to communicate with each other, they must know
each other's handles. A "handle" is a unique number that
identifies each instance of an application running under Windows.
It is used like a mailing address for sending messages.
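A minimal sketch of handle-based messaging between two applications; FindWindowA and SendMessage are standard Windows calls, while the window class name and the WM_USER-based message are assumptions.

    #include <windows.h>

    #define MY_STATUS_MSG (WM_USER + 1)   /* hypothetical private message */

    LRESULT notify_other_app(void)
    {
        /* The handle acts like a mailing address for the message. */
        HWND hwndOther = FindWindowA("NewWaveOffice", NULL);
        if (hwndOther == NULL)
            return 0;
        return SendMessage(hwndOther, MY_STATUS_MSG, 0, 0);
    }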
Application Design for Integration
The figure above shows the architecture of an application that is well suited for integration with CBT. In order to develop CBT for the WorkTop, the WorkTop must be designed similarly. The application does not need to know of CBT's existence. It does require more formal organization than current applications tend to have.
The design splits an application into two basic components, a
PRESENTATION PROCESSOR and a COMMAND PROCESSOR. The presentation

processor is responsible for all communications with the user. It collects user actions and issues higher-level commands to the command processor at appropriate times. It also receives responses from the command processor and produces appropriate feedback to the user. This feedback is visual, audible, or whatever makes sense for the application.
The presentation processor is also subdivided into two
components, an ACTION PROCESSOR and a FEEDBACK PROCESSOR. The
action processor is the part that collects user actions. It also
controls the feedback processor for immediate feedback of these
actions.
The command processor receives high-level commands that are issued from the presentation processor. For instance, the user moves the mouse pointer around a document and finally selects a position between two alphanumeric characters by pressing a mouse button. The command processor receives nothing of this movement, but receives notification of selection when and where it occurs. Once it receives a command, it processes it. This most likely results in some changes to internal data structures. It might also result in I/O that is not associated with the presentation processor, as with a disc file. Finally, it produces one or more responses to the presentation processor. The last of these is always a null response that simply indicates command completion. The responses direct the presentation processor to produce feedback of the command. If no feedback is required, which is unlikely, then at least the null response is returned.
From the perspective of the presentation processor, mouse
movement is received and the current pointer position is
maintained. Action responses are sent from the action processor
to the feedback processor so that visual feedback (the moving
pointer) is produced. Once a mouse button down action is
received, the presentation processor recognizes the completion of
a command and sends this command to the command processor. The
command processor acts on the command and at some point returns
one or more responses to the presentation processor. One response
might produce visual feedback like an I-cursor in the document
where the mouse button was pressed. The last response indicates
command completion.
The concept of a command is also important to this design. A
command should be processed at any point where it makes sense to
modify internal data of the application. More important to CBT is
that commands should be designed such that each can be undone.
This would also benefit the application.
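The sketch below illustrates the contract just described: a command produces zero or more feedback responses and always ends with a null response. The types, response codes, and callback are invented for illustration.

    #include <stddef.h>

    typedef struct { int code; const void *data; } Response;

    #define RESPONSE_NULL        0   /* always sent last: command complete */
    #define RESPONSE_SHOW_CURSOR 1   /* e.g. place an I-cursor in the text */

    typedef void (*ResponseSink)(const Response *r);  /* presentation processor */

    /* Hypothetical handling of a "select position in document" command. */
    void process_select_command(long position, ResponseSink emit_response)
    {
        Response r;

        /* ... update internal data structures for the new selection ... */

        r.code = RESPONSE_SHOW_CURSOR;   /* visual feedback request */
        r.data = &position;
        emit_response(&r);

        r.code = RESPONSE_NULL;          /* command completion */
        r.data = NULL;
        emit_response(&r);
    }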
Tying CBT into the Design
In this design, each communications channel is used by the
CBT. The CBT intercepts user actions and selectively allows them
to be fed to the action processor. This permits basic monitoring
of the user and ghosting of user actions to the application, similar to the non-integrated approach in EMM and Drawing Gallery CBT.
The CBT also intercepts commands and selectively allows them
to be fed to the command processor. This lets the CBT give the
user more freedom in the application. CBT need not be concerned
with how the user carries out a task, just that he performs it in
some fashion. CBT can also monitor the user and control the
application without concern for variables like window size and
position, screen resolution, etc.
[Illustration: Application Design for CBT -- user actions enter the Action Processor; action responses drive the Feedback Processor; the Presentation Processor issues commands to the Command Processor, which returns responses; the CBT can tap each of these channels.]

The response channel is used to detect completion of tasks. Thus, CBT need not guess when a command is complete based on the output of the application. All it does is wait for the null response that indicates command completion. The response channel is also used to disable feedback. Responses are read by the training and discarded, never reaching the feedback processor. This would have been a useful feature in EMM CBT for loading format files at the beginning of each lesson. In this case feedback is disruptive.
In a similar way, the response channel can be used to make it look like a command was processed when it was not. To do this, the CBT reads the command and discards it, but sends the responses to the feedback processor that would have been produced by the command processor.
The feedback channel is also useful for suspending
application output to the user. This is the only way to disable
feedback from both commands and user actions. It might also be
useful in another capacity. If the author of CBT wants to
demonstrate features of an application, he probably would prefer
controlling the application at the command level. This way he
would not have to worry about window positioning, resolution, etc.
The problem is that only command level feedback would be produced.
For instance, you might see a menu option selected, but you
wouldn't see the mouse pointer actually move to the menu and pull
it down. The feedback channel would be the only way to produce
this type of feedback, although it would have to be handled with
care.
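Continuing the earlier sketch, the fragment below shows how the CBT might sit on the response channel: wait for the null response to detect completion, or discard responses to suppress feedback. Response and RESPONSE_NULL are the illustrative types defined above; forward_to_feedback() is another hypothetical routine.

    extern void forward_to_feedback(const Response *r);  /* hypothetical */

    static int suppress_feedback = 0;   /* set while advancing state silently  */
    static int command_complete  = 0;   /* set when the null response arrives  */

    void cbt_on_response(const Response *r)
    {
        if (r->code == RESPONSE_NULL) {
            command_complete = 1;        /* task finished; CBT may proceed */
            return;
        }
        if (!suppress_feedback)
            forward_to_feedback(r);      /* otherwise the response is discarded */
    }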
[Illustration: Communications Channels Usage -- user actions: intercept, selective filtering, "ghosting"; feedback: control low-level output, suspend output; commands: detect semantic activity, selective filtering, semantic control, fake semantic activities; responses: detect command completion, suspend feedback.]

THE AUTHORING TOOL AS SEEN BY THE AUTHOR
Without concern for the specifics of a tool's implementation,
an authoring tool must include certain features. These features
can be divided into six categories:
(1) Application monitoring
(2) Application control
(3) Data manipulation
(4) Flow control and conditional execution
(5) Presentation
(6) Extensibility
Application monitoring and control have already been
examined.
Data manipulation consists of a means to save integer,
character and string data, as well as a means to massage this data
with arithmetic and string operations.
Flow control is required to permit non-linear training
sequences. Conditional execution is required to support
intelligence in the training sequence, for instance responding to
user error.
Presentation capabilities allow the author to open
instructional windows that contain text, and perhaps graphics.
Presentation capabilities also include accepting user input
outside of any application, as when the user presses a key to
continue to the next instructional window. Presentation
capabilities can be quite fancy, as with animation routines. They
might also include a vector pointer that can be extended from the
training window to an object elsewhere on the screen.
Finally, extensibility is the feature that allows the tool to
evolve into more than it was originally designed to do. Clearly,
not every data operation that will be required over the long term can, or even should, be built in. Basic operations should be incorporated into
the tool's design. Extending the operations should be supported
through functions and/or subroutines. This reduces the overhead
of intrinsic operations and provides a means for growth as well.
Extensibility should also provide for data communications and
high-level language calls and/or direct MS-Windows messaging.
This way the tool's growth is not limited by its intrinsic
operations.
All of these features can appear to the author in a number of
different ways. Two basic classes of existing tools are languages
and interactive systems. Authoring languages closely resemble
traditional programming languages. Hence, they require some
programming skills to use. Typically they provide very high-level

commands like OPEN <window> or MENU to implement training. These
would have to be built from scratch in standard programming
languages.
An interactive system prompts the author to design a training sequence. Windows are designed and opened by choosing menu options in the tool. Text is placed in windows by typing it into the window, interactively. Application control and monitoring is defined by actually working in the application. Interactive systems tend to make the creation of training steps easier for a non-programmer. They are often cryptic in their editing capabilities and not as flexible as a programming environment.
The authoring tool for New Wave would best be a mix of the language and interactive system. A language would allow flexibility and extensibility for the future. Interactive extensions could be developed to enhance productivity with simple, redundant tasks. Such an extension proved successful for designing instructional windows for Drawing Gallery CBT.

[Illustration: an instructional window overlays a Wavewriter menu (Edit, Font, Style, Find; New, Save, Save as...) and reads "This is how you load a memo in Wavewriter...".]
AUTHORING TOOL ALTERNATIVES
Summary of Existing Tools
The most complete tool set is provided by Microsoft for software development under Windows. The MS-Windows Development Tools include libraries for either Microsoft C or Pascal programming. Only the serious programmer need attempt to use these tools. LPC's course developers are accustomed to a much less technical environment. Concurrent CBT would be especially difficult to implement using these tools.
Of the existing authoring tools that are available, LPC's
preference would be to use the Shelley Monitor, a product of
ComTrain, Inc. LPC's developers are already comfortable with this
system, having developed training for EMM and Drawing Gallery with
it. Unfortunately, Shelley Monitor does not currently work with
Windows applications, and ComTrain has clearly stated that it does
not plan to remedy this problem. Other concurrent systems -
Trillion, CDEX, Vasco and Automentor - also fail to work with
Windows applications. This is no surprise since Windows is a
relatively new product and places many restrictions on how
applications work under its control.
A Microsoft Authoring Tool
Microsoft is currently developing a tool for authoring CBT
and prototyping software. LPC had the opportunity to evaluate
this tool and interview its developers at Microsoft in April.
Although the tool is not ideal, it seemed to have prospects as a short-term solution for Core Wave CBT for the WorkTop. A development plan was drafted with this tool in mind. Unfortunately, Microsoft responded that their first priority is to apply the tool to developing CBT internally for Windows Excel and that they cannot commit to the plan. Hence, the tool has been abandoned as a short-term solution. The investigation of
Microsoft's tool provides an excellent opportunity to examine
competitive technology.
Microsoft classifies their tool as symbiotic, because it
works with the application. This is the same concept as LPC's
concurrent training. They also allow control of the application
at two distinct levels. They refer to the user action level as
syntactic control. The command level is called semantic control.
Microsoft's tool. also supports monitoring user activities in an
application at the same two levels. The tool requires that an
application's WinMain procedure be written a certain way in order
to support semantic monitoring and control.
Comparing their design to the ideal described earlier, their
tool intercepts user actions and commands, but neglects to
intercept command responses and feedback. This means that the CBT
cannot advance the state of the application transparently to the

user. The CBT also cannot command the application to act like it
performed a task that it did not perform.
Microsoft's tool is not a language. It is menu driven and
interactive. The author develops training by defining successive
steps. Steps are defined by choosing menu selections, using a
window editor, and by interacting in the application. For
instance, the author might define a step as follows:
(1) Choose the menu option to create a new step and then
type in the unique name and a description of the step.
(2) Choose from another menu that the step should include
an instructional window.
(3) A window opens. Interactively modify its position or size if necessary.
(4) Interactively enter text into the window. A bitmap
graphic can be created in MS-Paint and pasted into
the window.
(5) Choose from another menu that this step should wait
for a response from the user.
(6) A window appears for defining the response. Define
all acceptable responses and corresponding steps to
branch to if those responses occur. Define the step
to branch to if none of the defined responses occur.
The tool does not provide any features in the domains of data
manipulation or extensibility.
The most interesting find with regard to this tool is that it
requires special hooks to be added to MS-Windows. The problems,
as described by engineers on the project, stem from Windows' asynchronous nature and the fact that some features of Windows applications are outside of the application's control. POD has investigated the special hooks and reported that they are all included in the version HP plans to introduce as HP Windows
(version 1.03) this fall. They are currently not documented and
we are working on obtaining detailed descriptions of their use.
Finally, the Microsoft tool is still under development. Its
developers were unable to demonstrate any monitoring or control of
applications. These critical features were still not working when
the tool was evaluated in April. Its interface also requires much
work. It seems to have been developed haphazardly as several
interacting modules in separate windows. Each menu occupies its
own window, so that the author tends to have a confusing screen
full of these while writing CBT. Microsoft indicates a preference
to change this before they release the tool to ISVs. Microsoft
also indicates that developing their tool for ISVs is not a priority and cannot be until Excel CBT is complete, sometime in the next few months.
The Agent
The most promising candidate for a tool comes from within
PSD. Glenn Stearns (of Ian Fuller's group) is currently
developing an agent/macro facility that will communicate with, monitor and control New Wave applications through the message interface and the New Wave infrastructure. This tool can provide a strong foundation for LPC's internal development of a tool.
The Agent will exist to automate tasks in the New Wave. Conceptually, it will be that dedicated subordinate in the office environment that will faithfully carry out tasks that it is assigned. The agent will learn by doing. Similar to a LOTUS 1-2-3 macro, the user will instruct the agent into a LEARN MODE, and then carry out the task he wishes to automate. All the while, the agent will remember the user's actions so that they can be performed automatically at a later time. While in this learn mode, the agent can best be understood by relating it to a tape recorder that can record and play back tasks.
In addition to learn mode, the agent will provide a SCRIPT
EDITOR. Once a task has been learned, it can be viewed and edited
as a program. This will allow the power user to add statements
for prompting, conditional execution and other high-level
functions. The end result will equate to more powerful task automation beyond a simple linear sequence of operations. Thus, the user will be able to program the agent to stop at predefined points, to ask for input, and then make decisions based upon this
input.
The agent will traverse application boundaries. It will be
able to automate a task that involves the coordinated use of
several New Wave applications. For example, the task might be
cutting a table of numbers from a spreadsheet and then pasting
this table into the business graphics application to generate a
pie chart.
The agent will also support being "triggered" on time or events. A trigger will automatically invoke the agent to perform a given task. Triggers might be defined for specific times (e.g., once a week, perform this task). Triggers might also be defined for specific events (e.g., the phone rings). Triggers might be used to instruct the agent to automatically check inventory levels once a week and, if the level on a certain item is below 100 items, automatically order enough to replenish stock to 150. The ordering might take place electronically to a distant New Wave workstation. This workstation might have a macro triggered on the event of the phone ringing. When the phone rings, it automatically answers it, establishes a connection, takes the order, and hangs up.

The Agent's Interpretive Engine
In the current stage of the agent's specification, its design
calls for an interpretive engine. This feature makes the agent a
viable tool for the development of computer-based training and
therefore deserves an explanation.
The agent's interpretive engine will communicate with New
Wave applications through the MS-Windows message interface and the
New Wave infrastructure. This will permit the agent to intercept
the user's input to a given application and to control the
application without being tied to the application's code. This
feature will provide integration to applications. The integration
is achieved through the operating environment.
The agent's interpretive engine will understand and be driven
by a low-level, but rich, instruction set of p-codes (intermediate
program codes as in language compilers and interpreters). P-code
programs will be written to drive the engine and, hence, control
any New Wave application, as well as the Desktop. However,
writing in p-code is analogous to writing in assembly language.
In order to make the agent more usable, it will include
FRONT-END PROCESSORS, as with the script editor. The macro editor
front end will have a graphical interface that displays macros for
browsing and editing. These macros translate to and from p-codes
for the interpretive engine to process. Hence, the power user can
program the engine in a higher level language than p-codes.
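The relationship between the front-end processors and the engine can be pictured with a toy dispatch loop; the p-code opcodes and the sample program below are invented for illustration and are not the agent's actual instruction set.

    # Toy p-code interpreter loop; opcodes and the sample program are illustrative.
    def run(program, trace):
        handlers = {
            "FOCUS": lambda arg: trace.append("focus on " + arg),
            "SELECT": lambda arg: trace.append("select " + arg),
            "OPEN": lambda arg: trace.append("open"),
        }
        for opcode, operand in program:      # each p-code is an (opcode, operand) pair
            handlers[opcode](operand)

    # A front end would translate a higher-level macro such as "open the Letter
    # document" into a sequence of p-codes like this one:
    trace = []
    run([("FOCUS", "NewWave Office"), ("SELECT", 'Document "Letter"'), ("OPEN", None)], trace)
    print(trace)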
Adapting the AGENT for CBT Development
Just as front-ends to the agent will be developed for task
automation, so might front-ends for authoring CBT if the p-codes
provided by the agent's interpretive engine support implementation
of authoring functions. A majority of what authoring functions
require will be provided by the current vision of agent's p-code
instruction set. Those remaining instructions that should be
added to the agent to support authoring functions can be divided
into two categories, those which enhance task automation and those
which yield no added value to task automation. In the interest of
providing CBT for the New Wave, both categories should be
implanted fn the agent's design.
The Agent team agreed to do so in their Investigation to Lab
meeting of April. In order to do so in a timely manner, the Agent
team requires additional resources for their own project, as well
as a full-time commitment of one engineer from LPC for most of the
next year.

(Figure: New Wave Architecture. The AGENT communicates with New Wave applications through the message interface and the infrastructure, requiring a minimum amount of supportive code in each application. The diagram shows New Wave applications and the AGENT interpretive engine layered on the New Wave infrastructure and the MS-Windows message interface.)

(Figure: Front-end processors. Users supply training sequence commands, authoring commands, and agent commands through a CBT authoring tool, an authoring language, and a script language editor; each front end translates to p-codes that drive the AGENT interpretive engine.)

HP JOURNAL ARTICLE: COMPUTER-BASED TRAINING FACILITY
("COUSTEAU")
PASS TWO
Larry Lynch-Freshner, Tom Watson, and Brian Egan
I. Introduction
A. The Evolution of Training
Most people associate "training" with a crowded lecture room. Despite a long
and successful history, classroom training is becoming increasingly expensive
without a corresponding increase in effectiveness.
The growing influence of computers provided a possibility for improvement: let
a computer do all or part of the instruction.
Computer-Based Training, or "CBT," has been extensively used by the military
for teaching everything from medicine to flying. Academia has also come to rely
heavily on the patience of the computer, while bright colors, interesting music,
and supplemental video all add appeal for a generation raised on television.
Industry has been slower to adopt CBT: available courses are limited,
equipment and software are expensive, and people have felt threatened by the new
technologies. Most importantly, many people are unconvinced that CBT is
effective, often because of bad experiences with unimaginative or boring CBT
they've seen.
Properly written CBT, though, can cut costs while raising retention and
motivation. Achieving this requires a partnership between the courseware and the
CBT authoring software:
o Ideal CBT courseware is flexible enough to handle a variety of student
experience levels, provides task-based instruction that can be applied
immediately on the job, is available to the student whenever needed, provides
"chunks" of instruction relevant to the task at hand, and doesn't constrain the
student because of its own limitations.
o Ideal CBT authoring software is simple to use with minimal programming
experience, provides a realistic learning environment, costs very little, and
allows courseware to be developed quickly and inexpensively in response to local
needs.
In creating the HP NewWave CBT Facility, we set out to get as close as possible
to these ideals. Earlier experiences with commercially available CBT authoring
and delivery systems showed the potential of CBT, yet also pointed out the
limitations of conventional technologies. It was time for original thinking.
B. Types of Computer-Based Training
There are two basic CBT technologies:
o Simulation. The CBT software is fully responsible for what the student sees,
with all screen displays produced by the training software itself. Simulations
have great flexibility, allowing training on any real or imagined subject, but
require more development effort because an entire environment must be created.
o Concurrent. A "CBT engine" resides in memory and runs in conjunction with a
real software application. The application provides all its screen displays and
computations just as if it were being used normally. The CBT software sequences
the lessons, supplies instructional text, and controls which keystrokes and
commands are allowed to reach the application. Since the application supplies
the bulk of the code, concurrent CBT is usually easier to produce, but few
applications can interact with the CBT engine in a really meaningful way.
The NewWave CBT Facility was designed to allow both methods, providing text,
graphics, and animations for vivid simulations, and intimate communication
between the CBT lesson and the applications being taught.
II. NewWave CBT Facility Design
Throughout the project, there have been four design goals for the NewWave CBT
Facility: it must utilize the NewWave architecture, it must provide effective
courseware, it must simplify and speed the development process, and the
courseware must be adaptable to local cultures and languages with minimum
effort.
No commercially available CBT authoring/delivery system existed for either
NewWave or Microsoft Windows. In order to take advantage of the power of
NewWave, a CBT system needs a graphic display, full mouse and keyboard input
capability, the ability to span concurrently open application windows, and the
ability to operate on what the students do as well as how they do it. The CBT
also must be started from within NewWave, since requiring a return to the DOS
prompt for training would probably discourage people from using it. Lastly,
since NewWave is capable of running existing MS-DOS and MS Windows applications
(without providing many NewWave features), the CBT must also provide some way of
training on these applications, even if only by simulating them within NewWave.
A second "must' for the NewWave CBT Facility was that it must provide the
capabilities for as unparalleled
level of quality in the CBT ~eoursewar~. CBT is an integral part of the
learning product strategy for
NewWave. When properly designed and exuvted, CBT has been proven to be
successful and inexpensive,
but unfortunately) many previous CBT efforts have been ignored because they
were regarded as ineffective)
boring, or inconvenient. Over goal was to minimize the technical limitations
on the lesson author by allowing
for multimedium lessons (text) graphics) ...), modularized courseware) and
easy access from within the
normal work environment.
The third requirement for the NewWave CBT Facility was that it must reduce the
long development times traditionally associated with CBT courses. A typical hour
of CBT takes between 300 and 500 hours to design, construct, and test; much of
this time is spent in programming the CBT logic and creating the screen
displays, rather than in the instructional design itself. By providing
efficient, easy-to-use tools, and by eliminating as much programming as
possible, the NewWave CBT Facility can make expense less of a consideration when
deciding whether CBT is an appropriate medium for a particular course.
Finally, the courseware created with the NewWave CBT Facility had to comply
with HP's guidelines for localizability. The primary requirement was that text
should be maintained separately from program logic; this way, non-technical
translators could translate the lessons into local languages without having to
delve into the source code of the course. Since translated text is often 30%
larger than the original English version, the position, size, and proportions of
the text window had to be easily adjustable, with automatic text wrap and
preservation of formatting, so that the localizers could assure the legibility
of the lesson without having to recode it. Finally, text within illustrations
had to be accessible separately (i.e., no bit-mapped text) so that the
illustrations would not have to be redrawn to translate them.
III. The NewWave CBT Facility Components
A. A Sample Lesson
The best introduction to the NewWave CBT Facility is probably a sample lesson.
Figure 1 shows a sequence of screen displays that might appear during part of a
CBT course about the NewWave Office itself:
Frame 1. The real NewWave Office (not a simulation) is running, with all of
its normal tools and objects visible. Also showing is a real folder object,
placed by the CBT specifically for this
lesson. Overlaying the Office is an "instructional window" which contains a
text explanation of how to open an object, and a pushbutton control. The
student reads the text in the window and clicks the mouse pointer on the
Continue pushbutton to go on to the next frame.
Frame 2. The window now contains an illustration of a folder, along with an
instruction for the student to try opening the real folder. The directions are
repeated for reinforcement. At this point, the student has two options. First,
he can actually try to open the folder called "Fred." Second, he can click on
the Demo pushbutton, asking for the CBT to do it for him once as a
demonstration. We'll assume this choice for now.
Frame 3. The window now displays the first step in the Open process. The mouse
pointer, by itself, slowly moves from its previous position to the folder
"Fred" and pauses.
Frame 4. The window changes to display the second step. The screen reacts to
the double-click performed by the CBT, and the folder begins opening. The mouse
clicks were not simulated; instead, the equivalent message was injected into
the system by the CBT. Beeps were sounded by the computer's speaker to mimic
the sound of the mouse buttons clicking. When the student clicks on the
Continue pushbutton, the folder is closed automatically and the next text frame
is displayed.
Frames 5-6. The student is now asked to open the folder unassisted, just as in
Frame 1. If the open is unsuccessful, an appropriate remedial message is given,
and the student is asked to try again, as in Frame 6.
Frame 7. If the open is successful, congratulatory text is displayed, and a
brief animated "reward" appears. Then, using pushbuttons, the student chooses
the next step: continue to the next lesson, or return to the main course menu.
In either case, the "Fred" folder is closed and destroyed so that it won't
remain as an artifact of the training.
The NewWave CBT Facility is capable of monitoring the student's actions to a
very fine level. The choice of which conditions to watch is left to the
instructional designer, and will probably vary throughout the course of a
lesson. In this lesson, for diagnosing the cause of the open failure, some
possibilities might be:
o An open was attempted, but on the wrong object. In this case, to save time
and distraction, the open can be prevented, with an appropriate message being
substituted.
o The mouse was double-clicked, but off to the side of the folder icon.
o The folder was selected (by a single click) but not opened. Here, a timeout
would be used to assume the action wouldn't be completed.
o The student selected the Help menu option, probably seeking specific
assistance for the task at hand.
o And so on. The number of monitored possibilities is limited more by the
designer's imagination and time constraints than by technology.
B. An Overview of the Components
The initial vehicles for CBT were the NewWave Agent and the Application
Program Interface (API). As a task automation facility, the Agent could
sequence through a series of steps either automatically, or in response to
interactions with the computer user. The API gave the Agent a door into all
cooperating NewWave data objects and tools, allowing the Agent to control them
or determine their inner states. Together, the Agent and API automated anything
a user could do. Thus the basics for a powerful CBT toolset were present in HP
NewWave from the beginning.
At its simplest, a CBT lesson is just an Agent task automation language
program ("script"). Generic Agent commands can open conversational windows
anywhere, present textual information, present pushbuttons for
user control, monitor certain events within the system, and make sequencing
decisions in response to user
actions.
While these Agent scripts were sufficient for some training, they are not
optimal for large-scale, highly visual CBT. They required programming expertise
to construct, and because of their size when used for CBT, were expensive in
terms of development time and memory utilization. We also needed additional
visual effects, full-screen graphics, the ability to simulate applications
which didn't lend themselves to concurrent training, a more detailed knowledge
of what the student was doing to the system, and a clean and easy method for
starting and controlling a sequence of lessons. This required that the generic
Agent task automation language be supplemented by:
o Additions to the generic Agent task automation language which perform
training-specific actions such as mouse position sensing and user input
simulation.
o A Frame Object (with integral development editor), which displays the
sequences of instruction windows, text, static graphics, and user controls
which make up the student's view of a lesson.
o An Animation Object (and editor), which displays color or monochrome
animated graphics.
o A Training Menu Object (and editor), the course's initial user interface,
which allows access to all instructional objects.
o Application "hooks," coded deep within NewWave data objects and tools, which
send information and perform actions in response to application-specific
("class-dependent") task automation language in the CBT Agent script.
In its current form, the NewWave CBT Facility allows lesson authors to create
full-color graphical and textual objects of any description, without writing a
single line of code. A short and straightforward logical structure written in
the Agent's task automation language provides flow control for the lesson.
C. An Architecture for Application Training
NewWave has been designed with an architecture to support an integrated
application training approach. This approach has its roots in concurrent
training technologies, but relies on a semantic integration with applications.
To facilitate an integrated approach, NewWave objects are designed to
communicate through an Application Program Interface (API). A typical
application architecture includes a User Action Processor and a Command
Processor. The user action processor collects user input (mouse movement,
keyboard input, etc.) and detects when an input sequence conforms to the syntax
of a command. At this point, the user action processor sends the command
through the API to the command processor, which processes the command and
updates the object's state, data, and/or interface. Hence, syntactic user
actions are translated to semantic commands.
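A schematic sketch (not HP's code) of the split just described: a user action processor collects syntactic events until they form a command, then hands the semantic command across a stand-in API to the command processor. The event names and the api_send hook are assumptions for illustration.

    # Sketch of syntactic actions being translated to semantic commands.
    class UserActionProcessor:
        def __init__(self, api_send):
            self.api_send = api_send                 # stand-in for the API command interface

        def handle_event(self, event, target):
            # Collect raw input and emit a command when the syntax is complete.
            if event == "double_click":
                self.api_send(("OPEN", target))
            elif event == "click":
                self.api_send(("SELECT", target))

    class CommandProcessor:
        def __init__(self):
            self.state = {"open": set(), "selected": None}

        def execute(self, command):
            # Process a semantic command and update the object's state.
            verb, target = command
            if verb == "SELECT":
                self.state["selected"] = target
            elif verb == "OPEN":
                self.state["open"].add(target)

    cp = CommandProcessor()
    ap = UserActionProcessor(api_send=cp.execute)    # the API joins the two halves
    ap.handle_event("double_click", 'FOLDER "Fred"')
    print(cp.state)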
At the same time that several objects are open under the NewWave environment,
all following this protocol, a system service called the Agent has privileged
access to examine and selectively filter commands that are sent by the objects
through the API Command Interface. If a command from a user action processor is
filtered, it never reaches its respective command processor, so it is never
executed. These techniques, called "command monitoring" and "command
filtering," are employed by training programs that are based on the Agent and
can be used to guide the user through learning and using NewWave applications.
The primary advantages over previous application training technologies are:
1. Training programs need not simulate applications, since the applications
themselves are used.
2. Monitoring of application activities is at a semantic level, so the
training program observes a command like OPEN FOLDER "Fred" instead of a
sequence of mouse and keyboard inputs that must be interpreted.
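To make command monitoring and filtering concrete, the sketch below places a filter between the two processors; a filtered command simply never reaches its command processor. The class and command names echo the sample lesson, but the code is only an illustration, not the Agent's implementation.

    # Sketch of Agent command monitoring and filtering; illustrative only.
    def agent_filter(cmd_class, command, target):
        # Return True to let the command through, False to filter it out.
        if cmd_class == "CBT_EDITOR":                    # e.g. the Demo pushbutton
            return True
        if cmd_class == "NPOFFICE" and command == "OPEN" and target == "Fred":
            return True
        return False

    def dispatch(cmd_class, command, target, command_processor):
        # Stand-in for the API Command Interface with the Agent monitoring it.
        if agent_filter(cmd_class, command, target):
            command_processor(cmd_class, command, target)
        # A filtered command is never executed.

    dispatch("NPOFFICE", "OPEN", "Fred", lambda *c: print("executed", c))
    dispatch("NPOFFICE", "OPEN", "Waste Basket", lambda *c: print("executed", c))  # filtered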

It is common for a NewWave application to provide alternative access to any
given command. Typically there are at least two alternatives, one using the
mouse and another using the keyboard. In either case, the same command is
generated. This greatly simplifies the effort involved in developing training.
1. The Agent and Agent Tasks
Agents can be thought of as software robots. The NewWave Agent follows
instructions from the user and can automate tasks (by sending commands to
applications), and record tasks (by receiving and storing commands from
applications). Additionally, the Agent can monitor and filter commands, as was
previously mentioned. The sequence of instructions that the Agent follows is
called a Task, and the language in which Tasks are written is called Task
Language.
2. Command Level Control
The easiest and most common use of Task Language is to control NewWave
applications. For example,
FOCUS ON DOCUMENT "Letter"
TYPE "Dear Chris,"
tells the Agent to direct its commands at (FOCUS ON) a Document object called
Letter, and then to TYPE some text in the letter. The letter is assumed to be
open. This would have happened earlier in the Task by a similar sequence:
FOCUS ON NPOFFICE "NewWave Office"
SELECT DOCUMENT "Letter"
OPEN
Here, the Agent is instructed to direct commands at the main "NewWave Office"
window, select the Document object called "Letter," and then open it. Notice
how Task Language is modeled after the semantic activities of the user. The
user would follow an identical sequence to open an object.
Training Tasks will typically control applications this way in order to
initialize them for a lesson. For example, in a lesson on enhancing text within
a document, a training Task can open the document and conduct other activities
that are not relevant to the training, rather than requiring these of the user.
Additionally, training Tasks can use command level control to present
instruction that is collected in a separate object. Consider a special object
that is used by the training author to design instruction windows
interactively, with text, graphics, and controls (e.g., pushbuttons). This
application, which we will call an "instruction-base," is used to design a set
of named instructional windows that can be randomly accessed. Within the lesson
Task, commands can be sent to the instruction-base object to open instruction
windows at appropriate times. At the beginning of such a Task, the
instruction-base object is opened:
FOCUS ON NPOFFICE "NewWave Office"
SELECT CBT_EDITOR "Lesson1 Instruction"
OPEN
Later in the Task, commands are used to display specific instruction windows:
FOCUS ON CBT_EDITOR "Lesson1 Instruction"
SHOW_WINDOW "How To Open"
This approach offers two significant benefits. First, training content (in the
instruction-base) is conveniently separated from training logic (in the Task).
Hence, lesson content can be created and modified, for updates or localized
versions, in a non-programming environment. A second, loftier goal also makes
use of this separation to implement an intelligent tutoring system (ITS). To
achieve this goal, the instruction-base must contain "training conversational
elements" of a small enough granularity to be useful in a wide variety of
training scenarios. Rather than writing Tasks, authors develop expert systems
that employ deductive logic to resolve appropriate paths of discourse, and emit
Task Language directly to the Agent. The Agent is still used as a vehicle for
delivery.
3. Class Independent and CBT Commands
So far, it has not been necessary to introduce the concept of "class
independent commands." During the control of objects from an Agent Task, most
of the commands used are specific to a class of applications ("class
dependent"). For example, SHOW_WINDOW is a command that is specific to
CBT_EDITOR objects. Such commands are executed by the object with focus, not by
the Agent.
In order to provide the rich syntax available in other high-level languages,
Agent Task Language also has class independent commands such as
IF..ELSE..ENDIF, PROCEDURE..ENDPROC, WHILE..ENDWHILE, and the variable
assignment statement. These commands are executed by the Agent.
An additional set of commands that are specific to training development can be
accessed by entering the CBT ON command at the beginning of a task. Likewise,
if these special commands are not used, the CBT OFF command can be used to
expedite the language translation process.
4. Command Level Monitoring
A monitoring procedure must be written to trap the command activities of
NewWave objects. Since Tasks may have several procedures, the ON COMMAND DO
command is used to define which procedure is for monitoring.
ON COMMAND DO TrapProcedure    'specify monitoring procedure
SET COMMAND ON                 'turn monitoring on
WAIT                           'wait for a command trap
After a monitoring procedure has been specified, monitoring will still not
occur until it is turned on with SET COMMAND ON. Typically, the third command
in such a sequence is WAIT. The WAIT command directs the Agent to stop
processing Task Language commands, and to wait for a command to be generated in
a NewWave object. Essentially, the Agent is idle until this condition is met.
Once a command is detected, a trap is produced within the Agent. This trap
directs the Agent to execute the commands associated with the monitoring
procedure, and then to resume execution of the Task, beginning with the first
command after the WAIT. It is also possible for the Agent to execute other
commands in a Task while it is waiting for a command trap, but this scenario is
more complex.
The monitoring procedure can contain any class independent commands. Its role
in a Task is to examine commands that are trapped, and filter undesired
commands. From the object's perspective, the monitoring procedure sits between
the user action and command processors. A monitoring procedure can call any of
four special functions to acquire specific command information. This way it can
determine the class and title of the object in which the command occurred, as
well as the command and its parameters.
It is particularly useful that the Agent can monitor commands in several
applications at once. This way a single trap procedure can be written to accept
either a command in the object being taught, or the CBT_EDITOR window.
PROCEDURE TrapProcedure
    CMD$ = SYS_COMMAND$()                       'the trapped command
    CMD_CLASS$ = SYS_CMDCLASS$()                'class of the object that issued it
    IF CMD_CLASS$ = "CBT_EDITOR"
        EXECUTE                                 'let the Demo button through
        result$ = "demo"
        RETURN
    ELSE
        IF (CMD_CLASS$ = "NPOFFICE") AND (CMD$ = OPEN)
            STRING% = 1
            INDEX% = 0
            OBJ_CLASS$ = SYS_COMMANDPARM$(STRING%, INDEX%)   'first parameter: object class
            INDEX% = LEN(OBJ_CLASS$)
            OBJ_TITLE$ = SYS_COMMANDPARM$(STRING%, INDEX%)   'next parameter: object title
            IF (OBJ_CLASS$ = "NPFOLDER") AND (OBJ_TITLE$ = "Fred")
                EXECUTE                         'let OPEN FOLDER "Fred" through
                result$ = "open"
                RETURN
            ENDIF
        ENDIF
    ENDIF
    IGNORE                                      'filter everything else
    result$ = "bad"
ENDPROC
This monitoring procedure will filter any command except for either OPEN
FOLDER "Fred" (in NPOFFICE) or the only command available in the CBT_EDITOR
object, which is pressing the "Demo" button.
5. User Action Level Control, Monitoring, and Interrogation
Although command level control and monitoring are quite efficient ways to work
with objects in a Task, there are instances where the user action level is more
appropriate. For example, the training Task may need to distinguish between two
alternative syntaxes of the same command in order to ensure that the user has
learned one of them. The user action level is also inherently part of all
MS-Windows based applications; therefore training can extend into the domains
of non-NewWave applications at the expense of working at a lower level.
One exciting use of user action level control in a training Task is the
demonstration. In a demonstration, commands like POINT, DRAG, CLICK,
DOUBLE-CLICK, and TYPE are used to manipulate objects with full visual feedback
of mouse pointer movement and individual key entry. Command level control would
not suffice for demonstrations, since it only provides the feedback of
differential display changes that follow changes in object state.
Interrogation functions complement user action control by locating specific
display elements. Class independent interrogation functions locate elements of
the windowing interface, such as window caption bars and system menu boxes.
Class dependent interrogation functions locate elements that are managed by
specific objects, such as a folder object icon within the NewWave Office
window.
Class dependent interrogation functions are used to ask the object with focus
questions like:
1. Where is a display element?
2. What display element is at a given point?
3. What is the status of something in an object?
Each of these questions deals with revealing information that is only known by
the object. The first two questions map between display elements that are
managed by the object and screen coordinates. WHERE_IS returns a "region,"
which is a data type that specifies a rectangular screen region by its origin
(upper-left point), width, and height. WHATS_AT uses a point that specifies an
exact pixel location by its x and y coordinates.
Together, user action level control and interrogation can be used to construct
demonstrations that will always work, regardless of where elements of the
demonstration have been moved. For example:
'Demo to open a folder object "Fred"
FOCUS ON NPOFFICE "NewWave Office"
FRED1 = WHERE_IS( "FOLDER", "FRED" )
POINT TO CENTER( FRED1 )
DOUBLE CLICK LEFT BUTTON
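The WHERE_IS/CENTER pattern in this demonstration depends only on the region data type described above. A small sketch of that arithmetic, with hypothetical pixel values:

    # Sketch of the "region" data type and the center point used by demonstrations.
    from collections import namedtuple

    Region = namedtuple("Region", ["x", "y", "width", "height"])  # origin is the upper-left

    def center(region):
        # Return the pixel at the middle of the region as an (x, y) point.
        return (region.x + region.width // 2, region.y + region.height // 2)

    fred = Region(x=200, y=150, width=64, height=48)   # hypothetical result of WHERE_IS
    print(center(fred))                                # the point to double-click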
D. The Frame Object
1. Overview
The CBT Frame Object provides a fast, easy, and flexible way to create and
display training-specific screen displays, which range from small and simple
text windows to complete simulations of whole applications.
The basic building block of a CBT lesson is the "frame," which contains
everything that might appear on the screen at any one time. By sequencing
between frames, various textual instructions and graphical illustrations can be
presented to the student. Frames may be made up of any of the following:
o Windows, which may be full- or part-screen. These may be solid color or
transparent, and may have various styles of borders. The most elaborate window
can look like a NewWave application window, with sizing controls, scroll bars,
menus, and so on. Windows are used as the framework to hold other elements, or
may be used as backgrounds.
o Text, in a variety of sizes, colors, and typefaces.
o Controls, which can be pushbuttons, check boxes, or other forms.
o Color or monochrome bitmaps, input through the ScanJet scanner or created
using a built-in painting utility.
o Icons (which are known to the system).
o Graphic primitives such as lines, arcs, and areas, which can be drawn in
various weights, colors, and fills.
o Animations, which are actually separate Animation Objects controlled by the
Frame Object.
o "Hot regions," which are invisible areas sensitive to the presence of the
mouse pointer or mouse button clicks.
A frame consists of a foreground and a background which are displayed
simultaneously, much like two slide projectors sharing one screen. For
convenience and to save data storage, a background may be shared between
several foregrounds. The unchanging parts of a display are usually placed in
the common background, while the foregrounds contain only those parts of the
displays which differ from moment to moment.
A Frame Object contains one or more frames with foregrounds and backgrounds.
Thus, a single Frame Object contains all of the displays which would be needed
for a lesson. Windows, menus, and controls are all real, but are connected to
the intelligence of the Agent rather than real application code.
Figure 2 shows a schematic view of the sample lesson discussed earlier. There
is one background, which contains the instructional window. There is one
foreground for each frame the student can see. All foregrounds contain the text
for the frame, and some foregrounds also contain pushbuttons and/or icons. When
the final animation is displayed, the instructional window should not be seen,
as it would be distracting. There are two ways to handle this: by advancing to
a second background without the instructional window, or by explicitly "hiding"
the instructional window through a class-dependent Agent command. In this
sample lesson, the latter method is used.
The actual frame sequencing for a lesson is handled by the Agent task
language. Simple statements command the Frame Object to display a particular
frame. The student then performs an action such as opening an object, selecting
from a menu, or typing in text or numbers. The task language script reacts to
the action in a predetermined way by advancing to the next frame, requesting
additional text, or presenting an error message.
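That division of labor can be sketched with the Frame Object reduced to a display call and the script reduced to a table of predetermined reactions; the frame names and actions below are made up for illustration.

    # Toy sketch of script-driven frame sequencing; names are illustrative only.
    def show_frame(name):
        print("Frame Object displays:", name)

    # Predetermined reactions: what the lesson shows after each student action.
    reactions = {
        ("Try It", "opened Fred"): "Congratulations",
        ("Try It", "opened wrong object"): "Remedial",
        ("Remedial", "opened Fred"): "Congratulations",
    }

    def react(current_frame, student_action):
        next_frame = reactions.get((current_frame, student_action), current_frame)
        show_frame(next_frame)
        return next_frame

    frame = "Try It"
    frame = react(frame, "opened wrong object")   # advances to the remedial frame
    frame = react(frame, "opened Fred")           # advances to the reward frame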
Without the Frame Object, input-output and display control would have to be
handled by the Agent Task language; with the Frame Object, the Agent is used
solely for decisions and sequencing. This greatly reduces the size of the task
language script, minimizes the need for programming expertise, and speeds the
lesson development process.
The Frame Editor can create very sophisticated screen displays. This allows
the CBT to go far beyond simply displaying text on top of existing NewWave
objects: it allows virtually any program to be simulated or prototyped, and
allows courses to be developed on subjects far removed from software
applications. Figure 3 shows two such possibilities.
2. Program Structure
The Frame Object actually exists in two forms: development and run-time. The
run-time version displays the frames inside the object, either under Agent
control, or when the object is opened; the development version adds the editing
facilities for text and graphics. When the object is opened, it determines
whether it is run-time or development by checking for the presence of the
editor. If found, the initial editor interface appears along with the first
frame, if there is one. If the editor is not there, the first frame is
displayed as if the lesson were being run by the Agent. Figure 4 is a block
diagram of the Frame Object. The Painter (bitmap editor) and Color Selector are
placed in dynamic libraries to maximize code re-use, since they appear in
several places within both the Frame Object and the Animation Object.
The Frame Object will usually be used to develop training which will run
concurrently with one or more NewWave objects. In order to simplify the
synchronization of the CBT lessons with the applications, the development-mode
Frame Object is designed to run unintrusively on top of the concurrently
running object. This allows the lesson author to optimally place windows and
other frame elements without having to guess what the final lesson will look
like.
The primary user interface of the Frame Object (Figure 5) consists of a small
window which contains menus and a list of foreground and background frames. A
new frame is created by typing a unique name in the Title field and clicking
the Add button.
Once a new frame exists, a window must be placed in either the foreground or
background to form a "parent" for all other elements in the frame. This window
may be a borderless transparent area used only to contain other elements, or it
may be a complex "dramatic" part of the lesson, such as the opaque background
or the simulated NewWave Office shown earlier in Figure 3. Recall that a frame
may contain windows, text, controls, bitmaps, icons, graphic primitives,
animations, and "hot regions." Each of these elements is created through
specialized menus which provide easy and fast selection of various features.
Once created, elements may be locked to prevent inadvertent movement. They can
be reselected for editing at any time, and may be moved, cut, or copied within
or between frames, or between any foreground and any background. Additionally,
text and bitmaps may be imported from any NewWave or Microsoft Windows
application via the Windows clipboard.
When displayed, elements are essentially "stacked" on top of one another; if
elements overlap, the "highest" one shows. A given element may be pulled higher
or pushed lower within the stack to control whether it is obscured by other
elements.
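Stated as code, the stacking rule is simply that elements are drawn from lowest to highest, so whatever is drawn last shows on top. A minimal sketch with invented element names:

    # Sketch of element stacking: draw lowest z first, so the highest element
    # shows wherever elements overlap. Element names are illustrative.
    elements = [
        {"name": "background window", "z": 0},
        {"name": "bitmap", "z": 1},
        {"name": "text", "z": 2},
        {"name": "pushbutton", "z": 3},
    ]

    def push_lower(name):
        # Push an element to the bottom of the stack so other elements obscure it.
        next(e for e in elements if e["name"] == name)["z"] = -1

    def display():
        for e in sorted(elements, key=lambda e: e["z"]):
            print("draw", e["name"])     # later draws cover earlier ones

    push_lower("text")
    display()                            # "text" is now drawn first, under everything else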
Internally, each type of element in a Frame Object is maintained in a separate
file; when a frame is loaded into memory, all elements are fetched and readied
for display. While the files are essentially invisible in an object-based
system, they may be specially accessed for the purpose of translating text to a
local language.
Figure 6 shows the logical and data-structure relationships between the
elements of a typical set of frames.
3. The Frame Object and the Agent
The Frame Object and all of its elements are designed for a high level of
interactivity with the Agent. Class-dependent commands are used at run-time to
sequence between frames, hide and show various elements,
launch animations, and sense when elements have been clicked on or contain the
mouse pointer. All menus, submenus, and controls such as pushbuttons which
appear in a displayed frame are the real thing, but they are not connected to
any application code. Instead, any frame element can be given a unique name.
Class-dependent Agent commands are used to determine when a named object has
been selected; the Agent then evaluates the choice and directs the Frame Object
to display the frame which reflects the action.
An optional "go to frame n" feature may be specified for any element in any
frame. Clicking on that element will cause the Frame Object to automatically
display a specific frame, without any interaction with the Agent, providing a
high-performance, self-contained, hypertext-like capability.
(Larry, what about: The Frame Object, in addition to managing its own
elements, may be used to specify and control the paths of animated images which
move around the screen...)
E. Animation
1. Overview
Animated demonstrations and graphics are often more instructionally valuable
than static graphics, and can play a major role in keeping students motivated.
The Animation Object is designed to provide high-quality animations with
minimal effort.
The Animation Object is analogous to an ordinary animated cartoon. It consists
of a series of pictures which, when sequenced rapidly, give the illusion of
motion. Figure 7 shows a typical Animation Object being created. The upper
"filmstrip" display gives a low-resolution view of the bird in each of the
eight poses which make up the motion of flight; the lower display is a detail
view which can be edited. When the sequence of poses is played, the bird
appears to fly in place. By moving the sequencing object horizontally as it
plays, the bird actually appears to fly across the screen.
Depending on its purpose, an animation may take several forms. An animation
might be a single image, such as an arrow, which is given a velocity in some
particular direction. Another animation might consist of a sequence of poses
which remain in the same place on the screen. Another variety might be a
barnstorming airplane, which would have a complex motion path and several
poses. The Animation Object can provide all of these.
Each pose or "frame" in an animation is a bitmap. These bitmaps can be created
in several ways:
o The integral Painter can draw them directly, in either color or monochrome.
o They may be imported from any NewWave object or Microsoft Windows
application via the Windows clipboard.
o They can be hand drawn on paper and scanned in using the HP ScanJet scanner.
o All or part of an image in one frame may be copied to another frame, where
it can be used as is or modified.
If desired, a combination of these methods may be used, so that a hand-drawn
image may be scanned in, combined with a bitmap from another application, and
the composite image colored using the Painter. An image may be created on a
black, white, or transparent background; an image may also have transparent
areas "painted" on it, so that the actual screen display will show through the
animation when it is played. The editor provides an optional alignment grid to
allow the image to be positioned precisely for smooth movement.
The initial position, velocity, and direction of an animated image may be set
using a menu in the editor of the Animation Object. If a complex path or
changes in velocity are desired, or if the lesson author wishes to freeze or
hide an animation during run time, the Animation Object's comprehensive
class-dependent task language allows full Agent control of the Object.
2. Internal Operation of the Animation Object
Like the Frame Object, the Animation Object has both development and run-time
personalities, with the difference being the presence of the editing
facilities. The run-time version plays when opened, and is primarily controlled
by the Agent or a Frame Object. The development version opens ready for
modification, and provides only limited play capability for testing the
animation.
Animations are sequences of bitmap images, transferred one at a time to the
display screen. Regardless of the size or shape of the "picture" part of a
bitmap image, the image itself is a rectangle. This greatly reduces the
computations needed for display, but requires compensating measures to ensure
that only the picture part of the image appears on the screen. Unlike ordinary
cartoons, which own the entire screen of a television, NewWave animations must
coexist with the other objects which appear on the screen. This requires a
unique series of steps to ensure that the animation doesn't corrupt the display
as it plays. Figure 9 illustrates the process.
Before the first frame of an animation is displayed, a setup process must be
done: the part of the screen display which will be overwritten by the first
image is copied to a "Save" buffer, and the first frame is then displayed on
the screen. The following steps are performed, and repeat (with appropriate
changes of frame numbers) for the remainder of the animation:
(Larry: didn't you rewrite this?)
1. The part of the screen which includes the locations of both the first and
second frames is saved in a "Composite" buffer.
2. The Save buffer is copied to the screen over the first image area,
effectively erasing the image. (Larry: doesn't step 5 do this too?)
3. The area which will be overwritten by the second image is copied from the
Composite buffer to the Save buffer so that it can be restored later.
4. The second image itself is written into the Composite buffer (in this case
at the lower right).
5. The Composite buffer is copied to the screen, erasing the first image and
placing the second image.
6. The part of the screen to be copied to the Composite buffer changes from
the first/second frames to the second/third frames, and the process repeats.
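Read as pseudocode, the six steps amount to a double-buffered copy loop. The following sketch (not the HP implementation) models the screen as a small grid and plays equal-sized frames at successive positions; all function names are invented for illustration.

    # Minimal sketch of the Save/Composite buffering scheme described above.
    def blit(dst, src, dst_row, dst_col):
        # Copy the whole of src into dst at the given position.
        for r in range(len(src)):
            for c in range(len(src[0])):
                dst[dst_row + r][dst_col + c] = src[r][c]

    def copy_region(src, row, col, h, w):
        # Return a new buffer holding the given rectangular region of src.
        return [[src[row + r][col + c] for c in range(w)] for r in range(h)]

    def animate(screen, frames, positions):
        fh, fw = len(frames[0]), len(frames[0][0])
        row, col = positions[0]
        save = copy_region(screen, row, col, fh, fw)   # setup: remember what frame 1 covers
        blit(screen, frames[0], row, col)              # setup: draw frame 1
        for i in range(1, len(frames)):
            nrow, ncol = positions[i]
            top, left = min(row, nrow), min(col, ncol)
            ch, cw = max(row, nrow) - top + fh, max(col, ncol) - left + fw
            composite = copy_region(screen, top, left, ch, cw)   # step 1: both frame areas
            blit(composite, save, row - top, col - left)         # step 2: erase the old image
            save = copy_region(composite, nrow - top, ncol - left, fh, fw)  # step 3
            blit(composite, frames[i], nrow - top, ncol - left)  # step 4: place the new image
            blit(screen, composite, top, left)                   # step 5: one copy to the screen
            row, col = nrow, ncol                                # step 6: advance and repeat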
(Larry: True?) In order to maintain the clarity of animated images, special
steps must be performed to overcome Windows' normal tendency to mix the colors
of overlaid bitmaps. This is analogous to the "matte" process used by
filmmakers. Every animation frame contains both its normal image and an
automatically created "mask," or outline, of the desired part of the image
bitmap (refer to Figure 10). When an image is written to the Composite buffer,
its mask is first combined with the buffer, removing all the colors from the
area where the image will go. The mask is also combined with the image itself,
removing the background and leaving just the picture part. The "stripped" image
is then placed precisely over the "hole" left in the screen display; since the
two parts aren't actually overlaid, there is no problem with interference.
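In raster-operation terms, the matte step is an AND of the mask with the destination followed by an OR of the stripped image. A tiny sketch with 8-bit pixel values (illustrative only; these are not Windows BitBlt calls):

    # Sketch of the mask/"matte" combine; mask pixels are 0 where the picture
    # part of the frame is, and 0xFF elsewhere.
    def composite_with_mask(buffer_px, image_px, mask_px):
        out = []
        for b, i, m in zip(buffer_px, image_px, mask_px):
            hole = b & m                   # AND the mask into the buffer: cut a "hole"
            stripped = i & ~m & 0xFF       # strip the image background, keep the picture
            out.append(hole | stripped)    # OR the stripped image into the hole
        return out

    # Example: a 4-pixel row whose middle two pixels belong to the picture.
    buffer_row = [0x11, 0x22, 0x33, 0x44]
    image_row  = [0x00, 0xAA, 0xBB, 0x00]
    mask_row   = [0xFF, 0x00, 0x00, 0xFF]
    print([hex(p) for p in composite_with_mask(buffer_row, image_row, mask_row)])
    # background pixels survive untouched; picture pixels replace them cleanly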
3. The Animation Object and the Agent
The Animation Object has a rich class-dependent task language which allows
powerful Agent control of a running animation. Animations can be started or
stopped, frozen or hidden; the course and velocity can be changed at will, or a
predefined complex course can be loaded into the object in one step. Subsets of
the frames in an object can be specified for display so that several different
animations can be performed without having to load a new object.
F. CBT Startup and Menuing
A runnable CBT lesson consists of an Agent script object and a Frame Object,
and may also contain one or more Animation Objects. A full CBT course will have
many of these objects. If all of these had to reside openly in the NewWave
system, they would undoubtedly clutter the Office and confuse the student. To
prevent these problems, all of the objects in a CBT course are contained in a
special CBT Menu Object,
which is a special form of container object. It has many of the properties of
other container objects like folders, but remains invisible in a run-only
system.
The CBT Menu Object serves several purposes:
o It contains all of the other CBT objects, simplifying the Office display.
o It protects the CBT objects from accidental deletion or modification.
o It provides a startup capability for the various Agent scripts which drive
the lessons.
o It presents a menu which allows the student to choose a particular lesson
within a course, and maintains a record of which lessons have been completed.
o It accepts additional CBT lessons from newly installed objects and
integrates them into the standard menu.
To start the CBT, the student pulls down the Office Help menu and chooses
Tutorial. This opens the CBT Menu Object and displays its initial menu, which
might appear as in Figure 11. First-time users will probably not have acquired
the mouse skills needed to use menus in the ordinary way. To get them going,
the NewWave Office comes with an autostarting Agent task which gives the
student the option of running the CBT by pressing the Enter key. This
autostarting task can be disabled after the student takes the lesson, removing
the overhead of the question when it is no longer needed.
The CBT Menu Object also allows a lesson to be specified as the default;
pressing the Enter key will initiate the default lesson. This provides a backup
for new students who may be working on a system whose autostart training task
has been removed.
CBT menus are nested in outline form. Students choose a broad topic, and a
submenu is then brought up which specifies the actual lesson choices. After a
lesson has been completed, a check mark is placed on the menu to help the
students track their progress.
When a lesson is chosen, the CBT Menu Object dispatches the appropriate task
language script object to the Agent, which then runs the lesson. When the
script is finished, the Agent returns control to the CBT Menu Object so the
next lesson can be run.
Developers of NewWave applications may write their own CBT lessons using the
CBT Development Facility. As part of the installation process for the new
application, its CBT is loaded into the CBT Menu Object and its menu structure
is registered. This allows all lessons on the NewWave system to be run from the
same place, ensuring easy use and easy tracking of all lessons.
IV. The Future of NewWave Computer-Based Training
The next generation of Computer-Based Training must teach more and respond
more to the needs of the individual user. Training must be developed to
overcome the limitations of fixed lesson formats by providing assistance and
insight at a variety of experience levels, and by instructing within the user's
own "real" environment.
One way to overcome these limitations involves the use of expert systems. An
instructional knowledge-base of IF..THEN style rules uses deductive logic to
determine the most appropriate instruction path to satisfy a user's needs in
response to a natural language query. Authors will move from designing specific
lessons to designing rules, representative of how an application works and how
to present and sequence instruction.
Looking towards the future, expert systems, natural language processors, and
other such technologies will be "plugged into" the Agent to obtain the current
world model of a user's NewWave environment, and to drive the Agent. Working at
a semantic level, it is reasonable for these external technologies to control
and
monitor the Agent's activities. Using interrogation, demonstrations can be
constructed "on the fly" and work despite variable circumstances in the
environment.
New hardware technologies such as voice, interactive videodisc, CD-ROM, and
"pen-and-paper" input devices are all finding places in the training
environment. The inherent flexibility and expandability of the NewWave
architecture make it easy to incorporate new ideas, and the power and ease of
use of NewWave make it a natural vehicle for exploring new techniques. The
NewWave CBT Development Facility exemplifies the spirit of innovation which
differentiates NewWave from other software environments.

Administrative Status


Event History

Description  Date
Inactive: IPC expired  2018-01-01
Time limit for reversal expired  2007-06-08
Letter sent  2006-06-08
Letter sent  2000-06-22
Letter sent  1999-08-12
Inactive: Cover page published  1999-06-22
Grant by issuance  1999-06-08
Inactive: CCB assigned  1999-06-08
Inactive: First IPC assigned  1999-06-08

Abandonment History

There is no abandonment history.

Fee History

Fee Type  Due Date  Date Paid
MF (category 1, 2nd anniv.) - standard  2001-06-08  1999-08-03
Registration of a document  2000-04-20
MF (category 1, 3rd anniv.) - standard  2002-06-10  2002-05-21
MF (category 1, 4th anniv.) - standard  2003-06-09  2003-05-21
MF (category 1, 5th anniv.) - standard  2004-06-08  2004-05-25
MF (category 1, 6th anniv.) - standard  2005-06-08  2005-05-20
Owners on Record

The current and past owners on record are displayed in alphabetical order.

Current Owners on Record
HEWLETT-PACKARD COMPANY
Past Owners on Record
BARBARA B. PACKARD
GLENN STEARNS
RALPH THOMAS WATSON
Past owners who do not appear in the "Owners on Record" list will appear in other documents on record.
Documents



Document Description  Date (yyyy-mm-dd)  Number of Pages  Image Size (KB)
Drawings  1999-06-21  12  314
Abstract  1999-06-21  1  46
Claims  1999-06-21  2  60
Description  1999-06-21  198  7,272
Maintenance Fee Notice  2006-08-02  1  173
Correspondence  1999-08-11  2  46
PCT Correspondence  1993-03-29  1  56