
HEWLETT-PACKARD 

JOURNAL 




August 1994 




The Hewlett-Packard Journal is published bimonthly by the Hewlett-Packard Company to recognize technical contributions made by Hewlett-Packard (HP) personnel. While the information found in this publication is believed to be accurate, the Hewlett-Packard Company disclaims all warranties of merchantability and fitness for a particular purpose and all obligations and liabilities for damages, including but not limited to indirect, special, or consequential damages, attorney's and expert's fees, and court costs, arising out of or in connection with this publication.

Subscriptions: The Hewlett-Packard Journal is distributed free of charge to HP research, design and manufacturing engineering personnel, as well as to qualified non-HP individuals, libraries, and educational institutions. Please address subscription or change of address requests on printed letterhead (or include a business card) to the HP headquarters office in your country or to the HP address on the back cover. When submitting a change of address, please include your zip or postal code and a copy of your old label. Free subscriptions may not be available in all countries.

Submissions: Although articles in the Hewlett-Packard Journal are primarily authored by HP employees, articles from non-HP authors dealing with HP-related research or solutions to technical problems made possible by using HP equipment are also considered for publication. Please contact the Editor before submitting such articles. Also, the Hewlett-Packard Journal encourages technical discussions of the topics presented in recent articles and may publish letters expected to be of interest to readers. Letters should be brief, and are subject to editing by HP.

Copyright © 1994 Hewlett-Packard Company. All rights reserved. Permission to copy without fee all or part of this publication is hereby granted provided that 1) the copies are not made, used, displayed, or distributed for commercial advantage; 2) the Hewlett-Packard Company copyright notice and the title of the publication and date appear on the copies; and 3) a notice stating that the copying is by permission of the Hewlett-Packard Company.

Please address inquiries, submissions, and requests to: Editor, Hewlett-Packard Journal, 3000 Hanover Street, Palo Alto, CA 94304 U.S.A.






HEWLETT-PACKARD 

JOURNAL   August 1994   Volume 45 • Number 4



Articles

6   An Advanced Scientific Graphing Calculator, by Diana K. Byrne, Charles M. Patton, David Arnett, Ted W. Beers, and Paul J. McClellan

20  User Versions of Interface Tools

23  HP-PAC: A New Chassis and Housing Concept for Electronic Equipment, by Johannes Mahn, Jürgen Häberle, Siegfried Kopp, and Tim Schwegler

38  Thermal Management in Supercritical Fluid Chromatography, by Connie Nathan and Barbara A. Hackbarth

    What is SFC?

43  Linear Array Transducers with Improved Image Quality for Vascular Ultrasonic Imaging, by Matthew G. Mooney and Martha Grewe Wilson

52  Structured Analysis and Design in the Redesign of a Terminal and Serial Printer Driver, by Catherine L. Kilcrease

62  Data-Driven Test Systems, by Adele S. Landis

29  High-Speed Digital Transmitter Characterization Using Eye Diagram Analysis, by Christopher M. Miller

Departments

4   In this Issue
5   Cover
5   What's Ahead
66  Authors



Editor, Richard P. Dolan • Associate Editor, Charles L. Leath • Publication Production Manager, Susan E. Wright • Illustration, Renée D. Pighini • Typography/Layout, Cindy Rubin

Advisory Board: Thomas Beecher, Open Systems Software Division, Chelmsford, Massachusetts • Steven Brittenham, Disk Memory Division, Boise, Idaho • William W. Brown, Integrated Circuit Business Division, Santa Clara, California • Frank J. Calvillo, Greeley Storage Division, Greeley, Colorado • Harry Chou, Microwave Technology Division, Santa Rosa, California • Derek T. Dang, System Support Division, Mountain View, California • Rajesh Desai, Commercial Systems Division, Cupertino, California • Kevin G. Ewen, Integrated Systems Division, Sunnyvale, California • Bernhard Fischer, Böblingen Medical Division, Böblingen, Germany • Douglas Gennetten, Greeley Hardcopy Division, Greeley, Colorado • Gary Gordon, HP Laboratories, Palo Alto, California • Matt J. Harline, Systems Technology Division, Roseville, California • Bryan Hoog, Lake Stevens Instrument Division, Everett, Washington • Roger L. Jungerman, Microwave Technology Division, Santa Rosa, California • Paula H. Kanarek, InkJet Components Division, Corvallis, Oregon • Thomas F. Kraemer, Colorado Springs Division, Colorado Springs, Colorado • Ruby B. Lee, Networked Systems Group, Cupertino, California • Alfred Maute, Waldbronn Analytical Division, Waldbronn, Germany • Dona L. Miller, Worldwide Customer Support Division, Mountain View, California • Michael P. Moore, VXI Systems Division, Loveland, Colorado • Shelley I. Moore, San Diego Printer Division, San Diego, California • Steven J. Narciso, VXI Systems Division, Loveland, Colorado • Garry Orsolini, Software Technology Division, Roseville, California • Han Tian Phua, Asia Peripherals Division, Singapore • Ken Poulton, HP Laboratories, Palo Alto, California • Günter Riebesell, Böblingen Instruments Division, Böblingen, Germany • Marc Sabatella, Software Engineering Systems Division, Fort Collins, Colorado • Michael H. Saunders, Integrated Circuit Business Division, Corvallis, Oregon • Philip Stenton, HP Laboratories Bristol, Bristol, England • Beng-Hang Tay, Singapore Networks Operation, Singapore • Stephen R. Undy, Systems Technology Division, Fort Collins, Colorado • Jim Willits, Network and System Management Division, Fort Collins, Colorado • Koichi Yanagawa, Kobe Instrument Division, Kobe, Japan • Dennis C. York, Corvallis Division, Corvallis, Oregon • Barbara Zimmer, Corporate Engineering, Palo Alto, California



© Hewlett-Packard Company 1994. Printed in U.S.A. The Hewlett-Packard Journal is printed on recycled paper.






In this Issue 



At first glance the HP 48GX advanced scientific graphing calculator looks exactly like its predecessor, the HP 48SX scientific expandable calculator. With the exception of a few changes in the key labels, both calculators are physically the same. The big difference is in functionality. In addition to having all the functionality of the HP 48SX, the HP 48GX has more advanced problem-solving features, such as polynomial root finding and Fourier transforms, expanded memory capability (up to 4.75M bytes of address space), and seven new plot types, including 3D and animation. A major attribute of the HP 48GX is a much improved user interface. As described in the article on page 6, one of the main design objectives for the HP 48GX was to make the calculator easy to use for both novice and experienced users. To help accomplish this and other objectives, a team of six mathematics professors was recruited to help design the HP 48GX. For the user interface, dialog boxes much like those found in an Apple Macintosh or Microsoft® Windows PC are used. Users are presented with input forms to fill in for a particular task and are given application-specific keys for acting on the data filled in. To handle all the new features and an expanded address space, an improved (in speed, cost, and manufacturability) CPU was built and a new memory controller configuration was developed for the HP 48GX. The article describes the hardware design for the HP 48GX and the differences in memory controller configurations between the HP 48SX and the HP 48GX.

Environmentally friendly, easy to manufacture (i.e., manufacturability), low parts count, and low cost are 
some of the phrases we hear used today to characterize what an ideal product design should be. The 
article on page 23 describes a chassis and electronic component housing concept developed by the 
Mechanical Technology Center at HP's Böblingen Manufacturing Operation for trying to provide the
"ideal" product design. This packaging concept, called HP-PAC, replaces the traditional metal chassis 
with expanded polypropylene foam for housing electronic parts. The article describes how this concept 
was used on a typical HP workstation chassis, resulting in a reduction in mechanical parts, screw joints, 
assembly time, disassembly time, transport packaging, and housing development costs. In addition, all 
this was achieved with an environmentally friendly, recyclable material. 

The quality of any transmission system is based on how well it can transmit error-free information from 
one location to another. In a digital transmission system, the primary metric used to measure this quality 
is the bit error ratio, or BER. The BER is defined as the number of bits received in error divided by the 
number of bits transmitted. Although the BER value does convey some important information, it is only a 
pass/fail parameter and it does not tell the complete story about the quality of a digital transmitter. There- 
fore, a typical BER test system includes an oscilloscope or an eye diagram analyzer for doing time-domain 
measurements on the transmitted waveform. An eye diagram is a waveform that has an opening like an 
eye, and in general the more open the eye the greater the likelihood that the receiver system will be able 
to distinguish a logical 1 from a logical 0. The HP 71501A eye diagram analyzer described in the article on
page 29 provides the means to characterize high-speed digital transmitters using eye diagram analysis. 
The instrument constructs both conventional eye diagrams and special eyeline diagrams to perform 
extinction ratio (ratio of maximum to minimum optical transmission in dB) and mask tests (tests used to 
measure the size and shape of the eye). The article describes how the instrument uses a technique 
called harmonic repetitive sampling to construct eye diagrams similar to those produced by digital sam- 
pling oscilloscopes. A modified version of the sampling technique is used to construct eyeline diagrams. 
With the modified sampling technique the samples or dots can be connected so that a whole trace or a 
portion of a bit sequence can be displayed at one time, enabling the eyeline diagram to provide more 
information than the conventional eye diagram. 







Four of the papers in this issue are from the 1993 HP Technical Women's Conference. ► In supercritical fluid chromatography, temperature control is very important. Cooling is critical on the fluid supply end of the system and heating is critical on the separation end. The paper on page 38 describes the challenges engineers at HP's Analytical Product Group faced in modifying components from existing HP GC and LC products to meet the thermal requirements of the HP G1205A supercritical fluid chromatograph. ► Increasingly, test software development consumes the majority of the time spent developing manufacturing resources for electrical test processes required for new instrument products. In the paper on page 62 the author describes a test system that exploits the commonality among instruments to reduce test software development time. ► Improving the near-field image quality of the HP 21258A linear phased-array transducer, which is used for vascular ultrasonic imaging, was the main goal for the engineers at HP's Imaging Systems Division. The paper on page 43 provides a basic overview of ultrasound imaging (comparing sector phased-array and linear phased-array transducers) and then describes how customer feedback helped to guide the design of two new vascular transducers. ► Most of the software literature that discusses structured analysis and structured design techniques focuses on applying the techniques to the development of new software systems. The paper on page 52 describes the application of structured analysis and design to the redesign of an existing system. The article describes how the techniques were used to lay out the existing system so that areas for redesign and improvement could be easily identified. The paper also includes examples of redesigned modules and recommendations for software projects considering using structured analysis and design techniques for a redesign effort.

C.L. Leath
Associate Editor



Cover 

The HP 48GX advanced scientific graphing calculator displays a wireframe plot of the surface 

z = x³y - xy³.



What's Ahead 

In the October issue, seven articles will describe the design and development of the first two VXIbus modules for the HP HD2000 data acquisition system: the HP E1413A 64-channel scanning analog-to-digital converter and the HP E1414A pressure scanning analog-to-digital converter. Four articles will discuss various aspects of the design and applications of the HP 9493A mixed-signal LSI test system. There will also be design articles on the HP 4291A high-frequency impedance analyzer, the HP 15800A virtual remote software, and the FDDI Ring Manager application for the HP Network Advisor protocol analyzer. Other articles will discuss frame relay conformance testing and an electrical overstress test system.



Microsoft is a U.S. registered trademark of Microsoft Corporation.
Windows is a U.S. trademark of Microsoft Corporation.






An Advanced Scientific Graphing 
Calculator 



The HP 48G/GX combines an easy-to-learn graphical user interface with 
advanced mathematics and engineering functionality, expanded memory 
capability, and seven new plot types. 

by Diana K. Byrne, Charles M. Patton, David Arnett, Ted W. Beers, and Paul J. McClellan



The HP 48G/GX, Fig. 1, is a state-of-the-art graphing calculator that combines an easy-to-learn graphical user interface with advanced mathematics and engineering functionality. It is a continuation of the HP 28S [1] and HP 48S/SX [2] series of calculators, which are designed for high power, extendability, and customizability.

The HP 48G/GX includes improvements to address the needs of both novice and advanced users of scientific and graphing calculators. For the new user or the user who does not use certain functionality very often, the calculator has a dialog-box-style, fill-in-the-blanks user interface.




For the user who needs to do advanced problem solving, the calculator offers the following features:
• Differential equation solvers
• Polynomial root finder
• Financial problem solver
• Library of engineering equations and constants
• Fourier transforms
• Matrix manipulations
• Linear algebra operations.

For the user who needs more memory and extendability, the GX version has 128K bytes of built-in RAM, compared to 32K bytes in the S and G versions. The only other difference between the G and the GX is the two memory card slots in the HP 48GX. The second memory card slot accepts up to 4M bytes of RAM or ROM.

The graphing capability has been expanded with the addition 
of seven new plot types, for a total of fifteen. The HP 48S/SX 
has function, polar, parametric, conic, truth, histogram, bar, 
and scatter plots and the HP 48G/GX has these plus differen- 
tial equation, slope field, wireframe, parametric surface, grid 
map, pseudocontour, and y-slice cross-section plots.

The HP 48S/SX functionality has been retained in the HP 48G/GX. The new calculator has twice the ROM and retains much of the original code. The HP 48S/SX and HP 48G/GX both have the following features [2]:
• Unit management
• MatrixWriter
• EquationWriter
• HP Solve numeric solver
• RPN-style stack calculation
• Symbolic mathematics
• Time and alarms
• Statistics operations
• Variables and directories for data storage
• User-definable keyboard and custom menus
• RPL programming language
• Two-way infrared communications link
• RS-232 serial cable connector.

Education Trends

The creation of the HP 48G/GX can be traced to the American Mathematical Society (AMS) meeting in January 1992. We had been closely following the trend of using technology in the mathematics classroom because our software team



Fig. 1. HP 48GX scientific graphing calculator.






includes former mathematics educators and a calculus text- 
book author, and because we visit mathematics, engineering, 
and education conferences and talk to educators both to 
promote existing products and to find out what teachers and 
students would like to see in future products. 

We had watched the interest in graphing calculators grow 
steadily each year, and had been involved in workshops for 
teachers using technology in the classroom through an HP 
grants program. Although the HP 48S/SX was becoming a 
standard for engineering students and professional engi- 
neers, it was just the first step in meeting the needs of the 
education community, and we continued to hear that it was 
too difficult to use and too expensive for classroom use. By 
January 1992, when we talked to educators attending the 
AMS meeting about their needs and about the calculators 
that they were considering for use in the classroom, it be- 
came obvious that we needed to have a new product for the 
education market no later than the 1993 back-to-school pe- 
riod. This resulted in the formation of an education advisory 
committee, a group of six mathematics professors who 
would help us design the calculator to fit the needs of the 
education community, and then give us feedback on our 
implementation. 

Design Objectives 

Our number one objective for the new calculator was ease 
of learning. The users of the HP 48S/SX told us that they 
appreciated its power, but its complexity made it difficult 
for both novice users and experienced users. The novices 
tended to be intimidated by the extensive owner's manual 
and the difficulty of mastering so many new operations,
while the experienced users had a hard time remembering 
how to use some operations that they did not use frequently. 

Creating a state-of-the-art graphing calculator was our second objective. We needed to add some graphing capabilities, such as tracing along a graph and shading between graphs, just to maintain parity with other graphing calculators, but we also wanted to go far beyond the competition with features such as 3D graphing and animation.

Our third objective was to enhance the high-end mathematics capability of the HP 48S/SX with the addition of features such as differential equation solving, polynomial root finding, more matrix operations, and Fourier transforms, thereby strengthening our position as the most powerful technical calculator in the world.

By offering two models, the HP 48G and the HP 48GX, we 
intended to please customers at both ends of the education 
spectrum. The HP 48G has a list price that represents a sub- 
stantial decrease from the list price of the HP 48SX or HP 48S. This makes the HP 48G competitive with other graphing
calculators and makes it appealing even at the high school 
level. The HP 48GX has more appeal for college students, 
with four times as much built-in memory and two plug-in 
card ports that are expandable to 4M bytes of memory. 

Operating System

A calculator or computer operating system is primarily a set of conventions for memory organization, data structures, and resource allocation together with a set of software tools to aid in performing operations in accordance with those conventions. In contrast, an application is software built



using the resources and conventions of the operating sys- 
tem. As new hardware resources become necessary and 
available, the operating system must grow to manage those 
resources effectively and as transparently as possible to the 
applications built on the system. 

The operating system (and system programming language) in the HP 48G/GX is the RPL operating system, first used in the HP 18C and HP 28C and subsequently in a number of other machines including the HP 28S, HP 48S/SX, and now, with extensions, in the HP 48G/GX.

HP 48G/GX Fundamentals 

The key concept underlying the operation of the calculator is the idea of objects on the stack. A stack is a data structure that is similar to a stack of cafeteria trays. The clean trays are added to the top of the stack, and as trays are needed, they are removed from the top of the stack. This type of last in, first out ordering characterizes the HP 48G/GX stack. All operations take their arguments (if any) from the stack and return their results (if any) to the stack.
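As an illustration only (Python rather than the calculator's RPL, with invented names), the last-in, first-out model described above can be sketched as follows:

stack = []

def push(obj):
    stack.append(obj)              # new objects go on top, like clean trays

def binary_op(fn):
    b = stack.pop()                # arguments come off the top of the stack...
    a = stack.pop()
    push(fn(a, b))                 # ...and the result goes back onto the stack

push(3)
push(4)
binary_op(lambda a, b: a + b)      # an operation such as + consumes two arguments
print(stack)                       # [7]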

There is only one data stack in the HP 48G/GX. This resource is shared by the user and the system RPL programmer, who must take great care to make sure that any objects that belong to the user are preserved through the operation of system RPL programs. For example, the user may have a few numbers sitting on the stack, then decide to plot the graph of a function. The system RPL program that runs when the DRAW key is pressed does many operations that require the use of the stack, such as recalling the plotting parameters, checking that they are valid, calculating the range over which to plot, evaluating the user's function, and converting the function values to pixel coordinates. After the graph is complete (or if the drawing of the graph is interrupted by the user), when the user sees the stack again, the same numbers that were there to begin with should not have been disturbed.

Instead of trays, users may collect various types of numeric, symbolic, and graphic objects on the HP 48G/GX stack. The types of objects available in the HP 48G/GX include real and complex numbers, real and complex arrays, binary integers, names, characters, strings, tagged objects, algebraic objects, unit objects, and graphic objects. There are also backup objects, library objects, directories, programs, and lists. (HP 48 object types are discussed in more detail in reference 2.)

In a key-per-function calculator, there is a single key that the user needs to press to get the machine to perform any operation, such as cosine. The HP 48G/GX has many more operations than the 49 keys on the keyboard, so there needs to be a way to access all the functionality without assigning one operation to each key on the keyboard. This is accomplished through the use of menus and softkeys. The top row of keys on the keyboard do not have anything printed on them because they correspond to menu labels that appear along the bottom of the screen. These keys are called softkeys, and their meaning changes whenever the corresponding labels on the screen are changed by the software.
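The softkey idea can be sketched roughly as follows (an illustrative Python fragment, not HP 48 firmware; the menu contents are invented). Each top-row key simply runs whatever action the currently displayed label assigns to it:

import math

trig_menu = [("SIN", math.sin), ("COS", math.cos), ("TAN", math.tan)]

def press_softkey(menu, index, argument):
    label, action = menu[index]        # the same physical key...
    return label, action(argument)     # ...does whatever the current label says

print(press_softkey(trig_menu, 1, 0.0))    # ('COS', 1.0)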

HP 48S/SX Memory Controller Configurations

We will now discuss the memory controller configurations used in the HP 48S/SX and how these are used in implementing the various types of expanded address modes developed for these products. The next section outlines the differences





Fig. 2. Standard memory controller configuration for the HP 48S/SX calculator (MMIO, system RAM 32K, system ROM 224K, covered ROM 32K, unused controller address space). Memory sizes are in bytes.



in configuration between the HP 48S/SX and the HP 48G/GX and discusses how these differences are used to extend and refine the expanded address technology to provide access to a total of 4.75M bytes of code and data as transparently as possible.

The CPU bus architecture first developed for the HP 71 and used in all HP calculators since that time has several useful features. One of the nicest is its address configuration capabilities. All chips attached to the bus are required to be able to change, on command of the bus, the range of addresses that evoke a response from the chips. Such a system eliminates, once and for all, the inconvenience and headache of configuring jumper switches on cards designed to plug into the machine. For a consumer product like a calculator this is not only a nicety, it is a necessity.

In the early days of the architecture (HP 71 to HP 28C), the CPU bus lines were actually routed around the circuit board and any RAM, ROM, or memory mapped I/O that was attached to the bus had to be custom-made with the bus interface attached. This had the advantage of allowing an arbitrary number of parts to be added to the system with assurance that the system would be capable of handling all of them in one way or another. It had the grave disadvantage of putting a price premium on such essential items as ROM and RAM.

In the second-generation CPU chip, a fixed number of memory controllers were included onboard the CPU. The CPU bus was then, for all practical purposes, completely hidden within the CPU itself. The combination of external standard RAM or ROM together with one of the internal memory controllers was then equivalent (so far as the CPU bus is concerned) to a standard bus device.

In the standard device implementations, the size of the device (that is, the address space occupied by the device) is designed into the device. In the second-generation chip, the size of the controllers was mask programmed at the time of manufacture since we knew exactly what size each controlled device would be.

With the advent of plug-ins for the HP 48S/SX, the configuration capabilities of the memory controllers had to be expanded to include varying the apparent size of the memory controller to conform with the device being plugged in. This is one of the many advanced features in the third-generation, HP 48S/SX implementation of the architecture. This resizing feature, in addition to allowing plug-ins of various sizes, also presented the opportunity to explore expanded address modes, which we have come to call the "covered" technology, for reasons that will be apparent shortly.

The third-generation CPU chip has six memory controllers. In the HP 48SX, these are allocated to memory mapped I/O, system RAM, port 1, port 2, and system ROM, and there is one extra controller. Their configuration in the usual state is shown in Fig. 2. The memory controllers are shown with their sizes and locations in the address space (00000h to FFFFFh). They are also pictured as having a vertical location in "priority space." In the CPU bus definition the devices are chained, with the result that devices closest to the CPU on


Fig. 3. Execute-in-place configuration for HP 48S/SX covered code.




Fig. 4. Copy-to-mailbox configuration for HP 48S/SX covered data.


the chain have the first opportunity to respond to bus requests. In consequence, if two devices are configured with overlapping address ranges, the one closer to the CPU on the chain effectively hides the more distant one. In Figs. 2 to 12, higher priority can be interpreted as "closer to the CPU" or "hides those below."

As shown in Fig. 2, the memory controller for system RAM hides the section of ROM shown as covered. This is the reason for the name "covered" technology.
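A rough sketch of this priority ordering (Python, purely illustrative; the addresses and sizes are made up, not the real HP 48S/SX memory map):

controllers = [                          # highest priority first, "closest to the CPU"
    ("system RAM",  0x70000, 0x78000),
    ("covered ROM", 0x70000, 0x78000),   # same range, lower priority: hidden
    ("system ROM",  0x00000, 0x38000),
]

def respond(address):
    for name, start, end in controllers:
        if start <= address < end:       # the first (closest) device answers the bus
            return name
    return "unused address space"

print(respond(0x71000))                  # 'system RAM' -- the ROM behind it is covered
controllers.pop(0)                       # shrink/move system RAM out of the way...
print(respond(0x71000))                  # ...and the covered ROM becomes visible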

Fig. 3 shows more detail of the covered ROM and the first way in which it is used. In one section of the covered ROM there is assembly language code (mostly math routines) that requires no RAM resources outside the CPU for execution. This code is executed in-place in the covered ROM by shrinking and/or moving the memory controller for system RAM so that the relevant section of code is temporarily uncovered. When the routine finishes execution, system RAM is returned to its normal configuration.

A second set of routines, all of which only need access to a fixed set of locations within system RAM, can execute with system RAM in any one of 16 locations, as long as they themselves are not currently covered by system RAM.

Fig. 4 shows a second way in which the covered ROM is used. In this case, code and data (mostly data) are copied from covered ROM to a mailbox at a fixed location in system RAM. After the copy is completed, system RAM is returned to its normal configuration and the code and data are available to the rest of the system. Coders using this data must remain aware that it is volatile and can be destroyed by another fetch of data from covered ROM. In this sense, this method is not transparent.
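A minimal sketch of this copy-to-mailbox pattern (Python, illustrative only; the reconfiguration call is a stand-in, not a firmware routine):

covered_rom = {0x7C000: b"font bitmap bytes"}    # hypothetical covered contents
mailbox = bytearray(32)                           # fixed location in system RAM

def reconfigure_system_ram(uncovered):
    pass        # stand-in for shrinking/moving the system RAM controller

def fetch_from_covered(addr, length):
    reconfigure_system_ram(uncovered=True)        # temporarily expose the covered ROM
    data = covered_rom[addr][:length]
    mailbox[:len(data)] = data                    # copy through the mailbox
    reconfigure_system_ram(uncovered=False)       # back to the normal configuration
    return bytes(mailbox[:len(data)])             # volatile: the next fetch overwrites it

print(fetch_from_covered(0x7C000, 4))             # b'font'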

Another way in which covered ROM is used is shown in Fig. 5. It is as transparent as the execute-in-place method but entails fewer restrictions on the code and data that can be included. In the HP 48SX code, this system is usually tied to the execution of ROMPTRs. Recall that ROMPTRs are RPL objects that substitute for hard addresses of objects whose precise location is not known in advance (and in fact might not even be present). They are midway between hard addresses that only change at compile/link time and identifiers whose corresponding objects may move between subsequent calls at run time.

If, during the conversion of a ROMPTR to an address, it is determined that the corresponding object lives in covered ROM, the object is copied from covered ROM, through the mailbox, to the TEMPOB (temporary object) area. The address of its new location in the TEMPOB area is then returned. Fig. 6 shows a comparison of a named ROM word (keyword or command) as it would exist in covered ROM and as copied to the TEMPOB area. Although we'll refer back to Fig. 6 later, for now notice that in addition to the object itself, an additional piece is added to the image in the TEMPOB area. This piece is a ROMPTR preceding the object itself. This allows



Fig. 5. Copy-to-TEMPOB configuration for HP 48S/SX covered ROM words.





Fig. 6. Comparing the structure of a ROM word as resident in ROM and when copied to TEMPOB using covered technology.

the routine converting the ROMPTR to an address to check whether the object in question is a copy of one residing elsewhere. This method of covered ROM access, which we call "covered ROM word access," will be especially relevant to our discussion of the HP 48G/GX.
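The covered ROM word access just described might be sketched like this (Python, purely illustrative; the object names and the TEMPOB representation are invented):

covered_objects = {("lib 2", "word 7"): "<ROM word body>"}   # hypothetical covered ROM
tempob = []                                                   # temporary object area

def romptr_to_address(romptr):
    body = covered_objects[romptr]        # the object is found to live in covered ROM
    tempob.append((romptr, body))         # a ROMPTR prefix precedes the copied body
    return ("TEMPOB", len(tempob) - 1)    # address of the new copy is returned

addr = romptr_to_address(("lib 2", "word 7"))
prefix, body = tempob[addr[1]]
print(prefix == ("lib 2", "word 7"))      # the prefix marks this object as a copy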

Preexisting design elements of the RPL system contributed greatly to the practicality and transparency of covered ROM word access, including:
• Encapsulation of code and data into RPL objects that are of determinable size
• Indifference of RPL object execution to object location in RAM or ROM
• Equivalence of direct and indirect execution of RPL objects, which allows (noncircular) structures to be stored and used in the same format.

HP 48G/GX Memory Controller Configurations

The HP 48GX has a number of important features including:
• Up to 128K bytes of built-in system RAM
• One plug-in port electrically equivalent to the HP 48SX ports
• Access to 512K bytes of system ROM
• Access to 4M bytes of RAM or ROM at a second port using industry-standard parts.

These features required increasing the usable address space from 0.5M bytes to 4.75M bytes, an 850% increase over previous machines.
vious machines. 

While the HP 48G/GX has a CPU functionally equivalent to the third-generation CPU discussed above and thus has six memory controllers, these controllers are configured and used differently. Fig. 7 shows the standard HP 48GX configuration. The controller previously allocated to port 2 is now used as a bank switch control, and the extra controller is now allocated to port 2. Furthermore, there are now as many as 34 layers over the last 128K bytes of address space.

Eliminated in this configuration is the HP 48S/SX covered ROM. This means that all of the functionality included in the HP 48S/SX can be accessed more quickly. Two things that are visibly enhanced are plotting (since the math routines are not covered) and screen update (since the font bitmaps are not covered). Since there are a great many more covered places to access, however, there are many more "temporary" configurations to keep track of while working with the covered data.

To simplify the system, we use only a single covered technique, namely, covered ROM word access, with appropriate modifications. Without this simplification, the number of access method and configuration combinations would be unmanageable. Moreover, this is the only feasible method of covered access to code written for the HP 48S/SX or not expressly written for the new configuration.

Fig. 8 shows the configuration while copying an object from a bank of port 2 to the TEMPOB area. Port 1 is unconfigured. In the unconfigured state, the controller responds to only a handful of bus commands and acts as if it weren't there for data access.

Fig. 9 shows the configuration while copying an object from the second half of the upper system ROM. In this case, both ports are unconfigured.

Fig. 10 shows the configuration while copying an object from the first half of the upper system ROM. Since a controller move or resize operation takes many more CPU resources than configure or unconfigure, it is often necessary to copy objects from this section, through a mailbox, and then into the TEMPOB area.



Fig. 7. HP 48GX standard memory controller configuration.






Fig. 8. HP 48GX configuration for copying an object from a bank of port 2 to TEMPOB.

Fig. 11 shows the configuration when it is determined that nothing is plugged in at all. In this case, the only covered access is to the first half of the upper system ROM. Again, it is likely to be necessary to copy this material through a mailbox. Otherwise, all the ROM words can be executed in-place.

Fig. 12 shows the standard HP 48G configuration, which is identical to Fig. 11 except for the smaller size of system RAM. While it is not strictly necessary to use this configuration, which matches one of the HP 48GX configurations, there are advantages. First, it allows maximal code sharing between the two machines. In fact, the code can be identical



Fig. 9. HP 48GX configuration for copying an object to TEMPOB from the upper half of system ROM.

in this case. Second, it gains the advantage of faster access 
to the base functionality, providing a more responsive 
implementation. 

Hardware Design

The heart of the HP 48G/GX is a fourth-generation CPU chip. This custom ASIC is built around the original HP 71 processor, and its development was key to the creation of the HP 48G/GX. This chip has four advantages over the third-generation chip used in the HP 48S/SX. First, it is produced



Fig. 10. HP 48GX configuration for copying an object to TEMPOB through a mailbox.






Fig. 11. HP 48GX all-ports-empty configuration.



using a different CMOS process, allowing better stability with onboard voltage regulation circuitry. Second, these improved voltage characteristics and several low-level optimizations allow the new CPU to operate at twice the speed of its predecessor. This speed increase gives it a 4-MHz bus rate. Third, the new CPU is packaged in a 160-pin quad flatpack, improving the manufacturability of the HP 48G/GX. Fourth, with all these improvements, the final cost is lower, increasing the budget for other hardware improvements to the calculator.

The faster processing speed of the HP 48G/GX CPU gave the software team incentive to improve the user interface, implementing graphical routines that would not have been acceptable at the slower processing rate. This added functionality required an increase in data storage space, so we boosted the size of ROM and RAM. We also decided to add the facilities to bank-switch a data card plugged into card port two.

The HP 48G/GX circuitry, with its additional components, had to fit in the same physical space as in the HP 48SX. The product plan and schedule did not allow changes to production tooling or plastic parts except for those that were absolutely necessary. At times we felt like poets trying to write crossword puzzles. The HP 48SX circuit board design was optimized such that it did not leave us much free space. These space constraints affected many of the HP 48G/GX hardware design choices.



The RAM increased from 32K bytes in the HP 48SX to 128K bytes in the HP 48GX, while the HP 48G retained the original 32K-byte chip. This difference between the G and the GX offers two advantages. First, it provides more differentiation between the functions and cost of the G and the GX, increasing the product family's market appeal. Second, the difference in RAM size provides a way for the calculator to know whether it is a G or a GX. If the calculator scans the RAM and finds only 32K bytes, then there will never be a plug-in data card installed. With this information the covered memory options become much simpler. The RAM memory size becomes an internal product type identifier, and several software routines are optimized for faster performance on the HP 48G.
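In effect the firmware can branch on RAM size alone; a trivial sketch (Python, with invented names, not the actual routine):

def product_type(ram_bytes):
    # 32K bytes of built-in RAM means an HP 48G: no plug-in card can ever be
    # installed, so the covered-memory handling can take the simpler path.
    return "HP 48G" if ram_bytes == 32 * 1024 else "HP 48GX"

print(product_type(32 * 1024))     # HP 48G
print(product_type(128 * 1024))    # HP 48GX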

ROM Changes

The HP 48G and GX share a common ROM code set. They also share a common circuit board. While this simplifies documentation, manufacturing, and stock control, it also complicates some areas. The HP 48GX RAM chip is wider and longer than the chip used in the HP 48G: the 32K RAM is in a 28-pin small-outline package (SOP), and the 128K device is a 32-pin SOP. Both conform to the JEDEC pinout standard. A 128K device was chosen that has an extra chip select line at pin 30. This chip select is tied high, allowing pins 1 through 28 of the smaller RAM to overlay pins 3 through 30 of the larger device. The extra chip select of the HP 48GX RAM



Fig. 12. HP 48G standard configuration.






matches the Vcc line of the HP 48G RAM chip, and all of the other lines are pinout-compatible.

The difference in physical package width also posed a problem. The foil patterns on the circuit board had to be modified to accept RAM chips with different lead spacings across the package. The immediate response was simply to stretch the oval-shaped patterns. However, this resulted in the foil extending well under the body of the 128K chip, a situation that could have led to solder bridging where the solder paste contacted the part body. This is avoided by using two different solder stencils on the manufacturing line. A paste of solder is laid on the blank circuit board before parts are loaded onto the board. A metal stencil defines the pattern of the solder paste, just as a silk screen controls the pattern of ink on a shirt. By using a different stencil pattern for the G and GX circuit boards, we control the original location of the solder paste and keep it out from under the 128K-byte RAM body. Once all the components are loaded onto the circuit board and the solder is heated to a molten state, a danger might again exist for the solder to flow under the part body. Fortunately, the nature of the mechanical contact between the RAM lead and the circuit board foil tends to cause the solder to pool or wick to the lead rather than spreading across the elongated foil pad.

The packaging of the ROM chip was also changed between the HP 48SX and the HP 48GX. The SX used a square 52-pin quad flatpack for its 256K bytes of program data. The code size of the HP 48GX is doubled to 512K bytes. Its package is a 32-pin SOP like the 128K-byte RAM chip. Their common package configuration allowed us to conserve space in the placement of the two chips and in the routing of signal wires between them.

Use of a standard SOP ROM chip also allowed us to use one-time programmable (OTP) ROMs in prototype calculators. An OTP uses the same semiconductor core as a UV-erasable EPROM. To get the semiconductor chip into an SOP, however, the manufacturer omits the familiar glass window in the chip, covering the device in opaque plastic. The resulting ROM is no longer erasable.

Typically, a product schedule requires months between code release and the start of production so that ROMs can be built with the software code built-in. The use of OTPs on this project cut the required time from months to days. For one prototype run, the time between code availability and product build was only a few hours.

The HP 48G/GX CPU multiplexes the highest address bit, A18, with an additional chip enable line, CE3. The original idea was to allow future expansion of the HP 48 family, either to use a larger ROM chip or to include an additional memory mapped device. By the time the HP 48G/GX design was complete, we had decided to do both. We doubled ROM to 512K bytes, and we added bank switching to card port 2. Two small HCMOS chips were added to the board to demultiplex these signals. The two chips are a quad NAND chip and a hex D flip-flop, similar to the standard TTL devices. The multiplexing is accomplished by simply toggling a control bit inside the HP 48G/GX CPU. To demultiplex the A18 and CE3 signals, we developed a protocol for mirroring the state of the internal bit to one of the external D flip-flops. The NAND gates handle signal demultiplexing, and the remaining five flip-flops form a register for the card port 2 bank address.

Other Hardware Changes

The card ports of the HP 48SX were designed for Epson memory cards. Several unused lines were adapted to provide external video signals to drive an enlarged display for classroom use. On the HP 48GX, the video lines are retained only on card port 1. On card port 2, the video lines are replaced by five additional address lines. The system software allows the card in port 2 to be subdivided into 128K-byte sections, with each section treated as a virtual plug-in card. Five bank select address lines permit up to 32 virtual plug-ins in card port 2, yielding a maximum card size of 4M bytes in the plug-in port. With the ROM, RAM, and plug-in options, an HP 48GX can access 4,980,736 bytes of onboard data.
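The quoted total is consistent with summing the memory spaces mentioned above; the breakdown below is our arithmetic check, not a table from the article:

KB = 1024
port2 = 32 * 128 * KB             # 5 bank-select lines -> 32 virtual 128K-byte cards
rom   = 512 * KB                  # built-in system ROM
ram   = 128 * KB                  # built-in system RAM (HP 48GX)
port1 = 128 * KB                  # plug-in card port 1
print(port2 + rom + ram + port1)  # 4980736 bytes, matching the figure in the text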

Since the inception of the HP 48 family of calculators, liquid crystal display technology has progressed significantly. The display in the HP 48G/GX provides improved visibility by improvements in pixel contrast. The display is thinner than before. This change in glass thickness reduces the parallax between the pixel within the display and its shadow on the rear face of the display. In the HP 48SX, the pixel contrast was lower and the shadow was not dark enough to cause problems, but in testing the new HP 48G/GX display under various light conditions, we found that shadow effects made the display hard to read. With the thinner glass now used, the pixel and its shadow appear as one image, and the shadow now enhances the appearance of the pixel.

Changes to plastic parts were not permitted, except where necessary. The back case of the HP 48G/GX required changes. The changes were all accomplished by making mold inserts. Where text on the mold needed changes or additions, the affected area of the mold was milled away. A piece of steel was placed into the hole to make a perfect fit, and the face of this new piece was etched or inscribed with the new textures and features. The new back case helps identify the differences between card ports 1 and 2, updates the copyright information, adds a mark indicating that the HP 48G/GX complies with Mexico's importation laws, and adds an area for a customized nameplate. The customized nameplate is a piece of metal with adhesive on one side. The customer's name can be engraved on the plate and attached to an inset area of the back case. This is the same nameplate used on HP's palmtop computer family.

The result of these changes is a computer platform that is more powerful than its predecessor, is well-suited to the enhanced user interface developed by the software team, is more versatile for both the user and the design engineer, and is less expensive to produce. It started as a processor upgrade and became a major product improvement.

User Interface

With ease of learning and ease of use the primary goals for the new HP 48G/GX calculator, the user interface and many built-in applications have been largely redesigned.

Input forms provide the common starting point for the new and rewritten applications in the HP 48G/GX. Working






Fig. 13. A typical input form (callouts: label, help line, field, menu).

much like dialog boxes in an Apple Macintosh or Microsoft® Windows PC, input forms provide a fill-in-the-blanks guide to the input needed for a task, plus application-specific menu keys for acting on that input.

For selecting an application in a particular topic and for picking an input from among several choices we developed choose boxes, a type of pop-up menu that suggests alternatives and narrows the input focus.

We designed message boxes to make feedback to the user more manageable within our increasingly crowded display space. Message boxes appear on top of whatever the user is working on and provide more flexibility for formatted messages and icons than the two-line, fixed-location error messages they replace. They also preserve the context that can otherwise be lost when something surprising happens within an application.

Input Forms

An input form provides both a means to enter data pertinent to an application and operations that permit the user to direct actions.

Visually, an input form consists of (see Fig. 13):
• A title suggesting the form's purpose
• One or more fields, typically with explanatory labels, which are used to gather and display user input
• A help line that details the input expected in the selected field
• Menu keys that provide more options for working within or exiting the input form.

Each input form field can be one of four types. Most input forms, such as the Set Alarm and I/O Transfer input forms, contain several or all types of fields. Text fields are used to enter arbitrary HP 48G/GX objects like real numbers and matrices; the object types allowed are specific to each text field. In Fig. 14, a text field is used to enter an alarm message in the Set Alarm input form.

When a single choice among several is required, list fields are used to eliminate invalid input and to help focus user actions. To select an entry in a list field, a choose box is displayed. In Fig. 15, a list field is used to specify the transfer format in the I/O Transfer input form.

Sometimes only a simple yes-no, do-or-don't type of choice is needed. For this we use check fields. Fig. 16 shows how the overwrite (OVWR) field is used to specify whether or not an existing variable should be overwritten.

Finally, when arbitrary input is possible but logical choices are also available, combined text/list fields are employed. In


Fig. 14. Using a text field in an input form.

the Transfer input form, the Name field is a combined field that 
permits new names to be entered or the names of existing 
HP 48G/GX variables or PC files to be selected (see Fig. 17). 
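One way to picture the four field types is as small descriptors like the following (an illustrative Python sketch; the data layout is not the HP 48G/GX internal representation, and the example values are invented):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Field:
    label: str
    kind: str                        # "text", "list", "check", or "text/list"
    choices: Optional[list] = None   # only list and text/list fields offer choices
    value: object = None

example_fields = [
    Field("MESSAGE", "text"),                                  # arbitrary object entry
    Field("FMT",  "list",      choices=["ASCII", "Binary"]),   # one of several choices
    Field("OVWR", "check",     value=False),                   # simple yes/no
    Field("NAME", "text/list", choices=["DATA1", "PROG2"]),    # typed or chosen
]
print([f.kind for f in example_fields])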

As the figures illustrate, each of the three base field types has associated with it a dedicated menu key that triggers the unique feature of that field type. This feature is an important part of how we maintained a calculator key-per-function-style interface within the constraints of a small display and with no pointing device. In other graphical user interfaces, visual elements such as list arrows are activated by mouse clicks to elicit different behaviors from fields. In the HP 48G/GX, the user's finger acts as the pointing device, triggering the desired behavior by pressing the appropriate action button for each field. Consistent location of the three types


Fig. 15. Using a list field in an input form.






Fig. 16. Using a check field in an input form.

of action buttons helps the user navigate an input form confidently.

Some input form menu keys perform application-specific operations — for example, DRAW in Plotting. In the second row of the input form menu are more advanced input form operations for resetting a field or the entire form, displaying the object types allowed in a field, and temporarily accessing the user stack to calculate or modify a field value.

Choose Boxes 

Choose boxes are used to make a choice in an input form 
list field. They are also used in most subject areas to choose 
a specific application from among several. Fig. 18 shows the 
choose box that is displayed when the STAT key is pressed to 
perform statistical calculations. 



Fig. 18. A typical choose box.

When circumstances require, choose boxes can include any 
or all of several advanced features. The Memory Browser 
application, for example, is actually a maximum-size choose 
box embellished with a title, multichoice capability, and a 
custom menu (see Fig. 19). 

Message Boxes 

Message boxes are used primarily for reporting errors that 
require attention before proceeding. For example, if the user 
attempts to enter a vector in the EXPR field of the Integrate 
input form, a message box appears to inform the user of the 
problem (see Fig. 20). 

Some applications also use message boxes to give additional 
information. For example, in the Solve Equation input form, the
user can press INFO any time after a solution has been found 
to review the solution and determine how it was calculated 
(see Fig. 21). 

Input Form Implementation

For the HP 48S/SX, we developed an RPL tool called the parameterized outer loop to speed development of new interfaces such as the MatrixWriter by automating routine key and error handling and display management. The input forms in the new HP 48G/GX embrace this concept — in fact, the input forms engine is a parameterized outer loop application — and take it one step farther to automate routine matters of application input entry and selection of options. The input forms engine brings a uniform interface to all new HP 48G/GX applications.

While narrowly focusing the task of application development by managing command input tasks, the input forms engine also leaves much room for the customization that helps optimize the HP 48G/GX for ease of use. Since an important measure of our progress towards our goals for the calculator was to be feedback from typical users throughout the development cycle, we designed the input forms engine from the ground up to be highly customizable. This was accomplished in a programmer-friendly manner by including


Fig. 17. Using a combined text/list field in an input form.

Fig. 19. The Memory Browser: a complex choose box.




Fig. 20. A typical message box.

over fifty hooks into the input forms engine's responses to external and internal events. External events are triggered by users and include low-level events such as key presses and high-level events such as completion of a field entry. Internal events are usually activated by external events, such as formatting a completed field entry for proper display. A single external event can trigger a half dozen or more internal events, all of which are customizable.

Input form applications can customize any or all form-level events such as title display or field events such as displaying a help line. Each field has a field procedure associated with it, and the entire form has a form procedure associated with it. Whenever an event occurs, the appropriate field or form procedure is called with an identifying event number and perhaps additional information. If the procedure does not customize the event, it returns FALSE to the input forms engine. If it does customize the event, the procedure performs the custom behavior and returns TRUE. In this manner, every event first queries the proper form or field procedure to determine if custom behavior is needed, then handles the event normally only if it isn't customized. If a form or field has no custom behavior, it specifies a default procedure that quickly responds FALSE to all event queries.
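The query-then-default scheme can be sketched as follows (Python, illustrative only; the event names and procedures are invented, not those of the RPL engine):

def default_proc(event, data=None):
    return False                          # no custom behavior: engine handles the event

def expr_field_proc(event, data=None):
    if event == "DISPLAY_HELP":           # customize only the events that pertain to it
        print("Enter expression to integrate")
        return True                       # TRUE: the engine skips its normal handling
    return False

def handle_normally(event, data):
    print("engine default handling of", event)

def dispatch(procedure, event, data=None):
    if not procedure(event, data):        # query the form or field procedure first...
        handle_normally(event, data)      # ...fall back to the engine's own handling

dispatch(expr_field_proc, "DISPLAY_HELP")        # custom help line
dispatch(expr_field_proc, "FORMAT_ENTRY", 42)    # engine default handling
dispatch(default_proc, "DISPLAY_TITLE")          # default procedure answers FALSE quickly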

The reason for a form procedure and multiple field procedures is to spread the burden of customization throughout the form. Since each field procedure only checks for the events that pertain to it, and since the form procedure only checks for form-level events, no single event processing is slowed by a highly customized form that would otherwise have to compare the event number against a lengthy list of event and field combinations.

For the HP 48G/GX project we needed another layer of
regularity not enforced by the input forms engine. Because
we sought and reacted to usability feedback almost until the
code was released to production, the user interface details
for each subject area were subject to constant change. It was
imperative, therefore, that we maintain a strict and formal





Fig. 21. The Solve Equation INFO message box.



division between unchanging and well-understood tasks, such
as getting and saving problem domain information and
calculating results, and the user interface details that were
changing regularly. We developed a set of conventions that
were embodied in what we called translation files. We used
naming rules and constrained responsibilities to greatly
mitigate the effects of user interface changes on the underlying
problem-solving functionality. For example, one RPL word
in the plotting translation file has the simple task of reading
the current horizontal plot range from calculator memory.
Since the word has no presumptions about how and when it
will be called, references to it could be (and often were)
changed around as the fields populating the plotting input
form were worked out.

Choose Box Implementation 

The choose box engine is very much like the input forms
engine. For customization, the programmer can supply a
choose procedure that responds to 26 messages.

A feature of choose boxes that simplifies their use is the
option, heavily used by the built-in applications, of items
that encapsulate both display and evaluation data. For
example, when an angle measure (degrees, radians, or grads)
is to be chosen in certain input forms, the choose box engine
displays plain descriptions but returns an RPL program that
sets the selected angle measure. This circumvents the need
for branching according to the returned object and simplifies
the extension of choices.
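A minimal Python sketch of that item convention follows. Callables
stand in for the RPL programs that the built-in applications return, and
the item contents are made up for illustration.

    # Choose box items that pair a display string with the object returned
    # when the item is chosen.

    def set_angle_mode(mode):
        print("angle mode set to", mode)

    items = [
        ("Degrees", lambda: set_angle_mode("DEG")),
        ("Radians", lambda: set_angle_mode("RAD")),
        ("Grads",   lambda: set_angle_mode("GRAD")),
    ]

    def choose(items, selection):
        """The display shows item[0]; the engine returns item[1] unevaluated."""
        label, action = items[selection]
        return action

    # The caller simply evaluates whatever object comes back, so no branching
    # on the selected label is needed and new choices require no new code here.
    choose(items, 1)()     # prints: angle mode set to RAD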

Results: Benefits and Costs 

Initial feedback from the educational advisory committee
and user reviews suggests that the use of input forms and
other graphical user interface elements has greatly improved
the ease of use of the HP 48G/GX over the HP 48S/SX.
However, the path we took to this accomplishment was more
challenging than we had planned.

Event customization, originally conceived as a means to
extend the functionality of input forms in unforeseen ways,
turned out to be a key component of our ability to prototype
new user interface ideas rapidly. As their name may imply,
the original intent of input forms was very modest compared
to the role they now play. We designed input forms to be the
standard means by which applications gather data for a
task. One or more input forms would be displayed as
necessary within the context of another, undefined, application
context. This original concept is applied successfully
throughout the calculator. For example, in the Memory
Browser, when NEW is pressed to create a new variable, an
input form is used to get the information required (see Fig.
22). In this context, the user can do only three things in the
input form: enter data, cancel the form, or accept (OK) the
form. This simple but effective behavior was the model used
for the original input form design.

As the project developed, however, it became apparent that
an input form could serve not only as an information gatherer
but also as an action director. Input forms thus graduated
from simple dialog boxes to full-fledged application
environments.







Fig. 22. Memory Browser NEW input form.

Interestingly, no major changes to the input forms engine
were necessary or even desirable to support their new role.
Instead, the essence of input form functionality remained
always data management, and event customization was
applied selectively where needed to enhance application
forms.

In a similar manner, the event-driven choose box engine was
eventually pressed into service as a powerful base for
list-style applications like the Memory Browser.

The combination of lean, focused, standard feature sets for
input forms and choose boxes and high customizability
proved invaluable during the calculator design refinement.
Throughout the middle portion of the project, when the
basics had been settled but many user interface details were
still unclear, we were able to prototype new ideas quickly
and realistically by customizing event responses.

Translation files were another development effort that helped
us keep the design and implementation moving forward.
However, we learned over time that their overhead caused
some duplication of code and inefficiency to creep into the
interface between the input forms and the calculator
mainframe. We addressed this issue where possible by making
simple and safe code substitutions while leaving the interface
concepts intact to enable high-confidence code defect
fixes late in the project. In effect, we made a choice between
maintainability and high performance that still remains a
controversial topic among the HP 48G/GX developers.



Like the other plotting routines, all the 3D plotting routines
assume that the function of interest is stored in EQ. Further,
they assume, by default, that the function is represented as
an expression in the variables X and Y; for example, (u,v) →
sin(u+v) is represented as SIN(X+Y) in EQ. The use of other
variable names is provided for by input form options or by
the INDEP and DEPEND keywords.

While this section is titled "3D Plotting," a better name would
be "visualization techniques for functions of two variables."
This would cover the perspective view of the graph of a scalar
function of two variables (WIREFRAME), the slicing view of
a scalar function of two variables (YSLICE), the contour-map
view of a scalar function of two variables (PCONTOUR), the
slope interpretation of a scalar function of two variables
(SLOPEFIELD), the mapping grid visualization of a two-vector-
valued function of two variables (GRIDMAP), and the image
graph of a three-vector-valued function of two variables
(PARSURFACE).

Given this unity of purpose, there is considerable overlap in
the global parameters (options) used in these routines. These
plotting parameters are stored in the variable VPAR, analogous
to PPAR. 3 The main data structure stored in VPAR describes the
view volume, a region in abstract three-dimensional space
in which most of the visualizations occur (see Fig. 23).

VPAR quantities controlling the view volume are:

• Xleft and Xright, controlling the width of the view volume
• Yfar and Ynear, controlling the depth of the view volume
• Zlow and Zhigh, controlling the height of the view volume
• Xe, Ye, and Ze, the coordinates of the eye point.

In addition to these, VPAR contains other quantities used by
some of the routines. These are:

• XXleft and XXright, an alternative X input range, used for
GRIDMAP and PARSURFACE
• YYnear and YYfar, an alternative Y input range, used for
GRIDMAP and PARSURFACE (note that this differs from the
current Suite3D interpretation)
• Nx and Ny, the number of X and Y increments desired, used
in all of the routines instead of or in combination with RES.



3D Plotting 

The functionality described in this section is a suite of 3D
graphing and viewing utilities for the HP 48G/GX. We had
several requirements to consider in creating these routines.
Our aims were that they be psychologically effective and
require only a small amount of code.

In exploring visualization techniques on a variety of
machines we found that increasing "realism" (ray-traced,
Phong-shaded, hidden-line, etc.) in the graphical presentation
of functions of two variables did not necessarily correlate
with increasing ease of comprehension. The HP 48G/GX
routines represent the results of some of these experiments
(including time-to-completion as an important factor).

All of the 3D plotting routines are intended as seamless
extensions of the other built-in plotting utilities. In particular,
they share the same standard user interface and are selected
as alternative plot types. The 3D plotting routines are
SLOPEFIELD, WIREFRAME, YSLICE, PCONTOUR, GRIDMAP, and PARSURFACE.



SLOPEFIELD 

The SLOPEFIELD plot type draws a lattice of line segments
whose slopes represent the function value at their center
point. Using SLOPEFIELD to plot f(x,y) allows your eye to pick
out integral curves of the differential equation dy/dx = f(x,y).
It is quite useful in understanding the arbitrary constant in
antiderivatives.
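The sampling can be sketched in a few lines of Python. The segment
length and the arctangent mapping from slope to segment direction are
choices made here for illustration, not details taken from the
calculator's code.

    import numpy as np

    def slope_field_segments(f, xleft, xright, ynear, yfar, nx=13, ny=8):
        """Short line segments whose slopes are f(x, y) at the lattice centers."""
        xs = np.linspace(xleft, xright, nx)
        ys = np.linspace(ynear, yfar, ny)
        half = 0.4 * min((xright - xleft) / nx, (yfar - ynear) / ny)  # half-length
        segments = []
        for y in ys:
            for x in xs:
                angle = np.arctan(f(x, y))            # slope -> segment direction
                dx, dy = half * np.cos(angle), half * np.sin(angle)
                segments.append(((x - dx, y - dy), (x + dx, y + dy)))
        return segments

    # The example problem later in this section: dx/dt = sin(x*t) on
    # 0 <= t <= 2, 2.8 <= x <= 3.6.
    segs = slope_field_segments(lambda t, x: np.sin(x * t), 0, 2, 2.8, 3.6)
    print(len(segs))   # 13 * 8 = 104 segments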



Fig. 23. VPAR parameters in relation to the view volume.



"loll XfjCjhl 

j 1 Unit 

(Xe.Y e ,Z 0 ) 



© Copr. 1949-1998 Hewlett-Packard Co. 



August 1994HewleU-PacltardJbama] 17 



Fig. 24. SLOPEFIELD plot of dx/dt = sin(xt).

The number of lattice points per row is determined by Nx
and the number of lattice points per column is determined
by Ny. The input region sampled is given by Xleft < X < Xright
and Ynear < Y < Yfar.

The input form in this case allows the user to:

• Choose or enter the defining expression for the function to
be plotted
• Choose the names of the two variables (identical to INDEP
and DEPEND)
• Choose Xleft and Xright (default to their current value, or
XRNG if no current value)
• Choose Ynear and Yfar (default to their current value, or YRNG
if no current value)
• Choose Nx and Ny (default to their current value or 13 and 8
if no current value)
• Verify and/or choose RADIANS, DEGREES, or GRADS mode.

In trace mode for SLOPEFIELD, the arrow keys jump the cursor
from sample point to sample point, indicating both the
coordinates of the sample point and the value of the slope at
that point.

Example Problem: Determine graphically whether all solutions
of the differential equation dx/dt = sin(xt) with initial
conditions 3.0 < x(0) < 3.1 satisfy 2.8 < x(t) < 3.6 for all t in [0,2].

Solution: Choose SLOPEFIELD plot type and enter SIN(X*T) as
the current equation. Choose T as the independent variable
and X as the dependent variable. Choose 0 as Xleft, 2 as
Xright, 2.8 as Ynear, and 3.6 as Yfar. Verify RADIANS mode, and
draw the result. As seen in Fig. 24, almost all of the integral
curves in this region leave the window either through the
top or the bottom. Therefore, not all the integral curves
satisfy 2.8 < x(t) < 3.6 for t in [0,2].

WIREFRAME 

The WIREFRAME plot type draws an oblique-view, perspective,
3D plot of a wireframe model of the surface determined by
z = f(x,y). The function determined by the current equation is
sampled in a grid with Nx samples in each row and Ny samples
in each column. Each sample is perspective-projected
onto the view screen along the line connecting the sample
and the eye point (see Fig. 25).

Neighboring samples are connected by straight lines. The
sampled region is determined by the base of the view volume
(Xleft, Xright, Ynear, Yfar). The region of the view screen
represented in the PICT GROB (graphics object 3 ) and hence
on the display is determined by the projection of the view
volume on the view screen (see Fig. 26).
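A minimal Python sketch of this projection is shown below. It assumes
the view screen is a plane one unit from the eye point and perpendicular
to the y (depth) axis; the calculator's actual view-screen geometry
(Fig. 25) may differ in detail.

    def project(sample, eye):
        """Project a 3D sample onto the view screen along the sample-eye line."""
        xs, ys, zs = sample
        xe, ye, ze = eye
        t = 1.0 / (ys - ye)     # where the line crosses the plane y = ye + 1
        return xe + t * (xs - xe), ze + t * (zs - ze)

    def wireframe(f, xleft, xright, ynear, yfar, eye, nx=13, ny=8):
        """Sample z = f(x, y) on an nx-by-ny grid and project every sample."""
        rows = []
        for j in range(ny):
            y = ynear + j * (yfar - ynear) / (ny - 1)
            row = []
            for i in range(nx):
                x = xleft + i * (xright - xleft) / (nx - 1)
                row.append(project((x, y, f(x, y)), eye))
            rows.append(row)    # neighbors in each row and column get connected
        return rows

    # Monkey-saddle example used later in the text, eye point (4, -10, 3).
    grid = wireframe(lambda x, y: x**4 - 4*x**2*y**2 + y**4,
                     -1, 1, -1, 1, (4, -10, 3))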




Fig. 25. Perspective projection of a point in the view volume onto
the view screen.

The input form in this case allows the user to:

• Choose or enter the defining expression for the function to
be plotted
• Choose the names of the two variables (identical to INDEP
and DEPEND)
• Choose Xleft and Xright (default to their current value, or
XRNG if no current value)
• Choose Ynear and Yfar (default to their current value, or YRNG
if no current value)
• Choose Zlow and Zhigh (default to their current value, or
default YRNG if no current value)
• Choose Xe, Ye, and Ze (default to their current value, or 0,
-1, 0 if no current value)
• Choose Nx and Ny (default to their current value or 13 and 8
if no current value)
• Verify and/or choose RADIANS, DEGREES, or GRADS mode.

In trace mode for WIREFRAME, the arrow keys jump the cursor 
from sample point to sample point and the display indicates 
all three coordinates of the sample point. 

Example Problem: Determine graphically whether the surface
defined by z = x⁴ − 4x²y² + y⁴ is, at the origin, concave up,
concave down, or neither.

Solution: Choose WIREFRAME plot type and enter X^4-4*X^2*Y^2+Y^4
as the current equation. Choose X and Y as the independent
and dependent variables. Choose -1 for Xleft, 1 for
Xright, -1 for Ynear, 1 for Yfar, -1 for Zlow, and 1 for Zhigh so
that the view volume surrounds the origin. Choose 4 for Xe,
-10 for Ye, and 3 for Ze to give a distant, oblique view of the
graph. As seen in Fig. 27, the graph displays a "monkey
saddle," which is neither convex nor concave at the origin.

New Interactive Features 

The picture environment, which is invoked automatically 
when graphs are drawn or by pressing the PICTURE key, al- 
lows the user to interact with a graph. The user can move 

Fig. 26. Relationship of view volume and eye point to XRNG and YRNG.







Fig. 27. WIREFRAME plot of the surface determined by z = x⁴ − 4x²y² + y⁴
with view volume [-1,1]×[-1,1]×[-1,1] and eye point (4,-10,3).

the cross hairs around using the arrow keys, trace along the
graph, add picture elements such as dots, lines, and circles,
or do interactive calculus operations such as finding the
derivative at the cross hairs location.

Trace, Faster Cross Hairs 

The HP 48S/SX cross-hair-moving code was rewritten for the
HP 48G/GX. The cross hairs needed to be faster, to fade less
as they moved, and to accommodate added functionality
such as tracing and shading. The cross hair code originally
came from the HP 28 and has been maintained and modified
over the years. In the HP 28, most of the cross hair code was
written in high-speed assembly language and actually
contained a routine called SLOW which was buried deep within
RPL subroutine calls. SLOW was needed to slow the cross
hairs down to an acceptable speed. The use of this word
was discovered during the process of porting the code from
the HP 28 to run on the bigger HP 48S/SX display. There
were many occasions during the HP 48G/GX project when
we were trying to speed up various operations and wished
we could just find the word SLOW and take it out!

The fading of the cross hairs as they moved was improved
by changing the code so that the time between turning the
cross hairs off at one pixel and turning them on at the next
pixel is minimized. To do this, all the calculations required
for moving are now done before turning the cross hairs off.
Unfortunately, the new display on the HP 48G/GX trades off
response time for contrast, so although it is brighter and has
higher contrast than the HP 48S/SX display, it takes longer
for pixels to turn dark. Thus, much of the work to reduce
fade in moving cross hairs was canceled out by the new
screen characteristics. The user can darken the display by
holding down the ON key and pressing the + key a few times,
and this will make the moving cross hairs easier to see.

Tracing along a graph with the cross hairs presents a
challenge because the user's function must be evaluated at every
point, so in effect the system RPL programmer must turn
control over to the user at each point of the graph. This
required careful attention to error handling and to managing
the data stack, which is a shared resource. The procedures
for tracing vary with the different plot types. The procedures
are kept in the property list associated with each plot
type, and then the appropriate procedure is passed in and
evaluated when trace mode is turned on. It required quite a
bit of rewriting to implement this object-oriented, extensible
approach because much of the existing cross hair code had
previously undergone rewriting and optimizing for speed
and code size.



Animation 

The ANIMATE command is a program that was easy to write
and that the user could have written in the user-RPL
programming language, but we added it for the sake of convenience.
Also, it is used as part of Y-slice 3D plotting. It sets up a loop
that repeatedly puts graphics objects into the PICT display
area.
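In outline, the command's behavior amounts to a loop like the following
Python sketch; the show function is a stand-in for writing each GROB to
PICT, and details such as display position and timing are omitted.

    import time

    def animate(grobs, delay=0.1, cycles=3, show=print):
        """Repeatedly copy a sequence of graphics objects to the display area."""
        for _ in range(cycles):
            for grob in grobs:
                show(grob)          # stand-in for writing the GROB to PICT
                time.sleep(delay)   # pause between frames

    animate(["frame 1", "frame 2", "frame 3"], delay=0.05, cycles=1)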

A quick way to get started with animation is to press PICTURE
to go to the interactive graphics environment, where you
will be able to create some pictures to animate. Press EDIT,
then DOT+ to turn on the etch-a-sketch-style drawing mode in
which pixels are turned on wherever you move the cross
hairs. Using the up, down, left, and right arrow keys, sketch
something, then press STO to send a copy of your picture to
the stack. Continue sketching, press STO again, and repeat
this procedure, continuing to add to your sketch until you
have a handful of pictures on the stack, say six of them.
Press CANCEL to leave the PICTURE environment, and you will
see the picture objects sitting on the stack. They are
called GROBs, which is short for graphics objects. To use
the ANIMATE command, all you have to do is enter the number
of GROBs (for example, press 6 then ENTER if you created six
pictures), then press the ANIMATE key, which you will find in
the GROB submenu of the PRG menu. Your series of sketches
will come to life as the ANIMATE command flips through
them.

Mathematics 

Several new mathematical features were added to meet the
needs of the educational market and to match or exceed
corresponding features recently introduced by our competition.
Design trade-offs made for and inherited from earlier,
less capable platforms were reconsidered, and relevant
software developed for earlier machines but not used in the
HP 48S/SX was used wherever appropriate.

Design and Implementation Issues 

The HP 48G/GX is targeted at the college-level mathematics,
science, and engineering educational market. We hoped,
also, to achieve more success in high school advanced
placement courses. In these environments calculators are
used as pedagogical tools, illustrating mathematical and
modeling concepts introduced in the courses.

As a pedagogical tool, the calculator's accuracy and reliability
are paramount design goals. Speed of execution is important
but secondary to the validity of the computed results. To
achieve high accuracy and reliability the computational
methods needed to be more numerically sophisticated than
typical textbook methods. This greater complexity is hidden
from the casual user wherever possible, but made available
to the sophisticated user so the methods can be tuned to
their needs.

One means of achieving maximum accuracy and reliability is
to read the current literature and consult with expert
specialists to obtain the best methods, then implement those
methods from scratch. We have employed this approach in the






User Versions of Interface Tools 



Although the primary focus of the new user interface for the HP 48G/GX was to
enhance our built-in applications, it became apparent as the project progressed
that calculator owners who program would want access to the same capabilities
to enhance their efforts. For the choose box, message box, and especially the input
form tools, the biggest challenge involved scaling back the numerous features to
produce simple user commands that still offer customization potential.

The message box command, MSGBOX, was designed to display pop-up messages
with a minimum of fuss. Thus, it takes just one argument, the message string,
and produces a word-wrapped, normal-sized message box.

The choose box command, CHOOSE, is slightly more complicated. To enable but
not require the same object-oriented use of choose boxes as the built-in
applications, the CHOOSE command accepts a list of items in two formats. In the simplest
format, an item is specified by a single object, which is displayed and returned if
chosen. In the alternate format, an item is specified by a two-element list object.
The first element is displayed in the choose box, and the second element is returned
if the item is chosen.

For simplicity of the user interface, CHOOSE displays a normal-sized choose box
without the multiple-choice capability used by some built-in applications.

The MSGBOX and CHOOSE commands largely follow the same interface
specification methods as their system-level counterparts. This differs markedly from the input
form user command, INFORM. To maintain complete flexibility over all elements of
form layout and behavior, the input forms engine takes three arguments for each
label and thirteen arguments for each field, specifying such details as exact location
and size, display format, and so on. Added to that are global arguments for the
form procedure and form title and some other details. All together, an input form
with four labeled fields requires 68 arguments. While this amount of information
is justified for the varied needs of built-in applications, it is an unnecessary burden
for programmers just wanting to get some simple input from the user.

For the INFORM command, therefore, we developed an automatic form layout 
scheme that serves most needs, with options for further detailing. Basically, the 
INFORM input form is viewed as a grid that is filled with fields starting in the 



5: "Personal Information" 



Field Specifications 



Reset and Current 
Values 



"Name:" 0 0 

Bldg: " "Phone: " ( h 
Notes:" () 1 1 I 
3: (35) 

{$" 



Title 



Field Expander 

Column Count |3| 
and Tab Width (51 



1 1 



INFORM 
4 




i personal information; 



'OXY MORON" 



7 phone: 555.1234 
"CONTRADICTORY" 



IEHHII 



Fig. 1 A custom input form created qy INFORM 

upper-left corner and proceeding from left to right and top to bottom. The number
of columns in the grid is specified as one of INFORM's arguments, and each field's
width is determined by the width of its label and by the user-supplied tab width,
which places invisible tab stops within each column to help align fields vertically.
A field can span multiple columns with a special field-expander specification. Help
text and object type restrictions can be included for any field, but aren't required.

Fig. 1 shows an example of a custom input form created by INFORM. Notice that,
despite the relative simplicity of the input arguments, an input form with aligned
fields of varying widths is presented. This technique for building input forms proved
so valuable that it was used to create the Solve Equation input form, which changes
according to the number and names of variables in the equation to be solved.



past with success, but it can be time-consuming, expensive, 
and risky. 

Another approach sometimes available is to consult standard
computational libraries used by the professional scientific
community. Several such public-domain libraries are available
that represent the current state of the art. In some development
environments these libraries can be used directly. In
others, they can at least provide high-quality methods and
implementations that, when judiciously used, facilitate
meeting tight development schedules at low cost. We found the
LAPACK library 4 of FORTRAN 77 numerical linear algebra
subroutines particularly helpful in this regard.

As usual, code was reused whenever possible to achieve
timely and reliable implementations. In addition to the
source code for the HP 48S/SX and its Equation Library
card, we had implementations dating from the HP 71 Math
Pac that were revised for the HP 48S/SX but didn't find ROM
space in that product.

While reusing code, we took advantage of the HP 48G/GX
CPU clock speedup and larger RAM environment over the
HP 48S/SX to reconsider some of our previous implementation
trade-offs in an effort to achieve greater accuracy. In



some cases we decided to employ more computational effort 
and to store intermediate values in higher precision to 
achieve more accurate results. 

New Mathematical Features 

The HP 48G/GX includes many new mathematical features
over those provided by the HP 48S/SX. These are array
manipulations, additional linear algebra operations, a
polynomial root finder and related operations, two differential
equations solvers and associated solution plotters, discrete
Fourier transforms, and financial loan computations.

The array manipulation commands are primarily pedagogical
tools. These include a random array generator and commands
to add or delete rows or columns of matrices or elements
of vectors, decompose matrices into or create matrices
from row or column vectors, extract diagonal elements from
a matrix or create a matrix from its diagonal elements,
perform elementary row and column operations, and compute
the row-reduced echelon form of a matrix.

We significantly improved and expanded the linear algebra
functionality of the HP 48G/GX over the HP 48S/SX. The
determinant, linear system solver, and matrix inverter were






revised to be more accurate through additional computation
and by storing all intermediate values in extended precision.
We added a command to compute a condition number of a
square matrix, which can be used to measure the sensitivity
of numerical linear algebra computations to rounding errors,
a command to compute a solution to an underdetermined or
overdetermined linear system by the method of least squares,
commands to compute eigenvalues and eigenvectors of a
square matrix, commands to compute the singular value
decomposition of a general matrix, and commands to compute
related matrix factorizations and functions. These linear
algebra commands accept both real and complex arguments
and perform all intermediate computation and storage in
extended precision.
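Readers who want to experiment with the corresponding operations on a
computer can reproduce several of them with NumPy, as in the short
example below; this is only an illustration and says nothing about the
calculator's own implementation.

    import numpy as np

    A = np.array([[4.0, 1.0], [2.0, 3.0]])
    b = np.array([1.0, 2.0])

    print(np.linalg.cond(A))                       # condition number
    print(np.linalg.lstsq(A, b, rcond=None)[0])    # least-squares solution
    w, v = np.linalg.eig(A)                        # eigenvalues and eigenvectors
    U, s, Vt = np.linalg.svd(A)                    # singular value decomposition
    print(w, s)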

The HP 48G/GX has commands to compute all roots of a real
or complex polynomial, to construct a monic polynomial
from its roots, and to evaluate a polynomial at a point. The
polynomial root finder is a modification of the HP 71 Math
Pac's PROOT command, extended to handle complex coefficients.
It uses the Laguerre method with deflation for fast
convergence and constrained step size and an alternate initial
search strategy for reliability.
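The core of the Laguerre-with-deflation idea can be sketched compactly
in Python. This is not the PROOT implementation; in particular the
constrained step size and the alternate initial search strategy
mentioned above are omitted.

    import numpy as np

    def laguerre_root(coeffs, x=0.0, tol=1e-12, max_iter=100):
        """One root of the polynomial (coefficients highest power first)."""
        n = len(coeffs) - 1
        p = np.poly1d(coeffs)
        dp, d2p = p.deriv(), p.deriv(2)
        x = complex(x)
        for _ in range(max_iter):
            px = p(x)
            if abs(px) < tol:
                break
            g = dp(x) / px
            h = g * g - d2p(x) / px
            root = np.sqrt((n - 1) * (n * h - g * g))
            denom = g + root if abs(g + root) >= abs(g - root) else g - root
            x -= n / denom                          # Laguerre step
        return x

    def all_roots(coeffs):
        """Find every root by repeated Laguerre iteration and deflation."""
        roots, work = [], np.array(coeffs, dtype=complex)
        while len(work) > 2:
            r = laguerre_root(work)
            roots.append(r)
            work, _ = np.polydiv(work, [1.0, -r])   # deflate the found root
        roots.append(-work[1] / work[0])            # remaining linear factor
        return roots

    print(all_roots([1.0, -6.0, 11.0, -6.0]))       # roots of x^3 - 6x^2 + 11x - 6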

The HP 48G/GX has commands to compute the discrete
Fourier transform or the inverse discrete Fourier transform
of real or complex data. These commands were leveraged
from the HP 71 Math Pac's FFT and IFFT commands, requiring
the data lengths to be a nonzero power of 2, and were modified
slightly to match the customary definitions of these
transformations.
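In NumPy terms the behavior corresponds to the sketch below. NumPy's FFT
itself has no power-of-2 restriction, so the length check here mirrors
the calculator's requirement rather than a library constraint.

    import numpy as np

    def dft_pow2(data):
        """DFT of a sequence whose length must be a nonzero power of 2."""
        n = len(data)
        if n == 0 or n & (n - 1):
            raise ValueError("data length must be a nonzero power of 2")
        return np.fft.fft(data)

    x = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0])   # length 8 = 2**3
    X = dft_pow2(x)
    print(np.allclose(np.fft.ifft(X), x))   # inverse transform recovers the data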

Finally, we included time-value-of-money commands. These
commands have appeared in our financial calculators and
were available on the HP 48SX Equation Library card. Since
engineering feasibility studies must include at least rudimentary
time-value-of-money computations it seemed useful to
include these commands in the HP 48G/GX.

Differential Equation Plotting 

The HP 48G/GX contains two differential equation solvers
and solution plotters. These solvers and solution plotters
can be accessed via their input forms or invoked
programmatically via commands. We provide a programmatic
interface to the differential equation solvers and their subtasks
so the user can use them with the calculator's general solver
feature to determine when a computed differential equation
solution satisfies some condition, or to implement custom
differential equation solvers from their subtasks.

In implementing the differential equation solution plots, one
challenge was to identify and implement good solution methods.
Another challenge was to merge this new plot type with
the new 3D plot types described earlier and with the existing
HP 48SX plot environment in a backward-compatible manner.

The HP 48G/GX specifically solves the initial value problem,
consisting of finding the solution y(t) to the first-order
equation y'(t) = f(t,y) with the initial condition y(t₀) = y₀. Here
y'(t) denotes the first derivative of a scalar-valued or vector-
valued solution y with respect to a scalar-valued parameter t.
Higher-order differential equations can be expressed as a
first-order system, so this problem is more general than it
might at first appear.

Many solution methods have been developed over the years
to solve the initial value problem. We decided to implement
two methods, a Runge-Kutta-Fehlberg method for simplicity
and speed of execution and a Rosenbrock method for
reliability. The first method is easier to use, requiring less
information from the user, but can fail on stiff problems.* The
Rosenbrock method requires more information from the user,
but can solve a wider selection of initial value problems.
Both initial value problem solution methods require the user
to provide the function f(t,y), the initial conditions, the final
value of t, and an absolute error tolerance. The Rosenbrock
method also requires the derivative of f(t,y) with respect to
y (FYY) and the derivative of f(t,y) with respect to t (FYT).

All plot types use the contents of the variable EQ, typically to
specify the function to be plotted. If the user selects the stiff
(Rosenbrock) method the extra functions are passed to the
solver by binding EQ to a list of functions f(t,y), FYY, and FYT.
Otherwise, EQ is bound to the function f(t,y) needed by the
Runge-Kutta-Fehlberg method.

Both methods solve the initial value problem by computing
a series of solution steps from the initial conditions towards
the final value, by default taking steps as large as possible
subject to maintaining the specified error tolerance. The
solution plotter plots the computed values and by default
draws straight lines between the plotted points. However,
although the computed steps may be accurate, the line
segments drawn between the step endpoints may poorly
represent the solution between those points. The plot parameter
RES is used by many plot types to control the plot resolution.
If RES is zero the initial value problem solution plotter
imposes no additional limits on the step sizes. If RES is nonzero
the plotter limits each step to have maximum size RES.
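The same ideas can be tried out on a computer with SciPy's initial
value problem solver, which also takes steps as large as the error
tolerance allows; its max_step option plays the role RES plays here,
and its implicit Radau method is one alternative for stiff problems.
SciPy's explicit pair is Dormand-Prince rather than Fehlberg and it has
no Rosenbrock method, so this is only an analogy to the calculator's
solvers.

    import numpy as np
    from scipy.integrate import solve_ivp

    def f(t, y):
        return np.sin(y * t)          # dx/dt = sin(x*t), the SLOPEFIELD example

    # Adaptive steps limited only by the error tolerance.
    free = solve_ivp(f, (0.0, 2.0), [3.0], method="RK45", atol=1e-6, rtol=1e-6)

    # Step size capped, as a nonzero RES caps the plotted steps.
    capped = solve_ivp(f, (0.0, 2.0), [3.0], method="RK45", max_step=0.1,
                       atol=1e-6, rtol=1e-6)

    # An implicit method suited to stiff problems.
    stiff = solve_ivp(f, (0.0, 2.0), [3.0], method="Radau", atol=1e-6, rtol=1e-6)

    print(len(free.t), len(capped.t), free.y[0, -1])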

For the scalar-valued initial value problem it is typical to
plot the computed solution y(t) on the vertical axis and the
parameter t on the horizontal axis. However, in the vector-
valued case the choice of what is to be plotted is not as
clear. The user may wish a particular component of the
computed solution plotted versus t or may wish two components
plotted versus each other. The HP 48G/GX allows the user to
specify the computed scalar solution, any component of the
computed vector solution, or the parameter t to be plotted
on either axis. This flexibility was introduced into the plot
environment by expanding the AXES plot parameter.
Previously, this parameter specified the coordinates of the axes
origin. This parameter was expanded so that an optional
form is a list specifying the origin and the horizontal and
vertical plot components.

By judiciously expanding the meaning of the various plot 
parameters we were able to accommodate the differential 
equation solution plot type while maintaining backward 
compatibility with previous plot types. 



* Stiff problems typically have solution components with large differences in time scale. More
information is needed by a solver to compute a solution efficiently.






Acknowledgments 

We would like to acknowledge the rest of the software
development team: Jim Donnelly, Gabe Eisenstein, Max Jones,
and Bob Worsley. Bill Wickes also contributed software.
Clain Anderson and Ron Brooks from the marketing department
were involved in the day-to-day design process and
kept us informed about user needs. Dennis York oversaw
both the R&D and marketing aspects of the project, which
helped generate synergy between R&D and marketing. We
would like to thank Dan Coffin, our manual writer, and John
Loux from the technical support group for going out of their
way to participate in the design work and for providing
many valuable ideas.



References 

1. W.C. Wickes, "An Evolutionary RPN Calculator for Technical
Professionals," Hewlett-Packard Journal, Vol. 38, no. 8, August 1987,
pp. 11-17.
2. W.C. Wickes and C.M. Patton, "The HP 48SX Scientific Expandable
Calculator: Innovation and Evolution," Hewlett-Packard Journal,
Vol. 42, no. 3, June 1991, pp. 6-12.
3. T.W. Beers, et al, "HP 48SX Interfaces and Applications,"
Hewlett-Packard Journal, Vol. 42, no. 3, June 1991, pp. 13-21.
4. E. Anderson, et al, LAPACK Users' Guide, SIAM, Philadelphia,
1992.







HP-PAC: A New Chassis and Housing 
Concept for Electronic Equipment 



HP-PAC replaces the familiar metal chassis structure with expanded 
polypropylene (EPP) foam. Large reductions are realized in mechanical 
parts, screw joints, assembly time, disassembly time, transport packaging, 
and housing development costs. 

by Johannes Mahn, Jiirgen Haberle, Siegfried Kopp, and Tim Schwegler 



Business competition between PC and workstation
manufacturers has resulted in shortened life cycles for computer
products, faster development and production times, and
steadily decreasing market prices. The Hewlett-Packard
Boblingen Manufacturing Operation and its Mechanical
Technology Center are faced with this trend, along with others,
such as tightened environmental protection guidelines and
take-back regulations.

These trends call for new concepts: environmentally
friendly materials and matching manufacturing methods.
Assembly and disassembly times for computer products
have to be as short as possible. Assembly analysis of some
HP products clearly showed the necessity to reduce parts as
well as to improve manufacturing and joining techniques.

At the Mechanical Technology Center, these observations 
provided the motivation to look for a new packaging and 
assembly concept for computer products, one that would 
leverage existing techniques and incorporate new techno- 
logical ideas. 

Our objectives were to reduce the number of components
and the number of different part numbers, to achieve
considerable savings in the area of logistics and administration,
to save time in building a chassis, to automate the mounting
of parts on the chassis, and to reduce overall chassis costs.

Genesis of an Idea

After we had critically weighed all of the technologies
known to us, namely producing enclosures and chassis of
sheet metal or plastics, the only reduction potential
seemed to lie in reducing the number of parts and using
snap fits to save on fasteners and assembly times. However,
in contrast to our expectations, we could not do this to the
extent we had in mind.

Using snap fits is a disadvantage, since disassembly is
time-consuming and can lead to destruction of the components
or the enclosure. In the future, enclosures not only need to
be assembled quickly but also need to be disassembled
within the same amount of time to make recycling easier
and cheaper.

We could not get out of our minds the idea of fixing parts in
such a way that they are enclosed and held by their own
geometrical forms. The idea is similar to children's toys that
require the child to put blocks, sticks, cards, or pebbles into
matching hollows and at the same time keep track of
positions and maintain a certain order at any time during the
game. We applied this idea to our problem and thought
about how our game collection would have to look in terms
of composition and performance for us to be able to package
and insert components for a workstation. It seemed
most feasible to apply this idea at the assembly level, that is,
to use the new method to fix conventional assemblies such
as the disk, speaker, power supply, CPU board, and fan.

The only problem was what kind of material we could use to
realize this idea. How could we achieve a form fit and not
compromise on tolerances, feasibility, and price? It almost
seemed impossible not to condemn the idea. It became
obvious that we could no longer use conventional methods and
standards to find the ideal material. We were forced to deal
with a completely different field. The solution seemed to be
to jettison everything we had learned before and direct our
orientation towards something totally new.

The material we were looking for had to be pliable and
springy, like foam, for example. Could foam be used for a
form fit?

Raw Material Selection 

After the idea had been born to use foam, we started our
search for a suitable material. The goal was to embed all
components necessary for an electronic device in one chassis
made of foam synthetic material. We had plenty of material
to choose from, including polyurethane, polystyrene, and
polypropylene. The material had to be:

• Nonconductive, to hold and protect electronic components
• Able to hold tolerances in accordance with HP standards
• Able to fix components without fasteners.

With the help of our internal packaging engineers and an
external supplier we soon found a suitable material:
expanded polypropylene (EPP) with a density of 60 g/l.

In comparison to other foam synthetic materials, EPP has the
following advantages:

• Excellent mechanical long-term behavior
• Moisture resistance
• Resistance to chemicals
• Heat resistance






Fig. 1. Parts required for a workstation using the existing packaging 
concept. 

• Kill"" rccyclahility. ( iranulos produced from recycled EPP can 
be used for mantifael tiring oilier parts such as packaging 
materia] and shock absorbers. 

EPP foam parts can be produced in densities of 20 to 10(1 fj/l 
Lower density provides excellenl shock absorption, while 
higher density offers tighter manufacturing tolerances. Thus 
design trade-offs are possible. 

From the Idea to a Workstation 

The next step was to apply this new concept to an already
existing workstation. One workstation seemed suitable for
the conversion. The existing concept (see Fig. 1), consisting
of sheet-metal chassis (top and bottom), electrical components,
sheet-metal enclosure, EMI liner, and plastic parts,
was transformed into a foam chassis, electrical components,
sheet-metal sleeves, integrated EMI liner, and modified plastic
parts (see Fig. 2). In the new technology, all of the components
are held by their own geometry in form-fitting spaces
in the foam chassis. The connections between them are
achieved through cabling held in foam channels (Fig. 3).

Time was saved by processing the foam chassis, the sheet 
metal, and the plastic parts in parallel. To obtain the foam 
parts quickly, we rejected the ordinary way of creating 
drawings with the help of a CAD system and instead created 




Fig. 2. Parts required for the workstation of Fig. 1 using the HP-PAC
concept.




Fig. 3. Channels in the foam carry cooling air (shown) and cabling.

a 2D cardboard layout showing the placement of the
components. A packaging company placed their sample tooling
shop at our disposal for a few days. The first prototype was
built step by step. We milled, cut, and glued, applying a lot of
imagination.

After two days the first prototype was nearly finished.
Components were fixed in the necessary form fit and we were all
aware that we had taken a step in the right direction. Back
at HP we made a few minor changes to the EPP chassis and
the remaining enclosure with the help of a knife and finished
the prototype.

The major question now was, "Will it run?" We ran software
on the workstation and started testing, surrounded by our
production staff. The programs worked!

Next, temperature, humidity, and environmental tests were
performed. Temperature problems were corrected by altering
the air channels through cutting and gluing. HP class B2
environmental tests were passed (see Table I and Fig. 4).



Table I
Thermal Test Results
CPU Temperatures (°C)

Test Point      HP-PAC     Original
UB7             29.1       32.1
UD15            33.0       37.5
CB20            36.6       44.8
UH25            35.7       46.7
TO-220          49.3       59.8
UM10            39.9       44.5
UM25            58.7       67.5
UR30            56.7       73.7
MUSTANG         70.0       82.9
CPU Average     45.44      54.39






Fig. 4. Impacts transmitted to a hard disk drive by HP-PAC foam. In (a) and (c) the sensor
is on the workstation. In (b) and (d) the sensor is on the hard disk drive held by HP-PAC.



Fig. 4 shows the impacts transmitted by HP-PAC to a hard 
disk for two different types of shocks i hair sine, trapezoid). 
These imparls could be minimized by optimizing the design 
Of the supports and form Ills for the devices. 

Savings and Advantages

Comparisons were drawn between a traditional HP workstation
and an HP workstation in which the system components
such as the CPU board, disk drive, and flexible disk
were mechanically integrated using the HP-PAC concept.
The HP-PAC workstation showed:

• A 70% reduction in housing mechanical parts
• A 95% reduction in screw joints
• A 50% reduction in assembly time
• A 90% reduction in disassembly time
• A 30% reduction in transport packaging
• A 50% reduction in time and expenditure for the mechanical
development of the housing.

Compared to conventional chassis concepts, HP-PAC's
advantages include:

• A reduction in the number of chassis parts.
• Separation between functionality and industrial design.
The external enclosure is designed after definition of the
mechanical interfaces between the enclosure and the chassis
and between the enclosure and the components.

• One production step to produce molded parts.
• Simple, fast, and cost-effective assembly of the components
(see Fig. 5). The assembly process is almost self-explanatory
as a result of the indentations in the molded parts, and no
additional joining elements and assembly tools are necessary.
Assembly at the dealer's site is feasible.
• Reduced product mass because of the lighter chassis.
• Good protection against mechanical shock and vibration.
• On-the-spot cooling of components as a result of air channels
in the foam.
• Cost savings during almost all working processes.
• Reduced transport packaging as a result of the good absorption
of the chassis material. Also, less transport volume.

Impact on the Development Process

With HP-PAC, a 100% recyclable and environmentally
friendly material is used for the construction of the chassis.
Development of a chassis only means spatially arranging
components within a molded part and adhering to certain
construction guidelines (function- and production-specific).







(bl 





(91 




(hi 




Fig. 5. UP-PAC workstation assembly sequence, (a) Foam bottom chassis, (b) Loaded bottom chassis, (c) Partially loaded foam top chassis, 
(cl) Open loaded chassis, (e) Assembled loaded chassis. (0 Lower enclosure added, (g) Upper enclosure added, (h) Enclosure completed 



The external enclosure is developed separately after definition
of the interfaces. There are hardly any tolerance problems
as a result of material flexibility. Changes are simple to
perform by cutting, gluing, and additional grinding, so an
optimal solution can be reached quickly.

It is possible to perform all relevant environmental tests on
the first prototype. Prototypes can also be used for the first
functional tests. It is relatively simple to make changes during
the tests, since industrial design and functionality are
clearly separated and changes on the molded part can be






performed in the lab. Once the design is complete, manufacturing
of the molding tool is fast and cost-effective, and tool
changes are not required.

The result is a short, cost-effective chassis development 
phase. 

Material Development 

HP-PAC places high demands, some of which are new, on
the material used. Expanded polypropylene meets these
demands in almost all ways.

So far EPP has been mainly used in reusable packaging and 
to an increasing extent in the automobile industry, where 
bumper inlays and side impact cushions for car doors are 
typical applications. Traditionally the automobile business 
has placed high demands on the quality of components. In 
terms of precision and long-term behavior these demands 
are identical to those of HP-PAC. 

However, the situation is somewhat different for two new
HP-PAC-specific material requirements: ESD (electrostatic
discharge) suitability and flame retardant properties. We are
working with raw material manufacturers in the U.S.A.,
Japan, and Germany to develop optimum raw material for
HP-PAC in the medium and long term. For the short term,
procedures had to be found and checked to meet these
demands. In terms of ESD suitability this meant spraying or
dipping parts in an antistatic solution. A suitable flame
retardant was developed and patented together with a company
specializing in flame retardants. Like plastic molding,
treatment with flame retardant places an additional burden on
the environment and impairs recyclability. However, a good
product design renders flame retardants unnecessary. Three
prototype HP-PAC products without flame retardant already
have UL/CSA and TÜV approval.

We expect that the incentive for suppliers to develop suitable
raw materials will steadily increase as the number of
products using HP-PAC grows. Thus, in the future we hope to
have custom-made materials available that will allow further
improvements in quality at reduced cost.

EPP and Its Properties 

EPP raw material is available on a worldwide basis. Some
EPP manufacturers and their trade names for EPP are JSP
ARPRO, BASF NEOPOLEN P, and Kaneka EPERAN PP. EPP
comes in the form of foam polypropylene beads. Its chemical
classification is an organic polymer, one of the class of
ethylene polypropylene copolymers.

EPP contains no softeners and is free of CFCs. The product
does not emit any pollution. Compressed air, steam, and
water are used during the molding process. According to
one EPP manufacturer, no chemical reactions take place
during this process.

The specifications quoted here are for the JSP raw material
we used. EPP from other manufacturers should vary only a
little or not at all from these specifications.

Mechanical Properties. The following list shows the relevant
mechanical properties of parts made of expanded polypropylene
foam with a density of 60 g/l.

• Density: 60 g/l
• Tensile strength: 785 kPa
• Compressive strength at 25% deformation: 350 kPa
• Residual deformation after 24 hours at 25% deformation: 9%
• Deformation under static pressure load (20 kPa): 1.2%
(after 2 days: 1.3%, after 14 days: 1.4%)

Thermal Properties. After the molding process, the parts are
tempered so that the dimensions become consistent. Any
further temperature influences will not result in significant
contraction, expansion, or changes of mechanical properties
between -40°C and 110°C. The coefficient of thermal expansion
is 4.2×10⁻⁵/°C from -40°C to 20°C and 7.5×10⁻⁵/°C from
20°C to 80°C. Thus, a 100-mm length of foam at 20°C will be
100.375 mm long at 70°C.

The material changes state above 145°C. Thermal dissolution
occurs at 200°C and the ignition point is 315°C.
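The expansion figure quoted above can be checked with a one-line
calculation; the coefficient used is the value as reconstructed in the
preceding paragraph.

    alpha = 7.5e-5          # per °C, thermal expansion coefficient from 20°C to 80°C
    length_20c = 100.0      # mm
    print(length_20c * (1 + alpha * (70 - 20)))   # 100.375 mm at 70°C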

There was no permanent deformation in a temperature loop
test consisting of:

• 4 hours at 90°C
• 0.5 hour at 23°C
• 1.5 hours at -40°C
• 0.5 hour at 23°C
• 3 hours at 70°C and 95% humidity
• 0.5 hour at 23°C
• 1.5 hours at -40°C
• 0.5 hour at 23°C

Electrical Properties. EPP has good electrical insulation
properties. This means that the foam parts can easily acquire an
electrical charge. Consequently, methods are being developed
to produce antistatic EPP. There is no noticeable interference
between EPP material and high-frequency circuits
with square-wave signals up to 100 MHz. Tests at very high
frequencies (>100 MHz) have not yet been conducted.

We can infer from solid polypropylene some of the electrical
properties of expanded polypropylene:

• Dissipation factor of injection-molded EPP at 1 MHz:
tan δ < 5×10⁻⁴
• Breakdown voltage of injection-molded EPP: 500 kV/cm

• Surface resistance at 23°C and 49% relative humidity,
untreated: 10¹¹ to 10¹² ohms.

Chemical Resistance. EPP has good chemical resistance
because of its nonpolar qualities. It is resistant to diluted salt,
acid, and alkaline solutions. EPP can resist lye solutions,
solvents at concentrations up to 60%, and alcohol. Aromatic
and halogenated hydrocarbons found at high temperatures
in grease, oil, and wax cause it to swell. When EPP is mixed
with other substances, dangerous chemical reactions do not
take place.

Reaction to Light. In general, EPP is sufficiently resistant to
radiation at the wavelengths of visible light.

Reaction to Humidity and Water. Humidity has little or no effect
on the mechanical properties of EPP. Water absorption is
0.1% to 0.3% by volume after one day and 0.6% after seven
days. No changes are visible after 24 hours in water at 100°C.

Manufacturing Process 

The raw material beads are injected in a precombustion
chamber at a pressure of approximately 5 bar, which reduces
the pellet volume. The beads are then injected into the mold
at a pressure of approximately 1 bar until a particular filling





ratio is reached. The pressure is then reduced to normal so
that the beads can reexpand and fill the mold. Once the mold
is filled, steam at 180°C is injected into the mold through
nozzles, warming the surface of the beads and fusing them
together. This defines the foam part, which is left to cool
down and then removed from the mold. Subsequently, by
means of specified temperature cycles, controlled maturing
and dimensional changes are induced in the part, resulting in
its final form. This form remains constant within a specified
temperature range.

Recycling of EPP 

Polypropylene foam material can be recycled and used for
manufacturing of other products. Manufacturers of polypropylene
take back EPP waste free of charge.

EPP can be melted and fed back into source material
polypropylene in thermoplastic form. Compression, melting, and
granulation take place in gas extruders. The extruded
recycled material can be used for polypropylene injection
molded or extruded products. Recycling trials with a bumper
system made out of short glass fiber (approximately 20% of
weight), EP rubber (approximately 20% of weight), and
polypropylene produced a granule that can be used for complex
injection molding.

Conclusions 

To protect the HP-PAC technology in an appropriate manner
we have filed for a patent under European patent application
number 0546211.



It goes without saying that we will continue to develop the
technology further. Efforts in which we are currently engaging
are material development, prototype manufacturing,
quality assurance, and marketing of HP-PAC.

We have not yet set any specific limits on user distribution.
Possible areas for user application range from the electronics
and electromechanical industries to home electronic equipment
and transportation. At the Mechanical Technology
Center, we offer various services ranging from consulting to
complete solutions, not only for HP-PAC but also for sheet-metal
and plastic parts. We have experience in the computer,
analytical, and instrument businesses and are in contact
with others.

Acknowledgments 

We would like to thank the Novaplast Company and the
following HP entities: Boblingen Computer Manufacturing
Operation, Exeter Computer Manufacturing Operation,
Boblingen Manufacturing Operation Mechanical Technology
Center, Boblingen Manufacturing Operation environmental
test laboratory, and Workstation Systems Group quality
department.





High-Speed Digital Transmitter 
Characterization Using Eye Diagram 
Analysis 

The eye diagram analyzer constructs both conventional eye diagrams and 
special eyeline diagrams to perform extinction ratio and mask tests on 
digital transmitters. It also makes a number of diagnostic measurements 
to determine if such factors as waveform distortion, intersymbol 
interference, or noise are limiting the bit error ratio of a transmission 
system. 

by Christopher M. Miller 



The goal of any transmission system is to deliver error-free information reliably and economically from one location to another. The probability that any bit in the data stream is received in error is measured by a bit error ratio (BER) test. This test is performed using an error performance analyzer, commonly referred to as a BER tester or BERT. Generally, a pseudorandom binary sequence (PRBS) from a pattern generator is used to modulate the transmission system's source, while an error detector compares the received signal with the original transmitted pattern. The BER is defined as the number of bits received in error divided by the number of bits transmitted, which equals the error count in a measurement period divided by the product of the bit rate and the measurement period.
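In code form this definition is a one-line calculation. The following minimal sketch (Python, with illustrative numbers that are not taken from any measurement in this article) shows how a BER figure follows from an error count, a bit rate, and a measurement period:

    bit_rate = 2.48832e9        # bits per second (an STM-16 rate)
    measurement_period = 100.0  # seconds of observation (hypothetical)
    error_count = 25            # errors counted in that period (hypothetical)

    bits_transmitted = bit_rate * measurement_period
    ber = error_count / bits_transmitted
    print("BER = %.2e" % ber)   # about 1e-10 for these numbers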

In general, BERT measurements tend to be pass/fail in nature, and convey very little information about a failure. Moreover, some additional tests are usually required on components to ensure that they will meet the desired BER when they are installed into a system. For these reasons, it is desirable to perform a number of parametric measurements on the transmitted waveform in the time domain. Typically, an oscilloscope or an eye diagram analyzer is added to the BERT system as shown in the typical optical transmitter measurement setup in Fig. 1.

The pattern generator is still used to provide the stimulus. Different time-domain displays can be obtained depending on the choice of the trigger signal. When the pattern trigger or frame provides the trigger signal, a stable portion of the pattern appears on the display. When the clock frequency is used as a trigger signal, the data pattern waveform, superimposed on itself, produces a waveform display that is referred to as an eye diagram as shown in Fig. 2a. In general, the more open the eye is, the lower the likelihood that the receiver in a transmission system may mistake a logical 1 bit for a logical 0 bit or vice versa.

In an effort to standardize the high-speed telecommunication systems that are being developed and deployed, standards have been adopted for equipment manufacturers and service providers. Two such standards are the synchronous optical network (SONET), a North American standard, and the synchronous digital hierarchy (SDH), an international standard.



Fig. 1. Digital transmitter parametric test setup using an eye diagram analyzer.







Fig. 2. (a) Conventional eye diagram. (b) Eyeline diagram. (c) Eyeline diagram with eye filtering.



Both standards are for high-capacity fiber-optic transmission and have similar physical layer definitions. These standards define the features and functionality of a transport system based on principles of synchronous multiplexing. The more widely used transmission rates are 155.52 Mbits/s, 622.08 Mbits/s, and 2.48832 Gbits/s.

One of the goals of the standards is to provide "mid-span meet" so that equipment from multiple vendors can be used in the same telecommunications link. The standards specify extinction ratio and eye mask measurements on the transmitted eye diagram to help ensure that transmitters from various vendors are compatible.1,2 The eye diagram shown in Fig. 2a is from a laser transmitter operating at 2.48832 Gbits/s. It shows the characteristic laser turn-on overshoot and ringing.

Eye Diagram Characterization 

An important specified test parameter for these transmission systems is the extinction ratio (ER) of the eye diagram. It is typically defined as:

ER = 10 log [P_avg(logic 1) / P_avg(logic 0)]

where P_avg(logic 1) is the mean or average optical power level of the logic 1 level and P_avg(logic 0) is the mean or average optical power level of the logic 0 level. For SONET/SDH transmission systems, the minimum specified extinction ratio is 10 dB. In some cases, the extinction ratio is expressed as the linear ratio of the two power levels. A good extinction ratio is desired in these systems to maintain an adequate received signal-to-noise ratio.
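As a quick numerical illustration of this definition (a minimal sketch in Python; the power levels are made-up values, not measured data):

    import math

    p_logic1 = 625e-6   # hypothetical average power of the logic 1 level, watts
    p_logic0 = 60e-6    # hypothetical average power of the logic 0 level, watts

    er_linear = p_logic1 / p_logic0
    er_db = 10 * math.log10(er_linear)
    print("ER = %.2f (linear) = %.2f dB" % (er_linear, er_db))   # about 10.4 and 10.2 dB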

Although the definition of extinction ratio is relatively straightforward, the measurement methodology to determine the mean logic levels is not specified in the standards, such as SDH standard G.957. The histogram and statistical analysis capability of digitizing oscilloscopes can be used to determine the mean and standard deviation (sigma) of a waveform. However, there are no standard criteria for setting the windows and limits for the collection and evaluation of the data to determine the mean logic levels.

The Telecommunications Industry Association/Electronic Industries Association (TIA/EIA) has developed a recommended methodology for making eye diagram measurements called the Optical Fiber Standard Test Procedure #4 (OFSTP-4).3 It recommends that voltage histograms be used to determine the most prevalent logic 1 and 0 levels of the eye pattern measured across an entire bit period. The OFSTP-4 also points out the importance of removing any residual dc offset from the extinction ratio measurement because this can dramatically affect the measurement accuracy.

Over the years, the designers of digital transmission systems have learned that the eye diagram should have a particular shape to achieve a good BER. Often these designers have constructed areas, or masks, inside and around the eye diagram. The eye diagram waveform should not enter into these masked areas. The polygons in the center of the eye diagram shown in Fig. 3 and the lines at the top and bottom correspond to the mask used to evaluate optical transmitters intended for use in SONET/SDH systems.








Fig. 3. Laser transmitter eye diagram masks for SONET/SDH transmission systems. Mask coordinates (X values in unit intervals, Y values relative to the mean logic levels):

             STM-1 (155.52 Mbits/s)    STM-4 (622.08 Mbits/s)
  X1/X4      0.15/0.85                 0.25/0.75
  X2/X3      0.35/0.65                 0.40/0.60
  Y1/Y2      0.20/0.80                 0.20/0.80

             STM-16 (2.48832 Gbits/s)
  X3-X2      0.2
  Y1/Y2      0.25/0.75

Depending on the transmission bit rate, the size and shape of the mask changes. The x and y coordinates are specified for each bit rate and their relative positions are based on the mean logical 1 level and the mean logical 0 level. The mask for the lower bit rates is a hexagon, whereas the mask for 2.48832-Gbit/s transmission is a rectangle. The receiver bandwidth for the measurement of the transmitted eye diagram is specified to be a fourth-order Bessel-Thomson response with a reference frequency at three fourths of the bit rate. This ensures a common reference bandwidth for transmitter evaluation. Hardware low-pass filters are commonly employed to achieve this response.
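How such a mask might be scaled to a measured waveform can be sketched as follows (an illustration only: the hexagon vertex layout is our reading of Fig. 3, the STM-1 coordinates come from the figure, and the logic levels are hypothetical):

    def scale_mask(x_coords, y_coords, bit_period, logic0, logic1):
        # Map relative mask coordinates (fractions of a unit interval and of
        # the 0-to-1 amplitude swing) to absolute (time, amplitude) vertices.
        x1, x2, x3, x4 = x_coords
        y1, y2 = y_coords
        amp = logic1 - logic0
        return [
            (x1 * bit_period, logic0 + 0.5 * amp),   # left corner
            (x2 * bit_period, logic0 + y2 * amp),    # upper edge
            (x3 * bit_period, logic0 + y2 * amp),
            (x4 * bit_period, logic0 + 0.5 * amp),   # right corner
            (x3 * bit_period, logic0 + y1 * amp),    # lower edge
            (x2 * bit_period, logic0 + y1 * amp),
        ]

    # STM-1 coordinates from Fig. 3; the logic levels (in watts) are hypothetical
    vertices = scale_mask((0.15, 0.35, 0.65, 0.85), (0.20, 0.80),
                          bit_period=1.0 / 155.52e6, logic0=60e-6, logic1=625e-6)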

To date, existing instrumentation has been inadequate to design, build, and test optical transmitters sufficiently to meet the requirements of these transmission standards in certain key areas. Repeatable extinction ratio measurements are often difficult to obtain, particularly extinction ratio measurements of the low-power optical signals common in these systems. Easy-to-use mask compliance testing, with default standard masks that automatically scale to the data, would be a convenience. But, most significantly, a tool to aid designers in diagnosing transmitted BER problems would be a major contribution.

Eye Diagram Analyzer 

The HP 71501A eye diagram analyzer combines the HP 70820A microwave transition analyzer module,4 the HP 70004A color display and mainframe, and the HP 70784A eye diagram personality. The personality is stored on a 128K-byte ROM card and can be downloaded into the instrument. The instrument can be used with a number of optical converters. When the eye diagram analyzer is combined with a pattern generator such as a member of the HP 71600 family of pattern generators, a number of formerly difficult transmission measurements can be made easily. This instrument configuration is shown in Fig. 4.




Fig. 4. Photograph of the HP 715II1A eye diagram analyzer with an 
IIP 71603B pattern generator system. 

The operation of the HP 71501A differs from that of a conventional digital repetitive sampling oscilloscope. As shown in Fig. 5, both instruments have microwave samplers to sample the incoming waveform before it is digitized by an analog-to-digital converter (ADC). A digitizing oscilloscope has a trigger input that is used to start a sample. In addition, an incremental delay is added to the trigger signal so the samples step through the input waveform. After many cycles of the incoming signal, a complete trace of the input waveform is constructed.

The sample rate of the eye diagram analyzer is not determined by an external trigger, but is set according to the frequency content of the incoming signal itself. As the incoming signal is digitized, it is analyzed to determine an appropriate sample frequency to down-convert it optimally into the intermediate frequency (IF) section.

As shown in the block diagram, Fig. 6, the HP 71501A has two identical signal processing channels which can sample and digitize signals from dc up to 40 GHz. Input signals to each channel are sampled by a microwave sampler at a rate (fs) between 10 MHz and 20 MHz. The sample rate is dependent upon the signal frequency and the type of measurement being made. The outputs of the samplers are fed into the dc-to-10-MHz IF sections. The IF sections contain switchable low-pass filters and step-gain amplifiers. The dc components of the measured signal are tapped off ahead of the microwave sampler and summed into the IF signal separately. The outputs of the IF sections are sampled at the same rate as the input signal and then converted to a digital signal by the ADCs.

Once the signals are digitized, they are fed into the buffer memories. These buffers hold the samples until the trigger point is determined. The buffer memories make it possible to view signals before the trigger event without using delay lines.



By triggering on the IF signal, the HP 71501A is able to trigger internally on signals with fundamental frequencies as high as 10 GHz. Once the trigger point has been determined and all the necessary data has been acquired, the appropriate data is sent to the digital signal processing (DSP) chips.

An FFT is performed on the time data that is sent to the DSP chips. With the time data now converted into the frequency domain, IF corrections are applied to the data. The IF corrections compensate for nonidealities in the analog signal processing path. In addition, in certain modes of operation, the nominal measurement 3-dB bandwidth of 22 GHz can be extended to 40 GHz by applying RF corrections. The RF corrections compensate for microwave sampler conversion efficiency roll-off versus frequency. Similarly, in these modes of operation, user-entered corrections or filtering can be applied at this point as frequency-domain multiplication. As we will see later, this is a very useful capability. Finally, the inverse FFT is performed.
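The correction step amounts to a frequency-domain multiplication between an FFT and an inverse FFT. A simplified Python sketch of the idea follows (not the instrument's actual DSP code; the single-pole correction at the end is a hypothetical example):

    import numpy as np

    def apply_frequency_corrections(time_record, sample_spacing, correction_fn):
        # FFT the time record, multiply each frequency bin by a complex
        # correction factor, and inverse-FFT back to the time domain.
        spectrum = np.fft.rfft(time_record)
        freqs = np.fft.rfftfreq(len(time_record), d=sample_spacing)
        return np.fft.irfft(spectrum * correction_fn(freqs), n=len(time_record))

    # Example correction: boost a hypothetical single-pole roll-off at 10 GHz
    correction = lambda f: np.sqrt(1.0 + (f / 10e9) ** 2)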



Fig. 5. Simplified architectural 
comparison of the eye diagram 
analyzer (a) and a conventional 
sampling oscilloscope (b). 

Generating Eye and Eyeline Diagrams

Fig. 7 shows how the HP 71501A acquires data in the time domain using a technique called harmonic repetitive sampling. The sample rate is set so that successive sample points step through the measured waveform with a specified time step. The sampling period, Ts, is computed using the fundamental signal period, the time span, and the number of trace points. Ts is set such that an integer number (N) of signal periods plus a small time increment (ΔT) occur between successive sample points. For example, suppose that the input signal is a 100-MHz square wave as shown in Fig. 7. The minimum sampling period of the HP 71501A is 50 ns. Therefore, five cycles of the input waveform are skipped between samples. However, if the sample period were set to exactly 50 ns, sampling would occur at the same point on every fifth cycle, and no new information would be gained.



Fig. 6. Simplified block diagram of the eye diagram analyzer.









For a time span of 10 ns and the number of trace points equal to 10 points, a time resolution (ΔT) of 1 ns is required. For this time resolution, the actual sample period is 50 ns + 1 ns = 51 ns. Thus, at every fifth cycle of the input waveform the sampling point moves forward 1 ns, and after fifty cycles of the input waveform, one complete trace of the input signal is displayed.
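The arithmetic in this example can be written out directly (a Python sketch that simply reproduces the 100-MHz example above; the function and variable names are ours, not the instrument firmware's):

    import math

    def repetitive_sampling(signal_period, time_span, trace_points, min_ts=50e-9):
        # N whole signal periods plus a small increment delta_t elapse
        # between successive samples.
        delta_t = time_span / trace_points
        n = math.ceil((min_ts - delta_t) / signal_period)   # cycles skipped
        return n, delta_t, n * signal_period + delta_t

    # 100-MHz square wave, 10-ns span, 10 trace points (the example above)
    n, dt, ts = repetitive_sampling(10e-9, 10e-9, 10)
    print(n, dt, ts)   # 5 cycles, 1 ns, 51 ns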

With the signal frequency set equal to the clock frequency, the HP 71501A uses the process of harmonic repetitive sampling to generate eye diagrams similar in appearance to those of sampling oscilloscopes. The eye diagram of a pseudorandom bit sequence (PRBS) obtained in this manner is shown in Fig. 2a. Like the display of a conventional digital sampling oscilloscope, the eye diagram is constructed from samples of a number of bits in the sequence, which are displayed as a family of dots when persistence mode is activated. The only relationship between the samples or dots is their position relative to a trigger point, which is usually the rising or falling edge of a clock signal.

The HP 71501A uses a modified version of this same technique of repetitive sampling to construct the eyeline diagrams of PRBS signals or any bit sequence whose pattern repeats. Fig. 2b shows an eyeline diagram of the same laser transmitter as Fig. 2a. In this mode, successive samples come from the same or adjacent bits in the pattern. The samples or dots in these eyeline diagrams can be connected, and a whole trace or portion of the bit sequence is displayed at one time. When the display is in persistence mode, a number of traces of different portions of the bit sequence are superimposed, forming the eyeline diagram. The continuous traces that make up the eyeline diagram convey much more information than the sampled smear of dots observed in the conventional eye diagram. The pattern dependent traces that make up the eyeline diagram, as well as the bit transitions, are readily seen. With the eyeline display it is possible to observe whether noise, intersymbol interference, or mismatch ripple is causing eye closure.

When the HP 71501A is placed in the eyeline mode, fsignal is set to the clock rate divided by the pattern length, and Ts is set such that successive sample points step through the waveform pattern. ΔT is computed exactly as before. For the example shown in Fig. 8, the clock frequency is 1 GHz, the pattern length is 15 bits, and fsignal is 66.67 MHz. For a time span of 15 ns and the number of trace points equal to 15, a ΔT of 1 ns is required. Ts can be computed to be 61 ns.



Fig. 8. Harmonic repetitive sampling of a PRBS waveform to generate an eyeline diagram. For the input 1-Gbit/s, 15-bit pattern: fclock = 1 GHz, pattern length = 15 bits, fsignal = 66.67 MHz, Tsignal = 15 ns. Instrument state: time span = 15 ns, trace length = 15, N = 4. ΔT = time span/trace length = 15 ns/15 = 1 ns; therefore Ts = 61 ns.






Thus, every fourth cycle of the pattern, the sampling point moves forward 1 ns, and after sixty cycles of the pattern waveform, one complete trace of the pattern is displayed.
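The same arithmetic, applied to the pattern period instead of the bit period, reproduces the Fig. 8 numbers (again a sketch with our own variable names):

    import math

    clock_rate, pattern_length = 1e9, 15          # 1-GHz clock, 15-bit pattern
    t_signal = pattern_length / clock_rate        # 15-ns pattern period (fsignal = 66.67 MHz)

    time_span, trace_points, min_ts = 15e-9, 15, 50e-9
    delta_t = time_span / trace_points            # 1 ns
    n = math.ceil((min_ts - delta_t) / t_signal)  # 4 whole pattern cycles between samples
    sample_period = n * t_signal + delta_t        # 61 ns, as in Fig. 8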

Eye Filtering 

The eyeline mode traces can be filtered to remove the noise and nonpattern dependent effects. As shown in Fig. 2c, this allows an enormous improvement in the ability to view the individual traces. The reduction in trace noise and the improvement in signal-to-noise ratio (SNR) available in eyeline mode are enabled by turning on the eye filter. Trace averaging, a common technique to improve the SNR of time-domain displays of a stable pattern, cannot be applied to conventional eye diagrams because the averaged waveform would converge to the dc or average level.

For signals with pattern repetition frequencies greater than 10 MHz, turning on the eye filter switches in a 100-kHz IF hardware filter. This effectively reduces the IF bandwidth from 10 MHz to 100 kHz, which provides a fixed 20-dB signal-to-noise improvement. However, for most PRBS signals, the pattern repetition frequency is much less than 10 MHz. In this case, additional digital signal processing is performed to improve the SNR. In essence, a number of samples are taken at each time record point in the measured waveform. This is accomplished by sampling at a rate equal to or harmonically related to the pattern repetition frequency. For each time point, the samples are passed through a narrowband filter implemented with an FFT. To sample the next trace point, the internal synthesizer controlling the sampling rate is phase-shifted a precise amount. This process is repeated for each trace point. The actual amount of noise reduction or processing gain is a function of the number of samples taken at each trace point. From an operational standpoint, the amount of processing gain is represented by an equivalent noise-filter bandwidth. The bandwidth of the filter indicates the relative noise reduction normalized to the 10-MHz IF bandwidth. As the effective bandwidth is reduced, the number of samples increases, and the trace update rate gets slower. The following table shows the SNR improvement for a given number of samples.



  Equivalent Filter    SNR            Number of
  Bandwidth            Improvement    Samples
  10 MHz               0 dB           1
  2 MHz                7 dB           16
  1 MHz                10 dB          32
  500 kHz              13 dB          64
  250 kHz              16 dB          128
  125 kHz              19 dB          256
  62.5 kHz             22 dB          512
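The table entries follow the expected relationship between bandwidth reduction and noise, as a quick Python check confirms (a back-of-the-envelope verification, not the instrument's algorithm):

    import math

    if_bw = 10e6   # full IF bandwidth in Hz
    for bw in (10e6, 2e6, 1e6, 500e3, 250e3, 125e3, 62.5e3):
        print("%8.1f kHz  %4.1f dB" % (bw / 1e3, 10 * math.log10(if_bw / bw)))
    # prints 0.0, 7.0, 10.0, 13.0, 16.0, 19.0 and 22.0 dB, matching the table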



The eyeline mode with eye filtering offers a significant improvement over conventional eye diagram measurements in the ability to observe low optical signal levels. Shown in Fig. 9a is the same laser transmitter whose output now has been optically attenuated. This conventional eye diagram display is of poor quality because of inherent sensitivity limitations and the inability to perform trace averaging on a PRBS waveform. Enabling the eyeline mode and activating the eye filter makes the eye diagram once again clearly visible as shown in Fig. 9b.








































Fig. 9. (a) Conventional eye diagram of a low-level signal. (b) Eyeline diagram of a low-level signal with eye filtering applied.

This function makes it easy to observe signals that are only several millivolts in amplitude, which is impossible to do with conventional high-speed digital sampling oscilloscopes.

An application of this measurement capability is the observation of the eye diagram of a transmitted optical signal after the signal has passed through a long length of optical fiber. Because of the attenuation of the fiber, eye diagrams are often difficult to observe in this manner. With the eyeline mode, it is possible to determine if the chirp of the laser transmitter, along with the dispersion in the fiber, is causing any waveform distortion.

Software Corrections and Filtering 

RF corrections can be applied to extend the measurement 
bandwidth of the HP 71501A to 40 GHz while the instrument 
is in the eyeline display mode, in which a unique mapping 
exists between the IF frequency and the input RF frequency. 







Fig. 10. Eyeline diagram with software filtering applied and user correction table corresponding to a fourth-order Bessel-Thomson response.



When the IF signal is digitized, an FFT is used to convert it to the frequency domain. The IF frequencies are then mapped into the corresponding RF frequencies, and the appropriate correction is applied at each frequency. Finally, an inverse FFT is applied to the result to transform it back to the time domain. These same processing routines are available for user-defined corrections or filters. For example, this capability can be used to calibrate the eyeline lightwave measurements by correcting for any optical converter roll-off. In addition, it can be used to evaluate eyeline diagrams under different filter response conditions.

As previously mentioned, the transmission standards require that the eye mask measurements be made in a receiver bandwidth that corresponds to a fourth-order Bessel-Thomson response with a reference frequency at three fourths of the bit rate. Hardware filters or special lightwave converters are often employed to achieve this. By using the user corrections available in the HP 71501A, the ideal transfer function can be implemented with a software filter. The filtered display shown in Fig. 10 was made with a software filter applied that corresponded to the desired ideal Bessel-Thomson response.

Up to 128 magnitude and phase points can be loaded as user corrections. Shown in the lower half of Fig. 10 is a portion of the user correction table that corresponds to the fourth-order Bessel-Thomson response. This table of magnitude and phase corrections was generated directly from trace math functions that are available in the HP 71501A. The linear portion of the phase has been corrected to remove the delay. This allows the trace to remain in the same place when the corrections are applied. Software filters for the major SONET/SDH transmission rates are stored on the ROM memory card. This capability could also be used to develop an equalizer or tailor a specific response to improve the received eye diagram. Thus, the transmitted "equalized" eye diagram could be simulated and observed before the final hardware equalization filter is constructed.
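A correction table of this kind could be approximated offline along the following lines (a Python sketch only: it assumes the usual delay-normalized fourth-order Bessel polynomial with its 3-dB point placed at the reference frequency, and it removes the linear phase with a least-squares fit rather than the instrument's exact trace math):

    import numpy as np

    def bessel_thomson_4(freqs_hz, bit_rate):
        # Approximate fourth-order Bessel-Thomson response, reference
        # frequency at three fourths of the bit rate, linear phase removed.
        f_ref = 0.75 * bit_rate
        # Reverse Bessel polynomial of order 4; the 2.114 factor places the
        # 3-dB point at f_ref (an assumption to verify against the standard).
        y = 2.114j * (freqs_hz / f_ref)
        h = 105.0 / (105 + 105 * y + 45 * y**2 + 10 * y**3 + y**4)
        mag_db = 20 * np.log10(np.abs(h))
        phase = np.unwrap(np.angle(h))
        slope = np.polyfit(freqs_hz, phase, 1)[0]     # remove the pure delay
        return mag_db, np.degrees(phase - slope * freqs_hz)

    freqs = np.linspace(0.1e9, 2.5e9, 20)            # correction table frequencies
    mag, ph = bessel_thomson_4(freqs, 2.48832e9)     # STM-16 rate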



User Interface 

An application-specific user interface was designed for the eye diagram analyzer. The goals of the user interface were to offer turnkey measurements that meet the requirements of the standards, to be easy to use, and to make the interface to the pattern generator as transparent as possible. Many of the specific measurements will be described in the next sections. One ease-of-use feature is the ability to set the time base in bit unit intervals, instead of having to recall the clock period to display the eye diagram. Time delay can also be set in bit intervals, which makes it convenient to scroll through the bits that make up the pattern. A menu key, READ PAT GEN, is used to interrogate the pattern generator. It returns the clock frequency, the pattern length, and the data, clock, and trigger levels, which are used to set up the display.

The eye diagram application program is written in HP Instrument BASIC and is downloaded into the host instrument, the HP 70820A microwave transition analyzer module. The microwave transition analyzer is an extremely flexible, general-purpose instrument that offers both time-domain and frequency-domain signal analysis capability.4 This versatile measurement capability is available simultaneously with the eye diagram measurement capability, and is simply accessed through its own menu.

Eye Diagram Measurements 

A number of parametric measurements are often performed on eye diagrams to determine their quality. Some of the more prevalent measurements include eye opening height, eye opening width, amplitude of the crossing level, jitter at the transition point, and the rise and fall times of the bit transitions. Fig. 11 shows a number of these measurements made with the HP 71501A eye diagram analyzer on a laser transmitter operating at 622.08 Mbits/s. The extinction ratio was determined by taking a vertical histogram of the eye diagram windowed over one bit interval.

Fig. 11. Table of eye diagram parametric measurements (extinction ratio, logic 1 and 0 levels, eye height, crossing level, eye width, eye jitter, and mean rise and fall times).






An algorithm that recursively adjusts the limits for subsequent evaluations of the histogram data converges on the most prevalent logical 1 and 0 levels. The peaks of the histogram are used to set initial limits for the computation of the 1 and 0 levels. The initial mean and standard deviation (sigma) of the 1 level are based on histogram data above the relative 50% point of the peaks. The limits for the next evaluation of the histogram data are set to the initial mean level plus or minus one sigma. The new mean and sigma for the 1 level are then determined. This process iterates several times until the sigma becomes small and the mean converges on the most prevalent 1 level. The determination of the most prevalent 0 level is based on the same algorithm, except that the initial mean and sigma of the 0 level are based on histogram data below the relative 50% point of the peaks. This algorithm for determining extinction ratio has been demonstrated to be more repeatable than merely taking the peaks of the histogram distributions. The other eye parameter measurements are also based on histograms.
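In code form the iteration looks roughly like this (our paraphrase of the algorithm in Python; the initial windowing and the stopping criterion are illustrative choices, not the product's exact implementation):

    import numpy as np

    def most_prevalent_level(samples, upper=True, iterations=5):
        # Estimate a most prevalent logic level by repeatedly recomputing the
        # mean and sigma over a +/- one-sigma window of the histogram data.
        midpoint = 0.5 * (samples.min() + samples.max())
        data = samples[samples > midpoint] if upper else samples[samples < midpoint]
        mean, sigma = data.mean(), data.std()
        for _ in range(iterations):
            window = data[(data > mean - sigma) & (data < mean + sigma)]
            if window.size == 0:
                break
            mean, sigma = window.mean(), window.std()
        return mean, sigma

    def extinction_ratio_db(samples):
        one_level, _ = most_prevalent_level(samples, upper=True)
        zero_level, _ = most_prevalent_level(samples, upper=False)
        return 10 * np.log10(one_level / zero_level)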

At times, it is convenient to display the trace in optical power units. This can be easily done with the HP 71501A. With the responsivity of the optical converter entered as in Fig. 11, we can easily see using the marker functions that the laser is putting out about 340 µW of average optical power. With the responsivity entered, the eye measurements are displayed in the appropriate optical power units.

To get a good extinction ratio measurement result, it is especially important to get an accurate determination of the average power corresponding to the low logic level. It can be readily shown that a measurement error of a given magnitude has a much greater impact on the value of the low level than the high level. Because of sensitivity limitations, accurate measurement of the low level may be difficult. However, this is an area where the eye filtering capability of the HP 71501A can make a contribution. When making measurements on high-speed laser transmitters with unamplified photodiode converters, the resulting detected voltages are only millivolts in amplitude, making conventional extinction ratio measurements next to impossible. Yet, with the eye filter enabled on the HP 71501A, the extinction ratio can be readily determined.
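A short calculation shows why the low level dominates the error budget (hypothetical numbers chosen only to illustrate the point):

    import math

    p1, p0 = 600e-6, 60e-6                      # hypothetical 1 and 0 levels, watts
    err = 5e-6                                  # a 5-microwatt measurement error

    print(10 * math.log10(p1 / p0))             # nominal ER: 10.0 dB
    print(10 * math.log10((p1 + err) / p0))     # same error on the 1 level: about 10.04 dB
    print(10 * math.log10(p1 / (p0 + err)))     # same error on the 0 level: about 9.65 dB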

Mask Measurements 

The HP 71501A has a general-purpose mask testing capability with internally stored default masks for the major SONET/SDH transmission rates to test compliance with these standards. These default masks can be autoscaled to the data according to the specifications in the SONET/SDH standard. Mask margin testing is also provided. Fig. 12 shows a SONET/SDH eye diagram mask test performed by the HP 71501A. Once again, the transmission rate was 622.08 Mbits/s. The measurement bandwidth was fixed by a hardware filter with a fourth-order Bessel-Thomson response and a reference frequency of 466.56 MHz. For this transmission rate, the default mask consists of a hexagon, M1, in the center of the eye, and upper and lower limit lines, L2 and L3. The default mask has been autoscaled to the data. The error count for each of the mask regions is displayed on the lower portion of the screen, along with a number corresponding to the total traces evaluated. The transmitter measurement shown passes the mask test without any violations. In some instances, it is useful to determine by how much margin a transmitter passes the mask test.



Fig. 12. Eye mask measurement (mask margin: 50%).

It is often desirable in production testing to allow some additional guardband to ensure that the transmitter can pass the standard mask. M4, L5, and L6 denote the margin mask regions, and one can test for errors simultaneously in those regions as well as the standard mask regions. As shown, there was a margin mask violation of the lower limit after 1215 traces had been evaluated. Custom masks can also be easily constructed for other transmission systems or to assist in the design and troubleshooting of laser transmitters.

Error Trace Capture 

The jitter in the data transitions is a very important transmission parameter that has a direct impact on the BER. In many systems only the random jitter is specified. But in optical systems, a deterministic component of jitter that is dependent on the optical pattern is often present. Using the eyeline mode with eye filtering one can observe the peak-to-peak variations in the crossing points as shown in Fig. 13.


Fig. 13. Jitter in data transitions.







Fig. 14. Error capture of a jittered transition.

To determine if a particular pattern is responsible for the jittered transition, a custom mask can be employed. By displaying several preceding bit intervals and enabling the error trace capture capability of the HP 71501A, one can observe in the lower trace of Fig. 14 that the delayed transition seems to be caused by intersymbol interference from a preceding 100 pattern. The error trace capture is a unique feature of the eye diagram analyzer, made possible by the eyeline display mode and a triggering architecture that allows data before the trigger point to be viewed.

The error trace capture can be applied to a number of measurements. It can be used to observe the pattern leading up to a standard default mask violation. Displayed in Fig. 15 are a transmitter waveform and the associated mask for the 2.48832-Gbit/s transmission rate displayed in the appropriate bandwidth. The error trace here also seems to be a result of intersymbol interference.

Summary 

High-speed telecommunication standards require that eye diagram measurements be made on digital transmitters. The HP 71501A eye diagram analyzer is designed to meet these measurement needs by performing industry-standard mask and extinction ratio measurements. It can construct both conventional eye diagrams and unique eyeline diagrams, which can be used for bit error analysis.




Fig. 15. Error capture of a mask violation.
Acknowledgments 

The eye diagram analyzer involved the contributions of a number of people. Chris "CJ" Johansson wrote the IBASIC application program. Steve Peterson was primarily responsible for revisions and upgrades to the host instrument firmware. Steve, along with John Wendler and David Sharrit, provided technical input. John Wilson provided market research and many of the ideas that make up the operation of the application. Finally, Mike Dethlefson and Mark Slovick modified the existing base instrument's alignment and calibration routines to achieve the eye diagram analyzer's current state-of-the-art time-domain measurement performance.

References 

1. American National Standard for Telecommunications - Digital Hierarchy Optical Interface Specifications, Single Mode, ANSI T1.106-1988.
2. Optical Interfaces for Equipments and Systems Relating to the Synchronous Digital Hierarchy, CCITT Recommendation G.957, 1990.
3. OFSTP-4 Optical Eye Pattern Measurement Procedure, TIA/EIA-526-4, 1993.

4. D.J. Ballo and J.A. Wendler, "The Microwave Transition Analyzer: A New Instrument Architecture for Component and Signal Analysis," Hewlett-Packard Journal, Vol. 43, no. 5, October 1992, pp. 48-62.






Thermal Management in Supercritical 
Fluid Chromatography 

In supercritical fluid chromatography, very high degrees of accuracy are 
required for temperature control. On the fluid supply end of the system, 
cooling is critical. On the separation end, heating is important. This paper 
discusses temperature control in the HP G1205A supercritical fluid 
chromatograph. 

by Connie Nathan and Barbara A. Hackbarth 



Supercritical fluid chromatography (SFC) is a technique that has gained acceptance in the analytical chemistry marketplace as a complement to gas chromatography (GC) and liquid chromatography (LC). In the development of the HP G1205A supercritical fluid chromatograph (Fig. 1), leveraging of major components from HP GC and LC products was a primary goal. As a result of this goal, thermal management in the system was a challenge because the components were not intended to operate in the temperature range required for SFC. For example, the LC pumping module was designed to pump fluids at room temperature, while for SFC fluids optimal delivery is at 5°C or lower.

Modifications were made to the components to integrate them into one product. Some components were optimized for SFC. This helped to improve the chromatographic technique by incorporating new ways to manage and control temperature. This paper examines the design modifications made to components to meet the thermal requirements for SFC while leveraging current HP analytical product components.

SFC System 

An SFC unit consists of four major systems: fluid delivery (pumps), separation, detection, and data collection. A brief description of chromatography accompanies this article to provide the reader with an overview of the technology (see page 39).

The HP G1205A SFC project goal of using, wherever possible, components from already proven HP GC and LC instrumentation resulted in reuse of the HP 5890 GC oven, the HP 1050 LC pumping module, a variety of both GC and LC detectors, and the HP ChemStation instrument control software. The major components of the HP G1205A SFC system will now be described in further detail.

Pumping Module. The HP G1205A SFC system is available as a single-pump system using CO2 (or other fluids) for its supercritical mobile fluid or as a dual-pump system that allows modifiers to be added to the CO2. Both pumps consist of reciprocating dual pistons in series, allowing for continuous, reliable, and unattended pumping. This eliminates the inconvenience of refilling syringe pumps and allows for control of the flow and changing of the composition.



Fig. 1. HP G1205A supercritical fluid chromatography system.



The pumps have feedback control algorithms that dynamically compensate for optimum fluid compressibility and minimize the pressure ripple of the reciprocating pistons.

The HP G1205A modifier pump design parallels the HP 1050 LC pump closely since both are intended to pump incompressible liquids. This is not the case with the supercritical fluid pump, which required more extensive modifications.






What is SFC? 



Chromatography is a process in which a chemical mixture, carried by a mobile phase, is separated into components as a result of differential distribution of the solutes as they flow over a stationary phase. The distribution is the result of differing physical and/or chemical interactions of the components with the stationary phase. On a very basic level, chromatography instrumentation consists of (1) a delivery system to transport the sample within a mobile phase, (2) a stationary phase (the column) where the separation process occurs, (3) a detection system that identifies or distinguishes between the eluted compounds, and (4) a data collection device to record the results (see Fig. 1).

The choice of which chromatographic method to use depends on the compounds being analyzed. In gas chromatography (GC), the mobile phase that carries the sample injected into the system is a gas. GC is generally a method for volatile and low molecular weight compounds. High-performance liquid chromatography (HPLC) is primarily used for analysis of nonvolatile and higher molecular weight compounds. A combination of desirable characteristics from both of these methods can be obtained by using a supercritical fluid as the mobile phase. A supercritical fluid is a substance above its critical point on the temperature/pressure phase diagram (see Fig. 2). Above the critical point, the fluid is neither a gas nor a liquid, but possesses properties of both.

The advantage of SFC is that the high density of a supercritical fluid gives it the solvent properties of a liquid, while it still exhibits the faster physical flow properties of a gas. In chromatographic terms, supercritical fluids allow the high efficiency and detection options associated with gas chromatography to be combined with the high selectivity and the wider sample polarity range of high-performance liquid chromatography. Applications that are unique to SFC include analysis of compounds that are either too polar, too high in molecular weight, or too thermally labile for GC methods and are undetectable with HPLC detectors. Another benefit of SFC over LC is the reduction of toxic solvent use and the expense associated with solvent disposal. This aspect has become increasingly important as environmental awareness becomes a larger issue.




Fig. 2. Phase diagram with critical point and supercritical region.

Hewlett-Packard developed and manufactured its first SFC instrument in 1982. For the past decade, SFC has primarily been used in R&D laboratories. The market has now expanded to include routine analysis for process and quality control as SFC is continuing to gain acceptance as a complementary technique to GC and LC.

Connie Nathan 
Barbara Hackbarth 
Development Engineers 
Analytical Products Group 




To pump compressible fluids like CO2 efficiently, the supercritical fluid pump head must be cooled to lower temperatures. Peltier cooling was selected for its clean, self-contained, quiet, and reliable operation. Additionally, the Peltier elements provide precise temperature control, minimizing pump flow noise and allowing more accurate compressibility compensation.

Separation Phase. The column is where the actual separation of components occurs. The unique interactions of the compounds with a given stationary phase at certain conditions result in different elution times from the column. The HP G1205A system can be used with capillary, narrow bore, or packed columns. Viscosity of a supercritical fluid is at least one order of magnitude higher than the viscosity in the gaseous state, but is one to two orders of magnitude less than in the liquid state. This translates into a pressure drop across the column that is ten times greater with HPLC than it is with SFC. The lower viscosity of SFC allows longer columns, which yield better separation of closely related compounds.






Selectivity refers to the selective physicochemical interactions between the sample components and the column. In SFC, the selectivity of compounds is adjusted by changing temperature, mobile phase density, and/or composition of added modifiers. These control parameters are a combination of those available in either a pure GC or a pure LC application. The column is installed in a temperature-controlled environment, the GC oven.

Detection. SFC not only supports both capillary and packed columns but also a variety of GC and LC detector options. Detectors are devices that sense the presence of the different compounds eluting from the column and convert this information into an electrical signal. Selection of which detector to use depends on several factors including the type of information needed from the analysis, the sensing level required, and the complexity of the compounds.

GC detectors that are available in SFC are the flame ionization detector, the nitrogen phosphorus detector, and the electron capture detector. The electron capture detector is a halogen-sensitive detector primarily used in pesticide analysis. The flame ionization detector is a universal detector for a wide variety of organic compounds. The nitrogen phosphorus detector is a selective detector for compounds containing nitrogen and phosphorus, and is also used for pesticide and clinical applications. A new dimension to detection capability is made possible with the use of modifiers with the electron capture and nitrogen phosphorus detectors. The flame ionization detector and the nitrogen phosphorus detector required design changes for SFC use.

LC detectors available in SFC include the multiple wavelength detector, the variable wavelength detector, and the diode array detector. These cover application needs for high sensitivity and selectivity.

Data Acquisition System. The data acquisition system sorts and executes the many different signals, functions, and commands. For example, the first thing the instrument needs to know is how it is configured (one pump, two pumps, which detectors, injection method, etc.). Commands are given to operate the system at particular conditions (fluid flow rate, pressure, temperature, etc.). Electronic signals from the detectors are collected and converted to useful chromatographic information. Analytical results are stored and later presented in a meaningful report format determined by the user. The software platform for the HP G1205A SFC was leveraged from existing ChemStation platforms. Working through Microsoft® Windows as the operating control system, it can multitask with other applications and network with other data systems.

The remainder of this article focuses on the thermal design 
of the SFC pump and one of the detectors. 

SFC Pump 

The pump module is an LC pump optimized to operate with supercritical fluids at low temperatures and high pressures. Supercritical fluids such as CO2 are pumped through the system at rates as high as 5 milliliters per minute and pressures as high as 400 bars. The initial approach was to use cryogenic CO2 to cool the pump surface. The proposed design was a channel of metal tubing with cryogenic CO2 flowing through it.


Fig. 2. Thermoelectric (Peltier) cooler.

Using a network of channels, the cryogenic CO2 is evenly distributed through the pump and provides an efficient method of cooling. The design could be further optimized by using materials with high thermal conductivity for the pump head to improve the efficiency and enhance the cooling.

The HP supercritical fluid extractor (SFE), introduced in 1990, used a similar approach. The SFE uses a supercritical fluid to extract the sample and prepare it for analysis. The SFE pump, like the SFC pump, must be cooled. Cryogenic CO2 through a single stainless steel tube is used to cool the surface of the LC pump head. Although this design met the functional objective, it required an additional source of CO2. Supercritical fluid grade CO2 is not used for cryogenic purposes because it is more expensive and a purer form of fluid.

To make the product more compact and self-contained, a thermoelectric (Peltier) device was investigated. Thermoelectric cooling provides one of the simplest means of refrigerating electronic equipment without using compressors or liquid refrigerants. A thermoelectric device is a heat pump that operates electronically. The device is made from two semiconductors, usually bismuth telluride, that either have an excess (n type) or deficiency (p type) of electrons. Current passes through the dissimilar conductors creating a temperature differential across the device. Heat energy is transferred from the cold side (body to be cooled) to the hot side (heat sink) and dissipated. This is known as the Peltier effect. Fig. 2 is a schematic diagram of a thermoelectric device. Table I provides a comparison of a thermoelectric heat pump with other types of refrigerant cooling systems.

In applying thermoelectric technology successfully, three 
design issues to be considered are: 

• The operating environment 

• How to dissipate the heat efficiently 

• Estimated amount of heat the device needs to remove. 

To maintain a 5°C operating temperature, the final design 
(see Fig. 3) includes several thermoelectric devices sand- 
wiched between an aluminum heat sink and a copper cold 
plate. 

Preliminary results showed that the thermoelectric device could maintain a temperature of 4°C at the pump head surface.






Table I
Refrigerant Cooling Comparison

                        Thermoelectric                   Cryogenic Fluids
  Source                No liquid coolant                Requires liquid coolant source
  Operating             Can accommodate temperature      Provides cooling only
  Temperature Range     extremes (-100°C to +100°C)
  Operation             Electronically controlled        Mixing components or valves
                                                         to control fluid flow
  Physical Attributes   Modular and compact              Bulky with extra hardware
                                                         (valves, tanks, etc.)

If the heat sink reaches the temperature of the hot side of the Peltier device, then the heat sink alone cannot effectively dissipate the heat. A boxer fan at the rear of the pump cools the heat sink. The interfaces between the Peltier cooler, the heat sink, and the cold side of the pump head are secured with screws and epoxy to minimize thermal losses and provide a good thermal circuit.

Further efforts to optimize the design included the selection of materials with high thermal conductivity. A nickel heat exchanger with porous sintered nickel is incorporated in the design to cool the CO2 conductively before it enters the system. The temperature of the CO2 as it enters the pump is near room temperature. The heat exchanger is in thermal contact with the cold plate. The cold temperature is conducted into the heat exchanger to lower the fluid temperature to 5°C.

The thermoelectric device combined with conductive and convective cooling techniques allows the pump module to be self-contained and modular. This approach requires no external source for cryogenic cooling.

Flame Ionization Detector 

As described above, the separation and detection module includes a choice of columns and detectors. The flame ionization detector is thermally optimized for SFC. It is mounted to the top of the GC oven, which controls the temperature of the column that contains the sample. The flame ionization detector has two separate temperature-controlled heated zones. The base of the detector protrudes into the oven. The top of the detector, the collector, is a controlled zone exposed to room temperature. The oven and detector base both operate at temperatures up to 450°C with less than 0.5°C variations.

A flame ionization detector is a stainless steel block with a jet (a stainless tube) attached to it. Heat is applied by a rod heater. At the tip of the flame ionization detector jet, a flame is produced by the combustion of hydrogen and air. The sample elutes into the flame and ionizes. The detector generates a signal that represents the amount of carbon in the sample. Fig. 4 compares GC and SFC flame ionization detectors. The temperature of the zone just below the jet is critical. Slight changes in the temperature affect the reproducibility of the results.

The GC flame ionization detector was evaluated for potential use with SFC. The GC detector has a long heated zone with a removable jet positioned above the zone as shown in Fig. 4. A temperature gradient exists between the heated block and the jet tip of the GC flame ionization detector because of the position of the heater and sensor and the location of the jet relative to the heater. When this GC design is used for SFC, spiking and flameout occur. Spiking is a fluctuation in the signal that shows up in a chromatogram. It is the result of the CO2 icing. Flameout occurs if there is too much CO2 flowing through the jet. A long heated zone also causes undesirable density drops in CO2 that allow the sample to precipitate.

Customers familiar with flame ionization detectors expect the results to be clear and reproducible. The primary design consideration for optimization of the flame ionization detector for SFC was to solve the problems seen with the GC design, that is, to minimize density drops and eliminate spiking and flameout.

A shorter heated zone flame ionization detector was designed. The jet is an integral part of the heated zone (it is physically brazed to the zone) to ensure that there is no thermal break between the zone base and the jet. The heater and sensor are positioned close to the jet. The temperature gradient is minimized. The jet orifice size is optimized to prevent flameout at high CO2 flow rates. A second heated zone is added to the SFC flame ionization detector to eliminate problems of spiking and condensation above the jet.



Fig. 3. Sketch of the SFC pump showing the major components and the thermoelectric (Peltier) device.









Condensation forms from the combustion of hydrogen and air when the detector block temperature is below 100°C. Table II is a comparison of the GC and SFC detectors.

Table II
Comparison of SFC and GC Flame Ionization Detectors

                  SFC Detector                       GC Detector
  Physical Size   6 mm thick, compact, with jet      25 mm thick, with jet
                  permanently attached               a separate part
  Temperature     1°C from sensor to jet tip         17°C from sensor to jet tip
  Gradient
  Problem Area    No spiking or flameout             Spiking and flameout when
                                                     used as an SFC detector
  Collector       Heated collector zone              Collector not actively heated



The SFC flame ionization detector is insulated from the oven so that the heat from the oven has minimal effect on it. The SFC detector's minimum operating temperature is the oven temperature plus 10°C because of convection and conduction of heat from the oven.

To achieve temperature stability, the SFC flame ionization detector heated zone required changes in the temperature control parameters. The controller uses the Ziegler-Nichols algorithm1 to obtain the proportional (P), integral (I), and derivative (D) terms in the control algorithm. This algorithm defines the PID constants such that the object reaches its desired steady-state temperature without large offsets or deviations. The PID controller is designed for larger masses such as the GC heated detector zones. Although it takes longer for larger masses to heat up, the oscillations are minimal and therefore it is easier to dampen the response and bring temperature under control. The SFC flame ionization detector is 1/4 the size of the GC design, so it heats up very quickly. Using the GC PID constants, large oscillations were seen because of poor temperature control. New initial values for the PID parameters for the SFC detector were derived using the Ziegler-Nichols tuning algorithm. A trial and error method was then applied to fine-tune the PID parameters to eliminate oscillations.
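As an illustration of the kind of starting point this procedure gives (a generic Python sketch of the classic closed-loop Ziegler-Nichols rules, not the SFC firmware's constants; the ultimate gain and period below are made-up values):

    def ziegler_nichols_pid(ku, tu):
        # Classic closed-loop Ziegler-Nichols starting values from the
        # ultimate gain Ku and the ultimate oscillation period Tu (seconds).
        kp = 0.6 * ku          # proportional gain
        ti = 0.5 * tu          # integral time
        td = 0.125 * tu        # derivative time
        return kp, ti, td

    # Hypothetical heated-zone values; real numbers come from a tuning test
    kp, ti, td = ziegler_nichols_pid(ku=8.0, tu=20.0)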




Fig. 4. Sketches of the SFC (a) 
and tiC (b) flame ionization de- 
tectors. Note the difference in the 
healed block size and the location 
of the jet. 



Results 

The major result was the introduction of an HP G1205A SFC system in May 1992 that meets its performance goals. It was a challenge to reuse existing components that were not designed with SFC in mind. In SFC, control of temperature in different regions of the system is critical to the success of the chemical analysis and the performance of the instrument. To make components perform under conditions for which they were not originally designed was a major success for the team. Another tangible result was the reduction of product cost by consolidation and reuse of components. The product is a compact modular design and maintains the integrity of the existing designs it leverages. The introduction of a thermoelectric Peltier-cooled pump head provides an advantage over designs that use a second source of CO2 to cool the pump head. Other HP analytical products have incorporated the Peltier device. The SFC flame ionization detector, with its shortened zone, has improved the quality of the analytical results.

Acknowledgments 

The work described in this paper is the result of several years of development efforts by the R&D project team. Specifically, Hans Georg Haertl is responsible for successfully incorporating the Peltier device into the LC pump design. Hans Georg was an engineer on loan to the site from the HP Waldbronn Division in Germany. The other members of the project team included Chris Toney (project manager), Elmer Axelson (software), Paul Dryden (firmware), Bill Wilson (chemist), Terry Berger (chemist), Mahmoud Abdel-Rahman (firmware and hardware), and Joe Wyan (hardware). Our success must also be shared by several other functional groups (manufacturing, marketing, purchasing, and administrative support) which were instrumental in the release of the product. Without the combined efforts of these individuals, the SFC would not have met its targeted release date.

Reference 

1. G.K. McMillan, Tuning and Control Loop Performance, Instrument Society of America, 1990, Chapter 1.

Microsoft is a U.S. registered trademark of Microsoft Corporation.
Windows is a U.S. trademark of Microsoft Corporation.






Linear Array Transducers with 
Improved Image Quality for Vascular 
Ultrasonic Imaging 

This project not only achieved its goal of improving the near-field image 
quality of an existing transducer design, but also added two-frequency 
operation. 

by Matthew G. Mooney and Martha Grewe Wilson 



Medical ultrasonic imaging is a real-time technique that uses high-frequency sound waves to image many different parts of the body including the heart, vessels, liver, kidney, developing fetuses, and other soft tissue. The focus of this article is on noninvasive imaging of the blood vessels, which is more commonly referred to as vascular imaging. We begin with a general overview of ultrasonic imaging and then focus on the basic design aspects of a transducer used for imaging. Next, we examine the vascular market and the customer requirements, and then describe the design process used to develop two new vascular transducers. Finally, we present the clinical results.

The ultrasound waves used for imaging are generated by a transducer, which is held against the patient's body. The sound waves are produced by a piezoelectric material in the transducer. When a voltage is applied across the piezoelectric material, it deforms mechanically, creating vibrations in the material. These vibrations are acoustic (sound) waves and can have frequencies of 0.5 to 30 MHz. As the sound wave travels through the body and through various tissues, it bounces off the tissue interfaces, creating many reflections. The reflected sound waves are then detected by the transducer, providing information on the location of the tissues. A characteristic of the tissue called acoustic impedance determines the fraction of the energy that is transmitted from one tissue to another. The acoustic impedance, Z, is defined as:

Z = ρc,

where ρ is the density of the material and c is the velocity of sound through the material.

Fig. 1 shows the sound wave at an interface and the transmission equation at that interface. In the body, the impedances are often very similar, so the system must be sensitive to these differences to distinguish blood from muscle, for example. As shown in Table I, the impedance of bone is much different, thus causing a great deal of reflection at that interface. 1,2 Since the sound waves cannot penetrate bone, natural "windows" in the body such as intercostal spaces between the ribs are often used. Attenuation, which is the result of absorption and scattering of energy within a material, creates another challenge in ultrasonic imaging.
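To make the impedance argument concrete, the following minimal sketch evaluates the transmission coefficient from Fig. 1, T = 2Z2/(Z2 + Z1), together with the corresponding pressure reflection coefficient R = (Z2 - Z1)/(Z2 + Z1). The blood and muscle impedances are the legible values from Table I; the bone value is only an assumed round number for illustration.

```c
/* Sketch: amplitude transmission and reflection at a tissue interface,
 * using the coefficients shown in Fig. 1. Impedances are in MRayls;
 * the bone value is illustrative. */
#include <stdio.h>

static double transmit(double z1, double z2) { return 2.0 * z2 / (z2 + z1); }
static double reflect(double z1, double z2)  { return (z2 - z1) / (z2 + z1); }

int main(void)
{
    double z_muscle = 1.70, z_blood = 1.67, z_bone = 6.0; /* bone: assumed */

    /* similar impedances: almost all of the energy is transmitted */
    printf("muscle->blood: T=%.3f R=%.3f\n",
           transmit(z_muscle, z_blood), reflect(z_muscle, z_blood));

    /* large mismatch: a strong echo comes back from the interface */
    printf("muscle->bone:  T=%.3f R=%.3f\n",
           transmit(z_muscle, z_bone), reflect(z_muscle, z_bone));
    return 0;
}
```

The small reflection at the muscle/blood boundary is exactly why the system must be so sensitive, while the large bone reflection explains the need for acoustic "windows."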



Z = ρc

where Z = Acoustic Impedance
      ρ = Density of the Material
      c = Velocity of Sound through the Material

Transmission Equation:
T = 2Z2 / (Z2 + Z1)

Reflection Equation:
R = (Z2 - Z1) / (Z2 + Z1)

Fig. 1. Sound waves traveling between two materials. The incident sound wave in material 1 (impedance Z1) is partly transmitted (T) into material 2 (impedance Z2) and partly reflected (R).

Once the wave is reflected from the different interfaces, it travels back through the body and is sensed by the transducer piezoelectric material. The energy is then transformed into electrical signals which are propagated over a cable to the ultrasound imaging system, where all of the signal processing and image display occur.

Modern ultrasound systems are very complex and can consist of analog and digital portions. These systems process electrical signals in terms of amplitude and time, creating a real-time picture of the part of the body being scanned. Fig. 2 shows the HP Sonos 1000 cardiovascular ultrasound imaging system.

One type of ultrasound imaging system is the phased-array system. These systems have multiple channels for transmitting, receiving, and processing sound wave signals (64 and 128 channels are the most common). A typical transducer for these systems is divided into many individual transmitter/receivers which are called elements.






Table I
Characteristic Acoustic Impedances
of Several Biological and Nonbiological Materials

Material                               Acoustic Impedance (10^6 Rayls)
Air at S.T.P.
Water at 20°C
Blood                                  1.67
Muscle                                 1.70
Fat
Soft Tissue
Kidney
Liver
Bone
Polyethylene, Low-Density              1.8
Vinyl (rigid)                          3.0
Lucite                                 3.2
Valox, Black (glass-filled nylon)      3.8
Aluminum                               17.0
Lead Zirconium Titanate                28 to 36
Stainless Steel                        45.4

Fig. 2. HP Sonos 1000 ultrasound imaging system.

Fig. 3. Comparison of linear phased-array and phased-array ultrasound transducers. (The figure shows the element firing patterns of each type and the coupling gel between the transducer and the body.)

Typically, each transducer element is connected to one system channel. Two different types of transducers are commonly used: sector phased-array and linear phased-array transducers. The main difference is how these transducers are electrically excited. Each element in a sector phased-array transducer is activated at a slightly different time delay, which allows the sound wave to be shaped into a beam of sound and steered at different angles, producing a picture shaped like a pie slice. Linear phased arrays have the additional ability to activate groups of elements in a type of sequential scanning of the image, producing a rectangular-shaped picture. Fig. 3 shows a pictorial comparison of the element pulse patterns and the respective image shapes produced by the linear and phased-array transducers.

There are three main modalities in ultrasonic imaging: 2D, Doppler, and color flow. The 2D image is a real-time grayscale image display. A typical 2D linear image of a carotid artery in the neck is shown in Fig. 4. Doppler is a way of measuring the flow velocity and movement within an image and is named for the principle it uses. The information is presented either with an audible tone or a visual plot. Color flow imaging detects the flow of blood and color-codes it depending on the direction and velocity of flow. An image showing color flow in the carotid artery is shown in Fig. 5.
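For orientation, the Doppler shift that underlies both the Doppler and color flow modalities is commonly written f_d = 2*f0*v*cos(theta)/c. The following sketch evaluates that textbook relation with illustrative numbers; it is not a description of HP's Doppler or color flow processing chain.

```c
/* Sketch: the Doppler shift relation used in flow-velocity estimation,
 * f_d = 2 * f0 * v * cos(theta) / c. All values are illustrative. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double f0 = 7.5e6;                                /* transmit frequency, Hz */
    double v = 0.5;                                   /* blood velocity, m/s */
    double theta = 60.0 * 3.14159265358979 / 180.0;   /* beam-to-flow angle */
    double c = 1540.0;                                /* speed of sound, m/s */

    double fd = 2.0 * f0 * v * cos(theta) / c;
    printf("Doppler shift: %.0f Hz\n", fd);           /* about 2.4 kHz */
    return 0;
}
```

The cos(theta) term is why these modalities are most sensitive when the vessel points towards or away from the transducer, as noted later in this article.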

The references provide more detailed information on phased-array ultrasound imaging systems 3 and on color flow and Doppler processing. 4

Transducer Design 

As mentioned above, a transducer consists of many small elements. An element is a multilayer sandwich of piezoelectric and other materials. The basic acoustic transducer element is shown in Fig. 6. Lead zirconium titanate, PZT, is a commonly used piezoelectric ceramic sensor material having an acoustic impedance between 28 and 36×10^6 kg/m²s (Rayls). Recall that soft tissue has an impedance in the 1-to-2-MRayl region. Because of the impedance mismatch, there is a lot of reflected energy at the transducer/tissue interface, and such a transducer would not couple much energy to the human body. To improve coupling, a front matching layer, a coupling material with an intermediate acoustic impedance, is added to the front face of the piezoelectric to aid in transferring the sound wave more efficiently to and from the body.








Fig. 4. 2D linear phased-array image of a carotid artery.

Fig. 6. Enlarged side view of one element of an ultrasound transducer, showing the body, the lens, the matching layers, and the piezoelectric ceramic.

An acoustically absorbing material called a backing is added to the back of the sensor to dampen energy that might cause additional mechanical vibrations (greater pulse length). 5,6

An important aspect in ultrasound imaging is the ability to detect and resolve small structures in the body. This is largely determined by how well the beam of ultrasound is focused. Beam focusing for linear and phased arrays is determined by two different measures: elevation beam width and lateral beam width. Fig. 7 shows a picture of a focused beam and the elevation and lateral planes. 7

The lateral beam width is a measure of transmitted beam width in the lateral plane and it changes as a function of many parameters including distance from the transducer. It can be measured by keeping the transmitted beam fixed while moving a receiving hydrophone in an arc in the lateral plane. Lateral beam width can be changed by electronically switching elements on or off to change the aperture size. The electrical impulses delivered to the elements can be advanced or delayed in time to provide additional focusing in the lateral plane. Fig. 8 shows a typical lateral beam plot at some distance from a transducer. Beam width is extracted from the beam plot and is a measure of how wide the main transmitted lobe is at 6, 10, 20, or even 30 dB down from the maximum.



Fig. 5. Color flow image of a carotid artery.



Fig. 7. Focused beam, showing elevation and lateral planes. 






Fig. 8. Typical lateral beam plot at some distance from the transducer, showing the beam width measurement. (Horizontal axis: angle between target and transducer face, in degrees; the -20-dB beam width is marked.)

The elevation beam width is a measure of transmitted beam width in the elevation plane. Like the lateral beam width, it varies with the distance from the transmitting element. It is different from the lateral beam width because the size of the transducer in the elevation direction is not the same as the size in the lateral direction. Also, since most transducers are not divided into multiple elements in the elevation direction, elements cannot be electronically switched to change the size of the aperture, or phased to provide additional focus in the elevation plane. As such, the choice of elevation aperture is critical to any transducer design. Fig. 9 shows the effect of aperture size on elevation beam width and focal point.

Instead of electronic elevation focusing, a lens is placed 
over the elements to provide some focusing in the elevation 






Fig. 9. Effect of aperture size on elevation beam width and focal point. The formulas shown 9 give the beam width as ΔX = 0.89Zλ/(2a), with λ = 0.2 mm at 7.5 MHz, where F = focal length, ΔX = beam width, λ = wavelength, Z = distance from aperture, and a = aperture/2. Example values: a = 10 mm, F = 500 mm, ΔX = 4.5 mm; a = 7 mm, F = 250 mm, ΔX = 3.2 mm.

Fig. 10. Influence of lens radius on focal point and elevation beam width. The formula shown 10 relates the focal length F to the lens radius R and the velocities of sound in the body (V_B) and in the lens (V_L). Example values: R = 25 mm, F = 50 mm; R = 15 mm, F = 30 mm.

plane, but this focal point remains constant for a given choice of lens. The radius of curvature in conjunction with material properties of the lens determines where the narrowest point in the elevation beam will lie. This narrowest point is called the elevation focal point. The sketch in Fig. 10 illustrates how the lens radius influences both the focal point and the elevation beam width.
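The beam width relation given in Fig. 9 is easy to evaluate directly. The sketch below reproduces the two example rows from that figure; it is only an illustration of the formula, assuming the reconstruction ΔX = 0.89Zλ/(2a) above, and not a design calculation for a specific HP transducer.

```c
/* Sketch: elevation beam width from the diffraction relation in Fig. 9,
 * dx = 0.89 * Z * lambda / (2a), with a = aperture/2. The inputs are the
 * example values from the figure. */
#include <stdio.h>

static double beam_width_mm(double z_mm, double lambda_mm, double half_aperture_mm)
{
    return 0.89 * z_mm * lambda_mm / (2.0 * half_aperture_mm);
}

int main(void)
{
    double lambda = 0.2;   /* mm, wavelength at 7.5 MHz */

    printf("a = 10 mm, Z = 500 mm: dx = %.1f mm\n", beam_width_mm(500.0, lambda, 10.0));
    printf("a =  7 mm, Z = 250 mm: dx = %.1f mm\n", beam_width_mm(250.0, lambda, 7.0));
    return 0;
}
```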

Another important factor in resolving small structures is time resolution. A well-focused beam allows small targets that are side by side to be resolved, while a short pulse length allows resolution of small targets that are separated by short distances into the body. Because the depth of an echo is determined by its time of arrival at the transducer, this attribute is called time resolution. Time resolution measures how well small structures can be resolved along the beam axis, so it is also called axial resolution. Axial resolution is measured in seconds of duration after excitation before the transmitted acoustic pulse fades to a certain level, usually 20 or 30 dB below maximum. The transducer can be designed to have a higher frequency, which increases axial resolution, but the body absorbs higher frequencies more, thereby reducing the depth into the body that can be imaged. Two pulses at two different frequencies are shown in Fig. 11. The higher-frequency transducer produces a shorter pulse. The choice of frequency requires a trade-off between axial resolution and depth of penetration.
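A common way to turn a measured pulse duration into a distance is the two-way relation resolution ≈ c × (pulse length)/2. The sketch below applies it to two assumed pulse durations; the numbers are illustrative and are not measured values for the HP transducers.

```c
/* Sketch: converting an assumed -20-dB pulse duration to axial
 * resolution with the usual two-way relation, resolution = c*t/2. */
#include <stdio.h>

int main(void)
{
    double c_mm_per_us = 1.54;        /* speed of sound in tissue, mm/us */
    double pulse_75mhz_us = 0.4;      /* assumed pulse length at 7.5 MHz */
    double pulse_45mhz_us = 0.7;      /* assumed pulse length at 4.5 MHz */

    printf("7.5 MHz: about %.2f mm axial resolution\n",
           c_mm_per_us * pulse_75mhz_us / 2.0);
    printf("4.5 MHz: about %.2f mm axial resolution\n",
           c_mm_per_us * pulse_45mhz_us / 2.0);
    return 0;
}
```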

Vascular Linear-Array Transducers

Since each imaging application has very specific requirements, different systems and transducers are sold for different applications. The medical ultrasound market can be segmented into cardiology, vascular, radiology, and obstetrics. Hewlett-Packard's product line is focused on the cardiology and vascular markets. Our project developed two transducers for the vascular ultrasound market.

In June 1990, HP introduced the HP 21258A 7.5-MHz linear phased-array transducer. This transducer offered several new technologies, one of which is continuous steering of the image. Since color flow and Doppler imaging are based on the Doppler principle, they are most sensitive when the blood vessels and the blood they carry point towards or away from the transducer.





Fig. 11. Comparison of ultrasound pulses from transducers operating at different frequencies. (a) 7.5-MHz pulse. (b) 4.5-MHz pulse. (Each panel plots amplitude in dB versus time in μs, with the -20-dB pulse length marked.)

Unfortunately, the 2D image has its best sensitivity when the blood vessels run parallel to the transducer. Steering can be done on the linear arrays to account for the conflicting angle requirements. This continuous steering is made possible by combining 288 transducer elements into 128 signal paths, which match the 128 channels on the ultrasound system. The ability to steer the 2D image to get the best grayscale picture while independently steering the color flow image to get the best color filling allows the user to get the best of both modalities. 2D image steering is an important feature of HP linear-array transducers.

The introduction of the HP 21258A was followed in April 1991 by the lower-frequency HP 21255A 4.5-MHz linear-array transducer. The HP 21255A employed the same technology as the HP 21258A, but the lower-frequency ultrasound energy of the HP 21255A could penetrate deeper into the body. As expected, the axial resolution was not as good as the HP 21258A, since the frequency was lower. Together, these two linear-array transducers provided the vascular ultrasound user with alternatives to the penetration/resolution trade-off.

As with any new product, improvements were planned for the next version soon after the introduction of the version A transducers. The improvements were centered around customer feedback and were organized into two major categories: the large size of the transducers and the inability of the HP 21258A to resolve small structures close to the surface of the skin (near field).

A cross-functional project team was formed to define and develop new versions of the version A transducers. The version B vascular transducer team soon divided the work to be done into two categories to match customer needs: ergonomic improvements and near-field image quality improvements. The remainder of this article will discuss the process used to address the near-field improvements.


Customer Feedback

The version B near-field image quality team set as an initial goal to be able to produce near-field images as good as the best competitor while maintaining our ability to produce good images in the far field.

Knowing that the near field needed improvement was a good start, but the project team required more detail. What did the customer define as near field? Could this improvement be made at the expense of other attributes? How much improvement was needed? In addition to finding out exactly what the customer wanted, the project team also needed to figure out how to improve the near field. Should the frequency be changed? The aperture size? The focal point?

At this point in time, we needed a framework for organizing the information we had as well as some way to highlight areas that required more data. An organizational tool called quality function deployment (QFD) was identified as one way to help us translate what the customer wanted into a product. The QFD method is based on the construction of a "house of quality." 8

The first step in building a house of quality is to tabulate customer wants and assign them weighting factors. 8 Given enough initial feedback, our team was able to develop and distribute a survey that listed many customer "wants" and asked for relative weights for each. The next step in building the house of quality is to tabulate those engineering characteristics that affect some or all of the customer wants. This list contains all the parts of the design that engineering could modify to give customers what they want.

The final steps in the house-building process took the most time. The competition's products were benchmarked relative to HP's to see in what areas of customer wants we were winning or losing. The engineering characteristics were related to customer wants in matrix form so that we could see what characteristics affected which customer want and by how much. The final house of quality provided a graphical means of displaying all this data in a readable format. A small piece of our house is shown in Fig. 12. The actual house had 20 wants and 30 characteristics and several "rooms" like the example in Fig. 12.

Engineering Design

The house of quality was an effective tool to show what we had to do to give customers what they wanted. Next the project team needed to change the design of the HP 21258A transducer to create the HP 21258B.

Using customer data, other rooms in the house of quality showed that the HP 21258A transducer was approximately equivalent to its competitors in the areas of color flow and Doppler performance. However, it was inferior to its competitors in near-field 2D image quality. The house of quality also showed that elevation beam width was the engineering characteristic most strongly related to 2D image quality. This became the first area of redesign on the HP 21258B.

The house of quality provided information that our customers considered the near field to be over a very specific depth range into the body. New HP 21258B designs were built and evaluated for elevation beam widths in this range.







                                      Engineering Characteristic
Want                        Weight   Elevation      Lateral       Pulse
                                     Beam Width     Beam Width    Length
Less Clutter                   8         •              X
Good Color Flow                9         O              X
Good Detail Resolution         6         •              •            •
Importance                             153             71           54

Relationship matrix values: • = strong interaction = 9, O = medium interaction = 3,
X = weak interaction = 1.

Importance(j) = Σ [Weight(i) × Matrix Value(i,j)]

Fig. 12. A small portion of the QFD (quality function deployment) "house of quality" for the vascular ultrasound transducer.

The HP 21258B designs included combinations of smaller elevation apertures and tighter lens radii. The graph in Fig. 13 shows the elevation beam width of two such designs compared to the HP 21258A.

At least eight different designs for the HP 21258B were built and tested in terms of the critical few engineering characteristics shown to be important to customer satisfaction. A few of the most promising designs were taken to preference trials and tested against the competition.

Initial Clinical Trials

The initial preference clinical trials were held in-house with internal experts and some invited outside vascular ultrasound users. Images were acquired using the various transducer designs and customers were asked to grade the images relative to each other. Some preference trials were conducted at customer sites. The initial comments on the new designs were very positive from users of the HP 21258A. According to these users, the initial HP 21258B was definitely an improvement over the HP 21258A, especially in the near field. Unfortunately, the HP 21258B designs did not fare well in the eyes of those users that had experience with our competition. One technologist summed up the best of our new designs as "better than the old one ... about 80% as good as my brand X."



Fig. 13. Elevation beam widths of the HP 21258A transducer and two improved designs, plotted against depth into the body.



The disappointing performance of the initial HP 21258B designs required reexamination of the engineering design to identify further opportunities for improvement. The next most important engineering characteristics as defined by the house of quality were not transducer design changes but ultrasound system changes involving lateral beam width. There were some ongoing investigations into system improvements, but with the preference trial results, these efforts received more attention.

In terms of transducer improvements, many of the secondary engineering characteristics such as pulse length were shown to be moderately important to customer wants. Not one but many characteristics would have to be changed to make large improvements in transducer performance. Since we had already been through preference trials, we were pretty far along the product development cycle and a redesign would have taken too much time. At this point, some technique to change the transducer design significantly in a short period of time was needed.

Second Matching Layer

It was clear that an improvement in the axial resolution was required. A decrease in the pulse length would help. After investigating different ways to improve the pulse response of the transducer, one idea was to add a second matching layer to the sensor stack design.

Matching layers are very important to the transducer construction. The matching layer is attached to the front face of the sensor material and its main function is to help efficiently couple the energy to and from the body. A matching layer is one quarter of a wavelength thick to provide for constructive interference as the sound waves travel through it. By adding more than one matching layer, the energy is even more efficiently coupled, resulting in reduced pulse length. This would correspond to an improvement in the axial resolution.

The investigation of adding a second matching layer began by using computer models that showed a reduction in the pulse length when a second matching layer was added to a sensor. A comparison of the two modeled 7.5-MHz designs is shown in Fig. 14. The next steps in the process were to define the desirable material properties for the second matching layer material and then select and test an appropriate material.

In terms of transducer design, there are two important material properties of a matching layer: the acoustic impedance and the attenuation. Typically, the preferred acoustic impedance of the matching layer is the geometric mean of the impedances of the two materials it is sandwiched between. For example, if the first matching layer has an impedance of 8 MRayls and the body is 1.5 MRayls, the desirable impedance of the second matching layer would be 3.5 MRayls:

Z_ML2 = (Z1Z2)^1/2 = (8 MRayls × 1.5 MRayls)^1/2 = 3.5 MRayls.

Ideally, the attenuation should be low to minimize the amount of energy lost when the wave travels through the matching layer.
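The two design rules stated above, the geometric-mean impedance target and the quarter-wave thickness, are easy to capture numerically. In the sketch below the polymer sound speed is an assumed, illustrative value; it is not the material HP actually selected.

```c
/* Sketch: second-matching-layer design numbers from the rules in the
 * text: target impedance = geometric mean of the neighbors, thickness =
 * one quarter wavelength. The polymer sound speed is assumed. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double z_first_layer = 8.0;    /* MRayls */
    double z_body = 1.5;           /* MRayls */
    double f = 7.5e6;              /* center frequency, Hz */
    double c_polymer = 2500.0;     /* assumed sound speed in polymer, m/s */

    double z_target = sqrt(z_first_layer * z_body);       /* about 3.5 MRayls */
    double thickness_um = c_polymer / (4.0 * f) * 1e6;    /* quarter wave */

    printf("target impedance: %.1f MRayls\n", z_target);
    printf("quarter-wave thickness: %.0f um\n", thickness_um);
    return 0;
}
```

With these targets in hand, the material search reduces to finding a polymer near the desired impedance, as described next.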

In addition to the acoustic requirements, other material properties of the second matching layer were important. The material needed to be bondable since it was going to be attached to the acoustic stack.





Fig. 14. (a) Pulse and spectrum of the modeled HP 21258A 7.5-MHz transducer. (b) Pulse and spectrum of the modeled HP 21258B 7.5-MHz transducer. (Each panel shows the pulse-echo (two-way) impulse response: amplitude relative to maximum versus time in μs, and the spectrum versus frequency in MHz.)

It required good resistance to changes in temperature and humidity and resistance to chemicals to which the transducer would be exposed, such as acoustic coupling gel and disinfectants. The second matching layer also needed to be an electrical insulator for patient safety. To maintain consistent performance, each of the material properties needed to be homogeneous within the second matching layer.

Polymers generally have impedances between 1 and 4 MRayls as noted in Table I. To meet the acoustic requirements, many polymers were evaluated and the selection was narrowed based on the acoustic criteria. Next, more complete material property testing was done and a final polymer material was chosen.

To ensure that the second matching layer material selected was appropriate in terms of transducer reliability, extensive environmental testing was done. The testing included strife (stress to failure) testing in severe temperature and humidity environments. As a result of the strife testing, design changes were implemented to improve the robustness of the product.

A comparison of the acoustic response of transducers with and without second matching layers was done. The results showed that the pulse lengths are greatly reduced with second matching layers. Acoustic pulse and spectrum test data comparing 7.5-MHz transducers with and without second matching layers is shown in Fig. 15. The graph shows a significant decrease in the pulse lengths of the transducers with second matching layers.



Fig. 15. Comparison of pulse lengths at various transmitted beam amplitudes below the maximum for HP 7.5-MHz transducers with and without second matching layers. (HP 21258A: no second matching layer; HP 21258B: second matching layer.)

The main drawback to adding a second matching layer is acoustic cross-coupling between neighboring elements, which causes energy to be lost when the array is steered off-axis (roll-off). From the investigation, we found that the roll-off at 45° did increase slightly with the second matching layer. To gain a better understanding of how this would affect the 2D image quality and the color flow and Doppler performance of the transducer, clinical trials were done.

Further Clinical Trials

Several preference clinical trials were done in-house and at many different sites, both vascular and mixed cardiac/vascular. The main goal was to test the new version B linear arrays with added second matching layers against the version B linear arrays without second matching layers. In addition, clinical information was gathered on the performance of these new version B linear arrays versus the version A linear arrays and the competition's linear-array transducers. The two main areas of interest were the near-field resolution and the color flow performance.

The clinical trial results were very positive. A clear improvement in the 2D image quality was seen in all of the tests in which the version B transducer with the second matching layer was compared to the version B transducers without second matching layers. An example is shown in Fig. 16, which shows two 2D images of the radial artery in the arm (0.5 cm deep). The first image was taken with the version A 7.5-MHz linear array and the second was taken with the version B 7.5-MHz linear array. The near-field image quality is much improved with the version B transducer, as demonstrated by the clarity of the horizontal artery near the top of the righthand image in Fig. 16. The color flow and Doppler performances were determined to be about equivalent or slightly better. The version B transducer with the second matching layer also performed well against the competition's transducers.

Fig. 17 shows a bar graph comparing the clinical performance of the version A and B 7.5-MHz transducers with two competitors; the higher score indicates better performance. Overall, the near-field performance was improved, thus achieving the design goal.






Fig. 16. 211 images i.r Hm ra.Juil 
artery in the arm. (left) HP 

2125HA transducer, (right | hp 
2 125HH transducer. 



Additional Features

In addition to the shorter pulse lengths, the new linear arrays have the ability to operate at two frequencies as a result of the increase in their bandwidth. With some system changes, the HP 21258B became a 7.5/5.5-MHz transducer and the HP 21255B became a 4.5/3.7-MHz transducer. This dual-frequency feature enhanced the performance of these transducers in addition to making them unique at that time in the vascular imaging market.

Manufacturing methods were also improved for the version B linear phased arrays. Taking advantage of state-of-the-art assembly techniques in one manufacturing step resulted in a threefold decrease in cycle time while making the process easier for the operators.

Another feature that was added was the ability of the transducers to image an expanded view. This new imaging format is called trapezoidal imaging and shows a much larger area. Trapezoidal imaging was developed outside the transducer image quality improvement project and is not covered in detail in this article. A comparison of a typical linear-array image and a trapezoidal image is shown in Fig. 18. The expanded view allows sonographers to see more of the larger structures such as the kidney all in one image. Feedback on this new feature has been extremely positive.
Fig. 17. Clinical trial results showing measured user preference for image quality of vascular performance of approximately 7-MHz transducers (HP 21258A, Competitor 1, HP 21258B, Competitor 2). Preference is measured relative to the HP 21258A.



Customers have reported better and faster diagnoses with trapezoidal imaging. The trapezoidal imaging format is another distinct feature of these new linear-array transducers.

The ergonomic portion of this project was also very successful. The size of the version B transducer was reduced by more than 25% compared to version A. New cable technology allowed smaller and lighter cables, resulting in a version B assembly that is two-thirds the weight of version A.

The version B vascular linear-array transducers are shown in Fig. 19.

Conclusion

In June of 1993, the two version B linear phased-array transducers were introduced at the Society of Vascular Technology and the American Society of Echocardiography conferences. The transducers were well-accepted by physicians and sonographers, and the order rate for the version B linear phased-array transducers is several times that for the version A linear phased-array transducers. Today these new vascular transducers are helping clinicians provide more accurate diagnoses with improved image quality, improved ergonomics, and trapezoidal imaging format.

Acknowledgements

Many people contributed to the success of the transducer portion of the version B linear array project. Rick Snyder championed steerable linear arrays at the HP Imaging Systems Division in the early stages. Ray O'Connell was the R&D manager for this project. Some of the original image quality team members were Martha Keller, Gary Seavey, and Ed Parnagian. The ergonomics team members included Ed Parnagian, Reggie Tucker, Martha Moriondo, and Greg Peatfield. The teams had great technician help from Linh Pham, Juliana Ciarla, Sandy Decker, Tony Cugno, and many other manufacturing operators. The engineering teams were later expanded to include Greg Vogel, Hashi Chakravarty, Troy Nielsen, Sean Cranston, Paul Carrier, Jon Rourke, and Dan Stempel. A lot of help came from marketing, which was represented by Pat Venters and Stockton Miller-Jones.






Fig. 18. 2D images showing the benefits of the trapezoidal format. (a) Linear-array format. (b) Trapezoidal format.



Dan Cote was quite helpful with the QFD and customer surveys. Finally, Gail Zwerling, Paul Magnin, Ray O'Connell, Don Orofino, and Rick Snyder were all helpful in editing and providing advice in writing this article.



References

1. F. Duck, Physical Properties of Tissue: A Comprehensive Reference Book, Academic Press, 1990.

2. A. Selfridge, "Approximate Material Properties in Isotropic Materials," IEEE Transactions on Sonics and Ultrasonics, Vol. SU-32, May 1985, pp. 381-394.

3. Hewlett-Packard Journal, October 1983, entire issue.

4. Hewlett-Packard Journal, June 1986, pp. 20-48.

5. J. Larson, III, "An Acoustic Transducer Array for Medical Imaging, Part I," Hewlett-Packard Journal, Vol. 34, no. 10, October 1983, pp. 17-22.

6. D. Miller, "An Acoustic Transducer Array for Medical Imaging, Part II," Hewlett-Packard Journal, Vol. 34, no. 10, October 1983, pp. 22-26.

7. T. Szabo and G. Seavey, "Radiated Power Characteristics of Diagnostic Ultrasound Transducers," Hewlett-Packard Journal, Vol. 34, no. 10, October 1983, pp. 26-29.

8. J. Hauser and D. Clausing, "The House of Quality," Harvard Business Review, Vol. 66, no. 3, May-June 1988, pp. 63-73.

9. G. Kino, Acoustic Waves, Prentice-Hall, Inc., 1987.

10. D. Halliday and R. Resnick, Fundamentals of Physics, Second Edition, John Wiley & Sons, 1981.



Fig. 19. HP 21255B and 21258B linear phased-array vascular ultrasound transducers.






Structured Analysis and Design in the 
Redesign of a Terminal and Serial 
Printer Driver 

The project team felt that the objectives could not be met with a traditional design approach. Structured analysis with real-time extensions and structured design provided an effective alternative.

by Catherine L. Kilcrease 



This paper describes the use of structured analysis with real-time extensions and structured design in the redesign of the terminal and serial printer driver for the MPE/iX operating system on the HP 3000 computer system. The redesign project objectives were to:

• Maintain the current block mode performance (the main mode of data transfer for terminal I/O is to transfer characters in blocks of data)
• Improve HP 3000 transaction processing performance on industry-standard benchmarks by 5% to 10% through a 20% to 40% reduction in the terminal driver path lengths
• Maintain the current level of functionality
• Produce a high-quality, supportable, and maintainable product.

The project team felt we could not achieve these goals with the then-current development techniques. Object-oriented methods were ruled out because of the performance requirements. We elected to use structured analysis 1 with real-time extensions and structured design. 2

The Redesign Project

The original driver was based on the terminal driver of the HP 3000 MPE V operating system. During its design, specification of the terminal and printer subsystem was unclear and led to many problems. Since the original driver had added many features since its first release, it was important to have a complete specification of the subsystem to meet the project goals. Structured analysis provided this.

The original driver consists of seven modules that handle the I/O between the HP 3000 file system, the MPE/iX operating system, and the data communication and terminal controller (DTC) (Fig. 1). There are two storage managers. The terminal storage manager provides the interface between HP 3000 file system read and write intrinsic calls and the terminal logical device manager or fast write concat procedure. The serial printer storage manager provides the interface between HP 3000 file system write intrinsic calls and the serial printer logical device manager. High-level I/O is the old path between the file system and the logical device managers. It generally handles non-read/write I/O (controls, opens, closes). There are two logical device managers: one for terminals and one for serial printers. The logical device managers transfer data between the user stack and the data communication buffers for reads, writes, and controls. The fast write concat procedure processes writes received from the terminal storage manager and sends them to the terminal and serial printer device manager (it provides a faster write path for terminals). The terminal and serial printer device manager communicates read, write, and control information to the data communication and terminal controller (DTC) through the Avesta Device Control Protocol. The lower interface of this protocol is the flow control manager. The flow control manager provides reliable transport between the HP 3000 and the DTC by implementing a transport protocol called the Avesta Flow Control Protocol. The storage managers and fast write concat procedure are invoked by procedure calls. The interface between the other modules in the driver and the operating system is message-based via MPE/iX ports.



Fig. 1. Original driver architecture. (The modules shown are the file system, the two storage managers, high-level I/O, the two logical device managers, the terminal and serial printer device manager, the flow control manager, and the LAN driver.)






ADCP = Avesta Device Control Protocol
AFCP = Avesta Flow Control Protocol

Fig. 2. Redesigned driver architecture. (Blocks include the storage manager, the fast I/O manager (process stack), the deferred I/O manager, high-level I/O, and the LAN driver, with logical, device control (ADCP), and transport (AFCP) layers in the I/O managers.)

Path trace utility traces of the original driver were analyzed to determine good opportunities for path reduction. The redesign architecture then incorporated the best reduction ideas. The performance improvement is gained from streamlining the path for the most common I/O through the use of direct procedure calls, and from a design emphasizing efficient operation.

The new redesigned driver consists of three modules with three layers in each module (Fig. 2). The three layers are logical, device, and transport. The logical layer acquires the resources needed to complete the I/O request and transfers the data from (to) the user stack into (out of) a data communication buffer. The device layer handles the Avesta Device Control Protocol, which is the mechanism to communicate with the DTC. The transport layer implements the Avesta Flow Control Protocol (transport protocol) and interfaces with the LAN driver.

The three modules in the redesigned driver are the fast I/O manager, the deferred I/O manager, and the inbound I/O manager. The fast I/O manager is invoked by the file system with a procedure call. It handles the most commonly executed I/O: reads, writes, and terminal controls (e.g., change speed, parity, etc.). It attempts to process each request to completion. If it cannot complete the processing because of the lack of some resource, the fast I/O manager will block until the resource is available. If the I/O cannot be blocked (i.e., it is no-wait I/O), then the fast I/O manager sends the I/O to the deferred I/O manager. If the request cannot be completed because the "window" is closed at the transport level, the request is also sent to the deferred I/O manager. The deferred I/O manager has a message-based interface. It handles deferred requests from the fast I/O manager and I/O requests that are made through the "old" high-level I/O path, such as open, close, and preemptive writes. The inbound I/O manager also has a message-based interface. It receives inbound packets from the LAN driver. If the packet is a reply to an I/O request, the inbound I/O manager sends a message to either the fast I/O manager or the deferred I/O manager, depending on which of the two initiated the request. It also handles asynchronous events.
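As a rough illustration of the fast I/O manager's dispatch policy just described (complete if possible, block for a resource if the request may wait, otherwise defer), the sketch below shows one way the decision could be coded. All names and the stub behavior are hypothetical; the actual MPE/iX driver interfaces are not shown in this article.

```c
/* Sketch of the fast I/O manager dispatch policy: complete, block, or
 * defer. Hypothetical names and stubs, not the real driver. */
#include <stdio.h>

typedef enum { IO_DONE, IO_NEED_RESOURCE, IO_WINDOW_CLOSED } io_status;
typedef struct { int id; int attempts; } io_request;

/* Stubs standing in for the real driver work. */
static io_status try_process(io_request *req)
{
    /* pretend the first attempt finds no datacomm buffer */
    return (req->attempts++ == 0) ? IO_NEED_RESOURCE : IO_DONE;
}
static void block_until_resource(io_request *req) { (void)req; /* wait here */ }
static void send_to_deferred_iom(io_request *req)
{
    printf("request %d deferred to the deferred I/O manager\n", req->id);
}

static void fast_iom_handle(io_request *req, int no_wait)
{
    for (;;) {
        io_status s = try_process(req);
        if (s == IO_DONE) {                  /* completed on the fast path */
            printf("request %d completed\n", req->id);
            return;
        }
        if (s == IO_WINDOW_CLOSED || no_wait) {
            send_to_deferred_iom(req);       /* message-based path finishes it */
            return;
        }
        block_until_resource(req);           /* waitable request: block, retry */
    }
}

int main(void)
{
    io_request waitable = { 1, 0 }, nowait = { 2, 0 };
    fast_iom_handle(&waitable, 0);
    fast_iom_handle(&nowait, 1);
    return 0;
}
```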

Software Life Cycle

There was some concern that structured analysis and structured design documents would not fit into documents produced by the product life cycle. With our recently revised life cycle this turned out to not be an issue. Our software product life cycle contains the following phases and produces the following lab documents:

Phase                    Method                 Document
Proposal
Investigation                                   Investigation Report
Development:
  Specify                Structured Analysis    External Specification,
                                                Internal Specification
  Design                 Structured Design      Internal Design
  Integration/Test                              Test Plan
Support
Discontinuance

The external specification document describes the environment in which the product operates, the functional capabilities of the product, and the details of the product's user interface. The internal specification describes the internal requirements of the system and the internal interfaces between the system components. The internal design contains the complete detailed description of the algorithms and data structures to be used in the implementation of the product. The test plan outlines the types of tests to be used to guarantee the quality of the finished product upon release from the lab.

Training

There are four groups of people who need training in structured analysis: development engineers, inspectors, online and offline support engineers, and maintenance engineers. Training in structured design for the nondevelopment engineers is not necessary. The structured design document components are easy to comprehend. The project team took a class in structured analysis with real-time extensions and structured design at the start of the project during the investigation phase. It would have been helpful to have had the training and some experience with the method before the start of the project. The structured analysis training for the nondevelopment engineers was developed by the project lead during the structured analysis phase before inspection of the internal specification.






——→ Data Flow
- - → Control Flow

Fig. 3. Data flow diagram. (Combined data flow and control flow diagram for fast I/O manager logical completion. Flows include FIOM_device_IO_reply, FIOM_logical_reply, device_read_reply, device_write_reply, device_TC_reply, read_data, logical_IO_info, logical_IO_reply, unused_buffer, and QIO_done, together with the FIOM_logical_cmpl state event matrix and processes such as classify_logical_reply and finish_read.)



Structured Analysis Overview

Structured analysis is the use of tools to produce a structured system functional specification. A structured specification is easier to read and understand than the classical textual functional specification because it is graphical and contains many small specifications. The system is broken into small understandable pieces. The tools of structured analysis can be categorized into five functions. The redesign project used an extension of structured analysis for real-time systems. In structured analysis, processes are independently data-triggered and infinitely fast (i.e., a process will transform the data when the data is present). Real-time extensions (i.e., the use of control information) allow the system to take other factors or conditions into consideration before enabling or disabling a process.

Function                          Tool
Partition the Requirements        Data Flow Diagrams
Describe Logic and Policy         Process Specifications
Show the Flow of Control          Control Flow Diagrams†
Describe Control Processing       Control Specifications†
Track and Evaluate Interfaces     Data Dictionary

† Real-time extensions


The data flow diagrams show the major decomposition of function and the interfaces among the pieces. They show the flow of data, not control. It is the system from the data point of view. Process specifications document the internals of the primitive data flow diagram processes in a rigorous way through the use of structured English, decision tables, or decision trees. They describe the rules of data transformation and the policy, not the implementation. Control flow diagrams share the same characteristics and relationships as data flow diagrams except that they deal with controlling the system. They show the flow of control in the system. A control specification converts input control signals into output control signals or into process controls. It has two roles: one to show how control is processed, and the other to show how processes are controlled (activated or deactivated). The data dictionary is an ordered list of data and control flow names and data and control store names and their definitions. Data flow diagrams and control flow diagrams can be combined together into one diagram.

Figs. 3, 4, and 5 illustrate the components of structured analysis with real-time extensions. Fig. 3 is a combination data flow diagram and control flow diagram. The solid arrows are data flows, the broken arrows are control flows, the solid vertical bars are state matrixes, and the circles are processes. The finish_read process transforms the device_read_reply (indicator that read data is ready) and the read_data (buffer of data input by the user) data flows into the logical_IO_reply data flow using information from logical_IO_info. The freed buffer flows out (unused_buffer data flow). The data dictionary entries for the data flows are:††

†† Here the asterisks indicate comments, the square brackets indicate a choice of one of the enclosed items, the vertical bar means OR, and the plus sign means AND. TC means terminal control, QIO is quiesce I/O (flush outstanding input/output and wait for completion), RID means request identification number, and TIO is terminal I/O.

device_read_reply (data flow) = * read reply from device layer to logical layer.
                                  Contains read status, length, and data pointer. *
                                status
                                + length
                                + data_pointer






FIOM_device_IO_reply (data flow) = [ device_read_reply
                                   | device_write_reply
                                   | device_TC_reply
                                   | device_preempt_write_reply
                                   ]

FIOM_logical_reply (control flow) = [ logical_read_reply
                                    | logical_write_reply
                                    | logical_TC_reply
                                    | logical_QIO_reply
                                    | logical_disconnect
                                    | logical_abort
                                    ]

logical_IO_info (data flow) = FIOM_IO_pending
                              + FIOM_IO_wait_port
                              + DIOM_IO_pending
                              + logical_RID_pending
                              + logical_abort_RID_pending

logical_IO_reply (data flow) = [ logical_read_reply
                               | logical_QIO_reply
                               | logical_TC_reply
                               ]

read_data (data flow) = buffer_ID
                        + read_status
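As an illustration of how entries like these map onto implementation data structures, the following C sketch declares records for two of them. The types and field widths are hypothetical; the article does not give the actual MPE/iX declarations.

```c
/* Sketch: hypothetical C records corresponding to two data dictionary
 * entries above. Field types are illustrative only. */
#include <stdint.h>

typedef struct {               /* device_read_reply = status + length + data_pointer */
    int32_t  status;
    uint32_t length;
    uint8_t *data_pointer;
} device_read_reply;

typedef struct {               /* read_data = buffer_ID + read_status */
    uint32_t buffer_id;
    int32_t  read_status;
} read_data;
```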

The process specification for finish_read (Fig. 4) describes how data is transformed. Control information is a little trickier to understand. For example, the FIOM_device_IO_reply is transformed into a control flow, FIOM_logical_reply, by the classify_logical_reply process shown in Fig. 3. The control flow enters the state event matrix, FIOM_logical_cmpl_SEM. The state event matrix has memory, that is, it remembers the state of the fast I/O manager.


NAME:
23213

TITLE:
finish_read

INPUT/OUTPUT:
read_data : data_in
device_read_reply : data_in
unused_buffer : data_out
logical_IO_info : data_inout
logical_IO_reply : data_out

BODY:
transfer data (if any) to destination, doing backspace processing and freeing
unused buffers during the transfer.

send logical_IO_reply with status from device_read_reply or read_data msg.

Fig. 4. Process specification for process finish_read.

From the FIOM_logical_cmpl_SEM event matrix (Fig. 5), one can see that the finish_read process is activated when the FIOM_logical_reply is a logical_read_reply and the state is read_pending. Empty boxes indicate error conditions.

The Project and Structured Analysis 

During the investigation phase, the team considered five different architectures. The final architecture is a refined version of one of them. At the start of the development phase, we needed to start the specification of the system, define the architecture, determine the changes that were needed in the TIO support modules and operating system, and update the investigation report. Determining the changes we needed from the MPE/iX operating system lab and the project that handled the driver configuration modules would have made more sense in the design phase and not the specification phase, but it could not wait until then.



Fig. 5. State event matrix (FIOM_logical_cmpl_SEM). The columns are the events, that is, the possible values of FIOM_logical_reply: logical_read_reply, logical_write_reply, logical_TC_reply, logical_QIO_reply, logical_disconnect, and logical_abort. The rows are the states: open_pending, idle, read_pending, QIO_pending, TC_pending, and close_pending. Each filled cell names the action and the next state, for example finish_read/idle, finish_write/idle, finish_QIO/idle, finish_TC/idle, do_disconnect/close_pending, do_abort/idle, and close_timer_running. Note: If an entry is blank, it is an "impossible" condition which should not be encountered due to subqueue restraints, etc. If the condition is hit, error code (not shown) will take appropriate action.







We started structured analysis using the fragmentation technique. This technique selects a set of inputs and outputs and creates a fragment model of processes that transform that set of data. The composite model is created by grouping the fragments. We tried to keep the system specification separate from the architecture. We broke the system into parts based on the type of I/O. For example, one fragment modeled the read path. It specified what happened with read requests that were processed completely without blocking for a resource, read requests that blocked for a resource (such as a datacomm buffer), and read requests that could not be processed because of a lack of a resource but could not block (no-wait read). The last type of read request could not be processed in a procedure call environment, but needed to be handled in a message-based environment (deferred I/O manager) so that the user process could continue running even though the read request had not been completely processed.

It became increasingly clear when we tried to tie the structured analysis fragments together that not taking the architecture into consideration was a problem. The goal of the redesign was to improve the performance while maintaining the same level of functionality. We had captured the functionality of the driver in our fragments based on type of I/O. However, each fragment contained fast paths (fast in terms of number of instructions; the fast path could block on a resource) and slower paths (required the message-based interface, which is much slower than a procedure call interface). It was difficult to figure out how to combine all the fast paths, which were spread out across many fragments. This is where one major difficulty with structured analysis arose: how to relate the functional specification to the architecture. The architecture had been selected as the best way to achieve the performance goals. We felt that to create the specification without consideration of the architecture would make the design phase more difficult. We stopped structured analysis work for awhile, and concentrated on completing the architecture. Hatley and Pirbhai helped us resolve the architecture-versus-specification dilemma.



Viewing the system from the data point of view carried over into our parallel architecture discussions. At one point, we physically simulated data flowing through the driver using pens and erasers as data and people as modules. This helped us visualize the interface operation and problems that arise from a mixed procedural and message-based environment.

For complex areas, we used existing code wherever possible to derive decision trees and state transition diagrams. As we became more comfortable with structured analysis, we were able to assign work to each member. Material was reviewed and discussed at project meetings.

One aspect of structured analysis is the iterative nature of the method. One makes a first pass at the data flow diagram, and then discards or revises it until satisfied. Once we felt satisfied with our architecture, we set aside our old structured analysis work and started again. This time our approach was to use structured analysis to specify the system given the architecture instead of specifying the system independent of the architecture. We felt this was necessary to meet our performance goals and to help clarify the interfaces. Where before we based the specification on the type of I/O, this time we based the specification on fast (able to complete within the driver), deferred (needs operating system help to complete), or inbound paths. Using the data interviewing technique, we started with the context diagram (Fig. 6) and the level 1 and 2 diagrams (Figs. 7 and 8). The level 1 diagram has three processes: DIOM, FIOM, and IIOM, which make up the redesigned driver. The level 2 diagrams (of which Fig. 8 is an example) have a process for each layer of the architecture and some general utility processes. After these diagrams were done, we broke the work up by process. We were able to use many of the diagrams from the earlier structured analysis work.

When reviewing data flow diagrams and process specifications, issues, questions, and problems were easy to detect. They tended to stand out on the diagrams. It was easy to see if data was missing or wrong or hadn't been initialized. For example, in the finish_read process specification (Fig. 4), unused_buffer was a data flow out of the process but the buffer wasn't freed in the first draft of the specification.





Fig. 7. Driver level 1 diagram showing the three major modules: the fast I/O manager FIOM, the deferred I/O manager DIOM, and the inbound I/O manager IIOM.



We kept a list of issues and questions and their resolutions during the structured analysis process.

A month before the external specification was due, we stopped structured analysis work and concentrated on writing the external specification document. Much of it was taken from existing documents since our upper and lower interfaces didn't change. The top-level structured analysis diagrams (see Fig. 6) were used to determine interfaces and to help define the TIO support and operating system changes that the driver required.

Originally, we did not have an internal specification in our plans. However, the internal design was appropriate for the structured design but not the analysis, and we needed a document for the structured analysis work. Therefore, we split the design period into two periods and added an internal specification document.







. HUM level V, 'liagrnm 

a brief introduction about the context diagram and descrip- 
tions of the interfaces. The rest of tin- document was gener- 
alecl using Teamwork, a software loot for strucnired analysis 
and structured design from Cadre Technologies, Inc. 

Structured Analysis Recommendations 

Data flow diagrams generally "feel right or wrong." Tom DeMarco 1 encourages engineers to throw away diagrams several times. The use of structured analysis to specify a system naturally raises the questions that need to be answered about the product.

If we had to do it all over again, we would have resolved the architecture-versus-structured-analysis problem earlier, and not tried to do structured analysis without considering the architecture of the system. We did not use the approach outlined by Hatley and Pirbhai, 5 but we did use something related to it. There is a fine line between considering the architecture and including design in the specification. Because this was a redesign and performance was important, we had already analyzed the original driver to find opportunities for shortening the path lengths. These opportunities needed to be incorporated into the design. We didn't know how to do that at the design phase if they weren't included in the specification as well, so we put the architecture in at the top levels of the specification (fast I/O manager, deferred I/O manager, inbound I/O manager).

We should have spent more time keeping the data dictionary entries up to date. All through the project, the lack of attention paid to data definitions was a major failing. The data needs to be defined as the diagrams and process specifications are created. Data dictionary entries that were related to data structures in the original driver were easy. However, we did not always type in the complete definitions. We did not document new data dictionary entries rigorously and this weakened the entire data dictionary. We would have also planned time for the internal specification, its inspections, and the rework in the schedule.

Fig. 9. FIOM structure chart.

Structured Design Overview 

Design is the process of transforming the specification of what must be done into the plan of how it will be done. Classical design produces a narrative document with some graphics. It starts with the procedural characteristics. Information is often repeated throughout the design document, and the document is generally not specific enough. This results in some design improvisation during implementation.

Structured design introduces structure and graphics into the design process to cope with the hugeness of the system. The goal of structured design is a highly maintainable, comprehensible, and easily tested top-down design. The system is partitioned into components which interact to achieve the functionality of the system.

To create the structured design, the data flow diagram processes are grouped into processes that deal with inputs, deal with outputs, transform inputs into outputs, or handle transactions between inputs and outputs. Modules are then created from these groups. Evaluation and refinement techniques (called cohesion, coupling, and packaging) complete the building of the documents. The structure chart, design dictionary, and module specifications are the documents produced through this process. A structure chart shows the basic components (modules) and their interfaces in a top-down graphical manner. The design dictionary defines the interfaces. It uses similar language to the data dictionary of structured analysis. The module specifications define the procedural part of the design and the sequence of interactions. Each module specification describes what part of the specification is being satisfied, what the module needs to communicate, and how it performs the function. A module specification is written in structured English, as a decision tree or table, or as a state transition diagram.

Fig. 9 shows the first structure chart for the fast I/O manager. There are a main module, some utility modules (e.g., UT_get_s1), and device-specific modules (e.g., FIOM_handle_terminal_reqs) which eventually led to the FIOM_logical_level module. Fig. 10 is the module specification for the fast I/O manager module, FIOM. It shows the sequence in which the other modules are called. Design dictionary entries look similar to data dictionary entries.
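As a hypothetical illustration (the paper does not reproduce an actual entry), a design dictionary entry for the argument list passed to FIOM_main might read, in the DeMarco-style notation that Teamwork supports:

    SM_arg_list = sm_generic_parms + [sm_read_parms | sm_write_parms | sm_control_parms]

Here = means "is composed of," + indicates sequence, and [ | ] indicates selection of one alternative. Only sm_generic_parms and sm_control_parms actually appear in Fig. 10; the remaining component names are invented for the example.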

The Project and Structured Design 

Once we had finished the internal specification, it was unclear how to derive the structured design from it. We decided on the following approach. Each structured analysis process was turned into a module. A hierarchical structure was developed for each level by "promoting a boss" or "hiring a boss" module when necessary. Comparing the fast I/O manager data flow diagram (Fig. 8) with the fast I/O manager structure chart (Fig. 9), one can see that the FIOM_main module is "hired as a boss" (created) to handle semaphores and to call the FIOM_handle_terminal_reqs and FIOM_handle_printer_reqs modules.†

When the rough drafts of the structure charts were ready, they were entered into the Teamwork tool. The team then determined what each module does and knows. The next step was to walk through the design and be sure it works. This involved tracing each I/O through the design to be sure


† Note: Some of the process names are capitalized or spelled differently in Figs. 8, 9, 10, and 11 and in the text of this paper. Some names did change between the structured analysis document and the structured design document. In the module specification, Fig. 10, the engineer chose to capitalize the function codes, request types, and procedures called. In Fig. 11, the names are capitalized because the coding convention that was followed capitalized procedure calls. The MPE/iX operating system is case insensitive, so the differences are insignificant.






NAME:       FIOM_main
TITLE:      FIOM main
PARAMETERS: SM_CB
            SM_arg_list
            return_status
LOCALS:     request
            status
GLOBALS:
BODY:

  Calling parms:
    SM_CB       - pointer to SM control block
    SM_arg_list - pointer to storage manager arguments

  TIO_CB := SM_CB^.lwrt_cb;   * used to be last write control block *

  case SM_arg_list^.sm_generic_parms.func_code of
    SM_READ_FN:
      request := READ_REQ;
    SM_WRITE_FN:
      request := WRITE_REQ;
    SM_CONTROL_FN:
      if (SM_arg_list^.sm_control_parms.terminal_control <> TC_QUIESCE_IO)
        request := TC_REQ;
      else
        request := QIO_REQ;
    SM_DEVCONTROL_FN:
      request := TC_REQ;

  * since this is the FIOM, and FIOM only handles waited I/O, always block *
  * if necessary to get semaphores *

  status := UT_GET_S1 (FIOM, TRUE);
  if (status <> TRUE)
    * error, log and exit *;

  status := UT_GET_S2 (FIOM, TRUE);
  if (status <> TRUE)
    * error, log and exit *;

  if (TIO_CB^.device_type = * terminal *)
    status := FIOM_HANDLE_TERMINAL_REQS (request, SM_arg_list);
  else
    status := FIOM_HANDLE_PRINTER_REQS (request, SM_arg_list);

  return_status := MAP_TO_OS_STATUS (status);
  UT_FREE_SEMAPHORES;

Fig. 10. FIOM module specification.



begin  { tio_fiom }
  try
    begin  { Try section }
      tio_cb := tio_cb_ptr_type( fiom_cb_ptr );
      l_index := 0;

      { The FIOM only handles blocked I/O; always block }
      { if necessary to obtain semaphores               }

      if ( CUT_GET_S1(tio_cb, Fiom) ) then
        if ( CUT_GET_S2(tio_cb, Fiom) ) then
          if ( not(tio_cb^.device_type = printer) ) then
            { terminal }
            status := FIOM_HANDLE_TERMINAL_REQUESTS
                        ( sm_arg_list_ptr(arg_list) )
          else
            { printer }
            status := FIOM_HANDLE_PRINTER_REQUESTS
                        ( psm_parm_ptr(arg_list) )
        else
          begin
            {**LOG S2 error**}
            status := Bad_status;
            l_tio_status.int_status.status_code := Internal_err;
            l_tio_status.int_status.layer := Main_layer;
            l_tio_status.int_status.proc_number := Pn_tio_fiom;
            l_tio_status.int_status.location := 0;
            l_tio_status.int_status.llio_flag := False;
            l_tio_status.ext_status_hpe := status;
            CUT_LOGMSG ( tio_cb,
                         l_tio_status,
                         Main_layer,
                         Fiom
                       );
          end
      else
        begin
          {**LOG S1 error**}
          status := Bad_status;
          l_tio_status.int_status.status_code := Internal_err;
          l_tio_status.int_status.layer := Main_layer;
          l_tio_status.int_status.proc_number := Pn_tio_fiom;
          l_tio_status.int_status.location := 1;
          l_tio_status.int_status.llio_flag := False;
          l_tio_status.ext_status_hpe := status;
          CUT_LOGMSG ( tio_cb,
                       l_tio_status,
                       Main_layer,
                       Fiom
                     );
        end;

      CUT_FREE_S2(tio_cb);
      CUT_FREE_S1(tio_cb);
    end;  { Try section }
  recover
    begin  { Recover section }

that the right information was available to the modules, and that the modules were doing the right things. During the entire process, modules were collapsed when it was reasonable. We did not do much evaluation of the interfaces (data coupling or cohesion) because of a lack of time. The internal design document contained only Teamwork structure charts and module specifications.

After the internal design was inspected and the rework completed, we worked on defining the procedure declarations, data structures, software configuration (file structure), and coding standards. The module specifications were the basis for the Pascal procedures. Coding was mainly a matter of converting pseudocode to Pascal. Fig. 11 is the outer block of the fast I/O manager module. Comparing this to the fast I/O manager module specification, Fig. 10, one can see how closely related they are. The code adds more detail to the module specification framework. (Fig. 10 also illustrates a problem we had with module specifications: many of them



      case ESCAPECODE of
        0: ;
        otherwise
          begin
            l_tio_status.int_status.status_code := External_err;
            l_tio_status.int_status.layer := Main_layer;
            l_tio_status.int_status.proc_number := Pn_tio_fiom;
            l_tio_status.int_status.location := 2;
            l_tio_status.int_status.llio_flag := False;
            l_tio_status.ext_status_hpe := hpe_status(ESCAPECODE);
            CUT_LOGMSG ( tio_cb,
                         l_tio_status,
                         Main_layer,
                         Fiom
                       );
          end;
      end;
    end;  { Recover section }
end;  { tio_fiom }

Fig. 11. FIOM code.






were too code-like. The case statement is an unnecessary level of detail, and is actually not carried through into the code since the arg_list is passed to the HANDLE procedures.) Analysis of the code shows a 33% reduction in code size compared to the original driver. The reduction comes from the reuse of procedures as a result of structured design, and from a structure that allows common routines to be shared between the DIOM, FIOM, and IIOM modules instead of requiring one copy per module. In the original driver, there was a logical device manager for terminals and one for serial printers. In the redesigned driver, the fast I/O manager and deferred I/O manager are able to handle both terminals and serial printers.

Structured Design Recommendations 

In general, not enough attention was paid to defining design dictionary entries or data structures. Structured design added the next level of detail to the design from the specification. It was easy to develop the design from the specification once we had our approach. We occasionally created module specifications that are too much like code. Module specifications are not meant to handle the small details that are best suited to coding, but to define how the function is performed. If too much attention is paid to code-like details, less is paid to how the specification is to be implemented.

If we had to do it all over again, we would have paid more attention to the interfaces and better defined the data passing between modules. Since we were using a lot of existing interfaces, we did not put the time into the data or design dictionaries. This meant that the data elements were poorly defined, since we tended to assume that everyone knew how the data was defined. We would also have developed a module specification standard to create consistency and avoid variations in the degree of code-like English in the internal design.

Inspections 

The internal specification required some basic structured analysis training for the inspections. This was because of the graphical nature of the diagrams and the decision structures. The internal design (structured design document) was easier to understand since it consisted of structure charts, pseudocode, and data descriptions. Inspections were done by a depth walk through the processes and hence concentrated on architectural levels as opposed to interfaces between levels. The inspections did an excellent job of checking for functionality and design flaws, but were weak in the area of interface checking.

The original driver internal design documents were narrative with some state transition diagrams. They were much shorter than the internal specification and internal design created with structured methods. Past reviews of project documentation (external specification, internal specification, internal design) took place at one 2-to-4-hour meeting per document. Since the structured documents were larger and contained more detail, it took about 300 engineer-hours to inspect or review them. The participants felt that the reviews of the internal specification and internal design (structured analysis and structured design documents) were very valuable. The material was much easier to inspect for completeness, correctness, and functionality than the narrative internal specification and internal design documents.

Schedule 

There is a concern that using structured analysis and structured design has a large impact on the schedule. The negative impact of structured analysis and structured design on our schedule consisted of training time (two weeks), time spent becoming familiar with structured analysis (one month of intermittent structured analysis), and longer inspections. A positive result of using structured analysis and structured design is that each step is a progression from the last. Each document is built on the previous document, unlike the classical design in which each document is not so closely tied to earlier ones. The effort required at each phase was less than the one before, and was mainly directed at adding more details. The positive impact of structured analysis and structured design on the schedule was shorter test cycles. Nonregression tests were passed ahead of schedule.

Results 

The redesign project improved block mode performance and achieved a greater than 30% reduction in the terminal driver path length. The code passed all functional tests including 24-hour and 72-hour nonregression tests. 75% of the testing errors were coding errors. The majority of the remaining defects were found in areas that were known to be weak from the design and inspection phases. The engineers valued learning structured methods and tools both for the quality and completeness of the design and the acquired skill set. The project team believes that the design of the new HP 3000 MPE/iX terminal and serial printer driver would not have been accomplished in the same amount of time with the same amount of thoroughness by the traditional design techniques.

Acknowledgments 

I would like to thank the project engineers, Jeff Handle, Jon Buck, Larry White, and Erik Ness, for their hard work and team spirit, with special thanks to Jeff for his help with the figures for this paper. It was a pleasure to work with them as project lead engineer. Without them, this paper on our success with structured analysis and structured design would not have been possible. I would also like to thank the project manager, Celeste Ross, for her willingness to try something new and for her support, and the inspectors, reviewers, and moderators for their much appreciated time and effort.

References 

1. T. DeMarco, Structured Analysis and System Specification, Prentice-Hall, Inc., 1979.

2. M. Page-Jones, The Practical Guide to Structured Systems Design, Yourdon Inc., 1980.

3. Grenoble Networks Division Product Life Cycle Guidelines, Hewlett-Packard, April 1991.

4. R. Merckling, Structured Analysis/Real-Time and Structured Design/Real-Time Class Notes, 1991.

5. D. Hatley and I.A. Pirbhai, Strategies for Real-Time System Specification, Dorset House, 1988.






Data-Driven Test Systems 



In a data-driven test system, all product-specific information is stored in 
files. Within a product classification, the test software contains no 
product-specific information and does not have to be changed to test a 
new product. This concept lowers new product introduction costs. 

by Adele S. Landis



In today's competitive marketplace, new products must be designed, manufactured, and delivered to customers in an effective and timely manner. This is the only way to maintain or create a desirable market share. At the HP Santa Rosa Systems Division (SRSD), the introduction of a new product from the research and development laboratory into a production environment is called the new product introduction cycle. This introduction cycle can be lengthy and expensive. One of the most critical and time-consuming phases in the new product introduction cycle is the development of the electrical test process.

The electrical test process is that part of the manufacturing cycle that verifies that the product meets published customer specifications. The electrical test process is a combination of manufacturing resources, including:
• Test methods for doing each test that verifies a product's specifications
• Software that properly implements the tests
• The equipment necessary to perform the test methods
• Measurement integrity, which includes traceability, error analysis, and calibrations
• Documentation, including test procedures, test support documents, and the like
• Training of manufacturing personnel.

Fig. 1 shows an average distribution of the time spent on these resources for a group of electrical test processes recently developed at SRSD. The software portion of the process tends to be high because the division is moving towards completely automated testing.

Analysis has repeatedly shown that designing the test software by product classification rather than by individual products can result in large cost and time savings during the development process. A product classification can be defined as a group of products that have similar block diagrams and require the same type of testing. Some of the different product classifications that are manufactured at SRSD are network analyzers, spectrum analyzers, scalar analyzers, oscillators, amplifiers, and mixers.

Historically, production test software was written for and dictated by specific products or models. When test software was required for a new product or follow-on model, the test software was implemented or leveraged in a product-specific manner as shown in Fig. 2. This approach required continual introduction and/or reworking of the test software. The greater the number and variety of new products, the more time and software were needed. Also, this approach required that an individual with software expertise be assigned for every cycle of new products that emerged from the lab.

Fig. 1. Average proportions of time spent developing various manufacturing resources for a group of electrical test processes: software 46.7%, documentation 22.2%, test methods 16.7%, measurement integrity 5.6%, hardware 5.6%, and training 3.3%. Software dominates because the division is moving towards completely automated testing.

Data-Driven Software 

Continued analysis has shown that by designing test software to handle large variations within a product classification, the overall development time of the electrical test process can be drastically reduced for all products within a classification. This greatly benefits both the manufacturer and the customer.

A study of a large group of SRSD test programs across different product classifications was recently completed. The results indicate that all test software consists of certain types of tasks, including test equipment setup, calibration setup, operator setup, measurement procedures, and data collection.

Fig. 2. Product-specific test software requires introduction or reworking of the software for each new product.

Fig. 3. In data-driven software, product-specific information is removed from the test software and stored in files. The software doesn't have to be changed to test a new product within the product classification. The figure shows product-specific files for three power amplifier models (5086-7621, 5086-5781, and 5086-7153) containing entries such as OPERATOR SETUP = "Connect the AMP Input to Source", INPUT POWER LVL = "10", FREQUENCY PNT 1 = "100", FREQUENCY PNT 2 = "300", and FREQUENCY PNT 3 = "1.00E+3", alongside the common information that remains in the software.

Regardless of which of these tasks are required for specification testing, all of these tasks can be broken down into two basic information types: common and specific. Common information is common to all products within the product classification. Specific information is specific to the individual products of that classification. Removing the specific information and placing it in external files leaves the common information to reside in the software (Fig. 3).

The separation of these information types results in static test software and dynamic product-specific files. The static test software contains the knowledge of the specific information required from the test's associated external file, the location of this file, how to load and read the file contents, proper measurement implementation, and the needed data collection routine. Since the test knows what information is required from its external specific file (but not necessarily the exact values), this file is considered defined.

Test software that knows how to make the measurement, while the external files know where to make the measurement, can be called data-driven software.
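To make the division between static software and defined files concrete, the following sketch shows one way such a file could be read. It is illustrative only: the article does not state what language or file format SRSD used, the keys are simply those shown in Fig. 3, and the file name and helper routines are hypothetical.

    # Hypothetical sketch only; not SRSD's actual implementation.
    def load_defined_file(path):
        """Read a product-specific file of KEY = "value" lines into a dictionary."""
        params = {}
        with open(path) as f:
            for line in f:
                if "=" in line:
                    key, value = line.split("=", 1)
                    params[key.strip()] = value.strip().strip('"')
        return params

    # Stubs standing in for real operator-prompt and instrument-control routines.
    def prompt_operator(message):
        print("OPERATOR:", message)

    def set_source_power(level_dbm):
        print("Setting source power to", level_dbm)

    # The static test software asks only for the keys it has defined; the values
    # (and therefore the product) can change without touching this code.
    params = load_defined_file("5086-7621.dat")      # hypothetical file name
    prompt_operator(params["OPERATOR SETUP"])
    set_source_power(float(params["INPUT POWER LVL"]))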

Since data-driven test software is static, this method allows total test software reuse. Any product variation is handled in the external files. The building of the specific files can be automated easily since the type of information within these files has been previously defined by the test software. The specific file building is automated using a master form of the defined file and an editor. The editor only allows value entry. After the new values are entered into the master form, the product-specific file is stored. This approach, which does not require any software expertise, is extremely fast for new product setup.

A working example of data-driven software is an amplifier power test (see Fig. 4). The common information in this test, depicted by the software flow chart in Fig. 4, shows the tasks that are common to all amplifiers needing this type of test.



Fig. 4. Data-driven amplifier power test. The common software flow reads the product-specific files for the specified information, sets up the test equipment, sets the source to the required power level and frequency, measures the power out, and collects the measured data. The product-specific files shown are for three amplifier models (5086-5991 with mod, 5086-5886, and 5086-5879 with splitter) and contain entries such as OPERATOR SETUP = "Connect the AMP Input to source using 10 dB pad", INPUT POWER LVL = "6", and FREQUENCY PNT 1 = "500" through FREQUENCY PNT 5 = "5.00E+3".






Fig. 5. Like data-driven test software, the data-driven test executive is made up of common information (the test executive tasks) and specific information (product-specific files).

The product-specific files contain the product measurement parameters and values required by the program to make the measurement. This same test software can be reused for any amplifier by building a product-specific file that contains the amplifier's measurement parameters.
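As a rough illustration of the flow in Fig. 4 (not SRSD's actual code), the static measurement loop might look like the sketch below. The parameter names are those in the Fig. 4 files; the instrument-control arguments are hypothetical stand-ins for real drivers.

    # Hypothetical sketch of the common amplifier power test of Fig. 4.
    def amplifier_power_test(params, set_power, set_frequency, measure_power):
        """Run the static test flow; everything product-specific comes from params."""
        results = []
        set_power(float(params["INPUT POWER LVL"]))
        point = 1
        # Step through however many frequency points the product file supplies.
        while "FREQUENCY PNT %d" % point in params:
            freq = float(params["FREQUENCY PNT %d" % point])
            set_frequency(freq)
            results.append((freq, measure_power()))
            point += 1
        return results

    # Example use with dummy instrument functions standing in for real drivers.
    demo_params = {"INPUT POWER LVL": "6",
                   "FREQUENCY PNT 1": "500",
                   "FREQUENCY PNT 2": "800"}
    data = amplifier_power_test(demo_params,
                                set_power=lambda dbm: None,
                                set_frequency=lambda freq: None,
                                measure_power=lambda: 0.0)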

Test Executive 

All manufacturers that are concerned with product test data need some form of a test executive. A test executive is a software package that handles the administrative and management responsibilities associated with test data. Even though the duties of the many test executives available vary depending upon user needs, all test executives have the same basic tasks as shown in Fig. 5.

Like the test software, test executive software is made up of common and specific information. By applying the data-driven theory to the test executive software, a data-driven test executive results. The test executive software then becomes static and any varying product information required by the test executive is available from external product-specific files. The building of these files can also be automated using master files and an editor.
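The same pattern can be sketched for the executive itself. The file format and names below are invented for illustration (the article does not describe SRSD's executive files); the point is only that the executive code stays static while an external file names the tests and parameter files for each product.

    # Hypothetical sketch of a data-driven test executive loop.
    def run_product(test_list_path, test_library, load_params):
        """Run every test named in a product's test-list file.

        test_library maps test names to static, reusable test routines;
        load_params reads a product-specific parameter file into a dictionary.
        """
        results = {}
        with open(test_list_path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue                      # skip blanks and comments
                test_name, param_file = line.split()
                test = test_library[test_name]    # common information: the test code
                params = load_params(param_file)  # specific information: the values
                results[test_name] = test(params)
        return results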

The combination of data-driven test software and a data-driven test executive is a data-driven test system. The major benefit of a data-driven test system is that all the software within the test system is static. This benefit reduces the expensive and time-consuming cycle of product-specific software regeneration across a product classification. For test methods required by a new product that are not available in an existing data-driven system, the test method and associated test software must be developed. If the new tests are developed using the data-driven theory, then the tests will become part of the system's reusable tests and will be available for the next new product. Other key benefits are test system support and the automation of test system expansion.

Expansion 

A data-driven test system can be supported by a system administrator who resides on the production line where the test system is used. The system administrator can be a high-level technician who has some measurement and test equipment knowledge. Because the test system software is static, this individual does not need to have any software expertise. The system administrator's main duty is test system expansion.

Fig. 6. Data-driven test system expansion.

Test system expansion is the process of adding a new product to a test system that already exists in the manufacturing environment. This process, whether for the traditional product-specific test system or a data-driven test system, can be broken down into two steps: adding the product into an existing test system and the initial product testing.

In the traditional product-specific test system, product addition takes approximately one day to eight weeks. This time frame is directly proportional to the amount of product-specific values that are imbedded in the test system software. Product addition in this type of test system must be performed by individuals with test software and test executive expertise. The time for product addition in a data-driven test system is approximately one hour and can be accomplished by individuals who have no test software or test executive knowledge.

Expansion of a data-driven test system can be accomplished easily and rapidly with a programmatic procedure. The system expansion process is shown in Fig. 6.

The system administrator receives a new product and its specifications. Using a system expansion program, a set of master specific files, and an editor, the system administrator builds all the necessary external product-specific files required by the test executive and test programs.

Initial Testing 

After a product is added into a test system, the initial product testing must be done. This first testing is where the software used to measure the product specifications is verified to be working properly. In the product-specific test systems, this initial testing can be a very tedious process as shown in Fig. 7.

When the new product is measured for the first time and the measured values are not satisfactory or the test system stops because of a programmatic error, the software must be loaded into the computer, modified (debugged) and restored, and the test system reloaded. Then the product is measured again. Even though this process can be made faster using two computers, one for performing the actual measurement and the other for software debugging, initial product testing is time-consuming and has to be performed by individuals with software expertise.

Fig. 7. Product-specific initial product testing.

The initial product testing on a data-driven system is much easier, as shown in Fig. 8. After the product is added into a data-driven test system, the system administrator performs the initial product testing. If the measured values are not correct, any changes required are made to the external (product-specific) files through an editor immediately accessible from the keyboard. Once the system administrator is satisfied with the measurement, the product is measured a final time and the data is presented to the appropriate engineer for review.

This feature of a data-driven test system, which shifts the laborious work of measurement debugging from the test software to the product-specific files, is extremely beneficial. It not only allows nonengineers to perform the initial testing, but also, since proven test methods are being used, the overall initial testing process is very short.

Adding Test Stations 

All test systems, whether old or new, must be able to handle a variety of test stations. In the traditional product-specific test system, the system contains test-station-specific values imbedded in the software. With these station-specific values in the software, adding stations with extended capabilities or station duplication can be very time-consuming and a software expert is needed. This problem doesn't exist in a data-driven test system since the data-driven test system is station independent. These systems not only have the ability to handle numerous test stations with different measurement capabilities, but they also allow for test station duplication whenever production capacity is exceeded.

Fig. 8. Data-driven initial product testing.

For extending a data-driven test system's test station capability, the system administrator builds the new station hardware, assigns the station a new station number and/or name, and enters this information into the appropriate test system's station table.
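The station table format is not shown in the article; as a purely hypothetical illustration, it could be as simple as one line per station, so that bringing up an additional scalar analyzer station amounts to adding a row through the editor:

    # station  name          description
    1          NA_40GHZ      40-GHz network analyzer, s-parameters
    2          SNA_26.5GHZ   26.5-GHz scalar network analyzer
    3          SNA_50GHZ     50-GHz scalar network analyzer (newly added)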

Using this data-driven approach, SRSD has designed two complete test software systems as a pilot program. These data-driven test systems focus on measurement types within a product classification. These systems contain a set of well-designed tests that perform typical measurements required by a product classification.

Mixed-Model Amplifier Test System 

The mixed-model amplifier test system was the first system to validate the data-driven software theory. The system performs a set of basic tests that are required for complete amplifier characterization. The system has five different test stations: two network analyzer stations (40-GHz and 50-GHz frequency ranges) for complete s-parameter measurements and three scalar network analyzer stations (26.5-GHz, 50-GHz, and high-power 26.5-GHz frequency ranges).

This system began with two test stations with specific measurement capabilities. With the addition of more products, the need for stations with extended measurement capabilities arose. Since the data-driven system software is independent of test station equipment, we were able to add three additional stations with extended measurement capabilities without modifying any of the test system software.

This test system now provides complete characterization for over fifty separate products within the amplifier classification. Whenever necessary, the test system is expanded to incorporate new products by the system administrator. The total average product addition time is approximately three hours per product. The test process development time and expenses for these products have been greatly reduced.

For comparison, a product-dedicated test system used in microelectronic manufacturing was expanded for a new follow-on product. This system, which used some data-driven test software, still required one week of engineering time for complete system expansion because of the product-specific information imbedded throughout the test system software. Expanding this system for another product required three weeks of engineering/software time since the original tests could not be reused. This type of costly regeneration cycle will repeat itself again and again unless a data-driven test system replaces the current system.

The mixed-model amplifier test system design and set of tests required 487 days of software development time. Even though the initial test system design was very expensive, huge savings occur each time a product is added to the test system. Test system expansion for a data-driven test system costs approximately 60 to 100 times less than the same expansion of a product-specific system.

The break-even point on the mixed-model amplifier test system was found to be four products, while the number of products tested by the system is more than 50. The return on investment of the data-driven test system shows that this type of test system is necessary to stay competitive in today's marketplace.

Vector Network Analyzer Test System 

More recently, a data-driven test system was designed and implemented for a family of network analyzers. The complexity of instrument testing dictated that the data-driven test system be limited to a product family instead of a product classification. The system is currently testing five products. The new product introduction engineer estimated software generation time saved was approximately 600 engineering hours. More time would have been saved except that additional tests were required. These new tests were developed and added to the system in the data-driven format and are now part of the set of tests to be used by future family products.



In summary, by designing test software to handle large variations within a product classification, the overall manufacturing resources required for the electrical test development process are reduced over an entire product classification. This enormous savings occurs because the continual introduction or reworking of new product test software is eliminated.

Acknowledgments 

I would like to thank Claude Ashen for his support and guidance on structured analysis and system design, Steve Waite for his futuristic vision and at times blind faith that the test system would be completed in time to ship his products, and Wes Ponick for his enormous help in putting this project into words.



Authors 



August 1994 



6 Scientific Graphing Calculator 

Diana K. Byrne 

With HP since 1988. Diana 
Byrne is an R&D project man- 
ager for calculators at the 
Corvallis Division She was 
born in Portland, Oregon and 
received a BS degree in 
mathematics from Portland 
State University in 1982, an 
MS degree in mathematics 
from the University of Oregon in 1987. and an MS de- 
gree in computer science from the same university in 
1988 On her first HP project, she developed software 





for plotting, graphics, and the EquationWriter for the 
HP 48SX calculator She was R&D project manager 
for the HP 48G/GX calculator software and hardware 
She is coauthor of a 1 991 HP Journal article on HP 
48SX interfaces and applications. Diana has two 
sons. Her favorite leisure activities are bicycling and 
reading. 

Charles M. Patton

Software scientist Charlie Patton received a BA degree in mathematics and physics from Princeton University in 1972 and MA and PhD degrees in mathematics from the State University of New York in 1974 and 1977. He joined the HP Corvallis Division in 1982 as a software R&D engineer and has contributed to the development of the mathematics ROMs for the HP 75, HP 71B, HP 28C/S, and HP 48S/SX calculators. For the HP 48G/GX calculators, he worked on the RPL system and the 3D graphing routines, and was a general consultant and troubleshooter. He is currently working on several projects in the areas of software research, computer visualization, and symbolic and numeric techniques. He's also coauthoring a calculus textbook that incorporates technology as a teaching tool. He's named an inventor in three patents on operating systems and symbolic computation methods for handheld machines, and is the author or coauthor of technical papers on general relativity, representation theory, handheld computation, and calculus. He is a member of the American Mathematical Society, the American Association for the Advancement of Science, and the Internet Society. His outside interests include birdwatching, native plants, rafting and canoeing, fishing, gardening, Celtic harp, concertina, and calligraphy. A long-time HP telecommuter, he is also involved in remote sensing and internetworking.




David Arnett

David Arnett is a hardware design engineer at the Corvallis Division and has been with HP since 1991. He designed hardware and was the manufacturing liaison for the HP 48G/GX calculators. Currently, he designs hardware for the HP OmniBook product line. David was born in Cleveland, Ohio and attended Brigham Young University, from which he received a BSEE degree in 1989. He worked on avionics design at General Dynamics and on superconductivity research at Oregon State University before joining HP. He's a member of the IEEE. David is married, has two children, and enjoys music, both as a performer and as a conductor.

Ted W. Beers

Software R&D engineer Ted Beers came to HP's Corvallis Division in 1985 and has contributed to the development of the HP 48S and the HP 28C/S calculators. His work on the HP 48S includes the interactive stack, high-level display management, and user customization. He worked on user memory organization for the HP 28S and performed extensive testing on the HP 28C. For the HP 48G/GX calculators, he was responsible for the user interface elements. His work on a software technique for data and text entry has resulted in a patent, and he is the author or coauthor of three other technical articles, one written while he was in high school. Ted was born in West Lafayette, Indiana and received a BS degree in computer and electrical engineering from Purdue University in 1984. He's married and enjoys hiking with his wife and two beagles. He's also interested in gardening, house design, home improvement, philosophy, and the environment.

Paul J. McClellan

A software engineer at the Corvallis Division since 1979, Paul McClellan has developed software for several HP calculator families including the HP 15, HP 71, HP 28, and HP 48. He also implemented and maintained user interface software for OSF/Motif. For the HP 48G/GX calculators, he was responsible for design and implementation of new numerical functionality. He's currently working on software for future products. He is named an inventor in two patents related to calculator development and is a coauthor of several HP Journal articles. He's also a member of the IEEE and the Society for Industrial and Applied Mathematics. Born in Salem, Oregon, Paul received a BS degree in mathematics and physics from the University of Oregon in 1974 and a PhD in statistics from Oregon State University in 1984. He also has a degree in computer science from the National Technological University (1991). He worked at Tektronix before joining HP. He is married and has two sons. His outside interests include mountain climbing, nordic and alpine skiing, and reading.

23 HP-PAC 



Johannes Mahn 

With HP since 1988, Johannes Mahn is a mechanical design engineer at the Böblingen Manufacturing Operation. He was the project manager responsible for design, development, and testing of the mechanical concept for HP-PAC. Earlier, he contributed to the mechanical design of the HP M1350A intrapartum fetal monitor and worked in process engineering for the electrical test area at the Böblingen printed circuit shop. He is named as a coinventor for two patents related to the HP-PAC chassis and casing. A native of Sindelfingen, Germany, he received a Diplom Ingenieur in precision mechanics from the Esslingen Engineering School in 1988. Johannes is married, has two children, and enjoys mountain biking, handball, and watercoloring.



Jürgen Haberle

Mechanical design engineer Jürgen Haberle was born in Sindelfingen, Germany and attended the University for Applied Science, from which he received a Diplom Ingenieur in precision mechanics in 1988. He joined the HP Böblingen Manufacturing Operation the same year and has been responsible for tool engineering, mechanical design, and project management. He worked on the concept and definition of HP-PAC as well as mechanical design and testing. Currently a project manager for HP-PAC, he is named as a coinventor for two patents related to the HP-PAC chassis and casing. Jürgen is married and likes tennis, snowboarding, biking, motorcycling, and traveling.





Siegfried Kopp 

Siegfried Kopp has been with the HP Böblingen Manufacturing Operation for fifteen years. He worked on materials engineering for the HP-PAC project. A toolmaker before joining HP, he received a diploma as a master toolmaker in 1986. At HP, he has supervised a tool shop and machine center and worked in materials engineering. He is named as a coinventor for three patents related to the HP-PAC chassis and casing and an STE fastening method for sheet metal. Siegfried was born in Stuttgart, Baden-Württemberg, Germany. He and his wife have a daughter and son. He likes the out-of-doors, especially biking and hiking.

Tim Schwegler 

Tim Schwegler was born in Aschaffenburg, Bayern, Germany and received a Diplom Ingenieur in precision mechanics from the Fachhochschule Karlsruhe in 1989. He joined HP's Böblingen Manufacturing Operation in 1989, where he was a materials engineer and since 1991 has been a project manager for HP-PAC. His responsibilities have included supplier development, agency contacts, and marketing. Currently, he's with the Entry Systems Division in Fort Collins, Colorado. He is named a coinventor for three patents related to an STE fastening method for sheet metal and the HP-PAC chassis and casing, and for two pending patents on a heat sink attachment method and a fan for a heat sink attachment. Tim is married and has a daughter and son. He likes outdoor activities, including biking, hiking, and skiing.



29 Eye Diagram Analysis 

Christopher M. Miller 

Chris Miller graduated with a BSEE degree from the University of California at Berkeley in 1975 and with an MSEE degree from the University of California at Los Angeles in 1978. In 1979, he joined the technical staff of Hewlett-Packard Laboratories in Palo Alto, California, where he worked on high-speed silicon bipolar and GaAs integrated circuits. Chris is currently an R&D project manager in the Lightwave Operation located in Santa Rosa, California. In addition to the HP 71501 eye diagram analyzer, some of the products his project teams have previously introduced include the HP 71400 and HP 83810 lightwave signal analyzers and the HP 11982 wide-bandwidth amplified lightwave converter. Born in Merced, California, he is married and has two sons. He enjoys running, weight training, and managing Little League baseball teams.

38 Thermal Management in SFC 

Connie Nathan 

At the Little Falls Operation of the Analytical Products Group, Connie Nathan was formerly a hardware development engineer on the second-generation HP supercritical fluid chromatography (SFC) system. Her responsibilities included the redesign of a flame ionization detector, the design of the interfaces of the GC oven to the pump module, and the pump module package design. After that project, she continued to work on SFC postrelease enhancements for two years. She is currently developing and implementing analytical product support plans. Born in Rochester, New York, she graduated in 1980 from the Massachusetts Institute of Technology with a BSME degree and in 1981 completed work for an MSME degree from Stanford University. Before joining HP in 1989, she worked in manufacturing and product design for Eastman Kodak and Mohawk Data Sciences. She is a member of the National Society of Black Engineers Alumni Extension and the Forum to Advance Minorities in Engineering. She has also served as a mentor to college women at the University of Delaware.

Barbara A. Hackbarth 

Barbara Hackbarth was born in Austin, Minnesota. She studied mechanical engineering at the University of Minnesota, from which she received a BSME degree, and joined HP the same year. A manufacturing development engineer for the Analytical Products Group in Little Falls, Delaware, her responsibilities have included manufacturing development for the HP 1050 Series and HP 1090 liquid chromatography product lines and bringing the HP G1205A supercritical fluid chromatograph into manufacturing production. She is currently involved with reliability and product improvements for the G1205A. She is a mentor in the University of Delaware women engineers mentoring program. She enjoys outdoor activities, travel, and many sports including tennis, golf, swimming, and skiing. She also plays on HP volleyball, golf, and soccer teams.

43 Linear Vascular Transducers 



Matthew G. Mooney 




With HP since 1988, Matt Mooney was the project leader for the image quality improvement project described in this issue. A transducer engineer at the Imaging Systems Business Unit, he has worked on thermal design, manufacturing process development, and acoustic design and product definition for several HP transducers, including the HP 21244, the HP 21246, the HP 21255A, the HP 21255B, and the HP 21258B. He's currently investigating new transducers for advanced ultrasound imaging applications. A native of Burlington, Massachusetts, he attended Worcester Polytechnic Institute (BSME 1988) and later received an MSME degree from Northeastern University (1992). He is a coauthor of one other technical article. Matt is married and has two sons, whom he cares for while his wife attends law school classes. His hobbies include weightlifting, skiing, golf, and making wooden toys.

Martha Grewe Wilson 

A transducer development engineer, Martha Wilson joined the Imaging Systems Business Unit in 1989. Her responsibilities have included developing a new backing material and a new interconnect scheme for a family of transducers. She has also worked on designs and fabrication processes for a pediatric transducer and three vascular transducers. She was responsible for the selection and qualification of the material used as the second matching layer for the transducer described in this issue. Martha was born in Minneapolis, Minnesota and received a BS degree in materials science engineering from Rensselaer Polytechnic Institute in 1986 and an MS degree in solid state science from Pennsylvania State University in 1989. She is a member of the American Ceramic Society, a coauthor of two published papers, and author of two papers for internal HP conferences. She's married and has an eight-month-old son. She likes running, swimming, skiing, and playing piano.





52 Driver Redesign 



Catherine L. Kilcrease 

Project manager Keti Kilcrease works at the HP Information Networks Division. She came to HP in 1985. A California native, she was born in Los Angeles and received a BS degree in biological science from the University of California at Davis in 1980 and an MS degree in computer science from California Polytechnic State University at San Luis Obispo in 1984. She designed a serial printer driver for the MPE/iX operating system on the HP 3000 computer system and enhanced and supported several modules in terminal and serial printer drivers. She was the technical lead engineer for the redesigned terminal and printer driver described in this issue and now manages a team of engineers who are developing processor-independent NetWare on the PA-RISC platform. She is the author or coauthor of two papers on structured analysis and design. Keti is married and has two children. She plays soccer, coaches in a youth soccer league, and enjoys spending time with her sons.



62 Data-Driven Test Systems 



Adele S. Landis 

Adele Landis joined HP in 1982 and is a software technician at the Network Test Division. Initially, she was a production line technician, working on a variety of network analyzers and accessories. Later, she developed test software for the HP 8720, HP 8753, and HP 8711 network analyzers, as well as for numerous circuits, sweepers, and antenna systems. She designed, developed, and implemented the test systems described in this issue. Adele was born in Arcata, California and has an AA degree from College of the Redwoods. She is studying for a degree in management from Sonoma State University. Her outside interests include running, bicycling, hiking, home improvement, and raising miniature horses.






HEWLETT-PACKARD JOURNAL

August 1994 Volume 45 • Number 4

Technical Information from the Laboratories of Hewlett-Packard Company

Hewlett-Packard Company, P.O. Box 51827, Palo Alto, California 94303-0724 U.S.A.

5962-0119E

