


Aerospace America

February, 1995

Technology transfer and commercialization


BYLINE: by Joel S. Greenberg, president, Princeton Synergetics

SECTION: TECHNOLOGY; Pg. 39

LENGTH: 2890 words

The goal of technology transfer is to see that technology -- in the form of hardware, software, patents, licenses, or
knowledge -- developed by one organization will be used by others. This involves creating an awareness of what is
available to be transferred and then increasing the likelihood that it will be utilized.

Technology transfer and commercialization go hand in hand. Together they produce opportunities for investment,
creating awareness and providing incentives by reducing required investment and lowering perceived risk. The private
sector generally refers to this as advertising and promotion. Advertising creates awareness; promotion, through
incentives, alters purchase/investment decisions. Because of these similarities, government policymakers can draw on a
large body of knowledge developed by the private sector to move technology and products to market.

Scope of activities
Government agencies, backed by legislative measures, have developed aggressive policies with respect to transferring
technology among themselves and to various industry segments. Legislation includes the Stevenson-Wydler
Technology Innovation Act of 1980; the Bayh-Dole Act of 1980, which authorized granting exclusive and partially
exclusive licenses; the Cooperative Research Act of 1984, permitting industry to form consortia; the Federal
Technology Transfer Act of 1986, making technology transfer a responsibility of all federal lab scientists; and the
National Competitiveness Technology Transfer Act of 1989. Mechanisms federal agencies have developed include:

*Cooperative agreements, entered into by the government with industry, universities, and others, to support or stimulate
research. They generally result in cost-shared research with the nonfederal participant.

*Cost-shared contracts/subcontracts, where government often agrees not to disseminate any commercially valuable
data for a limited period of time. Cost-shared contracts include in-cash and/or in-kind arrangements.

*Cooperative R&D agreements (CRADAs), which are agreements between government labs and nonfederal parties in
which both participants provide personnel, services, facilities, or equipment for specified R&D. The nonfederal parties
may also provide funds; no direct funding is furnished by the government lab. Rights to intellectual property are
negotiated between lab and participant, and certain data that are generated may be protected for up to five years.

*R&D consortia, in which funding may be shared, but usually no funds are exchanged between participants.

*Exchange programs, where the organization supplying the personnel bears the costs.

*Licensing, whereby less-than-ownership rights to intellectual property, such as a software copyright, are transferred to
permit use by the licensee. The potential licensee must present plans for commercialization.

*User facility agreements, allowing private parties to conduct R&D at government labs. For proprietary R&D, the lab
is paid for the full cost of the activity. Intellectual property rights go to the party conducting the R&D.

*Work for others, wherein proprietary work for an industry may be done by a government lab staff using lab facilities,
with full costs charged to the industry. Title to intellectual property generally belongs to the sponsor. Government
retains a nonexclusive, royalty-free license to such property.

Other activities, such as the National Institute of Standards and Technology's Advanced Technology Program and
ARPA's Technology Reinvestment Project, focus on commercialization of advanced technology. The key element is
that government agencies and industry try to identify common requirements and create cooperative programs to develop
the needed technology.

There are also myriad activities aimed at creating awareness. These include publications; workshops and other
conferences; preparation and distribution of strategic plans; and electronic bulletin boards. Other resources, in the form
of large databases, can put researchers in touch with the federal lab doing work in a given field or can allow those
seeking technology solutions to particular problems to search, via keyword, for applicable technologies and contacts.
These include the Federal Laboratory Consortium; the Federal Laboratories Database, developed by the Mid-Atlantic
Technology Transfer Center; databases maintained by the National Technology Transfer Center, such as Business Gold;
a directory of federal laboratory and technology resources published by NTIS; and Knowledge Express, a private
on-line technology service.

Other mechanisms also affect technology transfer. These include experiments and demonstrations (such as NASA's
Flight Experiments and Advanced Communications Technology Satellite, or ACTS, programs), and the setting of
standards in areas such as appliance efficiency and automobile mileage. Experiments and demonstrations aim to reduce
uncertainty and risk, which influence utilization and investment decisions. Setting of standards can encourage use of
new technology for purposes of meeting the standards (though this is not usually the major reason it is done). Normally
programs for creating awareness are aimed at the providers of products and services. Consumers can, however, through
their purchase decisions, force manufacturers to provide improved products that require use of advanced technology.
For example, DOE's appliance efficiency labeling program influences consumer choices by encouraging selection of
more energy-efficient products.

Thus, although the scope of technology transfer and commercialization activities is quite broad, the common intent is to
influence investment and utilization decisions. In addition, the customer base for technology is diverse, with respect to
both business sector and acceptable risk level. In NASA's case, potential users of technology include NASA
organizations, other government agencies, and industries. Each of these sectors plays a role in influencing decisions.

Assessments and evaluations


Through its investment decisions, government can provide incentives for achieving technology transfer. When it does
so, it is acting as an investment banker -- that is, it is allocating scarce resources and should strive to maximize resulting
benefits. It can be argued that government has a fiduciary responsibility to taxpayers in the same way that the
investment banker has responsibilities to shareholders (unfortunately, there is a significant difference in accountability).
Thus, there is a need to perform assessments and evaluations -- the former coming before the fact and the latter after.

Assessments provide guidance for establishing what mix of activities would bring about technology transfer (for
example, establishing the set of flight experiments that should be performed in order to reduce the technology user's
perceptions of potential risk). Evaluations measure actual performance, thereby providing information for improving
the overall technology process. Methods must be established for ensuring the maximum value is obtained from limited
budgets.

NASA's Flight Experiment and ACTS programs, though not normally considered part of its technology transfer
program, are actually two very important and expensive elements of that effort. The first provides an illustration of
assessment methods, the second of evaluation methods.

Flight experiments may be conducted for various reasons: to measure variables; to search for unknown phenomena,
whether detrimental or beneficial, so that they may be studied and taken into account; and to provide a convincing
demonstration of capabilities or system performance. In each case, the underlying purpose is to reduce uncertainty, in
the eyes of both the program engineers or other decision-makers who would use a new technology, and upper
management or investors who would approve or fund a program that uses it.

The value of an experiment (and the associated resources used for it) relates to how its outcome will affect decisions.
The selection of experiments should involve both qualitative and quantitative factors. The current selection process
relies primarily upon qualitative measures. To illustrate quantitative methods, two examples are presented, dealing with
mission-oriented research and engineering demonstrations.

Mission-oriented experiments
Mission-oriented experiments seek to reduce the uncertainty engineers have regarding some aspect of a system.
Without such experiments, engineers would feel unsure of whether a design would work, and would take measures to
guard against failure. Prior to an experiment there is significant uncertainty as to the performance level, x, that can be
achieved. The current state of knowledge (the a priori assessment as judged by "experts") is that the true value of x will
be in the range of A to B, with the indicated likelihood. In the absence of better knowledge, the design engineer will
normally be risk averse and select a design point close to A. However, associated with each design point (value of x)
will be a different "economic" or other value. For example, if the variable is solar cell efficiency, then different levels
of efficiency will directly affect satellite design, cost, and over-all mission value.

The goal of an experiment is to alter the state of knowledge through a measured result. The result will lead to a new
design point slightly lower than the measured outcome (since there will still be some uncertainty after the experiment).
As all outcomes in the range of A to B are possible as per the a priori likelihood function, the experiment's value can be
stated as the weighted or average value based on the possible outcomes less the value based on use of the a priori design
point.
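The calculation described above lends itself to a simple numerical sketch. The snippet below is an illustration, not the article's method: it treats the a priori likelihood as a discrete distribution over possible measured outcomes, assumes the post-experiment design point sits a fixed margin below the measured value, and computes the experiment's value as the probability-weighted value over outcomes less the value of the a priori design point. The solar cell numbers in the usage lines are hypothetical.

```python
def experiment_value(outcomes, probs, value_of, a_priori_point, margin=0.05):
    """Value of an experiment: the probability-weighted value over possible
    measured outcomes, less the value of the a priori (risk-averse) design.

    Assumptions (illustrative): the a priori likelihood is given as a
    discrete distribution (outcomes, probs), and the post-experiment design
    point is set a fixed margin below the measured outcome to reflect
    residual uncertainty.
    """
    expected_with_experiment = sum(
        p * value_of(x * (1 - margin)) for x, p in zip(outcomes, probs)
    )
    return expected_with_experiment - value_of(a_priori_point)


# Hypothetical solar cell efficiency example: possible efficiencies of 10%,
# 15%, and 20%, each mapped linearly to mission value; the risk-averse
# a priori design point is the low end of the range.
value = experiment_value(
    outcomes=[0.10, 0.15, 0.20],
    probs=[0.25, 0.50, 0.25],
    value_of=lambda eff: 100 * eff,  # hypothetical value per unit efficiency
    a_priori_point=0.10,
)
```

A positive result indicates the experiment is expected to improve the design decision by more than the cost of designing conservatively to the a priori point.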

Establishing the value of a mission-oriented experiment requires expert opinion on several points, among them the
range and form of uncertainty with respect to the design variable in the absence of an experiment; the design point that
would likely be selected if the experiment were not performed; and the value associated with different design points.
This approach adds a quantitative dimension to the primarily qualitative criteria currently used for choosing flight
experiments.

Decision trees
In the case of an engineering demonstration, which may be viewed as a go/no-go test of an engineering system or
subsystem that is in the hardware phase, decision trees may be used to capture the essence of the uncertainty to be
resolved. A typical decision tree would show how the operation of the subsystem being demonstrated is key to the
success of a mission -- if the subsystem works, the mission is very likely to be a success; if it fails, the mission will be
a total failure. Probability estimates for the possible outcomes would normally be based upon expert opinion.

On the right-hand side of the tree, the tips of the branches represent possible outcomes, with probabilities of occurrence
shown. If costs can be associated with each branch of the tree, and benefits with each outcome, it will then be possible to
estimate the value of the demonstration as the expected value of the yes decision minus the expected value of a no
decision. The mission's benefit might be measured in terms of a revenue stream for a commercial system, a benefit to
society for an applications mission, or a cost savings for a science mission (where the mission would likely be redone in
the event of a failure).
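The expected-value arithmetic of such a decision tree can be sketched as follows. All probabilities and dollar figures here are hypothetical, and the tree structure (run the demonstration, then commit the mission cost only if the subsystem works) is one plausible reading of the go/no-go logic described above.

```python
def demonstration_value(p_works, p_success_given_works,
                        mission_benefit, mission_cost, demo_cost):
    """Expected value of the "yes" decision (perform the demonstration)
    minus the "no" decision (skip it).

    Tree structure assumed for illustration: with the demonstration, the
    mission proceeds (and its cost is committed) only if the subsystem
    works; a failed subsystem means the mission is not flown. Without the
    demonstration, the mission flies bearing the full subsystem risk.
    """
    ev_yes = -demo_cost + p_works * (
        p_success_given_works * mission_benefit - mission_cost
    )
    ev_no = p_works * p_success_given_works * mission_benefit - mission_cost
    return ev_yes - ev_no


# Hypothetical figures: 80% chance the subsystem works, 95% mission success
# given a working subsystem, $1,000M benefit, $300M mission cost, $20M demo.
net = demonstration_value(0.8, 0.95, 1000, 300, 20)
```

With these made-up numbers the demonstration pays for itself, because it avoids committing the mission cost in the 20% of cases where the subsystem would have failed.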

Whereas assessments are based on a priori estimates, evaluations require observed results as well. Evaluations reflect
what was versus what might have been. Assessments are concerned with anticipating the value that might result from
performing an experiment. Evaluations are concerned with estimating the value obtained from the conduct of an
experiment.

ACTS, for example, is a demonstration program aimed at influencing both communication technology decisions and
choices concerning the provision of new communication services. The value of the ACTS flight demonstration
program stems from the difference in the a priori and a posteriori design points and the change in the anticipated
likelihood of use of the technology (or service) based upon the a priori estimates and the a posteriori experiment results.
This last factor is extremely important since it is necessary to make judgments before performing the experiment to
establish the "what might have been" scenario, which will be compared with the measured results. (In the case of ACTS
this will still require judgments of what is likely to happen given the measured experiment results.)

Since ACTS has already been launched without the necessary a priori judgments having been made, and experiments are
well under way, it is no longer possible to estimate credibly what would have been. Thus a significant opportunity to
perform an evaluation of one of NASA's largest technology transfer activities has been lost. The program has already
resulted in significant technology transfer, with Motorola's Iridium LEO communications satellite system incorporating
ACTS beam switching technology. This transfer occurred prior to ACTS' launch. The ACTS flight demonstration
therefore added no value in this particular arena, except that it may yet affect other organizations' decisions.

Metrics
Metrics are measures that indicate relative or absolute value, where "value" has meaning in terms of assessments and
evaluations. In addition to their public relations value, metrics serve as management tools for assessing and evaluating
performance. Since technology transfer involves numerous processes that occur across multiple disciplines and
organizations, appropriate metrics and the methods for quantifying them vary considerably. Also, the choice of
appropriate metrics depends on the availability of data and may change with time as new data emerge.

Metrics should provide answers to a broad range of questions related to the formulation, operation, and results of
technology transfer activities. These questions include: How well are we likely to do? Are we likely to receive
adequate value for our investment? How well have we performed? Is the program structured efficiently? How has
performance changed? Have we received adequate value for our investment?

Answering these questions requires use of a broad range of metrics, including some that are activity and effectiveness
related. Activity measures involve anticipated or measured level of activity, such as number of queries and papers
presented or cited per unit of time. Effectiveness measures involve anticipated or measured performance relative to
specified goals: for example, expected or measured industry investment per dollar of government investment or present
value of savings where the goals are increasing industry investment and achieving cost savings from the use of specific
technology, respectively. Since metrics are necessary for both program planning and program evaluation, past
performance and judgments must be relied upon for the former, and measurements combined with judgments for the latter.

Activity-related metrics such as number of queries per unit of time may prove useful for comparing different
alternatives that aim to increase the number of queries and for indicating process change over time. But such a metric
will likely have little or no value for influencing "portfolio" decisions in the absence of a relationship between the
metric and technology transfer benefits. Activity-related metrics normally have little utility for answering such
questions as "should X dollars be spent to increase the number of queries by y percent?" unless there is an established
relationship between the number of queries and the benefits from the queries.

A challenging problem
Measuring the impact of a technology transfer activity can be challenging, and often will depend on the availability of
data in both related and apparently unrelated areas.

For example, consider an energy information dissemination project whose goal is to reduce energy consumption
through improved purchase and use decisions, with a possible metric being energy savings per dollar of project
investment. A hypothetical illustration of consumption, budget, and temperature shows that an evaluation performed
by measuring energy consumption, both before and during the dissemination project, will result in a zero value for the
selected metric. If temperature were also measured and a relationship developed between it and energy consumption,
then the confounding influence of temperature could be taken into account and an adjustment made to the measured
consumption (this is indicated as "deduced" consumption and indicates the likely energy consumption if the project
were not undertaken).

The result is a nonzero value of the selected metric; savings are now in terms of deduced consumption less average
energy consumption prior to the project. The point of the example is that it is necessary to consider all of the important
factors likely to influence the chosen metric and establish data collection procedures for, and relationships between, the
factors and the metric.
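The temperature adjustment can be sketched as a simple regression exercise. The code below is an illustrative sketch with made-up data, assuming a linear consumption-temperature relationship: it fits the relationship on pre-project data, "deduces" what consumption would have been during the project given its temperatures, and reports the gap from measured consumption as the savings.

```python
def deduced_savings(pre_temps, pre_use, proj_temps, proj_use):
    """Energy savings attributable to the project after removing the
    confounding effect of temperature.

    Fits consumption = a + b * temperature by least squares on pre-project
    data, predicts ("deduces") consumption for the project period from its
    temperatures, and sums deduced minus measured use. A linear
    relationship is an illustrative assumption.
    """
    n = len(pre_temps)
    mean_t = sum(pre_temps) / n
    mean_u = sum(pre_use) / n
    b = sum((t - mean_t) * (u - mean_u)
            for t, u in zip(pre_temps, pre_use)) \
        / sum((t - mean_t) ** 2 for t in pre_temps)
    a = mean_u - b * mean_t
    deduced = [a + b * t for t in proj_temps]  # counterfactual consumption
    return sum(d - u for d, u in zip(deduced, proj_use))


# Made-up monthly data: consumption falls as temperature rises, and the
# project period shows use below the temperature-deduced counterfactual.
savings = deduced_savings(
    pre_temps=[0, 10, 20], pre_use=[100, 80, 60],
    proj_temps=[10, 20], proj_use=[75, 55],
)
```

Without the temperature term, a naive before-and-after comparison of raw consumption could show zero (or even negative) savings, which is exactly the failure mode the example above describes.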

LOAD-DATE: February 10, 1995

LANGUAGE: ENGLISH

GRAPHIC: Graph 1, A typical decision tree shows how the operation of the subsystem being demonstrated affects a
mission's success. On the right side, the tips of the branches represent possible outcomes and show probabilities of
occurrence; Graph 2, In measuring the impact of an information dissemination program on energy consumption,
investigators must consider major confounding factors, such as temperature, and their relationships with the selected
metric; Graph 3, Experiments reduce uncertainty regarding some aspect or "design point" of a system. Before an
experiment, an a priori assessment by experts places the value of a performance level, X, in the range of A to B, with
the indicated likelihood. The experiment then yields a measured result, leading to a new, slightly lower design point.
The experiment's value is the average value based on possible outcomes less the value based on use of the a priori
design point; Graph 4, GOVERNMENT ROLE IN TECHNOLOGY TRANSFER COMMERCIALIZATION
PROCESS

Copyright 1995 American Institute of Aeronautics and Astronautics, Inc.;


All Rights Reserved
