An Agent-Based Bayesian Forecasting Model For Enhanced Network Security


J. PIKOULAS, W.J. BUCHANAN, Napier University, Edinburgh, UK.
M. MANNION, Glasgow Caledonian University, Glasgow, UK.
K. TRIANTAFYLLOPOULOS, University of Warwick, UK.

Security monitoring software. This monitors different operations of a computer network and outputs the results to system administrators (for example, Abacus Sentry, COPS, Tripwire and Tiger).
Network monitoring software. This monitors a user's behaviour or monitors incoming or outgoing traffic (for example, Argus, Arpwatch and ISS).
Firewall software and hardware. This runs on the Internet/intranet entrance to a computer network, and checks all incoming network traffic for its contents at the network and transport layers of the OSI model. At the network layer, typically the Internet Protocol (IP) addresses are filtered for their source and/or destination, and at the transport layer the TCP ports are monitored (thus FTP and TELNET traffic could be blocked for incoming data traffic, but SMTP (electronic mail) could be allowed).
These methods are generally centralised applications with no real-time response and have no mechanism to foresee future user events. They also have a central focal point for security (typically a main server), which could itself become the focus of an attack (such as a denial-of-service attack, where the server is bombarded with hoax requests, which eventually reduces its quality of service to its clients).
The method involved in this research is distributed and does not depend on a central point of failure. It also gathers user behavioural information and makes a prediction on what the user might do in the future. This paper presents a distributed approach to network security using agents, and presents a novel application of the Bayesian forecasting technique to predict user actions. The Bayesian method has, in the past, been used for weather forecasting and has been expanded here so that it can be used to provide enhanced network security by trying to predict user actions. From this, a system can determine if a user is acting unpredictably or has changed their normal working pattern. Results are also given which show that the new model can predict user actions, and a set of experiments is proposed for further exploitation of the method.
In choosing a computer network security solution, the dominant issues are: cost, the desired level of security, and the characteristics of the existing operating system environment.

Abstract
Security has become a major issue in many organisations, but most systems still rely on operating systems, and a user ID and password system, to provide user authentication and validation. They also tend to be centralised in their approach, which makes them open to an attack. This paper presents a distributed approach to network security using agents, and presents a novel application of the Bayesian forecasting technique to predict user actions. The Bayesian method has been used in the past on weather forecasting and has been expanded so that it can be used to provide enhanced network security by trying to predict user actions. From this, a system can determine if a user is acting unpredictably or has changed their normal working pattern. Results are also given which show that the new model can predict user actions, and a set of experiments is proposed for further exploitation of the method.

1. Introduction
Computer security is a major concern for organizations.
Whilst security violations can be caused by external users (hackers), Carter and Catz [1] have shown that the primary threat comes from individuals inside an organisation. Hence much more emphasis has to be placed on internal security mechanisms.
External network attacks can be categorised [4] into IP spoofing attacks [5], packet-sniffing [6], sequence number prediction attacks and trust-access attacks. Categories of internal attack include password attacks [7], session hijacking attacks, shared library attacks, social engineering attacks, and technological vulnerability attacks.
Computer network security programs can be categorised as follows [3]:
Security enhancement software. This enhances or replaces an operating system's built-in security software (for example, Mangle It, Passwd+ and Shadow).
Authentication and encryption software. This encrypts and decrypts computer files (for example, Kerberos, MD5, RIPEM, and the TIS Firewall Toolkit).


4 Bayesian Intrusion Detection System

Three mechanisms for illegal behaviour detection are commonly used in computer network security programs [8], and can be applied to all five categories of computer security program.

We used a Bayesian multivariate statistical model because our problem is a linear multivariate problem and it is simpler, faster and more accurate to use a linear model than a non-linear model such as neural networks [24]. In order to test this, an intelligent agent security enhancement software system was constructed, in which a core software agent resides on one server in a Windows NT network system and user-end software agents reside on each user workstation. The software for each type of agent was written in Sun Java JDK Version 1.2 on a Microsoft Windows NT Version 4 environment running over a 10/100 Mbps network. There is one server and 10 clients. Figure 1 shows a core agent communicating with many user agents. A communication thread is a unique process that the core agent creates to transmit data to the user-end agent in response to a message transmitted from the user-end agent. Unique processes enable the core agent to communicate with each user agent effectively and efficiently, thereby enabling a fast response to network monitoring. Once the core agent has responded to a user agent, the process is killed.
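As a rough illustration of this per-request threading scheme, the sketch below uses plain Java sockets; the class names, port number and message format are assumptions, since the paper does not publish its source code. Each incoming user-agent message gets its own short-lived communication thread, which dies as soon as the reply has been sent:

import java.io.*;
import java.net.*;

// Sketch of the core agent's per-request communication threads (hypothetical names).
public class CoreAgent {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(4000);        // hypothetical port
        while (true) {
            Socket userAgent = server.accept();               // message arriving from a user agent
            new CommunicationThread(userAgent).start();       // one short-lived thread per request
        }
    }
}

class CommunicationThread extends Thread {
    private final Socket socket;

    CommunicationThread(Socket socket) {
        this.socket = socket;
    }

    public void run() {
        try (Socket s = socket;
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            String request = in.readLine();                   // e.g. a profile request from the user agent
            out.println(lookUpProfile(request));              // reply with the stored historical profile
        } catch (IOException e) {
            e.printStackTrace();
        }
        // The thread (the "unique process") ends here, once the core agent has responded.
    }

    private String lookUpProfile(String request) {
        return "PROFILE " + request;                          // placeholder for the real profile store
    }
}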
The system uses a hybrid detection technique, where invalid behaviour is determined by comparing a user's current behaviour with their typical behaviour, and by comparing their current behaviour with a set of general rules governing valid behaviour formed by systems administrators. Typical behaviour is contained in a user historical profile.

Statistical Anomaly Detection

Statistical anomaly detection systems analyse audit-log data to detect abnormal behaviour [9]. A profile of expected online behaviour for a normal user is predefined and derived from how an organisation expects a user to behave and from a system administrator's experience of the way a user is expected to use the resources of a system. Typically, the audit logs are analysed and processed for statistical patterns of events for typical operations, in order to determine usage patterns. These patterns are compared to the user's profile.
The system then warns the administrator that there has been a possible intrusion when a profile differs from a usage pattern. The major drawback with this technique is that it cannot predict extreme changes in user behaviour, yet it is such changes in a user's behaviour that normally identify a security breach.

Rule Based Detection

Rule-based detection systems use a set of generalised rules that define abnormal behaviour [10, 12, 13]. These rules are formed by analysing previous patterns of attack carried out by different people. The drawback of this system is that the basic rules are predefined by system administrators, so it cannot detect any new attack techniques. If a user exhibits behaviour that is not prescribed by the existing rules, the user can harm the system without being detected.

Hybrid Detection
Hybrid detection systems are a combination of statistical anomaly detection and rule-based detection systems. These typically use rules to detect known methods of intrusion and statistically based methods to detect new methods of intrusion.
CMDS (Computer Misuse Detection System) [14] is a security-monitoring package that provides a method to watch for intrusions, such as bad logins or file modifications. It also monitors for difficult detection problems such as socially engineered passwords, trusted-user file browsing and data theft that might indicate industrial espionage. CMDS supports a wide variety of operating systems and application programs. The drawback of this system is that it uses statistical analysis to make additional rules for the system, so it can only detect attack patterns that have been used in the past and identified as attack patterns, or predefined by the system operators. It also generates long reports and graphs of system performance that need to be interpreted by a security expert.

Figure 1: Agent Environment Topology

The user agent software has four components:

A sensor. The sensor monitors the various software applications (such as a word processor or a spreadsheet) that are currently being run by the user on that workstation. When a user logs in, the sensor polls the user's activity every five seconds and records the user's identifier and each application's name and process identifier (a minimal polling sketch is given after this list of components).
A transmitter. After the first polling by the sensor, the transmitter sends this information to the core agent. The core agent then responds by sending a user historical profile. With an audit-log file for a period of one month, we observed that the size of an average user profile was between 400 KB and 600 KB, with a download time of between three and five seconds.
A profile reader. The profile reader reads the user's historical profile.
A comparator. This compares the user's historical profile with the information read by the sensor. If the current behaviour profile does not fall within the accepted behaviour pattern defined by the user historical profile, the comparator provides the transmitter with the following information: user identifier, invalid behaviour type and the corresponding invalid behaviour type data. This is then sent to the core agent.
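A minimal sketch of the sensor's polling loop is given below; the class and helper names are hypothetical, and how the Windows NT process list is actually read is not described in the paper:

import java.util.ArrayList;
import java.util.List;

// Sketch of the sensor component: poll every five seconds and record
// the user identifier, application name and process identifier.
public class Sensor {

    static class Observation {
        final String userId;
        final String applicationName;
        final int processId;

        Observation(String userId, String applicationName, int processId) {
            this.userId = userId;
            this.applicationName = applicationName;
            this.processId = processId;
        }
    }

    private final List<Observation> log = new ArrayList<Observation>();

    public void monitor(String userId) throws InterruptedException {
        while (true) {
            for (String[] app : listRunningApplications()) {       // each entry: {name, pid}
                log.add(new Observation(userId, app[0], Integer.parseInt(app[1])));
            }
            Thread.sleep(5000);                                     // five-second polling interval
        }
    }

    // Placeholder: the real sensor would query the operating system's process list.
    private List<String[]> listRunningApplications() {
        List<String[]> apps = new ArrayList<String[]>();
        apps.add(new String[] {"winword.exe", "1234"});
        return apps;
    }
}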

[Diagram: forecasting method replacing the user profile for applications (current vs. forecasted model), giving faster processing.]

When invalid behaviour occurs, several courses of action are available, such as:
1. Warning message to the system administrator or end user.
2. Kill the specific application that has caused the invalid behaviour.
3. Prevent the end user from running any further applications.
Cases 2 and 3 can be achieved locally at the client workstation, and in Case 1 the user agent informs the core agent and the core agent informs the systems administrator. The user agent terminates when a user logs off.

5 Prediction Model
Figure 2 shows the complete model for the forecasting system, where a core agent reads the user profile, which is then received by the user agent. The user agent then predicts the usage against the forecast. Eventually, when the user logs off, the user profile is updated and sent back to the core agent.
In the traditional method of forecasting, a user event would be averaged over long time intervals (Figure 3).

Figure 3: Traditional method of generating user profile for applications


When our intrusion detection system is installed, the prediction part monitors the user's behaviour for 15 sessions. After that, it evaluates itself for five sessions. After this it is ready to make an accurate prediction. Our model has three stages of operation. The stages are as follows; a small illustrative sketch of the cycle is given after the list:


1. Observation stage. In this stage the model monitors the user and records their behaviour.
2. Evaluation stage. In this stage the model makes a prediction, monitors the user's actual movements and calculates the result. This stage is critical, because the model modifies itself according to the environment in which it operates.
3. One-step prediction. In this stage the model makes a single-step prediction. For example, assume that the user has logged in 15 times and the model is configured and ready to start predicting user moves. Instead of making a five- or ten-step prediction, like other mathematical models, our model makes a prediction for the next step only. When the user next logs in and out, the model takes the actual behaviour of the user, compares it with the one-step prediction it made before and calculates the error. So the next time a prediction is made for this user it will also include the data of the last observed user behaviour. With this procedure we maximise the accuracy of the prediction system.
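As a rough illustration of this observe-evaluate-predict cycle, the sketch below (in Java, in keeping with the implementation language) walks one user through 15 observation sessions, 5 evaluation sessions and then a rolling one-step-ahead prediction with error feedback. The class name and the simple discounted-average forecaster are our own stand-ins, not the paper's Bayesian model:

public class PredictionStages {

    private double estimate = 0.0;         // one-step-ahead forecast of the usage fraction
    private final double discount = 0.95;  // discount factor (0.95 is the delta value used later in the paper)

    double predictNext() {
        return estimate;
    }

    void update(double actual) {
        // error feedback: fold the latest observed session into the estimate
        estimate = discount * estimate + (1.0 - discount) * actual;
    }

    public static void main(String[] args) {
        PredictionStages model = new PredictionStages();
        double[] sessions = new double[25];            // fraction of each one-hour session spent in the application
        java.util.Arrays.fill(sessions, 0.3);          // dummy data: 18 minutes of use per hour

        for (int t = 0; t < sessions.length; t++) {
            if (t < 15) {                              // 1. observation stage: just record behaviour
                model.update(sessions[t]);
            } else if (t < 20) {                       // 2. evaluation stage: predict, compare, self-adjust
                double error = sessions[t] - model.predictNext();
                System.out.println("evaluation error at session " + t + ": " + error);
                model.update(sessions[t]);
            } else {                                   // 3. one-step prediction with continuing error feedback
                System.out.println("prediction for session " + t + ": " + model.predictNext());
                model.update(sessions[t]);
            }
        }
    }
}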


The proposed forecasting method improves on this by requiring much less memory storage. Figure 4 shows a generic model for the prediction, using parameters for a given window size (n), time unit and prediction number (z).

Figure 2: Agent forecasting model


some kind of knowledge, that is a subjective belief, expressed in terms of a distribution. This is the prior distribution of (θ_t | D_{t-1}) at time t. In other words, it is what we know before Y_t becomes available. Once this happens, we revise this prior belief, using the likelihood function, to find the posterior (revised) distribution (θ_t | D_t), which is better and more accurate. Then, by simple calculations, we find the prior at time t+1, and we calculate the posterior at t+1 only when the data Y_{t+1} come into the system (in our case, the real behaviour of the user). The model used becomes:
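In the standard West and Harrison notation (a sketch of the usual conjugate updating cycle, not a transcription of the paper's own equations; F_t and G_t are the regression and evolution matrices, V_t and W_t the observation and evolution variances):

\[
\begin{aligned}
(\theta_t \mid D_{t-1}) &\sim \mathrm{N}(a_t, R_t), & a_t &= G_t m_{t-1}, & R_t &= G_t C_{t-1} G_t^{\top} + W_t,\\
(Y_t \mid D_{t-1}) &\sim \mathrm{N}(f_t, Q_t), & f_t &= F_t^{\top} a_t, & Q_t &= F_t^{\top} R_t F_t + V_t,\\
(\theta_t \mid D_t) &\sim \mathrm{N}(m_t, C_t), & m_t &= a_t + A_t e_t, & C_t &= R_t - A_t Q_t A_t^{\top},
\end{aligned}
\]
where $A_t = R_t F_t Q_t^{-1}$ is the adaptive coefficient and $e_t = Y_t - f_t$ the one-step forecast error.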


Autoregressive moving average model. The general model introduced by Box and Jenkins (1976) includes autoregressive as well as moving average parameters, and explicitly includes differencing in the formulation of the model. Specifically, the three types of parameters in the model are: the autoregressive parameters (p), the number of differencing passes (d), and the moving average parameters (q). In the notation introduced by Box and Jenkins, models are summarised as ARIMA(p, d, q); so, for example, a model described as (0, 1, 2) means that it contains 0 (zero) autoregressive (p) parameters and 2 moving average (q) parameters, which were computed for the series after it was differenced once. (A compact operator-form sketch of this notation is given after the Estimation and Forecasting item below.)
Identification. As mentioned earlier, the input series for ARIMA needs to be stationary, that is, it should have a constant mean, variance and autocorrelation through time. Therefore, the series usually first needs to be differenced until it is stationary (this also often requires log-transforming the data to stabilise the variance). The number of times the series needs to be differenced to achieve stationarity is reflected in the d parameter (see the previous paragraph). In order to determine the necessary level of differencing, one should examine the plot of the data and the autocorrelogram. Significant changes in level (strong upward or downward changes) usually require first-order non-seasonal (lag = 1) differencing; strong changes of slope usually require second-order non-seasonal differencing. Seasonal patterns require corresponding seasonal differencing. If the estimated autocorrelation coefficients decline slowly at longer lags, first-order differencing is usually needed. However, one should keep in mind that some time series may require little or no differencing, and that over-differenced series produce less stable coefficient estimates.
At this stage we also need to decide how many autoregressive (p) and moving average (q) parameters are necessary to yield an effective, but still efficient, model of the process (that is, with the fewest parameters and greatest number of degrees of freedom among
Figure 4: Forecasting calculation (sample parameters: n = 15, z = 5, time unit = 1 hour)

The general multivariate model (DLM) is given by the following equations:
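In its standard West and Harrison form (a sketch consistent with the F_t, G_t and W_t notation used in the discussion below, not the paper's own typesetting), equation (1) is the observation equation and equation (2) the unobservable system, or evolution, equation:

\[
Y_t = F_t^{\top} \theta_t + \nu_t, \qquad \nu_t \sim \mathrm{N}(0, V_t), \tag{1}
\]
\[
\theta_t = G_t \theta_{t-1} + \omega_t, \qquad \omega_t \sim \mathrm{N}(0, W_t). \tag{2}
\]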

We use multivariate models because we want to incorporate and forecast several variables simultaneously. Note that the parameters θ_t change both deterministically (through the evolution matrix G_t) and stochastically (through the variance W_t), and this is what makes the model dynamic. Standard ARIMA (Auto-Regressive Integrated Moving Average) models are a special and restrictive case of the above model, obtained when F_t = F, G_t = G and W_t = W (all three components constant over time). This is restrictive, since all these components are likely to change over time, for example because equation (1) changes over time and there are other external sources of variation (such as extra subjective information about a variable). Moreover, equation (2) is not observable. This means that we are never going to see its evolution or trend in a diagram or a graph: it is a hidden model, and we cannot assume its components to be constant over time. There is another large problem that we cannot ignore in multivariate models: the observational variance matrix will not be known. Often, in standard time series work, it is assumed known, and attention quickly moves to another problem. However, in practice it is extremely difficult to set it as a known matrix. It is very difficult to propose what variance to use for a system where 20 applications are considered and only 20 or 30 data vectors are collected.
So for all these reasons we need to consider dynamic models. The system can also provide forecasts as far ahead as we like, proving very accurate according to the results. For this purpose we used a Bayesian framework, which means that at time t we will have


all models that fit the data). In practice, the values of the p or q parameters are rarely greater than two.
Estimation and Forecasting. At the next step (estimation), the parameters are estimated (using function minimisation procedures) so that the sum of squared residuals is minimised. The estimates of the parameters are used in the last stage (forecasting) to calculate new values of the series (beyond those included in the input data set) and confidence intervals for those predicted values. The estimation process is performed on transformed (differenced) data; before the forecasts are generated, the series needs to be integrated so that the forecasts are expressed in values compatible with the input data. This automatic integration feature is represented by the letter I in the name of the ARIMA methodology.
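For concreteness, the Box and Jenkins notation used in the items above can be written in operator form (standard textbook form, not transcribed from the paper), with the ARIMA(0, 1, 2) example written out:

\[
\phi(B)\,(1 - B)^{d}\, y_t = \theta(B)\,\varepsilon_t, \qquad
\phi(B) = 1 - \sum_{i=1}^{p} \phi_i B^{i}, \qquad
\theta(B) = 1 + \sum_{j=1}^{q} \theta_j B^{j},
\]
so that ARIMA(0, 1, 2) is simply
\[
y_t - y_{t-1} = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}.
\]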

which are more natural than the overall long formulae that ARIMA works out. We note that, because ARIMA is quite complicated, many practitioners end up with a very simple subclass of ARIMA models, sometimes without even stating the assumptions. This produces results that sometimes do not correspond to the real application. The only difficulty with the DLMs is the specification of the initial values, so that the algorithm may be put into practice. In general this has to be solved by the experience of the individual practitioner.
In our case, we have to specify the following: m_0, C_0, S_0, n_0, ρ, δ and F. Here m_0 is the mean of (θ_0 | D_0) and C_0 its variance. The choices made are:
m_0 = 0. This is set when we expect that the prior distribution (θ_0 | D_0) (the distribution of the parameter θ at time 0 given D_0, any initial information which is explicitly known) will not give any drift to Y_1. We expect this to happen, but we are not sure, so there is an uncertainty here, which is expressed by the variance C_0. It is natural and common policy to assume C_0 = I, the identity matrix, but care must be taken: when we are very uncertain about our choice we must increase the diagonal elements of C_0. Of course, this affects all the following results somewhat, but the approach is more realistic. In general we will have more data vectors than 15 or 20 (our case), hence the influence of the initial values on the actual estimates decreases.
C_0 = I. This is motivated by our belief that m_0 is not important to the following values of Y_t, t = 1, .... S_0 is typically, almost always, set to I and has no special meaning; the only requirement is that it is chosen such that, through the formula by which S_t is calculated, it leads to acceptable results (symmetric matrices). n_0 can be set to 0 (a case which implies n_1 = 1, without great loss) or to n_0 = 1 (a case which implies n_1 = ρ + 1). The choice of n_0 is not crucial, since there is a theorem which states that S_t converges to the true variance matrix as t goes to infinity and does not depend on n_0; but it must take small values.
These choices, together with the discount factors δ and ρ discussed below, are collected in the summary that follows.
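As a compact restatement of the settings just described (the values of δ and ρ are those given a little further on in this section):

\[
m_0 = 0, \qquad C_0 = I, \qquad S_0 = I, \qquad n_0 \in \{0, 1\}, \qquad
0.85 < \delta < 1 \ (\delta = 0.95), \qquad \rho < \delta .
\]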

In addition to the standard autoregressive and moving average parameters, ARIMA models may also include a constant, as described above. The interpretation of a statistically significant constant depends on the model that is fitted. Specifically:
if there are no autoregressive parameters in the model, then the expected value of the constant is μ, the mean of the series;
if there are autoregressive parameters in the series, then the constant represents the intercept.
If the series is differenced, the constant represents the mean or intercept of the differenced series. For example, if the series is differenced once and there are no autoregressive parameters in the model, the constant represents the mean of the differenced series, and therefore the linear trend slope of the un-differenced series.
ARIMA models are similar to our model: they use the existing data to calculate the parameters of the model. But suppose, for example, that some external information is available. We may know that, for a particular user, although he does not have an illegal user profile, it is very probable that at a specific point in time he will perform a major intrusion into an important application. ARIMA will try to change the parameters to adjust the model, but even in this case it is doubtful how well the model will do across all the applications. With our DLM this is not a problem: we simply add the external information to the prior information we already have. This is named expert intervention, and the revised posterior takes the new knowledge into account. Our system is not assumed perfect when the model is fitted, and we let information, no matter what its sort, make us learn and improve the system.
Now, our model is slightly different from the one we use for illustration purposes. We find recurrence relationships.

δ. The choice of δ is discussed in detail in Ameen and Harrison (1982a), where it is shown that it must satisfy 0.85 < δ < 1, i.e. be quite high. Thus we have set it to 0.95.
ρ. ρ is a discount factor as well. In this document we state that it has to be smaller than δ since, in general, S_t is not influenced by the data as much as m_t is. Note that δ enters A_t, and so it influences m_t.

By applying intervention we increase the accuracy of the model and can make more accurate predictions (Figure 5). In our model we can observe this by looking at Figure 6 and Figure 7: the model's prediction is very close to the actual user's behaviour for application number one at the specific time t = 19. We achieved this accuracy by applying the intervention technique. We can also observe that the ARIMA model did not make any prediction for this particular user behaviour.

The components are defined as:
m_0: The mean of the influence of θ_0 on Y_t from D_0, our initial information.
C_0: Dispersion of the above influence.
S_0: No meaning; an auxiliary quantity for S_t.
n_0: No meaning; an auxiliary quantity for n_t.
ρ: Factor of the influence of the data on the estimate S_t.
δ: Factor of the influence of the data on the estimate m_t.
F_t: A basic quantity that expresses the linearity of the model and gives different trends to the several values of Y_t, both for time series analysis (what has happened in the past) and forecasting (what will happen in the future).

Results and experiments

The first set of experiments was made in order to test our security environment, to the extent that it works, and to get some results from our proposed statistical model and compare it with other statistical models.

[Chart: Prediction for Application 1 (using the proposed model), plotted against time in hours.]

Finally, we make clear that when we say factor in the above explanation we do not mean a percentage or anything of the sort. Factor means discount factor, which means that the estimates of m_t and S_t are each discounted somewhat, and at different rates, since both are influenced by the data.


Figure 6: The Real Observations of the Model (parameters: n = 15, time unit = 1 hour; plotted against time in hours)


Our environment is vastly improved with the use of the prediction mechanism. Our system does not use the real-time data that its agents gather for real-time detection only; the addition of the prediction model to our environment greatly increases its functionality and usability.
Figure 8 shows one user who logged on to the system 20 times and had one-hour sessions each time. We monitored all their moves and all the applications that they used. In our prediction model we had only three applications to predict. The intervals are from 0 to 1 and they denote an hour; so, for example, 0.3 means that the user used this program for 0.3 of the hour (18 minutes) in that specific hour of system usage.
We used our prediction mechanism for the last five observations. As we can observe from the results, if we compare the graphs in Figure 6, which are the real observations for the three applications, with the graphs in Figure 7, we can see that the two figures are almost identical. We cannot say the same if we compare the real readings with the results of the ARIMA model. We can see that they are


Figure 5: Forecasting calculation with intervention (sample parameters: n = 15, z = 5, time unit = 1 hour)

Intervention
Intervention is a mechanism for improving the prediction accuracy. It is used when there is additional information about the future behaviour of the system, and this information can be added to the model prior to the prediction. For example, if there are some users who are keen on using illegal software, or there are new users about whom there is not enough behavioural information, the intervention mechanism can be applied.
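A minimal sketch of how such feed-forward intervention can enter the Bayesian recursion (standard West and Harrison style; the paper does not print its own intervention equations): external information, expressed as a shift h_t and an extra uncertainty H_t supplied by the analyst, is folded into the prior moments before the forecast is formed.

\[
a_t^{*} = a_t + h_t, \qquad R_t^{*} = R_t + H_t, \qquad
f_t^{*} = F_t^{\top} a_t^{*}, \qquad Q_t^{*} = F_t^{\top} R_t^{*} F_t + V_t .
\]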


less precise when compared with the actual readings, and they fail to predict the action of the user in application 1 at time interval nineteen, in contrast with our model, which predicted it with a very close figure.

Future Work
The experiments conducted up to now were set up to verify that the environment works and to show that our proposed statistical model gives better results than the models that are widely used up to now. In the next stage, we are planning to expand the number of our experiments, the number of applications that we use, and the number of users involved. We are also planning to fully exercise our model by instructing users to exhibit some extreme behaviour for short periods and normal behaviour for long periods, as we want to see whether the model detects and predicts extreme user behaviour (Figure 8).

[Chart: Prediction for Application 1 (using the ARIMA model); parameters n = 15, z = 5, time unit = 1 hour; plotted against time in hours.]

Figure 7: The proposed model's 5-step prediction

Evaluation


Figure 8: Experimental setup (variation of prediction window from 1 to 100; variation of time unit from 10 minutes to 1 hour)

Our proposed model is a multivariate linear model that is a simple and fast adaptive model. It requires far less preparation than other models, such as, for example, the neural-net weights that have to be decided before a neural-net model is built. Our proposed environment reacted as expected to all the tests that were applied. The monitoring of the user behaviour was successful and the overhead on the system resources was minimal.
There was a 1 to 2% increase in CPU usage when the user agent was monitoring the user's moves, and the prediction task took only two seconds to complete with the three applications; for a fully operational system with 20-25 applications, we estimate that it will take no more than five seconds.
Our proposed environment collects information about the user every five seconds. The prediction procedure takes place at the end of each hour of a user's use of the system. If the user logs off before an hour completes, the calculation of the prediction takes place after the user logs on to the system again and completes the hour.
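A small sketch of this trigger logic, under the assumption that the sensor's five-second samples are simply accumulated across log-ins until a full hour has been gathered (class and method names are our own):

public class HourlyPredictionTrigger {

    private long accumulatedMillis = 0;              // usage time carried across log-ins
    private static final long HOUR = 3600000L;
    private static final long SAMPLE = 5000L;        // five-second sampling interval

    // Called once per five-second sample while the user is logged in.
    public void onSample() {
        accumulatedMillis += SAMPLE;
        if (accumulatedMillis >= HOUR) {
            runPrediction();                          // a full hour of usage has been completed
            accumulatedMillis -= HOUR;                // carry any remainder into the next hour
        }
    }

    private void runPrediction() {
        System.out.println("one-step-ahead prediction triggered");
    }
}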
Another difference of our model is that the statistical models currently in use work inside acceptable parameters only because they make too many assumptions about the initial parameters, a factor that we believe makes them give results that do not represent actual situations.

8 References
[1] Carter and Catz, Computer Crime: An Emerging Challenge for Law Enforcement, FBI Law Enforcement Bulletin, pp 1-8, December 1996.
[2] Roger Blake, Hackers in the Mist, Northwestern University, December 2, 1994.
[3] National Institutes of Health, Center for Information Technology, http://www.alw.nih.gov/Security/securityprog.html#commercial, October 1998.
[4] W.J. Buchanan, Handbook of Data Communications and Networks, Kluwer, 1998.
[5] SamsNet, A Hacker's Guide to Protecting Your Internet Site and Network, http://mx.nsu.ru/Mas-Security/ch28/ch28.htm.
[6] NetworkICE Corporation, Packet Sniffing, http://www.networkice.com/advice/Underground Hacking/Methods/Technical/Packet_sniffing/default.htm.
[7] Alan Ramsbottom, FAQ: NT Cryptographic Password Attacks & Defences, 1997, http://www.omikron.de/~ecr/nthack/samfaq.htm.
[8] Chris Herringshaw, Detecting Attacks on Networks, IEEE Computer Magazine, pp 16-17, Dec. 1997.
[9] Debra Anderson, Detecting Unusual Program Behavior Using the NIDES Statistical Component, IDS Report, SRI Project 2596, Contract Number 910097C (Trusted Information Systems) under F30602-91-C-0067 (Rome Labs), 1995.
[10] T. Lunt, H. Javitz, A. Valdes, et al., A Real-Time Intrusion Detection Expert System (IDES), SRI Project 6784, SRI International Technical Report, Feb. 1992.



[11] J. Pikoulas and K. Triantafyllopoulos, Bayesian Multivariate Regression for Predicting User Behaviour in a Software Agent Computer Security System, 20th International Symposium on Forecasting, Lisbon, Portugal, June 21, 2000.
[12] Sandeep Kumar and Gene Spafford, A Pattern Matching Model for Misuse Intrusion Detection, Proceedings of the 17th National Computer Security Conference, Oct. 1994.
[13] Mark Crosbie and Gene Spafford, Active Defence of a Computer System using Autonomous Agents, COAST Group, Dept. of Computer Science, Purdue University, Technical Report 95-008, pp 2-3, Feb. 1995.
[14] The Computer Misuse Detection System, http://www.cmds.net/, 1998.
[15] J. Pikoulas, M. Mannion and W. Buchanan, Software Agents and Computer Network Security, 7th IEEE International Conference on the Engineering of Computer Based Systems, pp 211-217, Apr. 2000.
[16] Jean O. Dickey, Christian L. Keppenne and Steven L. Marcus, Forecasting Regional Climate Change with Advanced Statistical Methods, Jet Propulsion Laboratory, California Institute of Technology, Pasadena.
[17] Hossein Arsham, Statistical Data Analysis: Prove It with Data, University of Baltimore, http://ubmail.ubalt.edu/~harsham/stat-data/opre330.htm.
[18] B. Carlin and T. Louis, Bayes and Empirical Bayes Methods for Data Analysis, Chapman and Hall, 1996.
[19] Stanford University, GENSCAN: A Powerful Tool for Gene Prediction, Vol. 8, No. 1, 1999.
[20] Steven L. Salzberg, Arthur L. Delcher, Simon Kasif and Owen White, Microbial Gene Identification Using Interpolated Markov Models, Nucleic Acids Research, Vol. 26, No. 2, pp 544-548, Oxford University Press, 1998.
[21] The Great Lakes Forecasting System, The Ohio State University (OSU) and the National Oceanic and Atmospheric Administration (NOAA) Great Lakes Environmental Research Laboratory (GLERL), http://superior.eng.ohio-state.edu/main/noframes/about.html.
[22] Sandia National Laboratories, A Smart Agent-based Simulation Model, http://www-aspen.cs.sandia.gov/, Feb. 2000.
[23] J.R.M. Ameen and P.J. Harrison, Normal Discount Bayesian Models, Journal of Bayesian Statistics, 1985.
[24] Georges A. Darbellay and Marek Slama, Forecasting the Short-Term Demand for Electricity: Do Neural Networks Stand a Better Chance?, International Journal of Forecasting, pp 71-83, 2000.
