
SPE 165374

Global Model for Failure Prediction for Rod Pump Artificial Lift Systems
Yintao Liu, Ke-Thia Yao, USC Information Sciences Institute; Cauligi S. Raghavendra, Anqi Wu, Dong Guo,
Jingwen Zheng, University of Southern California; Lanre Olabinjo, Oluwafemi Balogun, Chevron; Iraj
Ershaghi, University of Southern California
Copyright 2013, Society of Petroleum Engineers

This paper was prepared for presentation at the SPE Western Regional & AAPG Pacific Section Meeting, 2013 Joint Technical Conference held in Monterey, California, USA, 19-25 April 2013.

This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents of the paper have not been
reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect any position of the Society of Petroleum Engineers, its
officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written consent of the Society of Petroleum Engineers is prohibited. Permission to
reproduce in print is restricted to an abstract of not more than 300 words; illustrations may not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.


Abstract

This paper presents a new generalized global model approach for failure prediction for rod pumps. By embedding
domain knowledge into an Expectation Maximization clustering algorithm, the proposed global model is able to
statistically distinguish pre-failure and failure patterns from normal patterns during the training stage. Compared
with previous field-specific models, the global model's enriched training set draws on a much larger pool of
normal and failure examples from all fields, with which a generalized Support Vector Machine (SVM) is trained
that can predict failures in any of those fields. The data set for this paper is taken from five real-world assets
using rod pump artificial lift systems, containing nearly 2,000 rod pumps. The results show that the global model
captures future rod pump and tubing failures with acceptable precision and recall. The resulting global model is
scalable and can be used to predict failures for proactive maintenance, reducing lost oil production. Results from
our case studies with data from multiple fields show that both precision and recall are better than 65% with this
global model.


Keywords: failure prediction, global model, rod pump, artificial lift, semi-supervised learning

Introduction

Artificial lift systems are widely used in the oil industry to enhance production from reservoirs whose pressure is
too low to lift fluids directly to the surface. Among the various artificial lift techniques in the industry (such as Gas
Lift, Hydraulic Pumping Units, Electric Submersible Pumps, Progressive Cavity Pumps and Rod Pumps), the
Sucker Rod Pump technique is the most commonly used. A typical oil field contains hundreds of wells, and a
single organization may operate many fields with varying geological formations. Well failures in oil field assets
lead to production loss and can greatly increase operational expense. Experienced experts are capable of
identifying anomalies by combining different types of information, such as a well's recent performance, its event
log and the performance of its neighboring wells. Such anomalies, once identified, have a high probability of
being followed by a failure, and may already be causing economic losses; thus either proactive maintenance or a
repair workover needs to be scheduled to reduce those losses. However, with a limited number of trained
personnel and resources for managing large fields, such proactive operation cannot be carried out at full speed
everywhere. Automating the monitoring and operation of such fields is therefore important for achieving higher
economic benefits. These fields are instrumented, and large volumes of data are collected, including historical
event logs and time series parametric data. Such field data, along with expert knowledge, are very useful inputs
to data mining methodologies for predicting well failures. Successful failure prediction can dramatically improve
production performance, for example by adjusting operating parameters to forestall failures or by scheduling
maintenance to reduce unplanned repairs and minimize downtime. In a Smart Oil Field, the decision support
center uses measurements from the fields for efficient management and operation.

The reasons for rod pump failures can be broadly classified into two main categories: mechanical and chemical.
Mechanical failures are caused by improper design, improper manufacturing, or wear and tear during operations.
Well conditions may contribute to excessive wear and tear, such as sand intrusions, gas pounding, rod cutting
and asphalting. Chemical failures are caused by the corrosive nature of the fluid being pumped through the
system; for example, the fluid may contain H2S or bacteria. For rod pumps, one of the major mechanical failures
is the Tubing Failure, in which the tubing leaks because of accumulated mechanical friction and cutting events. A
tubing leak does not cause a rod pump to shut down, and because it happens down-hole, a field specialist cannot
easily identify the anomalous status by visual or sound inspection. If not discovered in time, the leak causes a
continuous loss of production and significantly reduces the rod pump's efficiency.
The main focus of this paper is predicting down-hole tubing leaks and pump failures of sucker rod pump
production wells on a daily basis across heterogeneous fields using data mining and machine learning
approaches. By prediction we mean detecting the early signals, such as mechanical friction and rod events, that
potentially lead to a tubing leak; once the leak happens, we also aim to detect it in time. This problem can be
categorized as an anomaly detection problem, and various techniques have been applied to similar problems. In
[2, 3], the hard drives' S.M.A.R.T. (Self Monitoring and Reporting Technology) log is the major source for
predicting disk drive failures. The S.M.A.R.T. log comprises drive aging, drive power cycles, errors corrected by
the inner ECC code, mechanical shock, and so on. A Naive Bayes classifier is constructed to learn from past
failures so that it can estimate the failure probability for other disks. In [4, 5], over-the-counter (OTC) drug sales,
customer information such as gender and age, and seasonal information such as weather and temperature are
used to construct a complex Bayesian network that probabilistically infers disease outbreaks.

Our prior work [1] used machine learning techniques to generate high-quality failure prediction models with good
accuracy. However, it suffers from two major drawbacks. First, it uses traditional machine learning techniques
that require labeled datasets for training the models, and generating these labeled datasets is human-intensive
and time-consuming. Second, each model is field-specific and only applicable to the particular field from which
its labeled dataset was derived. Field-specific models generally perform poorly on other fields because of
differences in data characteristics caused by field geology, operational procedures, and so on. Moreover, these
models have to be maintained independently, which raises nontrivial maintenance costs.

In this paper, we present a generalized global model for failure prediction that works across multiple rod pumps
located in multiple fields. A machine learning based labeling approach involving clustering and rule-based
filtering is used to automate the labeling process. Integrating training sets across multiple fields showed that this
labeling approach is effective: our experimental results show that the precision and recall of the failure
predictions are very good. Furthermore, a single global model can be employed to predict failures for rod pumps
in multiple fields while reducing maintenance cost.

Problem Description

All the well data in this study are collected by Pump Off Controllers (POCs). These POCs gather and record
periodic well sensor measurements indicating production and well status through load cells, motor sensors,
pressure transducers and relays. Attributes recorded by these sensors include card area, peak surface load,
minimum surface load, strokes per minute, surface stroke length, flow line pressure, pump fillage, prior-day
cycles, and daily run time. From these, one can calculate gearbox torque, polished rod horsepower, and net
down-hole pump efficiency. The attributes are measured daily, sent over a wireless network and recorded in a
LOWIS database. LOWIS (Life of Well Information Software) stores the historical data about well information; in
the LOWIS database these attribute values are indexed by a well identifier and a date.
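To make this layout concrete, the sketch below shows one plausible organization of the daily records once exported from the LOWIS database; the column names and values are illustrative assumptions rather than the actual LOWIS schema.

import pandas as pd

# Hypothetical export of daily POC measurements, indexed like the
# LOWIS records by (well identifier, date); columns are illustrative.
records = pd.DataFrame({
    "well_id": ["W001", "W001", "W002"],
    "date": pd.to_datetime(["2011-03-01", "2011-03-02", "2011-03-01"]),
    "card_area": [18.2, 18.2, 22.5],
    "peak_surface_load": [15400.0, 15400.0, 16800.0],
    "min_surface_load": [6200.0, 6200.0, 7100.0],
    "daily_run_time": [21.5, 23.0, 12.0],   # hours of runtime per day
}).set_index(["well_id", "date"])

print(records.loc["W001"])  # all daily records for one well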

Based on our previous study, we discovered that the trends of the four most reliable attributes are the key
indicators of a potential anomaly, supplemented by two calculated features:
Existing attributes from LOWIS for trend extraction: card area, peak surface load, minimum surface load,
daily run time;
Calculated attributes:
o card unchanged days: if all card measures (area, peak and minimum load) fail to change, this
integer field keeps increasing, with missing dates counted as unchanged;
o daily runtime ratio: the percentage of runtime with respect to a full 24-hour day.
Combining the trend and calculated attributes, we can formulate failure prediction as a machine learning problem:
given the features of a new date from a well, the output is whether the well is staying normal, has a potential
failure, or is failing.

Failure Prediction Algorithms for Rod Pumps

To formulate this as a machine learning problem, more specifically a classification problem, we separate the task
into three steps after preprocessing: feature extraction, semi-supervised learning and evaluation. In this section,
we describe these steps in our approach one by one.

Feature Extraction

As previously studied in [1], we extract both long-term and short-term trends for the existing attributes at each
time sample using the dual moving median process illustrated in Figure 1.

Figure 1 Moving median feature extraction process

For each attribute, three medians are calculated for trend extraction:

Global median: long-term performance, e.g. the past 3 months;
Median 1: short-term performance, e.g. the most recent week;
Median 2: current performance, over a window shorter than the short-term one, e.g. the most recent 3 days.
The actual features are then calculated by dividing Median 1 and Median 2 by the global median. A small
constant is added to the denominator to avoid computational outliers.
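As a concrete sketch of this computation for a single attribute, the following uses pandas rolling medians; the window lengths mirror the examples above (about 3 months, 1 week and 3 days), and the small constant eps stands in for the value mentioned, both being assumptions rather than the paper's exact settings.

import pandas as pd

def moving_median_features(series: pd.Series,
                           long_win: int = 90,    # global median: ~3 months
                           short_win: int = 7,    # Median 1: ~1 week
                           current_win: int = 3,  # Median 2: ~3 days
                           eps: float = 1e-6) -> pd.DataFrame:
    """Dual moving median trend features for one daily attribute series."""
    global_median = series.rolling(long_win, min_periods=1).median()
    median1 = series.rolling(short_win, min_periods=1).median()
    median2 = series.rolling(current_win, min_periods=1).median()
    # Each trend feature is a short-window median over the long-term median;
    # eps keeps the division well defined when the global median is zero.
    return pd.DataFrame({
        "trend_short": median1 / (global_median + eps),
        "trend_current": median2 / (global_median + eps),
    })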

We discovered a correlation between unchanging features and potential failures, based on observing many
sudden failures after episodes of unchanging features. Daily data naturally fluctuate because of the real-world
uncertainties that govern complex systems. According to reliability studies of many fields [6], because models of
complex systems are usually not accurate enough to predict reliably where critical thresholds may occur, it is
useful to build reliable statistical procedures to test whether the environment is in a stable state or approaching a
critical tipping point. For our problem, the statistic that measures system reliability is the number of days on
which the reliable critical attributes (CardArea, PeakSurfLoad, MinSurfLoad) do not change; by unchanging we
mean the values do not change by even a single digit. Figure 2 is an example of this situation. The system is
shown entering an unreliable state as CardUnchangedDays accumulates between March 2012 and April 2012.
This is followed by a critical drop in CardArea, which ultimately leads to a Tubing Failure. In many cases, these
unchanged days mark either POC problems that prevent the parameters from being read correctly or an actual
situation in which the rod pump is problematic.

Figure 2 CardUnchangedDays correlates to sudden failures
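A sketch of how this counter could be accumulated is shown below, assuming daily rows indexed by date in a pandas DataFrame; the column names and the exact change test are illustrative assumptions, while treating missing dates as unchanged follows the definition given earlier.

import pandas as pd

def card_unchanged_days(df: pd.DataFrame) -> pd.Series:
    """Consecutive days on which no card measure changes (CardUnchangedDays).

    df: daily rows with a DatetimeIndex and columns card_area,
    peak_surface_load, min_surface_load. Missing dates are treated as
    unchanged, so the counter keeps increasing across gaps.
    """
    cards = df[["card_area", "peak_surface_load", "min_surface_load"]]
    daily = cards.asfreq("D").ffill()        # fill gaps: missing == unchanged
    changed = daily.diff().abs().sum(axis=1) > 0
    counts, run = [], 0
    for day_changed in changed:
        run = 0 if day_changed else run + 1  # reset on any change
        counts.append(run)
    return pd.Series(counts, index=daily.index, name="card_unchanged_days")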

RunTimeRatio is another parameter that we added for prediction. In our previous version, trends of DailyRunTime
were used; however, the ratio of the runtime to the full 24 hours per day is also relevant to a failure. In Figure 3,
because of the sliding window process, the trend DailyRunTime2 has not yet reached its tipping point in early
May 2012, when the system has already almost stopped functioning. RunTimeRatio captures this information at
its earliest stage, so by including this feature we expect our predictions to be slightly earlier than before.


Figure 3 Impact of RunTimeRatio


Clustering Labeling with Rule Enhancement

Labeling the training data is one of the key components of failure prediction. In statistical learning, the
assumption is that one or more generative distributions produce each type of failure example as well as the
normal examples, and the data reveal only a portion of the observations from those distributions.

Table 1 Rule-enhanced clustering labeling algorithm for failure prediction

Input: failure vs. normal rate r, number of failure training examples n.
Output: labeled training set S with (r + 1) * n training wells' signatures.

1. Collect all failures of the same type from all available FMTs in a sampling pool J
2. S = ∅
3. Randomly sample a well P from J; let its signatures for failure type f_type be {X_1, ..., X_n}, where X_n is
   the last non-missing signature before the failure date
4. If n < minimum sample size then
   1. Goto Step 7 // Not enough signatures for labeling
5. Else
   1. [priors, idx] = EM(P, number of clusters = 3) // Expectation maximization of a Gaussian mixture model
   2. Cluster_normal = the cluster with the maximum prior; Cluster_failure = idx(n)
   3. If f_type = Tubing Failure then
      1. mean_CardArea = (1 / count(idx = Cluster_failure)) * sum of X_i(CardArea2) over all i with
         idx(i) = Cluster_failure
      2. If mean_CardArea > z then // Card is not shrinking enough
         1. Goto Step 7
      3. Endif
   4. Endif
   5. If Cluster_normal != Cluster_failure and priors(Cluster_normal) > 0.5 then
      1. Cluster_pre-failure = the remaining cluster
      2. S = S ∪ {(X_i, normal) : idx(i) = Cluster_normal}
      3. S = S ∪ {(X_i, f_type) : idx(i) = Cluster_failure}
      4. S = S ∪ {(X_i, prefail_f_type) : idx(i) = Cluster_pre-failure}
   6. Endif
6. Endif
7. J = J - {P}
8. Repeat from Step 3 until the failure training examples reach n wells or J = ∅.
9. Collect all normal wells' signatures from all available FMTs in a sampling pool Q
10. Randomly sample a well W from Q; let its signatures be {X_1, ..., X_n}, where X_n is the most recent
    non-missing signature before the end of the well's valid training range
11. If n < minimum sample size then
    1. Goto Step 14 // Not enough signatures for labeling
12. Else
    1. [priors, idx] = EM(W, number of clusters = 2) // Expectation maximization of a Gaussian mixture model
    2. Cluster_normal = the cluster with the maximum prior
    3. If priors(Cluster_normal) > 0.5 then // Major cluster
       1. S = S ∪ {(X_i, normal) : idx(i) = Cluster_normal}
    4. Endif
13. Endif
14. Q = Q - {W}
15. Repeat from Step 10 until the normal training examples reach floor(n * r) wells or Q = ∅.

Under this theory, we can rely on more roughly labeled training data to achieve a reliable model that works
across multiple fields. The field-specific model's biggest drawback is its expensive training process, which
prevents it from being adopted in multiple fields. What we have learned is that we can confidently label the many
tubing leak failures that exhibit consistent failure trends, but we are less confident about failures without such
trends. For these, we label using the rule-enhanced statistical clustering algorithm. For rod pump failures,
because of the variety of their root causes, we rely on a similar process but with fewer rule constraints.

By analyzing predictions along a timeline, we are able to identify historical failures that show a clear normal →
pre-failure → failure transition process. In this process, normal should be the major observation; as the well gets
closer to failure, pre-failure signatures appear that statistically differ from the distribution of normal examples,
and a range of signatures mixing normal and pre-failure is allowed. All signatures finally converge to failure as
the data approach the real failure event. Rules are used to constrain the clustering results to follow this process,
and failures that violate the rules are ruled out as training candidates [1]. This training set construction process
scans all existing failure and normal wells, and the output clusters are labeled as the corresponding pre-failure
type, failure type and normal examples, as illustrated in Figure 4. The labeled set, which we call the training set,
is then used to train a multi-class SVM capable of making predictions on future features.


Figure 4 High-level abstraction for training set construction: failure labeling (clustering and rule filtering for existing failures, then assigning labels and enlarging the failure training examples) and normal labeling (clustering and rule filtering for normal wells, enlarging the normal training examples while discarding noise) together feed a multi-class SVM

Table 1 shows the rule-enhanced clustering labeling algorithm for prediction. The 3-class clustering roughly
categorizes the failure signatures, which are then filtered by rules; Weka is used as the clustering library [7]. We
assume that if there are not at least 100 non-missing signatures, the sample size is too small. When the
clustering is done, the clusters have to be discriminative by their priors: the normal cluster must dominate by
covering over 50% of the time, and the failure cluster has to touch the real failure (the last example), leaving the
remaining cluster for pre-failure signatures. This process can be done in parallel, and the sampling rules for
training wells can be adjusted, but here we use pure random order. For normal examples, a similar process is
applied, but with 2-class instead of 3-class clustering, so that the major class is normal with over 70% of the
distribution, while the smaller one is discarded as noise.
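To illustrate the failure-labeling half of Table 1 in code, the following is a minimal sketch that uses scikit-learn's GaussianMixture in place of Weka's EM clustering; the feature layout, the label names and the CardArea threshold z are assumptions, and the sampling loop over the pool of failed wells is omitted.

import numpy as np
from sklearn.mixture import GaussianMixture

def label_failure_well(X, f_type, card_area_col=0, min_samples=100, z=0.8):
    """Rule-enhanced 3-cluster labeling for one failed well (cf. Table 1).

    X is an (n, d) array of daily signatures ending at the last
    non-missing signature before the recorded failure. Returns a list
    of (signature, label) pairs, or None if the well is rejected.
    """
    n = len(X)
    if n < min_samples:
        return None                              # not enough signatures
    gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
    idx = gmm.predict(X)                         # cluster index per signature
    priors = gmm.weights_                        # cluster priors
    cluster_normal = int(np.argmax(priors))      # dominant cluster
    cluster_failure = int(idx[-1])               # cluster touching the failure
    if f_type == "tubing_failure":
        fail_rows = X[idx == cluster_failure]
        if fail_rows[:, card_area_col].mean() > z:
            return None                          # rule: card not shrinking enough
    if cluster_normal == cluster_failure or priors[cluster_normal] <= 0.5:
        return None                              # clusters not discriminative
    names = {cluster_normal: "normal", cluster_failure: f_type}
    return [(x, names.get(c, "prefail_" + f_type)) for x, c in zip(X, idx)]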

With the labeled training set, a multi-class support vector machine (SVM) [8, 9] is then trained to learn from all
the labeled cases. After the training process, the prediction model can be evaluated.
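As a minimal sketch of this training step (the Experiments section mentions a radial basis function kernel), scikit-learn's SVC, which wraps LIBSVM [11] and performs multi-class classification internally, stands in here; the hyperparameters and the placeholder training data are assumptions.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for the labeled output of Table 1:
# each row is a daily signature, each label one of the three classes.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 6))     # six features, as described above
y_train = rng.choice(["normal", "prefail_tubing_failure", "tubing_failure"],
                     size=400)

model = make_pipeline(
    StandardScaler(),                          # scale features before the SVM
    SVC(kernel="rbf", C=1.0, gamma="scale"),   # RBF kernel; C, gamma assumed
)
model.fit(X_train, y_train)
print(model.predict(X_train[:1]))       # daily prediction for one signature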

Evaluation

To evaluate our algorithms, we first define the terms in the contingency table (Table 2) and our prediction
evaluation timeline (Figure 5). In Figure 5, for a failed well, its recorded failure is marked by the red "Rec.
Failure" box, preceded by a gap during which the failure may not be recognized and is thought to be normal; the
beginning of this box is when the field specialist detects the failure and records it in the database. The red
"Failure" box shows the date the true failure begins, the yellow boxes are the pre-signals that occur prior to the
failure, and the blue boxes are normal time, where no failure or pre-failure signals exist. A failure prediction is
true only if it is within D days
from the actual failure. This process is done for each validation well: failure wells whose failures are successfully
predicted are counted as true positives (TP); normal wells with failure alerts are counted as false positives (FP);
wells whose failures are not predicted ahead of time, or not predicted at all, are counted as false negatives (FN);
and a normal well with no failure predicted is counted as a true negative (TN).

Table 2 Contingency table for failure prediction evaluation

                                  True event            True normal
Prediction/detection: alarm       True positive (TP)    False positive (FP)
                                  (correct alarm)       (false alarm)
Prediction/detection: no alarm    False negative (FN)   True negative (TN)
                                  (missing alarm)       (correct no alarm)

Unlike our previous evaluation approaches described in [1, 8, 10], we create a validation set that is independent
of the training data and has a sufficient gap from the present, so that we know the ground truth. In our previous
approaches, even false positive predictions were difficult to establish, because we could not be certain that a
prediction was truly false - it could be a failure in the future, which is precisely the value of our methodology. By
creating a time gap big enough for us to be certain of each well's true failure/normal status, we can confidently
assess how well our algorithm works on this validation set.
Figure 5 Failure prediction evaluation timeline
It is always unwise to produce an overwhelming number of alarms. Moreover, because failures are rare
compared with the massive amount of normal data, the goal of a good prediction model should be to predict as
many failures as possible while not raising too many false positives; at the same time, we want as many of the
predicted alarms as possible to correspond to failures that really happen. We therefore use the following
evaluation criteria:

Precision: the ratio of truly predicted events to all predicted events.

precision = TP / (TP + FP)

Recall: the ratio of correctly predicted events to the number of true events.

recall = TP / (TP + FN)


These two metrics will serve as the major evaluation criteria for failure predictions.
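The per-well counting and the D-day rule can be made concrete with a short sketch; the well representation below is an assumption, and the downtime-based alert filtering described under Experiments is omitted for brevity.

from datetime import timedelta

def evaluate_wells(wells, D=100):
    """Per-well contingency counts under the D-day rule (cf. Table 2).

    wells: iterable of dicts with keys 'failure' (a date, or None for
    a normal well) and 'alerts' (list of predicted-failure dates).
    """
    tp = fp = fn = tn = 0
    for w in wells:
        failure, alerts = w["failure"], w["alerts"]
        if failure is None:
            if alerts:
                fp += 1      # alert on a normal well: false alarm
            else:
                tn += 1
        elif any(timedelta(0) <= failure - a <= timedelta(days=D)
                 for a in alerts):
            tp += 1          # some alert fell within D days before the failure
        elif alerts:
            fp += 1          # alerts exist, but the failure fell beyond D days
        else:
            fn += 1          # failure never predicted
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return tp, fp, fn, tn, precision, recall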

Experiments

To evaluate our proposed failure prediction global model, we used a dataset collected from five actual fields
containing 1,947 rod pump artificial lift systems. The training data range from 2010-01-01 to 2011-01-01, and the
validation data range from 2011-01-01 to 2012-01-01.

We set the correct prediction threshold to 100 days, meaning that if we issue a prediction within 100 days before
an actual failure, it is a true positive. If a failure prediction is made but the actual failure happens beyond 100
days, it is considered a false positive; likewise, for a normal well, any failure prediction alert is considered a false
positive. During our experiments, because some operations other than workovers, such as electrical
maintenance, may shut down the rod pump or change the trend values, we filtered out the alerts produced within
15 days of such downtime.

For the training set construction, we set the ratio to 0.8 with 50 failures, which means that the training set
contains samples from 50 failure wells as well as from 350 normal wells. We used a radial basis function kernel
for the SVM [9, 11, 12].

Table 3 Global model evaluation confusion matrix

                     True Failure   True Normal
Predicted Failure    278            145
Predicted Normal     141            1383

Table 3 shows our results for evaluating the global model, from which we can infer that our precision is 65.7% and
recall is 66.4%.

We also report the field-specific results, including per-field precision and recall, in Table 4. Field 1 has the
highest recall at 88.5%, but also a lower precision of 53.5% compared with fields 2, 3 and 4. Field 5 has the
lowest precision and recall; we discovered that, rather than failures which exhibit trends before failing, Field 5
has more sudden failures - failures caused by sudden events such as a parted rod or a split joint - than the other
fields.

Table 4 Global model evaluation confusion matrix by field

Field   Prediction   True Failure   True Normal   Precision (%)   Recall (%)
1*      Failure      54             47            53.5            88.5
        Normal       7              392
2       Failure      73             18            80.2            69.5
        Normal       32             190
3       Failure      72             25            74.2            69.9
        Normal       31             271
4       Failure      39             15            72.2            60.0
        Normal       26             193
5       Failure      40             40            50.0            47.1
        Normal       45             337

*: this field was used for demonstrating the field-specific model in [1].



Compared with the field-specific model of [1] on Field 1, which achieved 87% recall and 42% precision, the
global model actually performs better there: 1.5 percentage points higher in recall and 11.5 points higher in
precision. In general, because its generalization involves training samples from multiple fields, the global model
tends to be less aggressive than a field-specific model when predicting failures. At the same time, the global
model learns from more failures across multiple fields, which makes it adaptive to more failure signatures, and
this cross-field learning also prevents the global model from generating as many false alerts as the field-specific
model. Most importantly, the global model is scalable and can be generalized to more fields.

Figure 6 shows a good example of a successful tubing leak prediction. In the figure, we visualize four major
attributes as time series aligned by date. The bottom chart shows the downtime records in teal in the middle line,
the recorded tubing failure in red in the top line, and the failure predictions for the tubing leak in grey in the
bottom line. The model successfully began to predict a tubing leak because it recognized the failure trend
beginning in mid October 2011, and the prediction repeated two more times in January 2012. The well then truly
failed after the fourth predicted failure because of a tubing failure caused by tubing holes.


Figure 6 Good example of a successful prediction leading to a tubing leak: a signature indicating a tubing
leak began to occur in late October 2011 and recurred two more times, before being discovered as a
tubing failure in mid February 2012.


Figure 7 Sudden failure: hard to predict by trends
Because prediction relies heavily on the dynamics of the data through their trends, when there is no significant
trend, neither the global model nor a field-specific model can easily predict a failure ahead of time. Figure 7 is an
example of a sudden failure for which no clear trend can be identified by our algorithms. Even the SMEs
considered this an impossible prediction task based on these attributes, because they were in a perfectly normal
range before the failure. Such failures can only be detected rather than predicted.

Conclusions

We have presented a global model for failure prediction for rod pump artificial lift systems. Unlike our prior work,
in which model building and maintenance were expensive because of the field-specific constraint, we extend our
methodology to learn across multiple fields via an automated labeling algorithm that uses clustering and rule-
based filtering. Our results show that the global model produces results comparable to the field-specific model,
with higher precision. Rather than developing models field by field through a human-intensive and time-
consuming failure-labeling process, a single global model that automatically builds its training set with our
proposed algorithm can easily be scaled to more fields at significantly lower maintenance cost. In the near
future, this global model will be deployed in actual fields to collect performance data for validation.


References

[1] Y. Liu, K.-T. Yao, S. Liu, C. S. Raghavendra, O. Balogun and L. Olabinjo, "Semi-supervised Failure
Prediction for Oil Production Wells," in IEEE 11th International Conference on Data Mining Workshops,
Vancouver, Canada, 2011.
[2] G. Hamerly and C. Elkan, "Bayesian approaches to failure prediction for disk drives," in Proceedings of the
18th International Conference on Machine Learning (ICML), 2001.
[3] G. F. Hughes, J. F. Murray, K. Kreutz-Delgado and C. Elkan, "Improved Disk Drive Failure Warnings," IEEE
Transactions on Reliability, vol. 51, no. 3, pp. 350-357, September 2002.
[4] W.-K. Wong, A. Moore, G. Cooper and M. Wagner, "Bayesian Network Anomaly Pattern Detection for
Disease Outbreaks," in Proceedings of the Twentieth International Conference on Machine Learning, Menlo
Park, California, 2003.
[5] A. Goldenberg, G. Shmueli, R. A. Caruana and S. E. Fienberg, "Statistical challenges facing early outbreak
detection in biosurveillance," Proceedings of the National Academy of Sciences of the United States of
America, 2002.
[6] M. Scheffer, J. Bascompte, W. A. Brock, V. Brovkin, S. R. Carpenter, V. Dakos, H. Held, E. H. van Nes, M.
Rietkerk and G. Sugihara, "Early-warning signals for critical transitions," Nature, vol. 461, no. 7260, pp. 53-59,
September 2009.
[7] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann and I. H. Witten, "The WEKA Data Mining
Software: An Update," SIGKDD Explorations, vol. 11, no. 1, 2009.
[8] Y. Liu, K.-T. Yao, S. Liu, C. S. Raghavendra, L. T. Lenz, L. Olabinjo et al., "Failure Prediction for Artificial
Lift Systems," in Proceedings of the SPE (Society of Petroleum Engineers) Western Regional Meeting,
Anaheim, California, 2010.
[9] P.-H. Chen, C.-J. Lin and B. Scholkopf, "A tutorial on nu-Support Vector Machines," in Learning with Kernels,
MIT Press, 2002.
[10] S. Liu, C. S. Raghavendra, Y. Liu, K.-T. Yao, T. L. Lenz, L. Olabinjo et al., "Automatic Early Failure
Detection for Rod Pump Systems," in SPE Annual Technical Conference and Exhibition (ATCE 2011), Denver,
2011.
[11] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," 2001.
[12] T.-K. Huang, R. C. Weng and C.-J. Lin, "Generalized Bradley-Terry Models and Multi-class Probability
Estimates," Journal of Machine Learning Research, vol. 7, pp. 85-115, 2006.
[13] F. Salfner, M. Lenk and M. Malek, "A Survey of Online Failure Prediction Methods," ACM Computing
Surveys, vol. 42, no. 3, March 2010.
