Data Mining
ADD ON COURSE
DATA MINING
COURSE MATERIAL
INTRODUCTION TO DATA MINING
DATA WAREHOUSE
A data warehouse is a collection of data marts representing historical data from
different operations in the company. This data is stored in a structure optimized for
querying and data analysis. Table design, dimensions and organization
should be consistent throughout a data warehouse so that reports or queries across the data
warehouse are consistent. A data warehouse can also be viewed as a database for historical
data from different functions within a company.
The term Data Warehouse was coined by Bill Inmon in 1990, which he defined in
the following way: "A warehouse is a subject-oriented, integrated, time-variant and non-
volatile collection of data in support of management's decision making process". He defined
the terms in the sentence as follows:
Subject Oriented: Data that gives information about a particular subject instead of
about a company's ongoing operations.
Integrated: Data that is gathered into the data warehouse from a variety of sources and
merged into a coherent whole.
Time-variant: All data in the data warehouse is identified with a particular time period.
Non-volatile: Data is stable in a data warehouse. More data is added but data is never removed.
This enables management to gain a consistent picture of the business. A data warehouse is a single,
complete and consistent store of data obtained from a variety of different sources, made available to
end users in a form they can understand and use in a business context. It can be:
Used for decision support
Used to manage and control the business
Used by managers and end-users to understand the business and make judgments
Data Warehousing is an architectural construct of information systems that provides
users with current and historical decision support information that is hard to access or
present in traditional operational data stores.
Other important terminology
Data Mart: Departmental subsets that focus on selected subjects. A data mart is a segment
of a data warehouse that can provide data for reporting and analysis on a section, unit,
department or operation in the company, e.g. sales, payroll, production. Data marts are
sometimes complete individual data warehouses which are usually smaller than the
corporate data warehouse.
Drill-down: Traversing the summarization levels from highly summarized data to the
underlying current or old detail
Data warehouses are designed to perform well with aggregate queries running on
large amounts of data.
The structure of data warehouses is easier for end users to navigate, understand
and query against unlike the relational databases primarily designed to handle
lots of transactions.
Data warehouses enable queries that cut across different segments of a company's
operation. E.g. production data could be compared against inventory data even if
they were originally stored in different databases with different structures.
Queries that would be complex in very normalized databases could be easier to
build and maintain in data warehouses, decreasing the workload on transaction
systems.
Data warehousing is an efficient way to manage and report on data that is
from a variety of sources, non-uniform and scattered throughout a company.
Data warehousing is an efficient way to manage demand for lots of information
from lots of users.
Data warehousing provides the capability to analyze large amounts of historical
data for nuggets of wisdom that can provide an organization with competitive
advantage.
• Operational Data:
Focusing on transactional functions such as bank card withdrawals and
deposits
Detailed
Updateable
Reflects current data
• Informational Data:
o Focusing on providing answers to problems posed by decision makers
o Summarized
o Non-updateable
Data Warehouse Characteristics
The major distinguishing features between OLTP and OLAP are summarized as follows.
1. Users and system orientation: An OLTP system is customer-oriented and is used for
transaction and query processing by clerks, clients, and information technology professionals. An
OLAP system is market-oriented and is used for data analysis by knowledge workers, including
managers, executives, and analysts.
2. Data contents: An OLTP system manages current data that, typically, are too detailed to be
easily used for decision making. An OLAP system manages large amounts of historical data,
provides facilities for summarization and aggregation, and stores and manages information at
different levels of granularity. These features make the data easier for use in informed decision
making.
3. Database design: An OLTP system usually adopts an entity-relationship (ER) data model and
an application oriented database design. An OLAP system typically adopts either a star or
snowflake model and a subject-oriented database design.
4. View: An OLTP system focuses mainly on the current data within an enterprise or department,
without referring to historical data or data in different organizations. In contrast, an OLAP
system often spans multiple versions of a database schema. OLAP systems also deal with
information that originates from different organizations, integrating information from many data
stores. Because of their huge volume, OLAP data are stored on multiple storage media.
5. Access patterns: The access patterns of an OLTP system consist mainly of short, atomic
transactions. Such a system requires concurrency control and recovery mechanisms. However,
accesses to OLAP systems are mostly read-only operations, although many could be complex queries.
Table: Comparison between OLTP and OLAP systems.
The most popular data model for data warehouses is a multidimensional model. This model
can exist in the form of a star schema, a snowflake schema, or a fact constellation schema. Let's
have a look at each of these schema types.
Star schema: The star schema is a modeling paradigm in which the data warehouse
contains (1) a large central table (fact table), and (2) a set of smaller attendant tables
(dimension tables), one for each dimension. The schema graph resembles a starburst, with
the dimension tables displayed in a radial pattern around the central fact table.
Snowflake schema: The snowflake schema is a variant of the star schema in which some
dimension tables are normalized, thereby further splitting the data into additional tables;
the resulting schema graph forms a shape similar to a snowflake.
Fact constellation: Sophisticated applications may require multiple fact tables to share
dimension tables. This kind of schema can be viewed as a collection of stars, and hence is
called a galaxy schema or a fact constellation.
Figure: Fact constellation schema of a data warehouse for sales and shipping.
Example for Defining Star, Snowflake, and Fact Constellation Schemas
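A minimal illustrative sketch of a star schema in Python with pandas, assuming a hypothetical sales fact table and small time, item and location dimension tables (all table and column names are assumptions for the example, not the textbook's own schema definitions):

# A minimal, illustrative star-schema sketch using pandas (hypothetical tables and columns).
import pandas as pd

# Dimension tables: one small table per dimension.
dim_time = pd.DataFrame({"time_key": [1, 2], "quarter": ["Q1", "Q2"], "year": [2023, 2023]})
dim_item = pd.DataFrame({"item_key": [10, 11], "item_name": ["TV", "PC"], "category": ["electronics", "computers"]})
dim_location = pd.DataFrame({"location_key": [100, 101], "city": ["Chennai", "Mumbai"]})

# Central fact table: foreign keys to each dimension plus numeric measures.
sales_fact = pd.DataFrame({
    "time_key": [1, 1, 2, 2],
    "item_key": [10, 11, 10, 11],
    "location_key": [100, 101, 100, 101],
    "units_sold": [5, 3, 7, 2],
    "dollars_sold": [2500.0, 2100.0, 3500.0, 1400.0],
})

# A typical star-schema query: join the fact table to its dimensions, then aggregate.
cube = (sales_fact
        .merge(dim_time, on="time_key")
        .merge(dim_item, on="item_key")
        .merge(dim_location, on="location_key"))
print(cube.groupby(["quarter", "city"])["dollars_sold"].sum())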
Concept Hierarchy
A concept hierarchy defines a sequence of mappings from a set of low-level concepts to higher-
level, more general concepts. Concept hierarchies allow data to be handled at varying levels of
abstraction.
OLAP operations on multidimensional data.
1. Roll-up: The roll-up operation performs aggregation on a data cube, either by climbing up a
concept hierarchy for a dimension or by dimension reduction. Figure shows the result of a roll-up
operation performed on the central cube by climbing up the concept hierarchy for location. This
hierarchy was defined as the total order street < city < province or state < country.
2. Drill-down: Drill-down is the reverse of roll-up. It navigates from less detailed data to more
detailed data. Drill-down can be realized by either stepping-down a concept hierarchy for a
dimension or introducing additional dimensions. Figure shows the result of a drill-down operation
performed on the central cube by stepping down a concept hierarchy for time defined as day <
month < quarter < year. Drill-down occurs by descending the time hierarchy from the level of
quarter to the more detailed level of month.
3. Slice and dice: The slice operation performs a selection on one dimension of the given cube,
resulting in a subcube. Figure shows a slice operation where the sales data are selected from the
central cube for the dimension time using the criterion time = "Q2". The dice operation defines a
subcube by performing a selection on two or more dimensions.
4. Pivot (rotate): Pivot is a visualization operation which rotates the data axes in view in order
to provide an alternative presentation of the data. Figure shows a pivot operation where the item
and location axes in a 2-D slice are rotated.
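The following is a small illustrative sketch of these OLAP-style operations on a toy sales table using Python and pandas; the column names and the city < country hierarchy are assumptions for the example, not a real OLAP engine:

# Illustrative sketch of roll-up, slice, dice and pivot on a toy sales "cube" held in pandas.
import pandas as pd

sales = pd.DataFrame({
    "country": ["India", "India", "India", "India"],
    "city":    ["Chennai", "Chennai", "Mumbai", "Mumbai"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "item":    ["TV", "TV", "PC", "PC"],
    "dollars_sold": [400.0, 350.0, 600.0, 500.0],
})

# Roll-up: climb the location hierarchy from city to country (coarser aggregation).
rollup = sales.groupby(["country", "quarter"])["dollars_sold"].sum()

# Slice: select a single value on one dimension, e.g. time = "Q2".
slice_q2 = sales[sales["quarter"] == "Q2"]

# Dice: select on two or more dimensions at once.
dice = sales[(sales["quarter"] == "Q2") & (sales["city"] == "Chennai")]

# Pivot (rotate): swap the axes used for rows and columns in the presentation.
pivot = sales.pivot_table(index="item", columns="city", values="dollars_sold", aggfunc="sum")

print(rollup, slice_q2, dice, pivot, sep="\n\n")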
This subsection presents a business analysis framework for data warehouse design. The basic steps
involved in the design process are also described.
The top-down view allows the selection of relevant information necessary for the data
warehouse.
The data source view exposes the information being captured, stored and managed by
operational systems.
The data warehouse view includes fact tables and dimension tables.
Finally, the business query view is the perspective of data in the data warehouse from the
viewpoint of the end user.
The bottom tier is the warehouse database server, which is almost always a relational database
system. The middle tier is an OLAP server, which is typically implemented using either
(1) a relational OLAP (ROLAP) model or (2) a multidimensional OLAP (MOLAP) model. The top
tier is a client, which contains query and reporting tools, analysis tools, and/or data mining tools
(e.g., trend analysis, prediction, and so on).
From the architecture point of view, there are three data warehouse models: the enterprise
warehouse, the data mart, and the virtual warehouse.
Enterprise warehouse: An enterprise warehouse collects all of the information about subjects
spanning the entire organization, and can range in size from a few gigabytes to hundreds of
gigabytes, terabytes, or beyond.
Data mart: A data mart contains a subset of corporate-wide data that is of value to a
specific group of users. The scope is connected to specific, selected subjects. For example,
a marketing data mart may connect its subjects to customer, item, and sales. The data
contained in data marts tend to be summarized. Depending on the source of data, data
marts can be categorized into the following two classes:
(i) Independent data marts are sourced from data captured from one or more
operational systems or external information providers, or from data generated
locally within a particular department or geographic area.
(ii) Dependent data marts are sourced directly from enterprise data warehouses.
Virtual warehouse: A virtual warehouse is a set of views over operational databases; for
efficient query processing, only some of the possible summary views may be materialized.
In this section we discuss the four major processes of the data warehouse. They are
extract (data from the operational systems and bring it to the data warehouse), transform (the
data into the internal format and structure of the data warehouse), cleanse (to make sure it is of
sufficient quality to be used for decision making) and load (cleansed data is put into the data
warehouse).
The four processes from extraction through loading are often referred to collectively as data staging.
EXTRACT
Some of the data elements in the operational database can reasonably be expected to be useful
in decision making, but others are of less value for that purpose. For this reason, it is necessary
to extract the relevant data from the operational database before bringing it into the data warehouse.
Many commercial tools are available to help with the extraction process; Data Junction is one such
commercial product. The user of one of these tools typically has an easy-to-use windowed
interface by which to specify the following:
(i) Which files and tables are to be accessed in the source database?
(ii) Which fields are to be extracted from them? This is often done internally by
an SQL SELECT statement.
(iii) What are those to be called in the resulting database?
(iv) What is the target machine and database format of the output?
(v) On what schedule should the extraction process be repeated?
TRANSFORM
The operational databases can be developed based on any set of priorities, which keep changing
with the requirements. Therefore those who develop a data warehouse based on these databases are
typically faced with inconsistency among their data sources. The transformation process deals with
rectifying any such inconsistency.
One of the most common transformation issues is 'Attribute Naming Inconsistency'. It is common
for a given data element to be referred to by different data names in different databases.
Employee Name may be EMP_NAME in one database and ENAME in another. Thus one set of data
names is picked and used consistently in the data warehouse. Once all the data elements have the
right names, they must be converted to common formats (for example, consistent data types, date
formats, and units of measure).
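As a rough illustration of this step, the sketch below harmonizes attribute names and date formats from two hypothetical operational sources using Python and pandas (all table names, column names and formats are assumptions for the example):

# Illustrative transformation sketch: harmonize attribute names and value formats
# coming from two hypothetical operational sources.
import pandas as pd

source_a = pd.DataFrame({"EMP_NAME": ["Kristen Smith"], "HIRE_DT": ["2021-03-01"]})
source_b = pd.DataFrame({"ENAME": ["Christian Smith"], "HIREDATE": ["01/04/2021"]})

# One set of data names is picked and used consistently in the warehouse.
rename_map_a = {"EMP_NAME": "employee_name", "HIRE_DT": "hire_date"}
rename_map_b = {"ENAME": "employee_name", "HIREDATE": "hire_date"}

a = source_a.rename(columns=rename_map_a)
b = source_b.rename(columns=rename_map_b)

# Convert values to a common format (here, a single date representation).
a["hire_date"] = pd.to_datetime(a["hire_date"], format="%Y-%m-%d")
b["hire_date"] = pd.to_datetime(b["hire_date"], format="%d/%m/%Y")

staged = pd.concat([a, b], ignore_index=True)
print(staged)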
CLEANSING
Information quality is the key consideration in determining the value of the information. The
developer of the data warehouse is not usually in a position to change the quality of its underlying
historic data, though a data warehousing project can put a spotlight on data quality issues and
lead to improvements for the future. It is, therefore, usually necessary to go through the data
entered into the data warehouse and make it as error free as possible. This process is known as
data cleansing.
Data cleansing must deal with many types of possible errors. These include missing data and
incorrect data at one source, and inconsistent data and conflicting data when two or more sources
are involved. There are several algorithms for cleaning the data, which will be discussed in the
coming lecture notes.
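A minimal cleansing sketch in Python and pandas, assuming a hypothetical customer table with missing and obviously incorrect values (the columns and the cleaning rules are illustrative only, not one of the algorithms discussed later):

# Illustrative data-cleansing sketch for a single source: missing and incorrect values.
import pandas as pd

customers = pd.DataFrame({
    "name": ["A. Rao", "B. Khan", None, "C. Das"],
    "age":  [34, -1, 29, 41],          # -1 is an impossible (incorrect) value
    "city": ["Chennai", "chennai ", "Mumbai", None],
})

# Standardize text fields before any comparison.
customers["city"] = customers["city"].str.strip().str.title()

# Treat impossible ages as missing, then fill missing ages with the median age.
customers["age"] = customers["age"].astype(float)
customers.loc[~customers["age"].between(0, 120), "age"] = float("nan")
customers["age"] = customers["age"].fillna(customers["age"].median())

# Drop records that are missing a mandatory field such as the name.
cleaned = customers.dropna(subset=["name"])
print(cleaned)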
LOADING
Loading often implies physical movement of the data from the computer(s) storing the source
database(s) to the one that will store the data warehouse database, assuming they are different. This
takes place immediately after the extraction phase. The most common channel for data movement
is a high-speed communication link. For example, Oracle Warehouse Builder is a tool from Oracle
that provides the features to perform the ETL task on an Oracle data warehouse.
Data cleaning problems
This section classifies the major data quality problems to be solved by data cleaning and data
transformation. As we will see, these problems are closely related and should thus be treated in a
uniform way. Data transformations [26] are needed to support any changes in the structure,
representation or content of data. These transformations become necessary in many situations, e.g.,
to deal with schema evolution, migrating a legacy system to a new information system, or when
multiple data sources are to be integrated. As shown in Fig. 2 we roughly distinguish between
single-source and multi-source problems and between schema- and instance-related problems.
Schema-level problems of course are also reflected in the instances; they can be addressed at the
schema level by an improved schema design (schema evolution), schema translation and schema
integration. Instance-level problems, on the other hand, refer to errors and inconsistencies in the
actual data contents which are not visible at the schema level. They are the primary focus of data
cleaning. Fig. 2 also indicates some typical problems for the various cases. While not shown in
Fig. 2, the single-source problems occur (with increased likelihood) in the multi-source case, too,
besides specific multi-source problems.
Single-source problems
The data quality of a source largely depends on the degree to which it is governed by schema and
integrity constraints controlling permissible data values. For sources without schema, such as files,
there are few restrictions on what data can be entered and stored, giving rise to a high probability
of errors and inconsistencies. Database systems, on the other hand, enforce restrictions of a specific
data model (e.g., the relational approach requires simple attribute values, referential integrity, etc.)
as well as application-specific integrity constraints. Schema-related data quality problems thus
occur because of the lack of appropriate model-specific or application-specific integrity
constraints, e.g., due to data model limitations or poor schema design, or because only a few
integrity constraints were defined to limit the overhead for integrity control. Instance-specific
problems relate to errors and inconsistencies that cannot be prevented at the schema level (e.g.,
misspellings).
For both schema- and instance-level problems we can differentiate different problem scopes:
attribute (field), record, record type and source; examples for the various cases are shown in Tables
1 and 2. Note that uniqueness constraints specified at the schema level do not prevent duplicated
instances, e.g., if information on the same real world entity is entered twice with different attribute
values (see example in Table 2).
Multi-source problems
The problems present in single sources are aggravated when multiple sources need to be integrated.
Each source may contain dirty data and the data in the sources may be represented differently,
overlap or contradict. This is because the sources are typically developed, deployed and maintained
independently to serve specific needs. This results in a large degree of heterogeneity w.r.t. data
management systems, data models, schema designs and the actual data.
At the schema level, data model and schema design differences are to be addressed by the
steps of schema translation and schema integration, respectively. The main problems w.r.t.
schema design are naming and structural conflicts. Naming conflicts arise when the same name
is used for different objects (homonyms) or different names are used for the same object
(synonyms). Structural conflicts occur in many variations and refer to different representations of
the same object in different sources, e.g., attribute vs. table representation, different component
structure, different data types, different integrity constraints, etc.
In addition to schema-level conflicts, many conflicts appear only at the instance level (data
conflicts). All problems from the single-source case can occur with different representations in
different sources (e.g., duplicated records, contradicting records,…). Furthermore, even when
there are the same attribute names and data types, there may be different value representations
(e.g., for marital status) or different interpretation of the values (e.g., measurement units Dollar vs.
Euro) across sources. Moreover, information in the sources may be provided at different
aggregation levels (e.g., sales per product vs. sales per product group) or refer to different points in
time (e.g. current sales as of yesterday for source 1 vs. as of last week for source 2).
A main problem for cleaning data from multiple sources is to identify overlapping data, in
particular matching records referring to the same real-world entity (e.g., customer). This problem
is also referred to as the object identity problem, duplicate elimination or the merge/purge problem.
Frequently, the information is only partially redundant and the sources may complement each other
by providing additional information about an entity. Thus duplicate information should be purged
out and complementing information should be consolidated and merged in order to achieve a
consistent view of real world entities.
The two sources in the example of Fig. 3 are both in relational format but exhibit schema and data
conflicts. At the schema level, there are name conflicts (synonyms Customer/Client, Cid/Cno,
Sex/Gender) and structural conflicts (different representations for names and addresses). At the
instance level, we note that there are different gender representations ("0"/"1" vs. "F"/"M") and
presumably a duplicate record (Kristen Smith). The latter observation also reveals that while
Cid/Cno are both source-specific identifiers, their contents are not comparable
between the sources; different numbers (11/493) may refer to the same person while different
persons can have the same number (24). Solving these problems requires both schema integration
and data cleaning; the third table shows a possible solution. Note that the schema conflicts should
be resolved first to allow data cleaning, in particular detection of duplicates based on a uniform
representation of names and addresses, and matching of the Gender/Sex values.
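The sketch below mirrors this example in Python and pandas: it renames synonym attributes to one target schema, maps the different gender codings to a single representation, and flags likely duplicates; the matching rule and all names and codes are assumptions for illustration:

# Illustrative multi-source integration sketch for the Customer/Client example.
import pandas as pd

customer = pd.DataFrame({"Cid": [11], "Name": ["Kristen Smith"], "Sex": ["0"]})
client   = pd.DataFrame({"Cno": [493], "ClientName": ["Smith, Kristen"], "Gender": ["F"]})

# Schema integration: rename synonym attributes to one target schema.
c1 = customer.rename(columns={"Cid": "source_id", "Name": "name", "Sex": "gender"})
c2 = client.rename(columns={"Cno": "source_id", "ClientName": "name", "Gender": "gender"})

# Data cleaning: map different value representations ("0"/"1" vs. "F"/"M") to one coding.
c1["gender"] = c1["gender"].map({"0": "F", "1": "M"})

# Normalize names ("Last, First" -> "First Last") so duplicates become comparable.
def normalize(name: str) -> str:
    if "," in name:
        last, first = [p.strip() for p in name.split(",", 1)]
        return f"{first} {last}".lower()
    return name.strip().lower()

merged = pd.concat([c1, c2], ignore_index=True)
merged["match_key"] = merged["name"].map(normalize) + "|" + merged["gender"]

# Records sharing a match key are candidate duplicates for merge/purge.
print(merged[merged.duplicated("match_key", keep=False)])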
Definition of transformation workflow and mapping rules: Depending on the number of data
sources, their degree of heterogeneity and the "dirtiness" of the data, a large number of data
transformation and cleaning steps may have to be executed. Sometimes, a schema translation is
used to map sources to a common data model; for data warehouses, typically a relational
representation is used. Early data cleaning steps can correct single-source instance problems and
prepare the data for integration. Later steps deal with schema/data integration and cleaning multi-
source instance problems, e.g., duplicates.
For data warehousing, the control and data flow for these transformation and cleaning steps should
be specified within a workflow that defines the ETL process.
The schema-related data transformations as well as the cleaning steps should be specified
by a declarative query and mapping language as far as possible, to enable automatic generation of
the transformation code. In addition, it should be possible to invoke user-written cleaning code and
special purpose tools during a data transformation workflow. The transformation steps may request
user feedback on data instances for which they have no built-in cleaning logic.
Transformation: Execution of the transformation steps either by running the ETL workflow for
loading and refreshing a data warehouse or during answering queries on multiple sources.
Backflow of cleaned data: After (single-source) errors are removed, the cleaned data should also
replace the dirty data in the original sources in order to give legacy applications the improved data
too and to avoid redoing the cleaning work for future data extractions. For data warehousing, the
cleaned data is available from the data staging area (Fig. 1).
Data analysis
Metadata reflected in schemas is typically insufficient to assess the data quality of a source,
especially if only a few integrity constraints are enforced. It is thus important to analyse the actual
instances to obtain real (reengineered) metadata on data characteristics or unusual value patterns.
This metadata helps finding data quality problems. Moreover, it can effectively contribute to
identify attribute correspondences between source schemas (schema matching), based on which
automatic data transformations can be derived.
There are two related approaches for data analysis, data profiling and data mining. Data
profiling focuses on the instance analysis of individual attributes. It derives information such as
the data type, length, value range, discrete values and their frequency, variance, uniqueness,
occurrence of null values, typical string patterns (e.g., for phone numbers), etc., providing an exact
view of various quality aspects of the attribute.
Table: Examples of how this metadata can help in detecting data quality problems.
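A small data-profiling sketch in Python and pandas, assuming a hypothetical orders table; it derives per-attribute metadata (type, null counts, distinct values, value range) and a simple pattern check for phone numbers:

# Illustrative data-profiling sketch: per-attribute metadata that reveals quality problems.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 3, 3],
    "phone": ["044-1234", "123", None, "044-9876"],
    "amount": [100.0, 250.0, -5.0, 300.0],
})

profile = pd.DataFrame({
    "dtype": orders.dtypes.astype(str),
    "nulls": orders.isna().sum(),
    "distinct": orders.nunique(),
    "min": orders.min(numeric_only=True),
    "max": orders.max(numeric_only=True),
})
print(profile)

# Simple pattern check, e.g. for phone numbers, flags suspicious string values.
phones = orders["phone"].dropna()
print(phones[~phones.str.match(r"^\d{3}-\d{4}$")])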
Metadata repository
Metadata are data about data. When used in a data warehouse, metadata are the data that define
warehouse objects. Metadata are created for the data names and definitions of the given
warehouse. Additional metadata are created and captured for time stamping any extracted data,
the source of the extracted data, and missing fields that have been added by data cleaning or
integration processes. A metadata repository should contain:
A description of the structure of the data warehouse. This includes the warehouse schema,
view, dimensions, hierarchies, and derived data definitions, as well as data mart locations
and contents;
Operational metadata, which include data lineage (history of migrated data and the
sequence of transformations applied to it), currency of data (active, archived, or purged),
and monitoring information (warehouse usage statistics, error reports, and audit trails);
The algorithms used for summarization, which include measure and dimension definition
algorithms, data on granularity, partitions, subject areas, aggregation, summarization, and
predefined queries and reports;
The mapping from the operational environment to the data warehouse, which includes
source databases and their contents, gateway descriptions, data partitions, data extraction,
cleaning, transformation rules and defaults, data refresh and purging rules, and security
(user authorization and access control).
Data related to system performance, which include indices and profiles that improve data
access and retrieval performance, in addition to rules for the timing and scheduling of
refresh, update, and replication cycles; and
Business metadata, which include business terms and definitions, data ownership
information, and charging policies.
1. Relational OLAP (ROLAP)
Stores warehouse data in relational or extended-relational tables; offers greater scalability.
2. Multidimensional OLAP (MOLAP)
Array-based multidimensional storage engine (sparse matrix techniques)
Cube Operation
Computation
Partition arrays into chunks (a small subcube which fits in memory).
Compressed sparse array addressing: (chunk_id, offset)
Compute aggregates in "multiway" fashion by visiting cube cells in the order which
minimizes the number of times each cell is visited, reducing memory access and storage
cost.
The bitmap indexing method is popular in OLAP products because it allows quick searching in
data cubes.
The bitmap index is an alternative representation of the record ID (RID) list. In the bitmap
index for a given attribute, there is a distinct bit vector, Bv, for each value v in the domain of the
attribute. If the domain of a given attribute consists of n values, then n bits are needed for each
entry in the bitmap index.
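A toy sketch of a bitmap index in Python (not a real OLAP product): one bit vector per distinct attribute value, with bit i set when row i carries that value; queries then reduce to bitwise operations:

# Illustrative bitmap-index sketch: one bit vector Bv per distinct value v of an attribute.
from collections import defaultdict

rows = ["Chennai", "Mumbai", "Chennai", "Delhi", "Mumbai"]  # values of one attribute, by RID

bitmap = defaultdict(lambda: [0] * len(rows))
for rid, value in enumerate(rows):
    bitmap[value][rid] = 1

print(dict(bitmap))
# {'Chennai': [1, 0, 1, 0, 0], 'Mumbai': [0, 1, 0, 0, 1], 'Delhi': [0, 0, 0, 1, 0]}

# Queries become fast bit operations, e.g. "city = Chennai OR city = Delhi":
combined = [a | b for a, b in zip(bitmap["Chennai"], bitmap["Delhi"])]
print(combined)  # [1, 0, 1, 1, 0]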
The join indexing method gained popularity from its use in relational database query
processing. Traditional indexing maps the value in a given column to a list of rows having that
value. In contrast, join indexing registers the joinable rows of two relations from a relational
database. For example, if two relations R(RID, A) and S(B, SID) join on the attributes A and B,
then the join index record contains the pair (RID, SID), where RID and SID are record identifiers
from the R and S relations, respectively.
1. Determine which operations should be performed on the available cuboids. This involves
transforming any selection, projection, roll-up (group-by) and drill-down operations specified in
the query into corresponding SQL and/or OLAP operations. For example, slicing and dicing of a
data cube may correspond to selection and/or projection operations on a materialized cuboid.
2. Determine to which materialized cuboid(s) the relevant operations should be applied. This
involves identifying all of the materialized cuboids that may potentially be used to answer the
query.
1. Information processing
2. Analytical processing: supports multidimensional analysis of data warehouse data
3. Data mining: supports knowledge discovery from hidden patterns
Note:
The move from on-line analytical processing (OLAP) to on-line analytical mining (OLAM) is
often described as moving from data warehousing to data mining.
On-Line Analytical Mining (OLAM) (also called OLAP mining), which integrates on- line
analytical processing (OLAP) with data mining and mining knowledge in multidimensional
databases, is particularly important for the following reasons.
Comprehensive information processing and data analysis infrastructures have been or will
be systematically constructed surrounding data warehouses, which include accessing, integration,
consolidation, and transformation of multiple, heterogeneous databases,
ODBC/OLEDB connections, Web-accessing and service facilities, reporting and OLAP analysis
tools.
Effective data mining needs exploratory data analysis. A user will often want to traverse
through a database, select portions of relevant data, analyze them at different granularities, and
present knowledge/results in different forms. On-line analytical mining provides facilities for data
mining on different subsets of data and at different levels of abstraction, by drilling, pivoting,
filtering, dicing and slicing on a data cube and on some intermediate data mining results.
4. On-line selection of data mining functions.
By integrating OLAP with multiple data mining functions, on-line analytical mining
provides users with the flexibility to select desired data mining functions and swap data mining
tasks dynamically.
A metadata directory is used to guide the access of the data cube. The data cube can be
constructed by accessing and/or integrating multiple databases and/or by filtering a data warehouse
via a Database API which may support OLEDB or ODBC connections. Since an OLAM engine
may perform multiple data mining tasks, such as concept description, association, classification,
prediction, clustering, time-series analysis, etc., it usually consists of multiple, integrated data
mining modules and is more sophisticated than an OLAP engine.
The major reason that data mining has attracted a great deal of attention in the information
industry in recent years is the wide availability of huge amounts of data and the
imminent need for turning such data into useful information and knowledge. The
information and knowledge gained can be used for applications ranging from business
management, production control, and market analysis, to engineering design and science
exploration.
Database Management Systems
(1970s-early 1980s)
1) Hierarchical and network database systems
2) Relational database systems
3) Data modeling tools: entity-relationship models, etc.
4) Indexing and access methods: B-trees, hashing, etc.
5) Query languages: SQL, etc.; user interfaces, forms and reports
6) Query processing and query optimization
7) Transactions, concurrency control and recovery
8) Online transaction processing (OLTP)
Data mining: on what kind of data? / Describe the following advanced database
systems and applications: object-relational databases, spatial databases, text
databases, multimedia databases, the World Wide Web.
In principle, data mining should be applicable to any kind of information repository. This
includes relational databases, data warehouses, transactional databases, advanced
database systems,
flat files, and the World-Wide Web. Advanced database systems include object-oriented
and object-relational databases, and specific application-oriented databases, such as spatial
databases, time-series databases, text databases, and multimedia databases.
Flat files: Flat files are actually the most common data source for data mining algorithms,
especially at the research level. Flat files are simple data files in text or binary format with a
structure known by the data mining algorithm to be applied. The data in these files can be
transactions, time-series data, scientific measurements, etc.
The most commonly used query language for relational databases is SQL, which allows
retrieval and manipulation of the data stored in the tables, as well as the calculation of
aggregate functions such as average, sum, min, max and count. For instance, an SQL query
could select the videos grouped by category and count how many fall in each group.
Data mining algorithms using relational databases can be more versatile than data mining
algorithms specifically written for flat files, since they can take advantage of the structure
inherent to relational databases. While data mining can benefit from SQL for data selection,
transformation and consolidation, it goes beyond what SQL could provide, such as
predicting, comparing, detecting deviations, etc.
Data warehouses
A data warehouse is a repository of information collected from multiple sources, stored
under a unified schema, and which usually resides at a single site. Data warehouses are
constructed via a process of data cleansing, data transformation, data integration, data
loading, and periodic data refreshing. The figure shows the basic architecture of a data
warehouse.
In order to facilitate decision making, the data in a data warehouse are organized around
major subjects, such as customer, item, supplier, and activity. The data are stored to
provide information from a historical perspective and are typically summarized.
Transactional databases
In general, a transactional database consists of a flat file where each record represents a
transaction. A transaction typically includes a unique transaction identity number (trans ID),
and a list of the items making up the transaction (such as items purchased in a store) as
shown below:
SALES
Trans-ID    List of item_IDs
T100        I1, I3, I8
...         ...
• Time-Series Databases: Time-series databases contain time-related data such as stock
market data or logged activities. These databases usually have a continuous flow of
new data coming in, which sometimes creates the need for challenging real-time
analysis. Data mining in such databases commonly includes the study of trends and
correlations between the evolutions of different variables, as well as the prediction of trends and
movements of the variables in time.
• A text database is a database that contains text documents or other word descriptions in
the form of long sentences or paragraphs, such as product specifications, error or bug
reports, warning messages, summary reports, notes, or other documents.
• A multimedia database stores images, audio, and video data, and is used in applications
such as picture content-based retrieval, voice-mail systems, video-on-demand systems, the
World Wide Web, and speech-based user interfaces.
• The World-Wide Web provides rich, world-wide, on-line information services, where
data objects are linked together to facilitate interactive access. Some examples of
distributed information services associated with the World-Wide Web include America
Online, Yahoo!, AltaVista, and Prodigy.
Data mining functionalities are used to specify the kind of patterns to be found in
data mining tasks. In general, data mining tasks can be classified into two categories:
Descriptive
predictive
Descriptive mining tasks characterize the general properties of the data in the database.
Predictive mining tasks perform inference on the current data in order to make
predictions.
Describe data mining functionalities and the kinds of patterns they can discover
(or)
Define each of the following data mining functionalities: characterization, discrimination,
association and correlation analysis, classification, prediction, clustering, and evolution
analysis. Give examples of each data mining functionality, using a real-life database that
you are familiar with.
Concept/class description: characterization and discrimination
Data can be associated with classes or concepts. It describes a given set of data in a concise
and summarative manner, presenting interesting general properties of the data. These
descriptions can be derived via
Data characterization
It is a summarization of the general characteristics or features of a target class of data.
Example:
A data mining system should be able to produce a description summarizing the
characteristics of a student who has obtained more than 75% in every semester; the result
could be a general profile of the student.
Data Discrimination is a comparison of the general features of target class data objects with
the general features of objects from one or a set of contrasting classes.
Example
The general features of students with high GPAs may be compared with the general
features of students with low GPAs. The resulting description could be a general
comparative profile of the students, such as: 75% of the students with high GPAs are fourth-
year computing science students, while 65% of the students with low GPAs are not.
Association analysis
Association analysis is the discovery of association rules showing attribute-value conditions that
occur frequently together in a given set of data. For example, a data mining system may
find association rules of the form X => Y, indicating that transactions containing the items in X
tend also to contain the items in Y.
Example:
A grocery store retailer wants to decide whether to put bread on sale. To help determine the impact
of this decision, the retailer generates association rules that show what other products are
frequently purchased with bread. He finds that 60% of the time that bread is sold so are pretzels,
and that 70% of the time jelly is also sold. Based on these facts, he tries to capitalize on
the association between bread, pretzels, and jelly by placing some pretzels and jelly at the
end of the aisle where the bread is placed. In addition, he decides not to place either of
these items on sale at the same time.
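The sketch below computes the support and confidence of a candidate rule bread => jelly over a handful of made-up transactions in Python; the numbers are illustrative and do not reproduce the retailer's figures above:

# Illustrative sketch: support and confidence of a candidate rule bread => jelly.
transactions = [
    {"bread", "jelly", "pretzels"},
    {"bread", "jelly"},
    {"bread", "milk"},
    {"milk", "pretzels"},
    {"bread", "jelly", "milk"},
]

n = len(transactions)
bread = sum(1 for t in transactions if "bread" in t)
bread_and_jelly = sum(1 for t in transactions if {"bread", "jelly"} <= t)

support = bread_and_jelly / n          # fraction of all transactions with bread AND jelly
confidence = bread_and_jelly / bread   # of transactions with bread, fraction that also have jelly

print(f"support = {support:.2f}, confidence = {confidence:.2f}")
# support = 0.60, confidence = 0.75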
Classification:
Classification can be defined as the process of finding a model (or function) that describes
and distinguishes data classes or concepts, for the purpose of being able to use the model to
predict the class of objects whose class label is unknown. The derived model is based on
the analysis of a set of training data (i.e., data objects whose class label is known).
Example:
An airport security screening station is used to determine if passengers are potential terrorists
or criminals. To do this, the face of each passenger is scanned and its basic
pattern (distance between eyes, size and shape of mouth, head, etc.) is identified. This
pattern is compared to entries in a database to see if it matches any patterns that are
associated with known offenders.
The derived model may be represented in various forms, such as:
1) IF-THEN rules,
2) Decision trees,
3) Neural networks.
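As an illustration only, the following Python sketch trains a small decision tree with scikit-learn on made-up training data and uses it to predict the class of a new object; the library choice, feature names and labels are assumptions, not part of the course text:

# Illustrative classification sketch: learn a model from labeled training data and
# use it to predict the class of an object whose label is unknown.
from sklearn.tree import DecisionTreeClassifier, export_text

# Training data: [age, income_level] with known class labels.
X_train = [[25, 1], [45, 3], [35, 2], [50, 3], [23, 1], [40, 2]]
y_train = ["low_risk", "high_risk", "low_risk", "high_risk", "low_risk", "high_risk"]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)

# The derived model can be inspected as IF-THEN style rules (here, a tree printout).
print(export_text(model, feature_names=["age", "income_level"]))

# Predict the class label of an object whose class label is unknown.
print(model.predict([[30, 2]]))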
Prediction:
Prediction finds some missing or unavailable data values rather than class labels. Although
prediction may refer to both data value prediction and class label prediction, it is usually
confined to data value prediction and thus is distinct from classification. Prediction also
encompasses the identification of distribution trends based on the available data.
Example:
Predicting flooding is a difficult problem. One approach uses monitors placed at various
points in the river. These monitors collect data relevant to flood prediction: water level,
rain amount, time, humidity, etc. The water levels at a potential flooding point in the river
can then be predicted based on the data collected by the sensors upriver from this point. The
prediction must be made with respect to the time the data were collected.
Classification differs from prediction in that the former is to construct a set of models (or
functions) that describe and distinguish data class or concepts, whereas the latter is to
predict some missing or unavailable, and often numerical, data values. Their similarity is
that they are both tools for prediction: Classification is used for predicting the class label of
data objects and prediction is typically used for predicting missing numerical data values.
Clustering analysis
The objects are clustered or grouped based on the principle of maximizing the intraclass
similarity and minimizing the interclass similarity.
Example:
A certain national department store chain creates special catalogs targeted to various
demographic groups based on attributes such as income, location and physical
characteristics of potential customers (age, height, weight, etc.). To determine the target
mailings of the various catalogs and to assist in the creation of new, more specific catalogs,
the company performs a clustering of potential customers based on the determined
attribute values. The results of the clustering exercise are then used by management to
create special catalogs and distribute them to the correct target population based on the
cluster for that catalog.
Outlier analysis: A database may contain data objects that do not comply with the general
model of the data. These data objects are outliers. In other words, data objects which do not
fall within any cluster are called outlier data objects. Noisy or exceptional data
are also called outlier data. The analysis of outlier data is referred to as outlier mining.
Example
Outlier analysis may uncover fraudulent usage of credit cards by detecting purchases
of extremely large amounts for a given account number in comparison to regular charges
incurred by the same account. Outlier values may also be detected with respect to the
location and type of purchase, or the purchase frequency.
Data evolution analysis describes and models regularities or trends for objects whose
behavior changes over time.
Example:
The results data of the last several years of a college would give an idea of the quality
of the graduates produced by it.
Correlation analysis
Correlation analysis is a technique use to measure the association between two variables.
A correlation coefficient (r) is a statistic used for measuring the strength of a supposed
linear association between two variables. Correlations range from -1.0 to +1.0 in value.
A correlation coefficient of 1.0 indicates a perfect positive relationship in which high values
of one variable are related perfectly to high values in the other variable, and conversely, low
values on one variable are perfectly related to low values on the other variable.
A correlation coefficient of 0.0 indicates no relationship between the two variables. That
is, one cannot use the scores on one variable to tell anything about the scores on the second
variable.
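A short Python sketch that computes the Pearson correlation coefficient r for two made-up variables, following the definition above:

# Illustrative sketch: compute the Pearson correlation coefficient r for two variables.
import math

x = [2.0, 4.0, 6.0, 8.0, 10.0]
y = [3.1, 5.0, 7.2, 8.9, 11.1]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))

r = cov / (sd_x * sd_y)  # ranges from -1.0 to +1.0
print(round(r, 3))       # close to +1: a strong positive linear association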
Answer:
• Discrimination differs from classification in that the former refers to a comparison of the
general features of target class data objects with the general features of objects from one or
a set of contrasting classes, while the latter is the process of finding a set of models (or
functions) that describe and distinguish data classes or concepts for the purpose of being
able to use the model to predict the class of objects whose class label is unknown.
Discrimination and classification are similar in that they both deal with the analysis of class
data objects.
• Characterization differs from clustering in that the former refers to a summarization of the
general characteristics or features of a target class of data, while the latter deals with the
analysis of data objects without consulting a known class label. This pair of tasks is
similar in that they both deal with grouping together objects or data that are related.
• Classification differs from prediction in that the former is the process of finding a set of
models (or functions) that describe and distinguish data classes or concepts, while the latter
predicts missing or unavailable, and often numerical, data values. This pair of tasks is
similar in that they both are tools for prediction: classification is used for predicting the class
label of data objects and prediction is typically used for predicting missing numerical data
values.
(4) Novel.
A pattern is also interesting if it validates a hypothesis that the user sought to confirm.
An interesting pattern represents knowledge.
There are many data mining systems available or being developed. Some are specialized
systems dedicated to a given data source or are confined to limited data mining
functionalities, others are more versatile and comprehensive. Data mining systems can be
categorized according to various criteria; among other classifications are the following:
· Classification according to mining techniques used: Data mining systems employ and
provide different techniques. This classification categorizes data mining systems according
to the data analysis approach used such as machine learning, neural networks, genetic
algorithms, statistics, visualization, database oriented or data warehouse-oriented, etc. The
classification can also take into account the degree of user interaction involved in the data
mining process such as query-driven systems, interactive exploratory systems, or
autonomous systems. A comprehensive system would provide a wide variety of data
mining techniques to fit different situations and options, and offer different degrees of user
interaction.
• Task-relevant data: This primitive specifies the data upon which mining is to
be performed. It involves specifying the database and tables or data warehouse containing
the relevant data, conditions for selecting the relevant data, the relevant attributes or
dimensions for exploration, and instructions regarding the ordering or grouping of the data
retrieved.
• Knowledge type to be mined: This primitive specifies the specific data mining function
to be performed, such as characterization, discrimination, association, classification,
clustering, or evolution analysis. As well, the user can be more specific and provide pattern
templates that all discovered patterns must match. These templates or meta patterns (also
called meta rules or meta queries), can be used to guide the discovery process.
• Background knowledge: This primitive allows users to specify knowledge they have
about the domain to be mined. Such knowledge can be used to guide the knowledge
discovery process and evaluate the patterns that are found. Of the several kinds of
background knowledge, this chapter focuses on concept hierarchies.
• Pattern interestingness measure: This primitive allows users to specify functions that
are used to separate uninteresting patterns from knowledge and may be used to guide the
mining process, as well as to evaluate the discovered patterns. This allows the user
to confine the number of uninteresting patterns returned by the process, as a data mining
process may generate a large number of patterns. Interestingness measures can be
specified for such pattern characteristics as simplicity, certainty, utility and novelty.
The differences between the following architectures for the integration of a data mining
system with a database or data warehouse system are as follows.
• No coupling:
The data mining system uses sources such as flat files to obtain the initial data set to
be mined since no database system or data warehouse system functions are implemented
as part of the process. Thus, this architecture represents a poor design choice.
• Loose coupling:
The data mining system is not integrated with the database or data warehouse system beyond
their use as the source of the initial data set to be mined, and possible use in storage
of the results. Thus, this architecture can take advantage of the flexibility, efficiency and
features such as indexing that the database and data warehousing systems may provide.
However, it is difficult for loose coupling to achieve high scalability and good performance
with large data sets, as many such systems are memory-based.
• Semi-tight coupling:
Some of the data mining primitives, such as aggregation, sorting or precomputation of
statistical functions, are efficiently implemented in the database or data warehouse system,
for use by the data mining system during mining-query processing. Also, some frequently
used intermediate mining results can be precomputed and stored in the database or data
warehouse system, thereby enhancing the performance of the data mining system.
• Tight coupling:
The database or data warehouse system is fully integrated as part of the data mining system
and thereby provides optimized data mining query processing. Thus, the data mining subsystem
is treated as one functional component of an information system. This is a highly
desirable architecture as it facilitates efficient implementations of data mining functions,
high system performance, and an integrated information processing environment.
From the descriptions of the architectures provided above, it can be seen that tight coupling
is the best alternative, without regard to technical or implementation issues. However, as
much of the technical infrastructure needed in a tightly coupled system is still evolving,
implementation of such a system is non-trivial. Therefore, the most popular architecture
is currently semi-tight coupling, as it provides a compromise between loose and tight
coupling.
Major issues in data mining
_ Mining different kinds of knowledge in databases: Since different users can be interested
in different kinds of knowledge, data mining should cover a wide spectrum of data analysis
and knowledge discovery tasks, including data characterization, discrimination, association,
classification, clustering, trend and deviation analysis, and similarity analysis. These tasks
may use the same database in different ways and require the development of numerous
data mining techniques.
_ Data mining query languages and ad-hoc data mining: Data mining query languages need
to be developed, analogous to relational query languages (such as SQL), to allow users to
pose ad-hoc mining queries.
_ Handling outlier or incomplete data: The data stored in a database may reflect outliers:
noise, exceptional cases, or incomplete data objects. These objects may confuse the analysis
process, causing overfitting of the data to the knowledge model constructed. As a result,
the accuracy of the discovered patterns can be poor. Data cleaning methods and data
analysis methods which can handle outliers are required.
_ Handling of relational and complex types of data: Since relational databases and
data warehouses are widely used, the development of efficient and effective data mining
systems for such data is important.
Data preprocessing
Data preprocessing describes any type of processing performed on raw data to prepare it for
another processing procedure. Commonly used as a preliminary data mining practice,data
preprocessing transforms the data into a format that will be more easily and effectively
processed for the purpose of the user.
If there is no quality data, then there will be no quality mining results; quality decisions are
always based on quality data.
If there is much irrelevant and redundant information present, or noisy and
unreliable data, then knowledge discovery during the training phase is more difficult.
Incomplete data: lacking attribute values, lacking certain attributes of interest, or containing
only aggregate data, e.g., occupation = "".
o Data cleaning
o Data integration
o Data transformation
o Data reduction
Obtains reduced representation in volume but produces the same or similar analytical
results
Data discretization
Part of data reduction but with particular importance, especially for numerical data
In other words, in many real-life situations, it is helpful to describe data by a single number
that is most representative of the entire collection of numbers. Such a number is called a
measure of central tendency. The most commonly used measures are as follows: mean,
median, and mode.
Mean: The mean, or average, of n numbers is the sum of the numbers divided by n. That is:
mean = (x1 + x2 + ... + xn) / n
Example 1
The marks of seven students in a mathematics test with a maximum possible mark of 20
are given below:
15 13 18 16 14 17 12
Find the mean of this set of data values.
Solution:
Mean = (15 + 13 + 18 + 16 + 14 + 17 + 12) / 7 = 105 / 7 = 15
Midrange
The midrange of a data set is the average of the minimum and maximum values.
Median: The median of n numbers is the middle number when the numbers are written in order.
If n is even, the median is the average of the two middle numbers.
Example 2
The marks of nine students in a geography test that had a maximum possible mark of 50
are given below:
47 35 37 32 38 39 36 34 35
Solution:
Arranging the values in order gives 32 34 35 35 36 37 38 39 47. Since the number of values (9)
is odd, the median is the middle (fifth) value, which is 36.
In general:
If the number of values in the data set is even, then the median is the average of the two middle values.
Example 3
Solution:
Arrange the data values in order from the lowest value to the highest
value: 10 12 13 16 17 18 19 21
The number of values in the data set is 8, which is even. So, the median is the average
of the two middle values: (16 + 17) / 2 = 16.5.
Trimmed mean
A trimmed mean eliminates the extreme observations by removing observations from
each end of the ordered sample. It is calculated by discarding a certain percentage of the
lowest and the highest scores and then computing the mean of the remaining scores.
Mode: The mode of a set of numbers is the number that occurs most frequently. If two numbers
tie for most frequent occurrence, the collection has two modes and is called bimodal.
The mode has applications in printing. For example, it is important to print more of the
most popular books, because printing different books in equal numbers would cause a
shortage of some books and an oversupply of others.
Likewise, the mode has applications in manufacturing. For example, it is important to
manufacture more of the most popular shoes; because manufacturing different shoes in
equal numbers would cause a shortage of some shoes and an oversupply of others.
Example 4
Find the mode of the following data set:
48 44 48 45 42 49 48
Solution:
The mode is 48, since it occurs most frequently (three times) in the data set.
It is possible for a set of data values to have more than one mode.
If there are two data values that occur most frequently, we say that the set of data
values is bimodal.
If there are three data values that occur most frequently, we say that the set of data
values is trimodal.
If two or more data values occur most frequently, we say that the set of data values
is multimodal.
If no data value occurs more frequently than the others, we say that the set of data
values has no mode.
The mean, median and mode of a data set are collectively known as measures of central
tendency as these three measures focus on where the data is centered or clustered. To
analyze data using the mean, median and mode, we need to use the most appropriate
measure of central tendency. The following points should be remembered:
The mean is useful for predicting future results when there are no extreme values
in the data set. However, the impact of extreme values on the mean may be
important and should be considered. E.g. the impact of a stock market crash on
average investment returns.
The median may be more useful than the mean when there are extreme
values in the data set as it is not affected by the extreme values.
The mode is useful when the most common item, characteristic or value of a
data set is required.
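The measures above can be computed directly; a short Python sketch using the standard statistics module, reusing the marks from Example 1 and the data from Example 4 (the 20% trimming proportion is an arbitrary choice for illustration):

# Illustrative sketch: measures of central tendency with Python's statistics module.
import statistics

marks = [15, 13, 18, 16, 14, 17, 12]

mean = statistics.mean(marks)                       # 15
median = statistics.median(marks)                   # 15 (middle value of the sorted list)
mode = statistics.multimode([48, 44, 48, 45, 42, 49, 48])   # [48], as in Example 4
midrange = (min(marks) + max(marks)) / 2            # (12 + 18) / 2 = 15

# A 20% trimmed mean discards the lowest and highest 10% of the ordered values.
def trimmed_mean(values, proportion=0.2):
    ordered = sorted(values)
    k = int(len(ordered) * proportion / 2)          # number of values to drop from each end
    return statistics.mean(ordered[k:len(ordered) - k]) if k else statistics.mean(ordered)

print(mean, median, mode, midrange, trimmed_mean(marks))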
Measures of Dispersion
Measures of dispersion measure how spread out a set of data is. The two most commonly
used measures of dispersion are the variance and the standard deviation. Rather than
showing how data are similar, they show how the data vary, i.e., their spread or dispersion.
Other measures of dispersion that may be encountered include the quartiles, the interquartile
range (IQR), the five-number summary, the range, and box plots.
Variance and Standard Deviation
Very different sets of numbers can have the same mean. You will now study two measures
of dispersion, which give you an idea of how much the numbers in a set differ from the
mean of the set. These two measures are called the variance of the set and the
standard deviation of the set.
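A short Python sketch computing the (population) variance and standard deviation of the marks from Example 1:

# Illustrative sketch: population variance and standard deviation (divide by n).
import math

data = [15, 13, 18, 16, 14, 17, 12]
n = len(data)
mean = sum(data) / n

variance = sum((x - mean) ** 2 for x in data) / n
std_dev = math.sqrt(variance)

print(round(variance, 2), round(std_dev, 2))  # 4.0 and 2.0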
Percentile
o Percentiles are values that divide a sample of data into one hundred groups containing
(as far as possible) equal numbers of observations.
o The pth percentile of a distribution is the value such that p percent of the observations
fall at or below it.
o The most commonly used percentiles other than the median are the 25th percentile and
the 75th percentile.
o The 25th percentile demarcates the first quartile, the median or 50th percentile
demarcates the second quartile, the 75th percentile demarcates the third quartile, and the
100th percentile demarcates the fourth quartile.
Quartiles
Quartiles are numbers that divide an ordered data set into four portions, each containing
approximately one-fourth of the data. Twenty-five percent of the data values come before
the first quartile (Q1). The median is the second quartile (Q2); 50% of the data
values come before the median. Seventy-five percent of the data values come before the third
quartile (Q3).
Q1 = 25th percentile = the (n * 25/100)th value, where n is the total number of data values in the given data set
Q2 = median = 50th percentile = the (n * 50/100)th value
Q3 = 75th percentile = the (n * 75/100)th value
The interquartile range is the length of the interval between the lower quartile (Q1) and the
upper quartile (Q3). This interval indicates the central, or middle, 50% of a data set.
IQR = Q3 - Q1
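A Python sketch computing the quartiles, IQR and five-number summary with numpy for a hypothetical data set; note that numpy's default percentile interpolation may differ slightly from the (n * p/100)th-value rule given above:

# Illustrative sketch: quartiles, IQR and five-number summary with numpy.
import numpy as np

data = np.array([71, 73, 75, 76, 78, 79, 80, 81, 82, 84, 85])  # hypothetical 11 values

q1, q2, q3 = np.percentile(data, [25, 50, 75])   # 75.5, 79.0, 81.5 for this data
iqr = q3 - q1                                    # 6.0
five_number_summary = (data.min(), q1, q2, q3, data.max())  # min, Q1, median, Q3, max

print(five_number_summary)
print("IQR =", iqr)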
Range
The range of a set of data is the difference between its largest (maximum) and smallest
(minimum) values. In the statistical world, the range is reported as a single number, the
difference between maximum and minimum. Sometimes, the range is instead reported as
"from (the minimum) to (the maximum)", i.e., two numbers.
Example1:
Given data set: 3, 4, 4, 5, 6, 8
The range of the data set is 3 to 8 (that is, 8 - 3 = 5). The range gives only minimal information
about the spread of the data, by defining the two extremes. It says nothing about how the data are
distributed between those two endpoints.
Example 2:
In this example we demonstrate how to find the minimum value, maximum value,
and range of the following data: 29, 31, 24, 29, 30, 25.
The minimum value is 24, the maximum value is 31, and the range is 31 - 24 = 7.
The Five-Number Summary of a data set is a five-item list comprising the minimum
value, first quartile, median, third quartile, and maximum value of the set.
A box plot is a graph used to represent the range, median, quartiles and inter quartile range
of a set of data values.
(i) Draw a box to represent the middle 50% of the observations of the data set.
(ii) Show the median by drawing a vertical line within the box.
(iii) Draw the lines (called whiskers) from the lower and upper ends of the box to the
minimum and maximum values of the data set respectively, as shown in the following
diagram.
X is the set of data values.
Min X is the minimum value in the data set.
Max X is the maximum value in the data set.
Example (for an 11-value data set): the earlier steps give Q1 = 75 and median = 79.
Step 4: Q3 = 75th percentile value = the 11 × (75/100)th value = 82
Step 5: Min X = 71
Step 6: Max X = 85
Step 7: Range = 85 − 71 = 14
Example (outlier detection with quartiles, for a separate sorted data set): since the quartiles
represent middle points, they split the data into four equal parts. In other words:
Q1 is the fourth value in the list and Q3 is the twelfth: Q1 = 14.4 and Q3 = 14.9.
Then IQR = 14.9 – 14.4 = 0.5.
Outliers will be any points below:
Q1 – 1.5×IQR = 14.4 – 0.75 = 13.65 or above Q3 + 1.5×IQR = 14.9 + 0.75 = 15.65.
Then the outliers are at 10.2, 15.9, and 16.4.
The values for Q1 – 1.5×IQR and Q3 + 1.5×IQR are the "fences" that mark off
the "reasonable" values from the outlier values. Outliers lie outside the fences.
1 Histogram
A histogram is a way of summarizing data that are measured on an interval scale (either
discrete or continuous). It is often used in exploratory data analysis to illustrate the major
features of the distribution of the data in a convenient form. It divides up the range of
possible values in a data set into classes or groups. For each group, a rectangle is
constructed with a base length equal to the range of values in that specific group, and
an area proportional to the number of observations falling into that group. This means that
the rectangles may be drawn with non-uniform heights.
The histogram is only appropriate for variables whose values are numerical and measured
on an interval scale. It is generally used when dealing with large data sets (more than about
100 observations).
A histogram can also help detect any unusual observations (outliers), or any gaps in the data
set.
2 Scatter Plot
A scatter plot is a useful summary of a set of bivariate data (two variables), usually drawn
before working out a linear correlation coefficient or fitting a regression line. It gives a
good visual picture of the relationship between the two variables, and aids the
interpretation of the correlation coefficient or regression model.
Each unit contributes one point to the scatter plot, on which points are plotted but not
joined. The resulting pattern indicates the type and strength of the relationship between
the two variables.
Positively and Negatively Correlated Data
A scatter plot will also show up a non-linear relationship between the two variables and
whether or not there exist any outliers in the data.
3 Loess curve
It is another important exploratory graphic aid that adds a smooth curve to a scatter plot in
order to provide better perception of the pattern of dependence. The word loess is short for
"local regression".
4 Box plot
The picture produced consists of the most extreme values in the data set (maximum and
minimum values), the lower and upper quartiles, and the median.
5 Quantile plot
□ Displays all of the data (allowing the user to assess both the overall behavior
and unusual occurrences)
□ Plots quantile information
□ For data xi sorted in increasing order, fi indicates that approximately
100 × fi % of the data are below or equal to the value xi
Each data value can be assigned an f-value. Let a data set x of length n be sorted from
smallest to largest values, so that xi has rank i. The f-value for each observation is computed
as fi = (i − 0.5) / n, for i = 1, 2, ..., n. The data value xi is then the fi quantile of the data,
denoted q(fi).
A normal distribution is often a reasonable model for the data. Without inspecting the data,
however, it is risky to assume a normal distribution. There are a number of graphs that can
be used to check the deviations of the data from the normal distribution. The most useful
tool for assessing normality is a quantile-quantile (Q-Q) plot. This is a scatter plot with the
quantiles of the scores on the horizontal axis and the expected normal scores on the vertical
axis.
In other words, it is a graph that shows the quantiles of one univariate distribution against the
corresponding quantiles of another. It is a powerful visualization tool in that it allows the
user to view whether there is a shift in going from one distribution to another.
The steps in constructing a Q-Q plot are as follows:
First, we sort the data from smallest to largest. A plot of these scores against the
expected normal scores should reveal a straight line.
The expected normal scores are calculated by taking the z-scores corresponding to the
cumulative proportions (i − ½)/n, where i is the rank of each observation in increasing order.
Curvature of the points indicates departures from normality. This plot is also useful for
detecting outliers. The outliers appear as points that are far away from the overall pattern of
points.
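A minimal sketch of these steps in Python, computing the expected normal score for each
rank and pairing it with the corresponding sorted data value (the sample data here are
illustrative):

from statistics import NormalDist

def normal_qq_points(values):
    # pair each sorted data value with its expected normal score,
    # i.e. the z-score of the cumulative proportion (i - 0.5) / n
    data = sorted(values)
    n = len(data)
    points = []
    for i, x in enumerate(data, start=1):
        expected_z = NormalDist().inv_cdf((i - 0.5) / n)
        # (data quantile on the horizontal axis, expected normal score on the vertical axis)
        points.append((x, expected_z))
    return points

for x, z in normal_qq_points([13, 15, 16, 16, 19, 20, 20, 21, 22]):
    print(x, round(z, 2))
# a roughly straight-line pattern of these points suggests the data are close to normal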
A quantile plot is a graphical method used to show the approximate percentage of values
below or equal to each value of the independent variable, i.e., it displays quantile
information for all the data: the values measured for the independent variable are plotted
against their corresponding quantile (f-value).
Data Cleaning
Data cleaning routines attempt to fill in missing values, smooth out noise while
identifying outliers, and correct inconsistencies in the data.
Missing Values
The various methods for handling the problem of missing values in data tuples include:
(a) Ignoring the tuple: This is usually done when the class label is missing (assuming the
mining task involves classification or description). This method is not very effective unless
the tuple contains several attributes with missing values. It is especially poor when the
percentage of missing values per attribute varies considerably.
(b) Manually filling in the missing value: In general, this approach is time- consuming
and may not be a reasonable task for large data sets with many missing values, especially
when the value to be filled in is not easily determined.
(c) Using a global constant to fill in the missing value: Replace all missing attribute
values by the same constant, such as a label like "Unknown", or −∞. If missing values
are replaced by, say, "Unknown", then the mining program may mistakenly think
that they form an interesting concept, since they all have a value in common, that of
"Unknown". Hence, although this method is simple, it is not recommended.
(d) Using the attribute mean for quantitative (numeric) values or attribute mode for
categorical (nominal) values, for all samples belonging to the same class as the given
tuple: For example, if classifying customers according to credit risk, replace the missing
value with the average income value for customers in the same credit risk category as that
of the given tuple.
(e) Using the most probable value to fill in the missing value: This may be determined
with regression, inference-based tools using Bayesian formalism, or decision tree
induction. For example, using the other customer attributes in your data set, you may
construct a decision tree to predict the missing values for income.
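A minimal sketch of strategies (c) and (d) above in Python, filling missing numeric values
with the attribute mean and missing categorical values with the attribute mode; the attribute
names and values below are hypothetical:

from statistics import mean, mode

def fill_numeric(values, strategy="mean"):
    # replace None with the mean of the observed values, or with a global constant
    observed = [v for v in values if v is not None]
    filler = mean(observed) if strategy == "mean" else float("-inf")
    return [filler if v is None else v for v in values]

def fill_categorical(values):
    # replace None with the most frequent (mode) value
    observed = [v for v in values if v is not None]
    filler = mode(observed)
    return [filler if v is None else v for v in values]

income = [30000, None, 52000, 61000, None]     # hypothetical numeric attribute
risk = ["low", "high", None, "low", "low"]     # hypothetical categorical attribute
print(fill_numeric(income))      # None replaced by the mean (about 47666.67)
print(fill_categorical(risk))    # None replaced by "low" (the mode)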
Noisy data:
Noise is a random error or variance in a measured variable. Data smoothing techniques are
used to remove such noisy data.
1 Binning methods: Binning methods smooth a sorted data value by consulting its
"neighborhood", that is, the values around it. The sorted values are distributed into a number
of "buckets", or bins. Because binning methods consult the neighborhood of values, they
perform local smoothing.
In this technique,
o Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
o Partition into (equi-depth) bins of depth 4 (each bin contains four values):
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
o Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
o Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
In smoothing by bin means, each value in a bin is replaced by the mean value of the bin. For
example, the mean of the values 4, 8, 9, and 15 in Bin 1 is 9. Therefore, each original value in
this bin is replaced by the value 9. Similarly, smoothing by bin medians can be employed,
in which each bin value is replaced by the bin median. In smoothing by bin boundaries, the
minimum and maximum values in a given bin are identified as the bin
boundaries. Each bin value is then replaced by the closest boundary value.
Suppose that the data for analysis include the attribute age. The age values for the data
tuples are (in
increasing order): 13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25, 30, 33, 33, 35, 35,
35, 35, 36, 40, 45, 46, 52, 70.
(a) Use smoothing by bin means to smooth the above data, using a bin depth of 3.
Illustrate your steps.
Comment on the effect of this technique for the given data.
The following steps are required to smooth the above data using smoothing by bin means
with a bin depth of 3.
• Step 1: Sort the data. (This step is not required here as the data are already sorted.)
• Step 2: Partition the data into equi-depth bins of depth 3 (Bin 1: 13, 15, 16; Bin 2: 16, 19,
20; and so on).
• Step 3: Calculate the arithmetic mean of each bin.
• Step 4: Replace each of the values in each bin by the arithmetic mean calculated for the
bin (shown below truncated to whole numbers).
Bin 1: 14, 14, 14 Bin 2: 18, 18, 18 Bin 3: 21,
21, 21 Bin 4: 24, 24, 24 Bin 5: 26, 26, 26 Bin
6: 33, 33, 33 Bin 7: 35, 35, 35 Bin 8: 40, 40,
40 Bin 9: 56, 56, 56
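A minimal sketch of equi-depth binning with smoothing by bin means for the age data above;
the sketch keeps exact means, whereas the values listed in the text appear to be truncated to
whole numbers:

def smooth_by_bin_means(values, depth):
    # partition the sorted data into equi-depth bins and replace each value by its bin mean
    data = sorted(values)
    smoothed = []
    for start in range(0, len(data), depth):
        bin_values = data[start:start + depth]
        bin_mean = sum(bin_values) / len(bin_values)
        smoothed.extend([bin_mean] * len(bin_values))
    return smoothed

ages = [13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25, 30,
        33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70]
print(smooth_by_bin_means(ages, depth=3))
# first bin: 13, 15, 16 -> mean 14.67 (reported as 14 in the text above)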
2 Clustering: Outliers in the data may be detected by clustering, where similar values are
organized into groups, or "clusters". Values that fall outside of the set of clusters may be
considered outliers.
3 Regression: Data can be smoothed by fitting the data to a function, such as with regression.
□ Linear regression involves finding the "best" line to fit two variables, so
that one variable can be used to predict the other.
Using regression to find a mathematical equation to fit the data helps smooth out the noise.
Field overloading: a source of errors that typically occurs when developers compress new
attribute definitions into unused portions of already defined attributes.
A unique rule says that each value of the given attribute must be different from all other
values of that attribute.
A consecutive rule says that there can be no missing values between the lowest and
highest values of the attribute and that all values must also be unique.
A null rule specifies the use of blanks, question marks, special characters or other strings
that may indicate the null condition and how such values should be handled.
Issues:
Some redundancy can be identified by correlation analysis. The correlation between two
variables A and B can be measured by
r(A,B) = Σ (a − mean_A)(b − mean_B) / (n × σ_A × σ_B)
where the sum runs over the n tuples, mean_A and mean_B are the mean values of A and B,
and σ_A and σ_B are their standard deviations.
□ If the result of the equation is > 0, then A and B are positively correlated, which
means the values of A increase as the values of B increase. The higher the value, the
stronger the correlation; a high value may indicate redundancy, so that one of the
attributes may be removed.
□ If the result of the equation is = 0, then A and B are independent and there is no
correlation between them.
□ If the resulting value is < 0, then A and B are negatively correlated: the values of one
attribute increase as the values of the other decrease, which means that each attribute
discourages the other.
This measure is also called Pearson's product-moment coefficient.
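A minimal sketch of computing this coefficient for two numeric attributes in Python, using
the population standard deviation as in the formula above; the attribute values are
hypothetical:

from statistics import mean, pstdev

def pearson_correlation(a, b):
    # r = sum((a_i - mean_A)(b_i - mean_B)) / (n * stddev_A * stddev_B)
    n = len(a)
    mean_a, mean_b = mean(a), mean(b)
    covariance = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / n
    return covariance / (pstdev(a) * pstdev(b))

a = [2, 4, 6, 8, 10]   # hypothetical values of attribute A
b = [1, 3, 5, 7, 11]   # hypothetical values of attribute B
print(pearson_correlation(a, b))   # close to +1: A and B are positively correlated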
Data Transformation
Data transformation can involve the following:
Normalization
Normalization scales the data to fall within a small, specified range. It is useful for
classification algorithms involving neural networks and for distance-based methods such as
nearest-neighbor classification and clustering. There are three methods for data
normalization. They are:
1) min-max normalization
2) z-score normalization
3) normalization by decimal scaling
In z-score normalization (zero-mean normalization), a value v of attribute A is normalized
to v' based on the mean and standard deviation of A:
v' = (v − mean_A) / stand_dev_A
This method is useful when the actual minimum and maximum values of attribute A are
unknown, or when there are outliers that dominate min-max normalization.
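A minimal sketch of the three normalization methods in Python; the new range [0, 1] used
for min-max normalization and the salary values are assumptions for illustration:

from statistics import mean, pstdev

def min_max(values, new_min=0.0, new_max=1.0):
    # v' = (v - min_A) / (max_A - min_A) * (new_max - new_min) + new_min
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min for v in values]

def z_score(values):
    # v' = (v - mean_A) / stand_dev_A
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

def decimal_scaling(values):
    # v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
    j = len(str(int(max(abs(v) for v in values))))
    return [v / (10 ** j) for v in values]

salaries = [12000, 16000, 54000, 73600, 98000]   # hypothetical attribute values
print(min_max(salaries))
print(z_score(salaries))
print(decimal_scaling(salaries))   # divide by 10^5 -> 0.12, 0.16, ...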
Data Reduction
Data reduction techniques can be applied to obtain a reduced representation of the data set
that is much smaller in volume, yet closely maintains the integrity of the original data. Data
reduction strategies include:
1. Data cube aggregation, where aggregation operations are applied to the data in the
construction of a data cube.
2. Attribute subset selection, where irrelevant, weakly relevant or redundant
attributes or dimensions may be detected and removed.
3. Dimensionality reduction, where encoding mechanisms are used to reduce the data
set size. Examples: Wavelet Transforms Principal Components Analysis
4. Numerosity reduction, where the data are replaced or estimated by alternative,
smaller data representations such as parametric models (which need store only the
model parameters instead of the actual data) or nonparametric methods such as
clustering, sampling, and the use of histograms.
5. Discretization and concept hierarchy generation, where raw data values for
attributes are replaced by ranges or higher conceptual levels. Data Discretization is a
form of numerosity reduction that is very useful for the automatic generation
of concept hierarchies.
Data cube aggregation: Reduce the data to the concept level needed in the analysis.
Queries regarding aggregated information should be answered using data cube when
possible. Data cubes store multidimensional aggregated information. The following figure
shows a data cube for multidimensional analysis of sales data with respect to annual sales
per item type for each branch.
Each cell holds an aggregate data value, corresponding to a data point in multidimensional
space.
Data cubes provide fast access to pre computed, summarized data, thereby benefiting
on-line analytical processing as well as data mining.
The cube created at the lowest level of abstraction is referred to as the base cuboid, and the
cube at the highest level of abstraction is the apex cuboid. Data cubes created for varying
levels of abstraction are often referred to as cuboids, so that a "data cube" may instead refer
to a lattice of cuboids. Each higher level of abstraction further reduces the resulting data
size.
The following database consists of sales per quarter for the years 1997-1999.
Suppose the analyst is interested in the annual sales rather than the sales per quarter. The
above data can be aggregated so that the resulting data summarize the total sales per year
instead of per quarter. The resulting data set is smaller in volume, without loss of the
information necessary for the analysis task.
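A minimal sketch of this roll-up step in Python; the quarterly sales figures below are
hypothetical, since the original table is not reproduced here:

from collections import defaultdict

# hypothetical (year, quarter, sales) records
quarterly_sales = [
    (1997, "Q1", 224), (1997, "Q2", 408), (1997, "Q3", 350), (1997, "Q4", 586),
    (1998, "Q1", 300), (1998, "Q2", 416), (1998, "Q3", 380), (1998, "Q4", 594),
]

def aggregate_annual(records):
    # sum the sales measure over the quarter dimension, keeping only the year level
    totals = defaultdict(int)
    for year, _quarter, sales in records:
        totals[year] += sales
    return dict(totals)

print(aggregate_annual(quarterly_sales))   # {1997: 1568, 1998: 1690}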
Dimensionality Reduction
Dimensionality reduction reduces the data set size by removing irrelevant attributes; methods
of attribute subset selection are applied for this purpose. A heuristic method of attribute
subset selection is explained here:
Attribute subset selection / Feature selection
Feature selection is a must for any data mining product. That is because, when you build
a data mining model, the dataset frequently contains more information than is needed to
build the model. For example, a dataset may contain 500 columns that describe
characteristics of customers, but perhaps only 50 of those columns are used to build a
particular model. If you keep the unneeded columns while building the model, more CPU
and memory are required during the training process, and more storage space is required
for the completed model.
The goal is to select a minimum set of features such that the probability distribution of the
different classes, given the values for those features, is as close as possible to the original
distribution given the values of all features.
1. Step-wise forward selection: The procedure starts with an empty set of attributes. The
best of the original attributes is determined and added to the set. At each subsequent iteration
or step, the best of the remaining original attributes is added to the set.
2. Step-wise backward elimination: The procedure starts with the full set of attributes.
At each step, it removes the worst attribute remaining in the set.
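A minimal sketch of step-wise forward selection in Python; the scoring function passed in is
a placeholder (an assumption) for whatever measure of attribute-subset quality is used, for
example the accuracy of a classifier built on that subset:

def forward_selection(attributes, evaluate, k):
    # greedily grow the attribute set: at each step add the attribute that
    # gives the best score together with the attributes already selected
    selected = []
    remaining = list(attributes)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda a: evaluate(selected + [a]))
        selected.append(best)
        remaining.remove(best)
    return selected

# toy scoring function (assumption): prefer a fixed ranking of attributes
ranking = {"income": 3, "age": 2, "credit": 1, "zip": 0}
score = lambda subset: sum(ranking[a] for a in subset)
print(forward_selection(["age", "zip", "income", "credit"], score, k=2))
# -> ['income', 'age']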
Data compression
In data compression, data encoding or transformations are applied so as to obtain a reduced
or "compressed" representation of the original data. If the original data can be
reconstructed from the compressed data without any loss of information, the data
compression technique used is called lossless. If, instead, we can reconstruct only an
approximation of the original data, then the data compression technique is called lossy.
Effective methods of lossy data compression:
□ Wavelet transforms
□ Principal components analysis.
Wavelet compression is a form of data compression well suited for image compression.
The discrete wavelet transform (DWT) is a linear signal processing technique that, when
applied to a data vector D, transforms it to a numerically different vector, D', of wavelet
coefficients.
The general algorithm for a discrete wavelet transform is as follows.
1. The length, L, of the input data vector must be an integer power of two. This
condition can be met by padding the data vector with zeros, as necessary.
2. Each transform involves applying two functions:
□ data smoothing (e.g., a sum or weighted average)
□ calculating a weighted difference (which brings out the detailed features of the data)
3. The two functions are applied to pairs of the input data, resulting in two sets of data
of length L/2.
4. The two functions are recursively applied to the sets of data obtained in the previous
loop, until the resulting data sets obtained are of desired length.
5. A selection of values from the data sets obtained in the above iterations are designated
the wavelet coefficients of the transformed data.
Wavelet coefficients larger than some user-specified threshold can be retained; the remaining
coefficients are set to 0.
The principal components (a new set of axes) capture the most important information about
the variance in the data. Using the strongest components one can reconstruct a good
approximation of the original signal.
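A minimal sketch of principal components analysis using NumPy (eigendecomposition of the
covariance matrix of the mean-centered data); the small data set is illustrative, and this is one
common way, not the only way, to compute PCA:

import numpy as np

def pca(data, k):
    # data: (n_samples, n_attributes) array; returns the data projected onto
    # the k strongest principal components (the new set of axes)
    X = data - data.mean(axis=0)                 # center each attribute
    cov = np.cov(X, rowvar=False)                # covariance matrix
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    order = np.argsort(eigenvalues)[::-1]        # strongest components first
    components = eigenvectors[:, order[:k]]
    return X @ components, components

data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
projected, components = pca(data, k=1)
# approximate reconstruction of the original data from the strongest component only
approximation = projected @ components.T + data.mean(axis=0)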
Numerosity Reduction
Data volume can be reduced by choosing alternative, smaller forms of data representation.
These techniques can be either:
□ Parametric method
□ Non parametric method
Parametric: Assume the data fits some model, then estimate model parameters, and store
only the parameters, instead of actual data.
Non-parametric: histograms, clustering and sampling are used to store a reduced form of
the data.
Numerosity reduction techniques:
1 Regression and log linear models:
□ Can be used to approximate the given data
□ In linear regression, the data are modeled to fit a straight line
using Y = α + β X, where α, β are coefficients
• Multiple regression: Y = b0 + b1 X1 + b2 X2.
– Many nonlinear functions can be transformed into the above.
Log-linear model: The multi-way table of joint probabilities is approximated by a
product of lower-order tables.
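A minimal sketch of fitting the straight line Y = α + βX by least squares in Python, so that
only the two coefficients need to be stored instead of the actual data; the x and y values are
hypothetical:

def fit_line(x, y):
    # least-squares estimates of the coefficients in Y = alpha + beta * X
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    beta = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
           / sum((xi - mean_x) ** 2 for xi in x)
    alpha = mean_y - beta * mean_x
    return alpha, beta

x = [1, 2, 3, 4, 5]            # hypothetical predictor values
y = [2.1, 3.9, 6.2, 8.0, 9.9]  # hypothetical response values
alpha, beta = fit_line(x, y)
predict = lambda new_x: alpha + beta * new_x   # only alpha and beta are stored
print(round(alpha, 2), round(beta, 2))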
2 Histogram
A histogram divides the data into buckets and stores the average (or sum) for each bucket.
A bucket represents an attribute-value/frequency pair.
A histogram can be constructed optimally in one dimension using dynamic programming.
It divides up the range of possible values in a data set into classes or groups. For each
group, a rectangle (bucket) is constructed with a base length equal to the range of values
in that specific group, and an area proportional to the number of observations falling into
that group.
The buckets are displayed on a horizontal axis, while the height of a bucket represents the
average frequency of the values.
Example:
The following data are a list of prices of commonly sold items. The numbers have
been sorted.
1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15, 15, 18, 18, 18,
18, 18, 18, 18, 18, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28,
30, 30, 30. Draw a histogram plot for price where each bucket has an equi-width of 10.
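A minimal sketch of building the equi-width buckets for this price data in Python; the bucket
boundaries 1-10, 11-20 and 21-30 are one reasonable reading of "equi-width of 10":

prices = [1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15,
          15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20, 20, 20,
          21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30]

def equi_width_buckets(values, width):
    # count how many values fall into each bucket of the given width
    low, high = min(values), max(values)
    buckets = {}
    start = low
    while start <= high:
        end = start + width - 1
        buckets[(start, end)] = sum(1 for v in values if start <= v <= end)
        start = end + 1
    return buckets

print(equi_width_buckets(prices, width=10))
# expected: {(1, 10): 13, (11, 20): 25, (21, 30): 14}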
The buckets can be determined based on the following partitioning rules:
1. Equi-width: the width (value range) of each bucket is the same.
2. Equi-depth (equi-height): each bucket contains roughly the same number of values.
3. V-Optimal: the histogram with the least variance (count_b × value_b).
4. MaxDiff: bucket boundaries are placed between adjacent values having the largest
differences, for a user-specified number of buckets.
V-Optimal and MaxDiff histograms tend to be the most accurate and practical. Histograms
are highly effective at approximating both sparse and dense data, as well as highly skewed
and uniform data.
3 Clustering: Clustering techniques consider data tuples as objects. They partition the objects into groups
or clusters, so that objects within a cluster are "similar" to one another and
"dissimilar" to objects in other clusters. Similarity is commonly defined in terms of
how "close" the objects are in space, based on a distance function.
Quality of clusters measured by their diameter (max distance between any two objects in the
cluster) or centroid distance (avg. distance of each cluster object from its centroid)
4 Sampling
Sampling can be used as a data reduction technique since it allows a large data set to be
represented by a much smaller random sample (or subset) of the data. Suppose that a large
data set, D, contains N tuples. Let's have a look at some possible samples of D.
1. Simple random sample without replacement (SRSWOR) of size n: This is created
by drawing n of the N tuples from D (n < N), where the probability of drawing any tuple
in D is 1/N, i.e., all tuples are equally likely.
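A minimal sketch of SRSWOR using Python's standard library (random.sample already
guarantees each tuple is drawn at most once), with sampling with replacement shown for
contrast; the data set of 100 tuples is hypothetical:

import random

def srswor(tuples, n):
    # simple random sample without replacement: every tuple is equally likely
    # and no tuple is drawn more than once
    return random.sample(tuples, n)

def srswr(tuples, n):
    # simple random sample WITH replacement: a tuple may be drawn more than once
    return [random.choice(tuples) for _ in range(n)]

D = list(range(1, 101))   # hypothetical data set of N = 100 tuples
print(srswor(D, 10))
print(srswr(D, 10))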
Discretization techniques can be used to reduce the number of values for a given
continuous attribute, by dividing the range of the attribute into intervals. Interval labels
can then be used to replace actual data values.
Concept Hierarchy
A concept hierarchy for a given numeric attribute defines a Discretization of the attribute.
Concept hierarchies can be used to reduce the data by collecting and replacing low level
concepts (such as numeric values for the attribute age) by higher level concepts (such as
young, middle-aged, or senior).
Discretization and Concept hierarchy for numerical data:
There are five methods for numeric concept hierarchy generation. These include:
1. binning,
2. histogram analysis,
3. clustering analysis,
4. entropy-based discretization, and
5. data segmentation by "natural partitioning".
An information-based measure called "entropy" can be used to recursively partition the
values of a numeric attribute A, resulting in a hierarchical discretization.
Example:
Suppose that profits at different branches of a company for the year 1997 cover a wide
range, from -$351,976.00 to $4,700,896.50. A user wishes to have a concept hierarchy for
profit automatically generated
Suppose that the data within the 5%-tile and 95%-tile are between -$159,876 and
$1,838,761. The results of applying the 3-4-5 rule are shown in following figure
Step 1: Based on the above information, the minimum and maximum values are: MIN
= -$351, 976.00, and MAX = $4, 700, 896.50. The low (5%-tile) and high (95%-tile) values
to be considered for the top or first level of segmentation are: LOW = -$159, 876, and HIGH
= $1, 838,761.
Step 2: Given LOW and HIGH, the most significant digit is at the million dollar digit position
(i.e., msd = 1,000,000). Rounding LOW down to the million dollar digit, we get
LOW' = −$1,000,000; and rounding HIGH up to the million dollar digit, we get
HIGH' = +$2,000,000.
Step 3: Since this interval ranges over 3 distinct values at the most significant digit, i.e.,
($2,000,000 − (−$1,000,000)) / $1,000,000 = 3, the segment is partitioned into 3 equi-width
sub-segments according to the 3-4-5 rule: (−$1,000,000 - $0], ($0 - $1,000,000], and
($1,000,000 - $2,000,000]. This represents the top tier of the hierarchy.
Step 4: We now examine the MIN and MAX values to see how they "fit" into the first
level partitions. Since the first interval, (−$1,000,000 - $0], covers the MIN value, i.e.,
LOW' < MIN, we can adjust the left boundary of this interval to make the interval smaller.
The most significant digit of MIN is at the hundred thousand digit position. Rounding MIN
down to this position, we get MIN' = −$400,000.
Therefore, the first interval is redefined as (−$400,000 - $0]. Since the last interval,
($1,000,000 - $2,000,000], does not cover the MAX value, i.e., MAX > HIGH', we need to
create a new interval to cover it. Rounding up MAX at its most significant digit position, the
new interval is ($2,000,000 - $5,000,000]. Hence, the top most level of the hierarchy
contains four partitions: (−$400,000 - $0], ($0 - $1,000,000], ($1,000,000 - $2,000,000],
and ($2,000,000 - $5,000,000].
Step 5: Recursively, each interval can be further partitioned according to the 3-4-5 rule to
form the next lower level of the hierarchy:
- The first interval (-$400,000 - $0] is partitioned into 4 sub-intervals: (-$400,000 -
-$300,000], (-$300,000 - -$200,000], (-$200,000 - -$100,000], and (-$100,000 -
$0].
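A minimal sketch in Python of the rounding used in Steps 2 and 3 above (finding the most
significant digit position, rounding LOW down and HIGH up to it, and counting the distinct
values at that position); the full 3-4-5 partitioning of the rounded interval is not shown:

import math

def msd_round(low, high):
    # most significant digit position, based on the larger magnitude of LOW and HIGH
    msd = 10 ** int(math.floor(math.log10(max(abs(low), abs(high)))))
    low_r = math.floor(low / msd) * msd      # round LOW down to the msd
    high_r = math.ceil(high / msd) * msd     # round HIGH up to the msd
    distinct = int((high_r - low_r) / msd)   # number of distinct values at the msd
    return msd, low_r, high_r, distinct

print(msd_round(-159876, 1838761))
# expected: (1000000, -1000000, 2000000, 3) -> 3 equi-width segments by the 3-4-5 rule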
Concept hierarchy generation for category data
Classification:
*Used for prediction (future analysis), i.e., to determine unknown attribute values, by using
classifier algorithms and decision trees (in data mining).
*It constructs models (such as decision trees) which then classify the records by their attributes.
*We already know that attributes may be 1. categorical attributes or 2. numerical attributes.
*Classification can work on both of the above-mentioned attribute types.
Prediction: prediction is also used to determine unknown or missing values.
1. It also uses models in order to predict the attribute values.
2. Models such as neural networks, if-then rules and other mechanisms are used.
There are two issues regarding classification and prediction. They are:
Issues (1): Data Preparation
Issues (2): Evaluating Classification Methods
Issues (1): Data Preparation: Issues of data preparation include the following
1) Data cleaning
*Preprocess data in order to reduce noise and handle missing values (refer
preprocessing techniques i.e. data cleaning notes)
2) Relevance analysis (feature selection)
Remove the irrelevant or redundant attributes (refer unit-iv AOI Relevance
analysis)
3) Data transformation: generalize and/or normalize the data (refer to preprocessing
techniques, i.e., data cleaning notes)
4. Scalability:
*efficiency in disk-resident databases
5. Interpretability:
*understanding and insight provided by the model
6. Goodness of rules