The Process of Data Analysis
Data analysis is the process of inspecting, cleansing, transforming, and modeling data with
the goal of discovering useful information, informing conclusions, and supporting decision-
making.[1] Data analysis has multiple facets and approaches, encompassing diverse
techniques under a variety of names, and is used in different business, science, and social
science domains.[2] In today's business world, data analysis plays a role in making decisions
more scientific and helping businesses operate more effectively.[3]
Data mining is a particular data analysis technique that focuses on statistical modeling and
knowledge discovery for predictive rather than purely descriptive purposes, while business
intelligence covers data analysis that relies heavily on aggregation, focusing mainly on
business information.[4] In statistical applications, data analysis can be divided
into descriptive statistics, exploratory data analysis (EDA), and confirmatory data
analysis (CDA).[5] EDA focuses on discovering new features in the data while CDA focuses
on confirming or falsifying existing hypotheses.[6][7] Predictive analytics focuses on the
application of statistical models for predictive forecasting or classification, while text
analytics applies statistical, linguistic, and structural techniques to extract and classify
information from textual sources, a species of unstructured data. All of the above are varieties
of data analysis.[8]
Data integration is a precursor to data analysis, and data analysis is closely linked to data
visualization and data dissemination.[9]
"Procedures for analyzing data, techniques for interpreting the results of such procedures,
ways of planning the gathering of data to make its analysis easier, more precise or more
accurate, and all the machinery and results of (mathematical) statistics which apply to
analyzing data."[12]
There are several phases that can be distinguished, described below. The phases are iterative,
in that feedback from later phases may result in additional work in earlier
phases.[13] The CRISP framework, used in data mining, has similar steps.
Data requirements
Data are needed as inputs to the analysis, and the required data are specified based upon the
requirements of those directing the analysis (or of customers, who will use the finished product
of the analysis).[14][15] The general type of entity upon which the data will be collected is
referred to as an experimental unit (e.g., a person or population of people). Specific variables
regarding a population (e.g., age and income) may be specified and obtained. Data may be
numerical or categorical (i.e., a text label for numbers).[13]
Data collection
Data is collected from a variety of sources.[16][17] A list of data sources is available for study
and research. The requirements may be communicated by analysts to custodians of the data,
such as Information Technology personnel within an organization.[18] Data
collection or data gathering is the process of gathering and measuring information on
targeted variables in an established system, which then enables one to answer relevant
questions and evaluate outcomes. Data may also be collected from sensors in the
environment, such as traffic cameras, satellites, and recording devices, or
obtained through interviews, downloads from online sources, or reading documentation.[13]
Data processing
The phases of the intelligence
cycle used to convert raw information into actionable intelligence or knowledge are
conceptually similar to the phases in data analysis.
Data, when initially obtained, must be processed or organized for analysis.[19][20] For instance,
this may involve placing the data into rows and columns in a table format (known as structured
data) for further analysis, often through the use of spreadsheet or statistical software.[13]
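For instance, a minimal sketch of this step in Python with the pandas library might look as follows; the field names and values are hypothetical.

```python
# A minimal sketch of organizing raw records into structured (tabular) data
# with pandas; the field names and values here are hypothetical.
import pandas as pd

# Raw records as they might arrive from the collection step
raw_records = [
    {"person_id": 1, "age": 34, "income": 52000},
    {"person_id": 2, "age": 41, "income": 61000},
    {"person_id": 3, "age": 29, "income": 48000},
]

# Place the data into rows and columns for further analysis
df = pd.DataFrame(raw_records)
print(df)
```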
Data cleaning
Once processed and organized, the data may be incomplete, contain duplicates, or contain
errors.[21][22] The need for data cleaning arises from problems in the way that the data are
entered and stored.[21] Data cleaning is the process of preventing and correcting these
errors. Common tasks include record matching, identifying inaccuracies, assessing the overall
quality of existing data, deduplication, and column segmentation.[23] Such data problems can
also be identified through a variety of analytical techniques. For example, with financial
information,
the totals for particular variables may be compared against separately published numbers that
are believed to be reliable.[24][25] Unusual amounts, above or below predetermined thresholds,
may also be reviewed. There are several types of data cleaning, depending upon the type of
data in the set; this could be phone numbers, email addresses, employers, or other
values.[26][27] Quantitative methods for outlier detection can be used to remove data
that appear likely to have been entered incorrectly.[28] Spell checkers can be used to
reduce the number of mistyped words in textual data. However, it is harder to tell if
the words themselves are correct.[29]
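A minimal sketch of these cleaning tasks in Python with pandas is shown below; the column names, the email pattern, and the IQR threshold are illustrative assumptions rather than prescribed methods.

```python
# A minimal data-cleaning sketch with pandas; column names, the email
# pattern, and the IQR threshold are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "email": ["a@example.com", "a@example.com", "b@example.com", "c@example"],
    "amount": [120.0, 120.0, 95.0, 9999999.0],
})

# Deduplication: remove exact duplicate records
df = df.drop_duplicates()

# Validity check for one type of value (email addresses)
df["email_valid"] = df["email"].str.contains(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

# Outlier detection with the interquartile range (IQR) rule
q1, q3 = df["amount"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = (df["amount"] < q1 - 1.5 * iqr) | (df["amount"] > q3 + 1.5 * iqr)
print(df[~outliers])
```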
Modeling and algorithms
Mathematical formulas or models (also known as algorithms) may be applied to the data
in order to identify relationships among the variables, for example,
using correlation or causation.[34][35] In general terms, models may be developed to evaluate a
specific variable based on other variable(s) contained within the dataset, with some residual
error depending on the implemented model's accuracy (e.g., Data = Model + Error).[36][11]
Inferential statistics includes utilizing techniques that measure the relationships between
particular variables.[37] For example, regression analysis may be used to model whether a
change in advertising (independent variable X), provides an explanation for the variation in
sales (dependent variable Y).[38] In mathematical terms, Y (sales) is a function
of X (advertising).[39] It may be described as (Y = aX + b + error), where the model is
designed such that (a) and (b) minimize the error when the model predicts Y for a given range
of values of X.[40] Analysts may also attempt to build models that are descriptive of the data,
with the aim of simplifying analysis and communicating results.[11]
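As a worked illustration of fitting (Y = aX + b + error) by least squares, consider the sketch below; the advertising and sales figures are invented for the example.

```python
# A minimal least-squares fit of Y = aX + b with NumPy; the advertising (X)
# and sales (Y) figures are invented for illustration.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # advertising spend
Y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])  # observed sales

# np.polyfit chooses a and b so as to minimize the squared error
a, b = np.polyfit(X, Y, deg=1)
residuals = Y - (a * X + b)  # Data = Model + Error

print(f"Y = {a:.2f}*X + {b:.2f} + error")
print("residual error:", residuals)
```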
Data product
A data product is a computer application that takes data inputs and generates outputs,
feeding them back into the environment.[41] It may be based on a model or algorithm. For
instance, an application might analyze data about customer purchase history and use the
results to recommend other purchases the customer might enjoy.[42][13]
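One possible shape for such a data product is sketched below as a toy co-occurrence recommender; the purchase data and the co-occurrence rule are illustrative assumptions, not a method described in the text.

```python
# A toy data product: recommend items that co-occur with a customer's past
# purchases. The purchase data and the co-occurrence rule are illustrative.
from collections import Counter

purchase_history = {
    "alice": {"book", "lamp"},
    "bob": {"book", "pen"},
    "carol": {"lamp", "pen", "mug"},
}

def recommend(customer):
    owned = purchase_history[customer]
    counts = Counter()
    for other, items in purchase_history.items():
        if other != customer and items & owned:
            counts.update(items - owned)  # candidate items the customer lacks
    return [item for item, _ in counts.most_common(3)]

print(recommend("alice"))  # items bought by customers with overlapping tastes
```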
Communication
Data visualization is used to help understand the
results after data is analyzed.[43]
Once data is analyzed, it may be reported in many formats to the users of the analysis to
support their requirements.[44] The users may have feedback, which results in additional
analysis. As such, much of the analytical cycle is iterative.[13]
When determining how to communicate the results, the analyst may consider implementing a
variety of data visualization techniques to help communicate the message more clearly and
efficiently to the audience.[45] Data visualization uses information displays (graphics such as
tables and charts) to help communicate key messages contained in the data.[46] Tables are a
valuable tool because they enable a user to query and focus on specific numbers, while
charts (e.g., bar charts or line charts) may help explain the quantitative messages contained
in the data.[47]
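As a minimal sketch of the chart side of this, the example below draws a bar chart with matplotlib; the category labels and values are placeholders.

```python
# A minimal bar-chart sketch with matplotlib; labels and values are
# placeholders, not data from the text.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
sales = [120, 135, 128, 150]

fig, ax = plt.subplots()
ax.bar(quarters, sales)  # bar charts suit comparisons across categories
ax.set_xlabel("Quarter")
ax.set_ylabel("Sales (units)")
ax.set_title("Sales by quarter")
plt.show()
```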
Quantitative messages
Stephen Few described eight types of quantitative messages that users may attempt to
understand or communicate from a set of data and the associated graphs used to help
communicate the message.[48] Customers specifying requirements and analysts performing
the data analysis may consider these messages during the course of the process.[49]
Author Jonathan Koomey has recommended a series of best practices for understanding
quantitative data.[60]
Analysts typically obtain descriptive statistics for the variables under examination,
such as the mean (average), median, and standard deviation.[61] They may also analyze
the distribution of the key variables to see how the individual values cluster around the
mean.[62]
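A minimal sketch of these descriptive statistics using Python's standard library follows; the sample values are invented.

```python
# Descriptive statistics with Python's statistics module; values invented.
import statistics

values = [23, 29, 31, 34, 34, 41, 52]

print("mean:", statistics.mean(values))      # average
print("median:", statistics.median(values))  # middle value
print("stdev:", statistics.stdev(values))    # spread around the mean
```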
Consultants at McKinsey and Company named a technique for breaking a quantitative
problem down into its component parts the MECE principle.[63] Each layer can be
broken down into its components; each of the sub-components must be mutually exclusive of
each other and collectively add up to the layer above them.[64] The relationship is referred to
as "Mutually Exclusive and Collectively Exhaustive" or MECE. For example, profit by
definition can be broken down into total revenue and total cost.[65] In turn, total revenue can
be analyzed by its components, such as the revenue of divisions A, B, and C (which are
mutually exclusive of each other) and should add to the total revenue (collectively
exhaustive).[66]
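A toy arithmetic check of this decomposition is sketched below; the division figures are invented.

```python
# A toy check of a MECE decomposition; the figures are invented.
total_revenue = 1000
total_cost = 800
division_revenue = {"A": 400, "B": 350, "C": 250}

# Collectively exhaustive: mutually exclusive parts add up to the layer above
assert sum(division_revenue.values()) == total_revenue

# Profit decomposes by definition into revenue and cost
profit = total_revenue - total_cost
print("profit:", profit)  # 200
```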
Regression analysis may be used when the analyst is trying to determine the extent to which
independent variable X affects dependent variable Y (e.g., "To what extent do changes in the
unemployment rate (X) affect the inflation rate (Y)?").[73] This is an attempt to model or fit an
equation (a line or curve) to the data, such that Y is a function of X.[74][75]
Necessary condition analysis (NCA) may be used when the analyst is trying to determine the
extent to which independent variable X allows variable Y (e.g., "To what extent is a certain
unemployment rate (X) necessary for a certain inflation rate (Y)?").[73] Whereas (multiple)
regression analysis uses additive logic where each X-variable can produce the outcome and
the X's can compensate for each other (they are sufficient but not necessary),[76] necessary
condition analysis (NCA) uses necessity logic, where one or more X-variables allow the
outcome to exist, but may not produce it (they are necessary but not sufficient). Each single
necessary condition must be present and compensation is not possible.[77]
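The necessity logic can be sketched as a simple check that the upper-left corner of the X-Y scatter is empty (high Y never occurs without high X); the data and thresholds below are invented, and this is not the estimation procedure of any particular NCA software.

```python
# A toy sketch of necessity logic: X is necessary (but not sufficient) for Y
# if high Y never occurs without high X, i.e. the upper-left corner of the
# scatter plot is empty. Data and thresholds are invented.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 200)
Y = np.minimum(X, rng.uniform(0, 10, 200))  # constructed so Y never exceeds X

x_thresh, y_thresh = 5.0, 5.0
violations = int(np.sum((Y > y_thresh) & (X <= x_thresh)))
print("cases with high Y but low X:", violations)  # 0 is consistent with necessity
```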
Users may have particular data points of interest within a data set, as opposed to the general
messaging outlined above. Such low-level user analytic activities follow a taxonomy of
analytic tasks, which can be organized by three poles of activities: retrieving
values, finding data points, and arranging data points.[78][79][80][81]
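As a minimal sketch, the three poles map naturally onto common dataframe operations; the table contents below are invented.

```python
# Mapping the three poles of low-level analytic tasks onto pandas
# operations; the table contents are invented.
import pandas as pd

df = pd.DataFrame({
    "city": ["Oslo", "Lima", "Pune"],
    "population_m": [0.7, 10.9, 7.4],
})

# Retrieving values: look up a specific data point
oslo = df.loc[df["city"] == "Oslo", "population_m"].iloc[0]

# Finding data points: filter rows that meet a condition
large = df[df["population_m"] > 5]

# Arranging data points: sort by an attribute
ranked = df.sort_values("population_m", ascending=False)
print(oslo, large, ranked, sep="\n")
```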