DIKW Theory


The DIKW Theory

When raw data is first collected, it is unorganized and the overall picture looks jumbled. The DIKW model, discussed by Fricke (2018) and originally proposed by Russell Ackoff (1989), describes how data can be processed and transformed into information, knowledge, and wisdom.
The DIKW hierarchy comprises the following:
“D” = Data
“I” = Information
“K” = Knowledge
“W” = Wisdom
The DIKW model of transforming data into wisdom can be viewed from two perspectives: context and understanding.
From the contextual perspective, one moves from gathering raw parts (data), to connecting those parts (information), to forming whole meaningful contents (knowledge), to conceptualizing and joining those meaningful wholes (wisdom).
From the understanding perspective, the DIKW pyramid can be viewed as a process that starts with researching and absorbing, then moves through doing, interacting, and reflecting.
The DIKW hierarchy can also be represented in terms of time. For instance, the data, information,
and knowledge levels can be seen as the past while the final step - wisdom - represents the future.
The "Data" of DIKW Hierarchy:
The first step in the DIKW model is data. Collecting raw data is the prerequisite for producing a meaningful result in the end. Measurements, logs, tracking records, and similar raw inputs are all considered data. Because raw data is collected in bulk, it includes both useful and not-so-useful content.
On its own, raw data provides no meaningful result that an Information Technology (IT) service provider can use: it answers no question and supports no conclusion.
To understand how data is transformed into usable results with the DIKW pyramid, we will walk through each subsequent step of the hierarchy (information, knowledge, and wisdom) using a sample scenario.
For example, suppose our statistics show that 300 users visit FEU on Canvas daily to take online lessons. This is our raw data.

The "Information" of DIKW Pyramid:


Information is data that has been given meaning by defining relational connections. Here, "meaning" refers to processed, understandable data that may or may not be useful content from the organization's perspective.
In an information-processing system, a relational database creates information from the data stored within it.
The information stage of the DIKW pyramid reveals the relationships in the data; analysis is then carried out to answer the Who, What, When, and Where questions.
Returning to our example, the data point "300 users visit FEU on Canvas per day" is too generic to yield any insight. To do capacity planning and availability planning, we must process it through the information stage of the DIKW hierarchy.
We can then get answers such as: 150 users visit Nursing Pharmacology, 145 visit Nursing Research, and 5 only visit the dashboards. Of these users, 60% are in the 18-22 age group and 20% are in the 22-26 age group. We also find that 70% of visits occur between 9 AM and 11 PM.
The generic data has now become information that answers the Who, What, When, and Where questions. This is the output of the information stage.
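To make the data-to-information step concrete, here is a minimal Python sketch that aggregates hypothetical visit records into Who/What/When answers. The field names and sample values are made up for illustration, not taken from any real Canvas log.

```python
from collections import Counter

# Hypothetical raw visit records; fields and values are illustrative only.
visits = [
    {"user": "u1", "course": "Nursing Pharmacology", "age": 19, "hour": 10},
    {"user": "u2", "course": "Nursing Research", "age": 24, "hour": 14},
    {"user": "u3", "course": "Nursing Pharmacology", "age": 21, "hour": 9},
    {"user": "u4", "course": "Dashboard only", "age": 20, "hour": 22},
]

# "What": which courses are visited, and how often?
by_course = Counter(v["course"] for v in visits)

# "Who": how do visitors break down by age group?
by_age = Counter("18-22" if v["age"] <= 22 else "22-26" for v in visits)

# "When": what share of visits falls between 9 AM and 11 PM?
peak = sum(1 for v in visits if 9 <= v["hour"] <= 23) / len(visits)

print(by_course.most_common(1))  # [('Nursing Pharmacology', 2)]
print(by_age)
print(f"{peak:.0%} of visits fall in the 9 AM-11 PM window")
```

Each aggregate answers one of the Who/What/When questions; a relational database would produce the same answers with GROUP BY queries.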
The "Knowledge" of DIKW Model:
Knowledge is the third level of the DIKW model. Knowledge is the appropriate collection of information, organized so that it becomes useful.
The knowledge stage of the DIKW hierarchy is a deterministic process: when someone "memorizes" information because of its usefulness, they can be said to have accumulated knowledge. Each piece of knowledge has useful meaning in itself, but it cannot generate further knowledge on its own.
In an information management system, most of the applications you use, such as modelling and simulation, exercise some form of stored knowledge.
The knowledge step tries to answer the "How" question: specific measures are identified, and the information derived in the previous step is used to answer it.
In our scenario, we must answer the question: "How do student nurses in the 18-22 age group use our modular approach?"

The “Wisdom” of DIKW Hierarchy:


Wisdom is the fourth and final step of the DIKW hierarchy. It is the process of reaching a final result by extrapolating from knowledge. It takes the output of all the previous levels of the DIKW model and processes it through distinctly human judgment (moral and ethical codes, for example).
Wisdom can therefore be thought of as the process by which you decide between right and wrong, good and bad, or among possible improvements.
Put another way, in the wisdom stage the knowledge gained in the previous stage is applied and implemented in practice.
Wisdom is the topmost level in the DIKW pyramid and answers the questions related to "Why".
In our example scenario, one piece of wisdom gained might be the insight that 70% of student nurses visit our modules because they need help with their lessons and technology.
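As a rough sketch, the wisdom step can be pictured as a decision rule applied to the knowledge gathered earlier. The function name and the 70% threshold below are illustrative assumptions for this sketch, not part of the DIKW model itself.

```python
def support_window_decision(peak_share: float, window: tuple) -> str:
    """Illustrative 'wisdom' step: turn knowledge about when students
    visit into an improvement decision. The 70% threshold is an
    assumption for this sketch, not a rule from the DIKW model."""
    if peak_share >= 0.70:
        start, end = window
        return f"Schedule support coverage from {start}:00 to {end}:00"
    return "Keep current support coverage"

# 70% of visits fall between 9 AM and 11 PM in our scenario.
print(support_window_decision(0.70, (9, 23)))
```

The point is not the code but the shape of the step: knowledge (when and why students visit) is combined with human priorities (serving them well) to choose an action.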

Analyzing Organizational Issues Using the DIKW Hierarchy:


Data: A way to identify the raw external inputs such as the facts and figures that are yet to be
interpreted.
Information: Analyze the raw data to determine the organizational needs. An important aspect
of information management is that apart from answering questions it can also help to find other
solutions in organizational contexts.
Knowledge: Determines how something is remembered by an individual or how information is
applied by them.
Wisdom: Uncover why the derived knowledge is applied by individuals in a specific way, i.e., find the reason behind any decision-making.

The Usage and Limitations of DIKW Model:


Like all models, the DIKW model has its limits. You may have noticed that the DIKW hierarchy is quite linear and follows a logical sequence of steps, adding more meaning to the data at every step forward. Reality is often quite different. The knowledge stage, for example, is in practice more than just the next stage after information.
One of the principal critiques of the DIKW pyramid is that, as a hierarchical process, it misses several important aspects of knowledge. In today's world, where we capture and process ever more unstructured data in various ways, we are sometimes forced to bypass steps of DIKW.
Even so, the end result stays the same, as when a data warehouse and big data analytics transform data into decisions and actions (wisdom).

Computer System
1. Computer Hardware
Hardware refers to the physical parts of the computer. It allows the user to enter data into the computer, performs the computer's processing, and produces the computer's output (Kozier, 2016). The size, shape, and type of hardware vary depending on the purpose of the computer. The essential components of computer hardware are the central processing unit (CPU) and the various types of input and output devices.
2. Computer Hardware Systems
The CPU is housed in the box that contains the computer hardware needed to process and store data. The power supply, disk drives, chips, and connections for all other computer hardware (also known as peripherals) are located with the CPU. How fast a CPU performs is determined by three components:

• CPU processor cores and clock speed, typically measured in gigahertz
• The amount of random-access memory (RAM)
• The data-transfer rate of the disk drives

Processor cores and clock speed serve very different functions, but they work toward the same goal. Many computer experts debate which deserves more emphasis when purchasing or selecting a computer, but the two depend on each other equally to help your computer function at its best. Processor cores are individual processing units within the computer's central processing unit (CPU). A core receives instructions for a single computing task, working with the clock speed to process the information quickly and store it temporarily in random-access memory (RAM); permanent information is saved to your hard drive when you request it. Most computers now have multiple processor cores, which let the computer finish multiple tasks at once. Running numerous programs and requesting multiple tasks, such as editing a document while streaming a video and opening a new program, is made possible by multiple processor cores.
A computer's processor clock speed determines how quickly the central processing unit (CPU) can retrieve and interpret instructions, helping your computer complete more tasks by getting them done faster. Clock speeds are measured in gigahertz (GHz); a higher number means a higher clock speed. Multi-core processors were developed to help CPUs run faster as it became more difficult to keep increasing clock speed.
Faster clock speeds mean that tasks ordered from the CPU are completed sooner, making the user's experience seamless and reducing time spent waiting to interact with applications and programs.
According to Sirois (2018), in a review for HP computers, a reasonably fast CPU for average usage runs between 3.5 and 4.0 GHz.
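The role of multiple cores described above can be illustrated with a short Python sketch that spreads a CPU-bound task across the available cores. The prime-counting workload is an arbitrary stand-in for real work.

```python
import concurrent.futures
import os

def count_primes(limit: int) -> int:
    """CPU-bound stand-in task: count the primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # One worker per reported core; os.cpu_count() can return None,
    # so fall back to a single worker.
    workers = os.cpu_count() or 1
    limits = [2_000, 3_000, 4_000, 5_000]
    # Each task runs in its own process, so up to `workers` tasks
    # execute at the same time on separate cores.
    with concurrent.futures.ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(count_primes, limits))
    print(dict(zip(limits, results)))
```

On a single core the four tasks would run one after another; with multiple cores they run side by side, which is exactly the "finish multiple tasks at once" behavior described above.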
3. Open Source and Free Software
Software is the set of instructions given to the hardware to perform certain tasks. Software is classified by availability and shareability into free and open-source software and proprietary (closed) software.
Free and open-source software (FOSS) allows users and programmers to edit, modify or reuse
the software's source code. This gives developers the opportunity to improve program
functionality by modifying it.
The term “free” indicates that the software does not carry copyright constraints. The term “open source” indicates that the source code is available in its project form, enabling expert developers collaborating worldwide to build on the software easily, without any need for reverse engineering.
Free and open-source software may also be referred to as free/libre open-source software
(FLOSS) or free/open-source software (F/OSS).
The basic, long-standing classification of software distinguishes system software from application software.
System software helps the user, the hardware, and application software interact and function together. It provides the environment or platform in which other software and applications work, which is why system software is essential to managing the whole computer system.
When you first power up your computer, the system software is loaded into memory first. Unlike application software, system software is not used directly by end users like you; it runs in the background of your device, at the most basic level, while you use other application software. This is why system software is also called “low-level software”.
Application software, popularly known as “apps”, is what users engage with most of the time. These are productive end-user programs that help you perform tasks, ranging from word processing and image editing to voice communication or conferencing, internet browsers, and many others.
As technology advances, software classification continues to change, but usability remains the main basis for classifying software.
4. Data Assessment
Data quality assessment (DQA) is the process of scientifically and statistically evaluating data in
order to determine whether they meet the quality required for projects or business processes and
are of the right type and quantity to be able to actually support their intended use. It can be
considered a set of guidelines and techniques that are used to describe data, given an application
context, and to apply processes to assess and improve the quality of data.
Data quality assessment (DQA) exposes issues with technical and business data, allowing the organization to plan properly for data cleansing and enrichment strategies. This is usually done to maintain system integrity, quality assurance standards, and compliance.
Generally, technical quality issues such as inconsistent structure and standard issues, missing
data or missing default data, and errors in the data fields are easy to spot and correct, but more
complex issues should be approached with more defined processes.
DQA is usually performed to fix subjective issues related to business processes, such as the
generation of accurate reports, and to ensure that data-driven and data-dependent processes are
working as expected.
DQA processes are aligned with best practices and a set of prerequisites as well as with the five
dimensions of data quality:

• Accessibility
• Accuracy and reliability
• Serviceability
• Methodological soundness
• Assurances of integrity
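The "easy to spot" technical checks mentioned above (missing fields, missing values, wrong data types) can be sketched in a few lines of Python. The record schema below is a made-up example, not a standard DQA schema.

```python
# Expected schema for each record; a made-up example for illustration.
EXPECTED = {"patient_id": str, "age": int, "visit_date": str}

def assess(records):
    """Return (record_index, issue) pairs for simple technical checks:
    missing fields, missing values, and wrong data types."""
    issues = []
    for i, rec in enumerate(records):
        for field, ftype in EXPECTED.items():
            if field not in rec or rec[field] is None:
                issues.append((i, f"missing {field}"))
            elif not isinstance(rec[field], ftype):
                issues.append((i, f"{field} has wrong type"))
    return issues

sample = [
    {"patient_id": "P1", "age": 34, "visit_date": "2023-01-05"},
    {"patient_id": "P2", "age": "34", "visit_date": None},  # two issues
]
print(assess(sample))  # [(1, 'age has wrong type'), (1, 'missing visit_date')]
```

Checks like these handle the simple technical issues; consistency across records and business-rule violations need the more defined processes the text mentions.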

5. Personal, Professional, and Educational Informatics


Personal Informatics
Personal informatics refers to information services, often accessed via a mobile device, that search, sort, mine, correlate, or otherwise filter information for a person based on their preferences, transaction logs, location, social networks, and other personal data.
Professional Informatics
Health informatics professionals use their knowledge of healthcare, information systems, databases, and information technology security to gather, store, interpret, and manage the massive amounts of data generated when care is provided to patients, developing data-driven solutions to improve patient health.
Educational Informatics
Education informatics is an emerging sub-discipline of education and informatics that
"incorporate[s] new technologies and learning strategies to enhance the capture, organization,
and utilization of information within the field of education."
While this sub-discipline typically covers K-12 and higher education, it is easily expanded to
business- and enterprise-level education.
