Data Warehouse Schemas
A schema is a collection of database objects, including tables, views, indexes, and synonyms.
You can arrange schema objects in the schema models designed for data warehousing in a
variety of ways.
Star Schemas:
The star schema (also called star-join schema, data cube, or multi-dimensional schema) is
the simplest style of data warehouse schema. The star schema consists of one or more fact
tables referencing any number of dimension tables.
The facts that the data warehouse helps analyze are classified along different dimensions:
The fact table holds the main data. It includes a large amount of aggregated data, such
as price and units sold. There may be multiple fact tables in a star schema.
Dimension tables, which are usually smaller than fact tables, include the
attributes that describe the facts. Often this is a separate table for each dimension.
Dimension tables can be joined to the fact table(s) as needed.
Dimension tables have a simple primary key, while fact tables have a set of foreign keys
which make up a compound primary key consisting of a combination of relevant dimension
keys.
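A minimal sketch of this structure in SQL, using illustrative table and column names from the sales example (not part of any particular product's schema):
CREATE TABLE DIM_PRODUCT (
  PRODUCT_KEY  NUMBER PRIMARY KEY,
  PRODUCT_NAME VARCHAR2(50)
);
CREATE TABLE DIM_TIME (
  TIME_KEY      NUMBER PRIMARY KEY,
  CALENDAR_DATE DATE
);
-- The fact table's foreign keys to the dimensions form its compound primary key
CREATE TABLE FACT_SALES (
  PRODUCT_KEY  NUMBER REFERENCES DIM_PRODUCT (PRODUCT_KEY),
  TIME_KEY     NUMBER REFERENCES DIM_TIME (TIME_KEY),
  UNITS_SOLD   NUMBER,
  SALES_DOLLAR NUMBER(10,2),
  PRIMARY KEY (PRODUCT_KEY, TIME_KEY)
);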
Advantages:
Provide a direct and intuitive mapping between the business entities being analyzed
by end users and the schema design.
Provide highly optimized performance for typical star queries.
Are widely supported by a large number of business intelligence tools, which may
anticipate or even require that the data warehouse schema contain dimension tables.
Snowflake Schemas:
The snowflake schema is represented by centralized fact tables which are connected to multiple dimensions. In the snowflake schema, dimensions
are normalized into multiple related tables, whereas the star schema's dimensions are
denormalized with each dimension represented by a single table.
Snowflake schemas are often better with more sophisticated query tools that isolate
users from the raw table structures and for environments having numerous queries with
complex criteria.
Advantages:
Some OLAP multidimensional database modeling tools that use dimensional data
marts as data sources are optimized for snowflake schemas.
A snowflake schema can sometimes reflect the way in which users think about data.
Users may prefer to generate queries using a star schema in some cases, although this may or
may not be reflected in the underlying organization of the database.
A multidimensional view is sometimes added to an existing transactional database to
aid reporting. In this case, the tables which describe the dimensions will already exist and
will typically be normalized. A snowflake schema will therefore be easier to implement.
If a dimension is very sparse (i.e. most of the possible values for the dimension have
no data) and/or a dimension has a very long list of attributes which may be used in a query,
the dimension table may occupy a significant proportion of the database and snowflaking
may be appropriate.
Steps in designing a Star Schema:
* Identify a business process for analysis (like sales).
* Identify measures or facts (sales dollar).
* Identify dimensions for facts (product dimension, location dimension, time dimension,
organization dimension).
* List the columns that describe each dimension (region name, branch name, etc.).
A snowflake schema describes a star schema structure normalized through the use of
outrigger tables; i.e., dimension table hierarchies are broken into simpler tables. In the star
schema example we had 4 dimensions (location, product, time, organization) and a fact
table (sales).
In OLAP, the snowflake schema approach increases the number of joins, which can hurt
performance when retrieving data. Some organizations normalize the dimension tables to
save space, but since dimension tables usually hold comparatively little data, the snowflake
schema approach is often avoided.
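As a hedged sketch, snowflaking the product dimension from the example above into an outrigger table might look like this (names are illustrative):
CREATE TABLE DIM_PRODUCT_CATEGORY (
  CATEGORY_KEY  NUMBER PRIMARY KEY,
  CATEGORY_NAME VARCHAR2(50)
);
-- The product dimension now references the outrigger instead of storing the category text itself
CREATE TABLE DIM_PRODUCT (
  PRODUCT_KEY  NUMBER PRIMARY KEY,
  PRODUCT_NAME VARCHAR2(50),
  CATEGORY_KEY NUMBER REFERENCES DIM_PRODUCT_CATEGORY (CATEGORY_KEY)
);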
Fact Table
The centralized table in a star schema is called the FACT table. A fact table typically
has two types of columns: those that contain facts and those that are foreign keys to
dimension tables. The primary key of a fact table is usually a composite key that is
made up of all of its foreign keys.
Eg: "Sales Dollar" is a fact(measure) and it can be added across several dimensions.
Fact tables store different types of measures like additive, non additive and semi
additive measures.
Measure Types:
* Additive - Measures that can be added across all dimensions.
* Non-Additive - Measures that cannot be added across any dimension.
* Semi-Additive - Measures that can be added across some dimensions but not others.
A fact table might contain either detail level facts or facts that have been aggregated
(fact tables that contain aggregated facts are often instead called summary tables). In
the real world, it is possible to have a fact table that contains no measures or facts.
These tables are called Factless Fact tables.
Steps in designing a Fact Table:
* Identify a business process for analysis (like sales).
* Identify measures or facts (sales dollar).
* Identify dimensions for facts (product dimension, location dimension, time
dimension, organization dimension).
* List the columns that describe each dimension (region name, branch name, etc.).
* Determine the lowest level of summary in the fact table (sales dollar).
Informatica Functions
TEST FUNCTIONS
1.1 ISNULL
The ISNULL function returns whether a value is NULL. It is available in the Designer and
the Workflow Manager.
ISNULL( value )
Example : The following example checks for null values in the items table:
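For instance (using the ITEM_NAME port purely as an illustration):
ISNULL( ITEM_NAME )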
1.2 IS_DATE
The IS_DATE function returns whether a string value is a valid date. It is available in the Designer
and the Workflow Manager.
IS_DATE( value )
Example : The following expression checks the INVOICE_DATE port for valid dates:
IS_DATE( INVOICE_DATE )
1.3 IS_NUMBER
The IS_NUMBER function returns whether a string is a valid number. It is available in the Designer
and the Workflow Manager.
IS_NUMBER( value )
Example : The following expression checks the ITEM_PRICE port for valid numbers:
IS_NUMBER( ITEM_PRICE )
1.4 IS_SPACES
The IS_SPACES function returns whether a string value consists entirely of spaces. It is available in
the Designer and the Workflow Manager.
IS_SPACES( value )
Example : The following expression checks the ITEM_NAME port for rows that consist
entirely of spaces:
IS_SPACES( ITEM_NAME )
Special Functions
DECODE
The DECODE function searches a port for a specified value. It is available in the Designer
and the Workflow Manager.
DECODE( value, first_search, first_result [, second_search, second_result ]... [, default ] )
Example: We might use DECODE in an expression that searches for a particular ITEM_ID
and returns the ITEM_NAME:
DECODE( ITEM_ID, 10, 'Flashlight',
14, 'Regulator',
20, 'Knife',
40, 'Tank',
'NONE' )
ITEM_ID RETURN VALUE
10 Flashlight
14 Regulator
17 NONE
4.2 IIF
The IIF function returns one of two values we specify, based on the result of a condition. It
is available in the Designer and the Workflow Manager.
IIF( condition, value1, value2 )
IIF functions can be nested if there is more than one condition to be tested, but it is usually
better to use the DECODE function when the number of conditions is large, since DECODE
is less costly than deeply nested IIF functions. For example, the following DECODE
expression assigns a grade based on MARKS:
DECODE(TRUE,
MARKS>=90,'A',
MARKS>=75,'B',
MARKS>=65,'C',
MARKS>=55,'D',
MARKS>=45,'E',
'F')
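For comparison, the equivalent logic written with nested IIF functions would be:
IIF(MARKS>=90,'A',
IIF(MARKS>=75,'B',
IIF(MARKS>=65,'C',
IIF(MARKS>=55,'D',
IIF(MARKS>=45,'E','F')))))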
When the number of conditions increases, we can appreciate the simplicity of the DECODE
function compared with the complexity of the nested IIF expression.
In both cases, if MARKS>=90 the expression returns 'A' even though the row also satisfies
all the later conditions, because evaluation stops at the first condition that is satisfied.
Therefore, even if a row satisfies two or more conditions, only the first matching one is used,
so the ordering of conditions is important in both IIF and DECODE.
4.3 ERROR:
The ERROR function causes the Informatica Server to skip a record and throws an error
message defined by the user. It is available in the Designer.
ERROR( string )
Example : The following example shows how you can reference a mapping that calculates
theaverage salary for employees in all departments of your company, but skips negative
values. The following expression nests the ERROR function in an IIF expression so that if the
Informatica Server finds a negative salary in the Salary port, it skips the row and displays an
error:
IIF( SALARY < 0, ERROR ('Error. Negative salary found. Row skipped.'), EMP_SALARY )
SALARY RETURN VALUE
10000 10000
IIF( IS_DATE( DATE_PROMISED, 'MM/DD/YY' ), TO_DATE( DATE_PROMISED ), ERROR('Invalid Date') )
4.4 LOOKUP:
The LOOKUP function searches for a particular value in a lookup source column. It is
available in the Designer.
Example : The following expression searches the lookup source :TD.SALES for a
specific item ID and price, and returns the item name if both searches find a match:
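A hedged sketch of such an expression (the item ID and price values are illustrative):
LOOKUP( :TD.SALES.ITEM_NAME, :TD.SALES.ITEM_ID, 10, :TD.SALES.PRICE, 15.99 )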
Date Functions
The date functions use the following date format strings:
D, DD, DDD, DAY, DY, J
Days (01-31). We can use any of these format strings to specify the entire day portion of
a date. For example, if we pass 12-APR-1997 to a date function, we can use any of these
format strings to specify 12.
HH, HH12, HH24
Hour of day (0 to 23), where zero is 12 AM (midnight). We can use any of these formats to
specify the entire hour portion of a date. For example, if we pass the date 12-APR-1997
2:01:32 PM, we can use HH, HH12, or HH24 to specify the hour portion of the date.
MI
Minutes (0 to 59).
MM, MON, MONTH
Month portion of a date (01 to 12). We can use any of these format strings to specify the entire
month portion of a date. For example, if we pass 12-APR-1997 to a date function, we can use
MM, MON, or MONTH to specify APR.
SS, SSSS
Seconds portion of a date (0 to 59).
Y, YY, YYY, YYYY
Year portion of date (1753 to 9999). We can use any of these format strings to specify the
entire year portion of a date. For example, if we pass 12-APR-1997 to a date function, we can
use Y, YY, YYY, or YYYY to specify 1997.
3.1 ADD_TO_DATE
The ADD_TO_DATE function adds a specified amount to one part of a date/time value, and
returns a date in the same format as the specified date. It is available in the Designer and the
Workflow Manager.
ADD_TO_DATE( date, format, amount )
Note: If we do not specify the year as YYYY, the Informatica Server assumes the date is in
the current century.
Example : The following expression adds one month to each date in the DATE_SHIPPED
port. If we pass a value that creates a day that does not exist in a particular month, the
Informatica Server returns the last day of the month. For example, if we add one month to Jan
31 1998, the Informatica Server returns Feb 28 1998.
Also note that ADD_TO_DATE recognizes leap years: adding one month to Jan 29 2000 returns Feb 29 2000.
The following expression subtracts 10 days from each date in the DATE_SHIPPED port:
The following expression subtracts 15 hours from each date in the DATE_SHIPPED port:
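Sketches of the expressions described above (using the standard date format strings):
ADD_TO_DATE( DATE_SHIPPED, 'MM', 1 )
ADD_TO_DATE( DATE_SHIPPED, 'D', -10 )
ADD_TO_DATE( DATE_SHIPPED, 'HH', -15 )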
In ADD_TO_DATE function, if the argument passed evaluates to a date that does not exist in
a particular month, the Informatica Server returns the last day of the month.
3.2 DATE_COMPARE
The DATE_COMPARE function returns a value indicating the earlier of two dates. It is
available in the Designer and the Workflow Manager.
DA
3.3 DATE_DIFF
The DATE_DIFF function returns the length of time between two dates, measured in the
specified increment (years, months, days, hours, minutes, or seconds). It is available in the
Designer and the Workflow Manager.
DATE_DIFF( date1, date2, format )
Example: The following expressions return the number of days between the
DATE_PROMISED and the DATE_SHIPPED ports:
DATE_DIFF( DATE_PROMISED, DATE_SHIPPED, 'D' )
DATE_DIFF( DATE_PROMISED, DATE_SHIPPED, 'DD' )
We can combine DATE functions and TEST functions so as to validate the dates.
For example, while using the DATE functions like DATE_COMPARE and DATE_DIFF, the
dates given as inputs can be validated using the TEST function IS_DATE and then passed to
them if valid.
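For instance, a sketch of such a validation (assuming DATE_PROMISED and DATE_SHIPPED arrive as strings in MM/DD/YYYY format):
IIF( IS_DATE( DATE_PROMISED, 'MM/DD/YYYY' ) AND IS_DATE( DATE_SHIPPED, 'MM/DD/YYYY' ),
DATE_DIFF( TO_DATE( DATE_PROMISED, 'MM/DD/YYYY' ), TO_DATE( DATE_SHIPPED, 'MM/DD/YYYY' ), 'DD' ),
NULL )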
3.4 GET_DATE_PART
The GET_DATE_PART function returns the specified part of a date as an integer value,
based on the default date format of MM/DD/YYYY HH24:MI:SS. It is available in the
Designer and the Workflow Manager.
GET_DATE_PART( date, format )
Example: The following expressions return the day for each date in the DATE_SHIPPED
port:
GET_DATE_PART( DATE_SHIPPED, 'D' )
GET_DATE_PART( DATE_SHIPPED, 'DD' )
3.5 LAST_DAY
The LAST_DAY function returns the date of the last day of the month for each date in a port.
It is available in the Designer and the Workflow Manager.
LAST_DAY( date )
Example : The following expression returns the last day of the month for each date in the
ORDER_DATE port:
LAST_DAY( ORDER_DATE )
The following expression has LAST_DAY and TO_DATE functions nested or combined
together.
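A hedged sketch of such a nested expression (assuming ORDER_DATE arrives as a string; the format string is illustrative):
LAST_DAY( TO_DATE( ORDER_DATE, 'DD-MON-YYYY' ) )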
3.6 MAX
The MAX function returns the latest date found in a group. It is available in the Designer.
Example: The following expression returns the maximum order date for flashlights:
ITEM_NAME ORDER_DATE
Flashlight Apr 20 1998
Regulator System May 15 1998
Flashlight Sep 21 1998
Diving Hood Aug 18 1998
Halogen Flashlight Feb 1 1998
Flashlight Oct 10 1998
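A sketch of the expression itself (MAX takes the date port and an optional filter condition):
MAX( ORDER_DATE, ITEM_NAME='Flashlight' )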
3.7 MIN
The MIN function returns the earliest date found in a group. It is available in the Designer.
ITEM_NAME ORDER_DATE
Flashlight Apr 20 1998
Regulator System May 15 1998
Flashlight Sep 21 1998
Diving Hood Aug 18 1998
Halogen Flashlight Feb 1 1998
Flashlight Oct 10 1998
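A corresponding sketch of the MIN expression for the flashlight rows above:
MIN( ORDER_DATE, ITEM_NAME='Flashlight' )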
3.8 ROUND
The ROUND function rounds one part of a date. It is available in the Designer and the
Workflow Manager.
Example: The following expressions round the month portion of each date in the
DATE_SHIPPED port.
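Sketches of such expressions (rounding the month portion with any of the month format strings):
ROUND( DATE_SHIPPED, 'MM' )
ROUND( DATE_SHIPPED, 'MON' )
ROUND( DATE_SHIPPED, 'MONTH' )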
3.9 SET_DATE_PART
The SET_DATE_PART function sets one part of a date/time value to a specified value. It is
available in the Designer and the Workflow Manager.
Example: The following expressions change the month to June for the dates in the
DATE_PROMISED port. The Informatica Server displays an error when we try to create a
date that does not exist, such as changing March 31 to June 31:
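Sketches of such expressions (any of the month format strings can be used):
SET_DATE_PART( DATE_PROMISED, 'MM', 6 )
SET_DATE_PART( DATE_PROMISED, 'MON', 6 )
SET_DATE_PART( DATE_PROMISED, 'MONTH', 6 )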
Similarly the SET_DATE_PART function can be used to round off Year, Day or Time
portions.
3.10 TRUNC
The TRUNC function truncates dates to a specific year, month, day, hour, or minute. It is
available in the Designer and the Workflow Manager.
Example: The following expressions truncate the year portion of dates in the
DATE_SHIPPED port:
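Sketches of such expressions (any of the year format strings can be used):
TRUNC( DATE_SHIPPED, 'Y' )
TRUNC( DATE_SHIPPED, 'YY' )
TRUNC( DATE_SHIPPED, 'YYY' )
TRUNC( DATE_SHIPPED, 'YYYY' )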
Similarly the TRUNC function can be used to truncate Month , Day or Time portions.
The functions TRUNC & ROUND can be nested in order to manipulate dates.
Filter Transformation
• Active and connected transformation.
We can filter rows in a mapping with the Filter transformation. We pass all the rows from a
source transformation through the Filter transformation, and then enter a Filter condition for
the transformation. All ports in a Filter transformation are input/output and only rows that
meet the condition pass through the Filter Transformation.
Example: to filter records where SAL>2000
• Import the source table EMP in Shared folder. If it is already there, then don’t Import.
• In shared folder, create the target table Filter_Example. Keep all fields as in EMP table.
• Create the necessary shortcuts in the folder.
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give mapping name. Ex: m_filter_example
4. Drag EMP from source in mapping.
5. Click Transformation -> Create -> Select Filter from list. Give name and click Create. Now
click done.
6. Pass ports from SQ_EMP to Filter Transformation.
7. Edit Filter Transformation. Go to Properties Tab
8. Click the Value section of the Filter condition, and then click the Open button.
9. The Expression Editor appears.
10. Enter the filter condition you want to apply.
11. Click Validate to check the syntax of the conditions you entered.
12. Click OK -> Click Apply -> Click Ok.
13. Now connect the ports from Filter to target table.
14. Click Mapping -> Validate
15. Repository -> Save
Create Session and Workflow as described earlier. Run the workflow and see the data in
target table.
How to filter out rows with null values?
To filter out rows containing null values or spaces, use the ISNULL and IS_SPACES
Functions to test the value of the port. For example, if we want to filter out rows that Contain
NULLs in the FIRST_NAME port, use the following condition:
IIF (ISNULL (FIRST_NAME), FALSE, TRUE)
This condition states that if the FIRST_NAME port is NULL, the return value is FALSE and
the row should be discarded. Otherwise, the row passes through to the next Transformation.
Performance tuning:
Filter transformation is used to filter out unwanted rows based on conditions we specify.
1. Use the Filter transformation as close to the source as possible so that unwanted data gets
eliminated sooner.
2. If elimination of unwanted data can be done by the Source Qualifier instead of the Filter,
then eliminate it at the Source Qualifier itself.
3. Use conditional filters and keep the filter condition simple, involving TRUE/FALSE or 1/0.
Expression Transformation
• Passive and connected transformation.
Use the Expression transformation to calculate values in a single row before we write to the
target.
For example, we might need to adjust employee salaries, concatenate first and last names, or
convert strings to numbers.
Use the Expression transformation to perform any non-aggregate calculations.
Example: Addition, Subtraction, Multiplication, Division, Concat, Uppercase conversion,
lowercase conversion etc. We can also use the Expression transformation to test conditional
statements before we output the results to target tables or other transformations.
Example: IIF, DECODE.
There are 3 types of ports in Expression Transformation:
• Input
• Output
• Variable: Used to store any temporary calculation.
Calculating Values : To use the Expression transformation to calculate values for a single
row, we must include the following ports:
• Input or input/output ports for each value used in the calculation: For example: To calculate
Total Salary, we need salary and commission.
• Output port for the expression: We enter one expression for each output port. The return
value for the output port needs to match the return value of the expression. We can enter
multiple expressions in a single Expression transformation. We can create any number of
output ports in the transformation.
Example: Calculating Total Salary of an Employee
• Import the source table EMP in Shared folder. If it is already there, then don’t import.
• In shared folder, create the target table Emp_Total_SAL. Keep all ports as in EMP table
except Sal and Comm in target table. Add Total_SAL port to store the calculation.
• Create the necessary shortcuts in the folder.
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give mapping name. Ex: m_totalsal
4. Drag EMP from source in mapping.
5. Click Transformation -> Create -> Select Expression from list. Give name and click
Create. Now click done.
6. Link ports from SQ_EMP to Expression Transformation.
7. Edit Expression Transformation. As we do not want Sal and Comm in target, remove check
from output port for both columns.
8. Now create a new port out_Total_SAL. Make it as output port only.
9. Click the small button that appears in the Expression section of the dialog box and enter
the expression in the Expression Editor.
10. Enter expression SAL + COMM. You can select SAL and COMM from Ports tab in
expression editor.
11. Check the expression syntax by clicking Validate.
12. Click OK -> Click Apply -> Click Ok.
13. Now connect the ports from Expression to target table.
14. Click Mapping -> Validate
15. Repository -> Save Create Session and Workflow as described earlier. Run the workflow
and see the data in target table.
As COMM is null for many employees, Total_SAL will be null in those cases. Now open the
mapping and the Expression transformation, select the COMM port, and give 0 as its Default
Value. Apply the changes, validate the mapping, and save. Refresh the session and validate
the workflow again. Run the workflow and check the result again. Now use ERROR in the
Default Value of COMM to skip rows where COMM is null.
Syntax: ERROR('Any message here')
Similarly, we can use the ABORT function to abort the session if COMM is null.
Syntax: ABORT('Any message here')
Make sure to double-click the session after making any changes in the mapping. It will
prompt that the mapping has changed; click OK to refresh the mapping. Run the workflow
after validating and saving the workflow.
Performance Tuning:
Expression transformation is used to perform simple calculations and also to do source lookups.
1. Use operators instead of functions.
2. Minimize the usage of string functions.
3. If we use a complex expression multiple times in the Expression transformation, make that
expression a variable port. Then we need to use only this variable for all computations.
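As a hedged sketch of point 3 (the port names V_TOTAL_SAL and OUT_ANNUAL_SAL are illustrative), a variable port can hold the repeated calculation and feed several output ports:
V_TOTAL_SAL (variable port): SAL + IIF( ISNULL( COMM ), 0, COMM )
OUT_TOTAL_SAL (output port): V_TOTAL_SAL
OUT_ANNUAL_SAL (output port): V_TOTAL_SAL * 12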
Router Transformation
• Active and connected transformation.
A Router transformation is similar to a Filter transformation because both transformations
allow you to use a condition to test data. A Filter transformation tests data for one condition
and drops the rows of data that do not meet the condition. However, a Router transformation
tests data for one or more conditions and gives you the option to route rows of data that do
not meet any of the conditions to a default output group.
Example: If we want to keep employees of France, India, US in 3 different tables, then
we can use 3 Filter transformations or 1 Router transformation.
Mapping A uses three Filter transformations while Mapping B produces the same result with
one Router transformation.
A Router transformation consists of input and output groups, input and output ports, group
filter conditions, and properties that we configure in the Designer.
Working with Groups
A Router transformation has the following types of groups:
• Input: The Group that gets the input ports.
• Output: User Defined Groups and Default Group. We cannot modify or delete Output ports
or their properties.
The Default Group: The Designer creates the default group after we create one new user-
defined group. The Designer does not allow us to edit or delete the default group. This group
does not have a group filter condition associated with it. If all of the conditions evaluate to
FALSE, the IS passes the row to the default group.
Example: Filtering employees of Department 10 to EMP_10, Department 20 to EMP_20 and
rest to EMP_REST
• Source is EMP Table.
• Create 3 target tables EMP_10, EMP_20 and EMP_REST in shared folder. Structure should
be same as EMP table.
• Create the shortcuts in your folder.
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give mapping name. Ex: m_router_example
4. Drag EMP from source in mapping.
5. Click Transformation -> Create -> Select Router from list. Give name and
Click Create. Now click done.
6. Pass ports from SQ_EMP to Router Transformation.
7. Edit Router Transformation. Go to Groups Tab
8. Click the Groups tab, and then click the Add button to create a user-defined group. The
default group is created automatically.
9. Click the Group Filter Condition field to open the Expression Editor.
10. Enter a group filter condition. Ex: DEPTNO=10
11. Click Validate to check the syntax of the conditions you entered.
We cannot pass rejected data forward in filter but we can pass it in router. Rejected data is in
Default Group of router.
Sorter Transformation
• Connected and Active Transformation
• The Sorter transformation allows us to sort data.
• We can sort data in ascending or descending order according to a specified sort key.
• We can also configure the Sorter transformation for case-sensitive sorting, and specify
whether the output rows should be distinct.
When we create a Sorter transformation in a mapping, we specify one or more ports as a sort
key and configure each sort key port to sort in ascending or descending order. We also
configure sort criteria the Power Center Server applies to all sort key ports and the system
resources it allocates to perform the sort operation.
The Sorter transformation contains only input/output ports. All data passing through the
Sorter transformation is sorted according to a sort key. The sort key is one or more ports that
we want to use as the sort criteria.
Performance Tuning:
Sorter transformation is used to sort the input data.
1. While using the Sorter transformation, configure the sorter cache size to be larger than the
input data size.
2. At the Sorter transformation, use hash auto keys partitioning or hash user keys partitioning.
Rank Transformation
• Active and connected transformation
The Rank transformation allows us to select only the top or bottom rank of data. It allows us
to select a group of top or bottom values, not just one value.
During the session, the Power Center Server caches input data until it can perform the rank
calculations.
Rank Index
The Designer automatically creates a RANKINDEX port for each Rank transformation. The
Power Center Server uses the Rank Index port to store the ranking position for each row in a
group.
For example, if we create a Rank transformation that ranks the top five salaried employees,
the rank index numbers the employees from 1 to 5.
• The RANKINDEX is an output port only.
• We can pass the rank index to another transformation in the mapping or directly to a target.
• We cannot delete or edit it.
Defining Groups
Rank transformation allows us to group information. For example: If we want to select the
top 3 salaried employees of each Department, we can define a group for Department.
• By defining groups, we create one set of ranked rows for each group.
• We define a group in Ports tab. Click the Group By for needed port.
• We cannot Group By on port which is also Rank Port.
1) Example: Finding Top 5 Salaried Employees
• EMP will be source table.
• Create a target table EMP_RANK_EXAMPLE in target designer. Structure should be same
as EMP table. Just add one more port Rank_Index to store RANK INDEX.
• Create the shortcuts in your folder.
Creating Mapping:
RANK CACHE
Creating Mapping:
Source Qualifier Transformation
• Active and connected transformation.
SQ PROPERTIES TAB
1) SOURCE FILTER:
We can enter a source filter to reduce the number of rows the Power Center Server queries.
Note: When we enter a source filter in the session properties, we override the customized
SQL query in the Source Qualifier transformation.
Steps:
1. In the Mapping Designer, open a Source Qualifier transformation.
2. Select the Properties tab.
3. Click the Open button in the Source Filter field.
4. In the SQL Editor Dialog box, enter the filter. Example: EMP.SAL > 2000
5. Click OK.
Validate the mapping. Save it. Now refresh session and save the changes. Now run the
workflow and see output.
2) NUMBER OF SORTED PORTS:
When we use sorted ports, the Power Center Server adds the ports to the ORDER BY clause
in the default query.
By default it is 0. If we change it to 1, then the data will be sorted by column that is at the top
in SQ. Example: DEPTNO in above figure.
• If we want to sort as per ENAME, move ENAME to top.
• If we change it to 2, then data will be sorted by top two columns.
Steps:
1. In the Mapping Designer, open a Source Qualifier transformation.
2. Select the Properties tab.
3. Enter any number instead of zero for Number of Sorted ports.
4. Click Apply -> Click OK.
Validate the mapping. Save it. Now refresh session and save the changes. Now run the
workflow and see output.
3) SELECT DISTINCT:
If we want the Power Center Server to select unique values from a source, we can use the
Select Distinct option.
• Just check the option in Properties tab to enable it.
4) PRE-SESSION AND POST-SESSION SQL COMMANDS:
• The Power Center Server runs pre-session SQL commands against the source database
before it reads the source.
• It runs post-session SQL commands against the source database after it writes to the target.
• Use a semi-colon (;) to separate multiple statements.
5) USER DEFINED JOIN:
Entering a user-defined join is similar to entering a custom SQL query. However, we only
enter the contents of the WHERE clause, not the entire query.
• We can specify equi join, left outer join and right outer join only. We Cannot specify full
outer join. To use full outer join, we need to write SQL Query.
Steps:
1. Open the Source Qualifier transformation, and click the Properties tab.
2. Click the Open button in the User Defined Join field. The SQL Editor Dialog Box appears.
3. Enter the syntax for the join.
4. Click OK -> Again Ok.
Validate the mapping. Save it. Now refresh session and save the changes. Now run the
workflow and see output.
Join Type Syntax
Equi Join DEPT.DEPTNO=EMP.DEPTNO
Left Outer Join {EMP LEFT OUTER JOIN DEPT ON DEPT.DEPTNO=EMP.DEPTNO}
Right Outer Join {EMP RIGHT OUTER JOIN DEPT ON DEPT.DEPTNO=EMP.DEPTNO}
6) SQL QUERY
For relational sources, the Power Center Server generates a query for each Source Qualifier
transformation when it runs a session. The default query is a SELECT statement for each
source column used in the mapping. In other words, the Power Center Server reads only the
columns that are connected to another Transformation.
In mapping above, we are passing only SAL and DEPTNO from SQ_EMP to Aggregator
transformation. Default query generated will be:
• SELECT EMP.SAL, EMP.DEPTNO FROM EMP
Viewing the Default Query
1. Open the Source Qualifier transformation, and click the Properties tab.
2. Open SQL Query. The SQL Editor displays.
3. Click Generate SQL.
4. The SQL Editor displays the default query the Power Center Server uses to Select source
data.
5. Click Cancel to exit.
Note: If we do not cancel the SQL query, the Power Center Server overrides the default query
with the custom SQL query.
We can enter an SQL statement supported by our source database. Before entering the query,
connect all the input and output ports we want to use in the mapping.
Example: As in our case, we can’t use full outer join in user defined join,
we can write SQL query for FULL OUTER JOIN:
SELECT DEPT.DEPTNO, DEPT.DNAME, DEPT.LOC, EMP.EMPNO, EMP.ENAME,
EMP.JOB, EMP.SAL, EMP.COMM, EMP.DEPTNO FROM EMP FULL OUTER JOIN
DEPT ON DEPT.DEPTNO=EMP.DEPTNO WHERE SAL>2000
• We also added a WHERE clause. We can enter more conditions and write more complex SQL.
We can write any query and join as many tables in one query as required, if all are in the
same database. It is very handy and used in most projects.
Important Points:
• When creating a custom SQL query, the SELECT statement must list the port names in the
order in which they appear in the transformation.
Example: DEPTNO is the top column and DNAME is second in our SQ mapping, so the
SELECT statement must name DEPTNO first, DNAME second, and so on:
SELECT DEPT.DEPTNO, DEPT.DNAME
• Once we have written a custom query like the one above, this query will always be used to
fetch data from the database. In our example, we used WHERE SAL>2000. Now if we use a
Source Filter and give a condition such as SAL > 1000 or any other, it will not work;
Informatica will always use the custom query only.
• Make sure to test the query in the database before using it in SQL Query. If the query does
not run in the database, it will not work in Informatica either.
• Also always connect to the database and validate the SQL in SQL query editor.
Aggregator Transformation
• Active and connected transformation.
1) Aggregate Expressions
• In an Aggregator transformation, the aggregate expressions can contain either multiple
single-level functions or multiple nested functions, but an Aggregator transformation cannot
have both types of functions together.
• An aggregate expression can include one aggregate function nested within another
aggregate function, so MAX( COUNT( ITEM )) is correct.
• MIN( MAX( COUNT( ITEM ))) is not correct, because only one level of nesting is allowed.
Conditional Clauses
We can use conditional clauses in the aggregate expression to reduce the number of rows
used in the aggregation. The conditional clause can be any clause that evaluates to TRUE or
FALSE.
• SUM( COMMISSION, COMMISSION > QUOTA )
Non-Aggregate Functions
We can also use non-aggregate functions, such as IIF, in the aggregate expression.
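For example (a sketch; QUANTITY is an illustrative port), a non-aggregate function can wrap an aggregate result:
IIF( MAX( QUANTITY ) > 0, MAX( QUANTITY ), 0 )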
2) Group By Ports
• The Power Center Server stores data in the aggregate cache until it completes Aggregate
calculations.
• It stores group values in an index cache and row data in the data cache. If the Power Center
Server requires more space, it stores overflow values in cache files.
Note: The Power Center Server uses memory to process an Aggregator transformation with
sorted ports. It does not use cache memory. We do not need to configure cache memory for
Aggregator transformations that use sorted ports.
The index cache holds group information from the group by ports. If we are using Group By
on DEPTNO, then this cache stores values 10, 20, 30 etc.
• All Group By Columns are in AGGREGATOR INDEX CACHE. Ex. DEPTNO
1) Example: To calculate MAX, MIN, AVG and SUM of salary of EMP table.
• EMP will be source table.
• Create a target table EMP_AGG_EXAMPLE in target designer. Table should contain
DEPTNO, MAX_SAL, MIN_SAL, AVG_SAL and SUM_SAL
• Create the shortcuts in your folder.
Creating Mapping:
Joiner Transformation
• Active and connected transformation.
Creating Mapping:
5. Specify the join condition in Condition tab. See steps on next page.
6. Set Master in Ports tab. See steps on next page.
7. Mapping -> Validate
8. Repository -> Save.
• Create Session and Workflow as described earlier. Run the Work flow and see the data in
target table.
• Make sure to give connection information for all tables.
JOIN CONDITION:
The join condition contains ports from both input sources that must match for the Power
Center Server to join two rows.
Example: DEPTNO=DEPTNO1 in above.
1. Edit Joiner Transformation -> Condition Tab
2. Add condition
• We can add as many conditions as needed.
• Only = operator is allowed.
If we join Char and Varchar data types, the Power Center Server counts any spaces that pad
Char values as part of the string. So if you try to join the following:
Char (40) = “abcd” and Varchar (40) = “abcd”
Then the Char value is “abcd” padded with 36 blank spaces, and the Power Center Server
does not join the two fields because the Char field contains trailing spaces.
Note: The Joiner transformation does not match null values.
MASTER and DETAIL TABLES
In Joiner, one table is called as MASTER and other as DETAIL.
• MASTER table is always cached. We can make any table as MASTER.
• Edit Joiner Transformation -> Ports Tab -> Select M for Master table.
Table with less number of rows should be made MASTER to improve Performance.
Reason:
• When the Power Center Server processes a Joiner transformation, it reads rows from both
sources concurrently and builds the index and data cache based on the master rows. The table
with fewer rows is read quickly, so its cache can be built while the table with more rows is
still being read.
• The fewer unique rows in the master, the fewer iterations of the join comparison occur,
which speeds the join process.
JOIN TYPES
In SQL, a join is a relational operator that combines data from multiple tables into a single
result set. The Joiner transformation acts in much the same manner, except that tables can
originate from different databases or flat files.
Types of Joins:
• Normal
• Master Outer
• Detail Outer
• Full Outer
Note: A normal or master outer join performs faster than a full outer or detail outer join.
Example: In EMP, we have employees with DEPTNO 10, 20, 30 and 50. In DEPT, we have
DEPTNO 10, 20, 30 and 40. DEPT will be the MASTER table as it has fewer rows.
Normal Join:
With a normal join, the Power Center Server discards all rows of data from the master and
detail source that do not match, based on the condition.
• All employees of 10, 20 and 30 will be there as only they are matching.
Master Outer Join:
This join keeps all rows of data from the detail source and the matching rows from the master
source. It discards the unmatched rows from the master source.
• All data of employees of 10, 20 and 30 will be there.
• There will be employees of DEPTNO 50 and corresponding DNAME and LOC Columns
will be NULL.
Detail Outer Join:
This join keeps all rows of data from the master source and the matching rows from the detail
source. It discards the unmatched rows from the detail source.
• All employees of 10, 20 and 30 will be there.
• There will be one record for DEPTNO 40 and corresponding data of EMP columns will be
NULL.
Full Outer Join:
A full outer join keeps all rows of data from both the master and detail sources.
• All data of employees of 10, 20 and 30 will be there.
• There will be employees of DEPTNO 50 and corresponding DNAME and LOC Columns
will be NULL.
• There will be one record for DEPTNO 40 and corresponding data of EMP Columns will be
NULL.
JOINER CACHES
Joiner always caches the MASTER table. We cannot disable caching. It builds Index cache
and Data Cache based on MASTER table.
1) Joiner Index Cache:
• All Columns of MASTER table used in Join condition are in JOINER INDEX CACHE.
• Example: DEPTNO in our mapping.
2) Joiner Data Cache:
• Master column not in join condition and used for output to other transformation or target
table are in Data Cache.
• Example: DNAME and LOC in our mapping example.
Performance Tuning:
Sequence Generator Transformation
• Passive and connected transformation.
Example: If EMPNO is the key, we can keep only one record in the target and cannot maintain
history. So we use a surrogate key as the primary key, and not EMPNO.
NEXTVAL:
For example, we might connect NEXTVAL to two target tables in a mapping to generate
unique primary key values.
The sequence for Table 1 will be generated first. Only when Table 1 has been loaded will the
sequence for Table 2 be generated.
CURRVAL:
CURRVAL is NEXTVAL plus the Increment By value.
We typically only connect the CURRVAL port when the NEXTVAL port is already
connected to a downstream transformation.
If we connect the CURRVAL port without connecting the NEXTVAL port, the Integration
Service passes a constant value for each row.
When we connect the CURRVAL port in a Sequence Generator transformation, the
Integration Service processes one row in each block.
We can optimize performance by connecting only the NEXTVAL port in a Mapping.
6. Transformation -> Create -> Select Sequence Generator from list -> Create -> Done
8. Validate Mapping
Setting (Required/Optional): Description
Start Value (Required): Start value of the generated sequence that we want the IS to use if we
use the Cycle option. Default is 0.
Increment By (Required): Difference between two consecutive values from the NEXTVAL
port. Default is 1.
End Value (Optional): Maximum value the Integration Service generates.
Current Value (Optional): First value in the sequence. If the Cycle option is used, the value
must be greater than or equal to the start value and less than the end value.
Cycle (Optional): If selected, the Integration Service cycles through the sequence range.
Ex: Start Value 1, End Value 10 - the sequence will run from 1 to 10 and then start again from 1.
Reset (Optional): By default, the last value of the sequence during the session is saved to the
repository, and the next time the sequence starts from the saved value. If selected, the
Integration Service generates values based on the original current value for each session.
Points to Ponder:
1) If Current Value is 1 and End Value is 10, with no Cycle option, and there are 17 records
in the source: the session will fail.
2) If we connect just CURR_VAL only: the value will be the same for all records.
3) If Current Value is 1 and End Value is 10, with the Cycle option, Start Value 0, and there
are 17 records in the source: the sequence will be 1, 2, ... 10, 0, 1, 2, 3, ...
To make the above sequence 1-10, 1-10, give a Start Value of 1. The Start Value is used only
along with the Cycle option.
If Current Value is 1 and End Value is 10, with the Cycle option and Start Value 1, and there
are 17 records in the source: the session runs and generates 1-10, then 1-7. The value 7 will be
saved in the repository, so if we run the session again the sequence will start from 8.
Use the Reset option if you want to start the sequence from CURR_VAL every time.
Define the Properties available in Sequence Generator transformation in brief.
Ans.
Sequence Generator Properties - Description:
Start Value - Start value of the generated sequence that we want the Integration Service to use
if we use the Cycle option. If we select Cycle, the Integration Service cycles back to this value
when it reaches the end value. Default is 0.
End Value - Maximum value the Integration Service generates. Default is 2147483647.
Current Value - Current value of the sequence. Enter the value we want the Integration
Service to use as the first value in the sequence. Default is 1.
Cycle - If selected, when the Integration Service reaches the configured end value for the
sequence, it wraps around and starts the cycle again, beginning with the configured Start Value.
Reset - Restarts the sequence at the current value each time a session runs.
This option is disabled for reusable Sequence Generator transformations.
SQL Transformation
You can pass the database connection information to the SQL transformation as input data at
run time. The transformation processes external SQL scripts or SQL queries that you create in
an SQL editor. The SQL transformation processes the query and returns rows and database
errors.
When you create an SQL transformation, you configure the following options:
Mode:-
The SQL transformation runs in one of the following modes:
• Script mode. The SQL transformation runs ANSI SQL scripts that are externally located.
You pass a script name to the transformation with each input row. The SQL transformation
outputs one row for each input row.
• Query mode.
The SQL transformation executes a query that you define in a query editor. You can pass
strings or parameters to the query to define dynamic queries or change the selection
parameters. You can output multiple rows when the query has a SELECT statement.
• Passive or active transformation. The SQL transformation is an active transformation by
default. You can configure it as a passive transformation when you create the transformation.
• Database type. The type of database the SQL transformation connects to.
• Connection type. Pass database connection information to the SQL transformation or use a
connection object.
Script Mode
An SQL transformation running in script mode runs SQL scripts from text files. You pass
each script file name from the source to the SQL transformation Script Name port. The script
file name contains the complete path to the script file.
When you configure the transformation to run in script mode, you create a passive
transformation. The transformation returns one row for each input row. The output row
contains results of the query and any database error.
Use the following rules and guidelines for an SQL transformation that runs in script mode:
• You can use a static or dynamic database connection with script mode.
• To include multiple query statements in a script, you can separate them with a semicolon.
• You can use mapping variables or parameters in the script file name.
• The script code page defaults to the locale of the operating system. You can change the
locale of the script.
• The script file must be accessible by the Integration Service. The Integration Service must
have read permissions on the directory that contains the script.
• The Integration Service ignores the output of any SELECT statement you include in the
SQL script. The SQL transformation in script mode does not output more than one row of
data for each input row.
• You cannot use scripting languages such as Oracle PL/SQL or Microsoft/Sybase T-SQL in
the script.
• You cannot use nested scripts where the SQL script calls another SQL script.
• A script cannot accept run-time arguments.
Query Mode
• When you configure the SQL transformation to run in query mode, you create an active
transformation.
• When an SQL transformation runs in query mode, it executes an SQL query that you define
in the transformation.
• You pass strings or parameters to the query from the transformation input ports to change
the query statement or the query data.
You can create the following types of SQL queries in the SQL transformation:
• Static SQL query. The query statement does not change, but you can use query parameters
to change the data. The Integration Service prepares the query once and runs the query for all
input rows.
• Dynamic SQL query. You can change the query statements and the data. The Integration
Service prepares a query for each input row.
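As a hedged sketch (the table and port names are illustrative), a static query with parameter binding might look like the following, where ?DEPTNO? is bound to the transformation's DEPTNO input port at run time:
SELECT EMPNO, ENAME, SAL FROM EMP WHERE DEPTNO = ?DEPTNO?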
Use the following rules and guidelines when you configure the SQL transformation to run in
query mode:
• The number and the order of the output ports must match the number and order of the fields
in the query SELECT clause.
• The native data type of an output port in the transformation must match the data type of the
corresponding column in the database. The Integration Service generates a row error when
the data types do not match.
• When the SQL query contains an INSERT, UPDATE, or DELETE clause, the
transformation returns data to the SQL Error port, the pass-through ports, and the Num Rows
Affected port when it is enabled. If you add output ports the ports receive NULL data values.
• When the SQL query contains a SELECT statement and the transformation has a pass-
through port, the transformation returns data to the pass-through port whether or not the
query returns database data. The SQL transformation returns a row with NULL data in the
output ports.
• You cannot add the "_output" suffix to output port names that you create.
• You cannot use the pass-through port to return data from a SELECT query.
• When the number of output ports is more than the number of columns in the SELECT
clause, the extra ports receive a NULL value.
• When the number of output ports is less than the number of columns in the SELECT clause,
the Integration Service generates a row error.
• You can use string substitution instead of parameter binding in a query. However, the input
ports must be string data types.
After you create the SQL transformation, you can define ports and set attributes in the
following transformation tabs:
• Ports. Displays the transformation ports and attributes that you create on the SQL Ports tab.
• Properties. SQL transformation general properties.
• SQL Settings. Attributes unique to the SQL transformation.
• SQL Ports. SQL transformation ports and attributes.
Note: You cannot update the columns on the Ports tab. When you define ports on the SQL
Ports tab, they display on the Ports tab.
Properties Tab
Configure the SQL transformation general properties on the Properties tab. Some
transformation properties do not apply to the SQL transformation or are not configurable.
Create Mapping :
Step 1: Creating a flat file and importing the source from the flat file.
• Create a Notepad and in it create a table by name bikes with three columns and three
records in it.
• Create one more notepad and name it as path for the bikes. Inside the Notepad just type in
(C:\bikes.txt) and save it.
• Import the source (the second notepad) using Source -> Import from File. We then get a
wizard with three subsequent windows; follow the on-screen instructions to complete the
process of importing the source.
Stored Procedure Transformation
• Passive and connected or unconnected transformation.
We might use a stored procedure to perform the following tasks:
• Check the status of a target database before loading data into it.
• Determine if enough space exists in a database.
• Perform a specialized calculation.
• Drop and recreate indexes. This is the most common use in projects.
Data Passes Between IS and Stored Procedure
One of the most useful features of stored procedures is the ability to send data to the stored
procedure and receive data from it. There are three types of data that pass between the
Integration Service and the stored procedure:
Input/output parameters: Parameters we give as input and the parameters returned from the
stored procedure.
Return values: Most databases provide a return value after running a stored procedure; the
Stored Procedure transformation can capture it like an output parameter.
Status codes: Status codes provide error handling for the IS during a workflow. The stored
procedure issues a status code that notifies whether or not the stored procedure completed
successfully. We cannot see this value; the IS uses it to determine whether to continue running
the session or stop.
Specifying when the Stored Procedure Runs
Normal: The stored procedure runs where the transformation exists in the mapping on a row-
by-row basis. We pass some input to procedure and it returns some calculated values.
Connected stored procedures run only in normal mode.
Pre-load of the Source: Before the session retrieves data from the source, the stored
procedure runs. This is useful for verifying the existence of tables or performing joins of data
in a temporary table.
Post-load of the Source: After the session retrieves data from the source, the stored
procedure runs. This is useful for removing temporary tables.
Pre-load of the Target: Before the session sends data to the target, the stored procedure runs.
This is useful for dropping indexes or disabling constraints.
Post-load of the Target: After the session sends data to the target, the stored procedure runs.
This is useful for re-creating indexes on the database.
Stored Procedures:
Connect to Source database and create the stored procedures given below:
Example: To give input as DEPTNO from DEPT table and find the MAX, MIN, AVG and
SUM of SAL from EMP table.
• DEPT will be source table. Create a target table SP_CONN_EXAMPLE with fields
DEPTNO, MAX_SAL, MIN_SAL, AVG_SAL & SUM_SAL.
• Write Stored Procedure in Database first and Create shortcuts as needed.
Creating Mapping:
Unconnected Stored Procedure Transformation
PROC_RESULT use:
• If the stored procedure returns a single output parameter or a return value, we use the
reserved variable PROC_RESULT as the output variable.
Example: DEPTNO as Input and MAX Sal as output :
:SP.SP_UNCONN_1_VALUE(DEPTNO,PROC_RESULT)
• If the stored procedure returns multiple output parameters, you must create variables for
each output parameter.
Example: DEPTNO as Input and MAX_SAL, MIN_SAL, AVG_SAL and SUM_SAL
as output then:
1. Create four variable ports in the expression: VAR_MAX_SAL, VAR_MIN_SAL,
VAR_AVG_SAL and VAR_SUM_SAL.
2. Create four output ports in the expression: OUT_MAX_SAL, OUT_MIN_SAL,
OUT_AVG_SAL and OUT_SUM_SAL.
3. Call the procedure in the last variable port, say VAR_SUM_SAL:
:SP.SP_AGG (DEPTNO, VAR_MAX_SAL,VAR_MIN_SAL, VAR_AVG_SAL,
PROC_RESULT)
Example 2: DEPTNO as input and MAX_SAL, MIN_SAL, AVG_SAL and SUM_SAL as
output, with a stored procedure to drop the index in Pre-load of the Target and a stored
procedure to create the index in Post-load of the Target.
• DEPT will be source table. Create a target table SP_UNCONN_EXAMPLE with fields
DEPTNO, MAX_SAL, MIN_SAL, AVG_SAL & SUM_SAL.
• Write the Stored Procedure in the Database first and create shortcuts as needed. Stored
procedures are given below to drop and create the index on the target. Make sure to create the
target table first.
Stored Procedures to be created in next example in Target Database:
Create or replace procedure CREATE_INDEX
As
Begin
Execute immediate 'create index unconn_dept on SP_UNCONN_EXAMPLE(DEPTNO)';
End;
/
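The matching procedure to drop the index (referred to above but not shown) might look like this, assuming the same index name:
Create or replace procedure DROP_INDEX
As
Begin
Execute immediate 'drop index unconn_dept';
End;
/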
Transaction Control Transformation
• Active and connected transformation.
A Transaction Control transformation allows us to commit and roll back transactions based
on a set of rows that pass through it. We define a transaction at the following levels:
• Within a mapping. Within a mapping, you use the Transaction Control transformation to
define a transaction. You define transactions using an expression in a Transaction Control
transformation. Based on the return value of the expression, you can choose to commit, roll
back, or continue without any transaction changes.
• Within a session. When you configure a session, you configure it for user-defined commit.
You can choose to commit or roll back a transaction if the Integration Service fails to
transform or write any row to the target.
When you run the session, the Integration Service evaluates the expression for each row that
enters the transformation. When it evaluates a commit row, it commits all rows in the
transaction to the target or targets. When the Integration Service evaluates a roll back row, it
rolls back all rows in the transaction from the target or targets. If the mapping has a flat file
target you can generate an output file each time the Integration Service starts a new
transaction. You can dynamically name each target flat file.
Properties Tab
On the Properties tab, you can configure the following properties:
• Transaction control expression
• Tracing level
Enter the transaction control expression in the Transaction Control Condition field. The
transaction control expression uses the IIF function to test each row against the condition.
Use the following syntax for the expression:
IIF( condition, value1, value2 )
The expression contains values that represent actions the Integration Service performs based
on the return value of the condition. The Integration Service evaluates the condition on a row-
by-row basis. The return value determines whether the Integration Service commits, rolls
back, or makes no transaction changes to the row.
When the Integration Service issues a commit or roll back based on the return value of the
expression, it begins a new transaction. Use the following built-in variables in the Expression
Editor when you create a transaction control expression:
• TC_CONTINUE_TRANSACTION. The Integration Service does not perform any
transaction change for this row. This is the default value of the expression.
• TC_COMMIT_BEFORE. The Integration Service commits the transaction, begins a new
transaction, and writes the current row to the target. The current row is in the new transaction.
• TC_COMMIT_AFTER. The Integration Service writes the current row to the target,
commits the transaction, and begins a new transaction. The current row is in the committed
transaction.
• TC_ROLLBACK_BEFORE. The Integration Service rolls back the current transaction,
begins a new transaction, and writes the current row to the target. The current row is in the
new transaction.
• TC_ROLLBACK_AFTER. The Integration Service writes the current row to the target,
rolls back the transaction, and begins a new transaction. The current row is in the rolled-back
transaction.
Use the following rules and guidelines when you create a mapping with a Transaction Control
transformation:
• If the mapping includes an XML target, and you choose to append or create a new
document on commit, the input groups must receive data from the same transaction control
point.
• Transaction Control transformations connected to any target other than relational, XML, or
dynamic MQSeries targets are ineffective for those targets.
• You must connect each target instance to a Transaction Control transformation.
• You can connect multiple targets to a single Transaction Control transformation.
• You can connect only one effective Transaction Control transformation to a target.
• You cannot place a Transaction Control transformation in a pipeline branch that starts with a
Sequence Generator transformation.
• If you use a dynamic Lookup transformation and a Transaction Control transformation in
the same mapping, a rolled-back transaction might result in unsynchronized target data.
• A Transaction Control transformation may be effective for one target and ineffective for
another target. If each target is connected to an effective Transaction Control transformation,
the mapping is valid.
• Either all targets or none of the targets in the mapping should be connected to an effective
Transaction Control transformation.
Go to the Properties tab and click the down arrow to open the Expression Editor window.
Then, using the Variables tab and the built-in functions, build the condition
IIF(EMPNO=7654, ...) as shown below:
IIF (EMPNO=7654,TC_COMMIT_BEFORE,TC_CONTINUE_TRANSACTION)
• Connect all the columns from the transformation to the target table and save the mapping.
• Select the Metadata Extensions tab. Create or edit metadata extensions for the Transaction
Control transformation.
• Click OK.
Step 3: Create the task and the work flow.
Step 4: Preview the output in the target table.
Lookup Transformation
A Lookup is a Passive, Connected or Unconnected Transformation used to look up data in a
relational table, view, synonym or flat file. The integration service queries the lookup table to
retrieve a value based on the input source value and the lookup condition.
A connected lookup receives source data, performs a lookup and returns data to the pipeline,
while an unconnected lookup is not connected to a source or target and is called by a
transformation in the pipeline through a :LKP expression, which in turn returns only one
column value to the calling transformation.
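For example (a sketch; the lookup name and port are illustrative), an unconnected lookup named LKP_GET_DNAME could be called from an Expression or Update Strategy transformation as:
:LKP.LKP_GET_DNAME( DEPTNO )
The value of the lookup's designated return port is passed back to the calling expression.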
A Lookup can be cached or uncached. If we cache the lookup, we can further choose a static
or dynamic cache, a persistent or non-persistent cache, and a named or unnamed cache.
By default lookup transformations are cached and static.
Lookup Ports Tab
The Ports tab of Lookup Transformation contains
Input Ports:
Create an input port for each lookup port we want to use in the lookup condition. We must
have at least one input or input/output port in a lookup transformation.
Output Ports:
Create an output port for each lookup port we want to link to another transformation. For
connected lookups, we must have at least one output port. For unconnected lookups, we must
select a lookup port as a return port (R) to pass a return value.
Lookup Port:
The Designer designates each column of the lookup source as a lookup port.
Return Port:
An unconnected Lookup transformation has one return port that returns one column of data to
the calling transformation through this port.
Notes:
We can delete lookup ports from a relational lookup if the mapping does not use the lookup
ports which will give us performance gain. But if the lookup source is a flat file then deleting
of lookup ports fails the session.
Now let us have a look on the Properties Tab of the Lookup Transformation
Lookup condition:
The condition to lookup values from the lookup table based on source input data. For
example, IN_EmpNo=EmpNo.
Connection Information:
Query the lookup table from the source or target connection. In case of flat file lookup we can
give the file path and name, whether direct or indirect.
Source Type:
Determines whether the lookup source is a relational database table, a flat file, or a source
qualifier pipeline.
Tracing Level:
It provides the amount of detail in the session log for the transformation. Options available
are Normal, Terse, Verbose Initialization, and Verbose Data.
Lookup cache directory name:
Determines the directory name where the lookup cache files will reside.
Cache File Name Prefix:
The lookup will use this named persistent cache file, based on the base lookup table.
Re-cache From Lookup Source:
When checked, the Integration Service rebuilds the lookup cache from the lookup source when
the lookup instance is called in the session.
Insert Else Update:
Insert the record if not found in cache, else update it. Option is available when using dynamic
lookup cache.
Datetime Format:
Used when source type is file to determine the date and time format of lookup columns.
Thousand Separator:
By default it is None, used when source type is file to determine the thousand separator.
Decimal Separator:
By default it is "."; else we can use ",". Used when the source type is a file, to determine the
decimal separator.
Sorted Input:
Checked whenever we expect the input data to be sorted and is used when the source type is
flat file.
Lookup Source is Static:
When checked, it assumes that the lookup source is not going to change during the session
run.
Union Transformation
• Active and Connected transformation.
Union transformation is a multiple input group transformation that you can use to merge data
from multiple pipelines or pipeline branches into one pipeline branch. It merges data from
multiple sources, similar to the UNION ALL SQL statement, to combine the results from two
or more SQL statements.
• We can create multiple input groups, but only one output group.
• We can connect heterogeneous sources to a Union transformation.
• All input groups and the output group must have matching ports. The precision, data type,
and scale must be identical across all groups.
• The Union transformation does not remove duplicate rows. To remove duplicate rows, we
must add another transformation such as a Router or Filter transformation.
• We cannot use a Sequence Generator or Update Strategy transformation upstream from a
Union transformation.
Creating Mapping:
1. Open folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping-> Create-> Give mapping name. Ex: m_union_example
4. Drag EMP_10, EMP_20 and EMP_REST from source in mapping.
5. Click Transformation -> Create -> Select Union from list. Give name and click Create.
Now click done.
6. Pass ports from SQ_EMP_10 to Union Transformation.
7. Edit Union Transformation. Go to Groups Tab
8. One group will already be there as we dragged ports from SQ_EMP_10 to the Union
Transformation.
9. As we have 3 source tables, we need 3 input groups. Click the Add button to add 2 more
groups. See the sample mapping.
groups. See Sample Mapping
10. We can also modify ports in ports tab.
11. Click Apply -> Ok.
12. Drag target table now.
13. Connect the output ports from Union to target table.
14. Click Mapping -> Validate
15. Repository -> Save
• Create Session and Workflow as described earlier. Run the Workflow and see the data in
target table.
• Make sure to give connection information for all 3 source Tables.
Normalizer Transformation
• Active and Connected Transformation.
• The Normalizer transformation normalizes records from COBOL and relational sources,
allowing us to organize the data.
• Use a Normalizer transformation instead of the Source Qualifier transformation when we
normalize a COBOL source.
• We can also use the Normalizer transformation with relational sources to create multiple
rows from a single row of data.
Example 1: To create 4 records of every employee in EMP table.
• EMP will be source table.
• Create target table Normalizer_Multiple_Records. Structure same as EMP and datatype of
HIREDATE as VARCHAR2.
• Create shortcuts as necessary.
Creating Mapping:
Example 2: To normalize multiple marks columns (ENG, HINDI, MATHS) into multiple
rows, one row per subject.
Source:
Roll_Number Name ENG HINDI MATHS
100 Amit 78 76 90
101 Rahul 76 78 87
102 Jessie 65 98 79
Target :
Roll_Number Name Marks
100 Amit 78
100 Amit 76
100 Amit 90
101 Rahul 76
101 Rahul 78
101 Rahul 87
102 Jessie 65
102 Jessie 98
102 Jessie 79
Update Strategy Transformation
• Active and connected transformation.
What if we want to update, delete or reject rows coming from the source based on some
condition?
Example: If Address of a CUSTOMER changes, we can update the old address or keep both
old and new address. One row is for old and one for new. This way we maintain the historical
data.
Update Strategy is used with Lookup Transformation. In DWH, we create a Lookup on target
table to determine whether a row already exists or not. Then we insert, update, delete or reject
the source record as per business need.
In Power Center, we set the update strategy at two different levels:
1. Within a session
2. Within a Mapping
1. Update Strategy within a session:
When we configure a session, we can instruct the IS to either treat all rows in the same way
or use instructions coded into the session mapping to flag rows for different database
operations.
Session Configuration:
Edit Session -> Properties -> Treat Source Rows as: (Insert, Update, Delete, or Data Driven).
Insert is the default.
Specifying Operations for Individual Target Tables:
You can set the following update strategy options for each target table in the session: Insert,
Delete, Update (Update as Update, Update as Insert, or Update else Insert), and Truncate table.
2. Update Strategy within a Mapping:
Within a mapping, we use the Update Strategy transformation to flag rows for insert, delete,
update, or reject.
Operation Constant Numeric Value
INSERT DD_INSERT 0
UPDATE DD_UPDATE 1
DELETE DD_DELETE 2
REJECT DD_REJECT 3
Frequently, the update strategy expression uses the IIF or DECODE function from the
transformation language to test each row to see if it meets a particular condition.
IIF( ( ENTRY_DATE > APPLY_DATE), DD_REJECT, DD_UPDATE )
Or
IIF( ( ENTRY_DATE > APPLY_DATE), 3, 1 )
• The above expression is written in Properties Tab of Update Strategy T/f.
• DD means DATA DRIVEN
We can configure the Update Strategy transformation to either pass rejected rows to the next
transformation or drop them.
Steps:
1. Create Update Strategy Transformation
2. Pass all ports needed to it.
3. Set the Expression in Properties Tab.
4. Connect to other transformations or target.
Performance tuning:
Java Transformation
Scenario:
Source: PATIENT_PRE_DTL
Target: PATIENT_PRE_TBLS
The source table contains the patient ID, name, and tablets (separated by #) that a doctor has
prescribed for a patient. The data from the source should be populated into the target as
mentioned in the example above: a single row from the source table has to be converted into
multiple rows based on the number of tablets, and loaded into the target table. This is a typical
scenario where we cannot use the Normalizer transformation, since we have no information
about the number of occurrences (the number of tablets prescribed may vary from patient to
patient).
// Row-level Java code for the Java transformation: split the Tablets port value
// on '#' and generate one output row per tablet.
String str = Tablets;
String[] temp;
String delimiter = "#";
temp = str.split(delimiter);
for (int i = 0; i < temp.length; i++) {
    Tablets = temp[i];
    generateRow();
}