Teradata
You should have a message a few lines earlier indicating that a prior execution of MLOAD against this table was started but not completed (nor cleaned up), so MLOAD thinks this should be a restart. But the script you are executing isn't the same as the one that failed, so UTY1005 says it can't autorestart either. See the MultiLoad manual sections on Terminating / Restarting jobs.
As mentioned above, any change to the script while restarting against the same log table results in the UTY1005 "script altered" error.
select BILL_NO,
       max(case when Bill_Month = 'Apr' then MNTH_MRC end) as April,
       max(case when Bill_Month = 'May' then MNTH_MRC end) as May,
       max(case when Bill_Month = 'Jun' then MNTH_MRC end) as June,
       max(case when Bill_Month = 'Jul' then MNTH_MRC end) as July
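For this conditional aggregation to return one row per bill, the statement presumably continues with a FROM clause and a GROUP BY on BILL_NO. A minimal sketch, assuming a source table named monthly_bills (a placeholder):

select BILL_NO,
       max(case when Bill_Month = 'Apr' then MNTH_MRC end) as April,
       max(case when Bill_Month = 'May' then MNTH_MRC end) as May,
       max(case when Bill_Month = 'Jun' then MNTH_MRC end) as June,
       max(case when Bill_Month = 'Jul' then MNTH_MRC end) as July
from monthly_bills   -- placeholder table name
group by BILL_NO;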
2051,joe,blue,07/23
2052,John,green,07/21
2052,Rick,green,07/23
Please tell me how to achieve this.
I have very little time to figure this out, so am posting here :). Thank you all.
Hi,
We can use UNION ALL, as below:
select
id,
name,
fav_color,
date
From Table_A1
union all
select
id,
updated_column,
old_value,
update_dt
from table_a2
3
4
6
output:
v2    newV1
2     2
3     3
4     3
6     5
If a matching value is found, the column should join on the matching value, else on the nearest lower value.
But this results in a product join, which might be OK if you've got another join condition.
I prefer the following approach:
UNION both columns, find the last value using an OLAP function, and then join back to both tables:
select v2,
max(v1)
over (order by coalesce(v1,v2), v2
rows unbounded preceding) as newV1
from
(
select i as v1, null as v2 from table1
union
select null as v1, i as v2 from table2
) as dt
qualify v2 is not null
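To actually join back and retrieve the full rows, the mapping above can be wrapped as a derived table. A minimal sketch, assuming both tables expose the value in a column named i as in the query:

select t2.i as v2,
       t1.i as matched_v1
from (
    select v2,
           max(v1) over (order by coalesce(v1, v2), v2
                         rows unbounded preceding) as newV1
    from (
        select i as v1, null as v2 from table1
        union
        select null as v1, i as v2 from table2
    ) as dt
    qualify v2 is not null
) as m
join table2 t2 on t2.i = m.v2       -- exact value from table2
join table1 t1 on t1.i = m.newV1    -- matching or nearest lower value from table1
;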
The following are important tips for choosing the primary index.
1. Data distribution.
Analyze the number of distinct values in the table. A primary index column with few nulls and many distinct values will give better performance.
2. Access frequency.
The column should be frequently used in the WHERE clause during row selection, and frequently used in joins.
3. Volatility.
The column's values should not change frequently.
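As a sketch of these criteria in DDL (all names here are hypothetical): cust_id is highly distinct, rarely updated, and used in joins and WHERE clauses, so it makes a good primary index, while a low-cardinality column like region_cd would not:

create table sales_db.customer
(
  cust_id    integer not null,   -- many distinct values, few nulls, stable
  cust_name  varchar(100),
  region_cd  char(2)             -- few distinct values: poor PI candidate
)
unique primary index (cust_id);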
Query and performance tuning is not something you can get off the shelf; it depends on the query. But since you asked for basic query tuning steps, I will lay out a few things. Keep in mind that over-collection of statistics, or unnecessary stats collection, will have a negative impact.
Pick the queries that are worth tuning: for instance, queries with high CPU, IO, and spool consumption, or skewed queries (PJI and UII greater than 3). Also check whether these queries are run frequently.
Checkpoints:
1. Having stale or obsolete statistics is much worse than having no stats.
2. Refresh your stats and make sure you have stats collected on the WHERE and ON criteria. Make sure you have stats collected on your NUSIs.
3. Check the queries for missed join conditions.
4. Check for UNION operations that could be UNION ALL, and for a redundant DISTINCT on top.
5. Assuming you have done all this and your query is still performing poorly, I would recommend checking your explain plan for key indicators of poor performance: redistribution, no confidence, product joins, and updates of primary index columns (I've seen sites doing this; data redistribution is costly in an update operation).
6. If your explain plan shows redistributions, make sure you have a proper join condition in place; there may be implicit data type conversion in the joins, you might have missed a join criterion, or you are not joining on a PI or at least a proper column.
7. If you have a runaway query chewing up resources, make sure your join criteria are at least of the same data type, with stats in place. Make sure you have an equi-join rather than a Cartesian product, and try to keep the query simple rather than piling on constraint criteria (predicates), which will throw off the row-retrieval estimates.
I only listed a very few; there are many ways of looking at tuning. You can implement global or join indexes, hash indexes, etc. But one common starting point is your explain plan. Hope this helps in your performance tuning efforts.
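As a concrete starting point for checkpoints 2 and 5, a sketch (table and column names are placeholders):

-- Refresh stats on the join / WHERE columns (hypothetical names).
collect statistics on sales_db.orders column (cust_id);
collect statistics on sales_db.orders column (order_dt);

-- Then read the plan: look for redistributions, "no confidence"
-- estimates, and product joins.
explain
select o.cust_id, sum(o.order_amt)
from sales_db.orders o
join sales_db.customer c
  on o.cust_id = c.cust_id
where o.order_dt >= date '2015-01-01'
group by o.cust_id;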
Why is this query taking such a long time? If I remove the qualify Row_number() Over (Order by SLSTY) between 1000 and 2000 clause and replace it with an Order by SLSTY clause, it runs pretty fast (less than 10 secs).
Select
Business,
Item_no,
Brand,
Vendor,
Sum(sls_ty) slsty,
Sum(sls_ly),
Sum(sls_reg_ty)
from database_name1.IP_Table_Name
Group by
Business,
Item_no,
Brand,
Vendor
qualify Row_number() Over (Order by SLSTY) between 0 and 1000
Unicode vs Latin
Hi,
I see that Teradata uses UNICODE for data dictionary tables or system tables and Latin for user data.
May I know the reasons and advantages of doing this?
SOUNDEX
Returns a character string that represents the Soundex code for string_expression.
The following process outlines the Soundex coding guide:
3. Assign the following number to the remaining letters after the first letter:
1 = B, F, P, V
2 = C, G, J, K, Q, S, X, Z
3 = D, T
4 = L
5 = M, N
6 = R
4. If two or more letters with the same code are adjacent in the original name, or adjacent except for any intervening H or W, omit all but the first.
5. Convert to the form letter, digit, digit, digit, adding trailing zeros if there are fewer than three digits.
6. Drop the rightmost digits if there are more than three digits.
7. Names with adjacent letters having the same equivalent number are coded as one letter with a single number.
Surname prefixes are generally not used.
Statement
SELECT SOUNDEX(12345);
SELECT SOUNDEX('b');
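Worked examples of the rules above (results per the standard Soundex algorithm):

SELECT SOUNDEX('Smith');   -- S530: keep S; m=5, drop i; t=3; drop h; pad with 0
SELECT SOUNDEX('Robert');  -- R163: keep R; drop o; b=1; drop e; r=6; t=3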
Thanks.
Similar to this: Select '$' || x'0A' || '$';
It does not work between J and O (both upper and lower case) and Zz. I think if one has to use it more often, either create a table of HEX-to-ASCII values and do a lookup on that table; for that matter, it can be a CHAR-to-ASCII table. It is a one-time INSERT for all the ASCII values, but it can be reused. Or just create the UDF function once and for all. Anyway, I was only looking for a short-term solution at this time.
Thanks
Here is a non-UDF solution:

select col1,
       case substring(char2hexint(col1) from 1 for 1)
         when '0' then 0 when '1' then 1 when '2' then 2 when '3' then 3
         when '4' then 4 when '5' then 5 when '6' then 6 when '7' then 7
         when '8' then 8 when '9' then 9 when 'A' then 10 when 'B' then 11
         when 'C' then 12 when 'D' then 13 when 'E' then 14 when 'F' then 15
       end * 16
     + case substring(char2hexint(col1) from 2 for 1)
         when '0' then 0 when '1' then 1 when '2' then 2 when '3' then 3
         when '4' then 4 when '5' then 5 when '6' then 6 when '7' then 7
         when '8' then 8 when '9' then 9 when 'A' then 10 when 'B' then 11
         when 'C' then 12 when 'D' then 13 when 'E' then 14 when 'F' then 15
       end as asciival
from mytable;

*** Query completed. 12 rows found. 2 columns returned.
*** Total elapsed time was 1 second.

col1  asciival
----  --------
A           65
B           66
C           67
D           68
E           69
F           70
G           71
H           72
I           73
J           74
K           75
L           76
SEL
  CASE
    WHEN DB.COL1 IS NOT NULL THEN ' ' || DB.COL1
    ELSE ''
  END
FROM TABLEA DB
) DT (COL1)
If that column contains any non-Latin characters, TRANSLATE will fail; you might add WITH ERROR to replace bad chars with an error character (hex '1A'):
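A minimal sketch (the column and table names are placeholders):

select translate(txt_col using unicode_to_latin with error)
from mytable;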
There's no ISO8859_1 or LATIN1_0A character set in Teradata, only Latin and Unicode.
A session character set might be LATIN1_0A; in that case the Unicode data is automatically converted.
But if you've got Unicode data, why do you want to convert it to Latin?
Dieter
INSTR
The following query returns the result 20, indicating the position of 'ch' in 'chip'. This is the second occurrence of 'ch', with the search starting from the second character of the source string.
SELECT INSTR('choose a chocolate chip cookie','ch',2,2);
rb.dvr_srnm as Driver_Last,
cast(ph.paph_fin_trans_ref_id as decimal(19,0)) as refid,
fin_tran.paymt_mdia_proc_sys_cde as Settlement, *****
fin_tran.prim_acct_frst_six_dgt_nbr as First_Six ******
from
rfs_rv.pre_applied_pymts_hdr ph
join
rfs.stns s on ph.pymt_stn_id = s.stn_id
join
rfs.mthd_of_pymts mp on ph.mop_mop_cd = mp.mop_cd
join
rfs_rv.pre_applied_pymts_det pd on ph.pymt_id = pd.pap_pymt_id
join
paymt.fin_tran ft on fin_tran.fin_tran_ref_id =cast(ph.paph_fin_trans_ref_id as decimal(19,0))
left outer join (
select
ra.rnt_agr_nbr,
ra.ecr_ticket_no,
ra.ecre_rent_cntrct_nbr,
ra.ecr_lgcy_resv_nbr,
ra.co_tmsp,
ra.ci_tmsp,
sto.grp_brn_id as ChkOutStn,
sti.grp_brn_id as ChkInStn,
dr.dvr_srnm,
dr.dvr_frst_name
from
rfs_rv.rnt_agrs ra
join
rfs.stns sto on ra.sta_stn_id_orig_co = sto.stn_id
join
rfs.stns sti on ra.sta_stn_id_orig_co = sti.stn_id
join
rfs_rv.dvr_rras dr on ra.rnt_agr_nbr = dr.rdy_rnt_agr_nbr
where
dr.main_dvr_flg = 'MR'
) rb on pd.ticket_no = rb.ecr_ticket_no
where
mp.mop_desc = ?
and ph.CR_CARD_NBR = ?
and ph.pymt_dt between '2015-05-30 00:00:00' and '2015-06-26 23:59:59'
and ph.cust_nbr = ?
Hard to be sure without being able to explain/run the SQL, but it looks like you have paymt.fin_tran aliased as ft and you are referring to it in your select list as fin_tran.
And in the ON condition: fin_tran.fin_tran_ref_id instead of ft.fin_tran_ref_id.
This one is where the error is coming from. Once you fix that, the one RGlass pointed out in the select list would add an unconstrained join of fin_tran to your query, making it likely to get incorrect answers and run a very long time.
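A sketch of just the corrected references (everything else unchanged):

-- Select list: use the declared alias, not the base table name.
ft.paymt_mdia_proc_sys_cde as Settlement,
ft.prim_acct_frst_six_dgt_nbr as First_Six

-- ON condition: same fix.
join paymt.fin_tran ft
  on ft.fin_tran_ref_id = cast(ph.paph_fin_trans_ref_id as decimal(19,0))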
Thank you for your advice. That did the trick.
101|12-25-1986|24/10/2008
102|01-23-1982|28/11/2006
.IMPORT VARTEXT '|' FILE=C:ABC.txt;
.REPEAT *
USING
emp_id (VARCHAR(3)),
emp_dob (VARCHAR(10)),
emp_doj (VARCHAR(10))
INSERT INTO my_db.my_emp_tb
values
(
:emp_id,
:emp_dob,
:emp_doj
);
==> ERROR 2666 : Invalid date supplied.
HTH.
Cheers.
Carlos.
.SET ECHOREQ ON
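The 2666 error comes from the date strings not matching the DATE columns' expected format. A sketch of a fix, assuming emp_dob arrives as MM-DD-YYYY and emp_doj as DD/MM/YYYY, as in the sample rows:

INSERT INTO my_db.my_emp_tb
VALUES
(
  :emp_id,
  CAST(:emp_dob AS DATE FORMAT 'MM-DD-YYYY'),   -- e.g. 12-25-1986
  CAST(:emp_doj AS DATE FORMAT 'DD/MM/YYYY')    -- e.g. 24/10/2008
);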
Partition Name wise Row Count of a partitioned table
How do I find partition_name, count(*) for a partitioned table?
I want the row count for each partition, and the count of rows that fall in no partition.
There's no partition name, just a number:
select partition, count(*) from tab group by 1 order by 1
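To see what a partition number actually covers, the range of the partitioning column can be shown alongside the count. A sketch, assuming a table partitioned by a date column order_dt (placeholder names):

select partition,
       min(order_dt) as from_dt,   -- range covered by this partition number
       max(order_dt) as to_dt,
       count(*) as row_cnt
from sales_db.orders
group by 1
order by 1;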