Wednesday, 24 August 2016

DataTypes


select value from v$parameter where name = 'db_block_size';


Number DataType

Oracle Database stores numeric data in variable-length format. Each value is stored in scientific notation, with 1 byte used to store the exponent and up to 20 bytes to store the mantissa. The resulting value is limited to 38 digits of precision. Oracle Database does not store leading and trailing zeros. For example, the number 412 is stored in a format similar to 4.12 x 10^2, with 1 byte used to store the exponent (2) and 2 bytes used to store the three significant digits of the mantissa (4, 1, 2). Negative numbers include the sign in their length.
Taking this into account, the column size in bytes for a particular numeric data value NUMBER(p), where p is the precision of a given value, can be calculated using the following formula:
ROUND((length(p) + s) / 2) + 1
where s equals zero if the number is positive, and s equals 1 if the number is negative.
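
A quick way to verify the formula is the VSIZE function, which returns the number of bytes a value actually occupies (DUMP shows the raw bytes as well). For 412, length(p) = 3 and s = 0, so ROUND(3/2) + 1 = 3 bytes:

select vsize(412) from dual;   -- expected: 3
select dump(412) from dual;    -- e.g. Typ=2 Len=3: 194,5,13 (exponent byte + two base-100 mantissa bytes)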
Date & Time DataType
Oracle Database uses its own internal format to store dates. Date data is stored in fixed-length fields of 7 bytes each, corresponding to century, year, month, day, hour, minute, and second (BC/AD dates are encoded in the century and year bytes).
Date - 7 bytes
Timestamp (fractional seconds, default precision 6) - 7 to 11 bytes, e.g. 24-AUG-2016 08:14:38.321983 PM
Timestamp with Time Zone - 13 bytes
Timestamp with Local Time Zone - 11 bytes
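
The 7-byte layout is easy to see with DUMP; a small illustration using a hypothetical DATE_DEMO table (the exact bytes depend on the value inserted; for 24-AUG-2016 20:14:38 they would be century+100, year+100, month, day, hour+1, minute+1, second+1):

create table date_demo (d date);
insert into date_demo values (sysdate);
select dump(d) from date_demo;   -- e.g. Typ=12 Len=7: 120,116,8,24,21,15,39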

Friday, 29 April 2016

Compound Triggers - Avoid the Mutating Table Error


1. A compound trigger makes it easier to program an approach where you want the actions you implement for the various timing points to share common data. To achieve the same effect with simple triggers, you had to model the common state with an ancillary package.

2. Compound triggers can be used to avoid the mutating-table error (a sketch follows the example below).

Ref. http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/triggers.htm#CHDFEBFJ

CREATE OR REPLACE TRIGGER compound_trigger
  FOR UPDATE OF salary ON employees
  COMPOUND TRIGGER

  -- Declarative part (optional).
  -- Variables declared here have firing-statement duration.
  threshold CONSTANT SIMPLE_INTEGER := 200;

  BEFORE STATEMENT IS
  BEGIN
    NULL;
  END BEFORE STATEMENT;

  BEFORE EACH ROW IS
  BEGIN
    NULL;
  END BEFORE EACH ROW;

  AFTER EACH ROW IS
  BEGIN
    NULL;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    NULL;
  END AFTER STATEMENT;

END compound_trigger;
/

The database executes all triggers of the same type before executing triggers of a different type. If you have multiple triggers of the same type on the same table, and the order in which they execute is important, use the FOLLOWS clause.
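
To illustrate point 2, here is a minimal sketch of the classic pattern, assuming the standard HR EMPLOYEES table (DEPARTMENT_ID, SALARY); the trigger and variable names are only illustrative. The row-level timing point merely records the affected departments in a collection declared in the trigger's global section, and the statement-level timing point, where EMPLOYEES is no longer mutating, is free to query the table:

CREATE OR REPLACE TRIGGER check_dept_salary
  FOR UPDATE OF salary ON employees
  COMPOUND TRIGGER

  -- Global declarations: shared by all timing points, firing-statement duration.
  TYPE dept_tab_t IS TABLE OF employees.department_id%TYPE INDEX BY PLS_INTEGER;
  g_depts dept_tab_t;
  g_cnt   PLS_INTEGER := 0;
  g_avg   employees.salary%TYPE;

  AFTER EACH ROW IS
  BEGIN
    -- No query against EMPLOYEES here, so no mutating-table error.
    g_cnt          := g_cnt + 1;
    g_depts(g_cnt) := :NEW.department_id;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    -- At statement level the table is no longer mutating, so querying it is safe.
    FOR i IN 1 .. g_cnt LOOP
      SELECT AVG(salary) INTO g_avg
        FROM employees
       WHERE department_id = g_depts(i);
      DBMS_OUTPUT.PUT_LINE('Dept ' || g_depts(i) || ' avg salary now ' || g_avg);
    END LOOP;
    g_depts.DELETE;
    g_cnt := 0;
  END AFTER STATEMENT;

END check_dept_salary;
/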

Thursday, 28 April 2016

Reverse Key Index

These are B*Tree indexes whereby the bytes in the key are reversed. Reverse key indexes can be used to obtain a more even distribution of index entries throughout an index that is populated with increasing values.

For example, if I am using a sequence to generate a primary key, the sequence will generate values like 987500, 987501, 987502, and so on. These sequence values are monotonic, so if I were using a conventional B*Tree index, they would all tend to go to the same right-hand side block, thus increasing contention for that block. With a reverse key index, Oracle will logically index 005789, 105789, 205789, and so on instead. Oracle reverses the bytes of the data to be stored before placing them in the index, so values that would have been next to each other in the index before the byte reversal will instead be far apart. This reversing of the bytes spreads the inserts into the index structure out over many blocks.
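
Creating one only requires the REVERSE keyword; a small sketch assuming a table T with a sequence-populated ID column (the names are illustrative):

create index t_id_rev_idx on t (id) reverse;

-- An existing conventional B*Tree index can also be converted:
alter index t_id_idx rebuild reverse;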



Reverse key indexes address a specific problem but may in turn introduce a number of problems themselves. One problem is the simple fact that index entries are no longer sorted in their natural order. Value 123456 is no longer adjacent to value 123457 in the index structure; they are likely to be found in completely different leaf blocks. Therefore a range predicate (such as BETWEEN 123450 AND 123460) can no longer be satisfied by a single index probe; Oracle would be forced to search for each specific value separately, as each value in the range is likely to be in a different leaf block.

This makes it all just too difficult and troublesome for the Cost Based Optimizer (CBO). As a result, the CBO ignores reverse key indexes when processing range predicates (e.g. BETWEEN, LIKE); only equality predicates can use them.

Monday, 25 April 2016

10053 Event to Trace CBO Choices


Why autotrace (explain plan) sometimes does not show the actual plan used:
  1. explain plan does not do bind variable peeking; a real query would.
  2. explain plan sees all binds as varchar2's - in real life they are numbers, dates, varchar2's and so on. Hence, implicit conversions may or may not be observed by explain plan that would be observed in real life.
  3. explain plan does a hard parse and uses the statistics/dynamic samples that are IN PLACE RIGHT NOW. A query that was run for real "a little while ago" or "a little while from now" might see different inputs.
  4. explain plan uses the current operational environment to come up with a plan. You might be explaining in a FIRST_ROWS session, whereas the query was actually executed in an ALL_ROWS session (or any one of the optimizer parameters could be different).
  5. any combination of 1, 2, 3, 4...


refer: https://asktom.oracle.com/pls/asktom/f?p=100:11:0%3A%3A%3A%3AP11_QUESTION_ID:63445044804318
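
If what you want is the plan that was actually used, one option in 10g and later is to run the statement and then pull its plan from the library cache with DBMS_XPLAN.DISPLAY_CURSOR; a small sketch reusing the BIG_TABLE example from below:

select /*+ gather_plan_statistics */ * from big_table where object_id = 55;

-- With SET SERVEROUTPUT OFF, null/null picks up the last statement run in this session.
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));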


Note that this trace event is undocumented, unsupported, and works only in certain circumstances.

The trace files it produces provide information to support analysts that can be useful in diagnosing optimizer-related issues. However, an example will show you what you might expect to see in the trace file generated by this event. In order to generate the CBO trace file, we need to set an event and then simply parse a query. One technique is as follows:

big_table@ORA920> ALTER SESSION SET EVENTS 
  2 '10053 trace name context forever, level 1';
Session altered.

big_table@ORA920> explain plan for 
  2 select * from big_table where object_id = 55;
Explained.
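
Once the statement has been parsed, the event can be switched off again so that subsequent statements are not traced:

ALTER SESSION SET EVENTS '10053 trace name context off';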



Now we are ready to inspect the trace file. You can use the same technique outlined in 
the "TKPROF" section of Chapter 2 to get a trace filename for your session (see that 
chapter for details if you do not know how to identify your session's trace file). Upon 
exiting SQL*Plus and editing the trace file, you will see something like this:

/usr/oracle/ora920/OraHome1/admin/ora920/udump/ora920_ora_23183.trc
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
ORACLE_HOME = /usr/oracle/ora920/OraHome1
System name:    Linux
Node name:  tkyte-pc-isdn.us.oracle.com
Release:    2.4.18-14
Version:    #1 Wed Sep 4 13:35:50 EDT 2002
Machine:    i686
Instance name: ora920
Redo thread mounted by this instance: 1
Oracle process number: 18
Unix process pid: 23183, image: oracle@tkyte-pc-isdn.us.oracle.com 
This is just the standard trace file header. The interesting stuff comes next:
*** SESSION ID:(15.1158) 2003-01-26 16:54:53.834
QUERY
explain plan for select * from big_table where object_id = 55
***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************
OPTIMIZER_FEATURES_ENABLE = 9.2.0
OPTIMIZER_MODE/GOAL = Choose
 lots chopped out here
DB_FILE_MULTIBLOCK_READ_COUNT = 16
_NEW_SORT_COST_ESTIMATE = TRUE
_GS_ANTI_SEMI_JOIN_ALLOWED = TRUE
_CPU_TO_IO = 0
_PRED_MOVE_AROUND = TRUE
***************************************

 There, you will find the parameters that affect the CBO. The parameters that start with _ 
are undocumented parameters. Most of the other parameters are discussed in the previous 
sections. These are the parameters you have free access to modify as you see fit.

Caution: Never set _ parameters in a real system without the expressed guidance and 
consent of support. This is not just me "stating the company line" here; this is for 
real. Undocumented parameters have unanticipated side effects (they could be seriously 
damaging to your data, your security, and many other things). Undocumented parameters 
change in meaning from release to release. Every time you upgrade, you need to ask 
support, "Do I still need this undocumented parameter?" I will not discuss the meaning, 
range of values, or use of any of these undocumented parameters.

Next in the trace file you'll find the base statistical information used:

BASE STATISTICAL INFORMATION
***********************
Table stats    Table: BIG_TABLE   Alias: BIG_TABLE
  TOTAL ::  CDN: 1833857  NBLKS:  22188  AVG_ROW_LEN:  84
-- Index stats
  INDEX NAME: OBJECT_ID_IDX  COL#: 4
    TOTAL :: LVLS: 2  #LB: 4202  #DK: 1833792  LB/K: 1  DB/K: 1  CLUF: 21921
  INDEX NAME: OBJECT_TYPE_IDX  COL#: 6
    TOTAL :: LVLS: 2  #LB: 5065  #DK: 27  LB/K: 187  DB/K: 2414  CLUF: 65187
_OPTIMIZER_PERCENT_PARALLEL = 0

You'll see this for all of the referenced objects in the query. This in itself can help 
you diagnose the problem right away. If you know that the cardinality of the table 
BIG_TABLE is really 1 and the number of blocks is 1, right here you can see what the 
problem is. The optimizer sees cardinality (CDN) of 1,833,857 rows and 22,188 blocks 
(NBLKS). So, the statistics would be out of date if there were really just one row. 
Here's a quick explanation of what the abbreviations mean:

CDN - Cardinality; a count of rows.
NBLKS - Number of blocks.
AVG_ROW_LEN - The computed average row length.
COL# - Column numbers in the table that the index is on (select * from user_tab_columns where column_id = 4 and table_name = 'BIG_TABLE' would reveal the name of the column in this case).
LVLS - Number of levels in the B*Tree.
#LB - Number of leaf blocks.
#DK - Number of distinct keys.
LB/K - Number of leaf blocks per key value, on average.
DB/K - Number of base table data blocks per key value; how many table accesses (logical I/Os) would be made using an equality predicate on this index. Directly related to the clustering factor below.
CLUF - Clustering factor of this index; a measure of how sorted the base table is with respect to this index.
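
The same figures can be cross-checked against the dictionary; a small sketch for the BIG_TABLE example above, using the standard USER_TABLES and USER_INDEXES columns:

select num_rows, blocks, avg_row_len
  from user_tables
 where table_name = 'BIG_TABLE';

select index_name, blevel, leaf_blocks, distinct_keys,
       avg_leaf_blocks_per_key, avg_data_blocks_per_key, clustering_factor
  from user_indexes
 where table_name = 'BIG_TABLE';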

Many times, a quick scan of this information can pinpoint the problem right away.