
Question

This technote provides troubleshooting information for the situation where your table data is corrupted and you do not have a recent good backup to restore your data. It describes a method for dumping the table data.

Answer

You can dump table data by using the /DDEL option of the db2dart command. The database must be offline to run the db2dart command.

This example extracts data from the EMPLOYEE table, which resides in the default USERSPACE1 tablespace (tablespace ID 2), in the SAMPLE database in a DB2 V9.7 environment.
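
If you are unsure of the IDs to enter at the db2dart prompt, the table ID and tablespace ID can be looked up in the catalog beforehand; for example (a sketch using the TABLEID and TBSPACEID catalog columns, run while the database is still online):

$ db2 "SELECT TABLEID, TBSPACEID FROM SYSCAT.TABLES WHERE TABSCHEMA = CURRENT SCHEMA AND TABNAME = 'EMPLOYEE'"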

$db2dart sample /DDEL 

Table object data formatting start.
Please enter
Table ID or name, tablespace ID, first page, num of pages:
EMPLOYEE,2,0,5

14 of 14 columns in the table will be dumped.
Column numbers and datatypes of the columns dumped:
0 CHAR() -FIXED LENGTH CHARACTER STRING
1 VARCHAR() -VARIABLE LENGTH CHARACTER STRING
2 CHAR() -FIXED LENGTH CHARACTER STRING
3 VARCHAR() -VARIABLE LENGTH CHARACTER STRING
4 CHAR() -FIXED LENGTH CHARACTER STRING
5 CHAR() -FIXED LENGTH CHARACTER STRING
6 DATE
7 CHAR() -FIXED LENGTH CHARACTER STRING
8 SMALLINT
9 CHAR() -FIXED LENGTH CHARACTER STRING
10 DATE
11 DECIMAL
12 DECIMAL
13 DECIMAL
Default filename for output data file is TS2T6.DEL,
do you wish to change filename used? y/n
n

Filename used for output data file is TS2T6.DEL. If existing file, data will be appended to it.

Formatted data being dumped ...
Dumping Page 0 ....
Table object data formatting end.

The requested DB2DART processing has completed successfully!
Complete DB2DART report found in:
/home/v97fp2/sqllib/db2dump/DART0000/SAMPLE.RPT


Note: The above example dumps 5 pages of data from the EMPLOYEE table. If you are not sure of the number of pages, use a large number such as 99999 so that all pages of the table are dumped.


Sample contents of the output file (called TS2T6.DEL in this example), found in /home/v97fp2/sqllib/db2dump/DART0000, are as follows:

$cat TS2T6.DEL 

"000010","CHRISTINE","I","HAAS","A00","3978",19950101,"PRES ",18,"F",19630824,+0152750.00,+0001000.00,+0004220.00
.....

"200340","ROY","R","ALONZO","E21","5698",19970705,"FIELDREP",16,"M",19560517,+0031840.00,+0000500.00,+0001907.00



http://www-01.ibm.com/support/docview.wss?uid=swg21509964



Question

Why does DB2 not reuse/rename the (full-size) transaction logs even after they have been successfully archived?

Cause

If a log file is archived and contains no open transactions, the DB2 database manager does not delete the file but renames it as the next log file when such a file is needed. This process improves performance because creating a new log file instead of renaming the file would require all pages to be written out to guarantee that the necessary disk space or other storage space is available.

The database manager retains up to 8 extra log files in the active log path for renaming purposes.

The aim is to have an existing log file available for renaming when a heavy workload arrives:
(1) at that point, log consumption ramps up very quickly;
(2) creating a new log file is a high-cost operation, since DB2 needs to write the log file out in advance; otherwise, the disk space might not actually be allocated to DB2.

Answer

You can collect "db2pd -db <database> -dpsdbcb" data.

Please provide the output file of the db2pd command to IBM DB2 Service so that the rename array can be checked for confirmation.
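
For example, using SAMPLE as a placeholder database name and redirecting the output to a file for IBM Support:

$ db2pd -db SAMPLE -dpsdbcb > db2pd_dpsdbcb.out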


http://www-01.ibm.com/support/docview.wss?uid=swg21644647



Question

How do you identify the type of backup taken and the logs needed to rollforward the database using the db2ckbkp utility in DB2?

Answer

When the history information is not available for a backup image, the db2ckbkp utility can be used to identify whether the backup was taken online or offline, and which logs are needed to rollforward the database.

You can use the db2ckbkp utility with the -h option and look for the "Backup Mode" parameter in the media header information. If the "Backup Mode" parameter is set to "0", the backup was taken offline; if it is set to "1", the backup was taken online. Furthermore, in case of an online backup, you can see if the logs are included by looking at the "Includes Logs" parameter.
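
For example, assuming a hypothetical backup image file named after the usual alias.type.instance.node.catalog-node.timestamp.sequence convention:

$ db2ckbkp -h SAMPLE.0.db2inst1.NODE0000.CATN0000.20100730130237.001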

=====================
MEDIA HEADER REACHED:
=====================
Server Database Name           -- SAMPLE
Server Database Alias          -- SAMPLE
Client Database Alias          -- SAMPLE
Timestamp                      -- 20100730130237
Database Partition Number      -- 0
Instance                       -- db2inst1
Sequence Number                -- 1
Release ID                     -- D00
Database Seed                  -- A05E5954
DB Comment's Codepage (Volume) -- 0
DB Comment (Volume)            --                               
DB Comment's Codepage (System) -- 0
DB Comment (System)            --                               
Authentication Value           -- -1
Backup Mode                    -- 1
Includes Logs                  -- 1
Compression                    -- 1
Backup Type                    -- 0
Backup Gran.                   -- 0
Status Flags                   -- 20
System Cats inc                -- 1
Catalog Partition Number       -- 0
DB Codeset                     -- UTF-8
DB Territory                   -- 
LogID                          -- 1279748926
LogPath                        -- /home/db2inst1/db2inst1/NODE0000/SQL00001/SQLOGDIR/
Backup Buffer Size             -- 524288
Number of Sessions             -- 1
Platform                       -- 12


If the backup was taken online, you can use the db2ckbkp utility with the -l and -o options to identify which logs are required to rollforward the database. Under the LOG.FILE.HEADER section, the "Head Extent ID" line gives you the first log required, and the second number on the "Backup End Marker" line gives you the last log required.
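
For example, against the same hypothetical image file as above:

$ db2ckbkp -l -o SAMPLE.0.db2inst1.NODE0000.CATN0000.20100730130237.001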

BufAddr  PoolID Token Type     Offset   FileSize ObjectSize   OrigSize Object Name
-------- ------ ----- ---- ---------- ---------- ---------- ---------- -----------
00000000:     0     0    8          0      12288      12288          0 "LOG.FILE.HEADER"

          Consistency Counter one = 2245
               Log Format Version = 10
                  Database Log ID = 1279748926
      Number of Primary Log Files = 10
    Number of Secondary Log Files = 2
                    Log File Size = 32
           Transaction Table Size = 128
                          baseLso = 43857761
                       lowtranlsn = 0000000002A431DC
                       minbufflsn = 0000000002A18528
                          headlsn = 0000000002A18528
              hdrSNextExtToRename = 0
                   Head Extent ID = 55
            Last Commit Timestamp = 0  1970-01-01-00.00.00 GMT
                   Last Extent ID = 1280505751
                         Base TID = 00000000038B
                    Database seed = 2690537812
         Log Chain Wrap Indicator = 0
                    rawLogDevSize = 0
                 rawLogHeadExtNum = 0
               Past Log File Size = 32, starting from extent 0
               Past Log File Size = 0, starting from extent 0
               Past Log File Size = 0, starting from extent 0
                    Log File Path = Default
                   Dual Log Flags = 00000000
             Mirror Log File Path = 
                         Last CId = 1280505739
          Soft checkpoint maximum = 131072
                   Log File Flag1 = 00004020
                   Log File Flag2 = 00000000
                   Log File Flag3 = 00000000
                      lfhLogAtBkp = (0, 0, 0)
                     FirstArchNum = 56
                    FirstArchNum2 = SQLPG_EXTNUM_UNDEFINE
                    RenameArchNum = 50
     On-line Backup End Timestamp = 1280505777
           Recoverable Start Time = 1280503612
   Rollforward stoptime in effect = 0
             Backup End Timestamp = 0
                Backup End Marker = 1280505777, 78, 0000000002D0324C
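
In this example, the Head Extent ID is 55 and the second number on the "Backup End Marker" line is 78. Assuming the standard S0000055.LOG-style log file naming, logs S0000055.LOG through S0000078.LOG would therefore be required to roll forward through the end of this backup.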


Please refer to the db2ckbkp Information Center link in the 'Related URL' section below to obtain further details on how to use this utility.


Related information

db2ckbkp - Check backup command


http://www-01.ibm.com/support/docview.wss?uid=swg21442128



Question

Why does the DB2 optimizer sometimes choose an access plan which uses an index scan followed by a full table scan instead of just a full table scan?

After gathering a query explain, you may see in certain cases that the access plan shows an index scan that is joined with a full table scan. 

For example (with a table that has only 300 rows):

          300
        TBSCAN
        (   4)
        26.0089
           |
          300
         SORT
        (   5)
        26.0086
           2
           |
          300
         FETCH
        (   6)
        25.9339
           2
       /---+----\
    300          300
  IXSCAN       TABLE:
  (   7)       TBL1
 0.152007
     0
     |
    300
  INDEX:
TBL1_INDEX1

Answer

In certain cases the Optimizer will choose to do a full index scan and join it with the table scan, because there may be 'EMPTY' pages in the table. Performing an index scan can help avoid these empty pages, as opposed to a full table scan.

To calculate the number of empty pages in a table, query the FPAGES and NPAGES columns in the SYSCAT.TABLES view and then subtract the NPAGES value (the number of pages that contain rows) from the FPAGES value (the total number of pages in use). Empty pages can occur when entire ranges of rows are deleted.
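
For example, a minimal sketch of this calculation for a hypothetical table MYSCHEMA.TBL1 (note that these catalog statistics are only as current as the last RUNSTATS):

$ db2 "SELECT TABNAME, FPAGES, NPAGES, (FPAGES - NPAGES) AS EMPTY_PAGES FROM SYSCAT.TABLES WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'TBL1'"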

As the number of empty pages increases, so does the need for table reorganization. Reorganizing a table reclaims empty pages and reduces the amount of space that a table uses. In addition, because empty pages are read into the buffer pool during a table scan, reclaiming unused pages can improve scan performance. The quickest solution is to REORG the table to remove the empty pages.
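
For example (schema and table names are placeholders; refreshing statistics afterwards keeps the optimizer informed):

$ db2 "REORG TABLE myschema.tbl1"
$ db2 "RUNSTATS ON TABLE myschema.tbl1 WITH DISTRIBUTION AND DETAILED INDEXES ALL"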


http://www-01.ibm.com/support/docview.wss?uid=swg21656021


Problem (Abstract)

Slow transaction commit times may be seen occasionally when the recovery history file grows very large.

Cause

Slow transaction commit times may sometimes be caused by a slowdown when writing to the history file.

If the database history file has grown large (symptoms may start appearing at 10 MB and up), it might start delaying commits by a few seconds.

In general, the mechanism of commit is as follows:
1) The thread doing the commit puts in the request to write the log record and waits.
2) If MINCOMMIT=1, the db2loggw thread takes the record and writes it to disk.
3) If the end of the log file is reached, db2loggw requests a new log file to be set up, and db2loggr fulfills the request.
4) db2loggr has to record in the history file that the log file has been cut.
5) It also writes the same info to the backup file.
6) db2loggr sends a request to db2logmgr to archive the log file.
7) db2loggw resumes writing.

If the history file has become so large that it can no longer be held in memory, DB2 has to write to disk to update it. Because this is done serially, all the records must be read in a serial fashion before appending to the history file, and this can cause a slowdown.

Resolving the problem

1) Prune the History File as much as possible using the PRUNE HISTORY command and see if the performance becomes consistent.

2) Consider lowering the value of the REC_HIS_RETENTN database configuration parameter.
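
For example (the timestamp and retention value are placeholders to adapt to your environment):

$ db2 "PRUNE HISTORY 20130101 AND DELETE"
$ db2 "UPDATE DB CFG FOR <database> USING REC_HIS_RETENTN 90"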


http://www-01.ibm.com/support/docview.wss?uid=swg21646712&myns=swgimgmt&mynp=OCSSEPGG&mync=E


Question

Sometimes users might notice that their sessions are stuck in the "Commit Active" state for a period of time before the state goes away. What could be the possible reasons for that?

Cause

A session in the "Commit Active" state is performing the commit of a unit of work but is unable to finish it yet for some reason.

Answer

Out of many possible reasons, the following are some common ones:

- The database is using HADR, and the standby side is hitting a momentary HADR log-receive-buffer-full situation.

- Network issues affecting HADR communications between primary and standby

- The database has a very large history file

- Disk bottleneck at the storage side where the database transaction logs are kept


To identify an HADR receive-buffer-full situation, run "db2pd -hadr" on the standby side while the "Commit Active" state is present on the primary side, and see what the StandByRcvBufUsed field shows. If it is near 100%, it can cause the "Commit Active" situation on the primary side. Consider increasing the DB2_HADR_BUF_SIZE registry setting to address the issue.
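
For example (SAMPLE is a placeholder database name), run the following on the standby while the primary shows "Commit Active":

$ db2pd -db SAMPLE -hadr | grep -i StandByRcvBufUsed

If the value is consistently near 100%, a larger receive buffer can be set; the value is in units of log pages (4 KB), and the new size takes effect only after HADR is restarted:

$ db2set DB2_HADR_BUF_SIZE=4096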

If a network issue is suspected between two sides of HADR pair then that should be taken to the network administrator to check on possible network bottleneck or any other kinds of network issues.

To check whether the history file has become too big, look for the db2rhist.asc file under the database directory. If it is big (for example, more than 100 MB or so), try to prune it down.

A bottleneck in the disk containing the active log directory or the mirror log path is the most frequent cause of the "Commit Active" situation. Usually, iostat output collected while "Commit Active" is in place can show whether a disk bottleneck exists. For example, under the AIX operating system, collecting "iostat -TDla 1 30" might show the "%tm" column reaching near 100% for the disk containing the log paths. If %tm is 100% and all the data flow on that disk shows zero, that indicates a momentarily stalled disk.



http://www-01.ibm.com/support/docview.wss?uid=swg21650932&myns=swgimgmt&mynp=OCSSEPGG&mync=E



Question

You might encounter locking issues that require you to monitor the status of the database. Why don't snapshot monitoring commands and administrative views, such as SYSIBMADM.LOG_UTILIZATION, work in these locking situations?

Cause

Some snapshot monitoring commands or administrative views acquire DB2 internal latches for resource serialization. These latches might be owned by EDUs on the problematic database.

Answer

The MON_GET_* table functions, such as MON_GET_TABLESPACE, and the MONREPORT module do not require a DB2 internal latch.

Unfortunately, the MON_GET_TRANSACTION_LOG table function is not available in releases prior to DB2 Version 10.1. However, you can calculate used log space from the db2pd -db DBNAME -dbcfg -logs -transactions output.
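
On DB2 V10.1 and later, a quick check might look like the following sketch (TOTAL_LOG_USED and TOTAL_LOG_AVAILABLE are reported in bytes; -2 requests all members):

$ db2 "SELECT MEMBER, TOTAL_LOG_USED, TOTAL_LOG_AVAILABLE FROM TABLE(MON_GET_TRANSACTION_LOG(-2)) AS T"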

The following represents the calculation of the total available log space.

LOGDATAPAGE = 4096 (LOGPAGE) - 20 (OVERHEADBYTES) = 4076

Total available log space
= LOGFILSIZ * LOGDATAPAGE * ( LOGPRIMARY + LOGSECOND )


The following represents the calculation of the total used log space.

Total used log space
= Current LSO - StartLSO of First active log file + Compensation log space


Where "First active log Number" is the number in the "First active log file" name, found in the db2pd -dbcfg output, and "Compensation log space" is the total of all "LogSpace" values in db2pd -transactions output.

The following is an example of how to query the "Total available log space" and "Total used log space" with the SYSIBMADM.LOG_UTILIZATION administrative view.

$ date; db2 "select char(DB_NAME,16) DB_NAME, LOG_UTILIZATION_PERCENT, TOTAL_LOG_USED_KB, TOTAL_LOG_AVAILABLE_KB, TOTAL_LOG_USED_TOP_KB, DBPARTITIONNUM from SYSIBMADM.LOG_UTILIZATION"

Wed Mar 6 02:38:04 EST 2013

DB_NAME          LOG_UTILIZATION_PERCENT TOTAL_LOG_USED_KB    TOTAL_LOG_AVAILABLE_KB TOTAL_LOG_USED_TOP_KB DBPARTITIONNUM
---------------- ----------------------- -------------------- ---------------------- --------------------- --------------
DB01                               90.80                 7402                    749                  7402              0

1 record(s) selected.



In the following example, the Total available log space calculation equates to 8152K and the Total used log space equates to 7402K, based on the db2pd output.

The following is the "Total available log space" and "Total used log space", calculated from the db2pd output.

LOGDATAPAGE = 4096 - 20 = 4076

Total available log space
= LOGFILSIZ * LOGDATAPAGE * ( LOGPRIMARY + LOGSECOND )
= 256 * 4076 * 8
= 8347648 = 8152K

Total used log space
= Current LSO - StartLSO of First active log file + Compensation log space
= Current LSO 46082460 - StartLSO 41803457 + all LogSpace (1320458 + 1980634)
= 4279003 + (1320458 + 1980634) = 7580095 = 7402.4365234375K = 7402K

TOTAL_LOG_AVAILABLE_KB = 8152K - 7402K = 750K


$ date; db2pd -db DB01 -dbcfg -logs -transactions

Wed Mar 6 02:38:06 EST 2013
...

Database Member 0 -- Database DB01 -- Active -- Up 0 days 00:11:51 -- Date 03/06/2013 02:38:07

Database Configuration Settings:
...

LOGBUFSZ (4KB) 64 64
LOGFILSIZ (4KB) 256 256
LOGPRIMARY 5 5
LOGSECOND 3 3
First active log file S0000001.LOG S0000001.LOG
...

Database Member 0 -- Database DB01 -- Active -- Up 0 days 00:11:52 -- Date 03/06/2013 02:38:08

Logs:
Current Log Number 5
Pages Written 25
Cur Commit Disk Log Reads 0
Cur Commit Total Log Reads 0
Method 1 Archive Status Success
Method 1 Next Log to Archive 5
Method 1 First Failure n/a
Method 2 Archive Status n/a
Method 2 Next Log to Archive n/a
Method 2 First Failure n/a
Log Chain ID 0
Current LSO 46082460
Current LSN 0x0000000000046772

Address StartLSN StartLSO State Size Pages Filename
0x0700000063BB0940 000000000003A183 41803457 0x00000000 256 256 S0000001.LOG
0x0700000063BB11C0 000000000003D0A4 42846913 0x00000000 256 256 S0000002.LOG
0x0700000063BB1A40 0000000000040146 43890369 0x00000000 256 256 S0000003.LOG
0x0700000063BB22C0 00000000000431E9 44933825 0x00000000 256 256 S0000004.LOG
0x0700000063BB00C0 000000000004628B 45977281 0x00000000 256 256 S0000005.LOG
0x0700000070FBC580 0000000000000000 47020737 0x00000000 256 256 S0000006.LOG
0x0700000070F44100 0000000000000000 48064193 0x00000000 256 256 S0000007.LOG
0x0700000070FD00C0 0000000000000000 49107649 0x00000000 256 256 S0000008.LOG

Database Member 0 -- Database DB01 -- Active -- Up 0 days 00:11:52 -- Date 03/06/2013 02:38:08

Transactions:
Address AppHandl [nod-index] TranHdl Locks State Tflag Tflag2 Firstlsn Lastlsn Firstlso Lastlso LogSpace SpaceReserved TID AxRegCnt GXID ClientUserID ClientWrkstnName ClientApplName ClientAccntng
0x07000000606AF380 28 [000-00028] 3 20002 WRITE 0x00000000 0x00000000 0x000000000003A2C0 0x000000000003F16D 41862316 43550312 1320458 3008538 0x000000000139 1 0 n/a n/a n/a n/a
0x07000000606B4480 29 [000-00029] 4 0 READ 0x00000000 0x00000000 0x0000000000000000 0x0000000000000000 0 0 0 0 0x0000000000F2 1 0 n/a n/a n/a n/a
0x07000000606B9580 30 [000-00030] 5 0 READ 0x00000000 0x00000000 0x0000000000000000 0x0000000000000000 0 0 0 0 0x000000000150 1 0 n/a n/a n/a n/a
0x07000000606BE680 31 [000-00031] 6 0 READ 0x00000000 0x00000000 0x0000000000000000 0x0000000000000000 0 0 0 0 0x0000000000F4 1 0 n/a n/a n/a n/a
0x07000000606C3780 32 [000-00032] 7 0 READ 0x00000000 0x00000000 0x0000000000000000 0x0000000000000000 0 0 0 0 0x0000000000F6 1 0 n/a n/a n/a n/a
0x07000000606C8880 33 [000-00033] 8 0 READ 0x00000000 0x00000000 0x0000000000000000 0x0000000000000000 0 0 0 0 0x0000000000F7 1 0 n/a n/a n/a n/a
0x07000000606CD980 34 [000-00034] 9 0 READ 0x00000000 0x00000000 0x0000000000000000 0x0000000000000000 0 0 0 0 0x0000000000F8 1 0 n/a n/a n/a n/a
0x07000000606D2A80 35 [000-00035] 10 0 READ 0x00000000 0x00000000 0x0000000000000000 0x0000000000000000 0 0 0 0 0x0000000000F9 1 0 n/a n/a n/a n/a
0x07000000606D7B80 36 [000-00036] 11 0 READ 0x00000000 0x00000000 0x0000000000000000 0x0000000000000000 0 0 0 0 0x0000000000FA 1 0 n/a n/a n/a n/a
0x07000000606DCC80 37 [000-00037] 12 0 READ 0x00000000 0x00000000 0x0000000000000000 0x0000000000000000 0 0 0 0 0x0000000000FC 1 0 n/a n/a db2evml_DB2DETAILDEADLOCK n/a
0x07000000606E1D80 39 [000-00039] 13 30003 WRITE 0x00000000 0x00000000 0x000000000003F16E 0x0000000000046771 43550396 46082376 1980634 4512698 0x000000000101 1 0 n/a n/a n/a n/a

References:
Space requirements for log files: http://pic.dhe.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.admin.dbobj.doc/doc/c0004930.html

db2pd - Monitor and troubleshoot DB2 database command: http://pic.dhe.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.admin.cmd.doc/doc/r0011729.html


http://www-01.ibm.com/support/docview.wss?uid=swg21626153



Technote (FAQ)


Question

Are there recommended AIX® Virtual Memory Manager settings for DB2® database product?

Answer

While it is difficult to prescribe a standard recommendation for all virtual memory manager tunables on AIX systems running the DB2 product, there are some important specific settings where this is possible. This document focuses primarily on parameters which have non-default settings and are considered "Best Practice". These settings apply to all versions of DB2 product running on AIX 5.3, 6.1, and 7.1, and also apply to systems using both cached and non-cached (Direct I/O) file system I/O.

Recommendations:

The "Best Practice" recommendation for DB2 product running on AIX systems is :

maxperm%=90
maxclient%=90
minperm%=3
lru_file_repage=0

You can implement these settings with the following command, which is dynamic and takes effect immediately: 

vmo -p -o maxperm%=90 -o maxclient%=90 -o minperm%=3 -o lru_file_repage=0


Optional: 
page_steal_method=1 

Note that maxperm%, maxclient%, lru_file_repage, and page_steal_method are restricted tunables on AIX 6.1 and 7.1, and you should not tune these restricted tunables without first consulting AIX support.

You can view the current settings by executing the command "vmo -a". On AIX 6.1 and 7.1, you can view the current settings for restricted tunables by executing the command "vmo -a -F". You can view a subset of settings, such as minperm%, maxperm%, and maxclient%, by executing the command "vmstat -v"; you do not require root authority to issue this command.

Definitions:

Computational memory - Application memory (such as DB2), kernel memory

Permanent memory - File cache, such as JFS, JFS2, NFS, and GPFS.

Client memory - JFS2, NFS, and GPFS, which means all permanent memory except JFS.

maxperm% - the threshold above which permanent memory pages are evicted in favor of computational memory pages. The default value of this setting is 80% for AIX 5L and 90% for AIX 6.1 and 7.1, but it is now a restricted setting on AIX 6.1 and 7.1.

maxclient% - the threshold above which client memory pages are evicted in favor of computational memory pages. The default value of this setting is 80% for AIX 5L and 90% for AIX 6.1 and 7.1, but it is now a restricted setting on AIX 6.1 and 7.1.

minperm% - the threshold below which computational memory pages are evicted in favor of permanent memory pages. The default value of this setting is 20% for AIX 5L and 3% for AIX 6.1 and 7.1.

lru_file_repage - when the number of permanent memory pages (numperm) falls between minperm and maxperm, or the number of client memory pages falls between minperm and maxclient, this setting indicates whether repaging rates are considered when deciding to evict permanent or computational memory pages. Setting this to 0 tells AIX to ignore repaging rates and favor evicting permanent memory pages, thus keeping more computational memory in RAM. The default value for AIX 5L is 1 (true), which means repaging rates are considered by default. The default value for AIX 6.1 and 7.1 is 0 (false), but it is now a restricted setting.

page_steal_method - selects the AIX LRU scanning method. "0" is the default value for AIX 5.3 and tells AIX to use the legacy page frame table method. "1" is the default value for AIX 6.1 and 7.1 and tells AIX to use a list-based scanning method. This is a restricted tunable on AIX 6.1 and 7.1.

Explanation:

Since the lru_file_repage tunable and related AIX behavior changes were introduced, the recommended settings of key virtual memory manager parameters have changed. The older strategy for DB2 product running on AIX systems was to reduce file cache usage with a combination of moderately low maxperm or maxclient settings, such as 15-30%, and a much lower minperm setting, such as 5-10%. This strategy was required to prevent the paging out of computational memory on systems with a very active file cache.

The new tuning strategy takes advantage of lru_file_repage, and it includes settings which apply to all DB2 products running on AIX systems. These settings allow for a large filesystem cache where desirable, but not at the expense of paging out computational memory. Following these guidelines allows DB2 to take maximum advantage of available system memory without risking the effects of unnecessary paging due to file cache activity. It should be noted that the "lru_file_repage=0" setting is especially important for the optimal performance of DB2's Self-Tuning Memory Manager (STMM). STMM assumes that file pages above minperm can be used for other purposes, such as bufferpools, without paging out computational memory.

The new virtual memory manager settings are a general AIX recommendation and are not restricted to systems running the DB2 product. These settings are also the default settings on AIX 6.1 and 7.1. They have been adopted by the IBM Smart Analytics System and InfoSphere Balanced Warehouse and are now the recommended best practice for all DB2 products running on AIX systems.

An optional setting, "page_steal_method=1", has been included as an acceptable setting for DB2 product running on AIX systems. It can improve AIX performance by switching to list-based LRU scanning, and this is also the default on AIX 6.1 and 7.1. However, the benefits for DB2 product running on AIX systems are not as widespread as the other tunables, and it is not strictly a "Best Practice".

Other Resources:

For details on the AIX concepts discussed in this technote, please see the following links :
http://www.ibm.com/developerworks/aix/library/au-vmm/
http://publib.boulder.ibm.com/infocenter/systems/index.jsp
http://www.redbooks.ibm.com/redbooks/pdfs/sg246039.pdf


http://www-01.ibm.com/support/docview.wss?uid=swg21328602

Test scenario:
CREATE TABLE aaa.test( x integer not null, y integer not null, z integer not null )
DB20000I  The SQL command completed successfully.

alter table aaa.test pctfree 10 (for tables, PCTFREE must be set with an ALTER operation)
DB20000I  The SQL command completed successfully.
create index aaa.ind on aaa.test(x,z) pctfree 40 (for indexes, PCTFREE can be specified at creation time)
DB20000I  The SQL command completed successfully.
select pctfree from syscat.indexes where indschema='AAA' and indname='IND'
PCTFREE
-------                      <<<<<<<<<<
     40
  1 record(s) selected.
insert into aaa.test with tmp(x,y,z) as ( values (1,1,1) union all select x+1, y+1, z+1 from tmp where x < 100 ) select * from tmp
DB20000I  The SQL command completed successfully.
runstats on table aaa.test with distribution and detailed indexes all
DB20000I  The RUNSTATS command completed successfully.
select pctfree from syscat.tables where tabschema='AAA' and tabname='TEST'
PCTFREE
-------
     10                      <<<<<<<<<<<<<
  1 record(s) selected.


http://www-01.ibm.com/support/docview.wss?uid=swg1LI74223


By specifying the CURSOR file type when using the LOAD command, you can load the results of an SQL query directly into a target table without creating an intermediate exported file.

Additionally, you can load data from another database by referencing a nickname within the SQL query, by using the DATABASE option within the DECLARE CURSOR statement, or by using the sqlu_remotefetch_entry media entry when using the API interface.

There are three approaches for moving data using the CURSOR file type. The first approach uses the Command Line Processor (CLP), the second the API, and the third uses the ADMIN_CMD procedure. The key differences between the CLP and the ADMIN_CMD procedure are outlined in the following table.

Table 1. Differences between the CLP and the ADMIN_CMD procedure

Syntax:
  • CLP: The query statement, as well as the source database used by the cursor, is defined outside of the LOAD command using a DECLARE CURSOR statement.
  • ADMIN_CMD procedure: The query statement, as well as the source database used by the cursor, is defined within the LOAD command using LOAD FROM (DATABASE database-alias query-statement).

User authorization for accessing a different database:
  • CLP: If the data is in a different database than the one you are currently connected to, the DATABASE keyword must be used in the DECLARE CURSOR statement. You can specify the user ID and password in the same statement as well. If the user ID and password are not specified in the DECLARE CURSOR statement, the user ID and password explicitly specified for the source database connection are used to access the target database.
  • ADMIN_CMD procedure: If the data is in a different database than the one you are currently connected to, the DATABASE keyword must be used in the LOAD command before the query statement. The user ID and password explicitly specified for the source database connection are required to access the target database. You cannot specify a user ID or password for the source database. Therefore, if no user ID and password were specified when the connection to the target database was made, or if the specified user ID and password cannot be used to authenticate against the source database, the ADMIN_CMD procedure cannot be used to perform the load.
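
Based on the syntax described in the table above, a comparable ADMIN_CMD invocation might look like the following sketch (dbsource, abc.table1, and abc.table2 are the hypothetical names reused from the examples below):

CALL SYSPROC.ADMIN_CMD('LOAD FROM (DATABASE dbsource SELECT TWO, ONE, THREE FROM abc.table1) OF CURSOR INSERT INTO abc.table2')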

To execute a LOAD FROM CURSOR operation from the CLP, a cursor must first be declared against an SQL query. Once this is declared, you can issue the LOAD command using the declared cursor's name as the cursorname and CURSOR as the file type.

For example:

  1. Suppose a source and target table both reside in the same database with the following definitions:
    Table ABC.TABLE1 has 3 columns:
    • ONE INT
    • TWO CHAR(10)
    • THREE DATE
    Table ABC.TABLE2 has 3 columns:
    • ONE VARCHAR
    • TWO INT
    • THREE DATE
    Executing the following CLP commands will load all the data from ABC.TABLE1 into ABC.TABLE2:
    DECLARE mycurs CURSOR FOR SELECT TWO, ONE, THREE FROM abc.table1
       LOAD FROM mycurs OF cursor INSERT INTO abc.table2
    Note: The above example shows how to load from an SQL query through the CLP. However, loading from an SQL query can also be accomplished through the db2Load API. Define the piSourceList of the sqlu_media_list structure to use the sqlu_statement_entry structure and SQLU_SQL_STMT media type and define the piFileType value as SQL_CURSOR.
  2. Suppose the source and target tables reside in different databases with the following definitions:
Table ABC.TABLE1 in database 'dbsource' has 3 columns:
  • ONE INT
  • TWO CHAR(10)
  • THREE DATE
Table ABC.TABLE2 in database 'dbtarget' has 3 columns:
  • ONE VARCHAR
  • TWO INT
  • THREE DATE
Provided that you have enabled federation and cataloged the data source ('dsdbsource'), you can declare a nickname against the source database, then declare a cursor against this nickname, and invoke the LOAD command with the FROM CURSOR option, as demonstrated in the following example:
CREATE NICKNAME myschema1.table1 FOR dsdbsource.abc.table1 
DECLARE mycurs CURSOR FOR SELECT TWO,ONE,THREE FROM myschema1.table1 
LOAD FROM mycurs OF cursor INSERT INTO abc.table2 
Or, you can use the DATABASE option of the DECLARE CURSOR statement, as demonstrated in the following example:
DECLARE mycurs CURSOR DATABASE dbsource USER dsciaraf USING mypasswd 
FOR SELECT TWO,ONE,THREE FROM abc.table1 
LOAD FROM mycurs OF cursor INSERT INTO abc.table2

Using the DATABASE option of the DECLARE CURSOR statement (also known as the remotefetch media type when using the Load API) has some benefits over the nickname approach:

Performance

Fetching of data using the remotefetch media type is tightly integrated within a load operation. There are fewer layers of transition to fetch a record compared to the nickname approach. Additionally, when source and target tables are distributed identically in a multi-partition database, the load utility can parallelize the fetching of data, which can further improve performance.

Ease of use

There is no need to enable federation, define a remote data source, or declare a nickname. Specifying the DATABASE option (and the USER and USING options if necessary) is all that is required.

While this method can be used with cataloged databases, the use of nicknames provides a robust facility for fetching from various data sources which cannot simply be cataloged.

To support this remotefetch functionality, the load utility makes use of infrastructure which supports the SOURCEUSEREXIT facility. The load utility spawns a process which executes as an application to manage the connection to the source database and perform the fetch. This application is associated with its own transaction and is not associated with the transaction under which the load utility is running.

Note:
  1. The previous example shows how to load from an SQL query against a cataloged database through the CLP using the DATABASE option of the DECLARE CURSOR statement. However, loading from an SQL query against a cataloged database can also be done through the db2Load API, by defining the piSourceList and piFileType values of the db2LoadStruct structure to use the sqlu_remotefetch_entry media entry and the SQLU_REMOTEFETCH media type, respectively.
  2. As demonstrated in the previous example, the source column types of the SQL query do not need to be identical to their target column types, although they do have to be compatible.

Restrictions

When loading from a cursor defined using the DATABASE option (or equivalently when using the sqlu_remotefetch_entry media entry with the db2Load API), the following restrictions apply:
  1. The SOURCEUSEREXIT option cannot be specified concurrently.
  2. The METHOD N option is not supported.
  3. The usedefaults file type modifier is not supported.



http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.dm.doc/doc/c0005437.html

