
Problem(Abstract)

A quick method is needed to identify all indices in a database that have been marked 'bad' by the IBM® Informix® Dynamic Server™ (IDS) database server.

Resolving the problem

You may use this SQL query to identify all indices that have been marked as "bad" in your IDS instance:

    (SELECT st.tabname, si.idxname, hex(spt.flags) flags
    FROM  sysindexes si, systables st, sysmaster:sysptnkey spt
    WHERE st.tabid = si.tabid AND
    st.partnum = spt.partnum AND
    sysmaster:bitval(spt.flags, 64) = 1 AND
    st.partnum !=0 AND
    si.idxname NOT IN  (
             SELECT sfs.indexname
             FROM sysfragments sfs
             WHERE sfs.tabid = st.tabid)
    GROUP BY 1,2,3
    )
    UNION ALL
    (SELECT st.tabname, sf.indexname idxname , hex(spt.flags) flags
    FROM  systables st, sysmaster:sysptnkey spt, sysfragments sf
    WHERE st.tabid = sf.tabid
    AND sf.fragtype = 'I'
    AND sf.partn = spt.partnum
    AND sysmaster:bitval(spt.flags, 64) = 1
    GROUP BY 1,2,3
    )
    ORDER BY 1  

Repeat the query in each database in your instance. To repair any bad indices, drop and recreate them.
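For example, a minimal repair sketch for a hypothetical bad index named ix_customer_lname on a table named customer (substitute the table, index, and column list reported by the query above):

    -- hypothetical names; recreate the index exactly as it was originally defined
    DROP INDEX ix_customer_lname;
    CREATE INDEX ix_customer_lname ON customer (lname);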



http://www-01.ibm.com/support/docview.wss?uid=swg21193229


Question

Informix Server: Buffered vs. Unbuffered Logging

Answer

Overview

Starting with Continuus/CM Release 4.5, the Informix Server's default configuration was modified to use the unbuffered logging method for maintaining its logical log buffers for each database.   The purpose of this bulletin is to:

·        Explain logical logging and the various logging modes

·        Describe the steps for determining which logging method is currently being used for each database within an existing Informix server (UNIX and NT servers).

·        Outline how to change the logging mode (UNIX and NT Servers).

Details  

Informix Servers configured with logging allow the server to recover to a consistent state from failures such as power outages or system crashes. The Informix Server uses the logical log (a collection of logs) to store database transactions (changes). The goal is to first restore the system to its most recent point of known physical consistency, which is the last checkpoint. It then applies the transactions found in the logical logs to the system, replaying all the transactions since the time of the checkpoint. It rolls back any transactions that were not committed at the time of the failure. With Informix servers, the same set of logs is shared by all databases; transactions from different databases are interleaved within the transaction logs.

Because the amount of data written to these logs can be quite large, the data is first cached in buffers, to be flushed out to disk at a later point. This helps reduce the amount of physical I/O needed to write the data to disk. By default, the server allocates three buffers for use with the logical logs and stores them in the Resident Portion of shared memory. 

The flushing of the logical log buffer is determined by the logging mode of the database(s) being accessed as follows: 

Buffered logging: The buffer is not flushed until it is full. There is a potential loss of data should a system crash occur before the buffer is flushed to disk.

Unbuffered logging: The buffer is flushed as soon as any transaction completes (that is, as soon as a commit or rollback record is written to the log buffer). This mode guarantees that the log data will be saved.

If there are transactions against databases with both logging modes (that is, some databases use buffered logging and others use unbuffered logging), then the buffer is flushed either when it is full or when a transaction against a database with unbuffered logging completes. The events that cause the logical log buffer to be flushed to disk are:

·         A checkpoint.

·         One of the logical log buffers becomes full.

·         A transaction is completed in a database that uses unbuffered logging.

·         A database session ends for a database that does not use transaction logging. 

Note: By default, CM Synergy databases are set up to have LTAPEDEV set to /dev/null. Therefore, when the logical logs get full, the backup of the logical logs is thrown away. Essentially, transaction logging is turned off. The reason for this is that most sites are not willing to incur the overhead of swapping out tapes or backing up files when the logs become full to gain the ability to recover to the latest transaction. 

Determining Current Logging Mode

Informix 5.x (C/CM 4.5 on UNIX)

All UNIX-based Informix Servers in CM Synergy versions up to and including 4.5 use Informix version 5.X. One method to determine the logging mode of a database using Informix 5.X is to log in as user informix on the server machine, set the environment variables and run "tbmonitor". The steps are outlined below:

1.        Log in as user informix on the server machine.

2.        Set the Informix environment variables (assuming CCM_HOME is set correctly) : 

using csh:

                % setenv INFORMIXDIR $CCM_HOME/informix

                % setenv TBCONFIG <servername>

                % setenv PATH $INFORMIXDIR/bin:$PATH 

using sh or ksh:        

                % INFORMIXDIR=$CCM_HOME/informix; export INFORMIXDIR

                % TBCONFIG=<servername>; export TBCONFIG

                % PATH=$INFORMIXDIR/bin:$PATH; export PATH 

3.    Start the Informix tbmonitor facility.

      %tbmonitor

4.      Select Status (Use arrow keys to highlight and then hit Enter).

5.      Select Databases and scroll down to the database you are interested in. The log status is shown in the right-hand column (B=buffered logging, U=unbuffered logging, N=no logging).

6.     Press ESC to return to the Status Menu, then select Exit -> Exit to exit tbmonitor.

Informix 7.x (all C/CM 5.0, 5.0.1, 5.1 and C/CM 4.5 on Windows NT) and Informix 9.x (5.0, 5.0.1, 5.1 on LINUX)

The logging mode of the database can be determined by querying the sysdatabases table in the sysmaster database on the server machine and checking the "is_logging" and "is_buff_log" columns. The steps are outlined below:

1.        Log in as user informix on the server machine.

2.        Set the Informix environment variables by executing %CCM_HOME%\informix\setenv.cmd on Windows NT, or on the other platforms as follows:

Set the environment variables INFORMIXDIR, INFORMIXSERVER, ONCONFIG and PATH:

using csh:

                % setenv INFORMIXDIR $CCM_HOME/informix

                % setenv INFORMIXSERVER <servername>

                % setenv PATH $INFORMIXDIR/bin:$PATH 

using sh or ksh:        

                 % INFORMIXDIR=$CCM_HOME/informix; export INFORMIXDIR

                % INFORMIXSERVER=<servername>; export INFORMIXSERVER

                % PATH=$INFORMIXDIR/bin:$PATH; export PATH 

3.        Run "dbaccess".

4.        Choose the following menus: (Use arrow keys to highlight and hit enter).

      Database -> Select -> sysmaster -> Exit -> Query Language -> New

5.        Type the following (spaces are important before and after the '=') : 

select * from sysdatabases where name = "dbname"  

(Replace dbname with the name of the database in all lower case letters.) 

6.    Hit the ESC key and then choose Run from the menu. 

      The output should appear as below: 

            name        dbname

            partnum     3145760

            owner         ccm_root

            created     05/11/1998

            is_logging   1

            is_buff_log 0

            is_ansi        0

            is_nls         0

            flags          1

 

  If "is_logging" is 1 then the database is in logging mode (can be buffered or unbuffered).

  If "is_buff_log" is 1 then it's buffered logging. 

7.    Select Exit -> Exit to exit dbaccess.
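If you prefer a single query to the interactive menu steps above, the following statement, run against the sysmaster database, returns the same logging columns for every database at once (a convenience sketch based on the columns shown in step 6):

    -- is_logging = 1 means logging is enabled; is_buff_log = 1 means buffered logging
    SELECT name, is_logging, is_buff_log
    FROM sysdatabases
    ORDER BY name;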

Change Logging Mode

Informix 5.x (C/CM 4.5 on UNIX)

Switching between buffered and unbuffered logging can be done using tbtape.

1.        Run "su informix" to become user informix on the server machine.

2.        Set up environment: 

using csh:

                % setenv INFORMIXDIR $CCM_HOME/informix

                % setenv TBCONFIG <servername>

                % setenv PATH $INFORMIXDIR/bin:$PATH 

using sh or ksh:        

                % INFORMIXDIR=$CCM_HOME/informix; export INFORMIXDIR

                % TBCONFIG=<servername>; export TBCONFIG

                % PATH=$INFORMIXDIR/bin:$PATH; export PATH 

3.      Run "tbtape -U dbname" to set the logging mode to unbuffered logging.

4.      Run "tbtape -B dbname" to set the logging mode to buffered logging.

Informix 7.x (all C/CM 5.0, 5.0.1, 5.1 and C/CM 4.5 on Windows NT) and Informix 9.x (5.0, 5.0.1, 5.1 on LINUX)

Switching between buffered and unbuffered logging can be done using ontape.

1.        Log in as user ccm_root or informix on the server machine.

2.        Change directory to %CCM_HOME%\informix.

3.        Set the Informix environment variables by executing %CCM_HOME%\informix\setenv.cmd on Windows NT, or on the other platforms as follows:

Set the environment variables INFORMIXDIR, INFORMIXSERVER, ONCONFIG and PATH:

using csh:

                % setenv INFORMIXDIR $CCM_HOME/informix

                % setenv INFORMIXSERVER <servername>

                % setenv ONCONFIG <servername>

                % setenv PATH $INFORMIXDIR/bin:$PATH

using sh or ksh:        

                % INFORMIXDIR=$CCM_HOME/informix; export INFORMIXDIR

                % INFORMIXSERVER=<servername>; export INFORMIXSERVER

                % ONCONFIG=<servername>; export ONCONFIG

                % PATH=$INFORMIXDIR/bin:$PATH; export PATH 

4.        Run "ontape -U <dbname>" to set the logging mode to unbuffered logging.

5.        Run "ontape -B <dbname>" to set the logging mode to buffered logging.

Product version: CM Synergy 4.5, 5.x, 6.x (Reference TB155)


http://www-01.ibm.com/support/docview.wss?uid=swg21325178


Question

How can you find out the type of an existing enterprise replication: whether it is update anywhere or primary to target replication?

Answer

For Informix Dynamic Server version 7.31, the "cdr list repl" command does not tell you whether a replicate is an "Update Anywhere" or a "Primary Target" replication. 

One method of checking the ER type is to examine the partdef and repdef tables in the syscdr database for the definition and determine whether it is using Primary Target or Update Anywhere. If it is Update Anywhere, the partmode will be "P" for every entry with that repid. If it is Primary Target, you should see a "P" and an "R" for the pair. 


For both 7.31 and 10.00, you can check from syscdr using the following SQL (you can modify the query to your own needs): 

dbaccess -> syscdr-> query 
select a.repid, a.repname, a.repstate, b.partstate, b.partmode, 
b.flags, b.selecstmt, b.servid from repdef a, partdef b 
where a.repid = b.repid; 

Example Results: 

From the results below, you can tell there are 2 types of replication: 

1. repid 6881287 (servid 105, servid 106) is update anywhere, as the partmode is "P" (Primary) for both participants. 

2. repid 6881294 (servid 105, servid 106) is Primary -> Target replication, as servid 105's partmode is "P" and servid 106's partmode is "R" (read only). 

repid 6881287 
repname rep_state 
repstate 2 
partstate 2 
partmode P 
flags 0 
selecstmt select * from state 
servid 105 

repid 6881287 
repname rep_state 
repstate 2 
partstate 2 
partmode P 
flags 0 
selecstmt select * from state 
servid 106 

repid 6881294 
repname rep_customer 
repstate 2 
partstate 2 
partmode P 
flags 0 
selecstmt select * from customer 
servid 105 

repid 6881294 
repname rep_customer 
repstate 2 
partstate 2 
partmode R 
flags 0 
selecstmt select * from customer 
servid 106 



For IDS 9.40, 10.00 and above, you can also check from the command line using " cdr list repl ".

Example Results: 

1. For replicate "rep_state", the REPLMODE is "PRIMARY" on both servers. Hence, we know this is an update anywhere type. 

2. For replicate "rep_customer", the REPLMODE for server 105 is "PRIMARY" and for server 106 is "READ-ONLY". Hence, we know this is a Primary-Target type. 

Server 105 
> cdr list repl 

CURRENTLY DEFINED REPLICATES 
------------------------------- 
REPLICATE: rep_state 
STATE: Inactive on:g_er1 
CONFLICT: Timestamp 
FREQUENCY: immediate 
QUEUE SIZE: 0 
PARTICIPANT: stores_demo:informix.state 
OPTIONS: transaction,ats,fullrow 
REPLID: 6881287 / 0x690007 
REPLMODE: PRIMARY on:g_er1 
APPLY-AS: INFORMIX on:g_er1 

REPLICATE: rep_customer 
STATE: Inactive on:g_er1 
CONFLICT: Ignore 
FREQUENCY: immediate 
QUEUE SIZE: 0 
PARTICIPANT: stores_demo:informix.customer 
OPTIONS: transaction,ats,fullrow 
REPLID: 6881294 / 0x69000e 
REPLMODE: PRIMARY on:g_er1 
APPLY-AS: INFORMIX on:g_er1 


Server 106 
> cdr list repl 

CURRENTLY DEFINED REPLICATES 
------------------------------- 
REPLICATE: rep_state 
STATE: Inactive on:g_er2 
CONFLICT: Timestamp 
FREQUENCY: immediate 
QUEUE SIZE: 0 
PARTICIPANT: stores_demo:informix.state 
OPTIONS: transaction,ats,fullrow 
REPLID: 6881287 / 0x690007 
REPLMODE: PRIMARY on:g_er2 
APPLY-AS: INFORMIX on:g_er2 

REPLICATE: rep_customer 
STATE: Inactive on:g_er2 
CONFLICT: Ignore 
FREQUENCY: immediate 
QUEUE SIZE: 0 
PARTICIPANT: stores_demo:informix.customer 
OPTIONS: transaction,ats,fullrow 
REPLID: 6881294 / 0x69000e 
REPLMODE: READ-ONLY on:g_er2 
APPLY-AS: INFORMIX on:g_er2


https://www-304.ibm.com/support/docview.wss?uid=swg21288554


Problem(Abstract)

Obtaining C-ISAM version information--How to determine what version of CISAM you have installed and what serial number was used to install it.

Resolving the problem

In order to determine what version of IBM Informix C-ISAM is installed on your system, run the dblog command that came with C-ISAM and give it the single option -V. This will print out the product version as well as the serial number that was used to install the product.


    Note: For historical reasons the -V option reports the product name as INFORMIX-SQL rather than C-ISAM.

The dblog program will be located in either the directory in which you installed C-ISAM or in a bin directory inside that directory.
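
If you are not sure where C-ISAM was installed, one hypothetical way to locate dblog (assuming, as in the examples below, that products live under /product) is:

    # search a presumed installation root for the dblog program
    find /product -name dblog -type f 2>/dev/null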
    Example:

    In this example C-ISAM version 7.25.UC1 was installed in the directory /product/cisam725uc1 using a serial number of ACN#J266163.
      $ /product/cisam725uc1/bin/dblog -V
      INFORMIX-SQL Version 7.25.UC1
      Software Serial Number ACN#J266163

    Example:

    In this example C-ISAM version 5.10.UD4 was installed in the directory /product/cisam510ud4 using a serial number of ACN#J266163.
      $ /product/cisam510ud4/dblog -V
      INFORMIX-SQL Version 5.10.UD4  
      Software Serial Number ACN#J266163



https://www-304.ibm.com/support/docview.wss?uid=swg21083595


Question

How to tell the version of Informix Products or Libraries Installed

Answer

It may be necessary to find the version of the Informix product or library that is installed in a particular $INFORMIXDIR directory.

Described are three methods to display version information for database server and client products. Make sure the following environment variables are set: 

    INFORMIXDIR
    PATH
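
A minimal sketch of setting them in a Bourne-style shell, assuming a hypothetical installation directory of /opt/informix (substitute your own Informix directory):

    # adjust the directory to your own Informix installation
    INFORMIXDIR=/opt/informix; export INFORMIXDIR
    PATH=$INFORMIXDIR/bin:$PATH; export PATH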


Method 1

The following table describes the command to be run for a particular product:

Product                                   Command
INFORMIX DYNAMIC SERVER                   dbaccess -V
INFORMIX STANDARD ENGINE (SE)             dbaccess -V
Informix 4GL Compiler Development         c4gl -V or i4gl -V
INFORMIX-4GL Rapid Development System     r4gl -V
INFORMIX-4GL Interactive Debugger         fgldb -V
INFORMIX-SQL, Runtime Facility            isql -V
INFORMIX-Client SDK                       ifx_getversion clientsdk


Method 2

To display the version of an application product or a library, issue the ifx_getversion command.

Usage: ifx_getversion [product_name | library_name]

The following table describes a list of products and libraries:

Product/Library        Command
clientsdk              ifx_getversion clientsdk
esql                   ifx_getversion esql
java | libjavai        ifx_getversion libjava
dmi | libdmi           ifx_getversion libdmi
c++ | libc++           ifx_getversion libc++
libcli[.a|.so]         ifx_getversion libcli.so
libgls[.a|.so]         ifx_getversion libgls.so
libos[.a|.so]          ifx_getversion libos.so
libasf[.a|.so]         ifx_getversion libasf.so
libsql[.a|.so]         ifx_getversion libsql.so
libgen[.a|.so]         ifx_getversion libgen.so


where:
    libasf are the connectivity libraries.
    libgen are the general libraries.
    libOS are the OS supporting libraries. These are just there to determine if libraries exist.

example:
    ifx_getversion libcli.so
    IBM/Informix-Client SDK Version 3.50.UC3
    IBM/Informix ODBC LIBRARY Version 3.50.0000.UC3

    ifx_getversion libgls.so
    INFORMIX LIBGLS LIBRARY Version 4.50.UC4

    ifx_getversion libos.so
    IBM/Informix-Client SDK Version 3.50.UC3
    IBM/Informix LIBOS LIBRARY Version 3.50.UC3

    ifx_getversion libasf.so
    IBM/Informix-Client SDK Version 3.50.UC3
    IBM/Informix LIBASF LIBRARY Version 3.50.UC3

    ifx_getversion libsql.so
    IBM/Informix-Client SDK Version 3.50.UC3
    IBM/Informix LIBSQL LIBRARY Version 3.50.UC3

    ifx_getversion libgen.so
    IBM/Informix-Client SDK Version 3.50.UC3
    IBM/Informix LIBGEN LIBRARY Version 3.50.UC3


Method 3

Another method to determine the product versions is to review all the files containing the product versions.

Follow these steps:

1. ls -l $INFORMIXDIR/etc/*cr 
2. cat $INFORMIXDIR/etc/*cr 


example:
ls -l $INFORMIXDIR/etc/*cr 

-rw-r--r-- 1 informix informix 114 Oct 17 2008 ASF-cr
-rw-r--r-- 1 informix informix 117 Oct 17 2008 CLI-cr
-rw-r--r-- 1 informix informix 69 Oct 17 2008 ClientSDK-cr
-rw-r--r-- 1 informix informix 99 Oct 17 2008 CPLUS-cr
-rw-r--r-- 1 informix informix 114 Oct 17 2008 DMI-cr
-rw-r--r-- 1 informix informix 118 Oct 17 2008 ESQL-cr
-rw-r--r-- 1 informix informix 114 Oct 17 2008 GENLIB-cr
-rw-r--r-- 1 informix informix 102 Oct 28 2008 GLS-cr
-rw-r--r-- 1 informix informix 107 Oct 28 2008 IIF-cr
-rw-r--r-- 1 informix informix 107 Oct 28 2008 MSG-cr
-rw-r--r-- 1 informix informix 113 Oct 17 2008 OSLIB-cr
-rw-r--r-- 1 informix informix 114 Oct 17 2008 SQLI-cr

cat $INFORMIXDIR/etc/*cr 

IBM/Informix-Client SDK Version 3.50.UC3
IBM/Informix LIBASF LIBRARY Version 3.50.UC3
Copyright (C) 1991-2008 IBM
IBM/Informix-Client SDK Version 3.50.UC3
IBM/Informix ODBC LIBRARY Version 3.50.0000.UC3
Copyright (C) 1991-2008 IBM
IBM/Informix-Client SDK Version 3.50.UC3
Copyright (C) 1991-2008 IBM
IBM/Informix-Client SDK Version 3.50.UC3
IBM/Informix C++ API 3.50.UC3
Copyright (C) 1991-2008 IBM
IBM/Informix-Client SDK Version 3.50.UC3
IBM/Informix LIBDMI LIBRARY Version 3.50.UC3
Copyright (C) 1991-2008 IBM
IBM/Informix-Client SDK Version 3.50.UC3
IBM/Informix EMBEDDED SQL for C Version 3.50.UC3
Copyright (C) 1991-2008 IBM
IBM/Informix-Client SDK Version 3.50.UC3
IBM/Informix LIBGEN LIBRARY Version 3.50.UC3
Copyright (C) 1991-2008 IBM
INFORMIX LIBGLS LIBRARY Version 4.50.UC4
Copyright (C) 1986-2008 IBM Corporation, All rights reserved
IBM Informix Dynamic Server Version 11.50.UC3
Copyright (C) 1986-2008 IBM Corporation, All rights reserved
IBM Informix Dynamic Server Version 11.50.UC3
Copyright (C) 1986-2008 IBM Corporation, All rights reserved
IBM/Informix-Client SDK Version 3.50.UC3
IBM/Informix LIBOS LIBRARY Version 3.50.UC3
Copyright (C) 1991-2008 IBM
IBM/Informix-Client SDK Version 3.50.UC3
IBM/Informix LIBSQL LIBRARY Version 3.50.UC3
Copyright (C) 1991-2008 IBM



https://www-304.ibm.com/support/docview.wss?uid=swg21389048


Question


What version of Informix Dynamic Server (IDS) do you have?
What version of engine do you have?
What version of database server do you have?
What is the version of your instance?

Answer

BEFORE YOU BEGIN

The following environment variables must be set:

    Tip: See the Related information section for documents about setting environment variables.


STEPS

Use one of these methods to check the version or serial number of a specific IBM Informix® Dynamic Server™ instance:
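
One common method, shown here only as a sketch (it assumes onstat is on your PATH and the environment above is set), is to ask the server utility for its version string:

    # prints the product name and version string of the installed database server
    onstat -V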

COMMON PROBLEMS

Symptom: You receive the error:
    onstat: Command not found


Resolution: The environment variable PATH is not set correctly. At minimum, PATH must include $INFORMIXDIR/bin.

Symptom: You receive the error:
    'onstat' is not recognized as an internal or external command, operable program or batch file.


Resolution: The environment variable PATH is not set correctly. At minimum, PATH must include %INFORMIXDIR%\bin.



http://www-01.ibm.com/support/docview.wss?uid=swg21164708


Applicable Informix versions: 7.2, 7.3, 9.2, 9.3, 9.4, 10.0

Problem
How to determine the shared memory segments for a particular Informix instance
 
Solution

BEFORE YOU BEGIN

This Korn-Shell script will help you to identify which segments are in use by which Informix® instance. 

The output is similar to that of the UNIX command ipcs -m, except for the last column, DBSERVERNAME, which shows the database server name. The value is equivalent to the ONCONFIG configuration file parameter DBSERVERNAME.


STEPS

    1. Copy the file checkipc.txt to a work directory. 

    2. Change the name to checkipc

    3. Execute the script using the following syntax:

      sh checkipc

Sample Output

The example shows that all of the shared memory segments owned by group informix are in use by active Informix instances. There are 4 segments that belong to the test940 instance, and 3 segments that belong to the test731 instance.


Shared Memory

TYPE  ID        KEY        MODE OWNER    GROUP  DBSERVERNAME
   m  582 0x52564801 --rw-rw----  root informix   test940
   m   39 0x52564802 --rw-rw----  root informix   test940
   m 2872 0x52564803 --rw-rw----  root informix   test940
   m 5365 0x52564804 --rw-rw-rw-  root informix   test940
   m 2866 0x52604801 --rw-rw----  root informix   test731
   m 3219 0x52604802 --rw-rw----  root informix   test731
   m    4 0x52604803 --rw-rw-rw-  root informix   test731
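
If you only need a quick manual check rather than the attached script, a rough illustration of the same idea (not the checkipc script itself) is to compare the segment list reported by the instance with the system-wide list, assuming the Informix environment for that instance is already set:

    # shared memory segments attached by the instance named in INFORMIXSERVER
    onstat -g seg

    # system-wide shared memory segments, for comparison with the ids above
    ipcs -m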


checkipc.txt


http://www-01.ibm.com/support/docview.wss?uid=swg21114995


Question

Using IDS 7.31.xD1 or higher you can use oncheck -me to reorganize your tables.

Answer

Starting in 7.31.xD1, a new feature was introduced in oncheck. It allows you to reduce the number of extents of a table.

How to enable environment variable RASHELP:
To use this new feature, the environment variable RASHELP must be set to 1 before you start the database server. 
Example:
In ksh or bash:
export RASHELP=1
oninit
In csh:
setenv RASHELP 1
oninit

How to get the new usage of oncheck:
You can then see the additional syntax in the oncheck usage message: 

    oncheck --

    ONCHECK
    Usage:  oncheck {-cCheckOption | -pPrintOption | -mMergeOption}
                    [-y | -n] [-q]
                    [ { database[:[owner.]table[,fragdbs|#index]] | TBLspace number
                        | Chunk number }
                      { rowid | page number | target number of extents } ]
    ...
-m - MergeOption
e - Merge of TBLspace/partition extents
[TBLspace/partition number and target number of extents]
...
      partnum
        The partition number for the target table.
      extents
        The number of surviving extents.
How to get partnum:
1) Run command oncheck -pt database:tablename
2) Find the value of Partition partnum
Example:
oncheck -pt stores:customer
...
Partition partnum 2097183
...
Or 
1) Choose the database your table resides in, then run the appropriate SQL to get the value from the systables system catalog table
Example:
select tabname, partnum from systables where tabname="customer"
tabname partnum

customer 2097183

Partition partnum (from oncheck) or partnum (from systables) is the value we are looking for.
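
As a cross-check, the hexadecimal form of partnum, which is how the merge report further below labels the TBLspace (for example 0x1000a8), can be obtained from the same catalog query; a small sketch:

    -- hex() shows partnum in the form used by the oncheck -me merge report
    select tabname, partnum, hex(partnum) from systables where tabname="customer";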

How to reorganize the table by oncheck:
Example:

Suppose that table t1 has 2 extents and we want to reorganize them into just one extent:

$ oncheck -pt dbname:t1

TBLspace Report for dbname:informix.t1

    Physical Address               103103    
    Creation date                  07/07/2003 15:50:08
    TBLspace Flags                 801  Page Locking
                                        TBLspace use 4 bit bit-maps
    Maximum row size               2000      
    Number of special columns      0         
    Number of keys                 0         
    Number of extents              2         
    Current serial value           1         
    First extent size              4         
    Next extent size               4         
    Number of pages allocated      8         
    Number of pages used           6         
    Number of data pages           5         
    Number of rows                 5         
    Partition partnum              1048744   
    Partition lockid               1048744   

    Extents                       
         Logical Page  Physical Page        Size
                    0         1031bb           4
                   4         1031c3           4


$ oncheck -me 1048744 1

TBLspace merge extents report for TBLspace 0x1000a8

    Number of extents
        Original      2
        Target        1

        Current       1
        Freed         2

$ oncheck -pt dbname:t1

TBLspace Report for dbname:informix.t1

    Physical Address               103103    
    Creation date                  07/07/2003 15:50:08
    TBLspace Flags                 801  Page Locking
                                        TBLspace use 4 bit bit-maps
    Maximum row size               2000      
    Number of special columns      0         
    Number of keys                 0         
    Number of extents              1         
    Current serial value           1         
    First extent size              4         
    Next extent size               4         
    Number of pages allocated      8         
    Number of pages used           6         
    Number of data pages           5         
    Number of rows                 5         
    Partition partnum              1048744   
    Partition lockid               1048744   

    Extents                       
         Logical Page  Physical Page        Size
                    0         1031c7           8 



http://www-01.ibm.com/support/docview.wss?uid=swg21141582


Problem(Abstract)

This article has a query that reports the total pages allocated to user tables across a database server.

Resolving the problem

INTRODUCTION

You want to determine the size of all user tables on an individual table basis. For example, you might want to know if the total pages for a table is approaching the maximum number. The query reports all the user-created tables for all databases. These columns are in the report: dbspace, database, table, total pages allocated, pages used, and data pages.


Note: The query returns the same data as the oncheck -pT command. The difference is that the query reports table data for all databases.


BEFORE YOU BEGIN

Determine whether any of your user table names begin with "sys". System catalog tables begin with the "sys" prefix, and the query excludes tables whose names match this prefix, so user tables named with a "sys" prefix will be left out of the report.


STEPS

Run this SQL statement from the sysmaster database:

    SELECT s.name dbspace, n.dbsname database, 
    n.tabname table, ti_nptotal total, ti_npused used, 
    ti_npdata data
    FROM systabinfo i, systabnames n, sysdbspaces s
    WHERE i.ti_partnum = n.partnum 
    AND partdbsnum(i.ti_partnum) = s.dbsnum
    AND n.dbsname != "sysuser"
    AND n.dbsname != "sysmaster"
    AND n.dbsname != "sysutils"
    AND n.tabname NOT LIKE "sys%"
    AND n.tabname != "TBLSpace";

Tip: By adjusting the WHERE clause you can modify the query to report data from a different set of databases or to include the system catalog tables.
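
For example, a minimal variation of the query that restricts the report to a single, hypothetical database named stores_demo:

    SELECT s.name dbspace, n.tabname table, i.ti_nptotal total,
    i.ti_npused used, i.ti_npdata data
    FROM systabinfo i, systabnames n, sysdbspaces s
    WHERE i.ti_partnum = n.partnum
    AND partdbsnum(i.ti_partnum) = s.dbsnum
    AND n.dbsname = "stores_demo"
    AND n.tabname NOT LIKE "sys%"
    AND n.tabname != "TBLSpace";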



http://www-01.ibm.com/support/docview.wss?uid=swg21175863


Question

Before IDS 11, collecting a database server's information required running each operation or action manually, one at a time. To make this easier, you can use the scripts below to collect and analyze the information automatically and conveniently.

Answer

Steps to use these scripts:

1. First, run health_check.sh on the database server machine to collect data.

2. Then run health_operation.sh against the package generated by health_check.sh to perform some simple analysis of that data (see the usage sketch below).
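
A minimal usage sketch, assuming both scripts are in the current directory and the Informix environment for the target instance is already set; the position argument and the archive file name are hypothetical examples of the INSTANCENAME_YYYYMMDD_POSITION_MACHINE naming pattern described in the script headers:

    # on the database server machine: collect data, using "siteA" as the position argument
    sh health_check.sh siteA

    # afterwards: analyze the generated package (file name is a hypothetical example)
    sh health_operation.sh my_server_20090101_siteA_host1.tar.gz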

#!/bin/sh
###############################################################################
#
# Module: health_check.sh
# Description: Collect informix information for health check
#
# Change Log
#
# Date Name Description.................
###############################################################################
#Bundled file's name form: INSTANCENAME_YYYYMMDD_POSITION_MACHINE.tar.[bz2|gz|Z]
#Notice: Do not run this script on IDS 7.14 or earlier versions, as they do not include evidence.sh

usage()
{
printf "Usage: health_check.sh position\n"
exit 1

} 

# check usage; exit if incorrect. valid argument count is 1.
[ $# -ne 1 ] && usage

#Initialize environment
AWK=""
if [ -z "$AWK" ]
then for awker in gawk nawk awk
do
for dir in `echo $PATH | sed 's/:/ /g'`
do
if [ -x "$dir/$awker" ]
then AWK="$dir/$awker"
break 2
fi
done
done
fi

DATE=`date '+%Y%m%d'`
DATAPATH=./${INFORMIXSERVER}/${DATE}
POSITION=$1
MACHINE=`uname -n`
ONLINE_LOG=`onstat -c |${AWK} -v dir=${INFORMIXDIR} '/^MSGPATH/ {sub(/\\$INFORMIXDIR/,dir,$2);print $2}'`
BAR_ACT_LOG=`onstat -c |${AWK} -v dir=${INFORMIXDIR} '/^BAR_ACT_LOG/ {sub(/\\$INFORMIXDIR/,dir,$2);print $2}'`

[ ! -d ${DATAPATH} ] && mkdir -p ${DATAPATH}

#Check for message log
printf "Operating message log...\n"
egrep -i "assert|err|fail" "${ONLINE_LOG}">${DATAPATH}/online.err
egrep -i "warn" "${ONLINE_LOG}">${DATAPATH}/online.warn
egrep -i "modified" "${ONLINE_LOG}">${DATAPATH}/online.modified
egrep -i "duration" "${ONLINE_LOG}">${DATAPATH}/online.duration

egrep -i "assert|err|fail" "${BAR_ACT_LOG}">${DATAPATH}/bar_act.err

cp ${ONLINE_LOG} ${DATAPATH}/online.log
cp ${BAR_ACT_LOG} ${DATAPATH}/bar_act.log

#Check for SMI tables & collect database's schema
printf "Operating SMI tables & collect database's schema...\n"
for database in `dbaccess sysmaster 2>${DATAPATH}/log <<! | ${AWK} '
BEGIN {flag = 0;}
!/^$/{
if ( flag == 1 )
print $1
if ( $1 == "name" )
if ( NF == 2 )
print $2
else
flag = 1;
}'
select name from sysdatabases
where name not in ("sysmaster", "sysutils", "sysuser", "syscdr", "onpload", "sysadmin")
!
`
do
dbschema -d ${database} -ss ${DATAPATH}/${database}.sql
dbaccess ${database} >${DATAPATH}/log 2>&1 <<!

set isolation dirty read;

--find sequential scan
unload to ${DATAPATH}/seq_scans_$database.txt
select p.dbsname, p.tabname, t.nrows, sum(p.seqscans) tot_seqscans
from sysmaster:sysptprof p, systables t
where p.dbsname = "${database}"
and p.tabname = t.tabname
and t.tabid > 99
and t.nrows > 20000
group by 1,2,3
having sum(p.seqscans) > 3
order by 3 desc,4 desc;

--find lock wait
unload to ${DATAPATH}/lockwait_$database.txt 
select p.dbsname, p.tabname, t.nrows, t.locklevel,sum(p.lockwts) lockwts
from sysmaster:sysptprof p, systables t
where p.dbsname = "${database}"
and p.tabname = t.tabname
and t.tabid > 99
group by 1,2,3,4
having sum(p.lockwts) > 0
order by 5 desc;

--Monitoring the number of index levels is of limited value, because customers cannot control how many levels an index has
unload to ${DATAPATH}/idx_lvl_$database.txt 
select t.tabname, i.idxname, i.levels
from systables t, sysindexes i
where t.tabid = i.tabid
and t.tabid > 99
and i.levels > 4
order by 3 desc;

--evaluate index's unique keys in the first column, find high duplicated index
unload to ${DATAPATH}/idx_unique_$database.txt
select t.tabname, i.idxname, t.nrows, i.nunique
from systables t, sysindexes i
where t.tabid =i.tabid
and t.tabid > 99
and t.nrows > 10000
and i.nunique < 0.01*t.nrows 
order by 3 desc,4 asc;

--evaluate table's io ratio
unload to ${DATAPATH}/table_io_$database.txt 
select p.dbsname, p.tabname, (p.isreads + p.pagreads) diskreads, (p.iswrites + p.pagwrites) diskwrites,(p.isreads + p.pagreads + p.iswrites + p.pagwrites) disk_io
from sysmaster:sysptprof p, systables t
where p.dbsname = "${database}"
and p.tabname = t.tabname
and t.tabid > 99
order by 5 desc;
!

done

dbaccess sysmaster >${DATAPATH}/log 2>&1 <<!

--get dbspace's usage percent
unload to ${DATAPATH}/dbs_spaces.txt 
select name dbspace, sum(chksize) allocated, sum(nfree) free,
round(((sum(chksize) - sum(nfree))/sum(chksize))*100) pcused
from sysmaster:sysdbspaces d, sysmaster:syschunks c
where d.dbsnum = c.dbsnum
group by 1
order by 4 desc;

--get chunk's io ratio
unload to ${DATAPATH}/chunk_io.txt
select d.name, fname path_name, sum(pagesread) diskreads, sum(pageswritten) diskwrites
from sysmaster:syschkio c, sysmaster:syschunks k, sysmaster:sysdbspaces d
where d.dbsnum = k.dbsnum
and k.chknum = c.chunknum
group by 1, 2
order by 3 desc;

!

#Get integrated and evident information of the instance
printf "Collect integrated and evident information of the instance ...\n"
#Tip: There is a minor problem here when $INFORMIXDIR is used to indicate the path in IDS 11.10 or above
sed -e "s/tail -100/tail -5000/" -e "s/DEBUG=off/DEBUG=on/" -e "s/ONSTAT -g ses$/ONSTAT -g ses 0/" ${INFORMIXDIR}/etc/evidence.sh > ${DATAPATH}/evidence.sh
sh ${DATAPATH}/evidence.sh 2 0 ${DATAPATH}/evi.txt 0 0 0 0 1

#Estimate the maximum number of rows that can be held in a single table fragment
printf "Estimating the maximum rows in a single table fragment...\n"
cal_maxrows.sh > ${DATAPATH}/maxrows.txt

#Estimate the maximum number of extents that can be held in a single table fragment
printf "Estimating the maximum extents in a single table fragment...\n"
cal_restextents.sh > ${DATAPATH}/maxextents.txt

mv ./oncheck.pt.* ${DATAPATH}

#Sanity check
printf "Operating sanity check...\n"
oncheck -pr > ${DATAPATH}/oncheck.pr
oncheck -pc > ${DATAPATH}/oncheck.pc
oncheck -pe > ${DATAPATH}/oncheck.pe
egrep -i err ${DATAPATH}/oncheck* | grep -v syserrors > ${DATAPATH}/oncheck.err

#Compress the output of check
printf "Operating compress action...\n"
for packer in bzip2.bz2 gzip.gz compress.Z
do
compress=`echo $packer | sed 's/\..*//'`
suffix=`echo $packer | sed 's/.*\.//'`
for dir in `echo $PATH | sed 's/:/ /g'`
do
if [ -x "$dir/$compress" ]
then COMPRESS="$dir/$compress"
COMPRESS_SUFFIX=$suffix
break 2
fi
done
done
tar cvf - ${DATAPATH} | ${COMPRESS} > ./${INFORMIXSERVER}_${DATE}_${POSITION}_${MACHINE}.tar.${COMPRESS_SUFFIX}

printf "Do you want to backup all database server's log and clean them? [Y/n]"
ANS=''
read ANS
if [ "${ANS}" = "Y" -o "${ANS}" = "y" ]
then
cat ${ONLINE_LOG} | ${COMPRESS} > ${ONLINE_LOG}_${DATE}.${COMPRESS_SUFFIX}
cat ${BAR_ACT_LOG} | ${COMPRESS} > ${BAR_ACT_LOG}_${DATE}.${COMPRESS_SUFFIX}
> ${ONLINE_LOG}
> ${BAR_ACT_LOG}
fi

#Clear the instance's counter number
printf "Do you want to reset all count of database server? [Y/n]"
ANS=''
read ANS
if [ "${ANS}" = "Y" -o "${ANS}" = "y" ]
then
onstat -z
fi

#Clean interim files



#!/bin/sh
###############################################################################
#
# Module: cal_maxrows.sh
# Description: Estimate the maximum number of rows that can be inserted into a single tablespace
#
# Change Log
#
# Date Name Description.................
#
###############################################################################

case `uname -s` in Linux) ECHO="echo -e" ;;
*) ECHO="echo" ;;
esac
ev_echo_n()
{
$ECHO $*\\c
}

usage()
{
ev_echo_n "usage: cal_maxrows.sh [database [tablename]]\n"
exit 1

} 

# check usage; exit if incorrect. valid argument counts are 0, 1 and 2.
[ $# -ne 0 -a $# -ne 1 -a $# -ne 2 ] && usage

AWK=""
if [ -z "$AWK" ]
then for awker in gawk nawk awk
do
for dir in `echo $PATH | sed 's/:/ /g'`
do
if [ -x "$dir/$awker" ]
then AWK="$dir/$awker"
break 2
fi
done
done
fi

################################################################################
# #
# MAIN BODY OF SCRIPT #
# #
################################################################################

#initialize environment
database=$1
table=$2
db_filename=db_$$.txt
tab_filename=tab_$$.txt
trap 'rm -f $db_filename $tab_filename;exit 1' 1 2 15

buffersize=`oncheck -pr | ${AWK} '/Page Size/ {print $3}'`

#get database's and table's information
if [ "x${database}" = "x" ]
then
dbaccess sysmaster >/dev/null 2>&1 <<!
unload to ${db_filename} select name from sysdatabases 
where name not in ("sysmaster", "sysutils", "sysuser", "syscdr", "onpload", "sysadmin")
!
else
ev_echo_n "${database}|" > ${db_filename}
fi

for database in `${AWK} -F '|' '{print $1}' ${db_filename}`
do
if [ "x${table}" = "x" ]
then
dbaccess ${database} >/dev/null 2>&1 <<!
unload to ${tab_filename} select tabname from systables where tabid >= 100 and tabtype='T'
!
else
ev_echo_n "${table}|" > ${tab_filename}
fi

#calculate the accurate value we want to get
(cat ${tab_filename};onstat -d;oncheck -pt ${database}) |tee ./oncheck.pt.${database}| ${AWK} -v bz=${buffersize} '
function cal(rowsize, bz, rows, PN_4BITS)
{
printf "%d\t%d\t%d\t", bz, rowsize, rows;
pageuse = bz - 28;
maxpages = (PN_4BITS == 1 ? int((int(pageuse/16))*32*16777215/((int(pageuse/16))*32+1)) : int((int(pageuse/8))*32*16777215/((int(pageuse/8))*32+1)));
ratio = 0.80;

if ( rowsize + 4 <= pageuse ) {
j = int(pageuse / (rowsize + 4));
j > 255 ? j = 255 : j;
maxrows=maxpages * j * ratio;
} else {
partremsize = (rowsize % (pageuse - 8)) + 4
partratio = partremsize / pageuse;
if ( partratio <= 0.1 )
partratio = 0.1;
else if ( partratio <= 0.3 )
partratio = 0.3;
else if ( partratio <= 0.5 )
partratio = 0.5;
else 
partratio = 1;
maxpages = (PN_4BITS == 1 ? int((int(pageuse/16))*32*maxpages/((int(pageuse/16))*32+1)) : int((int(pageuse/8))*32*maxpages/((int(pageuse/8))*32+1)));
maxrows=(maxpages / (int(rowsize / (pageuse - 8)) + partratio)) * ratio;
}
printf ("%15d\t%f\n", maxrows, 100 * (rows / maxrows));
}

BEGIN {flag=0;tab_flag=0;Dbspaces_flag=0;}

#get relevant tables
/.*\|$/ && NF == 1 {split($0, a, "|"); table[a[1]]=NR;}

#get fragment information
/^Dbspaces/ {Dbspaces_flag=1;}
Dbspaces_flag == 1 {
if ( $2 ~ /[0-9]+/ ) {
Dbspaces[$2] = $NF;
} else if ( $0 ~ /maximum$/ ) {
Dbspaces_flag = 0;
}
}

/^TBLspace Report for/ || /Table fragment.*in/ {
if ( $0 ~ /^TBLspace Report for/ ) {
fragment=0;
split($4,r,"[.:]"); 
databasename=r[1];
tablename=r[3];
if ( tablename in table ) {
flag=1; 
tab_flag=1;
} else {
tab_flag=0;
next;
}
}
if ( $0 ~ /Table fragment.*in/ && tab_flag == 1) {
flag=1; 
}
if ( fragment == 0 && $0 ~ /Table fragment.*in/ ) {
fragment = 1; next;
}
printf "%-20.20s %-30.30s ", databasename, tablename;
}
/TBLspace use.*bit bit-maps/ && flag == 1 {PN_4BITS = ($3 == 4 ? 1 : 0); }
/Maximum row size/ && flag == 1 {rowsize=$4;}
/Number of rows/ && flag == 1 {rows=$4;}
/Partition partnum/ && flag == 1 {printf "%-20.20s ", Dbspaces[int($3/1048576)]}
/Extents/ && flag == 1 {flag=0; cal(rowsize, bz, rows, PN_4BITS);}
' | sort -n -r -k 8
rm ${tab_filename}
printf "%-20.20s %-30.30s %-20.20s %s\t%s\t%s\t%15.15s\t%s\n" db table fragment pgsz rz rows maxrows ratio
done
rm ${db_filename}



#!/bin/sh
###############################################################################
#
# Module: cal_restextents.sh
# Description: Estimate the maximum number of extents that can be allocated for a single tablespace
#
# Change Log
#
# Date Name Description.................
#
###############################################################################

AWK=""
if [ -z "$AWK" ]
then for awker in gawk nawk awk
do
for dir in `echo $PATH | sed 's/:/ /g'`
do
if [ -x "$dir/$awker" ]
then AWK="$dir/$awker"
break 2
fi
done
done
fi

SRC_VER=`onstat - | ${AWK} '/Version/ {
keep_next = -1
for (i = 1; i < NF; i++) {
if ($i == "Version")
keep_next = i + 1
else if (i == keep_next)
print $i
}
}'`
if [ `expr "$SRC_VER" : "\(.*\..*\)\..*"|sed 's/\.//'` -lt 940 ]
then
pre94=1
else
pre94=0
fi

case `uname -s` in Linux) ECHO="echo -e" ;;
*) ECHO="echo" ;;
esac
ev_echo_n()
{
$ECHO $*\\c
}

usage()
{
ev_echo_n "usage: cal_restextents.sh [database [tablename]]\n"
exit 1

} # end of usage().
# check usage; exit if incorrect. valid argument counts are 0, 1 and 2.
[ $# -ne 0 -a $# -ne 1 -a $# -ne 2 ] && usage

################################################################################
# #
# MAIN BODY OF SCRIPT #
# #
################################################################################

#initialize environment
database=$1
table=$2
db_filename=db_$$.txt
tab_filename=tab_$$.txt
trap 'rm -f $db_filename $tab_filename;exit 1' 1 2 15

#get database's and table's information
if [ "x${database}" = "x" ]
then
dbaccess sysmaster >/dev/null 2>&1 <<!
unload to ${db_filename} select name from sysdatabases
where name not in ("sysmaster", "sysutils", "sysuser", "syscdr", "onpload", "sysadmin")
!
else
ev_echo_n "${database}|" > ${db_filename}
fi

for database in `${AWK} -F '|' '{print $1}' ${db_filename}`
do
if [ "x${table}" = "x" ]
then
dbaccess ${database} >/dev/null 2>&1 <<!
unload to ${tab_filename} select tabname from systables where tabid >= 100 and tabtype='T'
!
else
ev_echo_n "${table}|" > ${tab_filename}
fi

#calculate the accurate value we want to get
(cat ${tab_filename};onstat -d;oncheck -pt ${database}) |tee ./oncheck.pt.${database}| ${AWK} -v pre94=${pre94} '
function todec(str) {
hstr="0123456789ABCDEF";
res=0;
n=length(str);
for(i=1;i<=n;i++) {
digit[i]=substr(toupper(str),i,1);
num=index(hstr,digit[i])-1;
res=res+(num*16^(n-i));
}
return res;
}

BEGIN {flag=0;}

#get relevant tables
/.*\|$/ && NF == 1 {split($0, a, "|"); table[a[1]]=NR;}

#get fragment information
/^Dbspaces/ {Dbspaces_flag=1;}
Dbspaces_flag == 1 {
if ( $2 ~ /[0-9]+/ ) {
Dbspaces[$2] = $NF;
} else if ( $0 ~ /maximum$/ ) {
Dbspaces_flag = 0;
}
}

/^TBLspace Report for/ || /Table fragment.*in/ {
if ( $0 ~ /^TBLspace Report for/ ) {
fragment=0;
split($4,r,"[.:]"); 
databasename=r[1];
tablename=r[3];
if ( tablename in table ) {
flag=1; 
tab_flag=1;
} else {
tab_flag=0;
next;
}
}
if ( $0 ~ /Table fragment.*in/ && tab_flag == 1) {
flag=1; 
}
if ( fragment == 0 && $0 ~ /Table fragment.*in/ ) {
fragment = 1; next;
}
printf "%-20.20s %-30.30s ", databasename, tablename;
}
/Physical Address/ && flag == 1 {
if ( pre94 == 1 ) {
printf "%d\t%d\t", todec($3)/1048576, todec($3)%1048576;
} else {
split($3,r,":")
printf "%d\t%d\t", r[1], r[2];
}
}
/Number of extents/ && flag == 1 {printf "%d\t", $4;}
/Partition partnum/ && flag == 1 {printf "%s\t\t", Dbspaces[int($3/1048576)];}
/Extents/ && flag == 1 {flag = 0; printf "\n";}
' | while read database table chk pg extent fragment 
do
printf "%-20.20s %-30.30s %-20.20s %d\t%d\t%d" ${database} ${table} ${fragment} ${chk} ${pg} ${extent}
oncheck -pP ${chk} ${pg} | ${AWK} -v ext=${extent} -v pre94=${pre94} '
flag == 1 { 
if ( pre94 == 1 ) {
printf "\t%d\t%d\t%f", $7, int($7/8), 100*(ext/(int($7/8)+ext));
} else {
printf "\t%d\t%d\t%f", $8, int($8/8), 100*(ext/(int($8/8)+ext));
}
flag=0;
} 
/frcnt/ {flag=1}'
printf "\n"
done | sort -r -n -k 9
rm ${tab_filename}
printf "%-20.20s %-30.30s %-20.20s %s\t%s\t%s\t%s\t%s\t%s\n" db tab fragment chk pg ext frcnt rest ratio
done
rm ${db_filename}



#!/bin/sh
###############################################################################
#
# Module: health_operation.sh
# Description: Process the Informix information collected by the health check
#
# Change Log
#
# Date Name Description.................
# Archive file format INSTANCENAME_YYYYMMDD_POSITION_MACHINE.tar.[bz2|gz|Z]
###############################################################################
#Bundled file's name form: INSTANCENAME_YYYYMMDD_POSITION_MACHINE.tar.[bz2|gz|Z]

usage()
{
printf "Usage: health_operation.sh archived_files\n"
exit 1

} 

# check usage; exit if incorrect. at least one argument is required.
[ $# -eq 0 ] && usage

#Operate all bundled files entered one by one
for files in $*
do

#Decompress the health check information
for packer in bzip2.bz2 gzip.gz gzip.Z
do
compress=`echo $packer | sed 's/\..*//'`
suffix=`echo $packer | sed 's/.*\.//'`
if [ "${suffix}" = "`echo "${files}" | sed 's/.*\.//'`" ] 
then
base=`basename ${files} .tar.${suffix}`
INS=`echo $base | sed -e 's/_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_.*$//'`
DATE=`echo $base | sed -e 's/.*_\([0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]\)_.*$/\1/'`
POSITION=`echo $base | sed -e 's/.*_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_//' -e 's/_.*//'`
MACHINE=`echo $base | sed -e 's/.*_//'`
printf ${INS},${DATE},${POSITION},${MACHINE},
SERVER=${INS}
[ ! -d ${POSITION}/${MACHINE} ] && mkdir -p ${POSITION}/${MACHINE}
$compress -c -d ${files} | tar xf - -C ./${POSITION}/${MACHINE} 2>/dev/null
break;
fi
done

#Prepare pretreatment environment
DATADIR=${POSITION}/${MACHINE}/${SERVER}/${DATE}
PREDIR=${POSITION}/${MACHINE}/${SERVER}/pre${DATE}
DATEBASE_DATE=`echo ${DATE} | awk '{print substr($0,5,2)"/"substr($0,7,2)"/"substr($0,1,4)}'`
[ ! -d ${PREDIR} ] && mkdir -p ${PREDIR}

#Operate the evidence information
awk -v server=${SERVER} -v position=${POSITION} -v machine=${MACHINE} -v date=${DATEBASE_DATE} -v predir=${PREDIR} '

BEGIN { num = 0; }

function initflag()
{
pflag=0;
}

function on_p() 
{
onstat_p_output=predir"/onstat_p.txt";
printf "%s|%s|%s|%s|read|%d|%d|%d|%f|\n", server, position, machine, date, onstat_p["dskreads"], onstat_p["pagreads"], onstat_p["bufreads"], onstat_p["read_cache"] >onstat_p_output
printf "%s|%s|%s|%s|write|%d|%d|%d|%f|\n", server, position, machine, date, onstat_p["dskwrits"], onstat_p["pagwrits"], onstat_p["bufwrits"], onstat_p["write_cache"] >>onstat_p_output
}

/onstat -p/ {initflag(); pflag=1;}

pflag == 1 {
if ( $0 ~ /^[0-9][0-9]*/ ) {
num ++;
if (num == 1) {
onstat_p["dskreads"] = $1;
onstat_p["pagreads"] = $2;
onstat_p["bufreads"] = $3;
onstat_p["read_cache"] = $4;
onstat_p["dskwrits"] = $5;
onstat_p["pagwrits"] = $6;
onstat_p["bufwrits"] = $7;
onstat_p["write_cache"] = $8;
} else if (num == 2) {
onstat_p["write"] = $5;
onstat_p["rewrite"] = $6;
onstat_p["delete"] = $7;
} else if (num == 3) {
} else if (num == 4) {
onstat_p["ovlock"] = $1;
onstat_p["ovbuff"] = $3;
onstat_p["usercpu"] = $4;
onstat_p["syscpu"] = $5;
onstat_p["numckpts"] = $7;
} else if (num == 5) {
onstat_p["bufwaits"] = $1;
onstat_p["lokwaits"] = $2;
onstat_p["lockreqs"] = $3;
onstat_p["deadlks"] = $4;
onstat_p["ckpwaits"] = $6;
onstat_p["compress"] = $7;
onstat_p["seqscans"] = $8;
} else if (num == 6) {
onstat_p["ixda-RA"] = $1;
onstat_p["idx-RA"] = $2;
onstat_p["da-RA"] = $3;
onstat_p["RA-pgsused"] = $4;
}
}
}

END {
on_p();
}

' ${DATADIR}/evi.txt

#Operate the data by SQL
DATABASE="database_name" #To specify the database's name at here

#Operate for onstat_p

PREDATE=`dbaccess ${DATABASE} 2>${PREDIR}/log <<! | awk -v date=${DATEBASE_DATE} '
BEGIN{a=date}
/^[0-9]+/ {a=$1}
END {print a}'
select first 1 date from onstat_p where date < "${DATEBASE_DATE}" and server="${SERVER}" 
and position="${POSITION}" and machine="${MACHINE}" order by date desc
!
`

printf ${DATEBASE_DATE},${PREDATE}"\n"

dbaccess ${DATABASE} >${PREDIR}/log 2>&1 <<!

delete from onstat_p where server="${SERVER}" and position="${POSITION}" and machine="${MACHINE}" and date="${DATEBASE_DATE}";
load from ${PREDIR}/onstat_p.txt insert into onstat_p;

unload to ${PREDIR}/onstat_p.txt select case a.type
when "read" then "read"
when "write" then "write"
end,
a.disk, a.page, a.buffer, a.cache::decimal(4,2), b.cache::decimal(4,2),
case when a.cache > nvl(b.cache,0) then "↑"
when a.cache < nvl(b.cache,0) then "↓"
else "→"
end
from onstat_p a, outer onstat_p b
where a.date="${DATEBASE_DATE}" and b.date="${PREDATE}"
and a.server="${SERVER}" and a.position="${POSITION}" and a.machine="${MACHINE}"
and a.server=b.server and a.position=b.position and a.machine=b.machine and a.type=b.type;

!

#Operate for dbspace information
#If no previous information can be found, PREDATE is treated as the current date, so the trend indicator is →
PREDATE=`dbaccess ${DATABASE} 2>${PREDIR}/log <<! | awk -v date=${DATEBASE_DATE} '
BEGIN{a=date}
/^[0-9]+/ {a=$1}
END {print a}'
select first 1 date from dbs_spaces where date < "${DATEBASE_DATE}" and server="${SERVER}"
and position="${POSITION}" and machine="${MACHINE}" order by date desc
!
`
sed "s@^@${SERVER}|${POSITION}|${MACHINE}|${DATEBASE_DATE}|@" ${DATADIR}/dbs_spaces.txt > ${PREDIR}/dbs_spaces.txt

dbaccess ${DATABASE} >${PREDIR}/log 2>&1 <<!
delete from dbs_spaces where server="${SERVER}" and position="${POSITION}" and machine="${MACHINE}" and date="${DATEBASE_DATE}";
load from ${PREDIR}/dbs_spaces.txt insert into dbs_spaces;

unload to ${PREDIR}/dbs_spaces.txt select a.dbspace,a.total,a.free,a.usage,b.usage,
case when a.usage > nvl(b.usage,0) then "↑"
when a.usage < nvl(b.usage,0) then "↓"
else "→"
end
from dbs_spaces a, outer dbs_spaces b
where a.date="${DATEBASE_DATE}" and b.date="${PREDATE}"
and a.server="${SERVER}" and a.position="${POSITION}" and a.machine="${MACHINE}"
and a.server=b.server and a.position=b.position and a.machine=b.machine and a.dbspace=b.dbspace
order by a.usage desc
!

#Operate tabspace usage

PREDATE=`dbaccess ${DATABASE} 2>${PREDIR}/log <<! | awk -v date=${DATEBASE_DATE} '
BEGIN{a=date}
/^[0-9]+/ {a=$1}
END {print a}'
select first 1 date from maxrows where date < "${DATEBASE_DATE}" and server="${SERVER}"
and position="${POSITION}" and machine="${MACHINE}" order by date desc
!
`
awk -v server=${SERVER} -v position=${POSITION} -v machine=${MACHINE} -v date=${DATEBASE_DATE} '
!/^$/ && !/^db.*ratio$/ {printf "%s|%s|%s|%s|%s|%s|%s|%s|%s|%s|%s|%s|\n", server, position, machine, date, $1, $2, $3, $4, $5, $6, $7, $8}
' ${DATADIR}/maxrows.txt > ${PREDIR}/maxrows.txt

dbaccess ${DATABASE} >${PREDIR}/log 2>&1 <<!
delete from maxrows where server="${SERVER}" and position="${POSITION}" and machine="${MACHINE}" and date="${DATEBASE_DATE}";
load from ${PREDIR}/maxrows.txt insert into maxrows;

unload to ${PREDIR}/maxrows.txt select a.database, a.table, a.fragment, a.rowsize, a.rows, a.maxrows, a.ratio::decimal(4,2), b.ratio::decimal(4,2),
case when a.ratio > nvl(b.ratio,0) then "↑"
when a.ratio < nvl(b.ratio,0) then "↓"
else "→"
end
from maxrows a, outer maxrows b
where a.date="${DATEBASE_DATE}" and b.date="${PREDATE}"
and a.server="${SERVER}" and a.position="${POSITION}" and a.machine="${MACHINE}"
and a.server=b.server and a.position=b.position and a.machine=b.machine and a.database=b.database and a.table=b.table and a.fragment=b.fragment
order by a.ratio desc;
!

#Operate extent usage

PREDATE=`dbaccess ${DATABASE} 2>${PREDIR}/log <<! | awk -v date=${DATEBASE_DATE} '
BEGIN{a=date}
/^[0-9]+/ {a=$1}
END {print a}'
select first 1 date from maxextents where date < "${DATEBASE_DATE}" and server="${SERVER}"
and position="${POSITION}" and machine="${MACHINE}" order by date desc
!
`
awk -v server=${SERVER} -v position=${POSITION} -v machine=${MACHINE} -v date=${DATEBASE_DATE} '
!/^$/ && !/^db.*ratio$/ {printf "%s|%s|%s|%s|%s|%s|%s|%s|%s|%s|%s|%s|%s|\n", server, position, machine, date, $1, $2, $3, $4, $5, $6, $7, $8, $9}
' ${DATADIR}/maxextents.txt > ${PREDIR}/maxextents.txt

dbaccess ${DATABASE} >${PREDIR}/log 2>&1 <<!
delete from maxextents where server="${SERVER}" and position="${POSITION}" and machine="${MACHINE}" and date="${DATEBASE_DATE}";
load from ${PREDIR}/maxextents.txt insert into maxextents;

unload to ${PREDIR}/maxextents.txt select a.database, a.table, a.fragment, a.extent, a.restextent, a.ratio::decimal(4,2), b.ratio::decimal(4,2),
case when a.ratio > nvl(b.ratio,0) then "↑"
when a.ratio < nvl(b.ratio,0) then "↓"
else "→"
end
from maxextents a, outer maxextents b
where a.date="${DATEBASE_DATE}" and b.date="${PREDATE}"
and a.server="${SERVER}" and a.position="${POSITION}" and a.machine="${MACHINE}"
and a.server=b.server and a.position=b.position and a.machine=b.machine and a.database=b.database and a.table=b.table and a.fragment=b.fragment
order by a.ratio desc;
!

#Operate sequence scan

PREDATE=`dbaccess ${DATABASE} 2>${PREDIR}/log <<! | awk -v date=${DATEBASE_DATE} '
BEGIN{a=date}
/^[0-9]+/ {a=$1}
END {print a}'
select first 1 date from seq_scans where date < "${DATEBASE_DATE}" and server="${SERVER}"
and position="${POSITION}" and machine="${MACHINE}" order by date desc
!
`
sed "s@^@${SERVER}|${POSITION}|${MACHINE}|${DATEBASE_DATE}|@" ${DATADIR}/seq_scans*.txt > ${PREDIR}/seq_scans.txt

dbaccess ${DATABASE} >${PREDIR}/log 2>&1 <<!
delete from seq_scans where server="${SERVER}" and position="${POSITION}" and machine="${MACHINE}" and date="${DATEBASE_DATE}";
load from ${PREDIR}/seq_scans.txt insert into seq_scans;

unload to ${PREDIR}/seq_scans.txt select a.database,a.table,a.rows,a.seqscan,b.seqscan,
case when a.seqscan > nvl(b.seqscan,0) then "↑"
when a.seqscan < nvl(b.seqscan,0) then "↓"
else "→"
end
from seq_scans a, outer seq_scans b
where a.date="${DATEBASE_DATE}" and b.date="${PREDATE}"
and a.server="${SERVER}" and a.position="${POSITION}" and a.machine="${MACHINE}"
and a.server=b.server and a.position=b.position and a.machine=b.machine and a.database=b.database and a.table=b.table 
order by a.rows desc, a.seqscan desc
!

#Check for serial number issues
awk '
$1 ~ /TBLspace/ && $2 ~ /.+:.+\..+/ {table=$2}
/Current serial value/ && $4 >= 1000000000 {print table, $4} ' ${DATADIR}/oncheck.pc > ${PREDIR}/oncheck.serial

done



https://www-304.ibm.com/support/docview.wss?uid=swg21330733
