
Hello. I'd like to share something I stumbled across in the IBM documentation. I suppose this reveals that I haven't read the manual cover to cover.

In Informix, there is a difference in behavior between a UNIQUE constraint and a unique index.

I had never thought the two behaved differently, and now that I know, I wonder whether this is something to keep in mind during development.


The main difference described in the document is as follows.

In DML statements, enabled unique constraints on a logged table are checked at the end of a statement, but unique indexes are checked on a row-by-row basis, thereby preventing any insert or update of a row that might potentially violate the uniqueness of the specified column (or for a multiple-column constraint or index, the column list).


To summarize: on a table in logged mode (unbuffered or buffered),

a unique constraint checks for duplicates when the INSERT or UPDATE statement finishes,

while a unique index checks for duplicates row by row, on every single row it processes.


Based on the IBM documentation above, I put together the scenarios below.


Scenario #1 ==================

1. Create a table with a single integer column

> create table test (c int);


2. Insert 100,000 test rows

> insert into test select level from sysmaster:sysdual connect by level <= 100000;


3. Create a UNIQUE index on the column

> create unique index test_idx on test(c);


4. Run an UPDATE statement

> update test set c=c+1;


  346: Could not update a row in the table.    << The error occurs as soon as the UPDATE statement runs.


  100: ISAM error:  duplicate value for a record with unique key.


============================


Scenario #2 ==================

1. Create a table with a single integer column, with a UNIQUE constraint on that column.

> create table test (c int, unique(c));


2. Insert 100,000 test rows

> insert into test select level from sysmaster:sysdual connect by level <= 100000;


3. Run an UPDATE statement

> update test set c=c+1;


100000 row(s) updated.    << The UPDATE completes after about 3-7 seconds, with no duplicate-value error.

============================


Since no error occurred in scenario #2, you might conclude that declaring a unique constraint is the better choice.

However, there are a few things to consider.


When a unique constraint is defined, an error is not raised immediately even if duplicate values

arise while the UPDATE/INSERT statement is running. Here is an extreme example.


> drop table test;

> create table test (c int, unique(c));

> insert into test select level from sysmaster:sysdual connect by level <= 100000;


100000 row(s) inserted.


$ echo "update test set c=3;" | timex dbaccess stores_demo


Database selected.



  268: Unique constraint (informix.u160_134) violated.


  100: ISAM error:  duplicate value for a record with unique key.

Error in line 1

Near character position 18



Database closed.



real 37.48

user 0.02

sys  0.03


I created the same table and data as in scenario #2 and ran an UPDATE setting every row to the same value, 3.

Because of the unique constraint, the statement can never actually succeed.

Yet it still executes and checks for duplicates only at the end, so it took about 37 seconds.

The elapsed time will vary with system performance, but you can see that the statement fails during execution and the changes are rolled back.


This raised a few more questions.

What about a primary key constraint?

It failed with the same 268 error as the unique constraint.


What about creating both a unique index and a primary key constraint?

Likewise, the 268 error occurred.


And what if, with both a unique index and a primary key constraint, the UPDATE statement is hinted to use the unique index?

I expected that forcing the index in the UPDATE statement might change the behavior,

but as shown below, the 268 error occurred all the same.
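For reference, the primary key test can be reproduced with a sketch like the one below (same table and data as scenario #2; statements are illustrative, and the constraint name in the 268 message will differ on your system):

> drop table test;

> create table test (c int, primary key (c));

> insert into test select level from sysmaster:sysdual connect by level <= 100000;

> update test set c=3;

As with the unique constraint, the statement runs to the end and only then fails with error 268.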


> drop table test;

> create table test (c int);

> insert into test select level from sysmaster:sysdual connect by level <= 100000;

> create unique index test_pk on test(c);

> alter table test add constraint primary key (c) constraint test_pk;


$ echo "update {+ explain index(test test_pk)} test set c=3;" | timex dbaccess stores_demo


Database selected.



  268: Unique constraint (informix.test_pk) violated.


  100: ISAM error:  duplicate value for a record with unique key.

Error in line 1

Near character position 50



Database closed.



real 39.24

user 0.02

sys  0.03



In conclusion, when a unique or primary key constraint is defined,

duplicates are checked after the UPDATE/INSERT statement has executed.

Since such columns hold unique values, UPDATEs on them may be rare, but the behavior differs from a unique index, so it is worth keeping in mind.



** References

https://www.ibm.com/support/knowledgecenter/en/SSGU8G_12.1.0/com.ibm.sqls.doc/ids_sqs_0517.htm


Hello. According to the February 2019 IIUG Insider, Informix version 14 is scheduled for release in the first half of 2019.

It was introduced at last year's IIUG World conference, but I thought this was the first time the version number itself had been made public.

Or so I believed; searching the web, I found that version 14.10 had already been talked about since last year.

Do people in the West really dislike the number 13 that much?


I expected IBM would soon start an EAP (Early Access Program), but it appears companies such as CURSOR and some customers have already been running beta tests.

https://www.cursor-distribution.de/en/community/community-informix


You can see the new features of Informix 14.10 at the link below.

http://www.iiug.org/en/2019/02/04/iiug-insider-issue-222-january-2019/



Enhances log replay performance of remote secondary servers and OLTP transactions

      • Up to 5x improvement in replication enables client applications to sustain near zero latency between primary and secondary servers allowing faster recovery time objective in disaster scenarios.
      • Up to 10% faster than 12.10 for standard OLTP transactions

Provides higher security for encryption keys and integrated backup encryption

      • By supporting remote storage of encryption-at-rest keys in Amazon Key Manager, an additional layer of security is applied to Informix server encrypted data. Three ciphers are supported: AES128, AES192, and AES256.
      • By supporting a remote key management server to generate encryption keys and reducing DBA effort to encrypt Informix data backups, data security in backup media is enhanced without the risk of losing keys. The encryption key is itself encrypted (called envelope encryption).
      • Updated Transport Layer Security (TLS) to 1.2 for a higher level of network transport security.

Enhances usability, streamlines administration, and increases uptime

      • By supporting additional in-place alter operations on tables and data types, database downtime is avoided

Provides a new centralized and graphical administration tool called InformixHQ

      • InformixHQ is a modern web console for visualizing, monitoring, and managing your Informix server instances. It is purpose built for ease-of-use, scale-out, and optimizing DevOps needs.
      • It provides critical performance management capabilities, monitoring how key performance metrics are changing over time and tracking how efficiently Informix is running your workload even when you’ve stepped away from your screen. Its monitoring system feeds directly into a customizable alerting system so you can be immediately alerted via email, Twilio, or PagerDuty whenever an issue occurs on one of your Informix database server instances.
      • It is designed to be scalable to efficiently manage and monitor as many Informix database server instances as you run.
      • It enables collaboration between the DBAs, the app developers, the ops engineers, and management, and can be accessed from any desktop, laptop, or mobile device.
      • InformixHQ is the path forward for graphical monitoring, alerting, and administration of your Informix database servers.

Enhances Unicode support to current V11.0

      • Ability to support storage and processing for text characters that are new over the last 10 years. The new scripts and characters in Version 11.0 add support for lesser-used languages and unique written requirements worldwide.

Enhances time series granularity and spatial projection

      • Support for sub-second timestamps allows very high speed event generation
      • Support for more geodetic and projection systems enables you to track packages in a shipping depot using your own coordinate system

Increased hardware limits at the Workgroup Edition level

      • New limits of 24 cores / 32 GB RAM enable higher performance and scalability for your critical applications, providing an even higher return on your Informix investment.

Includes storage optimization at the Enterprise Edition level

      • All Informix Enterprise Edition deployments now benefit from the well-proven data compression technology already built into Informix.
      • Data compression reduces primary, secondary, backups and log storage while simultaneously enhancing performance by reducing I/O operations.
      • Customers have reported average 4x reductions in database size while experiencing faster I/O operations and faster backups.

Common table expression

      • By implementing the SQL standard CTE, application developers can improve the readability and maintenance of complex queries, reuse a CTE result set multiple times in one query, pipeline aggregations, and write powerful recursive queries.
      • Complex queries can be divided into simple, ad hoc, logical building blocks. These simple blocks can then be used to build more complex, interim CTEs until the final result set is generated.
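As a sketch of what this enables, a recursive CTE can generate a sequence by referencing itself; this follows the SQL standard syntax, and the exact Informix 14.10 form should be checked against the 14.10 documentation:

WITH numbers (n) AS (
    SELECT 1 FROM sysmaster:sysdual
    UNION ALL
    SELECT n + 1 FROM numbers WHERE n < 5
)
SELECT n FROM numbers;

The anchor member produces the row 1, and the recursive member keeps adding n + 1 until the WHERE condition stops it, yielding 1 through 5.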

Setting up Informix instance/or database level replication using Enterprise Replication

      • Single command that automates setting up of Enterprise Replication and data migration between two Informix server instances located either on-prem or in the cloud. This command automates tasks such as defining an Enterprise Replication domain between the two servers, adding key columns for the tables, creating required storage spaces for the databases, copying database schema from source server to target server, creating replicate definitions and finally synchronizing data between source and target server instances, all in a transactional, flexibly phased manner, with no downtime.


** Sources

http://www.iiug.org/en/2019/02/04/iiug-insider-issue-222-january-2019/

http://www.hamburg-data.de/en/news/136-soon-informix-14-10

https://www.cursor-distribution.de/en/community/community-informix


Since Informix version 11, there has been a sysadmin database for database administration.


Because the sysadmin database holds Informix status information,

if data keeps accumulating it is a good idea to delete it periodically, or to place the database in a dbspace with plenty of free space from the start.
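For example, the command_history table in sysadmin is a common source of growth. Assuming you want to keep only the last 30 days, an illustrative cleanup would be:

> database sysadmin;

> delete from command_history where cmd_exec_time < current - 30 units day;

(Retention for the scheduler tables can also be tuned through sysadmin's threshold parameters, so treat this as a sketch rather than the only approach.)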


You can recreate the sysadmin database either with the provided scripts

or with the admin/task procedures of the sysadmin database itself.


1. Recreating the sysadmin database manually

According to an IBM technote, $INFORMIXDIR/etc/sysadmin contains scripts for recreating the sysadmin database.

Run the following commands in order.


cd $INFORMIXDIR/etc/sysadmin;

dbaccess - db_uninstall.sql;

dbaccess - db_create.sql;

dbaccess sysadmin db_install.sql;

dbaccess sysadmin sch_tasks.sql;

dbaccess sysadmin sch_aus.sql;

dbaccess sysadmin sch_sqlcap.sql;

dbaccess sysadmin start.sql;


Looking inside the db_create.sql file, you will find the statement that creates the database.


CREATE DATABASE sysadmin WITH LOG;


You could edit the file to specify a dbspace in this statement.
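For example, assuming a dbspace named datadbs already exists (a hypothetical name), the statement could be changed to:

CREATE DATABASE sysadmin IN datadbs WITH LOG;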


2. Moving the sysadmin database using a sysadmin procedure

You can recreate the sysadmin database with the admin/task procedure in the sysadmin database.


execute function task("reset sysadmin", "admindbs"); 


If you do not specify a dbspace name, the database is created in rootdbs. When recreation completes, the scheduler restarts automatically.
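To confirm where the recreated sysadmin database ended up, you can query sysmaster; dbinfo('dbspace', partnum) resolves a partition number to its dbspace name (a sketch, to be run from any database):

> select name, dbinfo('dbspace', partnum) dbspace from sysmaster:sysdatabases where name = 'sysadmin';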



** References

https://www-01.ibm.com/support/docview.wss?uid=swg21266296

https://www-01.ibm.com/support/docview.wss?uid=swg21420189


Hello. Informix versions before 12.1 do not provide the RANK/DENSE_RANK functions.

So you have to implement the equivalent yourself.


Searching the web, the common approaches seem to be session variables or a correlated subquery.

Based on those, I implemented the behavior with Informix SQL and user-defined FUNCTIONs.


## Inserting sample data

DROP TABLE scores;

CREATE TABLE scores
     (
         id serial not null,
         score int not null
     );


INSERT INTO scores (score) VALUES     (50);

INSERT INTO scores (score) VALUES     (40);

INSERT INTO scores (score) VALUES     (75);

INSERT INTO scores (score) VALUES     (80);

INSERT INTO scores (score) VALUES     (55);

INSERT INTO scores (score) VALUES     (50);



## SQL example

An example implementing the RANK/DENSE_RANK behavior with a plain SQL statement.


select id, score,

(select count(*)::int+1 from scores a where a.score > b.score) rank,

(select count(distinct(a.score))::int+1 from scores a where a.score > b.score) dense_rank

from scores b order by score desc;


         id       score          rank    dense_rank


          4          80             1             1

          3          75             2             2

          5          55             3             3

          1          50             4             4

          6          50             4             4

          2          40             6             5




## Implementing as functions (ranking/dense_ranking)

The two functions differ in how they rank tied values, corresponding to RANK and DENSE_RANK.

Because state is kept in session-global variables, you must call the init_ranking procedure to reset the variables before running them again in the same session.


DROP FUNCTION ranking;
CREATE FUNCTION ranking (curr_val int) returning int;
-- session-global state, shared across calls in the same session
define global counter int default 0;   -- current rank value
define global last_val int default 0;  -- value of the previous row
define global seq_val int default 0;   -- number of rows processed so far

-- first row, or a new value arriving with no ties so far: advance the rank
if counter = 0 then
let counter = counter + 1;
elif seq_val - counter = 0 and last_val != curr_val then
let counter = counter + 1;
end if

let seq_val = seq_val + 1;
let last_val = curr_val;

-- once ties have occurred, later rows take their row position as the rank
if seq_val - counter > 1 then
return seq_val;
end if

return counter;
end function;



DROP FUNCTION dense_ranking;
CREATE FUNCTION dense_ranking (curr_val int) returning int;
-- session-global state, shared across calls in the same session
define global counter int default 0;   -- current dense rank value
define global last_val int default 0;  -- value of the previous row

-- increase the rank only when the value changes
if counter = 0 or last_val != curr_val then
let counter = counter + 1;
end if

let last_val = curr_val;

return counter;
end function;



CREATE PROCEDURE init_ranking ();
-- reset the global variables so the ranking functions can be run again
define global counter int default 0;
define global last_val int default 0;
define global seq_val int default 0;

let counter = 0;
let last_val = 0;
let seq_val = 0;
end procedure;



## Function execution example

call init_ranking ();

select id, score, ranking (score) rank from (select id,score from scores order by score desc);



         id       score        rank


          4          80           1

          3          75           2

          5          55           3

          1          50           4

          6          50           4

          2          40           6



call init_ranking ();

select id, score, dense_ranking (score) dense_ranking from (select id,score from scores order by score desc);



         id       score dense_ranking


          4          80             1

          3          75             2

          5          55             3

          1          50             4

          6          50             4

          2          40             5



The code probably needs more exception handling, but the results look correct.

I hope this is a useful reference.



Reference sites

https://mattmazur.com/2017/03/26/exploring-ranking-techniques-in-mysql/

http://informix-myview.blogspot.com/2012/09/in-search-of-rownum.html



Informix Dynamic Server and Linux -- Up and running

A cookbook for installing Linux and IDS





IDS was one of the first commercial database products ported to Linux. Today the advantages of running IDS 10.0 on the Linux platform are:

  • Support for new Linux platforms
  • Transparent optimization and exploitation of Linux environments
  • Exploitation of Linux 2.6 kernel features
  • Utilization of asynchronous I/O and direct I/O for enhancing I/O performance
  • Processor affinity to achieve improved scalability and parallelism
  • Performance optimization using configurable page sizes
  • Additional installation methods on Linux systems
  • Simple Network Management Protocol (SNMP) support
  • Interprocess communication with stream pipes
  • Scalability on the 2.6 kernel

This article describes a test configuration of IDS on Linux. By following the details I describe, you should be able to customize your own IDS configuration on Linux. We'll start by looking at the setup of Linux itself.

Installing Linux

Technical details for the sample system

The system used for installing Linux and IDS was a standard PC:

  • Pentium III 600MHz
  • 512MB RAM
  • 2 x 15.3GB disks (Maxtor 51536U3 ATA)

For IDS version 10.0 (32-bit) the CPU must be compatible with an Intel i686. Compatible CPU types should be at a minimum Intel Pentium II / Pentium Pro or AMD Athlon. For our sample system, the first disk was dedicated for installing the Linux OS, and the second disk provided space for the IDS databases. No partitions should exist on the disks.

Red Hat Enterprise Linux 4

For this example, Red Hat Enterprise Linux AS release 4 (Nahant) was installed on the sample PC. The following steps will help you to set up a Linux system that is ready to install and run IDS. Please note, the installation of Linux will destroy any operating system and data on the PC.

Detailed instructions can be found at the "Red Hat Enterprise Linux Documentation" page in the document "Red Hat Enterprise Linux Installation Guide for the x86, Itanium, and AMD64 Architectures.". See the Related topics section "Learn" for links.

During installation, you can use the default settings in most cases. Here are some hints and exceptions:

  • Boot the PC from the first of the 4 CDs.
  • In step Language Selection choose English.
  • In step Keyboard Configuration choose the layout for your keyboard.
  • In step Upgrade Examine select Install Red Hat Enterprise for a fresh install.
  • In step Automatic Partitioning select /dev/hda only (deselect /dev/hdb).
  • In step Package Installation Defaults select Customize software packages ....
  • In step Package Group Selection, make these selections:
    • in chapter Applications select Graphical Internet (get Firefox browser).
    • in chapter Development select Legacy Software Development (get packages for compatibility to older Red Hat versions).
  • Reboot after the installation is completed.
  • Log in as user root and open a terminal (Applications -> System Tools -> Terminal).
  • For IDS graphical tools, install OpenMotif runtime from CD #3:
    • Put CD #3 in the drive. It should be automatically mounted.
    • To install the package, type the following command in the terminal:
      rpm -Uvh /media/cdrom/RedHat/RPMS/openmotif-2.2.3-6.RHEL4.2.i386.rpm
    • To eject the CD from the drive, type the command:
      eject
  • To access IDS raw devices with Kernel Asynchronous Input Output (KAIO), install libaio from CD#3:
    • Put CD#3 in the drive. It should be automatically mounted.
    • To install the package, type the following command in the terminal:
      rpm -Uvh /media/cdrom/RedHat/RPMS/libaio-0.3.102-1.i386.rpm
    • To eject the CD from the drive, type the command:
      eject

Novell SUSE Linux Enterprise Server 9

For this example SUSE Linux Enterprise Server 9 (SLES 9) was installed on the sample PC. The following steps should help to set up a Linux system which is ready to install and run IDS. Please note, the installation will destroy any operating system and data on the PC.

Detailed instructions can be found at the "SUSE Linux Documentation" page in the document "SUSE LINUX Enterprise Server 9 Administration and Installation" (PDF). See the Related topics section "Learn" for links.

During installation, you can use the default settings in most cases. Some hints and exceptions are listed below:

  • Boot the PC from the first of the 4 binary CDs.
  • In step Select your language, choose English (US).
  • In step Installation Settings
    • Click on Keyboard layout and choose the appropriate layout.
    • Click on Partitioning and check swap and root partitions are created in /dev/hda.
      • Choose Base partition setup on this proposal.
      • Choose root partition (should be /dev/hda2) and change its file system type to ext3 using the Edit menu.
    • Click on Time Zone and choose the appropriate time zone.
  • Accept the installation settings and confirm the start of installation.
  • Reboot after the installation is completed.
  • Log in and open a terminal (click on terminal looking icon in the panel).

Installing IDS

For this example, the trial version of IDS version 10 (32-bit) can be downloaded with the installed Firefox or Konqueror browser from the "Informix product family -- How to get Informix product downloads -- IBM Software" page. See the Related topics section "Get products and technologies" for a link. Click on Informix Dynamic Server EE V10.0 Trials to get the tar archive iif.10.00.UC3R1TL.Linux.tar and save it to the /tmp directory. Signing in with an IBM ID and password may be required. Choose the HTTP download.

If you are not already logged in as user root, then switch user id to root:

su -

Create group and user informix, and enter password:

groupadd informix
useradd -g informix -m informix
passwd informix

Unpack the downloaded archive of IDS:

mkdir /opt/informix
chown informix:informix /opt/informix
cd /opt/informix
tar xvf /tmp/iif.10.00.UC3R1TL.Linux.tar

If you purchased IDS version 10, then you will probably get the IDS bundle. In this case copy the contents of directory SERVER of the unpacked bundle to the working directory and proceed.

Set environment:

export INFORMIXDIR=/opt/informix

Install the IDS packages:

./install_rpm -acceptlicense=yes

Logout from user root:

exit

Configuring IDS

Set up the environment

If you are not already logged in as user informix, then switch user id to informix:

su - informix

Create an environment file called ids.env using a graphical text editor (such as gedit or kate), for example:

gedit ids.env

Or, use the famous console editor vi:

vi ids.env

Add the following lines to the ids.env file:

export INFORMIXDIR=/opt/informix
export INFORMIXSERVER=demo_on
export PATH=$INFORMIXDIR/bin:$PATH

Save the file and exit the editor.

Set the environment for IDS:

. ./ids.env

(type: dot blank dot slash ids.env)
Each time you need to access this IDS instance you can set the environment from this file again.

Create and edit IDS configuration files:

cd $INFORMIXDIR/etc
cp onconfig.std onconfig
cp sqlhosts.std sqlhosts

Edit file onconfig using the editor of choice (see above), change parameters from - to:

ROOTPATH        /dev/online_root
-
ROOTPATH        /home/informix/dbspaces/online_root
 
MSGPATH         /usr/informix/online.log
-
MSGPATH         /home/informix/logs/online.log
 
LTAPEDEV        /dev/tapedev
-
LTAPEDEV        /dev/null
 
DBSERVERNAME
-
DBSERVERNAME    demo_on

Find the network node hostname of the machine:

uname -n

For example, asterix might be printed.

Edit file sqlhosts using the editor of choice (see above), change line from - to:

demo_on onipcshm        on_hostname     on_servername
-
demo_on onipcshm        asterix         demo_on

(Replace the word "asterix" with the network node hostname of your machine.)

Create the logs directory:

cd
mkdir logs

Create the root dbspace:

mkdir dbspaces
cd dbspaces
touch online_root
chmod 660 online_root

Startup and shutdown

Start and initialize the configured IDS instance:

oninit -iv

prints the following message:

This action will initialize IBM Informix Dynamic Server;
any existing IBM Informix Dynamic Server databases will NOT be accessible -
Do you wish to continue (y/n)?

If you are certain that you want to initialize this instance of IDS, type y to initialize the instance. Note, all data will be lost if the instance has been used previously.

Now IDS is running. For testing purposes, you can create the demo database by executing the script dbaccessdemo7. To access databases, use the console utility dbaccess.

You can shut down the IDS instance with the command:

onmode -ky

and start it again (without initialization) with this command:

oninit

IDS documentation can be read online at "IBM Informix Dynamic Server information center" page. Manuals for all Informix products in different languages can be found at "Informix library" page. See the Related topics section "Learn" for links.

Configuring a raw device

The whole second disk from the example machine can be configured as a raw device. Please note all data will be lost on this disk.

The first step is to create a partition. For this, use the interactive console utility fdisk. Switch to user root to run the following commands:

fdisk /dev/hdb

For example:

The number of cylinders for this disk is set to 29651.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
 
Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)
 
Command (m for help): o
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
 
 
The number of cylinders for this disk is set to 29651.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
 
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-29651, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-29651, default 29651):
Using default value 29651
 
Command (m for help): p
 
Disk /dev/hdb: 15.3 GB, 15303075840 bytes
16 heads, 63 sectors/track, 29651 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/hdb1               1       29651    14944072+  83  Linux
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
Syncing disks.

To make sure the OS is informed about the partition table changes, run this command:

partprobe

Once the partition /dev/hdb1 is created, its permissions must be changed for use by IDS:

chown informix:informix /dev/hdb1
chmod 660 /dev/hdb1

Note, the change of permissions may be required again after reboot of the machine. This can be done using an init script.

If you are not already logged in as user informix, then switch user id to informix:

su - informix

Set the environment for IDS:

. ./ids.env

(Type: dot blank dot slash ids.env)

To remember which chunks are configured, create symbolic links in the dbspaces directory:

ln -s /dev/hdb1 /home/informix/dbspaces/chunk1

Now we create the chunk path named '/home/informix/dbspaces/chunk1' for the dbspace named 'dbspace1'. The size of the partition in kilobytes can be found in the 'Blocks' column of the output of the 'fdisk' command 'p'. It is passed as the parameter of option '-s' and has the value 14944072 in our example. Switch to user informix and set the ids.env environment to run the following command:

onspaces -c -d dbspace1 -p /home/informix/dbspaces/chunk1 -o 0 -s 14944072

For testing, create the demo database in dbspace1:

dbaccessdemo7 -dbspace dbspace1

Conclusion

In this article we've walked through the installation and configuration of specific versions of Red Hat Enterprise Linux and SUSE Linux Enterprise Server for running IDS 10, to help you get Informix up and running on Linux without difficulty. Adaptation for newer versions of the operating systems should be easy by updating the version numbers of mentioned packages.

https://www.ibm.com/developerworks/data/library/techarticle/dm-0603breitfeld/index.html


Here is an IBM document that briefly describes Informix's memory thread. I came across it while looking into an issue involving the mt_purge_cpu_vp_caches function.



Question

A customer asked: while the Informix server is running, they often see a memory thread, and they want to know what this thread mainly does.

Answer

The memory thread is a thread forked when the database server starts. Its main job is to continuously drain (clean up) the private cache memory of all CPU VPs.
Most of its work happens in mt_purge_cpu_vp_caches(), which removes each VP's private cache entries until all of them have been processed.

Related configuration parameter: VP_MEMORY_CACHE_KB

Related monitoring command: onstat -g vpcache


https://www-01.ibm.com/support/docview.wss?uid=swg22011956


Technote (FAQ)


Question

How to determine which tables require dummy updates in order to eliminate outstanding in-place alters before reversion to an earlier version?

Cause

In IBM Informix 11.70 there is a requirement for migration reversion: "In-place ALTER TABLE statements performed in the new version of the server must not be outstanding against any table."

Answer

1) Run 'oncheck -pt' command to identify which tables were altered in the current version. Qualifying tables must have tblspace flag 0x800000 set. For example:


    # oncheck -pt tdb:tab1 
    TBLspace Report for tdb:dmitriyr.tab1 

     Table fragment partition par1 in DBspace dbs1 

        Physical Address               2:5 
        Creation date                  06/25/2012 11:58:16 
        TBLspace Flags                 800802     Row Locking 
                                                  TBLspace use 4 bit bit-maps 
    <...>

As an alternative to 'oncheck -pt' you may run the following query against the Informix sysmaster database to find all qualifying tables: 
    SELECT ta.dbsname, ta.tabname, pt.partnum, HEX(pt.flags) flag 
    FROM sysmaster:systabnames ta, sysmaster:sysptnhdr pt 
    WHERE ta.partnum = pt.partnum 
    AND BIT_AND(HEX(pt.flags),'00800000'::BINARYVAR) = '00800000';

Sample result: 
    dbsname       tdb 
    tabname       tab1 
    partnum       2097154 
    flag          0x00800802

In the above examples the tblspace flag value is 0x800802, so it is clear that the 0x800000 bit is set. 

2) Run 'oncheck -pT <database>:<table>' against the tables identified in step 1 to find which ones have outstanding in-place alters. 

You must look at "Home Data Page Version Summary" section. For example: 
    <...> 
        Home Data Page Version Summary 

                     Version                                 Count 

                           0 (oldest)                           25 
                           1 (current)                          15 

    <...>

In the above example there are 25 pages still to be altered to the new version. That means a dummy update needs to be executed against that table. The steps to perform a dummy update are explained in detail in the migration guide:
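A dummy update simply rewrites each row in place so that every home data page is converted to the current version. A minimal sketch, with hypothetical table and column names:

    UPDATE tab1 SET col1 = col1;

On large tables you may want to run this in batches by key range to keep the transaction size manageable.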


http://www-01.ibm.com/support/docview.wss?uid=swg21599602


Technote (FAQ)


Question

How do you identify outstanding in-place alters in order to resolve them?

These are sometimes called incomplete, or "unrealised", in-place alters.

This document is specifically for IDS version 11 and later. For IDS versions 7 to 10,
see http://www-01.ibm.com/support/docview.wss?uid=swg21226410

Cause

Since version 7.24, some ALTER TABLE statements have been done as "fast alters", where rows are only altered to fit the new schema when the data page they are on is next written to. If there are many alters done, this can lead to the data pages for a table being a variety of versions. When migrating to a later version (or just for the performance benefit) it is a good idea to "complete" the outstanding in-place alters.

Answer

The first step is to work out which tables have data pages at old versions:

There are several documented ways to determine altered tables, including:

  • The migration guide has SQL to identify all tables with in place alters (chapter 3, page 52). However, some versions of the manual have SQL like this:

      -- first set OPTCOMPIND to 0 in onconfig and restart instance 
        set isolation to dirty read; 

      select pg_partnum + pg_pagenum - 1 partn 
          from sysmaster:syspaghdr, sysmaster:sysdbspaces a 
         where pg_partnum = 1048576 * a.dbsnum + 1 
           and pg_next != 0 
          into temp pp with no log ; 

      select b.dbsname database, b.tabname table 
          from sysmaster:systabnames b, pp 
         where partn = partnum ; 
    which shows all tables that ever had an in-place or fast alter, and does not exclude those even where every data page has been rewritten (and therefore updated) since the ALTER TABLE was run.
  • The oncheck -pT utility prints a table showing all current versions on data pages, and page counts for each version, but it runs serially, locking the whole table, and takes a long while to run because it reports on many other things as well.


As these methods have drawbacks, the following steps can be used to obtain a more specific list of outstanding in-place alters: 

1. Set OPTCOMPIND to 0 in the ONCONFIG configuration file and restart the instance. 

2. Set isolation to dirty read. 

3. Find every partnum in the tblspace tbspace of each non-temp dbspace where pg_next (pointer to extended partition page) is set to something other than zero. These tablespaces have been altered, and pg_next points to the pnvers (partition version) page. 

select (p.pg_partnum + p.pg_pagenum - 1) as partn, a.name as dbspace 
 from sysmaster:syspaghdr p, sysmaster:sysdbspaces a 
 where p.pg_partnum = 1048576 * a.dbsnum + 1 
  and a.is_temp = 0 and a.is_blobspace = 0 and a.is_sbspace = 0 
  and p.pg_next != 0 
into temp altpts with no log ; 
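As a side note, the expression `1048576 * a.dbsnum + 1` in the query above follows from how an Informix partnum is laid out: the dbspace number sits in the bits above the low 20, and the logical page number sits in the low 20 bits, with the tblspace tblspace always at logical page 1. A minimal sketch of that arithmetic (not part of the technote):

```python
def tblspace_partnum(dbsnum: int) -> int:
    """Partnum of the tblspace tblspace for a given dbspace number.

    1048576 == 2**20, so the dbspace number lands above the low 20 bits
    and the +1 selects logical page 1, giving the 0xnnn00001 pattern.
    """
    return 1048576 * dbsnum + 1

print(hex(tblspace_partnum(1)))  # rootdbs -> 0x100001
print(hex(tblspace_partnum(4)))  # dbspace 4 -> 0x400001
```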


4. Get the database and table names, minimum and maximum possible page versions, plus row and data page numbers for all those partnums. 

  select b.dbsname database, b.tabname table, a.dbspace dbspace, 
          hex(a.partn) partnum, i.ti_nrows nrows, i.ti_npdata datapages, 
          p.pta_oldvers as oldvers, p.pta_newvers as newvers, 
          -1 as mindver, 999 as maxdver 
  from altpts a, outer sysmaster:sysactptnhdr p, 
       outer sysmaster:systabnames b, outer sysmaster:systabinfo i 
  where a.partn = b.partnum 
    and a.partn = i.ti_partnum 
    and a.partn = p.partnum 
   into temp tabvers with no log ; 


5. Scan all non-empty home data pages for each altered tablespace and get the 
minimum and maximum versions: 

  update tabvers set mindver = 
        ( select (min(pg_next)) / 16777216 
            from sysmaster:syspaghdr p, altpts a 
           where p.pg_partnum = a.partn 
             and tabvers.partnum = a.partn 
             and sysmaster:bitval(pg_flags, '0x1') = 1   
             and sysmaster:bitval(pg_flags, '0x8') <> 1 
             and pg_frcnt < (2020 - 4*pg_nslots) 
        ) 
   where 1 = 1 ; 

  update tabvers set maxdver =   
        ( select (max(pg_next)) / 16777216 
            from sysmaster:syspaghdr p, altpts a 
           where p.pg_partnum = a.partn 
             and tabvers.partnum = a.partn 
             and sysmaster:bitval(pg_flags, '0x1') = 1 
             and sysmaster:bitval(pg_flags, '0x8') <> 1 
             and pg_frcnt < (2020 - 4*pg_nslots) 
        )   
    where 1 = 1 ; 
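The division by 16777216 in both queries assumes, as the SQL implies, that on altered data pages the page version is carried in the high-order byte of the 32-bit pg_next field. A small sketch of that extraction:

```python
def page_version(pg_next: int) -> int:
    """Isolate the page version from the top byte of a 32-bit pg_next value.

    16777216 == 2**24, so integer division is equivalent to pg_next >> 24.
    """
    return pg_next // 16777216

print(page_version(0x02000000))  # a version-2 page -> 2
print(page_version(0x03000000))  # a version-3 page -> 3
```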


6. Display the tablespaces which require updating to complete the in-place alters: 

   select database, table, dbspace, nrows, datapages, 
          oldvers as oldestv, mindver, maxdver, newvers as newestv 
     from tabvers 
    where newvers > mindver 
    order by 1, 2; 


The second step is to do a dummy update on at least one row on each of those pages not updated to the current version. 

For all tables which have been altered, but are not yet "complete" , you can run a "dummy update" statement, setting the primary key column to itself (or other "change" that changes nothing). This will bring all of the pages to the most current version: 

update TABNAME set PKEY = PKEY where 1 = 1 ; 
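If many tables need this treatment, the statements can be generated mechanically from the (table, primary-key column) pairs reported in step 6. A hypothetical helper, with illustrative table and column names:

```python
def dummy_update_sql(table: str, pkey: str) -> str:
    """Build a dummy-update statement that sets the key column to itself.

    The update changes nothing logically, but rewrites every row and so
    brings all of the table's data pages to the newest version.
    """
    return "update {t} set {k} = {k} where 1 = 1 ;".format(t=table, k=pkey)

print(dummy_update_sql("customer", "customer_num"))
```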


http://www-01.ibm.com/support/docview.wss?uid=swg21389693


Technote (FAQ)


Question

How do you identify outstanding in-place alters in order to resolve them?

This document is specifically for IDS versions 7 to 10. For IDS version 11 and later,
see http://www-01.ibm.com/support/docview.wss?uid=swg21389693

Cause

Since version 7.24, some ALTER TABLE statements have been done as "fast alters", where rows are only altered to fit the new schema when the data page they are on is next written to. If there are many alters done, this can lead to the data pages for a table being a variety of versions. When migrating to a later version (or just for the performance benefit) it is a good idea to "complete" the outstanding in-place alters.

Answer

The first step is to work out which tables have data pages at old versions:

There are several documented ways to determine altered tables, including:

  • The 9.4 migration guide has SQL to identify all tables with in place alters (chapter 5, page 5):
    first set OPTCOMPIND to 0 in onconfig and restart instance
     set isolation to dirty read;

    select pg_partnum + pg_pagenum - 1 partn
       from sysmaster:syspaghdr, sysmaster:sysdbspaces a
      where pg_partnum = 1048576 * a.dbsnum + 1
        and pg_next != 0
       into temp pp with no log ;

    select b.dbsname database, b.tabname table
       from sysmaster:systabnames b, pp
      where partn = partnum ;

    Unfortunately, this shows all tables that ever had an in-place or fast alter, and does not exclude those even where every data page has been rewritten (and therefore updated) since the ALTER TABLE was run.
  • The oncheck -pT utility prints a table showing all current versions on data pages, and page counts for each version, but it runs serially, locks the whole table, takes a long while to run, and reports on many other things as well.

    This SQL will report the minimum and maximum page version for a table (or by default, all tables) with incomplete alters. Dirty reads mean no locks, and it also reports which dbspaces are involved. It runs faster than oncheck -pT.

As these methods have drawbacks, the following steps can be used to obtain a more specific list of outstanding in-place alters: 

1. Set OPTCOMPIND to 0 in the ONCONFIG configuration file and restart the instance. 

2. Set isolation to dirty read. 

3. Find every partnum in the tblspace tbspace of each non-temp dbspace where pg_next (pointer to extended partition page) is set to something other than zero. These tablespaces have been altered, and pg_next points to the pnvers (partition version) page. 

select (p.pg_partnum + p.pg_pagenum - 1) as partn, a.name as dbspace 
 from sysmaster:syspaghdr p, sysmaster:sysdbspaces a 
 where p.pg_partnum = 1048576 * a.dbsnum + 1   
           -- Tblspace tblspace partnums have format 0xnnn00001 
  and a.is_temp = 0 and a.is_blobspace = 0 and a.is_sbspace = 0 
           -- exclude tempspaces, blobspaces and smart blob spaces 
  and p.pg_next != 0 
           -- non zero = altered 
into temp altpts with no log ; 

4. Get the database and table names, plus row and data page numbers for all those partnums. 

select b.dbsname database, b.tabname table, 
       a.dbspace dbspace, hex(a.partn) partnum, 
       i.ti_nrows nrows, i.ti_npdata datapages, 
       -1 as minver, 999 as maxver 
  from altpts a, outer sysmaster:systabnames b, 
    outer sysmaster:systabinfo i 
  where a.partn = b.partnum and a.partn = i.ti_partnum 
into temp tabvers with no log ; 

5. Scan all non-empty home data pages for each altered tablespace and get 
the minimum and maximum versions: 

update tabvers set minver = ( 
  select (min(pg_next)) / 16777216 
  from sysmaster:syspaghdr p, altpts a 
  where p.pg_partnum = a.partn 
  and tabvers.partnum = a.partn 
  and sysmaster:bitval(pg_flags, '0x1') = 1  -- data page 
  and sysmaster:bitval(pg_flags, '0x8') <> 1 -- not remainder page 
  and pg_frcnt < (2020 - 4*pg_nslots)        -- with some data on it. 
) where 1 = 1 ; 

update tabvers set maxver = ( select (max(pg_next)) / 16777216 
  from sysmaster:syspaghdr p, altpts a 
  where p.pg_partnum = a.partn 
  and tabvers.partnum = a.partn 
  and sysmaster:bitval(pg_flags, '0x1') = 1 
  and sysmaster:bitval(pg_flags, '0x8') <> 1 
  and pg_frcnt < (2020 - 4*pg_nslots) 
)  where 1 = 1 ; 

Note the use of 2020 above is based on a dbspace with a 2KB page size, less the 24-byte header and 4-byte timestamp. For a 4KB page size use 4096 - 28 = 4068, for an 8KB page size use 8192 - 28 = 8164, and so on. 
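The note above reduces to a single formula: subtract the 28-byte fixed overhead (24-byte header plus 4-byte timestamp) from the page size, then 4 bytes per slot-table entry. A sketch of that arithmetic:

```python
def frcnt_threshold(pagesize: int, nslots: int) -> int:
    """Free-count threshold below which a page holds some row data.

    Usable area = page size - 24-byte header - 4-byte timestamp;
    each row on the page also consumes a 4-byte slot-table entry.
    """
    return pagesize - 28 - 4 * nslots

print(frcnt_threshold(2048, 0))  # 2KB page -> 2020, as used in the SQL
print(frcnt_threshold(8192, 0))  # 8KB page -> 8164
```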

6. Display the tablespaces which require updating to complete the in-place alters: 

select database, table, dbspace, nrows, datapages, minver, maxver, 
 decode(maxver - minver, 0, "Maybe Needs Update", "Definitely Needs Update") 
 from tabvers 
order by 1, 2; 

The output of this SQL provides a list of the tablespaces (tables or fragments of tables) where in-place alters exist. If some data pages are at a lower version than other data pages, then a dummy update is definitely required. If minver = maxver, a dummy update may still be required: an ALTER TABLE may have been run with no data pages written since (i.e. no inserts, updates, or deletes). Prior to IDS version 11, it is not possible to determine from the sysmaster database what the potential maximum version of a data page is for a table. In version 11 there is a new sysmaster table called sysactptnhdr that does give this information. 


The second step is to do a dummy update on at least one row on each of those pages not updated to the current version. 

For all tables which have been altered, but are not yet "complete" , you can run a "dummy update" statement, setting the primary key column to itself (or other "change" that changes nothing). This will bring all of the pages to the most current version: 

update TABNAME set PKEY = PKEY where 1 = 1 ; 


http://www-01.ibm.com/support/docview.wss?uid=swg21226410


I ran a backup test after deliberately corrupting the contents of an Informix data file.


## Test Environment

1. Test server: AIX 6.1

2. DBSPACE layout (number of chunks): rootdbs (1), datadbs (3)

3. onstat -d output

IBM Informix Dynamic Server Version 7.31.UD9     -- Quiescent -- Up 00:29:45 -- 18112 Kbytes


Dbspaces

address  number   flags    fchunk   nchunks  flags    owner    name

4005a150 1        1        1        1        N        informix rootdbs

4005a888 2        1        2        1        N        informix plogdbs

4005a948 3        1        3        1        N        informix llogdbs

4005aa08 4        1        4        3        N        informix datadbs

 4 active, 2047 maximum


Chunks

address  chk/dbs offset   size     free     bpages   flags pathname

4005a210 1   1   0        5000     3009              PO-   /work2/INFORMIX/ids731/dbspaces/rootdbs

4005a388 2   2   0        250000   249947            PO-   /work2/INFORMIX/ids731/dbspaces/plogdbs

4005a488 3   3   0        250000   249707            PO-   /work2/INFORMIX/ids731/dbspaces/llogdbs

4005a588 4   4   0        250000   0                 PO-   /work2/INFORMIX/ids731/dbspaces/datadbs

4005a688 5   4   0        25000    0                 PO-   /work2/INFORMIX/ids731/dbspaces/datadbs1

4005a788 6   4   0        25000    0                 PO-   /work2/INFORMIX/ids731/dbspaces/datadbs2

 6 active, 2047 maximum



## Test Procedure

1. Corrupt the data file (chunk 6 file)

dd if=/dev/random seek=10 bs=1000 count=10 of=/work2/INFORMIX/ids731/dbspaces/datadbs2 conv=notrunc

 

2. Run an ontape backup

ontape -s -L 0

 

3. Check the online.log output during the backup

17:37:39  Assert Failed: Archive detects that page 0x600003 is corrupt.

17:37:39  IBM Informix Dynamic Server Version 7.31.UD9

17:37:39   Who: Session(23, informix@pilma01, 23134372, 1074292452)

                Thread(60, arcbackup1, 4005e334, 1)

                File: rsarcbu.c Line: 2600

17:37:39  stack trace for pid 23396402 written to /tmp/af.424e052

17:37:39   See Also: /tmp/af.424e052

17:37:39  Archive on rootdbs, datadbs Completed with 2 corrupted pages detected.


The page number (0x600003) in the error message raised during the backup appears to indicate the chunk number, not the dbspace number.
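Assuming the reported address uses the same 20-bit split as a partnum (chunk number above the low 20 bits, page offset within the chunk in the low 20 bits), 0x600003 would decode to chunk 6, which is datadbs2, the very file that was corrupted with dd. A sketch of that decoding:

```python
def decode_page(addr: int):
    """Split a page address into (chunk number, page offset).

    Assumes the chunk number occupies the bits above the low 20
    and the page offset within the chunk occupies the low 20 bits.
    """
    return addr >> 20, addr & 0xFFFFF

print(decode_page(0x600003))  # -> (6, 3): chunk 6 (datadbs2), page 3
```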

