
Environment

  • Red Hat Enterprise Linux 7 (7.7 and newer)
  • kernel-3.10.0-1062.el7 and newer.
  • Oracle Redo Logs on XFS file systems.
  • Redo Log using direct AIO via io_submit().

Issue

  • Customers who place the Oracle Redo Log on a file system, in this specific case XFS, have seen performance regressions when moving from RHEL 7.2 to 7.8.
  • Very slow io_submit() syscalls lead to poor Oracle DB performance when the Oracle log writer process writes to an XFS filesystem, even though iostat shows no visible delays on the disks backing the filesystem.

Resolution

  • Moving the Redo Log destination onto an EXT4 filesystem is currently the easiest way to resolve this.
  • If XFS is required, the Redo Log block size needs to be at least equal to the filesystem block size. This means either increasing the Redo Log block size to 4k, to match the default XFS block size, or rebuilding the XFS filesystem with a 512b block size.
    • For instructions on changing the Redo Log block size, review the database manuals and consult the database vendor (Oracle).
    • Rebuilding the XFS filesystem with a 512b block size can be done with the mkfs.xfs options -b size=512 -m crc=0 (see the sketch after this list).
  • The commit introducing the delays addressed possible corruption-related issues. There is no planned change in the XFS code of Red Hat Enterprise Linux 7 to improve the performance of unaligned dio writes.
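
A minimal sketch of rebuilding the filesystem with a 512-byte block size; the device and mount point (/dev/mapper/redo_lv, /u01/redo) are hypothetical, and mkfs.xfs destroys all existing data on the device, so back up the redo logs first.

    mkfs.xfs -f -b size=512 -m crc=0 /dev/mapper/redo_lv
    mount /dev/mapper/redo_lv /u01/redo
    xfs_info /u01/redo        # the data section should report bsize=512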

Root Cause

Based on the stack traces collected while the issue was occurring, the delay is caused by waiting in inode_dio_wait, triggered by the submission of direct I/O write requests that are not aligned to filesystem blocks. In such a scenario, the following upstream patch serializes requests (more thoroughly than before) in order to avoid data corruption.

This patch has been backported to Red Hat Enterprise Linux 7, in kernel-3.10.0-1062.el7, with commit:

ce7cfcb2740 [fs] xfs: serialize unaligned dio writes against all other dio writes

XFS applies more strict serialization constraints to unaligned
direct writes to accommodate things like direct I/O layer zeroing,
unwritten extent conversion, etc. Unaligned submissions acquire the
exclusive iolock and wait for in-flight dio to complete to ensure
multiple submissions do not race on the same block and cause data
corruption.

This generally works in the case of an aligned dio followed by an
unaligned dio, but the serialization is lost if I/Os occur in the
opposite order. If an unaligned write is submitted first and
immediately followed by an overlapping, aligned write, the latter
submits without the typical unaligned serialization barriers because
there is no indication of an unaligned dio still in-flight. This can
lead to unpredictable results.

To provide proper unaligned dio serialization, require that such
direct writes are always the only dio allowed in-flight at one time
for a particular inode. We already acquire the exclusive iolock and
drain pending dio before submitting the unaligned dio. Wait once
more after the dio submission to hold the iolock across the I/O and
prevent further submissions until the unaligned I/O completes. This
is heavy handed, but consistent with the current pre-submission
serialization for unaligned direct writes.

Diagnostic Steps

Disclaimer: Links contained herein to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.

  • Database logs frequently report long delays from the Redo Log writer, similar to the following:
  • Warning: log write elapsed time 158ms, size 35041KB
    
  • Collecting strace from the Redo Log writer process reveals io_submit calls taking a long time to complete, matching the delays reported in the database logs. The strace options -yy -T -tt -f provide useful information on the files being accessed and on timing; strace needs to be attached to the Redo Log writer process with the -p option (a sketch of attaching strace and checking alignment appears at the end of the Diagnostic Steps).
    In the strace output, the offsets and byte counts of each io_submit call can be seen. These are not multiples of the XFS filesystem block size (the writes are unaligned to the filesystem block size and, because of commit ce7cfcb2740, serialized to avoid data corruption).
    A sample of such an io_submit call as captured by strace follows. The sample is from an XFS filesystem with the default 4k block size; the offsets are not multiples of 4096, so the I/O is not aligned to filesystem blocks.
  • 347496 13:39:59.853377 io_submit(140055665958912, 66, [
        {data=..., pwrite, fildes=262, str="..."..., nbytes=1048576, offset=5254478848},
        {data=..., pwrite, fildes=262, str="..."..., nbytes=285696, offset=5255527424},
        {data=..., pwrite, fildes=262, str="..."..., nbytes=1048576, offset=5255813120},
        {data=..., pwrite, fildes=262, str="..."..., nbytes=195072, offset=5256861696},
        {data=..., pwrite, fildes=262, str="..."..., nbytes=1048576, offset=5257056768},
        {data=..., pwrite, fildes=262, str="..."..., nbytes=843264, offset=5258105344},
        {data=..., pwrite, fildes=262, str="..."..., nbytes=1048576, offset=5258948608},
        {data=..., pwrite, fildes=262, str="..."..., nbytes=1048576, offset=5259997184},
        {data=..., pwrite, fildes=262, str="..."..., nbytes=407040, offset=5261045760},
        ...
    ...]) = 66 <0.120894>
    
  • Using the io_submit.stp systemtap script (attached at the bottom of the article), capture the slow io_submit and look for the dio_wait calls. In order to use the io_submit.stp script, the system needs to be prepared according to: What is SystemTap and how to use it?
    • Save the attached script as io_submit.stp. A copy of the script is also included in the systemtap-client package, at /usr/share/systemtap/examples/io/io_submit.stp once the package is installed, and it can also be found on the upstream SystemTap pages.
    • Compile and run the script using stap -g -d xfs -d kernel io_submit.stp
    • Run long enough to capture a few events.
    • Interrupt with CTRL+c. The script will only output data after stopping with CTRL+c.
    Example output from the io_submit.stp script. This output reveals that 218943+7630 io_submit calls have been waiting in inode_dio_wait called from xfs_file_dio_aio_write. This is the vast majority of calls. This code path matches the changes introduced by commit ce7cfcb2740 [fs] xfs: serialize unaligned dio writes against all other dio writes:
  • WARNING: Missing unwind data for a module, rerun with 'stap -d (unknown; retry with -DDEBUG_UNWIND)'
    WARNING: too many pending (warning) messages
    Ready!
    WARNING: Missing unwind data for a module, rerun with 'stap -d kernel'
    ^C218943: 0xffffffffae185d60 : schedule+0x0/0x70 [kernel]
     0xffffffffadc6ab49 : inode_dio_wait+0xd9/0x100 [kernel]
     0xffffffffc070c698 : xfs_file_dio_aio_write+0x2a8/0x390 [xfs]
     0xffffffffc070cb42 : xfs_file_aio_write+0x102/0x1b0 [xfs]
     0xffffffffadca4e13 : do_io_submit+0x3e3/0x8a0 [kernel]
     0xffffffffadca52e0 : sys_io_submit+0x10/0x20 [kernel]
     0xffffffffae18b459 : kretprobe_trampoline+0x0/0x57 [kernel]
     0x7fffffff7ef0 : 0x7fffffff7ef0
    7630: 0xffffffffae185d60 : schedule+0x0/0x70 [kernel]
     0xffffffffadc6ab49 : inode_dio_wait+0xd9/0x100 [kernel]
     0xffffffffc070c710 : xfs_file_dio_aio_write+0x320/0x390 [xfs]
     0xffffffffc070cb42 : xfs_file_aio_write+0x102/0x1b0 [xfs]
     0xffffffffadca4e13 : do_io_submit+0x3e3/0x8a0 [kernel]
     0xffffffffadca52e0 : sys_io_submit+0x10/0x20 [kernel]
     0xffffffffae18b459 : kretprobe_trampoline+0x0/0x57 [kernel]
     0x7fffffff7ef0 : 0x7fffffff7ef0
    511: 0xffffffffae185d60 : schedule+0x0/0x70 [kernel]
     0xffffffffae1874c5 : rwsem_down_write_failed+0x215/0x3c0 [kernel]
     0xffffffffadd96677 : call_rwsem_down_write_failed+0x17/0x30 [kernel]
     0xffff9bfd7d3bfd10 : 0xffff9bfd7d3bfd10
     0xffffffffc0719070 : xfs_ilock+0xa0/0x120 [xfs]
     0xffffffffc070c4a7 : xfs_file_dio_aio_write+0xb7/0x390 [xfs]
     0xffffffffc070cb42 : xfs_file_aio_write+0x102/0x1b0 [xfs]
     0xffffffffadca4e13 : do_io_submit+0x3e3/0x8a0 [kernel]
     0xffffffffadca52e0 : sys_io_submit+0x10/0x20 [kernel]
     0xffffffffae18b459 : kretprobe_trampoline+0x0/0x57 [kernel]
    110: 0xffffffffae185d60 : schedule+0x0/0x70 [kernel]
     0xffffffffadc6ab49 : inode_dio_wait+0xd9/0x100 [kernel]
     0xffffffffc070c698 : xfs_file_dio_aio_write+0x2a8/0x390 [xfs]
     0xffffffffc070cb42 : xfs_file_aio_write+0x102/0x1b0 [xfs]
     0xffffffffadca4e13 : do_io_submit+0x3e3/0x8a0 [kernel]
     0xffffffffadca52e0 : sys_io_submit+0x10/0x20 [kernel]
     0xffffffffae18b459 : kretprobe_trampoline+0x0/0x57 [kernel]
     0x7fffffff7790 : 0x7fffffff7790
    9: 0xffffffffae185d60 : schedule+0x0/0x70 [kernel]
     0xffffffffadc6ab49 : inode_dio_wait+0xd9/0x100 [kernel]
     0xffffffffc070c710 : xfs_file_dio_aio_write+0x320/0x390 [xfs]
     0xffffffffc070cb42 : xfs_file_aio_write+0x102/0x1b0 [xfs]
     0xffffffffadca4e13 : do_io_submit+0x3e3/0x8a0 [kernel]
     0xffffffffadca52e0 : sys_io_submit+0x10/0x20 [kernel]
     0xffffffffae18b459 : kretprobe_trampoline+0x0/0x57 [kernel]
     0x7fffffff7790 : 0x7fffffff7790
    5: 0xffffffffae185d60 : schedule+0x0/0x70 [kernel]
     0xffffffffae187775 : rwsem_down_read_failed+0x105/0x1c0 [kernel]
     0xffffffffadd96648 : call_rwsem_down_read_failed+0x18/0x30 [kernel]
     0xffff9bfd7d3bf9e0 : 0xffff9bfd7d3bf9e0
     0xffffffffc07190a9 : xfs_ilock+0xd9/0x120 [xfs]
     0xffffffffc0719120 : xfs_ilock_data_map_shared+0x30/0x40 [xfs]
     0xffffffffc06ff7ea : __xfs_get_blocks+0x9a/0x6d0 [xfs]
     0xffffffffc0700ff4 : xfs_get_blocks_direct+0x14/0x20 [xfs]
     0xffffffffadc917f6 : do_blockdev_direct_IO+0xf96/0x20a0 [kernel]
     0xffffffffadc92955 : __blockdev_direct_IO+0x55/0x60 [kernel]
     0xffffffffc070c570 : xfs_file_dio_aio_write+0x180/0x390 [xfs]
     0xffffffffc070cb42 : xfs_file_aio_write+0x102/0x1b0 [xfs]
     0xffffffffadca4e13 : do_io_submit+0x3e3/0x8a0 [kernel]
     0xffffffffadca52e0 : sys_io_submit+0x10/0x20 [kernel]
     0xffffffffae18b459 : kretprobe_trampoline+0x0/0x57 [kernel]
    2: 0xffffffffae185d60 : schedule+0x0/0x70 [kernel]
     0xffffffffadc6ab49 : inode_dio_wait+0xd9/0x100 [kernel]
     0xffffffffc070c698 : xfs_file_dio_aio_write+0x2a8/0x390 [xfs]
     0xffffffffc070cb42 : xfs_file_aio_write+0x102/0x1b0 [xfs]
     0xffffffffadca4e13 : do_io_submit+0x3e3/0x8a0 [kernel]
     0xffffffffadca52e0 : sys_io_submit+0x10/0x20 [kernel]
     0xffffffffae18b459 : kretprobe_trampoline+0x0/0x57 [kernel]
     0x7fffffff7da0 : 0x7fffffff7da0
    1: 0xffffffffae185d60 : schedule+0x0/0x70 [kernel]
     0xffffffffadc6ab49 : inode_dio_wait+0xd9/0x100 [kernel]
     0xffffffffc070c710 : xfs_file_dio_aio_write+0x320/0x390 [xfs]
     0xffffffffc070cb42 : xfs_file_aio_write+0x102/0x1b0 [xfs]
     0xffffffffadca4e13 : do_io_submit+0x3e3/0x8a0 [kernel]
     0xffffffffadca52e0 : sys_io_submit+0x10/0x20 [kernel]
     0xffffffffae18b459 : kretprobe_trampoline+0x0/0x57 [kernel]
     0x7fffffff7da0 : 0x7fffffff7da0
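
Returning to the strace step above: a minimal sketch of attaching strace to the log writer and sanity-checking the alignment of the captured offsets. The PID (347496) is taken from the sample output above; substitute your own, and adjust 4096 if the filesystem uses a different block size.

    strace -yy -T -tt -f -e trace=io_submit -p 347496 -o /tmp/lgwr.strace

    # a non-zero remainder means the offset/size is not aligned to the filesystem block size
    for v in 5254478848 285696 5255527424 195072; do
        echo "$v % 4096 = $((v % 4096))"
    done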
    

 

 

https://access.redhat.com/solutions/5906661


Hello. Following up on the previous post, I tested a configuration for referencing an Oracle database from Informix.

As before, I used Enterprise Gateway Manager. However, when I tried to reference the Oracle database while connected to the Informix database, a 908 error occurred and the connection failed, so this time I tried the test with EGM version 7.31.UD3.

 

1. Test environment

① O/S: CentOS 7

② Clients: Informix Server 12.10, Informix Server 14.10

③ Target databases: Oracle 11gR2, SQL Server 2017 on Linux

④ EGM version: 7.31.UD3

 

2. Environment variable settings

Because a separate EGM instance has to be started, I modified INFORMIXSERVER and INFORMIXDIR.

When Oracle is the target, the documentation says to set the ORACLE_HOME environment variable in egmenv.sh.

In fact, when using ODBC, only the tnsnames.ora file needs to be specified and no extra libraries are required, so the connection worked even without ORACLE_HOME set. This means a separate Oracle client does not have to be installed, at least for this basic test; there may still be cases where an installation is needed. A sketch of the variables involved follows.
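
A minimal sketch of the environment for the separate EGM instance; the paths and server name are assumptions based on values that appear later in this post and must match your own installation.

export INFORMIXDIR=/work1/informix/egm
export INFORMIXSERVER=egm731ud3
export PATH=$INFORMIXDIR/bin:$PATH
export LD_LIBRARY_PATH=$INFORMIXDIR/egm/odbc/lib:$LD_LIBRARY_PATH
# only needed if the Oracle client libraries are actually used:
# export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1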

 

3. Editing the odbc.ini file

I configured it by referring to the ODBC driver and odbc.ini file from EGM 7.31.UD4. In the egmenv.sh script, the ODBC driver manager was set to reference the 7.31.UD3 libraries (LD_LIBRARY_PATH=$INFORMIXDIR/egm/odbc/lib).

Then set TNSNamesFile to the path of the tnsnames.ora file that holds the connection information for the target Oracle server.

[oracle]
Driver=/work1/informix/egm/egm/odbc/lib/IXora27.so
Description=DataDirect 7.1 Oracle
AlternateServers=
ApplicationUsingThreads=1
ArraySize=7.100
CatalogIncludesSynonyms=1
CatalogOptions=0
ClientVersion=9iR2
ConnectionRetryCount=0
ConnectionRetryDelay=3
DefaultLongDataBuffLen=1024
DescribeAtPrepare=0
EnableDescribeParam=0
EnableNcharSupport=0
EnableScrollableCursors=1
EnableStaticCursorsForLongData=0
EnableTimestampWithTimeZone=0
LoadBalancing=0
LocalTimeZoneOffset=
LockTimeOut=-1
LogonID=
OptimizeLongPerformance=0
Password=
ProcedureRetResults=0
ReportCodePageConversionErrors=0
ServerName=test
TimestampEscapeMapping=0
TNSNamesFile=/work1/informix/egm/tnsnames.ora
UseCurrentSchema=1

4. SQL execution test

Let's pull the emp table from the scott sample schema in Oracle into the Informix database.

$ export CLIENT_LOCALE=en_us.819
$ export DB_LOCALE=en_us.819
$ echo "create table emp as select * from oracle@egm731ud3:\"SCOTT\".emp" | dbaccess stores_demo
Database selected.
14 row(s) retrieved into table.
Database closed.
[informix@db2 1210FC12W1XC]$ echo "select * from emp" | dbaccess stores_demo
Database selected.
 empno ename      job          mgr hiredate                  sal      comm deptno
  7369 SMITH      CLERK       7902 1980-12-17 00:00:00    800.00               20
  7566 JONES      MANAGER     7839 1981-04-02 00:00:00   2975.00               21
  7782 CLARK      MANAGER     7839 1981-06-09 00:00:00   2450.00               11
  7844 TURNER     SALESMAN    7698 1981-09-08 00:00:00   1500.00      0.00     30
  7902 FORD       ANALYST     7566 1981-12-03 00:00:00   3000.00               20
  7499 ALLEN      SALESMAN    7698 1981-02-20 00:00:00   1600.00    300.00     30
  7654 MARTIN     SALESMAN    7698 1981-09-28 00:00:00   1250.00   1400.00     30
  7788 SCOTT      ANALYST     7566 1987-04-19 00:00:00   3000.00               20
  7876 ADAMS      CLERK       7788 1987-05-23 00:00:00   1100.00               20
  7934 MILLER     CLERK       7782 1982-01-23 00:00:00   1300.00               10
  7521 WARD       SALESMAN    7698 1981-02-22 00:00:00   1250.00    500.00     30
  7698 BLAKE      MANAGER     7839 1981-05-01 00:00:00   2850.00               31
  7839 KING       PRESIDENT        1981-11-17 00:00:00   5000.00               10
  7900 JAMES      CLERK       7698 1981-12-03 00:00:00    950.00               31

I also ran an inner join between an Informix table and an Oracle table and checked the execution plan, which is quite interesting: detailed information is shown only for the Informix table, while the Oracle table is reported as a REMOTE PATH.

$ echo "set explain on ;select b.* from emp a, oracle@egm731ud3:\"SCOTT\".emp b where a.empno = b.empno and a.empno = 7369" | dbaccess stores_demo
Database selected.
Explain set.
 empno ename      job          mgr hiredate                  sal      comm deptno
  7369 SMITH      CLERK       7902 1980-12-17 00:00:00    800.00               20
1 row(s) retrieved.
Database closed.
$  cat sqexplain.out
QUERY: (OPTIMIZATION TIMESTAMP: 09-14-2020 17:33:20)
------
select b.* from emp a, oracle@egm731ud3:"SCOTT".emp b where a.empno = b.empno and a.empno = 7369
Estimated Cost: 6
Estimated # of Rows Returned: 1
  1) informix.a: SEQUENTIAL SCAN  (Serial, fragments: ALL)
  2) informix.b: REMOTE PATH
    REMOTE SESSION ID FOR 'egm731ud3' is UNKNOWN
    Remote SQL Request:
    select x0.empno ,x0.ename ,x0.job ,x0.mgr ,x0.hiredate ,x0.sal ,x0.comm ,x0.deptno from oracle:"SCOTT".emp x0 where ((? = x0.empno ) AND (x0.empno = 7369 ) )                            
NESTED LOOP JOIN
Query statistics:
-----------------
  Table map :
  ----------------------------
  Internal name     Table name
  ----------------------------
  t1                a
  type     table  rows_prod  est_rows  rows_scan  time       est_cost
  -------------------------------------------------------------------
  scan     t1     14         1         14         00:00.00   4
  type     rows_prod  est_rows  time       est_cost
  -------------------------------------------------
  remote   1          1         00:00.01   0
  type     rows_prod  est_rows  time       est_cost
  -------------------------------------------------
  nljoin   1          1         00:00.01   7

Inner joins against SQL Server also ran fine. Since this is a test environment I cannot rule out that the ODBC libraries got somewhat tangled, but execution was at least confirmed. Getting the ODBC driver manager and drivers configured properly seems to be the most important part. Documents on the internet also show setups using unixODBC together with the ODBC drivers supplied by each database, but since recompiling the libraries was a hassle, I tested with the libraries provided by default. I recommend trying this on a clean machine.


I looked into how to remove special characters from a string. Invisible characters such as newlines and tabs are sometimes called 'nonprintable characters'. In Informix, one way to inspect such characters is to unload the data to a file. The examples below use a small test table, sketched first.
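
A minimal sketch of the table used in the examples; the column name a and its type are assumptions, not taken from the original post.

> create table test (a varchar(100));
Table created.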

> insert into test values ('a'||chr(10)||'bcd'||chr(9)||'efg');
1 row(s) inserted.
> select * from test;
a  a
bcd     efg
$ echo "unload to test select * from test" | dbaccess -X kr_iis
Database selected.
1 row(s) unloaded.
Database closed
$ cat test
a\0abcd\09efg|

Using the -X option of the dbaccess utility displays nonprintable characters in hexadecimal format.

They are not visible on screen; they show up when the data is unloaded to a file with UNLOAD.

 

You could substitute each special character one by one with REPLACE, but the REGEX_REPLACE function lets you do it simply, as shown below. Its usage is the same as Oracle's REGEXP_REPLACE. (The range here is chr(1) through chr(31).)

> SELECT REGEX_REPLACE (A, '['||CHR(1)||'-'||CHR(31)||']','') FROM TEST;
(expression)  abcdefg

 

To split the range more finely, it can also be used as follows. (chr(1) through chr(9), and chr(11) through chr(31).)

> SELECT REGEX_REPLACE (A, '['||CHR(1)||'-'||CHR(9)||']|['||CHR(11)||'-'||CHR(31)||']','') FROM TEST;
(expression)  a
bcdefg

 

The usage is based on the sites below.

https://www.ascii-code.com/

http://mrrame.blogspot.com/2010/05/oracle-how-to-replace-non-printable.html


Hello. For a work task, I looked into how to fetch SEQUENCE values from a remote database server.

Rather than running the query directly, I first tried using a VIEW.

An 8319 error occurred, as shown below.

 

Error raised when a VIEW references a sequence directly

Looking up error 8319 with the finderr command shows that the CURRVAL and NEXTVAL keywords cannot be used in a VIEW definition.

 

Db2 and Oracle have the same restriction. In any case, to work around it I created a user-defined function and then a VIEW that calls that function. The VIEW was created successfully and queries against it ran fine; a sketch of the idea follows.
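
A minimal sketch of the workaround, not the exact code from the original post: the remote database and server (remotedb@remotesrv), the sequence name (seq_test), and the object names are placeholders.

$ dbaccess stores_demo <<'EOF'
CREATE FUNCTION get_remote_seq() RETURNING BIGINT;
    DEFINE v BIGINT;
    -- fetch the next value of the remote sequence
    SELECT remotedb@remotesrv:seq_test.NEXTVAL INTO v
      FROM systables WHERE tabid = 1;
    RETURN v;
END FUNCTION;

CREATE VIEW v_remote_seq (next_val) AS
    SELECT get_remote_seq() FROM systables WHERE tabid = 1;
EOF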

 

I am not sure whether this is an ANSI-standard restriction or exactly why a sequence cannot be used directly in a VIEW definition.

I will update this post if I find out the reason.

 

Reference: https://www.jamescoyle.net/how-to/2872-oracle-ora-02287-sequence-number-not-allowed-here

 



Hello. I tested querying a Microsoft Access (mdb) file from an Oracle database.

The test environment is Windows 10 64-bit with Oracle Database 11gR2 64-bit.


First, the Database Gateway must be installed. If it was not excluded from the default Oracle installation, it is already available.


Since mdb files can be accessed through ODBC, I configured DG4ODBC (called HSODBC in 10g).

For reference, according to Oracle Document 232482.1, the Database Gateway for ODBC does not require a separate license.

Below is an excerpt from that document.


The purpose of this note is to clarify the licensing considerations for the Oracle Database Gateway products, such as ODBC, SQL Server, Sybase, DRDA, and more.
The licensing applies to both Enterprise and Standard RDBMS editions.

The Oracle Database Gateway for ODBC (DG4ODBC) is included in your Oracle Database Server license without extra cost. If DG4ODBC is installed on a different machine from where the Oracle Database Server is installed then it is still covered by the Oracle Database Server license and that install of DG4ODBC does not need a separate license.



The Oracle database was installed entirely with default settings, and only the configuration files below were modified.


1. %ORACLE_HOME%\network\admin\tnsnames.ora


2. %ORACLE_HOME%\network\admin\listener.ora


3. %ORACLE_HOME%\hs\admin\initdg4odbc.ora


4. The ODBC data source definition screen: select the sample.mdb file and click OK.


5. The screen where a database link is created for the sample ODBC data source and SQL is executed against it. The screenshots are not reproduced here; a sketch of the typical configuration follows.
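
A minimal sketch of what these files typically contain for DG4ODBC; the gateway SID (dg4odbc), ODBC DSN name (sample), Oracle home path, link name, and credentials are assumptions, not the exact values used in this test.

# initdg4odbc.ora -- point the gateway at the ODBC data source
HS_FDS_CONNECT_INFO = sample
HS_FDS_TRACE_LEVEL = off

# listener.ora -- static registration of the gateway
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = dg4odbc)
      (ORACLE_HOME = C:\app\oracle\product\11.2.0\dbhome_1)
      (PROGRAM = dg4odbc)
    )
  )

# tnsnames.ora -- alias used by the database link (note HS = OK)
DG4ODBC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SID = dg4odbc))
    (HS = OK)
  )

-- database link and a test query, run from SQL*Plus
CREATE DATABASE LINK mdb_link CONNECT TO "admin" IDENTIFIED BY "password" USING 'DG4ODBC';
SELECT * FROM "Customers"@mdb_link;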


I tried this on Linux first, but I have not yet found a way to resolve the errors there, so I tried it on Windows, where it worked relatively easily.



References:

Database Gateway and Generic Connectivity (DG4ODBC) Licensing Considerations (Doc ID 232482.1)

https://docs.oracle.com/cd/E11882_01/gateways.112/e12013/configodbc.htm#OTGIS110

https://stackoverflow.com/questions/24081001/how-to-query-an-external-ms-access-db-from-an-oracle-db



When creating a DB Alias for Oracle 12c in Optim 11.3, certain database privileges are required.

The manual states that the following privileges are needed by default.

This worked correctly on Oracle 12cR1; 12cR2 still needs to be tested.


CREATE PROCEDURE
CREATE TABLE
CREATE VIEW
CREATE SESSION
UNLIMITED TABLESPACE
SELECT ANY DICTIONARY
SELECT on SYS.USER$ TO oracle_user
SELECT on SYS.ENC$ TO oracle_user


According to a technote, the following privileges may additionally be required depending on the operations performed. A sketch of the corresponding GRANT statements follows the list.


CREATE SEQUENCE
CREATE SYNONYM
CREATE TRIGGER
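
A minimal sketch of the corresponding GRANT statements; the account name optim_user is a placeholder for the actual Optim connection user.

$ sqlplus / as sysdba <<'EOF'
-- privileges listed in the manual
GRANT CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, CREATE SESSION,
      UNLIMITED TABLESPACE, SELECT ANY DICTIONARY TO optim_user;
GRANT SELECT ON SYS.USER$ TO optim_user;
GRANT SELECT ON SYS.ENC$ TO optim_user;
-- additional privileges that may be needed depending on the operation
GRANT CREATE SEQUENCE, CREATE SYNONYM, CREATE TRIGGER TO optim_user;
EOF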


https://www.ibm.com/support/knowledgecenter/en/SSMLNW_11.3.0/com.ibm.nex.optimd.install.doc/01cgintr/opinstall-r-required_database_permissions.html#wp118661

http://www-01.ibm.com/support/docview.wss?uid=swg21686313


This is an example of taking image data larger than 4 KB that exists as a hex string and loading it into an Oracle table as a BLOB.

First, create a function like the one below. The function is based on a Stack Overflow post.

It provides only basic functionality, so exception handling may need to be added.


create or replace function hextoblob(data in clob) return blob
is
    v_blob   blob;
    v_start  pls_integer := 1;
    v_buffer pls_integer := 4000;  -- hex characters per chunk (2000 bytes of raw data)
begin
    dbms_lob.createtemporary(v_blob, true);

    -- walk the CLOB in 4000-character chunks, converting each chunk to RAW
    -- and appending it to the temporary BLOB
    for i in 1..ceil(dbms_lob.getlength(data) / v_buffer)
    loop
        dbms_lob.append(v_blob, hextoraw(dbms_lob.substr(data, v_buffer, v_start)));
        v_start := v_start + v_buffer;
    end loop;

    return v_blob;
end;
/


The example above does not handle NULL values; see the link below for related details.


Before the conversion, load the hex string data into a CLOB column.

$ sqlplus scott/tiger

SQL> create table img (img clob);

Table created.

$ cat img.ctl     << SQL*Loader control file

load data
infile '/work2/INFORMIX/1150FC9W3/HDR_PRI/unload/img.unl'
append
into table img
( IMG CHAR(1000000) )

$ sqlldr userid=scott/tiger control='./img.ctl'

SQL*Loader: Release 11.2.0.1.0 - Production on Tue Nov 22 10:51:25 2016

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Commit point reached - logical record count 1

 

Then the loaded hex string is cut into 4000-character pieces and converted with the hextoraw function.

Because hextoraw only accepts the CHAR, VARCHAR2, NCHAR, and NVARCHAR2 types, the data has to be processed in chunks.

SQL> create table test_lob (img blob);

Table created.

SQL> insert into test_lob select hextoblob(img) from img;

1 row created.
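
As a quick sanity check (not part of the original post), the loaded BLOB should be half as many bytes as the source CLOB has hex characters:

SQL> select dbms_lob.getlength(img) as blob_bytes from test_lob;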

 


Reference:

http://stackoverflow.com/questions/8549138/oracle-rawtohex-function-what-happens-if-return-value-exceeds-varchar2-limit


Problem(Abstract)

An 'SQL0433N Value "XXXX" is too long' error is returned when the TIMESTAMPDIFF scalar function is run against a database with Oracle compatibility enabled.

Symptom

SQL0433N Value "XXXX" is too long. SQLSTATE=22001

Cause

Subtraction of timestamp values returns DECFLOAT(34), representing the difference as a number of days. As a result, you cannot use the TIMESTAMPDIFF scalar function, because it depends on the default behavior of timestamp subtraction. Similarly, subtraction of date values returns DECFLOAT(34), which also represents a number of days, because date values are really TIMESTAMP(0). For more details on the functions that change under date_compat mode, see the 'DATE data type based on TIMESTAMP(0)' topic at the Related URL below.


Environment

Environments where databases are created with Oracle compatibility mode enabled.

Diagnosing the problem

The following test case can help you determine if you are encountering the problem:

  1. Set the DB2_COMPATIBILITY_VECTOR registry variable as below to enable all of the supported Oracle compatibility features:

    db2set DB2_COMPATIBILITY_VECTOR=ORA
    db2stop
    db2start
     
  2. Create the oratest database for testing purposes:

    db2 create db oratest
    DB20000I  The CREATE DATABASE command completed successfully. 

  3. Running the TIMESTAMPDIFF scalar function returns the SQL0433N error shown below:

    db2 "values(TIMESTAMPDIFF(4,CHAR(TIMESTAMP('2001-09-29-11.25.42.483219')-TIMESTAMP('2001-09-26-12.07.58.065497'))))"

    1
    -----------
    SQL0433N  Value "2.970652982893518518518518518518519" is too long. SQLSTATE=22001

Resolving the problem

This is a restriction of the Oracle compatibility mode. The tradeoffs need to be compared to determine whether Oracle compatibility mode is more important than the use of these functions. One of the solutions below can be used as a workaround when the database has been created with Oracle compatibility mode enabled:



o  Reduce the precision of the timestamps so that the character form of the difference is shorter:

    db2 "values(TIMESTAMPDIFF(4,CHAR(TIMESTAMP('2010-04-01-09.43.05')-TIMESTAMP('2010-05-01-09.43.05'))))"
      1
      -----------
       
                0
        1 record(s) selected.


o  Multiply the DECFLOAT result to get the units you are interested in.  
    db2 "values ( TIMESTAMP('2001-09-29-11.25.42.483219')-TIMESTAMP('2001-09-26-12.07.58.065497') ) /* days */ * 24 /* hours per day */ * 60 /* minutes per hour */"
      1                                         
      ------------------------------------------
             4277.740295366666666666666666666668

        1 record(s) selected.


http://www-01.ibm.com/support/docview.wss?uid=swg21567168



Problem(Abstract)

After installing the "Critical Patch Update October 2013 for Solaris 11" (ORACLE SOLARIS 11.1.12.5.0 REPO ISO IMAGE) on your Oracle Solaris 11 system, your Informix Server instance may fail to start.

Symptom

During Informix instance startup you will see the following types of error messages in the instance message log, and the instance will not come online:

13:23:26  Assert Failed: No Exception Handler
13:23:26  IBM Informix Dynamic Server Version 11.70.FC7W2
13:23:26   Who: Thread(134, kaio, 0, 16)
  File: mtex.c Line: 490
13:23:26   Results: Exception Caught. Type: MT_EX_OS, Context: mem
13:23:26   Action: Please notify IBM Informix Technical Support.
13:23:26   See Also: /tmp/af.46e0eae, shmem.46e0eae.0

The af.<xxxxxx> file will contain an empty stack trace of the kaio thread; the si_signo value of 11 indicates the SEGV signal received by the oninit process:

13:23:26  Stack for thread: 134 kaio

 base: 0x0000000146f02000
  len:   69632
   pc: 0x0000000101103350
  tos: 0x00000001019ff751
state: running
   vp: 22

 ucontext: 0x0000000101a00a20
  siginfo: 0x0000000101a00d00
           si_signo: 11  si_code: 1   si_errno: 0 si_pid: 0 si_uid: 0
           si_addr: 0x0000000001057d68 si_value: 0x0000000000000000  si_fd: 0


Cause

One of the possible causes of the problem is the following Oracle Solaris defect:


15815250 : SUNBT7197575 AUTOMOUNTD FAILS TO MOUNT LDAP USERS HOME DIR on SOLARIS 10 BRANDED

Resolving the problem

Contact Oracle Solaris support and ask them for the following fix:


IDR-fix idr781.1 for Solaris 11.1.8.4



http://www-01.ibm.com/support/docview.wss?uid=swg21655799&myns=swgimgmt&mynp=OCSSGU8G&mync=E


When installing the Oracle 11gR2 client on Windows 7 64-bit, the installer window sometimes disappears after the prerequisite checks run.

This can happen because a 32-bit JRE is installed on the system.

It can be resolved by specifying the path of a 64-bit JRE or JDK at install time; alternatively, you could install the 32-bit client.

 

On Linux/Unix:

setup -jreloc 

 

On Windows:

setup.exe -jreloc C:\Oracle\Middleware\jdk160_18\jre

 

 

References:

http://blog.naver.com/PostList.nhn?blogId=lsv400

https://forums.oracle.com/forums/thread.jspa?threadID=2190825

