
TAC Raw Device Setup (3 Nodes)

by 취미툰 · Aug 30, 2020

Tibero has TAC, a concept similar to Oracle's RAC that improves availability and stability. This post is a write-up of a test I ran building TAC on a raw-device configuration.

 

Connection and server information

               REAL IP        VIP            Interconnect
Node 1 (TAC1)  10.10.47.141   10.10.47.142   50.0.0.141
Node 2 (TAC2)  10.10.47.143   10.10.47.144   50.0.0.143
Node 3 (TAC3)  10.10.47.145   10.10.47.146   50.0.0.145

Tibero installation file

tibero6-bin-FS06_CS_1902-linux64-174562-opt-20200219164536-tested.tar.gz

 

Raw device layout

This may differ depending on your environment; here the volumes were partitioned with LVM.

 

controlfile  100M
system       500M
USER         500M
TEMP         500M
UNDO         500M
REDO11       100M
REDO12       100M
REDO13       100M
UNDO2        500M
REDO21       100M
REDO22       100M
REDO23       100M
UNDO3        500M
REDO31       100M
REDO32       100M
REDO33       100M
Cluster      500M

 

1. Check OS kernel parameters (on nodes 1, 2, and 3)

- Kernel shared memory and related settings

#cat /etc/sysctl.conf

kernel.shmall = 16777216

kernel.shmmax = 68719476736

kernel.shmmni = 4096

kernel.sem = 10000 32000 10000 10000

net.ipv4.ip_local_port_range = 1024 65000

fs.file-max = 6815744

 

- Shell limit parameters

# cat /etc/security/limits.conf

 

*     soft        nofile           10240
*     soft        nproc            20470
*     hard        nofile           65536
*     hard        nproc            65536
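
A quick way to apply and verify these settings without rebooting (standard Linux commands; sysctl must be run as root, and limits.conf values only apply to new login sessions):

# sysctl -p                   # reload /etc/sysctl.conf
# sysctl kernel.shmmax        # spot-check an individual value
$ ulimit -Sn; ulimit -Hn      # soft/hard nofile for the current shell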

 

2. Upload and extract the Tibero binary (on nodes 1, 2, and 3)

$ tar xvf tibero6-bin-FS06_CS_1902-linux64-174562-opt-20200219164536-tested.tar.gz

 

3. Upload the license file (on nodes 1, 2, and 3)

license.xml

A temporary license can be downloaded from:

technet.tmaxsoft.com/ko/front/main/main.do

 


4. Configure .bash_profile (on nodes 1, 2, and 3)

 

$ vi .bash_profile

### Tibero 6 ENV ###

export TB_HOME=/DBMS/MEDP/TIBERO/TIBERO6

export TB_SID=TAC1   ## TAC2 on node 2, TAC3 on node 3

export TB_PROF_DIR=$TB_HOME/bin/prof

export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$PATH

export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH

 

export CM_HOME=/DBMS/MEDP/TIBERO/TIBERO6

export CM_SID=CM1   ## CM2 on node 2, CM3 on node 3

PS1='['$TB_SID':$PWD]$ '
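
After editing, reload the profile and confirm the variables on each node (plain shell, nothing Tibero-specific):

$ . ~/.bash_profile
$ echo $TB_SID $CM_SID       # expect TAC1 CM1 on node 1, and so on
$ which tbboot               # should resolve under $TB_HOME/bin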

 

 

5. Generate the .tip file (on nodes 1, 2, and 3)

$ sh $TB_HOME/config/gen_tip.sh

/DBMS/MEDP/TIBERO/TIBERO6/config/TAC1.tip generated

/DBMS/MEDP/TIBERO/TIBERO6/config/psm_commands generated

/DBMS/MEDP/TIBERO/TIBERO6/client/config/tbdsn.tbr generated.

Running client/config/gen_esql_cfg.sh

Done.
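
The generated files can be inspected before moving on; gen_tip.sh names them after $TB_SID:

$ cat $TB_HOME/config/$TB_SID.tip
$ cat $TB_HOME/client/config/tbdsn.tbr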

 

 

 

6. Edit the .tip file (on nodes 1, 2, and 3)

DB_NAME=TAC

LISTENER_PORT=4010

CONTROL_FILES="/dev/symraw_vg/rlv_scontrol1_raw_01g" 

#LOG_DEFAULT_DEST="/DBMS/MEDP/TIBERO/TBLOG"

MAX_SESSION_COUNT=1000

TOTAL_SHM_SIZE=8G

MEMORY_TARGET=16G

LOG_BUFFER=30M

LOG_ARCHIVE_DEST="/DBMS/MEDP/TBARCH"

 

 

 

#### TAC ENV #####

 

THREAD=0   ## 1 on node 2, 2 on node 3

UNDO_TABLESPACE=UNDO0   ## UNDO1 on node 2, UNDO2 on node 3 (created in step 11)

CLUSTER_DATABASE=Y

 

LOCAL_CLUSTER_ADDR=50.0.0.141   ## 50.0.0.143 on node 2, 50.0.0.145 on node 3

LOCAL_CLUSTER_PORT=11029

CM_PORT=11039

 

 

_MULTI_INSERT_MAX_NUM=100000

_PPC_MAX_CHUNK_SIZE=33554432

 

 

#####     ETC     #####

 

TPR_SNAPSHOT_RETENTION=31

 

##TUNING

GATHER_SQL_EXEC_TIME=Y

GATHER_SQL_PLAN_STAT=Y

_OPT_BIND_PEEKING=N

_INLINE_WITH_QUERY=N

_ADAPTIVE_CURSOR_SHARING=N

BOOT_WITH_AUTO_DOWN_CLEAN=Y
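
Only THREAD, UNDO_TABLESPACE, and LOCAL_CLUSTER_ADDR differ between the three nodes, so a hedged sketch for deriving node 2's tip (assuming node 1's TAC1.tip has already been copied over to node 2; node 3 is analogous):

$ sed -e 's/^THREAD=0/THREAD=1/' \
      -e 's/^UNDO_TABLESPACE=UNDO0/UNDO_TABLESPACE=UNDO1/' \
      -e 's/^LOCAL_CLUSTER_ADDR=50.0.0.141/LOCAL_CLUSTER_ADDR=50.0.0.143/' \
      TAC1.tip > $TB_HOME/config/TAC2.tip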

 

 

7. Create the CM .tip file (on nodes 1, 2, and 3)

$vi CM1.tip

 

CM_NAME=CM1   ## CM2 on node 2, CM3 on node 3

CM_UI_PORT=11039

CM_RESOURCE_FILE=/DBMS/MEDP/TIBERO/TIBERO6/config/cm1_res   ## cm2_res on node 2, cm3_res on node 3

 

#CM_WATCHDOG_EXPIRE=290

#CM_HEARTBEAT_EXPIRE=300

 

CM_WATCHDOG_EXPIRE=25

CM_HEARTBEAT_EXPIRE=30

 

CM_ENABLE_FAST_NET_ERROR_DETECTION=Y

 

CM_RESOURCE_FILE_BACKUP=/DBMS/MEDP/TIBERO/TIBERO6/config/cm1_res_bak   ## cm2_res_bak on node 2, cm3_res_bak on node 3

CM_RESOURCE_FILE_BACKUP_INTERVAL=1440
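
One thing worth double-checking: CM_UI_PORT here and CM_PORT in the DB tip from step 6 are both 11039, and they must agree, since the instance reaches its local CM through that port. A quick consistency check:

$ grep -h -E 'CM_PORT|CM_UI_PORT' $TB_HOME/config/$TB_SID.tip $TB_HOME/config/$CM_SID.tip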

 

 

8. Edit tbdsn.tbr (on nodes 1, 2, and 3)

$ vi /DBMS/MEDP/TIBERO/TIBERO6/client/config/tbdsn.tbr

## Name the local alias TAC2 on node 2 and TAC3 on node 3

TAC1=(
    (INSTANCE=(HOST=localhost)
        (PORT=4010)
        (DB_NAME=TAC)
    )
)

TAC=(
    (INSTANCE=(HOST=10.10.47.142)
        (PORT=4010)
        (DB_NAME=TAC)
    )
    (INSTANCE=(HOST=10.10.47.144)
        (PORT=4010)
        (DB_NAME=TAC)
    )
    (INSTANCE=(HOST=10.10.47.146)
        (PORT=4010)
        (DB_NAME=TAC)
    )
    (LOAD_BALANCE=Y)
    (USE_FAILOVER=Y)
)
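
Once the instances are up (step 13), the TAC alias can be smoke-tested from any node; with LOAD_BALANCE=Y, repeated connections should land on different instances. A minimal connectivity check:

$ tbsql sys/tibero@TAC
SQL> SELECT * FROM DUAL;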

 

9. Start TBCM (on nodes 1, 2, and 3)

$ tbcm -b

Logs can be checked in /DBMS/MEDP/TIBERO/TIBERO6/instance/CM1/log/trace_cm.log.

 

--<node #1> register networks

$ cmrctl add network --name inter1 --nettype private --ipaddr 50.0.0.141 --portno 21039

$ cmrctl add network --name pub1 --nettype public --ifname ens192

$ cmrctl add cluster --name cluster1 --incnet  inter1 --pubnet pub1 --cfile "/dev/symraw_vg/rlv_cluster00_05g"

$ cmrctl start cluster --name cluster1

$ cmrctl show

 

--<node #2> register networks

$ cmrctl add network --name inter2 --nettype private --ipaddr 50.0.0.143 --portno 21039

$ cmrctl add network --name pub2 --nettype public --ifname ens256

$ cmrctl add cluster --name cluster1 --incnet  inter2 --pubnet pub2 --cfile "/dev/symraw_vg/rlv_cluster00_05g"

$ cmrctl start cluster --name cluster1

$ cmrctl show

 

--<node #3> register networks

$ cmrctl add network --name inter3 --nettype private --ipaddr 50.0.0.145 --portno 21039

$ cmrctl add network --name pub3 --nettype public --ifname ens256

$ cmrctl add cluster --name cluster1 --incnet  inter3 --pubnet pub3 --cfile "/dev/symraw_vg/rlv_cluster00_05g"

$ cmrctl start cluster --name cluster1

$ cmrctl show
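
The three blocks above differ only in node number, interconnect IP, and interface name, so a parameterized form may be easier to keep consistent (a sketch; node 2 shown, with values taken from the tables at the top of this post):

$ N=2; IP=50.0.0.143; IF=ens256
$ cmrctl add network --name inter$N --nettype private --ipaddr $IP --portno 21039
$ cmrctl add network --name pub$N --nettype public --ifname $IF
$ cmrctl add cluster --name cluster1 --incnet inter$N --pubnet pub$N --cfile "/dev/symraw_vg/rlv_cluster00_05g"
$ cmrctl start cluster --name cluster1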

 

10. Register the TAC service & instances with CM (on nodes 1, 2, and 3)

 

--<node #1>

. .bash_profile

echo $TB_SID   -> confirm it prints TAC1

$ cmrctl add service --name TAC --cname cluster1 --type db

$ cmrctl add db --name TAC1 --svcname TAC --dbhome $TB_HOME --envfile $HOME/.bash_profile

 

--<node #2>

. .bash_profile

echo $TB_SID   -> confirm it prints TAC2

$ cmrctl add db --name TAC2 --svcname TAC --dbhome $TB_HOME --envfile $HOME/.bash_profile

 

--<node #3>

. .bash_profile

echo $TB_SID   -> confirm it prints TAC3

$ cmrctl add db --name TAC3 --svcname TAC --dbhome $TB_HOME --envfile $HOME/.bash_profile

 

$ cmrctl show

Resource List of Node CM1

=====================================================================

  CLUSTER     TYPE        NAME       STATUS           DETAIL        

----------- -------- -------------- -------- ------------------------

     COMMON  network         inter1       UP (private) 50.0.0.141/21039

     COMMON  network           pub1       UP (public) ens192

     COMMON  cluster       cluster1       UP inc: inter1, pub: pub1

   cluster1     file     cluster1:0       UP /dev/symraw_vg/rlv_cluster00_05g

   cluster1  service            TAC     DOWN Database, Active Cluster (auto-restart: OFF)

   cluster1       db           TAC1     DOWN TAC, /DBMS/MEDP/TIBERO/TIBERO6, failed retry cnt: 0

=====================================================================

### Verification command

[TAC1:/DBMS/MEDP/TIBERO]$ cmrctl show cluster --name cluster1 &&date

Cluster Resource Info

===============================================================

Cluster name      : cluster1

Status            : UP        

Master node       : (1) CM1

Last NID          : 3

Local node        : (1) CM1

Storage type      : NORMAL

No. of cls files  : 1

  (1) /dev/symraw_vg/rlv_cluster00_05g

===============================================================

|                          NODE LIST                          |

|-------------------------------------------------------------|

| NID   Name          IP/PORT       Status Schd Mst  FHB  NHB |

| --- -------- -------------------- ------ ---- --- ---- ---- |

|   1      CM1     50.0.0.141/21039     UP    Y   M [ LOCAL ] |

|   2      CM2     50.0.0.143/21039     UP    Y       29   34 |

|   3      CM3     50.0.0.145/21039     UP    Y       29   34 |

===============================================================

|                   CLUSTER RESOURCE STATUS                   |

|-------------------------------------------------------------|

|       NAME         TYPE    STATUS    NODE        MISC.      |

| ---------------- -------- -------- -------- --------------- |

|                        SERVICE: TAC                         |

|             TAC1       DB     DOWN      CM1                 |

|             TAC2       DB     DOWN      CM2                 |

|             TAC3       DB     DOWN      CM3                 |

===============================================================

Thu Aug 13 14:22:06 KST 2020

 

11. Create the database (run on node 1)

--<node #1>

. .bash_profile

echo $TB_SID   -> confirm it prints TAC1

tbboot nomount

 

tbsql sys/tibero@TAC1

 

CREATE DATABASE "TAC"

  USER sys IDENTIFIED BY tibero

  MAXDATAFILES 2048

  CHARACTER SET UTF8

  LOGFILE

      GROUP 0 ('/dev/symraw_vg/rlv_redo011_01g') SIZE 80M,

      GROUP 1 ('/dev/symraw_vg/rlv_redo021_01g') SIZE 80M,

      GROUP 2 ('/dev/symraw_vg/rlv_redo031_01g') SIZE 80M

  MAXLOGFILES 100

  MAXLOGMEMBERS 8

  ARCHIVELOG

    DATAFILE '/dev/symraw_vg/rlv_system001_05g' SIZE 450M AUTOEXTEND OFF

    DEFAULT TABLESPACE USR

      DATAFILE '/dev/symraw_vg/rlv_ts_usr_05g' SIZE 450M AUTOEXTEND OFF

    DEFAULT TEMPORARY TABLESPACE TEMP

      TEMPFILE '/dev/symraw_vg/rlv_temp001_05g' SIZE 450M AUTOEXTEND OFF

    UNDO TABLESPACE UNDO0

      DATAFILE '/dev/symraw_vg/rlv_undo001_05g' SIZE 450M AUTOEXTEND OFF;

 

 

-. After the DB boots, add undo and redo for node 2 (thread 1), run from node 1

tbboot

tbsql sys/tibero@TAC1

CREATE UNDO TABLESPACE UNDO1 datafile '/dev/symraw_vg/rlv_undo011_05g' SIZE 450M AUTOEXTEND OFF;

alter database add logfile thread 1 group 5 ('/dev/symraw_vg/rlv_redo111_01g') size 80M;

alter database add logfile thread 1 group 6 ('/dev/symraw_vg/rlv_redo121_01g') size 80M;

alter database add logfile thread 1 group 7 ('/dev/symraw_vg/rlv_redo131_01g') size 80M;

 

 

alter database enable public thread 1;

 

- Add undo and redo for node 3 (thread 2)

 

CREATE UNDO TABLESPACE UNDO2 datafile '/dev/symraw_vg/rlv_undo021_05g' SIZE 450M AUTOEXTEND OFF;

alter database add logfile thread 2 group 8 ('/dev/symraw_vg/rlv_redo211_01g') size 80M;

alter database add logfile thread 2 group 9 ('/dev/symraw_vg/rlv_redo221_01g') size 80M;

alter database add logfile thread 2 group 10 ('/dev/symraw_vg/rlv_redo231_01g') size 80M;

 

alter database enable public thread 2;
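
Each thread now has its own undo tablespace and three redo log groups. Assuming Tibero's Oracle-compatible dictionary views (an assumption on my part, not shown in the original run), this can be confirmed from tbsql:

$ tbsql sys/tibero@TAC1
SQL> SELECT TABLESPACE_NAME FROM DBA_TABLESPACES WHERE TABLESPACE_NAME LIKE 'UNDO%';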

 

12. Run system.sh

$ sh $TB_HOME/scripts/system.sh

                     -p1 password : sys password

                     -p2 password : syscat password

                     -a1 Y/N : create default system users & roles

                     -a2 Y/N : create system tables related to profile

                     -a3 Y/N : register dbms stat job to Job Scheduler

                     -a4 Y/N : create TPR tables

                     pkgonly : create psm built-in packages only

                     -sod Y/N : separation of duties
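
The flags map to the usage text above; a typical invocation might look like the following (the sys password matches the one used elsewhere in this post, and the syscat password is an assumed placeholder — substitute your own):

$ sh $TB_HOME/scripts/system.sh -p1 tibero -p2 syscat -a1 Y -a2 Y -a3 Y -a4 Y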

 

13. Boot the instances on nodes 2 and 3

An error occurred on node 2 when running tbboot:

 

[TAC2:/DBMS/MEDP/TIBERO]$ tbboot

Change core dump dir to /DBMS/MEDP/TIBERO/TIBERO6/bin/prof.

Listener port = 4010

 

********************************************************

* Critical Warning : Crash recovery failed due to

*   TBR-1024 :  Database needs media recovery: open failed(/DBMS/MEDP/TIBERO/TIBERO6/database/TAC/tpr_ts.dtf).  

*   Shutting down.

********************************************************

 

Cause: the TPR tablespace is required by default, but I skipped it and attempted the install without it.

Resolution: create the TPR tablespace from node 1 (statement below), then run tbboot on node 2 again.

 

 

create tablespace SYSSUB datafile '/dev/symraw_vg/rlv_tprts_05g' size 450m autoextend off;

 

 

14. Configure VIPs

 

--<node #1,2,3>
1) Stop the CM, TAS, and TAC services:

$ tbdown
$ tbcm -d

2) Restart CM (as root, shown per node below), register the VIP, then boot the instance:

# tbcm -b
$ tbboot

 

            

### Node 1 ###

 

su -

 

source /DBMS/MEDP/TIBERO/.bash_profile

 

tbcm -b

 

- vip add command

$ cmrctl add vip --name VIP1 --node CM1 --svcname TAC --ipaddr 10.10.47.142

 

### Node 2 ###

 

su -

 

source /DBMS/MEDP/TIBERO/.bash_profile

 

tbcm -b

 

- vip add command

$ cmrctl add vip --name VIP2 --node CM2 --svcname TAC --ipaddr 10.10.47.144

 

 

### Node 3 ###

 

su -

 

source /DBMS/MEDP/TIBERO/.bash_profile

 

tbcm -b

 

- vip add command

$ cmrctl add vip --name VIP3 --node CM3 --svcname TAC --ipaddr 10.10.47.146
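
After the VIPs are registered on all three nodes, the same cmrctl command used earlier should now list VIP1 through VIP3 alongside the network, cluster, and DB resources:

$ cmrctl show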

 

Installation complete.
