Thursday, November 28, 2019

Release/Shrink UNDO tablespace

  

Sometimes we run out of DB storage due to huge undo growth and want to shrink the undo tablespace to release space for other datafiles in the database. The best way to shrink an undo tablespace is to switch to a new undo tablespace and drop the old one.

Create New UNDOTBS2 tablespace

SQL> create undo tablespace UNDOTBS2 datafile '+DATAC1' size 500M;

Tablespace created.


SQL> ALTER SYSTEM SET UNDO_TABLESPACE = UNDOTBS2 SCOPE=BOTH;

System altered.

SQL>  select tablespace_name, status, count(*) from dba_rollback_segs group by tablespace_name, status;

TABLESPACE_NAME                STATUS             COUNT(*)
------------------------------ ---------------- ----------
SYSTEM                         ONLINE                    1
UNDOTBS1                       OFFLINE                  12
UNDOTBS2                       ONLINE                   10

If there are undo segments with a status other than OFFLINE in the tablespace to be dropped, we need to wait until they become OFFLINE.

Check for any segments that are not yet OFFLINE using the query below:

SQL>  select status,segment_name from dba_rollback_segs where status not in ('OFFLINE') and tablespace_name='UNDOTBS1';

no rows selected
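If the query does return rows, a sketch like the following can help identify which sessions still have open transactions pinning those undo segments (this uses the standard V$TRANSACTION and V$SESSION views; V$TRANSACTION.XIDUSN is the undo segment number, which joins to DBA_ROLLBACK_SEGS.SEGMENT_ID):

SQL> -- find sessions with open transactions still using undo
SQL> -- segments in the old tablespace (UNDOTBS1 here)
SQL> select s.sid, s.serial#, s.username, r.segment_name
       from v$transaction t
       join v$session s on s.taddr = t.addr
       join dba_rollback_segs r on r.segment_id = t.xidusn
      where r.tablespace_name = 'UNDOTBS1';

Once those transactions commit or roll back, the segments go OFFLINE and the drop can proceed.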

Since there are no active segments, we can safely drop the tablespace and release the space:

SQL> drop tablespace UNDOTBS1 including contents and datafiles;

Tablespace dropped.

SQL>  select tablespace_name, status, count(*) from dba_rollback_segs group by tablespace_name, status;

TABLESPACE_NAME                STATUS             COUNT(*)
------------------------------ ---------------- ----------
SYSTEM                         ONLINE                    1
UNDOTBS2                       ONLINE                   10
UNDOTBS3                       OFFLINE                  22

Now switch back to the original tablespace name by creating it again:

SQL> create undo tablespace UNDOTBS1 datafile '+DATAC1' size 500M;

Tablespace created.

SQL> ALTER SYSTEM SET UNDO_TABLESPACE =UNDOTBS1 SCOPE=BOTH;

System altered.

SQL>  select tablespace_name, status, count(*) from dba_rollback_segs group by tablespace_name, status;

TABLESPACE_NAME                STATUS             COUNT(*)
------------------------------ ---------------- ----------
SYSTEM                         ONLINE                    1
UNDOTBS1                       ONLINE                   15
UNDOTBS2                       OFFLINE                  10

Sometimes dropping the old tablespace may give ORA-30013: undo tablespace '%s' is currently in use. This error means active transactions are still using undo segments in that tablespace; wait for those transactions to commit or roll back, then retry the drop.

Also, according to the Oracle documentation:
On some platforms, disk space is not freed back to the OS until the database is restarted. The space will remain "allocated" from the OS perspective until the restart.



Friday, November 8, 2019

expdp ORA-01775: looping chain of synonyms



While running an export on my 12.1 database, I got the error below:

oracledev.domain.com:REPODB:/home/oracle $ expdp directory=STATSDIR dumpfile=REPODB_Triggers.dmp logfile=exp_REPODB_Triggers.log include=TRIGGER metrics=y schemas= Workers

Export: Release 12.1.0.2.0 - Production on Wed Oct 23 10:48:44 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
ORA-39001: invalid argument value
ORA-01775: looping chain of synonyms



FIX:


SQL> Select owner, object_name, object_type, status   from dba_objects  where object_name like '%SYS_EXPORT%';

OWNER
--------------------------------------------------------------------------------
OBJECT_NAME
--------------------------------------------------------------------------------
OBJECT_TYPE             STATUS
----------------------- -------
PUBLIC
SYS_EXPORT_SCHEMA_01
SYNONYM                 VALID

SYSTEM
SYS_EXPORT_TABLE_01
TABLE                   VALID




connect / as sysdba
drop public synonym sys_export_schema_01;

Now rerun the export.

(OR)

If you don't want to drop the synonym right now, specify an explicit job name ( job_name=<JOB_NAME> ) and rerun the export.
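For example, the same export with an explicit job name (MY_TRG_EXP is just an example name):

$ expdp directory=STATSDIR dumpfile=REPODB_Triggers.dmp logfile=exp_REPODB_Triggers.log include=TRIGGER metrics=y schemas=Workers job_name=MY_TRG_EXP

With an explicit job name, Data Pump names its master table after the job instead of generating a default name like SYS_EXPORT_SCHEMA_01, so the conflicting public synonym is never resolved.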



Reference:
DataPump Import Results In ORA-39001 Invalid Argument Value ORA-1775 Looping Chain Of Synonyms (Doc ID 459151.1)


IMPDP Job Hanging On STATISTICS/MARKER


DB Version : 12.1
OS Version : RHEL 6

While doing an import for one of the environments, my impdp job sat at the STATISTICS/MARKER step for a very long time, even though I was importing only a small set of data. The job status showed 100% done, with no movement at all, not even in the DB alert log.

The impdp job was stuck at "Processing object type SCHEMA_EXPORT/STATISTICS/MARKER".

After some research online, and according to Oracle, this is due to a known bug; the workaround/solution is described in:


Data pump Import Job Is Hanging On STATISTICS/MARKER (Doc ID 2135702.1)
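According to that note, a commonly used workaround is to exclude statistics from the import and gather them afterwards with DBMS_STATS. A sketch (directory, dump file, and schema names here are placeholders, not from the actual run):

$ impdp directory=STATSDIR dumpfile=mydump.dmp logfile=imp_mydump.log schemas=Workers exclude=statistics

Skipping STATISTICS during the import avoids the step that hangs; statistics can then be regathered on the imported objects afterwards.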


Sunday, May 12, 2019

Find table in trail file using Goldengate logdump utility



Check TABLE (CUST.SALES) in Trail file using LOGDUMP

cd  $GG_HOME

$./logdump
Logdump 1 >ghdr on
Logdump 2 >detail on
Logdump 3 >detail data
Logdump 4 >open /goldengate/home/dirdat/xxxx/CS000023799
Current LogTrail is /goldengate/home/dirdat/xxxx/CS000023799

Logdump 6 >FILTER INCLUDE FILENAME CUST.sales
Logdump 7 >count
Scanned     10000 records, RBA    7356269, 2019/05/03 14:44:45.000.000
Scanned     20000 records, RBA   14846486, 2019/05/03 14:46:49.000.000
Scanned     30000 records, RBA   23290032, 2019/05/03 14:56:03.000.000
Scanned     40000 records, RBA   32171981, 2019/05/03 15:04:39.000.000
Scanned     50000 records, RBA   41262061, 2019/05/03 15:09:51.000.000
Partial record at RBA 46133384
Format 550 record, total len = 3268
LogTrail /goldengate/home/xxxx/hp000023799 has 0 records
Filtering matched            0 records
          suppressed     56004 records

 Logdump 8 >n

RFS[11]: Client instance is standby database instead of primary


Server: RHEL6
Oracle DB version: 12.1

In my Primary Alert log i'm seeing RFS messages writing frequently like below

Primary alert log :

RFS[10]: Assigned to RFS process (PID:16586)
RFS[10]: Database mount ID mismatch [0x6666ae9b:0x666707e4] (1718005403:171802260)
RFS[10]: Client instance is standby database instead of primary
Tue Feb 19 16:10:14 2019
RFS[11]: Assigned to RFS process (PID:174682)
RFS[11]: Database mount ID mismatch [0x6666ae9b:0x666707e4] (1718005403:1718028260)
RFS[11]: Client instance is standby database instead of primary

After researching further, and according to Oracle (Doc ID 1450132.1):

Cause:

My standby database was sending archived logs back to the primary site (a RAC Exadata machine), which caused these messages to be written to the primary alert log. To fix it, I had to find the log_archive_dest_n parameter on the standby that was pointing back to the primary and remove it.

On standby site

SQL> sho parameter dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_2          string
log_archive_dest_2                   string      service=myprod_prim ARCH SYNC NO
                                                              AFFIRM delay=0 OPTIONAL reopen=15 register


SQL> sho parameter dest_state_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_2             string      ENABLE

The standby's log_archive_dest_2 is pointing back to the primary and is ENABLED, so either remove this parameter or DEFER it.

FIX:

SQL> ALTER SYSTEM SET log_archive_dest_2='' scope=both;

System altered.
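Alternatively, instead of clearing the parameter, the destination can be deferred, which keeps the configuration in place but stops the standby from shipping to it:

SQL> ALTER SYSTEM SET log_archive_dest_state_2='DEFER' SCOPE=BOTH;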

Now no more RFS messages are being written to the primary alert log.

Saturday, March 9, 2019

OGG Trail File Sequence Number Does Not Reset To 0



GGSCI (myorasrc.domain.com) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt


MANAGER     RUNNING

JAGENT      RUNNING


GGSCI (myorasrc.domain.com) 8> add extract EXTMYDEV, INTEGRATED TRANLOG, BEGIN NOW


EXTRACT (Integrated) added.


GGSCI (myorasrc.domain.com) 9> add EXTTRAIL /goldengate/dirdat/devdb/cr, extract EXTMYDEV, MEGABYTES 100

EXTTRAIL added.


GGSCI (myorasrc.domain.com) 10> add extract PMPMYDEV, EXTTRAILSOURCE /goldengate/dirdat/devdb/cr


EXTRACT added.


GGSCI (myorasrc.domain.com) 11> ADD RMTTRAIL /goldengate/dirdat/devdb/CR, extract PMPMYDEV, MEGABYTES 100

RMTTRAIL added.


GGSCI (myorasrc.domain.com) 13> info all


Program         Status         Group          Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING

JAGENT          RUNNING

EXTRACT      STOPPED      EXTMYDEV    00:00:00      00:00:10

EXTRACT      STARTING    PMPMYDEV    00:00:00      00:00:08


The pump shouldn't be in STARTING state; it should be STOPPED (like the extract above).

GGSCI (myorasrc.domain.com) 15> info EXTMYDEV detail


EXTRACT    EXTMYDEV  Initialized   2019-01-24 10:46   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:00:17 ago)

Log Read Checkpoint  Oracle Integrated Redo Logs

                     2019-01-24 10:46:57

                     SCN 0.0 (0)


  Target Extract Trails:


  Trail Name                                       Seqno        RBA     Max MB Trail Type


  /goldengate/dirdat/devdb/cr           0          0        100 EXTTRAIL



Integrated Extract outbound server first scn: Unavailable.Need DBLOGIN.


Integrated Extract outbound server filtering start scn: Unavailable.Need DBLOGIN.


  Extract Source                          Begin             End


  Not Available                           * Initialized *   2019-01-24 10:46


Current directory    /goldengate

Report file          /goldengate/dirrpt/EXTMYDEV.rpt (does not yet exist)

Parameter file       /goldengate/dirprm/EXTMYDEV.prm

Checkpoint file      /goldengate/dirchk/EXTMYDEV.cpe

Process file         /goldengate/dirpcs/EXTMYDEV.pce

Error log            /goldengate/ggserr.log



The GG extract looks good; let's check the pump.

GGSCI (myorasrc.domain.com) 13> info PMPMYDEV detail


EXTRACT    PMPMYDEV  Last Started 2019-01-24 12:18   Status RUNNING

Checkpoint Lag       00:00:00 (updated 00:00:08 ago)

Process ID           130534

Log Read Checkpoint  File /goldengate/dirdat/devdb/cr000000000

                     First Record  RBA 1451


  Target Extract Trails:


  Trail Name                                       Seqno        RBA     Max MB Trail Type


  /goldengate/dirdat/devdb/CR         122        911        100 RMTTRAIL          --> Wrong, should be 0


  Extract Source                          Begin             End


  /goldengate/dirdat/devdb/cr000000000  * Initialized *   First Record

  /goldengate/dirdat/devdb/cr000000000  * Initialized *   First Record



Current directory    /goldengate


Report file          /goldengate/dirrpt/PMPMYDEV.rpt

Parameter file       /goldengate/dirprm/PMPMYDEV.prm

Checkpoint file      /goldengate/dirchk/PMPMYDEV.cpe

Process file         /goldengate/dirpcs/PMPMYDEV.pce

Error log            /goldengate/ggserr.log



GGSCI (myorasrc.domain.com) 14> info EXTMYDEV showch


EXTRACT    EXTMYDEV  Last Started 2019-01-24 11:06   Status RUNNING

Checkpoint Lag       00:00:01 (updated 00:00:05 ago)

Process ID           117568

Log Read Checkpoint  Oracle Integrated Redo Logs

                     2019-01-24 13:55:34

                     SCN 95.1378716772 (409400609892)



Current Checkpoint Detail:


Read Checkpoint #1


  Oracle Integrated Redo Log


  Startup Checkpoint (starting position in the data source):

    Timestamp: 2019-01-24 10:46:57.000000

    SCN: Not available


  Recovery Checkpoint (position of oldest unprocessed transaction in the data source):

    Timestamp: 2019-01-24 13:55:34.000000

    SCN: 95.1378716771 (409400609891)


  Current Checkpoint (position of last record read in the data source):

    Timestamp: 2019-01-24 13:55:34.000000

    SCN: 95.1378716772 (409400609892)


Write Checkpoint #1


  GGS Log Trail


  Current Checkpoint (current write position):

    Sequence #: 122

    RBA: 1451

    Timestamp: 2019-01-24 13:55:35.651323

    Extract Trail: /goldengate/dirdat/devdb/cr

    Seqno Length: 9

    Flip Seqno Length: No

    Trail Type: EXTTRAIL


Header:

  Version = 2

  Record Source = A

  Type = 13

  # Input Checkpoints = 1

  # Output Checkpoints = 1


File Information:

  Block Size = 2048

  Max Blocks = 100

  Record Length = 2048

  Current Offset = 0


Configuration:

  Data Source = 3

  Transaction Integrity = 1

  Task Type = 0


Status:

  Start Time = 2019-01-24 11:06:15

  Last Update Time = 2019-01-24 13:55:35

  Stop Status = A

  Last Result = 520



 $ ls -lrth /goldengate/dirdat/devdb/cr*


-rw-r----- 1 oracle dba 1.5K Jan 24 11:06 /goldengate/dirdat/devdb/cr000000000


The trail file on the source side starts with sequence number zero, which is correct, as expected.



CAUSE :


The issue is that the trail files were NOT deleted on the replicat (target) side, so when the pump is recreated and started on the source side, it continues writing to the existing trail files on the target and picks up the old target trail sequence number.



Target side:


oratgtdev.domain.com:NOTSET:/goldengate/dirdat/devdb $ ls -lrth

total 16K

-rw-r----- 1 oracle dba 14K Jan 24 15:43 CR000000121

-rw-r----- 1 oracle dba 14K Jan 24 15:43 CR000000122



  

Solution:


Delete the trail files on the replicat (target) side and recreate the pump:
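On the target side, that means removing (or, safer, moving aside) the stale CR trail files before the pump is re-added. A sketch using the paths from this environment (the backup directory name is arbitrary):

$ # on the replicat (target) host: move the old trail files out of the way
$ mkdir -p /goldengate/dirdat/devdb_old
$ mv /goldengate/dirdat/devdb/CR* /goldengate/dirdat/devdb_old/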


GGSCI (myorasrc.domain.com) 16> stop PMPMYDEV


Sending STOP request to EXTRACT PMPMYDEV ...

Request processed.



GGSCI (myorasrc.domain.com) 17> delete PMPMYDEV

Deleted EXTRACT PMPMYDEV.


GGSCI (myorasrc.domain.com) 10> add extract PMPMYDEV, EXTTRAILSOURCE /goldengate/dirdat/devdb/cr


EXTRACT added.



GGSCI (myorasrc.domain.com) 11> ADD RMTTRAIL /goldengate/dirdat/devdb/CR, extract PMPMYDEV, MEGABYTES 100


RMTTRAIL added.


GGSCI (myorasrc.domain.com) 16> info PMPMYDEV detail



EXTRACT    PMPMYDEV  Initialized   2019-01-24 10:46   Status STARTING

Checkpoint Lag       00:00:00 (updated 00:00:16 ago)

Process ID           157497

Log Read Checkpoint  File /goldengate/dirdat/devdb/cr000000000

                     First Record  RBA 0


  Target Extract Trails:


  Trail Name                                       Seqno        RBA     Max MB Trail Type


  /goldengate/dirdat/devdb/CR           0          0        100 RMTTRAIL


  Extract Source                          Begin             End


  /goldengate/dirdat/devdb/cr000000000  * Initialized *   First Record



Current directory    /goldengate


Report file          /goldengate/dirrpt/PMPMYDEV.rpt (does not yet exist)

Parameter file       /goldengate/dirprm/PMPMYDEV.prm

Checkpoint file      /goldengate/dirchk/PMPMYDEV.cpe

Process file         /goldengate/dirpcs/PMPMYDEV.pce

Error log            /goldengate/ggserr.log


Now the Seqno and RBA are reset to zero, as expected.


Trail File Sequence Number Does Not Reset To 0 While Delete And Recreation Of Goldengate Extract. (Doc ID 2316739.1)



