
"_allow_resetlogs_corruption" recovery....

Please DO NOT use this in any production environment without Oracle Support...

*** Using this parameter should be your LAST option as you do not have any proper backups to bring your database online ***

One of our databases was stuck in an un-opened state due to SYSTEM tablespace datafile corruption (for whatever reason, e.g. the file was restored from an old backup).

We were OK with losing some data, and with losing the tablespaces created after the SCN recorded in the SYSTEM tablespace. So we used the undocumented parameter "_allow_resetlogs_corruption" (repeat: Oracle will not support your database if you recover it with this option without their support). The errors below show the lead-up to using this parameter, after which I was able to recover and open the database.

Database mounted.
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01245: offline file 1 will be lost if RESETLOGS is done
ORA-01110: data file 1: '/oradata/db_name/datafile//system.282.757162799'

SQL> alter database open noresetlogs;
alter database open noresetlogs
*
ERROR at line 1:
ORA-01588: must use RESETLOGS option for database open

SQL> alter database datafile 1 online;

Database altered.

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/oradata/db_name/datafile/system.282.757162799'

Use of the parameter:


SQL> alter system set "_allow_resetlogs_corruption"=true scope=spfile;

System altered.

SQL> shutdown
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1469792256 bytes
Fixed Size                  2121312 bytes
Variable Size            1107296672 bytes
Database Buffers          352321536 bytes
Redo Buffers                8052736 bytes
Database mounted.
SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-01610: recovery using the BACKUP CONTROLFILE option must be done

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01248: file 19 was created in the future of incomplete recovery
ORA-01110: data file 19: '/oradata/db_name/datafile/file2.dbf'

I knew for sure I did not need to worry about this datafile, as I have the data and can put it back in later, so I marked it OFFLINE:

SQL> alter database datafile 19 offline;

Database altered.

Now open with RESETLOGS and you should be good. Take a FULL backup ASAP.

SQL> alter database open resetlogs;

Database altered.

SQL>

Once again, please do not use this parameter in any production or critical database without Oracle Support's recommendation; otherwise you will end up with an inconsistent database.
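If you do end up using it, clean up afterwards: remove the hidden parameter and take that full backup immediately. A minimal sketch, assuming an spfile and an already-configured RMAN environment:

SQL> alter system reset "_allow_resetlogs_corruption" scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup

RMAN> backup database plus archivelog;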


ORA-01031: insufficient privileges with "as sysasm"

This error "ORA-01031: insufficient privileges" really made me frustrated while trying to upgrade 11.2.0.2 to 11.2.0.3.

Basically, as part of the upgrade process the installer has to write some information to the 11.2.0.2 ASM, so it shuts down the ASM instance running from the 11.2.0.2 home and then tries to start it with the 11.2.0.3 binaries, and that is when I kept getting this error.

Here is the error I was getting at the time of the migration:
--------------------------------------------------------------------------------------------

CRS-4133: Oracle High Availability Services has been stopped.
OLR initialization - successful
Replacing Clusterware entries in inittab
Start of resource "ora.asm" failed
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'node1'
CRS-2676: Start of 'ora.drivers.acfs' on 'node11' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node1'
ORA-01031: insufficient privileges
CRS-5017: The resource action "ora.asm start" encountered the following error:
ORA-01031: insufficient privileges
. For details refer to "(:CLSN00107:)" in "/opt/grid/app/11.2.0.3/grid/log/node1/agent/ohasd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.asm' on 'node1' failed
CRS-2679: Attempting to clean 'ora.asm' on 'node1'
ORA-01031: insufficient privileges

--------------------------------------------------------------------------------------------

So, I tried to just connect with "sqlplus / as sysasm" (note: "/ as sysdba" has no issues):

NONE::node1:/opt/oracle>export ORACLE_SID=+ASM1
+ASM1::node1:/opt/oracle>export ORACLE_HOME=/opt/grid/app/11.2.0.3/grid
+ASM1::node1:/opt/oracle>export PATH=$ORACLE_HOME/bin:$PATH

+ASM1::node1:/opt/oracle>sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Thu Apr 4 11:24:22 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

ERROR:
ORA-01031: insufficient privileges


At first I thought it might be the environment variables, but not in my case:
ORACLE_HOME was set to 11.2.0.3 and PATH had $ORACLE_HOME/bin:$PATH, but I still got the same error.

No issue when I set ORACLE_HOME back to 11.2.0.2!!!! So, what's the difference?

Took me a while to figure this out, but I nailed it:
Basically, during the upgrade OUI prompts for the "Privileged Operating System Groups" for ASM DBA, ASM Operator and ASM Administrator.
I chose the default values, and unfortunately in my environment the grid user "oracle" does not belong to a group named "asmadmin", hence the "insufficient privileges" error.
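A quick way to confirm the mismatch is to check the grid user's group membership (the output below is hypothetical for my environment):

+ASM1::node1:/opt/oracle>id oracle
uid=500(oracle) gid=501(dba) groups=501(dba),502(asmdba),503(asmoper)

No "asmadmin" in that list, so "/ as sysasm" is refused with ORA-01031.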

How do I change it?

Check the file config.c under /opt/grid/app/11.2.0.3/grid/rdbms/lib/

You will see the following:


#define SS_DBA_GRP "asmdba"
#define SS_OPER_GRP "asmoper"
#define SS_ASM_GRP "asmadmin" --> this was my problem. I changed this to "dba".

It's not done yet.

We need to relink the oracle binary for this change to take effect.

Here is how you do that:

node1:/opt/grid/app/11.2.0.3/grid/rdbms/lib>make -f ins_rdbms.mk config.o ioracle

Make sure to check the output thoroughly: in an 11g environment some folders/files may be owned by "root", so the relink might fail to move the old oracle binary to oracleO and copy the new one into place.
If you do see such errors, ask your admin to move the old binary /opt/grid/app/11.2.0.3/grid/bin/oracle to /opt/grid/app/11.2.0.3/grid/bin/oracleO and copy the newly linked file from
/opt/grid/app/11.2.0.3/grid/rdbms/lib/oracle to /opt/grid/app/11.2.0.3/grid/bin/, as sketched below.
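In that case the manual swap (run as root; the ownership and mode shown are assumptions based on a working home, so verify against yours) is roughly:

node1:/>mv /opt/grid/app/11.2.0.3/grid/bin/oracle /opt/grid/app/11.2.0.3/grid/bin/oracleO
node1:/>cp /opt/grid/app/11.2.0.3/grid/rdbms/lib/oracle /opt/grid/app/11.2.0.3/grid/bin/oracle
node1:/>chown oracle:dba /opt/grid/app/11.2.0.3/grid/bin/oracle
node1:/>chmod 6751 /opt/grid/app/11.2.0.3/grid/bin/oracle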

That is it:
The error vanishes now.


+ASM1:node1:/opt/grid/app/11.2.0.3/grid/rdbms/lib>sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Thu Apr 4 17:12:55 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

Good luck. I am having another migration issue right now and will see what else is messed up.



ORA-27303: additional information: startup egid = 80, current egid = 82

While using DBUA to upgrade a database from 11.2.0.2 to 11.2.0.3, DBUA was throwing an error starting with PRCR-1079: Failed to start resource ora.xxx.db, followed by a bunch of other errors.

One error in the middle of that error stack made it obvious that I should look at the permissions of the "oracle" binary:


Ora-27303: additional information: startup egid = 80 (dba), current egid = 82 (asmadmin)

Comparing the file permissions of the oracle binary in the Oracle home and the Grid home shows they are not the same:
/opt/oracle>ls -al app/product/11.2.0.3/db_1/bin/oracle
-rwsr-s--x 1 oracle dba 232399431 Apr 29 12:39 app/product/11.2.0.3/db_1/bin/oracle
/opt/oracle>
/opt/oracle>ls -al /opt/grid/app/11.2.0.3/grid/bin/oracle
-rwxr-xr-x 1 oracle dba 203973009 Apr  4 17:08 /opt/grid/app/11.2.0.3/grid/bin/oracle
/opt/oracle>

Changing the permissions on the Grid home oracle binary fixed it; after this, DBUA had no issues.

/opt/oracle>cd /opt/grid/app/11.2.0.3/grid/bin/
/opt/grid/app/11.2.0.3/grid/bin>chmod 6751 oracle
/opt/grid/app/11.2.0.3/grid/bin>ls -al oracle
-rwsr-s--x 1 oracle dba 203973009 Apr  4 17:08 oracle
/opt/grid/app/11.2.0.3/grid/bin>
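For reference, mode 6751 sets the setuid and setgid bits (the "s" characters in -rwsr-s--x), so the binary runs with its owner's effective uid/gid rather than the caller's; that is exactly what the ORA-27303 egid mismatch was complaining about. A quick sanity check is to list both binaries side by side (paths as above):

/opt/oracle>ls -al app/product/11.2.0.3/db_1/bin/oracle /opt/grid/app/11.2.0.3/grid/bin/oracle

Both should now show -rwsr-s--x.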

RMAN Connect error with 11g

Was trying to connect to an 11g database using RMAN and getting this error:

/opt/oracle/etc>rman target DP_BACKUP/pass@db_name


RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04005: error from target database:
ORA-01017: invalid username/password; logon denied

The userid/password works fine if I connect using sqlplus, as shown below!!!

/opt/oracle/etc>sqlplus DP_BACKUP/pass@db_name

SQL*Plus: Release 11.1.0.7.0 - Production on Wed May 8 09:59:03 2013

Copyright (c) 1982, 2008, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning and Real Application Clusters options

SQL>

The user DP_BACKUP has SYSDBA granted, as shown in the password file view:
SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS
------------------------------ ----- ----- -----
SYS                            TRUE  TRUE  FALSE
DP_BACKUP                      TRUE  FALSE FALSE

But it still errors out!!!

The reason is that the password in the password file is out of sync with the user's actual password. So either re-create the password file with the appropriate password or change the password for the DP_BACKUP user (which refreshes its entry in the password file).
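A minimal sketch of both fixes (the password file name and location are the Linux defaults; adjust the SID and password for your environment):

/opt/oracle>orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=new_pass entries=10 force=y

Or simply reset the password, which refreshes the user's entry in the password file:

SQL> alter user DP_BACKUP identified by new_pass;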

Finding Bind variables in SQLs executed...

v$sql_bind_capture --> this view shows the bind variables passed to the SQL at run time
DBA_HIST_SQLBIND --> historical (AWR) information

Ex:

select b.last_captured, a.sql_text, b.name, b.position, b.datatype_string, b.value_string
  from dba_hist_sqlbind b,
       --gv$sql_bind_capture b,
       v$sqlarea a
 where b.sql_id = '66j3kkqnv3m2f' -- the sql_id whose bind variables you want
   and b.sql_id = a.sql_id;

From Oracle Docs on DBA_HIST_SQLBIND:

DBA_HIST_SQLBIND displays historical information on bind variables used by SQL cursors. This view contains snapshots of V$SQL_BIND_CAPTURE.
LAST_CAPTURED (DATE): Date when the bind value was captured. Bind values are captured when SQL statements are executed. To limit the overhead, binds are captured at most once every 15 minutes for a given cursor.

RMAN-20035: invalid high recid

While moving our database backups to the new version of the HP Data Protector software, I hit a hiccup on one database with the "RMAN-20035: invalid high recid" error.

RMAN-03014: implicit resync of recovery catalog failed
RMAN-06004: ORACLE error from recovery catalog database: RMAN-20035: invalid high recid

Also, we were moving our RMAN catalog from one database to another to resolve a separate performance issue, so it might have been a sync issue.

Solution to that was to un-register the database from RMAN catalog and re-register.

More on this topic/issue in metalink id#273446.1

Note: you can also unregister using the dbms_rcvcat.unregisterdatabase procedure, but I am showing what I did.
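For reference, the dbms_rcvcat route (run as the recovery catalog owner; the key values below are illustrative, pick yours from rc_database) looks roughly like this:

SQL> select db_key, dbid from rc_database where name = 'MYDB';
SQL> execute dbms_rcvcat.unregisterdatabase(1234, 567890123);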

To Unregister:
Connect to your RMAN catalog the way you generally do when taking your database backup and run:

RMAN> unregister database;

To Register:
RMAN> register database;

After this I was able to backup the database successfully...

ORA-00845: MEMORY_TARGET not supported on this system

One of our RAC nodes crashed and all its databases went down; as usual, they should come back once ASM is up.

But this one database would not start at all.

Grepping for that database's PMON returned nothing, so I tried to start it manually:

SQL*Plus: Release 11.2.0.2.0 Production on Mon May 27 04:55:22 2013

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORA-00845: MEMORY_TARGET not supported on this system

Now that is weird. Why would I get that? /dev/shm has no free space? Really? But why? The database is not even started yet, so what could be using it!!!!

Grepping for database name gives me:

db2:11.2.0:node2:/opt/oracle>ps -ef |grep db2
oracle     493     1  0 Apr28 ?        00:00:00 oracledb2 (LOCAL=NO)
oracle   24841     1  0 04:51 ?        00:00:00 oracledb2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   24846     1  0 04:51 ?        00:00:00 oracledb2 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   31322 26885  0 05:02 pts/1    00:00:00 grep db2
oracle   32553     1  0 Apr28 ?        00:00:00 oracleras2 (LOCAL=NO)

Hmm... I get nothing when I grep for PMON, but there are still hung processes running in the background!!!

Time to kill those processes...
db2:node2:/opt/oracle>kill -9 493 24841 24846 32553

Now do a startup and the database comes up without any complaints...
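If you want to confirm the cause before killing anything, a quick shared-memory check helps (a sketch; your sizes and segments will differ):

db2:11.2.0:node2:/opt/oracle>df -h /dev/shm
db2:11.2.0:node2:/opt/oracle>ipcs -m

MEMORY_TARGET needs enough free space in the /dev/shm tmpfs, and ipcs -m shows any shared memory segments still held by the dead instance's leftover processes.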





RMAN Slow performance with "control file sequential read" wait event

We have a database that is about 40TB, its control file is about 900MB, and we generate about 1.5TB of archive logs every day.

So I figured all of the above was why "SHOW ALL" takes about 45 minutes in RMAN...

But for some reason I could not convince myself that just a SHOW ALL command should take that long!!! Not only that, it took way too long just to allocate channels, and that was making our backups take forever...

Anyway, digging deeper I found the following SQL just sitting there on the "control file sequential read" wait event, which made me hunt it down:

SELECT RECID, STAMP, TYPE OBJECT_TYPE, OBJECT_RECID, OBJECT_STAMP, OBJECT_DATA, TYPE,
       (CASE
          WHEN TYPE = 'DATAFILE RENAME ON RESTORE' THEN DF.NAME
          WHEN TYPE = 'TEMPFILE RENAME' THEN TF.NAME
          ELSE TO_CHAR(NULL)
        END) OBJECT_FNAME,
       (CASE
          WHEN TYPE = 'DATAFILE RENAME ON RESTORE' THEN DF.CREATION_CHANGE#
          WHEN TYPE = 'TEMPFILE RENAME' THEN TF.CREATION_CHANGE#
          ELSE TO_NUMBER(NULL)
        END) OBJECT_CREATE_SCN,
       SET_STAMP, SET_COUNT
  FROM V$DELETED_OBJECT, V$DATAFILE DF, V$TEMPFILE TF
 WHERE OBJECT_DATA = DF.FILE#(+)
   AND OBJECT_DATA = TF.FILE#(+)
   AND RECID BETWEEN :b1 AND :b2
   AND (STAMP >= :b3 OR RECID = :b2)
   AND STAMP >= :b4
 ORDER BY RECID

That led me to this metalink note: Rman Backup Very Slow After Mass Deletion Of Expired Backups (Disk, Tape) [ID 465378.1]

The workaround brought SHOW ALL from 45 minutes down to returning results in seconds...

*** Please check with Oracle Support prior to deploying this workaround in your database to make sure it is OK ***


Solution

The bug is fixed in 11g, but until then use the workaround of clearing the controlfile section that houses v$deleted_object:

SQL> execute dbms_backup_restore.resetcfilesection(19);

Then clear the corresponding high-water mark in the catalog:

SQL> select * from rc_database;
     --> note the 'db_key' and 'dbinc_key' of your target based on dbid

For pre-11G catalog schemas:

SQL> update dbinc set high_do_recid = 0 where db_key = <db_key> and dbinc_key = <dbinc_key>;
SQL> commit;

For 11G+ catalog schemas:

SQL> update node set high_do_recid = 0 where db_key = <db_key> and dbinc_key = <dbinc_key>;
SQL> commit;


ORA-23515: materialized views and/or their indices exist in the tablespace

Was trying to drop a tablespace that was not needed anymore and received this error:

drop tablespace ts1 including contents and datafiles cascade constraints;

ORA-23515: materialized views and/or their indices exist in the tablespace

Solution:
Run the following query to get the materialized views it is talking about, drop them first, and then you can drop the tablespace without any issues.

select 'drop materialized view '||owner||'.'||name||' PRESERVE TABLE;'
  from dba_registered_snapshots
 where name in (select table_name from dba_tables where tablespace_name = 'TS1');
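Note: the PRESERVE TABLE clause in the generated statements keeps the underlying container table and its data in place; only the materialized view metadata is dropped. Leave it off if you want the segments gone as well.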


ORA-19643: datafile 6: incremental-start SCN is too recent

When you see this error in your RMAN backups, check the status of that file#; it might be OFFLINE.

Here is the RMAN Log showing that error:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on dev_7 channel at 07/10/2013 13:31:04
ORA-19643: datafile 6: incremental-start SCN is too recent
ORA-19640: datafile checkpoint is SCN 286109177130 time 01/28/2013 15:44:19

Recovery Manager complete.
[Major] From: ob2rman@ndhdbt3 "adm_t"  Time: 07/10/13 13:32:09
External utility reported error.

RMAN PID=9062

[Major] From: ob2rman@ndhdbt3 "adm_t"  Time: 07/10/13 13:32:09
The database reported error while performing requested operation.


In my case, I knew we had to take this particular tablespace offline due to some other issues, and its stale checkpoint SCN was what made the incremental-start SCN check fail.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
331472777523

So I just have to get rid of that tablespace to make my backups run successfully.
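If the tablespace has to stay offline for a while instead, a minimal sketch of an alternative is to have RMAN skip offline datafiles (combine with your usual backup options):

RMAN> backup database skip offline;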

ASM Disk Limits

Limits when configuring an ASM instance (metalink Note ID 370921.1)
Oracle Database - Enterprise Edition - Version 10.1.0.2 to 11.1.0.7 [Release 10.1 to 11.1]

ASM imposes the following limits:

63 disk groups in a storage system

10,000 ASM disks in a storage system

2 terabytes maximum storage for each ASM disk (Bug 6453944 allowed larger sizes, but that led to problems; see Note 736891.1 "ORA-15196 WITH ASM DISKS LARGER THAN 2TB")

40 exabyte maximum storage for each storage system

1 million files for each disk group

2.4 terabyte maximum storage for each file

Netezza Database size

The SQL below will give you the used and allocated size for each database in Netezza:

select orx.database::nvarchar(64) as "databasename" ,
       case when sum(sod.used_bytes) is null then 0 else sum(sod.used_bytes)/1073741824 end as "usedspace_gb",
       case when sum(sod.allocated_bytes) is null then 0 else sum(sod.allocated_bytes)/1073741824 end as "allocatedspace_gb"
  from _v_sys_object_dslice_info sod inner join _v_obj_relation_xdb orx on orx.objid = sod.tblid
 group by "databasename"
 order by "databasename";

ORA-19625: error identifying file

Was getting the following error in one of our database backups.
Tried crosscheck archivelog all; and change archivelog all crosscheck; followed by delete expired archivelog all; but none of them found this archived log at all.

RMAN-03002: failure of backup command at 07/16/2013 23:11:14
RMAN-06059: expected archived log not found, lost of archived log compromises recoverability
ORA-19625: error identifying file /ora13/orafra/RH_PAY/archivelog/2013_03_07/o1_mf_1_82660_1TzwREeTB_.arc
ORA-17503: ksfdopn:4 Failed to open file /ora13/orafra/RH_PAY/archivelog/2013_03_07/o1_mf_1_82660_1TzwREeTB_.arc
ORA-17500: ODM err:File does not exist

Will have to research more to find out where this file's information got stuck.

Solution:
For now, I am going with the SKIP INACCESSIBLE option in my backup script.
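A minimal sketch of that workaround; SKIP INACCESSIBLE tells RMAN to ignore archived logs it cannot open instead of failing the whole backup:

RMAN> backup archivelog all skip inaccessible;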

Oracle Patches and Patchsets...

Refer to Metalink note 753736.1, "Quick Reference to RDBMS Database Patchset Patch Numbers".

Increasing VirtualBox VDI size...

Run vboxmanage with the "modifyhd" parameter and the new size in MB, as follows, whenever you want to increase the virtual size of a VirtualBox VDI.

c:\VirtualBox>"c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyhd OL6-112-Rac1.vdi --resize 51200
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
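Note that --resize takes the new total size in MB (51200 MB = 50 GB here) and only grows the virtual disk itself; you still have to extend the partition and filesystem inside the guest afterwards.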

ins_precomp.mk:19: warning: overriding commands for target `pcscfg.cfg'

Got the following warning while upgrading from 11.2.0.3.0 to 11.2.0.3.6.
Oracle note 1448337.1 says don't worry about it :)

OPatch found the word "warning" in the stderr of the make command.
Please look at this stderr. You can re-run this make command.
Stderr output:
ins_precomp.mk:19: warning: overriding commands for target `pcscfg.cfg'
/u01/app/oracle/product/11.2.0/db_1/precomp/lib/env_precomp.mk:2160: warning: ignoring old commands for target `pcscfg.cfg'
/u01/app/oracle/product/11.2.0/db_1/precomp/lib/ins_precomp.mk:19: warning: overriding commands for target `pcscfg.cfg'
/u01/app/oracle/product/11.2.0/db_1/precomp/lib/env_precomp.mk:2160: warning: ignoring old commands for target `pcscfg.cfg'


Composite patch 16056266 successfully applied.
OPatch Session completed with warnings.
Log file location: /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch/opatch2013-07-23_16-15-40PM_1.log

Excerpt from Oracle Metalink note 1448337.1:

CAUSE

This warning is independent of the version of opatch used. 
Targets are defined more than once within the makefiles and this is just a warning that the second (later) definition is being used:

ins_srvm.mk:71: warning: overriding commands for target `libsrvmocr11.so'
ins_srvm.mk:34: warning: ignoring old commands for target `libsrvmocr11.so'

ins_precomp.mk:19: warning: overriding commands for target `pcscfg.cfg'
env_precomp.mk:2115: warning: ignoring old commands for target `pcscfg.cfg'

If you check the $OH/install/make.log you will see that these warnings existed before patching i.e. after the original installation.  This is not an issue which the patch you are applying has introduced, simply a warning which opatch has correctly captured and is reporting back to the user.

SOLUTION

This is a warning only which opatch is reporting.  The Patch has applied successfully and the warning output can be safely ignored.


ORA-01031: insufficient privileges while rebuilding index online

When rebuilding an index online Oracle will create a journal table in the schema where the index resides.

That said, you will get this error if you try to rebuild an index through a procedure using execute immediate, even though you own the table, index and procedure.

Privileges granted through roles are disabled inside definer's-rights procedures, so every operation executed via execute immediate needs a direct grant.

So, have direct "grant create table" and "grant alter any index" in place and you should be good.
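A minimal sketch of the fix (the schema, procedure and index names are hypothetical):

SQL> grant create table to app_owner;
SQL> grant alter any index to app_owner;

create or replace procedure app_owner.rebuild_idx(p_idx in varchar2) as
begin
  -- works now because the grants above are direct, not granted via a role
  execute immediate 'alter index ' || p_idx || ' rebuild online';
end;
/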

Golden Gate Initial Load Methods...

GoldenGate (Rel. 11) provides multiple choices for performing the initial load for any migration or DR setup, apart from legacy methods such as RMAN backup/restore, cloning, etc.

I am working on migrating a 10g database from HP-UX to 11gR2 on Linux. A pretty simple migration, one might think...
The catch is that we are dealing with databases ranging from 2TB to 40TB.
So XTTS (Cross Platform Transportable Tablespaces) is also not an option with a 40TB database.

So I went with GoldenGate, tried to choose the best initial-load methodology, and thought of sharing that info here.

Below are the Initial Load GoldenGate methods:

Option 1 - Loading Data from File to Replicat (metalink 1441172.1)
Concept: the initial-load extract writes the records to an external file, which Replicat uses to apply the changes to the target site.
Cons: quite a slow initial load, and it cannot be used here due to limitations such as column size/data-type restrictions.

Option 1.a - Loading Data from Trail to Replicat (metalink 1195705.1)
Concept: the initial-load extract writes the records to a series of external trail files, which Replicat uses to apply the changes to the target site.
Pros: faster than #1 because replication can begin while extraction is still running; it can also process a near unlimited number of tables.

Option 2 - Loading Data from File to Database Utility (metalink 1457989.1)
Concept: the initial-load extract writes the records to external ASCII files, which are used at the target as input files for a bulk-load utility native to the database, such as SQL*Loader.

Option 3 - Loading Data with an Oracle GoldenGate Direct Load (metalink 1457164.1)
Concept: the initial-load extract extracts the records and sends them directly to a Replicat initial-load task.
Pros: faster.
Cons: no support for LOBs, LONGs, UDTs, or any other data type larger than 4K.

Option 4 - Loading Data with a Direct Bulk Load to SQL*Loader (metalink 146185.1)
Concept: the initial-load extract extracts the source records and sends them directly to an initial-load Replicat task, which is dynamically started by Manager. The initial-load Replicat task interfaces with the SQL*Loader API to load the data as a direct bulk load.
Cons: no support for LOBs or LONGs; only works with Oracle's SQL*Loader.

As you can see, methods 1, 1.a, 3 and 4 are not going to work for my situation, as we have plenty of CLOBs, LONGs and UDTs (user-defined types).
That leaves only option #2, and I really did not want to use it: our databases are not in the GB range but in TBs, and extracting the data into ASCII files, storing them on the source, transferring them to the target over the network, and loading them there is... no, not really an option.

So, what's my option for the initial load then?

I propose the pure legacy method of EXP(dp)/IMP(dp), as this will be a lot faster than option #2 and there is really no other option left.

Check the note below:

Option 999 - EXP/IMP (metalink 1276058.1)
Cons: big UNDO usage, as the export runs as of a consistent SCN.
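The trick to making exp/imp (Data Pump) work alongside GoldenGate is to export as of a known SCN and start Replicat only for changes after that SCN. A minimal sketch (directory, parallelism, SCN and group names are illustrative):

expdp system/pass schemas=APP directory=DP_DIR parallel=8 flashback_scn=298885042418 dumpfile=app_%U.dmp logfile=app_exp.log

GGSCI> start replicat rpcs01, aftercsn 298885042418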

I also planned on moving more than 50% of the 40TB of data to the target prior to starting the Extract process, to reduce the overall migration time.
Reach out to me for more details on this monster migration.

Failed to get next ts for EXTRACT PPCS07 (error 109, Invalid pointer retrieving GGS logtrail timestamp in )

GGSCI (dbsrv1) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EPCS07      00:48:12      00:00:08
Failed to get next ts for EXTRACT  PPCS07     (error 109, Invalid pointer retrieving GGS logtrail timestamp in )


GGSCI (ndhdbp4) 3> info *

EXTRACT    EPCS07    Last Started 2013-08-20 14:05   Status RUNNING
Checkpoint Lag       00:57:05 (updated 00:00:10 ago)
Log Read Checkpoint  Oracle Redo Logs
                     2013-08-20 14:19:58  Thread 1, Seqno 34114, RBA 1065005856
                     SCN 69.2532298994 (298885042418)
Log Read Checkpoint  Oracle Redo Logs
                     2013-08-20 14:20:00  Thread 2, Seqno 33665, RBA 268387208
                     SCN 69.2532299819 (298885043243)

EXTRACT    PPCS07    Last Started 2013-08-20 15:02   Status RUNNING
ERROR: Error retrieving current checkpoint timestamp.

That message kept coming up for every Extract process we ran, so I set out to nail down what was causing it to pop up when the Extract and pump processes were running fine.

Well, the reason was simple:
When creating the pump process we used "THREADS 2" (just like for the Extract process), and the THREADS option is not valid for a pump. That is why we were getting this message.

As you can see, "info *" also errors out when it tries to show info for thread 2.

Recreate the pump process without this parameter and you won't see this message again...
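A hypothetical GGSCI session for recreating the pump without THREADS (group and trail names are illustrative; the existing parameter file is reused since DELETE EXTRACT does not remove it):

GGSCI> stop extract ppcs07
GGSCI> delete extract ppcs07
GGSCI> add extract ppcs07, exttrailsource ./dirdat/ep
GGSCI> add rmttrail ./dirdat/rp, extract ppcs07
GGSCI> start extract ppcs07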

Statistics reply buffer exceeded. Results truncated...

In GoldenGate, when using "stats extract group_name" you may get this error when the output is too big to fit in the buffer:

"Statistics reply buffer exceeded.  Results truncated..."

You can write the statistics to a report file, as shown below, for the entire group's stats:

GGSCI (ndhdbp3) 18> send epcs07, report

Sending REPORT request to EXTRACT EPCS07 ...
Request processed.


Or use "stats extract group_name, table owner.table_name" (or *) to get stats for an individual table or schema.
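For example (the group name is from above; the table names are hypothetical):

GGSCI> stats extract epcs07, table app.orders
GGSCI> stats extract epcs07, table app.*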