Monday, December 31, 2012

Database Migration to ASM “No Zero Downtime”



By Joel Pérez
Published December 2012
Dear Oracle technologists, greetings. In this article we will take a closer look at migrating, or relocating, an Oracle database to ASM using RMAN ( Oracle Recovery Manager ).
Starting with the 10g database server release we have ASM ( Automatic Storage Management ), a storage technology that revolutionized the way we store and administer our databases.
Before ASM appeared, the typical storage options were filesystems or raw devices. Raw devices were, and still are, fast to access, because the operating system needs no additional volume-management layer to work with them.
They have several disadvantages:
  • Partitions cannot be resized once created
  • The database archive logs cannot be stored on raw devices, given how inflexible these devices are
  • If new datafiles were needed, new raw devices would have to be created
  • In general, they are cumbersome to administer
For us DBAs, ASM marked a complete turnaround in how our databases would be stored and administered:
  • There would no longer be a need to create multiple mount points or filesystems
  • Datafiles would enjoy greater protection than when stored on filesystems
  • New storage architecture philosophies would be within reach: just two Diskgroups for all databases
  • I/O saturation problems and mount-point bottlenecks would become things of the past thanks to the “Rebalance” concept across ASM disks
  • These and many more reasons justify migrating our databases from filesystem to ASM
In this article we will relocate a database from filesystem to ASM while allowing a medium “Downtime”: we will move the database while it is closed. A medium “Downtime” applies when a task that makes the database unavailable is performed at a company whose production system does not run 24x7x365. Such companies generally have business hours for their systems that cover only part of the day ( e.g., 8:00 a.m. to 5:00 p.m. ), so they can schedule maintenance tasks outside those hours. With this usage profile, we can carry out a task during which the database service will be unavailable for a prolonged period of hours; we call this a medium “Downtime”. The best-known term associated with the word “Downtime” is “Zero Downtime”.
A “Zero Downtime” migration and/or relocation strategy aims to complete the task in as little time as possible ( seconds, minutes ) and is generally associated with companies whose highly critical businesses and services run 24x7x365. We will cover this same task using techniques that satisfy the “Zero Downtime” philosophy in a follow-up article.
Scenario: we have a single-instance database with all of its elements ( Controlfiles, Datafiles, Redo Logs & Archives ) on filesystem, and we want to move it to ASM. Assuming the necessary software is already installed, let us begin. The technique used in this article is valid for Oracle Server 10g onward.
Source database: SOURCE
Diskgroups available for the migration: +DATA & +FRA
Identifying the elements of the “SOURCE” database to be moved
Identifying the Datafiles:
[oracle@MyjpServer ~]$ export ORACLE_SID=SOURCE
[oracle@MyjpServer ~]$
[oracle@MyjpServer ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.1.0.7.0 - Production on Sat Jul 28 18:34:39 2012

Copyright (c) 1982, 2008, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
With the Real Application Clusters option

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
/home/oracle/SOURCE/users01.dbf
/home/oracle/SOURCE/undotbs01.dbf
/home/oracle/SOURCE/sysaux01.dbf
/home/oracle/SOURCE/system01.dbf


Identifying the Tempfiles:
SQL> select file_name from dba_temp_files;

FILE_NAME
--------------------------------------------------------------------------------
/home/oracle/SOURCE/temp01.dbf

Identifying the Controlfiles:
SQL> show parameters control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /home/oracle/SOURCE/control01.
                                                 ctl, /home/oracle/SOURCE/contr
                                                 ol02.ctl, /home/oracle/SOURCE/
                                                 control03.ctl
SQL>
SQL> select NAME from v$controlfile;
NAME
---------------------------------
/home/oracle/SOURCE/control01.ctl
/home/oracle/SOURCE/control02.ctl
/home/oracle/SOURCE/control03.ctl

SQL>

Identifying the Redo Log files:
SQL> select GROUP#, MEMBER from v$logfile;

    GROUP# MEMBER
---------- ---------------------------------
         3 /home/oracle/SOURCE/redo03.log
         2 /home/oracle/SOURCE/redo02.log
         1 /home/oracle/SOURCE/redo01.log

Relocating the Controlfiles
We will change parameters at the spfile ( Server Parameter File ) level, so we first back it up in case it needs to be restored.
Note: to back up the server parameter file, the database must be in “mount” or “open” state.
Current database state: open.
 
rman target /

Recovery Manager: Release 11.1.0.7.0 - Production on Sat Jul 28 18:49:07 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: SOURCE (DBID=2908208036)

RMAN> backup spfile format '/home/oracle/SOURCE/MySpfileBackup.ora';

Starting backup at 28-07-2012 18:49:11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=147 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 28-07-2012 18:49:12
channel ORA_DISK_1: finished piece 1 at 28-07-2012 18:49:13
piece handle=/home/oracle/SOURCE/MySpfileBackup.ora tag=TAG20120728T184911 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 28-07-2012 18:49:13

RMAN>

We now point the parameters ( control_files and db_create_file_dest ) to their new locations. We will shut the database down with the “immediate” option so that the Datafiles & Redo Logs are left fully consistent. The chosen path follows the “Oracle Managed Files” conventions for databases in ASM. New datafiles will be created by default in the location given by the db_create_file_dest parameter.
 
SQL> alter system set control_files='+DATA/source/controlfiles/control01.ctl' scope=spfile;
SQL> alter system set db_create_file_dest='+DATA' scope=spfile;

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

Creating the ASM directory that will host the controlfile(s)
For this case we will create just one controlfile. If you wish to create controlfiles in different Diskgroups, which is the best practice, you can proceed in the same way: create the corresponding directories in the corresponding Diskgroups, and assign multiple paths as the value of the control_files parameter at the spfile level.
Remember that this database was originally created on filesystem and has no relationship with the ASM Diskgroups, so the directories that will host the Controlfiles, Datafiles, Redo Logs and other elements must be created. For some elements, the associated procedure ( RMAN restore ) creates them automatically; for others it does not. In the controlfile’s case, the directory has to be created.
Note: for this case we are working with Oracle Server 11g R1, which has the same “home” architecture used in 10g R1, 10g R2 & 11g R1. That architecture includes a home for ASM, typically owned by the oracle user. This “home” works hand in hand with a higher-level “home” ( in the component stack ) belonging to the Oracle Server, also owned by the oracle user. That is why we set the ASM “home” through the ( . oraenv ) mechanism.
If we were working on 11g R2, the best practice would be for the “Grid Infrastructure” software to belong to the “grid” user and the Oracle Server to the “oracle” user; under that architecture, this step would be performed while connected as the grid user.
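As a sketch, multiplexing the controlfile across two Diskgroups would look roughly as follows ( the paths are hypothetical and not part of this article’s scenario; adjust them to your own Diskgroups and directories ):

```sql
-- Sketch only: multiplexed controlfiles in two Diskgroups
-- (hypothetical paths; each directory must exist before the
-- controlfile is restored to every listed location).
SQL> alter system set control_files=
  2  '+DATA/source/controlfiles/control01.ctl',
  3  '+FRA/source/controlfiles/control02.ctl'
  4  scope=spfile;
```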
Creating the directories needed to end up with the path +DATA/SOURCE/controlfiles:
 
[oracle@MyjpServer ~]$ . oraenv
ORACLE_SID = [TEST] ? +ASM
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/asm1 is /u01/app/oracle
[oracle@MyjpServer ~]$
[oracle@MyjpServer ~]$ asmcmd
ASMCMD>
ASMCMD> cd +DATA
ASMCMD>
ASMCMD> mkdir SOURCE
ASMCMD>
ASMCMD> cd SOURCE
ASMCMD>
ASMCMD> mkdir CONTROLFILES
ASMCMD>
ASMCMD> cd controlfiles
ASMCMD>
ASMCMD> pwd
+DATA/SOURCE/controlfiles
ASMCMD>

Restoring the Controlfile into ASM
 
SQL> startup nomount
ORACLE instance started.

Total System Global Area  680607744 bytes
Fixed Size                  2162800 bytes
Variable Size             180359056 bytes
Database Buffers          494927872 bytes
Redo Buffers                3158016 bytes
SQL>

RMAN> restore controlfile from '/home/oracle/SOURCE/control01.ctl';

Starting restore at 28-07-2012 19:12:25
using channel ORA_DISK_1

channel ORA_DISK_1: copied control file copy
output file name=+DATA/source/controlfiles/control01.ctl
Finished restore at 28-07-2012 19:12:26

RMAN>

Viewing the created Controlfile
 
ASMCMD> pwd
+DATA/SOURCE/controlfiles
ASMCMD>
ASMCMD> ls -lt
Type   Redund  Striped  Time    Sys  Name
                                N    control01.ctl => +DATA/SOURCE/CONTROLFILE/
                                                       current.261.789851545
ASMCMD>

Mounting the database
 
SQL> alter database mount;

Database altered.

SQL>

“Backup as Copy” of the database
SQL> ho rman target /

Recovery Manager: Release 11.1.0.7.0 - Production on Sat Jul 28 19:18:08 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: SOURCE (DBID=2908208036, not open)

RMAN> backup as copy database format '+DATA';

Starting backup at 28-07-2012 19:18:28
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=147 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/home/oracle/SOURCE/system01.dbf
output file name=+DATA/source/datafile/system.263.789851909 tag=TAG20120728T191829 RECID=1 
STAMP=789851918
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=/home/oracle/SOURCE/sysaux01.dbf
output file name=+DATA/source/datafile/sysaux.264.789851925 tag=TAG20120728T191829 RECID=2 
STAMP=789851932
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/home/oracle/SOURCE/undotbs01.dbf
output file name=+DATA/source/datafile/undotbs1.268.789851939 tag=TAG20120728T191829 RECID=3
STAMP=789851939
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=+DATA/source/controlfile/backup.269.789851941 tag=TAG20120728T191829 
RECID=4 STAMP=789851941
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/home/oracle/SOURCE/users01.dbf
output file name=+DATA/source/datafile/users.270.789851943 tag=TAG20120728T191829 RECID=5 
STAMP=789851942
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 28-07-2012 19:19:03
channel ORA_DISK_1: finished piece 1 at 28-07-2012 19:19:04
piece handle=+DATA/source/backupset/2012_07_28/nnsnf0_tag20120728t191829_0.271.789851943 
tag=TAG20120728T191829 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 28-07-2012 19:19:04

RMAN>

Viewing the created Datafiles
 
ASMCMD> cd  DATAFILE/
ASMCMD>
ASMCMD> ls -lt
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   JUL 28 19:00:00  Y    USERS.270.789851943
DATAFILE  UNPROT  COARSE   JUL 28 19:00:00  Y    UNDOTBS1.268.789851939
DATAFILE  UNPROT  COARSE   JUL 28 19:00:00  Y    SYSTEM.263.789851909
DATAFILE  UNPROT  COARSE   JUL 28 19:00:00  Y    SYSAUX.264.789851925
ASMCMD>

RMAN report schema before applying “Switch Database to copy”
 
RMAN> report schema;

Report of database schema for database with db_unique_name SOURCE

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    700      SYSTEM               ***     /home/oracle/SOURCE/system01.dbf
2    550      SYSAUX               ***     /home/oracle/SOURCE/sysaux01.dbf
3    30       UNDOTBS1             ***     /home/oracle/SOURCE/undotbs01.dbf
4    5        USERS                ***     /home/oracle/SOURCE/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /home/oracle/SOURCE/temp01.dbf

RMAN>

“Switch Database to copy”
 
RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DATA/source/datafile/system.263.789851909"
datafile 2 switched to datafile copy "+DATA/source/datafile/sysaux.264.789851925"
datafile 3 switched to datafile copy "+DATA/source/datafile/undotbs1.268.789851939"
datafile 4 switched to datafile copy "+DATA/source/datafile/users.270.789851943"

RMAN>

RMAN report schema after applying “Switch Database to copy”
 
RMAN> report schema;

Report of database schema for database with db_unique_name SOURCE

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    700      SYSTEM               ***     +DATA/source/datafile/system.263.789851909
2    550      SYSAUX               ***     +DATA/source/datafile/sysaux.264.789851925
3    30       UNDOTBS1             ***     +DATA/source/datafile/undotbs1.268.789851939
4    5        USERS                ***     +DATA/source/datafile/users.270.789851943

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /home/oracle/SOURCE/temp01.dbf

RMAN>

Creating the Redo Log directory
 
ASMCMD> pwd
+DATA/source
ASMCMD>
ASMCMD> mkdir ONLINELOG
ASMCMD>
ASMCMD>

Replacing and/or changing the Redo Logs
This database originally has three Redo Log groups ( on filesystem ). The objective is to create redo log groups hosted in ASM. There are several techniques for this task. For the present case we will follow the “No Zero Downtime” philosophy: under it, we have no restrictions on stopping and starting the database until the task is complete. If we were working under the “Zero Downtime” philosophy, we would apply techniques that create the new redo log groups with the database in “open” mode.
Database state: “open”
Because we are working on a “Single Instance”, it is not necessary to include the “thread” attribute. If this technique were applied to a RAC database, the “thread” parameter would be set to define which “thread” each Redo Log group belongs to ( thread=1/thread=2, etc. )
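For reference, in the RAC case the equivalent additions would carry the thread clause; a sketch with hypothetical group numbers and paths:

```sql
-- Sketch only (RAC case; group numbers and paths are hypothetical):
-- each new redo log group is bound to the instance thread it serves.
SQL> ALTER DATABASE ADD LOGFILE THREAD 1
  2  GROUP 6 ('+DATA/source/ONLINELOG/redo06.log') SIZE 50M;

SQL> ALTER DATABASE ADD LOGFILE THREAD 2
  2  GROUP 7 ('+DATA/source/ONLINELOG/redo07.log') SIZE 50M;
```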
Adding Redo Log groups 4 and 5:
SQL> ALTER DATABASE
  2  ADD LOGFILE GROUP 4 ('+DATA/source/ONLINELOG/redo04.log') SIZE 50M;

Database altered.

SQL> ALTER DATABASE
  2  ADD LOGFILE GROUP 5 ('+DATA/source/ONLINELOG/redo05.log') SIZE 50M;

Database altered.

Viewing the Redo Log groups after the additions:
SQL> select GROUP#, MEMBER from V$LOGFILE;

    GROUP# MEMBER
---------- --------------------------------------------------
         3 /home/oracle/SOURCE/redo03.log
         2 /home/oracle/SOURCE/redo02.log
         1 /home/oracle/SOURCE/redo01.log
         4 +DATA/source/onlinelog/redo04.log
         5 +DATA/source/onlinelog/redo05.log

SQL>

Dropping Redo Log group 1:
SQL> alter database drop logfile group 1;

Database altered.

Attempting to drop Redo Log group 2. This group cannot be dropped yet, because it is the current redo log group for the instance:
SQL> alter database drop logfile group 2;
alter database drop logfile group 2
*
ERROR at line 1:
ORA-01623: log 2 is current log for instance SOURCE (thread 1) - cannot drop
ORA-00312: online log 2 thread 1: '/home/oracle/SOURCE/redo02.log'

Dropping Redo Log group 3:
 
SQL> alter database drop logfile group 3;

Database altered.

SQL>

Viewing the Redo Log groups after the removals:
 
SQL> select GROUP#, MEMBER from V$LOGFILE;

    GROUP# MEMBER
---------- --------------------------------------------------
         2 /home/oracle/SOURCE/redo02.log
         4 +DATA/source/onlinelog/redo04.log
         5 +DATA/source/onlinelog/redo05.log

SQL>

Their status: as the query below shows, Redo Log group 2 has “status” CURRENT, and with that status it cannot be dropped. We must issue a few “switch logfile” commands so that it reaches “status” INACTIVE and can be dropped ( this applies while the database is open ); if the database is closed, it is enough for the “current” redo log group to be 4 or 5:
 
SQL> select GROUP#, STATUS, ARCHIVED from v$log;

    GROUP# STATUS           ARC
---------- ---------------- ---
         2 CURRENT          NO
         4 UNUSED           YES
         5 UNUSED           YES

SQL>

We perform the corresponding switches:
 
SQL> alter system switch logfile;

System altered.

SQL> select GROUP#, STATUS, ARCHIVED from v$log;

    GROUP# STATUS           ARC
---------- ---------------- ---
         2 ACTIVE           NO
         4 CURRENT          NO
         5 UNUSED           YES


SQL> alter system switch logfile;

System altered.

SQL> select GROUP#, STATUS, ARCHIVED from v$log;

    GROUP# STATUS           ARC
---------- ---------------- ---
         2 ACTIVE           NO
         4 ACTIVE           NO
         5 CURRENT          NO

As intended, the current Redo Log group is now 5. We could just as well have chosen 4; what matters is that it not be group 2, since that is the one we are about to remove. We now shut the database down, bring it to “mount” mode, and perform the final removal of Redo Log group 2:
 
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area  680607744 bytes
Fixed Size                  2162800 bytes
Variable Size             188747664 bytes
Database Buffers          486539264 bytes
Redo Buffers                3158016 bytes
Database mounted.
SQL>

SQL> alter database drop logfile group 2;

Database altered.

SQL> select GROUP#, STATUS, ARCHIVED from v$log;

    GROUP# STATUS           ARC
---------- ---------------- ---
         5 CURRENT          NO
         4 INACTIVE         NO

SQL>

SQL> alter database open;

Database altered.

SQL> alter system switch logfile;

System altered.

SQL> r
  1* alter system switch logfile

System altered.

SQL> r
  1* alter system switch logfile

System altered.

At this point we have achieved the objective: only Redo Log groups in ASM remain in operation:
 
SQL> select GROUP#, STATUS, ARCHIVED from v$log;

    GROUP# STATUS           ARC
---------- ---------------- ---
         4 CURRENT          NO
         5 INACTIVE         NO

SQL>

Viewing the Datafiles & Controlfile in ASM
Summary of the work done so far:
SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/source/datafile/users.270.789851943
+DATA/source/datafile/undotbs1.268.789851939
+DATA/source/datafile/sysaux.264.789851925
+DATA/source/datafile/system.263.789851909

SQL>
SQL> show parameters control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      +DATA/source/controlfiles/cont
                                                 rol01.ctl
SQL>
 
Working on the Tempfiles
At this moment our database has its temporary tablespace TEMP with its tempfile on filesystem. The objective is to place it in ASM. To do so, we proceed as follows:
 
SQL> select file_name from dba_temp_files;

FILE_NAME
--------------------------------------------------------------------------------
/home/oracle/SOURCE/temp01.dbf

SQL>

Creating the new temporary tablespace “temp_tbsp”:
SQL> create temporary tablespace temp_tbsp
  2  tempfile '+DATA' size 100m;

Setting the temporary tablespace “temp_tbsp” as the database default:
 
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_tbsp;

Database altered.

Dropping the temporary tablespace “temp”:
 
SQL> drop tablespace temp including contents and datafiles;

Tablespace dropped.

Objective achieved: a new temporary tablespace stored in ASM:
 
SQL> select file_name from dba_temp_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/source/tempfile/temp_tbsp.275.789854493

SQL>

Changing the Flash Recovery Area location
Set the Flash Recovery Area location to a Diskgroup intended to store backups, archived redo logs, and other database components and elements:
 
SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +FRA
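The commands that point the recovery area at +FRA are not shown above; as a sketch ( the 10G quota is illustrative, not taken from this scenario ), they would be along these lines:

```sql
-- Sketch only: directing the Flash Recovery Area to the +FRA Diskgroup.
-- The quota is illustrative; size it to your backup retention needs.
-- db_recovery_file_dest_size must be set before the destination.
SQL> alter system set db_recovery_file_dest_size=10G scope=both;
SQL> alter system set db_recovery_file_dest='+FRA' scope=both;
```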

Choosing a Location for the Flash Recovery Area
Oracle® Database Backup and Recovery Basics
10g Release 2 (10.2) 

http://docs.oracle.com/cd/B19306_01/backup.102/b14192/setup005.htm
Placing the spfile in an ASM Diskgroup
[oracle@MyjpServer ~]$ rman target /

Recovery Manager: Release 11.1.0.7.0 - Production on Sun Aug 5 23:56:07 2012

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: SOURCE (DBID=2908920228)


RMAN> run { BACKUP AS BACKUPSET SPFILE;
2> RESTORE SPFILE TO '+DATA/SOURCE/spfilesource.ora';
3> }

Starting backup at 05-08-2012 23:57:45
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=140 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 05-08-2012 23:57:46
channel ORA_DISK_1: finished piece 1 at 05-08-2012 23:57:47
piece handle=+FRA/SOURCE/backupset/2012_08_05/o1_mf_nnsnf_TAG20120805T235746_81yq6tkl_.bkp 
tag=TAG20120805T235746 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 05-08-2012 23:57:47

Starting restore at 05-08-2012 23:57:47
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: restoring SPFILE
output file name=+DATA/SOURCE/spfilesource.ora
channel ORA_DISK_1: reading from backup piece +FRA/SOURCE/backupset/2012_08_05/
o1_mf_nnsnf_TAG20120805T235746_81yq6tkl_.bkp
channel ORA_DISK_1: piece handle=+FRA/SOURCE/backupset/2012_08_05/
o1_mf_nnsnf_TAG20120805T235746_81yq6tkl_.bkp tag=TAG20120805T235746
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 05-08-2012 23:57:50

RMAN>

Viewing the spfile generated in the ASM Diskgroup
 
[oracle@MyjpServer ~]$ . oraenv
ORACLE_SID = [TEST] ? +ASM
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/asm1 is /u01/app/oracle
[oracle@MyjpServer ~]$
[oracle@MyjpServer ~]$ asmcmd
ASMCMD>
ASMCMD> cd data
ASMCMD>
ASMCMD> cd source
ASMCMD>
ASMCMD> ls -lt
Type  Redund  Striped  Time  Sys  Name
                             N    ONLINELOG/
                             N    CONTROLFILES/
                             Y    CONTROLFILE/
                             N    spfilesource.ora => +DATA/DB_UNKNOWN/PARAMETERFILE/
                                                      SPFILE.261.790559869

ASMCMD>

Adjusting the initsource.ora file
 
[oracle@MyjpServer ~]$ rm $ORACLE_HOME/dbs/spfileSOURCE.ora
[oracle@MyjpServer ~]$ cd $ORACLE_HOME/dbs/
[oracle@MyjpServer ~]$ echo "SPFILE=+DATA/SOURCE/PARAMETERFILE/spfilesource.ora" > initsource.ora

This way, the instance will first pick up our database’s “Parameter File”, which in turn points to the spfile stored in ASM.
“No Zero Downtime”
This article carried out the relocation of a database from filesystem to ASM. The steps are standard for databases of any size. With this technique the database remains unavailable while the procedures are applied; the larger the database, the longer the “Downtime” for the task, which is why we have called it a relocation under the “No Zero Downtime” philosophy. In the next article we will carry out the same task, but with techniques aimed at a “Zero Downtime” relocation.
Applications and usage
The techniques for relocating and/or hosting the elements of a filesystem database in ASM are applied in situations such as the following:
  • Relocating a “Single Instance” database on filesystem to a “Single Instance” database in ASM
  • Relocating a “Single Instance” database on filesystem to a “RAC” database on ASM/OCFS/Certified NFS
  • Keeping a copy of the database in ASM or in another location for fast recovery of datafiles
  • Keeping a copy of the database in ASM or in another location for fast recovery of the entire database
  • And many other cases



Joel is a DBA expert with more than 12 years of experience, specializing in database areas with particular emphasis on high-availability solutions (RAC, Data Guard, and others). He is a regular speaker at Oracle events such as the OTN LAD Tour. In 2003 he became the first Latin American to be named an “OTN Expert”, and he is an Oracle ACE Director.
Source: http://www.oracle.com/technetwork/es/articles/sql/migrar-base-de-datos-a-asm-con-rman-1887534-esa.html

Using Oracle Enterprise Manager Cloud Control 12c with Filer Snapshotting



by Porus Homi Havewala
December 2012

Introduction

Network Appliance's network-attached storage (NAS) filers have gained popularity in the corporate world because of their relative cost efficiency compared to Storage Area Networks (SANs). In NAS, servers are connected to the storage system by a standard Ethernet network, and they use standard file access protocols such as NFS and CIFS in order to make requests to the storage.
Oracle Databases are in production all over the world on NAS filers, including databases inside Oracle itself. Where EMC SANs use the concept of a Business Continuance Volume (BCV) to make a copy of an entire SAN volume (with the BCV utilizing as much expensive SAN space as the original volume), filers rely on a different backup paradigm, known as a snapshot.
Network Appliance's snapshot technology allows the creation of read-only copies of an entire file system. Such snapshots can be taken at any time and at predetermined intervals. This allows easy reverting to a file or file system in seconds. Another benefit is that snapshots also take up a minimal subset of disk space as compared to the original volume. The amount of space contained in a snapshot is calculated by the number of 4K data blocks that have been changed in the active file system, but are still held in a snapshot.
Snapshot restore software is then used to recover data, if required. A single file or an entire volume can be recovered using stored snapshot copies.
The alternative to snapshotting would be to use Oracle's Recovery Manager (RMAN), a technology that has evolved over the years and now offers advanced, mainframe-like features such as block-level recovery, backup set compression, and the ability to back up a database without placing it in the special backup mode. RMAN thus places less stress on redo log generation, whereas snapshotting still requires the old method of placing the entire database in backup mode.
Therefore, RMAN as a backup technology is clearly superior in the database technology sense of the word. However, the fact remains that in organizations with large filer implementations, the snapshot technology is normally agreed upon (by management) as the primary backup method for Oracle Databases, even if this means placing databases in backup mode. This may be due to the business compulsion to make use of the available filer technology for snapshotting.
The benefit is that the snapshotting takes place in the background and in minimal time, so the databases need not stay in backup mode for long. Also, for large databases above 200GB or 300GB in size, it makes sense to use the snapshotting method to save on the space required for backup. For databases smaller than 200GB, the RMAN-to-disk backup method can be used, providing faster recoverability and the ability to switch the database over to the RMAN backup copy in an emergency. Furthermore, RMAN features such as block-level recovery and compression can be used.
Oracle Enterprise Manager Cloud Control 12c is Oracle's premium enterprise management software for managing the entire Oracle technology stack—Oracle Databases, Middleware, Applications, Servers and Storage—across the enterprise. Highly scalable, Oracle Enterprise Manager Cloud Control 12c is typically used in large organizations that need a central repository of Oracle installations. Such central repositories are used for management purposes as well as for day-to-day administration and management activities by the database administrator (DBA) teams, as well as infrastructure, middleware and application teams.
Oracle Cloud Control has an excellent facility for creating and scheduling RMAN backups of the database using the GUI of the management console. The DBA can log in to the Oracle Cloud Control console, go to the Database target, and select Availability > Backup & Recovery from the drop-down menu, where the Backup Settings and Schedule Backup links are visible. Using these links, it is possible to set up an RMAN backup of any database under the DBA's control in under 10 minutes, running as a scheduled job in Oracle Enterprise Manager, without writing or installing any shell scripts at the OS level.
Before this facility was available, UNIX shell scripts had to be written, tested, modified for each new database or server, and then installed and tested on each server, an activity that would take hours. Since the current push of many organizations is away from manual scripting, due to the high costs involved in maintaining such scripts and the costs in training new staff, this facility provided by Oracle Enterprise Manager was enthusiastically welcomed.
Support of snapshotting is important for organizations using snapshotting technology as the primary method of backing up large Oracle Databases. In this article, I will explain how to use Oracle Cloud Control to schedule snapshot backups, and as a plus point, to interface with RMAN. (Note however that this interface is only for the purpose of cataloging the snapshot copies in the database control file or the RMAN repository if used; no block-level recovery is possible.)

Background

Before starting this setup procedure, please note that the database volumes being used by the database should be set up without any scheduled snapshots at the filer level, in case the filer administrators have scheduled snapshots every hour along with nightly snapshots. Such hourly and nightly snapshots are a waste of space and should be removed, because snapshots of database files are considered for the most part useless (except in certain circumstances) if the database tablespaces have not been placed in backup mode. Oracle Cloud Control itself, via script, will be responsible for logging in remotely to the filer, and for creating the database snapshots at the filer level.
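As an illustration, if the filer is a NetApp system running Data ONTAP 7-Mode (which the snap create/snap delete syntax used later in the script suggests), the filer administrator could disable the automatic snapshot schedule on the database volume with a command along these lines (the volume name is the example used later in the script):

```
snap sched mydbp_vol1 0 0 0
```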
At the central Oracle Cloud Control level (we'll call it the EMCENTRAL site), we'll create a snapshot script to interact with the database and filer, and then create four snapshots per database to occur each day at six-hour intervals, as follows:
  • Snapshot 0 at 24:00 hours
  • Snapshot 1 at 06:00 hours
  • Snapshot 2 at 12:00 hours
  • Snapshot 3 at 18:00 hours
These snapshot copies will be recycled each day. Each snapshot will be deleted and then recreated at the appropriate time via a simple UNIX shell script placed on the database server, or in the case of an Oracle RAC cluster, on each node in the same location. It is also possible to place the script directly in Oracle Cloud Control, thus avoiding any script at the UNIX level.
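The daily recycling works because each snapshot name encodes only the hour of day, so the next day's run at the same hour targets the same name. A minimal sketch of the naming logic the supplied script uses (DBNAME is the example name from the script):

```shell
#!/bin/ksh
DBNAME="MYDBP"
# Encode only the hour (00, 06, 12 or 18 when run on schedule) so that
# tomorrow's run at the same hour deletes and recreates this snapshot.
SNAPNAME="db_hot_`date '+%H'`_${DBNAME}"
echo "$SNAPNAME"
```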
For the purpose of filer capacity calculation, we have assumed that roughly 10 percent of blocks will change between the six-hour snapshots; but this will depend on application activity. (The amount of space contained in a snapshot is calculated by the number of 4K data blocks that have been changed in the active file system, but are still held in a snapshot.) Thus, if each snapshot takes up 10 percent of the database volume size, and 30 snapshots are stored, this would occupy extra filer capacity of 300 percent of the database volume size. This figure could be more or less, depending on application activity.
There is another storage caveat: If any file is deleted from the filer, the space cannot be reused unless the entire snapshot referring to that file is deleted from the filer. This may lead to an overhead of 10 to 20 percent of extra filer disk space as a requirement for the database volumes.
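The sizing reasoning above reduces to simple arithmetic. The sketch below uses the assumed 10 percent change rate, 30 retained snapshots, and a 15 percent deleted-file overhead; all three figures are assumptions to be adjusted for your own activity profile:

```shell
#!/bin/ksh
CHANGE_PCT=10     # assumed percent of blocks changed between snapshots
SNAPSHOTS=30      # number of snapshots retained on the filer
OVERHEAD_PCT=15   # assumed extra space for deleted-file retention
EXTRA_PCT=$(( CHANGE_PCT * SNAPSHOTS + OVERHEAD_PCT ))
echo "Extra filer capacity needed: ${EXTRA_PCT}% of the database volume size"
```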

Cloud Control Filer Snapshot Setup

We presume that in a large organization with different departments or projects and different DBA teams looking after each department, the central Oracle Cloud Control site would normally have target groups set up by the central database team. These are lists of targets each DBA team should have access to, and include the host, database, listener, and other targets on each server. To handle the target group, a separate administrator, who has appropriate rights over the target group, can be created in Oracle Cloud Control.
Say, for example, that we are using the convention of CoXYZ_DeptABC_P for naming a production target group as well as the special administrator name. We can then proceed as follows:
  1. Log in to the Cloud Control console as the administrator CoXYZ_DeptABC_P .
  2. Navigate to Enterprise..Job..Library, and create a host command job.
  3. Select the target of the job as the host itself. (Note: In the case of an Oracle RAC, the cluster target may appear as down; whereas both agents, both hosts, and both instances may show as up in Oracle Cloud Control. In this case, only the host can be selected as a target and not the cluster. If the cluster is selected, the job is suspended immediately.)
  4. Type in the Job Name as SnapShotRmanJob.
  5. Then type in the following command:
    /bin/ksh /home/oracle/dba/scripts/db_snapshot_backup.sh
  6. Under the Schedule, select a start time of either 12 a.m. or 6 a.m. or 12 p.m. or 6 p.m., and then set this to repeat every six hours.
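For reference, the six-hour schedule set up in step 6 is equivalent to the following crontab entry, should OS-level scheduling ever be needed as a fallback (the script path is the one from step 5):

```
# 00:00, 06:00, 12:00 and 18:00 every day
0 0,6,12,18 * * * /bin/ksh /home/oracle/dba/scripts/db_snapshot_backup.sh
```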
Place the supplied script on both cluster nodes in the same location. In the script, manually change the line "SIDNAME="MYDBP1"" to the correct instance system identifier (SID) for each node, since in an Oracle RAC the SIDs or instances are numbered from 1 to n. Other changes are also required, such as the IP address of the filer and the volume names used by the database on the filer. The script presumes an Oracle RAC database with only two nodes.
The script supplied as a part of this article works in the following manner:
First, a SQL*Plus session generates an RMAN script on the server that uncatalogs the datafile copies contained in the six-hour snapshot taken at the same hour on the preceding day. This first RMAN script is then executed.
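The first generated command file would look roughly like this (a hypothetical rendering; the datafile path and the hour embedded in the snapshot name depend on your environment and the time of the run):

```
#
# DO NOT EDIT THIS FILE - created dynamically by db_snapshot_backup.sh
#
change datafilecopy '/u02/oradata/MYDBP/data/.snapshot/db_hot_06_MYDBP/system01.dbf' uncatalog;
exit;
```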
Second, another RMAN script is generated that will back up the control file of the database. This is also executed.
Third, a final RMAN script is generated that will alter the tablespaces into hot backup mode, log in remotely to the filer, delete the previous six-hour snapshots, again log in remotely to the filer, take a new six-hour snapshot, then alter the tablespaces out of hot backup mode, and finally catalog the datafile copies in the current snapshot. This RMAN script is finally executed.
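Abbreviated to a single tablespace and a single datafile, the third generated command file would look roughly like this (hypothetical; a real run emits one begin/end backup pair per tablespace and one catalog line per datafile):

```
sql 'alter tablespace USERS begin backup';
host 'ssh oracledba@<ip_address> snap delete mydbp_vol1 db_hot_06_MYDBP';
host 'sleep 480';
host 'ssh oracledba@<ip_address> snap create mydbp_vol1 db_hot_06_MYDBP';
sql 'alter tablespace USERS end backup';
sql 'alter system archive log current';
catalog datafilecopy '/u02/oradata/MYDBP/data/.snapshot/db_hot_06_MYDBP/users01.dbf';
exit;
```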
A point to note is that after the snapshot is deleted, a UNIX sleep for some minutes has been forcibly introduced in order to make sure that the snapshot delete has completed, since the snapshot deletion occurs asynchronously in the background. If the deletion has not completed and the UNIX wait is not forced, then the new snapshot creation will fail. This has been noticed at times of heavy filer activity.
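A more robust alternative to the fixed sleep would be to poll the filer until the snapshot disappears from the snap list output. The generic retry helper below is an illustration, not part of the supplied script; the filer command shown in the comment is the one the script already uses:

```shell
#!/bin/ksh
# Retry a check command up to $1 times, $2 seconds apart; return 0 on
# the first success, 1 if every attempt fails.
wait_until() {
  _tries=$1 ; _delay=$2 ; shift 2
  while [ "$_tries" -gt 0 ]; do
    "$@" && return 0
    _tries=$(( _tries - 1 ))
    sleep "$_delay"
  done
  return 1
}

# In the backup script it could be used like this (hypothetical):
#   snap_gone() {
#     ! ssh oracledba@$MYFILER snap list $DBVOL | grep -q "$SNAPNAME"
#   }
#   wait_until 16 30 snap_gone || echo "snap delete still pending"
```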
When the Oracle Enterprise Manager 12c Cloud Control job has been successfully created, you can sit back and enjoy the filer backups that will occur every six hours. But don't forget to check the RMAN logs generated every six hours, to see that everything is working correctly.
#!/bin/ksh
#
#    db_snapshot_backup.sh
#
#    function:    dbbackup using snapshots and cataloging in rman
#    Author:     Porus Homi Havewala
#

# job name 
JOBNAME="$0"
JOB=`basename $JOBNAME`
ERRMSG="
  $JOB: `date '+%H:%M:%S'` invalid parameter:
  $JOB <dbname> 
"

DBNAME="MYDBP"
SIDNAME="MYDBP1"
# Note: change sid manually as per node

# exit code used on failure
ERROR=1

# set environment vars such as Oracle Home etc. 
. envdb.sh $SIDNAME y
if [ $? -ne 0 ]; then
  echo "$JOB: `date '+%H:%M:%S'` Failed to set oracle env vars. Aborting.."
  exit ${ERROR}
fi

BACKUPDIR=/u50/flash_recovery_area/$DBNAME

if [ ! -d $BACKUPDIR ]; then
  echo "$JOB: `date '+%H:%M:%S'` Cannot find $BACKUPDIR directory. Aborting.."
  exit ${ERROR}
fi

# Filer specifics 

# Production Filer Ip and Db volume in Production for Main Database
MYFILER="<ip_address>" 
DBVOL="mydbp_vol1"
DBFILEMOUNTPOINT="u02/oradata/MYDBP/data"
UNDOFILEMOUNTPOINT1="u02/oradata/MYDBP/MYDBP1/undo"
UNDOFILEMOUNTPOINT2="u02/oradata/MYDBP/MYDBP2/undo"

# NLS_LANG is as per db setting
case $DBNAME in
   "MYDBP") NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1 ; export NLS_LANG 
         ;;
esac

# NLS_DATE_FORMAT is as per db setting
NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS' ; export NLS_DATE_FORMAT

# Following NLS settings are as per db setting, exporting to Unix env just in case 
# required
NLS_TIMESTAMP_FORMAT='YYYY-MM-DD HH24:MI:SS.FF3'; export NLS_TIMESTAMP_FORMAT
NLS_TIMESTAMP_TZ_FORMAT='YYYY-MM-DD HH24:MI:SS.FF3'; export NLS_TIMESTAMP_TZ_FORMAT

# copy parameter files for the Sid to the Filer so that a copy is included in the 
# snapshot
# This is not needed since in MyDbP Rac the spfile is on shared storage
# Therefore the following lines are commented

# if [ -f $ORACLE_BASE/admin/${ORACLE_SID}/pfile/init${ORACLE_SID}.ora ]; then
#   cp $ORACLE_BASE/admin/${ORACLE_SID}/pfile/init${ORACLE_SID}.ora \
#     /${DBFILEMOUNTPOINT}/oradata/${ORACLE_SID}/init${ORACLE_SID}_backup.ora
# fi
# if [ -f $ORACLE_HOME/dbs/spfile${ORACLE_SID}.ora ]; then
#   cp $ORACLE_HOME/dbs/spfile${ORACLE_SID}.ora \
#     /${DBFILEMOUNTPOINT}/oradata/${ORACLE_SID}/spfile${ORACLE_SID}_backup.ora
# fi

SCRIPT=`basename $0`
LOGDIR="$HOME/dba/logs"
CMDFILE1="$HOME/dba/scripts/snap_${DBNAME}_1.cmd"
CMDFILE2="$HOME/dba/scripts/snap_${DBNAME}_2.cmd"
CMDFILE3="$HOME/dba/scripts/snap_${DBNAME}_3.cmd"

NODENAME=`hostname`
# also include node name
LOGFILE1="${LOGDIR}/${SCRIPT}_${NODENAME}_${DBNAME}_`date '+%d%m%y-%H:%M'`_1.log"
LOGFILE2="${LOGDIR}/${SCRIPT}_${NODENAME}_${DBNAME}_`date '+%d%m%y-%H:%M'`_2.log"
LOGFILE3="${LOGDIR}/${SCRIPT}_${NODENAME}_${DBNAME}_`date '+%d%m%y-%H:%M'`_3.log"
LOGFILE4="${LOGDIR}/${SCRIPT}_${NODENAME}_${DBNAME}_`date '+%d%m%y-%H:%M'`_4.log"

sqlplus -S / <<EOF 
/* sql script to generate First RMAN script that will first uncatalog 
  datafile copies in the previous 6 hourly snapshot
*/
set feedback off
set heading off
set termout off
set linesize 1000
set pagesize 0
set trimspool on
set verify off
spool $CMDFILE1
select ' #
 # DO NOT EDIT THIS FILE - created dynamically by db_snapshot_backup.sh
 #' from dual;
select ' #
 # It is assumed that any datafile that is part of the database is already
 # part of the previous 6 hourly snapshot, so "uncatalog" that 
 # datafilecopy in RMAN before delete the 6 hourly snapshot.
 # Note that cannot "delete" the datafilecopy (as opposed to uncatalog)
 # because RMAN will try to delete the actual datafilecopy in the 
 # snapshot directory. This operation will fail since snapshots are read-only.
 #' from dual;
select distinct 'change datafilecopy ''' || name || ''' uncatalog;' from 
v\$datafile_copy
 where name like '%db_hot_' || to_char(sysdate,'HH24') || '%';
select 'exit;' from dual;
spool off;
exit;
EOF

rman nocatalog target=/ cmdfile=$CMDFILE1 log=$LOGFILE1
if [ $? -ne 0 ]; then
  echo "$JOB: `date '+%H:%M:%S'` Uncataloging failed, see $LOGFILE1. Proceeding with next steps.."
fi

sqlplus -S / << EOF 
/* sql script to generate Second RMAN script that will backup the controlfile
*/
set feedback off
set heading off
set termout off
set linesize 1000
set pagesize 0
set trimspool on
set verify off
spool $CMDFILE2
select ' #
 # DO NOT EDIT THIS FILE - created dynamically by db_snapshot_backup.sh
 #' from dual;
select ' #
 # put a copy of the controlfile onto the Filer so that it becomes
 # part of the snapshot, also backup the controlfile to trace
 #' from dual; 
select 'sql "alter database backup controlfile to ''''/${DBFILEMOUNTPOINT}/' || 
NAME || '_controlfile_backup.ctrl'''' reuse";' from v\$database;
select 'sql ''alter database backup controlfile to trace'';' from dual;
select 'exit;' from dual;
spool off;
exit;
EOF

rman nocatalog target=/ cmdfile=$CMDFILE2 log=$LOGFILE2
if [ $? -ne 0 ]; then
  echo "$JOB: `date '+%H:%M:%S'` Backup of controlfile failed, see $LOGFILE2. Proceeding with next steps.."
fi

sqlplus -S / << EOF 
/* sql script to generate Third RMAN script that will alter the tablespaces 
  into hot backup mode, delete the previous 6 hourly snapshot, 
  take a new 6 hourly snapshot, alter the tablespaces out 
  of hot backup mode and catalog the datafile copies in the current snapshot 
*/
set feedback off
set heading off
set termout off
set linesize 1000
set pagesize 0
set trimspool on
set verify off
spool $CMDFILE3
select ' #
 # DO NOT EDIT THIS FILE - created dynamically by db_snapshot_backup.sh
 #' from dual;
select ' #
 # bring existing backup mode tablespaces out of backup mode if any
 #' from dual; 
select 'sql ''alter tablespace ' || name || ' end backup'';' from V\$TABLESPACE 
where ts# in (select ts# from v\$datafile where file# in (select file# from v\$backup 
where status='ACTIVE') minus select ts# from v\$tempfile);
select ' #
 # Alter the tablespaces into backup mode
 #' from dual; 
select 'sql ''alter tablespace ' || name || ' begin backup'';' from V\$TABLESPACE 
where ts# not in (select ts# from v\$tempfile);
select ' #
 # Delete the previous 6 hourly snapshot 
 #' from dual; 
select 'host ''ssh oracledba@${MYFILER} snap delete ${DBVOL} db_hot_' || 
to_char(sysdate,'HH24') ||'_${DBNAME} | tee -a $LOGFILE4 ' || ''';' from dual;
select 'host ''sleep 480'';' from dual;
-- The sleep is to make sure the snap delete has completed, since it
-- occurs in the background
select ' #
 # create a new 6 hourly snapshot which will replace the one just deleted
 #' from dual; 
select 'host ''ssh oracledba@${MYFILER} snap create ${DBVOL} db_hot_' || 
to_char(sysdate,'HH24') ||'_${DBNAME} | tee -a $LOGFILE4 ' || ''';' from dual;
select ' #
 # alter the tablespaces out of backup mode
 #' from dual; 
select 'sql ''alter tablespace ' || name || ' end backup'';' from V\$TABLESPACE 
where ts# not in (select ts# from v\$tempfile);
select 'sql ''alter system archive log current'';' from dual;
select 'host ''sleep 480'';' from dual;
select ' #
 # catalog the datafile copies that are part of the 6 hourly snapshot created
 #' from dual; 
select 'catalog datafilecopy ''/${DBFILEMOUNTPOINT}/.snapshot/db_hot_' || 
to_char(sysdate,'HH24') ||'_${DBNAME}' || substr(name,24) || 
''';' from v\$datafile where ts# not in (select ts# from v\$tablespace 
where name like 'UNDO%');
-- the substr(name,24) is to remove the /${DBFILEMOUNTPOINT} which is in the 
-- datafile but not in the snapshot file below the snapshot subdirectory

select 'catalog datafilecopy ''/${UNDOFILEMOUNTPOINT1}/.snapshot/db_hot_' || 
to_char(sysdate,'HH24') ||'_${DBNAME}' || substr(name,31) || 
''';' from v\$datafile where ts# in (select ts# from v\$tablespace where name in 
(Select value From SYS.GV_\$parameter where name = 'undo_tablespace' and inst_id = 1));
-- the substr(name,31) is to remove the /${UNDOFILEMOUNTPOINT1} which is in the datafile 
-- but not in the snapshot file below the snapshot subdirectory

select 'catalog datafilecopy ''/${UNDOFILEMOUNTPOINT2}/.snapshot/db_hot_' || 
to_char(sysdate,'HH24') ||'_${DBNAME}' || substr(name,31) || ''';' from v\$datafile 
where ts# in (select ts# from v\$tablespace where name in (Select value 
From SYS.GV_\$parameter where name = 'undo_tablespace' and inst_id = 2));
-- the substr(name,31) is to remove the /${UNDOFILEMOUNTPOINT2} which is in the 
-- datafile but not in the snapshot file below the snapshot subdirectory

select 'exit;' from dual;
spool off;
exit;
EOF

rman nocatalog target=/ cmdfile=$CMDFILE3 log=$LOGFILE3
if [ $? -ne 0 ]; then
  echo "$JOB: `date '+%H:%M:%S'` rman failed, see $LOGFILE3. Aborting.."
  exit ${ERROR}
fi

echo "$JOB: `date '+%H:%M:%S'` $ORACLE_SID snap backup completed"

# end db_snapshot_backup.sh

About the Author

Porus Homi Havewala is the Senior Manager in the Enterprise Technology Program Office of Oracle Singapore, and is the ASEAN regional SME and business development lead on Oracle Enterprise Manager technology. A Double Oracle Certified Master (OCM) in Oracle 10g and Oracle 11g, Porus has more than 25 years of experience in the IT industry, including more than 18 years of experience using Oracle technologies. He is the author of two books, Oracle Enterprise Manager Cloud Control 12c: Managing Data Center Chaos (2012, Packt Publishing) and Oracle Enterprise Manager Grid Control: Advanced OEM Techniques for the Real World (2010, Rampant TechPress), and the author of the Oracle Enterprise Manager Cloud Control 12c blog.

Source: http://www.oracle.com/technetwork/articles/oem/havewala-filer-oem12c-1883280.html