Friday, May 28, 2021

Online Redo Log Files Management

To check the current Online Redo Log file information:

set linesize 200 pagesize 30
select group#,thread#,sequence#,bytes/1024/1024/1024 size_gb,
members,archived,status
from v$log
order by group#;

set linesize 200 pagesize 30
col member for a50
select group#,type,member from v$logfile order by group#;

col member for a50
select a.group#, thread#,sequence#, bytes/1024/1024/1024 size_gb, members, 
a.status, type, member
from v$log a,v$logfile b
where a.group# = b.group#
order by a.group#;

Adding a redo log group to the database

alter database add logfile
group 4 '/u01/prod/redo04a.log'
size 50m;

Dropping a redo log group from the database

col member for a50
select group#, member from v$logfile order by group#;

alter database drop logfile group 4;

Adding redo log member to an existing group

alter database
add logfile member '/u01/prod/redo04b.log'
to group 4;

Dropping a redo log member from the database

column member format a50
select group#, member from v$logfile order by group#;

alter database drop logfile member '/u01/prod/redo04b.log';

select group#, member from v$logfile order by group#;


Resizing redo log groups (create new ones and delete the old ones after a few log switches)

select group#, status, bytes/1024/1024/1024 size_gb from v$log order by group#;

alter database add logfile group 4 '/u01/prod/redo04.log' size 100m;

alter database add logfile group 5 '/u01/prod/redo05.log' size 100m;

alter database add logfile group 6 '/u01/prod/redo06.log' size 100m;


alter system switch logfile;

/

/

/

(In SQL*Plus, / re-executes the previous statement; switch a few times so the new groups become CURRENT and the old ones go INACTIVE.)

select group#, members, status from v$log order by group#;

Now drop the old ones:

alter database drop logfile group 1;

alter database drop logfile group 2;

alter database drop logfile group 3;

select group#, members, status, bytes from v$log order by group#;

Now all redo log groups will be 100M in size.


https://docs.oracle.com/cd/A57673_01/DOC/server/doc/SAD73/ch5.htm#:~:text=To%20drop%20specific%20inactive%20online,DROP%20LOGFILE%20MEMBER%20'log3c'%3B

Oracle Performance Tuning

Statistics

The Oracle Optimizer examines all of the possible plans for a SQL statement and picks the one with the lowest cost, where cost represents the estimated resource usage for a given plan. In order for the optimizer to accurately determine the cost for an execution plan it must have information about all of the objects (table and indexes) accessed in the SQL statement as well as information about the system on which the SQL statement will be run.

This necessary information is commonly referred to as optimizer statistics. Understanding and managing optimizer statistics is key to optimal SQL execution. Knowing when and how to gather statistics in a timely manner is critical to maintaining acceptable performance.
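The cost-based selection described above can be sketched as a toy model. This is purely illustrative (the plan names and cost numbers below are made up); the real optimizer derives costs from object and system statistics:

```python
# Toy model of cost-based plan selection (illustrative only; not Oracle code).
# Hypothetical candidate plans with made-up estimated costs.
candidate_plans = {
    "full_table_scan": 1200.0,
    "index_range_scan": 45.0,
    "index_fast_full_scan": 310.0,
}

def pick_plan(plans):
    """Return the plan name with the lowest estimated cost."""
    return min(plans, key=plans.get)

print(pick_plan(candidate_plans))  # index_range_scan
```

The point is only that "cost" is a single comparable number per plan, which is why stale statistics (wrong costs) lead directly to wrong plan choices.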

» How to gather statistics 
» When to gather statistics 
» Improving the quality of statistics 
» Gathering statistics more quickly 
» When not to gather statistics 
» Gathering other types of statistics

select table_name,num_rows,last_analyzed from dba_tables where owner='SCOTT';

Automatic Statistics Gathering

The Oracle database collects statistics for database objects that are missing statistics or have “stale” (out of date) statistics. This is done by an automatic task that executes during a predefined maintenance window. Oracle internally prioritizes the database objects that require statistics, so that those objects, which most need updated statistics, are processed first. 

The automatic statistics-gathering job uses the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure, which uses the same default parameter values as the other DBMS_STATS.GATHER_*_STATS procedures. The defaults are sufficient in most cases. However, it is occasionally necessary to change the default value of one of the statistics gathering parameters, which can be accomplished by using the DBMS_STATS.SET_*_PREFS procedures. Parameter values should be changed at the smallest scope possible, ideally on a per-object basis. For example, if you want to change the staleness threshold for a specific table, so its statistics are considered stale when only 5% of the rows in the table have changed rather than the default 10%, you can change the STALE_PERCENT table preference for that one table using the DBMS_STATS.SET_TABLE_PREFS procedure. By changing the default value at the smallest scope you limit the number of non-default parameter values that need to be manually managed. For example, here’s how you can change STALE_PERCENT to 5% on the SALES table:

exec dbms_stats.set_table_prefs(user,'SALES','STALE_PERCENT','5');

To check what preferences have been set, you can use the DBMS_STATS.GET_PREFS function. It takes three arguments: the parameter name, the schema name, and the table name:

SQL> select dbms_stats.get_prefs('STALE_PERCENT','SCOTT','EMP') stale_percent from dual;

STALE_PERCENT
--------------------
10


Setting DBMS_STATS Preferences

It is possible to set DBMS_STATS preferences to target specific objects and schemas to modify the behavior of auto statistics gathering where necessary. You can specify a particular non-default parameter value for an individual DBMS_STATS.GATHER_*_STATS command, but the recommended approach is to override the defaults where necessary using “targeted” DBMS_STATS.SET_*_PREFS procedures.

A parameter override can be specified at a table, schema, database, or global level using one of the following procedures (noting that AUTOSTATS_TARGET and CONCURRENT can only be modified at the global level): 

SET_TABLE_PREFS 
SET_SCHEMA_PREFS 
SET_DATABASE_PREFS 
SET_GLOBAL_PREFS 

Traditionally, the most commonly overridden preferences have been ESTIMATE_PERCENT (to control the percentage of rows sampled) and METHOD_OPT (to control histogram creation), but ESTIMATE_PERCENT is now better left at its default, DBMS_STATS.AUTO_SAMPLE_SIZE, which in current releases is both fast and accurate.

The SET_TABLE_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for the specified table only.

The SET_SCHEMA_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for all of the existing tables in the specified schema. This procedure actually calls the SET_TABLE_PREFS procedure for each of the tables in the specified schema. Since it uses SET_TABLE_PREFS, calling this procedure will not affect any new objects created after it has been run. New objects will pick up the GLOBAL preference values for all parameters.

The SET_DATABASE_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for all of the user-defined schemas in the database. This procedure actually calls the SET_TABLE_PREFS procedure for each table in each user-defined schema. Since it uses SET_TABLE_PREFS, this procedure will not affect any new objects created after it has been run. New objects will pick up the GLOBAL preference values for all parameters. It is also possible to include the Oracle owned schemas (sys, system, etc) by setting the ADD_SYS parameter to TRUE.

The SET_GLOBAL_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for any object in the database that does not have an existing table preference. All parameters default to the global setting unless there is a table preference set, or the parameter is explicitly set in the GATHER_*_STATS command. Changes made by this procedure will affect any new objects created after it has been run. New objects will pick up the GLOBAL_PREFS values for all parameters.

The DBMS_STATS.GATHER_*_STATS procedures and the automated statistics gathering task obey the following hierarchy for parameter values: parameter values explicitly set in the command overrule everything else. If the parameter has not been set in the command, Oracle checks for a table-level preference. If there is no table preference set, the GLOBAL preference is used.
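The hierarchy described above (explicit command value, then table preference, then global preference) can be sketched as a small resolver. The table and values here are invented for illustration:

```python
# Sketch of the DBMS_STATS parameter resolution hierarchy (illustrative;
# the preference values below are hypothetical, not read from a database).
GLOBAL_PREFS = {"STALE_PERCENT": 10, "ESTIMATE_PERCENT": "AUTO_SAMPLE_SIZE"}
TABLE_PREFS = {("SCOTT", "SALES"): {"STALE_PERCENT": 5}}  # set via SET_TABLE_PREFS

def resolve(param, owner, table, explicit=None):
    """Explicit command value wins, then the table preference, then global."""
    if explicit is not None:
        return explicit
    table_prefs = TABLE_PREFS.get((owner, table), {})
    if param in table_prefs:
        return table_prefs[param]
    return GLOBAL_PREFS[param]

print(resolve("STALE_PERCENT", "SCOTT", "SALES"))      # 5  (table preference)
print(resolve("STALE_PERCENT", "SCOTT", "EMP"))        # 10 (global default)
print(resolve("STALE_PERCENT", "SCOTT", "SALES", 20))  # 20 (explicit in command)
```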

 

Oracle Database 12c Release 2 includes a new DBMS_STATS preference called PREFERENCE_OVERRIDES_PARAMETER. When this preference is set to TRUE, it allows preference settings to override DBMS_STATS parameter values. For example, if the global preference ESTIMATE_PERCENT is set to DBMS_STATS.AUTO_SAMPLE_SIZE, it means that this best-practice setting will be used even if existing manual statistics gathering procedures use a different parameter setting (for example, a fixed percentage sample size such as 10%).










Ahmad Barakah
===========

9.
Performance Issue Attributes
  • Scope (entire db or part of it, specific functionality, session, user, batch, action etc.)
  • Very slow or hang
  • Permanent or intermittent
  • Is it reproducible
  • Specific Errors :
    ORA-00257: archiver error, connect internal only, until freed
    ORA-04031: unable to allocate n bytes of shared memory (due to small shared pool)

10. Performance Tuning Methodology

    1. Set a measurable target (ask questions to the user)
    2. Discover the symptoms (generate comparison AWR reports; use AWR, ADDM, ASH, V$ views, OS-level tools)
    3. Determine possible causes
    4. Develop a trial solution (follow ADDM recommendations, create an index, etc.)
    5. Test the solution: if the target is hit, end; otherwise go back to step 3 and repeat


11. A Typical System Workload Example

    Number of users    : 100
    Transaction rate   : 1 trx per 5 min per user
    Business hours     : 8 hours on 5 working days (5x8)
    Calculated rate    : 20 trx per min, 9600 transactions daily
    Peak rate          : 120 trx per minute
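The arithmetic above can be checked quickly:

```python
# Verify the workload example: 100 users, 1 transaction per 5 minutes each,
# 8 business hours per day.
users = 100
trx_per_user_per_min = 1 / 5
hours_per_day = 8

rate_per_min = users * trx_per_user_per_min    # transactions per minute
daily = rate_per_min * 60 * hours_per_day      # transactions per day

print(rate_per_min, daily)  # 20.0 9600.0
```

(The peak rate of 120 trx/minute is an observed figure, not derived from these numbers.)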


12. 
    Server Performance Measurement Aspects
    1. CPU
    2. Memory
    3. Storage
    4. Network
    All of the above are physical server resources


14. Performance Tuning Tools
  • v$ views and data dictionary views
  • Automatic Workload Repository (AWR)
  • Statspack reports
  • Automatic Database Diagnostic Monitor (ADDM)
  • Performance tuning advisors
  • alert log file
  • Trace files
  • Enterprise Manager Cloud Control and EM Database Express
Types of Performance Tuning Statistics
  • Cumulative statistics: mostly have no meaning without the time factor 
    • Time model
    • Wait events
  • Metrics: statistic rates
  • Sampled statistics
    • ADDM, AWR, Active Session History (ASH)
Setting Statistics Level
  • STATISTICS_LEVEL controls the level of collecting database and operating system statistics. 
    • BASIC: Most important performance statistics are not collected
    • TYPICAL: All major statistics are collected
    • ALL: Timed operating system statistics and execution plan statistics are added
  • Can be altered at the system and session levels
  • To know which statistics and advisories are affected by this parameter, query the view V$STATISTICS_LEVEL

Section: 6 Using Time Model 

15. 
DB Time

A server process is in one of three states:
        Idle, waiting for a request from the client
        Performing processing (DB CPU) once a request is received from the client, using CPU, RAM, and disk
        Waiting for a resource (DB waits)

DB Time = performing processing + waiting for a resource
DB Time = DB CPU + total DB waits

Time Model: DB Time = SQL + Connect + Java + PL/SQL + RMAN + Parse
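Given values for DB time and DB CPU (in microseconds, as the time model views report them), the waits and the wait percentage follow directly from the identity above. The numbers here are made up:

```python
# Hypothetical time model values, in microseconds (as in V$SYS_TIME_MODEL).
db_time_us = 54_000_000
db_cpu_us = 30_000_000

# DB time = DB CPU + total DB waits, so waits are the difference.
db_waits_us = db_time_us - db_cpu_us
wait_pct = db_waits_us / db_time_us * 100

print(db_waits_us / 1_000_000, round(wait_pct, 2))  # 24.0 44.44
```

A rising wait percentage at a growing user count is the scalability warning sign the query later in this section looks for.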

16. About Time Model 
  • Is a set of statistics that gives an overview of the operation types on which DB time is spent inside the Oracle database
  • Available in: V$SYS_TIME_MODEL and V$SESS_TIME_MODEL
  • DB time represents the time the database spent on executing the user calls (DB CPU and DB waits)
  • Statistics are accumulated since instance startup
  • Benefits:
    • Single indicator on the entire database workload and performance
    • Study the impact of a change on the entire database performance
    • May trigger investigation into performance issues

17. Time Model Hierarchy
DB time
    DB CPU
    connection management call elapsed time
    sequence load elapsed time
    sql execute elapsed time
    parse time elapsed
        hard parse elapsed time
            hard parse (sharing criteria) elapsed time
                hard parse (bind mismatch) elapsed time
        failed parse elapsed time
            failed parse (out of shared memory) elapsed time
    PL/SQL execution elapsed time
    inbound PL/SQL rpc elapsed time
    PL/SQL compilation elapsed time
    repeated bind elapsed time

background elapsed time
    background cpu time
        RMAN cpu time (backup/restore)


Database Time Model Query Example

select stat_name, to_char(value/1000000, '999,999') time_s
from v$sys_time_model
where value <> 0 and stat_name not in ('background elapsed time', 'background cpu time')
order by value desc;

Sessions Time Model Query Example

select s.sid, s.username, t.stat_name, 
round(t.value/1000000,2) "Time (SEC)"
from v$sess_time_model t, v$session s
where t.sid = s.sid and t.stat_name in ('DB time', 'DB CPU')
and s.username is not null
order by t.value desc;

Using Time Model to Measure System Scalability

select to_char(dbtime.value/1000000,'999,999') dbtime,
to_char(dbcpu.value/1000000,'999,999') dbcpu,
to_char((dbtime.value-dbcpu.value)/1000000,'999,999') wait_time,
to_char((dbtime.value-dbcpu.value)/dbtime.value*100,'99.99') || '%' wait_pct,
(select count(*) from v$session where username is not null) users_cnt
 from v$sys_time_model dbtime, v$sys_time_model dbcpu
 where dbtime.stat_name = 'DB time' and dbcpu.stat_name = 'DB CPU';


Historical System Time Model Statistics
  • Can be retrieved from DBA_HIST_SYS_TIME_MODEL
  • Has an advantage over the V$ views: the history survives instance restarts, whereas V$ statistics are reset at startup.

18. Using Time Model

Many useful commands are provided in the scripts folder.

Note: a scripts folder and a PDF file accompany this lesson.


19. Using Time Model


Section 7: Using Instance Activity Statistics

20. Using Instance Activity Statistics 1
About Instance Activity and Wait Events
  • Instance activity statistics:
    • Quantitative measures on the instance activities
    • Their figures (at the system level) can be obtained from V$SYSSTAT
    • Each statistic can belong to one or more of 8 statistic classes
    • Their names are described in V$STATNAME

Instance Activity Statistic Views
  • System:
    • V$SYSSTAT
  • Services:
    • V$SERVICE_STATS
  • Sessions:
    • V$SESSTAT
    • V$MYSTAT
    • Note: Usually linked with V$STATNAME
  • Segment:
    • V$SEGMENT_STATISTICS

21. Using Instance Activity Statistics 2
System Statistic Classes
Class Number    Class Name
-----------------------------------
1               User
2               Redo
4               Enqueue
8               Cache
16              OS
32              Real Application Cluster
64              SQL
128             Debug

  • Can be retrieved from the CLASS column in V$SYSSTAT and V$STATNAME views (V$SESSTAT view does not have it)
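Because CLASS is a bit mask, a statistic belonging to more than one class carries the sum of the class numbers, and decoding it is just a matter of testing each bit:

```python
# Decode the CLASS bit mask found in V$STATNAME / V$SYSSTAT.
CLASSES = {1: "User", 2: "Redo", 4: "Enqueue", 8: "Cache",
           16: "OS", 32: "Real Application Cluster", 64: "SQL", 128: "Debug"}

def decode_class(mask):
    """Return the list of class names encoded in the bit mask."""
    return [name for bit, name in CLASSES.items() if mask & bit]

print(decode_class(33))  # ['User', 'Real Application Cluster'] (1 + 32)
print(decode_class(8))   # ['Cache']
```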

Instance Activity Query Example

select name, class, value from v$sysstat;

Obtaining Activity Statistics of Specific Statements

--(1) obtain the current statistics value from v$mystat:
select s.value into :value1
from v$mystat s, v$statname n
where s.statistic# = n.statistic#
and name = 'CPU used by this session';

--(2) execute the statements:

--(3) obtain the current statistics value from v$mystat:
select s.value into :value2
from v$mystat s, v$statname n
where s.statistic# = n.statistic#
and name = 'CPU used by this session';

--(4) subtract the value obtained in (1) from the value obtained in (3):
:delta := :value2 - :value1;



Section 8: Wait Events
=================

Patching

Types of patches:

1. Interim patch (single patch)
2. Patchset patch (collection of patches)

We install interim patches using the opatch utility, and patchsets using the Oracle Universal Installer (OUI).


Applying Interim Patch

Location of opatch utility = $ORACLE_HOME/OPatch

These patches are released by Oracle on a quarterly basis, in January, April, July and October.

We should not install these patches unless there is a need for them. We can search for the quarterly patch on Google and download it from the Oracle website.

Once you unzip the downloaded file, it will create a folder named after the patch number.

Inside the folder, apart from the patch files, there is a readme.txt.  Go through this file; it tells us which bug the patch is going to fix.

Stop all the services like database, listener and dbconsole etc.

Go to $ORACLE_HOME/OPatch folder

To apply the patch:

$opatch apply <patch id>

To see information about which problems the patch fixes:

$opatch query

To get the installed patches information:

$opatch lsinventory

(This will read the inventory file $ORACLE_HOME/inventory/ContentsXML/comps.xml)

To see the version of opatch utility:

$opatch version

To roll back the patch:

$opatch rollback -id <patch id>

$opatch rollback -id 1234567

To restore a failed patch:

$cd $ORACLE_HOME/.patch_storage/<patchid-timestamp>

$./restore.sh

Wednesday, May 26, 2021

ADRCI

We must start the adrci command from inside the ADR base.

cd /u01/app/oracle  (this is my ADR base)

adrci (press enter)

adrci> help

The main use of the adrci command-line utility is to view the alert log file and trace files.

show alert

Choose the alert log from the following homes to view:

1: diag/tnslsnr/m1/listener
2: diag/asm/+asm/+ASM
3: diag/rdbms/prod/prod
Q: to quit

Here choose 3 to read alert log file.

3 (press enter)

This will show the entire alert log file.  To quit this alert log file:
:quit

We will come back to the main menu. To quit from here, press Q.

To see the last 20 lines from the alert log file:

show alert -tail 20
This will throw the below error, because we have multiple ADR homes here (ASM, RDBMS, etc.):
DIA-48449: Tail alert can only apply to single ADR home

To clear the screen, we can use the below command:
host "clear"

We need to set a single home to use the tail option. To set the RDBMS home:
set home diag/rdbms/prod/prod

Now we can use the command to see the last 20 lines:
show alert -tail 20

If we only want to see ORA- error messages:
show alert -p "message_text like '%ORA-%'"

If we want to see only the incidents:
show alert -p "message_text like '%incident%'"

To see the list of trace files:
show tracefile

To see a particular trace file from the above list:
show trace /u01/app/oracle/diag/rdbms/prod/prod/trace/prod_vkrm_5427.trc

Tuesday, May 25, 2021

Password File

Default directory for the Oracle password file:
$ORACLE_HOME/dbs/orapw$ORACLE_SID on Unix
%ORACLE_HOME%\database\PWD%ORACLE_SID%.ora on Windows.

To use the password file to log in to the server from a remote machine, we need to set this parameter:

show parameter remote_login_passwordfile;

The value of the above parameter should be:
EXCLUSIVE for a single-instance database.
SHARED for a RAC database.
NONE means no password file.  We can't use the password file to log in to the server as an administrator from a remote machine.  Even if the file exists, Oracle will ignore it.

The listener must be running on the server machine, and the TNS entry must be working from the remote machine to the server.

orapwd is the executable we use to create the password file; it lies inside the $ORACLE_HOME/bin folder.

To see all available options:
orapwd help=y

When we run the below command, Oracle will ask for the SYS user's password.  After the password is given, it adds the SYS user to the password file.  Here we are executing the orapwd command inside the dbs folder; if we run it from elsewhere, we must provide file=path.

orapwd file=orapwprod entries=5 force=y ignorecase=y

We can also give the SYS user's password inside the command itself:

orapwd file=orapwprod password=manager entries=5 force=y ignorecase=y


Generally, we log in to the Oracle DB with OS authentication, like below:

sqlplus / as sysdba

Here '/' means OS credentials.

To login from a remote machine as sysdba we use the below method:

sqlplus sys/manager@toprod as sysdba


To confirm whether the password file is being used or not:

At Source DB:
shutdown immediate;
rename the password file to orapwprod.bak
startup mount;

Keeping the source db in mount mode, now try this from the remote machine:
sqlplus sys/manager@toprod as sysdba
This will throw an error: insufficient privileges

At Source DB:
shutdown immediate;
rename the password file to the original name as orapwprod
startup mount;

From Remote machine:
sqlplus sys/manager@toprod as sysdba
We are able to connect in mount mode.

to confirm
select open_mode, name, host_name from v$database,v$instance;
MOUNTED, PROD, m1.dba.com


Users are added to the password file when they are granted the SYSDBA, SYSOPER, SYSASM, SYSKM, SYSDG or SYSBACKUP privilege.  By default, SYS is the only user that has SYSDBA and SYSOPER privileges.

To check the number of users with SYSDBA and SYSOPER privileges:
select * from v$pwfile_users;


To create a new user 'u1' and grant sysdba privilege, and confirm its addition to the password file:

create user u1 identified by u1;
grant sysdba to u1;

select * from v$pwfile_users;
USERNAME        SYSDB SYSOP SYSAS
------------------------------ ----- ----- -----
SYS                TRUE  TRUE  FALSE
U1                TRUE  FALSE FALSE

We can grant both SYSDBA and SYSOPER roles to a user:
create user u2 identified by u2;
grant sysdba, sysoper to u2;

select * from v$pwfile_users;
USERNAME        SYSDB SYSOP SYSAS
------------------------------ ----- ----- -----
SYS                TRUE  TRUE  FALSE
U1                TRUE  FALSE FALSE
U2                TRUE  TRUE  FALSE

If we delete the orapwprod file, all of the above 3 records will be gone and the password file is effectively empty.  We can't log in as sysdba from a remote machine, but we can still log in as a normal user.

In the password file creation command, entries=2 means the file can store at most 2 privileged users; even though we may try to grant the SYSDBA privilege to 'n' users, grants beyond that limit will fail until the file is recreated with a larger entries value.

As the password file is a binary file, to see the contents inside it at OS level:
strings orapwprod

As we added user 'U1' to the password file, we can login to db from a remote machine using U1 credentials as:

sqlplus u1/u1@toprod as sysdba

Now we can startup or shutdown the db using U1 credentials.

Even though we logged in using U1's credentials, it will still show as the 'SYS' user with the below commands:

select user from dual;
show user;

Saturday, May 22, 2021

Alert Log File

Check Alert Log File Location in Oracle

If the db is up and running:

show parameter dump;
(here look for the parameter background_dump_dest)

select value from v$parameter where name = 'background_dump_dest';
(here look for the path)

Go to the file location and list the last 100 lines:

tail -100f alert_prod.log


If DB is Down

find / -name alert_prod.log 2>/dev/null

Saturday, May 15, 2021

Oracle Managed Files (OMF) Setup

OMF eliminates the need for a DBA to directly manage the operating system files in a database.

After OMF is configured, we can create a database object without specifying the file path.

OMF supports the below objects:
Tablespaces
redo log files
control files
archived logs
block change tracking files
flashback logs
RMAN backups

To enable OMF we must configure the below parameters:
DB_CREATE_FILE_DEST (datafiles, temp files)
DB_CREATE_ONLINE_LOG_DEST_n (redo log and control files)
DB_RECOVERY_FILE_DEST (FRA)

To setup these parameters:

alter system set db_create_file_dest='/u01/prod';

alter system set db_create_online_log_dest_1='/u01/prod';


To set FRA:

alter system set db_recovery_file_dest_size = 10G;

alter system set db_recovery_file_dest = '/u01/fra';

Once the above parameters are set up, we can create a tablespace with this command:

create tablespace test;

(Oracle chooses the size of the datafile, its location, and other parameters automatically.)
The default size of the datafile will be 100MB.
It will be a locally managed tablespace.




Thursday, May 13, 2021

Block Change Tracking

To check the current status of block change tracking:

set linesize 200
col filename for a50
select * from v$block_change_tracking;

select filename from v$block_change_tracking;

To check the same at OS level:

CTWR (Change Tracking Writer) is the background process responsible for tracking the changed blocks.

ps -eaf|grep ctwr

[oracle@m1 bkptrc]$ ps -eaf|grep ctwr
oracle    7809     1  0 11:57 ?        00:00:00 ora_ctwr_prod
oracle    8396  5621  0 11:59 pts/1    00:00:00 grep ctwr

(enabling block change tracking starts a process called ctwr)

To enable the block change tracking file in our selected location:

$mkdir /u01/prod/bkptrc

alter database enable block change tracking using file '/u01/prod/bkptrc/bkptrc.trc';

To enable the block change tracking in database area:

Set the DB_CREATE_FILE_DEST parameter to the database area,
then issue the following SQL statement to enable change tracking:

alter database enable block change tracking;

To confirm the change

$ll /u01/prod/bkptrc

select * from v$block_change_tracking;

If an old file with the same name exists and we want to overwrite it, add reuse at the end:

alter database enable block change tracking using file '/u01/prod/bkptrc/bkptrc.trc' reuse;

To disable block change tracking:

alter database disable block change tracking;

(the file will be deleted and also the ctwr process will be gone)

Moving or relocating the file if we can bounce the database:

select filename from v$block_change_tracking;

shutdown immediate;

At the OS level, move the tracking file to the new location.

startup mount;

alter database rename file '/u01/prod/bkptrc/bkptrc.trc' to '/u01/new/bkptrc.trc';

alter database open;

Moving or relocating the file if we can't restart the database:

In this case we can disable the current file and re-enable tracking with the new location.

alter database disable block change tracking;
alter database enable block change tracking using file '/u01/new/bkptrc.trc';


Thursday, May 6, 2021

FRA Configuration and Maintenance

Creating the Flash Recovery Area

In 10g FRA means Flash Recovery Area.

In 11g FRA means Fast Recovery Area

disable the below 2 parameters if they are already set:

alter system set log_archive_duplex_dest = '';

alter system set log_archive_dest = '';


To enable FRA in a database we need to set below 2 parameters in the same sequence:

db_recovery_file_dest_size

db_recovery_file_dest

First we need to set the size then the location path.

alter system set db_recovery_file_dest_size = 10G;

alter system set db_recovery_file_dest = '/u01/fra';


If we find that log_archive_dest is set, for example:

show parameter log_archive_dest  -->  /u01/prod/arch

then issue the below command:

alter system set log_archive_dest_1 = 'location=/u01/prod/arch' scope=both;

Now set the previous parameter value to null:

alter system set log_archive_dest = '';



Writing Regular RMAN Backups to the FRA

rman target /

backup database;

This will send the backup pieces to the FRA by default.

If we want to store them somewhere else, we need to use channel and format commands.



Freeing FRA Space in an Emergency

We can find messages in the alert log when the FRA is full.

Also, if flashback is enabled and stored in the FRA, we can't open the db when the FRA is full.


We can do the following 3 things to resolve this issue:

1. increase fra size

show parameter db_recovery;

alter system set db_recovery_file_dest_size = 20G;

We need to make sure we have space at the OS level.

The same command can also be used to shrink (reduce the size of) the FRA.


2. remove restore points

select * from v$restore_point;

select name, storage_size from v$restore_point;

If we find any restore points which are taking up space, we can drop them.

drop restore point rp1;


3. disable flashback

select name, open_mode, log_mode, flashback_on from v$database;

shutdown immediate;

startup mount;

alter database flashback off;

alter database open;

select name, open_mode, log_mode, flashback_on from v$database;


We can also delete old backups or obsolete backups and archived log files.

Note: be careful while using "noprompt"

rman target = /

rman target / nocatalog

RMAN> delete noprompt archivelog all;

RMAN> delete noprompt backup of database;

RMAN> delete noprompt copy of database;

Now we can open the database.


Once the space is available we can recreate the things dropped.

create restore point rp1;

(This creates a normal restore point. The system will delete its flashback logs if it needs the space.)


To create guaranteed restore point issue the below command:

create guaranteed restore point rp1;

Using the below command, we can take the entire db back to the above restore point:

flashback database to rp1;



Checking Space Usage in the FRA

select * from v$recovery_file_dest;

This will show one line for the entire FRA.


To see in detail:

select * from v$flash_recovery_area_usage;

Here we get a detailed report for each type of file stored in the FRA.


To know the space used by each type of file we can combine the above 2 views.

select file_type, round(space_limit*percent_space_used/100/1024/1024) used_mb, 
round(space_limit*percent_space_reclaimable/100/1024/1024) reclaimable_mb, 
frau.number_of_files
from v$recovery_file_dest rfd, v$flash_recovery_area_usage frau;

(The percentages in v$flash_recovery_area_usage are percentages of the total FRA size, so they are multiplied by space_limit.)


Expanding or Shrinking the FRA

If the current size is 10GB, we can increase it with the below command.

Make sure there is enough space at the OS level.

alter system set db_recovery_file_dest_size = 12G;


We can also reduce the size with the same command.

Make sure the used space in the FRA is less than 2G first.

alter system set db_recovery_file_dest_size = 2G;


If we reduce the size of the FRA too much and then try to take a backup with the below command, it throws an error:

RMAN> backup as copy tablespace users;

report obsolete;

(the RMAN retention policy is applied by this command)

delete obsolete;

delete noprompt obsolete;


Configuring Archived Redo Logs to Go to FRA

As a best practice we should not send archived redo logs to FRA.

We can use log_archive_dest_1 to send archive logs to another area.

The default location for archived redo log files is $ORACLE_HOME/dbs on Linux and $ORACLE_HOME\database on Windows.

To send it to a different location use the below command:

alter system set log_archive_dest_1 = 'location=/u01/prod/arch';

(default scope is both)


To send the archive log files to FRA:

alter system set log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST';

Here the condition is that log_archive_dest_state_1 must be enabled. By default it's enabled.

alter system set log_archive_dest_state_1 = enable;


To confirm the above changes:

archive log list

Issue the below command and check the file creation inside FRA:

alter system switch logfile;

(If this command fails, it means the FRA setting is not done properly.)


The above method checks the physical presence of the file.

Below, we can check that the record exists in the db:

select name, completion_time from v$archived_log order by completion_time;


Placing a Control File in the FRA

We can set the control_files parameter accordingly:

control_files = ('/u01/prod/control01.ctl','/u02/fra/prod/controlfile/control02.ctl')


Placing Online Redo Log Files in FRA

During database creation we can set the below parameters:

db_create_online_log_dest_1 = '/u01/fra'

db_create_online_log_dest_2 = '/u01/fra'


make sure db_create_file_dest = ''


To confirm the creation:

select member from v$logfile;

select member from v$logfile where group#=1;


Adding a new logfile group with both members in the FRA:

To see the FRA location:

show parameter db_recovery_file_dest;


Below parameters must be null:

show parameter db_create_online_log_dest

show parameter db_create_file_dest


Now add the log file group with proper group number:

alter database add logfile group 4;


As the FRA is already set, this will create a new log file member in the FRA.

To confirm the above create command:

select member from v$logfile where group# = 4;

Also we can check physical existence of the file:

cd /u01/fra/prod/onlinelog

ls -l

The above create command will create a single member in the log group.


To ensure 2 members, we can set the below parameters:

alter system set db_create_online_log_dest_1 = '/u02/fra';

alter system set db_create_online_log_dest_2 = '/u02/fra';


Since group 4 already exists with a single member from the previous step, check and drop it first:

column member format a50
select group#, member from v$logfile order by group#;
select group#, bytes/1024/1024 "mb" from v$log order by group#;

alter database drop logfile group 4;

Now run the create command:

alter database add logfile group 4;

To confirm the creation:

select member from v$logfile where group# = 4;

We can go to the physical FRA location and confirm the two files.


We can use the below command for faster redo log group creation:

alter database add logfile;

It will use the highest existing group number plus one as the new group number.

To confirm the creation:

select group#, member from v$logfile order by group# desc;


Adding a new log group with only one member in the FRA:

For this we need to define two parameters:

1. db_recovery_file_dest (the FRA)

2. db_create_file_dest

alter system set db_create_file_dest = '+DG1';

show parameter db_create_file_dest;

show parameter db_recovery_file_dest;


If the above 2 parameters are set and we run the add logfile command, the group will be created with 2 members: one in db_create_file_dest and one in the FRA.

1. to create a new group

alter database add logfile group 7;


2. check how many members are created and where

select member from v$logfile where group# = 7;



Sending Image Copies to the FRA

This will happen if the below 2 conditions are met:

1. The FRA is configured.

2. The RMAN script does not specify a FORMAT clause in the backup command or channel configuration.

RMAN> backup as copy database;

Image copies will go to the FRA by default.
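To confirm where the image copies landed, list them from RMAN; the output paths should fall under db_recovery_file_dest:

```sql
RMAN> list copy of database;
```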



Deleting Backup Sets from the FRA

Oracle deletes files from the FRA automatically based on space requirements, so there is normally no need to remove them manually. We can, however, delete a specific backup set through RMAN:

rman target = /

list backupset;

delete backupset 62;



Deleting Archived Redo Logs from the FRA

RMAN> list archivelog all;

delete archivelog from logseq = 12 until logseq = 18;



Reinstating a Damaged Datafile from an Image Copy

1. To check the files in the database:

RMAN> report schema;

2. Check for existence of image copies of the damaged datafile:

rman target=/

list copy of datafile 4;

3. Take the damaged datafile offline:

RMAN>  sql 'alter database datafile 4 offline';

4. Now tell db to use image copy of the file as prod file:

RMAN>  switch datafile 4 to copy;

5. Recover the copy to make it consistent with the current state of the database:

RMAN> recover datafile 4;

6. Bring the recovered datafile online:

RMAN> sql 'alter database datafile 4 online';


This is a temporary adjustment. As soon as time permits, we should restore the production file back to its original location and switch to it.


Traditional approach to recovery is:

1. Take the tablespace offline

2. Restore the datafile from rman backup

3. Apply the incremental backups

4. Recover the datafile by applying archived redo logs

5. Bring the tablespace online

Using the switch method instead of a traditional restore and recovery saves a lot of time.



Switching Back from an Image Copy

1. Check datafiles

RMAN> report schema;

Here it shows that datafile 4 is in the FRA.

2. Remove the file at the OS level from its original location. As the database is now using the copy in the FRA, the file in the original location is unused, and removing it has no effect on the database.

$ rm /u01/prod/users01.dbf

3. connect to rman

rman target=/

4. create an image copy of the file, in this case file 4.

Place that image copy in the file's original location:

RMAN>  backup as copy datafile 4 format='/u01/prod/users01.dbf';

5. Take the datafile offline:

RMAN> sql 'alter database datafile 4 offline';

6. Switch the datafile to the copy you just placed in the original location:

RMAN> switch datafile 4 to copy;

7. Recover the datafile to apply any changes made after the image copy in step 4 was created:

RMAN> recover datafile 4;

8. Bring the datafile online:

RMAN> sql 'alter database datafile 4 online';

9. Check the location of the file once again:

RMAN> report schema;

The file is in original location now.

10. As a best practice, we should create a fresh image copy of the file and place it in the FRA:

RMAN>  backup as copy datafile 4;



Backing Up the FRA to Tape

RMAN> run{
allocate channel c1 type sbt_tape;
backup recovery area;}

This run block backs up the entire flash recovery area to tape.



Sizing the Flash Recovery Area

This part comes with experience, but we can estimate a value from the following components:

Sum of size of all datafiles.

Sum of archive log generation per day.

Incremental backup per week.

Control file auto backup.
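As a rough sketch, two of the inputs above can be pulled from the dictionary. The one-day archive window here is an assumption; scale it to your retention policy and add headroom for incremental backups and control file autobackups:

```sql
-- Rough FRA sizing inputs (GB): total datafile size and archive generation
-- over the last 24 hours (window is an assumption; adjust to your retention)
select
  (select sum(bytes)/1024/1024/1024 from v$datafile)        as datafiles_gb,
  (select nvl(sum(blocks*block_size),0)/1024/1024/1024
     from v$archived_log
    where completion_time > sysdate - 1)                    as arch_last_day_gb
from dual;
```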


Tuesday, May 4, 2021

Data Guard Configuration

Primary Database Preparation

Step 1:- Setup Archive log mode

Primary database must be in Archive Log Mode.  This is mandatory.

The below 2 commands help us identify whether the database is in archive log mode or not.

SQL> archive log list;

or

SQL>  select log_mode from v$database;

To put the database in archive log mode do the following:

shutdown immediate;

startup mount;

alter database archivelog;

alter database open;

To confirm the archive log status:

archive log list;

or

select name, log_mode from v$database;


Step 2:- Enable Force Logging

Force Logging must be enabled at primary side.

To check the force logging status at primary side:

select name, force_logging from v$database;

to enable force logging:

alter database force logging;


Step 3:- Standby Redo Logs

Standby Redo Logs (SRLs) at the primary side are optional.  They will be used when we perform a switchover.

SRL files must be the same size as the Online Redo Log (ORL) files, and we need at least one more SRL group than ORL groups: with 3 ORL groups, at least 4 SRL groups must be configured.

To check the count and size of Online Redo Log Files:

select a.group#, a.status, a.bytes/1024/1024 SizeMB, b.member
from v$log a, v$logfile b
where a.group# = b.group#
order by group#;

Here we can see 3 Online Redo Log groups of 10 MB each, so we need to create at least 4 standby redo log groups of 10 MB each.  The advantage of creating these 4 groups at the primary side is that when we use the RMAN duplicate command, it will create the same standby redo logs at the standby side as well.

alter database add standby logfile 
group 11 ('C:\APP\LENOVO\ORADATA\ORCL\standby_redo01.log') 
size 10m;

alter database add standby logfile 
group 12 ('C:\APP\LENOVO\ORADATA\ORCL\standby_redo02.log') 
size 10m;

alter database add standby logfile 
group 13 ('C:\APP\LENOVO\ORADATA\ORCL\standby_redo03.log') 
size 10m;

alter database add standby logfile 
group 14 ('C:\APP\LENOVO\ORADATA\ORCL\standby_redo04.log') 
size 10m;

To check the status for above standby redo log files:

select * from v$standby_log;

To choose important 3 columns from the above dynamic view:

select group#, bytes/1024/1024 "Mb", status from v$standby_log;



Step 4:- Setup FRA (Fast / Flash Recovery Area)


To configure FRA we need to set below 2 dynamic parameters.

show parameter db_recovery_file_dest;

NAME                          TYPE         VALUE
----------------------------  -----------  -----
db_recovery_file_dest         string
db_recovery_file_dest_size    big integer

To set up the above 2 parameters, we need to set the size first and then the location:

alter system set db_recovery_file_dest_size=10g;

alter system set db_recovery_file_dest='C:\app\Lenovo\flash_recovery_area';

show parameter db_recovery_file_dest;

NAME                          TYPE         VALUE
----------------------------  -----------  ---------------------------------
db_recovery_file_dest         string       C:\app\Lenovo\flash_recovery_area
db_recovery_file_dest_size    big integer  10G


Note:- While creating the initStandby.ora file for the standby, we need to set these 2 parameters for the standby FRA.


Step 5:- Some important parameters

db_name 
This is the database name and must be the same in all instances (primary and standby). Maximum length is 8 characters.

db_unique_name
When several databases share the same db_name, this parameter gives each one a unique name. It must be different on the primary and the standby. Maximum length is 30 characters. With this parameter we can create a location-specific alias for each database.

alter system set db_unique_name='hyd' scope=spfile;
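To confirm both names (name and db_unique_name are columns of v$database):

```sql
select name, db_unique_name from v$database;
```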

Parameter           Primary    Physical Standby
-----------------   --------   ----------------
Instance Name       Prod       Std
DB_NAME             Prod       Prod
DB_UNIQUE_NAME      Hyd        Pune
Net Service Name    ToHyd      ToPune

log_archive_config
You can enable or disable sending/receiving redo logs to/from databases.

The defaults are SEND, RECEIVE, NODG_CONFIG.

alter system set log_archive_config= 'DG_CONFIG=(pune,hyd)' scope=both;


log_archive_dest_n
Here n can be from 1 to 31, so we can configure up to 31 destinations for archived redo data.  With this parameter we must also specify either the LOCATION or the SERVICE attribute.


Location and Service
To point this parameter to normal location:
alter system set log_archive_dest_1='LOCATION=/u01/prod/arch';

To point this parameter to Fra:
alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';

To configure Service attribute:
alter system set log_archive_dest_2='SERVICE=topune db_unique_name=pune';


valid_for
valid_for (redo_log_type, database_role)
redo_log_type options :- online_logfile, standby_logfile, all_logfiles
database_role options :- primary_role, standby_role, all_roles
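A hedged example of valid_for on a destination (a sketch only; combine it with the LOCATION or SERVICE attributes shown above as needed):

```sql
-- Archive both online and standby redo logs, whatever role the database holds
alter system set log_archive_dest_1 =
  'LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES)';
```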

Sync and Async
These attributes set the redo transport mode. SYNC is used in Maximum Protection and Maximum Availability modes; ASYNC (the default) is used in Maximum Performance mode.

alter system set log_archive_dest_2='SERVICE=topune LGWR SYNC db_unique_name=pune';

Async is default:-
alter system set log_archive_dest_2='SERVICE=topune LGWR ASYNC db_unique_name=pune';


Affirm and NoAffirm
With SYNC, the default is AFFIRM.

alter system set log_archive_dest_2='SERVICE=topune SYNC AFFIRM DB_UNIQUE_NAME=pune';

select affirm from v$archive_dest where dest_id=2;


Delay
This attribute delays applying the redo at the standby by the specified number of minutes.

alter system set log_archive_dest_2='SERVICE=topune delay=10 
db_unique_name=pune';

select delay_mins, destination from v$archive_dest where dest_id=2;

This attribute is ignored when we use real-time apply.  We can also use the NODELAY option in the managed recovery command.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;


log_archive_dest_state_n
Here n is from 1 to 31.  This is the state of the redo transport for respective destination.  The default value is ENABLE. 

alter system set log_archive_dest_state_2 = 'defer';

show parameter log_archive_dest_state_2;

When configuring an alternate (failover) destination, the value for this parameter will be ALTERNATE.



Parameters to set on Standby database

FAL_SERVER
fal_server=tohyd
tohyd is the net service name of the primary database (FAL = Fetch Archive Log).

STANDBY_FILE_MANAGEMENT
standby_file_management can be AUTO or MANUAL (the default is MANUAL).

alter system set standby_file_management='AUTO';

If this parameter's value is AUTO, datafiles added or dropped on the primary are automatically created or dropped at the standby side.

DB_FILE_NAME_CONVERT

alter system set db_file_name_convert='/u01/prod/hyd','/u01/prod/pune' scope=spfile;