Oracle Autonomous Database: SYSDATE and Time Zones

This post is part of a series of blog posts on the Oracle Autonomous Database.

Please be careful when using the SYSDATE function in Oracle Autonomous Database!

SYSDATE is used extensively in PL/SQL-based applications. What will happen when you migrate your data to the Autonomous Cloud (or build a new application on Autonomous), given that the time zone in an OCI environment is set to Coordinated Universal Time (UTC) by default?

You are allowed to change the database and session time zones in ADB, but this does not change the time zone used by SYSDATE and SYSTIMESTAMP. So PL/SQL packages, procedures and functions, and in particular all SQL using SYSDATE and SYSTIMESTAMP, might not return what you expect. For example, you might need to check what happened during the previous hour (SYSDATE - 1/24) and be surprised: you will get the result for the past hour of server time, which might not be your last hour.

Currently, it is not possible to change the time zone of SYSDATE and SYSTIMESTAMP, since it comes from the operating system. The SYSDATE and SYSTIMESTAMP functions simply perform a system call (“gettimeofday”) to the server operating system to get the time. The server OS (Unix) time zone settings influence the time that the OS passes to Oracle and that is returned by SYSDATE and SYSTIMESTAMP.

So, how can we get around this issue?

The general recommendation is to use CURRENT_DATE and CURRENT_TIMESTAMP, which return the date and time in the session time zone.

In a globally distributed database environment, UTC is the recommended time zone to use.
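The difference is easy to demonstrate. A minimal sketch (the audit_log table and its event_date column are made up for illustration; on ADB the server time zone is UTC):

```sql
-- Change the session time zone; this does NOT affect SYSDATE/SYSTIMESTAMP
ALTER SESSION SET TIME_ZONE = 'Europe/Vienna';

-- SYSDATE follows the server OS time zone (UTC on ADB),
-- CURRENT_DATE follows the session time zone set above
SELECT SYSDATE, CURRENT_DATE FROM DUAL;

-- "Last hour" by server time (UTC), which may not be your last hour:
SELECT * FROM audit_log WHERE event_date > SYSDATE - 1/24;

-- "Last hour" in the session time zone:
SELECT * FROM audit_log WHERE event_date > CURRENT_DATE - 1/24;
```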

– How to change the Time Zone in Oracle Database hosted in OCI with an Example: Doc ID 2459830.1
– Timestamps and time zones – Frequently Asked Questions: Doc ID 340512.1
– How to Change the “Database Time” (SYSDATE and SYSTIMESTAMP) to another Time/Timezone: Doc ID 1988586.1

Oracle Autonomous Database: Using a Standby Database

This post is part of a series of blog posts on the Oracle Autonomous Database.

Autonomous Database uses a feature called Autonomous Data Guard to enable a standby (peer) database to provide data protection and disaster recovery for your Autonomous Database instance.

When you enable Autonomous Data Guard, the system creates a standby database that continuously gets updated with the changes from the primary database.

With Autonomous Data Guard enabled Autonomous Database provides one identical standby database that allows the following, depending on the state of the primary database:

  • If your primary database goes down, Autonomous Data Guard converts the standby database to the primary database with minimal interruption. After failover completes, Autonomous Data Guard creates a new standby database for you.
  • You can perform a switchover operation, where the primary database becomes the standby database, and the standby database becomes the primary database.

Autonomous Database does not provide access to the standby database. You perform all operations, such as scaling up the OCPU Count and enabling Auto Scaling, on the primary database, and Autonomous Data Guard then performs the same actions on the standby database. Likewise, you only perform actions such as stopping or restarting the database on the primary database.

All Autonomous Database features from the primary database are available when the standby instance becomes the primary after the system fails over or after you perform a switchover operation, including the following:

Database Options: The OCPU Count, Storage, Display Name, Database Name, Auto Scaling, Tags, and Licensing options have the same values after a failover to the standby database or after you perform a switchover.

OML Notebooks: Notebooks and users created in the primary database are available in the standby.

APEX Data and Metadata: APEX information created in the primary database is copied to the standby.

ACLs: The Access Control List (ACL) of the primary database is duplicated for the standby.

Private Endpoint: The private endpoint from the primary database applies to the standby.

APIs or Scripts: Any APIs or scripts you use to manage the Autonomous Database continue to work without any changes after a failover operation or after you perform a switchover.

Client Application Connections: Client applications do not need to change their connection strings to connect to the database after a failover to the standby database or after you perform a switchover.

Wallet Based Connections: You can continue using your existing wallets to connect to the database after a failover to the standby database or after you perform a switchover.

Oracle Autonomous Database: Setting up ACLs and Private Endpoints

This post is part of a series of blog posts on the Oracle Autonomous Database.

Securing Application Connections with Access Control Lists (ACLs)

  • No network Access Control List (ACL) is specified by default and the database is accessible from any IP address
  • Specifying an access control list blocks all IP addresses that are not in the list from accessing the database
  • Once an ACL is set, the database will only accept connections from addresses in the access control – all other client connections will be rejected
  • ACLs can be configured for an ADB at any time: while provisioning, or added to an existing ADB
  • ACLs can be configured for individual IP addresses, CIDR block ranges, and VCNs – including simultaneously
  • A database restore does not overwrite existing ACLs
  • All services that are subsets of an ADB (e.g. SQL Developer Web, APEX, ML) are also blocked by ACLs
  • The Autonomous Transaction Processing Service console is not subject to ACLs.

Private Endpoints

Private endpoints provide the ability to block all public access to your ADB database.

It is a private IP address within your VCN that can be used to access a given service within OCI.

Think of it as just another VNIC (Virtual Network Interface Card) in your VCN

  • You can provision an ADB to use private endpoints and configure a Virtual Cloud Network (VCN) in your tenancy to use with the private endpoint.
  • Allow you to keep all traffic to and from your ADB off the public internet
  • Does require you to set up transit routing in your VCN and use a service gateway to connect
  • Can only be configured when you provision a new database or when you clone an existing database

To set up Private Endpoints with ADB:

  • There must be a VCN within the region for the ADB. This cannot be changed after provisioning
  • A private subnet with default DHCP options is required
  • At least one Network Security Group (NSG) within your VCN is required. This can be edited after provisioning

Network Security Group (NSG):

  • NSGs create a virtual firewall for your Autonomous Database using security rules
  • You can specify up to 5 NSGs to control access to your Autonomous Database
  • To use NSG with Private Endpoints set: Ingress / TCP / Port 1522

When you provision or clone an Autonomous Database, you can configure the network access so that the database uses a private endpoint. If your organization has strict security mandates that do not allow you to have a public endpoint for your database, this provides you with the necessary private endpoint. When you use private access, your database is only accessible through the IP address of the associated private endpoint. Additionally, this allows you to configure your access so that the traffic does not use public subnets and allows you to keep all traffic to and from your Autonomous Database off of the public internet within one of your tenancy’s virtual cloud networks (VCNs).

Change from Private to Public Endpoints with Autonomous Database

If your Autonomous Database instance is configured to use a private endpoint you can change the configuration to use a public endpoint.

Note: Changing an instance from a private to a public endpoint on Autonomous Database is only supported with Oracle Database 19c onwards.

After updating the network access type all database users must obtain a new wallet and use the new wallet to access the database.

After the update completes, you can define access control rules for the public endpoint by specifying ACLs.

Oracle Autonomous Database: Service Limits and Compartment Quotas

This post is part of a series of blog posts on the Oracle Autonomous Database.

Service Limits

  • A quota or allowance set on a resource
  • Are set by the service you have subscribed to
  • May change automatically based on your OCI resource usage or by changing your account type. e.g. Always-Free to Paid

Service Limits are quotas or allowances that are set on a specific resource. Service limits are defined and set by Oracle. They’re set by the service you have subscribed to and they can change automatically based on your OCI resource usage or by changing your account type.

The service limit is based on the service you have subscribed to. For example, with the database service, storage is one resource that is controlled for you: an Autonomous Database can be configured with anywhere from 1 to 128 terabytes. That is the type of resource a service limit can restrict.

In addition, what is available to you may change based on your account type. In the Always Free to Paid example: an Autonomous Database in an Always Free account is limited to a maximum of one OCPU. If you convert that account to a paid account, the limit is lifted and up to 128 OCPUs can be configured.

Compartment Quotas

  • Similar to Service Limits. However, whereas Service Limits are set by Oracle, Compartment Quotas are set by customer administrators. Please note: on Dedicated Infrastructure, this is managed by the Fleet administrator
  • Set using Policy Statements, a simple declarative language similar to the IAM policy language

Compartment quotas are very similar to service limits; however, whereas a service limit is set by Oracle, compartment quotas are set by customer administrators. In the dedicated infrastructure world, this is managed by the fleet administrator as one of their roles and responsibilities. Compartment quotas are set using policy statements that are simple, easy to follow, and similar to the IAM policy language.

Looking at service limits and compartment quotas, keep in mind that tenancy limits, quotas, and usage can be viewed in your cloud console. The console may not display information for all OCI services or resources, because it only shows what you have subscribed to. Also keep in mind that usage may be greater than the limit available to you, because the limit may have been reduced after the resource was initially created.

The other thing to keep in mind is that limits are sometimes displayed as zero. If that is the case, the account has been suspended. That is not necessarily a bad thing: the account could be suspended due to inactivity, which simply means you are not being billed for resources that would otherwise be consumed. This happens, for example, in an Always Free account when a database has been idle for an extended period of time.

For example, the console provides a create quota policy stub. Clicking it brings up a panel where you enter the fields. In this case we set the database quota adw-total-storage-tb to a number in our compartment Team. This fixes the limit on the storage that anyone working in this compartment can allocate for their database, and is a way of controlling that resource.

Governance –> Limits, Quotas and Usage

set database quota adw-total-storage-tb to [number] in compartment Team
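A couple more statements in the same policy language, as a sketch (the compartment names are illustrative, and the quota names should be checked against the quota reference for the Database service):

```
zero database quota adw-total-storage-tb in compartment Sandbox
set database quota atp-ocpu-count to 8 in compartment Team
```

The first statement freezes ADW storage allocation in a sandbox compartment; the second caps ATP OCPUs for the Team compartment.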

Oracle Autonomous Database: Schema Advisor

This post is part of a series of blog posts on the Oracle Autonomous Database.

There is a new tool called “Oracle Autonomous Database Schema Advisor” that can help answer questions such as:

  • What issues will I encounter during migration?
  • What objects are incompatible / cannot be migrated to ADB? How do I know?
  • What advice and guidelines can I get prior to the migration to Autonomous Database?

All the details are in MOS Doc ID 2462677.1

First, you need to download and install the advisor. Run the script install_adb_advisor.sql (as SYSDBA) which you download from the MOS note above. The script will create a user, 7 tables, 4 indexes and a package. I have decided to call the user adb_advisor and the password will be XXXXX:

SQL> @install_adb_advisor.sql adb_advisor XXXXX

Index created.
Package created.
Package body created.
No errors.

Next, run the advisor as the user created in step one. You need to specify (1) the database schema(s) and (2) the ADB type you will be using in the Autonomous Cloud. You can list a maximum of 30 schemas in a single advisor run, or just use schemas=>'ALL'. These are the 4 options for the ADB type:

– ATP for Autonomous Transaction Processing (Shared)
– ADW for Autonomous Data Warehouse (Shared)
– ATPD for Autonomous Transaction Processing (Dedicated)
– ADWD for Autonomous Data Warehouse (Dedicated)

Do not try to run the script as SYSDBA; you will get all sorts of errors.

e.g. ADB_ADVISOR.REPORT(schemas=>’ALL’, adb_type=>’ATP’);
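Put together, a full run might look like this (a sketch; the SERVEROUTPUT settings are just for readable output and the schema list is illustrative):

```sql
SET SERVEROUTPUT ON FORMAT WRAPPED
SET LINESIZE 200

-- Raise the row cap first if you expect a long report
EXEC ADB_ADVISOR.setmaxrows(500);

-- Report on two schemas for a migration to ATP (Shared)
EXEC ADB_ADVISOR.REPORT(schemas => 'HR,SH', adb_type => 'ATP');
```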

The Advisor will generate a report with the following information:

– The counts of discovered objects and a summary of migration status

– Objects that cannot be migrated due to the restrictions and lockdowns imposed by the Autonomous Database on certain data types, database options and SQL statements

– The objects that will migrate with modifications that are automatically made during the import process or upon the execution of object creation DDL

– Informational section containing certain best practice recommendations and guidance

Common issues detected by the advisor:

– User-defined objects in the SYS schema. They will not be migrated.
– IOT table cannot be migrated as-is. It will be migrated as a normal, regular table.
– INMEMORY table cannot be migrated as-is. It will be created as a NO INMEMORY table.
– Modified init.ora parameters in your source database that you would not be permitted to alter in ADB. e.g. db_block_checksum, enable_ddl_logging, encrypt_new_tablespaces, local_listener, tablespace_encryption_default_algorithm, tde_configuration, use_large_pages

By default, the output gets truncated when the number of rows exceeds the maximum limit set in the Advisor package. You can reset the number of rows by running the following command prior to running the Advisor.

SQL> exec ADB_ADVISOR.setmaxrows(500);

You have to reset the max rows every time you run the Advisor, as the setting is not saved in the database.

My recommendation is to run the advisor before you migrate your schemas to the Autonomous Cloud. It will minimize your post-migration hassle.

Oracle Autonomous Database: Oracle Machine Learning, APEX, SQL Developer Web

This post is part of a series of blog posts on the Oracle Autonomous Database.

Oracle Machine Learning

Workflow Tasks for using Oracle Machine Learning:

  1. Create Oracle Machine Learning user
  2. Create workspaces
  3. Create projects
  4. Create notebooks
  5. Run SQL scripts/statements

Please note: Oracle Machine Learning users can also be created using SQL Developer.

Oracle Application Express (APEX)

  • APEX is automatically enabled in the autonomous database
  • Access is through the Service Console
  • The initial access is through the “admin” user (in the Internal workspace)

APEX and SQL Developer Web Access Requirements with ADB-D:

  • Must be inside customer VCN
  • Service user Database Admin is automatically granted use of tooling
  • Database users must be enabled to use tooling using ORDS.ENABLE_SCHEMA

BEGIN
  ORDS.ENABLE_SCHEMA(
    p_enabled => TRUE,
    p_schema => 'HRUSER',                -- the database user name
    p_url_mapping_type => 'BASE_PATH',
    p_url_mapping_pattern => 'hr_alias', -- determines URI template for RESTful Services
    p_auto_rest_auth => TRUE);           -- authenticate request before any request can be serviced
END;
/

Oracle Autonomous Database: New Users, Backup, Recovery, Cloning, Scaling

This post is part of a series of blog posts on the Oracle Autonomous Database.

Creating users in Autonomous Database

  • No need to specify anything except the password
  • The password you specify must meet the default complexity rules in ADB
  • “DWROLE” includes all necessary privileges for a DW developer/user
  • By default, the password will not expire for 360 days

SQL> create user ocitest identified by ….;
SQL> grant dwrole to ocitest;

If you specify a weaker or NULL Password Verify Function (PVF) for a new profile, then the default PVF CLOUD_VERIFY_FUNCTION applies. If the verify function you specify is more strict than the default CLOUD_VERIFY_FUNCTION, then the new verify function is used.
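As a sketch, the default verify function can also be attached to a profile explicitly; the profile name dw_profile is made up for illustration:

```sql
-- Profile that applies the ADB default password verify function
CREATE PROFILE dw_profile LIMIT
  PASSWORD_VERIFY_FUNCTION cloud_verify_function;

ALTER USER ocitest PROFILE dw_profile;
```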

Starting, Stopping, and Scaling

Can all be done by clicking through the console or through API calls.


  • Scale your Autonomous Database Instance on demand without tedious manual steps
  • Independently scale compute or storage
  • Resizing occurs instantly, fully online
  • Memory, IO bandwidth, and concurrency scale linearly with CPU

ADB Auto Scaling:

  • Auto Scaling is enabled by default (it can be disabled at any time)
  • ADB autonomously and continuously monitors overall system performance
  • ADB scales CPU and IO resources up and down based on overall workload requirements
  • Scaling up autonomously expands CPU-IO resources by up to 3x the base core count
  • Can be Enabled when provisioning an ADB instance or anytime using Scale Up/Down on the OCI Console
  • Changing the setting does not require database downtime
  • You are only charged for the baseline (i.e. number of OCPUs provisioned)
  • Auto Scaling is billed based on average number of OCPUs consumed during an hour

Starting & Stopping:

  • Stop your instance on demand to conserve resources and halt CPU billing
  • Start your instance instantly on demand

Backup, Restore, and Cloning

Autonomous Database Backups:

  • The ADB is automatically backed up
  • The retention period for backups is 60 days
  • The database can be restored and recovered to any point-in-time within the retention period
  • Recovery can be initiated using the Cloud Console

When the restore operation finishes, the ADB instance opens in read-only mode and the Lifecycle State on the instance details page shows “Available Needs Attention”. This lets you sanity-check the restore. To open the database in read-write mode, click Stop, then Start.

When you restore your database and open it in read-write mode, all backups between the restore time (the point in time you specified for the restore operation) and the time the restore completed are invalidated. After the database is opened in read-write mode following a restore, you cannot initiate further restore operations to any point in time between the restore time and the restore completion time. You can only initiate new restore operations to a point in time older than the restore time or more recent than the time the restore actually succeeded.

For example, assume you initiate a restore operation on Oct 8, 2018, 2 pm and specify Oct 1, 2018, 2 pm as the point in time to restore to and the restore completes on Oct 8, 2018, 2:30 pm. If you open your database in read-write mode at this point, backups between Oct 8, 2018, 2:30 pm and Oct 1, 2018, 2 pm will be invalidated. You will not be able to restore to any date between Oct 1, 2018, 2 pm and Oct 8, 2018, 2:30 pm. If you initiate a restore to a point in time between these dates the restore will fail with an error.

The restore operation also restores the DATA_PUMP_DIR directory and user defined directories to the timestamp you specified for the restore; files that were created after that timestamp would be lost.

When you restore, the Oracle Machine Learning workspaces, projects, and notebooks are not restored.

For external tables, partitioned external tables, and the external partitions of hybrid partitioned tables, a backup does not include the external files that reside in your Object Store. Thus, for operations that use a backup to restore your database, such as Restore or Clone from a backup, it is your responsibility to back up, and restore if necessary, the external files associated with those tables.

Manual Backups:

In addition to automatic backups, Autonomous Database also allows you to take manual backups to your OCI Object Storage.

To perform manual backups you need to:

  • Define OCI Object Storage credentials
  • Define your OCI Object Storage tenancy URL
  • Create a bucket to hold the backups

The manual backup configuration tasks are a one-time operation. Once credentials, object bucket and tenancy URL are set, manual backups can be initiated without requiring the same operations again unless the URL, credentials or bucket change.

When creating the object storage bucket for manual backup:

  • The name of the bucket in object storage must follow the format: backup_<database name>
  • This is the “Database Name” and not the “Display Name”
  • The object bucket name has to be all lower case

After the bucket has been created, connect to the database and perform the following:

  • Set the database default_bucket property to your OCI Object Storage tenancy URL

ALTER DATABASE PROPERTY SET default_bucket = 'https://…/backup_adb_20200505';

Create a credential for your OCI Object Storage account, then set it as the database default:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username => '',          -- your OCI user name
    password => 'auth-token-string');
END;
/

ALTER DATABASE PROPERTY SET default_credential = 'ADMIN.DEF_CRED_NAME';

It is possible to create a new Autonomous Database as a clone of an existing Autonomous Database.

Autonomous Transaction Processing provides cloning where you can choose to create a full clone, create a metadata clone, or create a refreshable clone. You can also use cloning to upgrade your database to a newer Oracle Database version when newer Oracle Database versions are available. You can move an Autonomous Transaction Processing database to a different Oracle Cloud Infrastructure compartment.

Clone Type:

  • Full Clone of the source ADB
  • Metadata Clone of the source ADB
  • Refreshable Clone: creates a read-only full clone that can be easily refreshed with the data from the source database.

For a Full Clone or a Metadata Clone, you have the option to select the clone source:

  • Clone from a live database instance. This creates a clone of a running database.
  • Clone from a backup. This creates a clone when you select a backup from a list of backups, or when you enter a backup point-in-time to clone.

When you perform the clone operation you can select a newer Oracle Database version for the target database. This allows you to use cloning to upgrade your Autonomous Transaction Processing database.

With clone from backup, the Oracle Machine Learning workspaces, projects, and notebooks of the source database are not cloned to the new database.

For a Metadata Clone, the APEX Apps and the OML Projects and Notebooks are copied to the clone. For a Metadata Clone, the underlying database data of the APEX App or OML Notebook is not cloned.

If you define a network Access Control List (ACL) on the source database, the currently set network ACL is cloned to the new database. If a database is cloned from a backup, the current source database’s ACL is applied (not the ACL that was valid at the time of the backup).

An ADB can be configured to use Private Endpoints by cloning from an existing ADB with Public Endpoints.

If you create a clone, the source database has an access control list (ACL), and you specify the private endpoint network access option (Virtual cloud network) for the target database, the ACL is not cloned to the new database. In this case, you must define security rules within your Network Security Group (or groups) to control traffic to and from your target database, instead of using the access control rules that were specified in the ACL on the clone source.

You can only clone an Autonomous Transaction Processing instance to the same tenancy and the same region as the source database.

The clone has its own wallet and you cannot use the wallet of the source database to connect to the clone.

About Refreshable Clones on Autonomous Database:
A refreshable clone allows you to do the following:

  • Maintain one or more copies of the source database for use as read-only databases. A clone database is available when you need it, and when you want to update the data, you can refresh the clone from the source database.
  • Share copies of a production database with multiple business units. For example, one business unit might use the source database for ongoing transactions and another business unit could at the same time use the refreshable clone database for read-only operations.

This option also allows you to spread the cost of database usage across multiple business units. You can bill the different units separately, based on their usage of one or more refreshable clone databases.

Note: Refreshable clones have a one week refresh age limit. If you do not perform a refresh within a week, then the refreshable clone is no longer refreshable. After a refreshable clone passes the refresh time limit, you can use the instance as a read only database or you can disconnect from the source to make the database a read/write (standard) database.

When you make certain changes on a source Autonomous Database instance that has one or more refreshable clones attached to it, the changes are applied to both the source database and to the refreshable clones as follows:

  • Storage: The storage value you set on the source database applies to both the source database and to any attached refreshable clones.
  • ADMIN password: The ADMIN password value you set on the source database applies to both the source database and to any attached refreshable clones.

Limitations and notes for Autonomous Database refreshable clones:

  • You cannot create a refreshable clone from an Autonomous JSON Database source. That is, if the Autonomous Database workload type is JSON Database, then you cannot create a refreshable clone.
  • You cannot create a refreshable clone for an Autonomous Database instance if your database version is Oracle Database 18c. You must first upgrade to Oracle Database 19c or higher to create a refreshable clone.
  • You cannot create a cascading series of refreshable clones. Thus, a refreshable clone cannot be created from another refreshable clone.
  • You cannot backup or restore a refreshable clone.
  • Oracle Application Express (APEX) URLs do not work in a refreshable clone read-only database and the APEX URLs are disabled in the Oracle Cloud Infrastructure console and the Autonomous Database Service Console. APEX URLs are enabled for a read/write database when a refreshable clone is disconnected from the source.
  • Automatic Workload Repository (AWR) data and reports are not available for refreshable clones.

Renaming ADB Database

The database rename operation is only supported with Oracle Database 19c onwards.

The database rename operation changes the connection strings required to connect to the database. Thus, after you rename a database you must download a new wallet for any existing instance wallets that you use to connect to the database (and any applications also must be updated to use the new connection string and the new wallet to connect to the database).

If you are using a regional wallet, you can continue to use the existing wallet to connect to any databases that were not renamed. However, if you want to connect to a renamed database, you must download a new regional wallet.

Please note: The rename operation terminates all connections to the database. After the rename operation completes, you can reconnect to the database.

Renaming your database does not change global references to your database from existing database links on remote databases. Changing such references is the responsibility of the administrator of the remote databases.

When you rename a database, the Autonomous Database OCID does not change.

If your database has Autonomous Data Guard enabled, the rename operation is not available.

You cannot use the rename operation on a refreshable clone instance or on a database that is the source for a refreshable clone.

The rename operation restarts the database.

If you configured your Autonomous Database for manual backups prior to the rename operation, then your existing manual backups continue to work after the rename operation. If you perform new manual backups after the rename, they are stored in the same bucket that you configured prior to the rename operation.

Move an Autonomous Transaction Processing Database to a Different Compartment

To move an Autonomous Transaction Processing database you must have the right to manage autonomous databases in the database’s current compartment and in the compartment you are moving it to. As soon as you move an Autonomous Transaction Processing database to a different compartment, the policies that govern the new compartment apply immediately and affect access to the database. Therefore, your access to the database may change, depending on the policies governing your Oracle Cloud user account’s access to resources.

  1. Choose your region. See Switching Regions for information on switching regions and working in multiple regions.
  2. Choose your Compartment. See Compartments for information on using and managing compartments.
  3. Select an Autonomous Transaction Processing instance from the list in your compartment.
  4. On the Details page, from the More Actions drop-down list, select Move Resource.
  5. In the Move Resource to a Different Compartment page, select the new compartment.

Oracle Autonomous Database: Monitoring and Maintaining ADB

This post is part of a series of blog posts on the Oracle Autonomous Database.

Monitoring ADB Performance and Resources

The Overview and Activity tabs in the Service Console provide information about the performance of the service. The Activity tab also shows past and current monitored SQL statements and detailed information about each statement.

Monitoring Performance from the Service Console – the Console Overview shows:
  • Storage used: total and used storage capacity of the service. It indicates what percentage of the space is currently in-use.
  • CPU utilization (%)
  • Running SQL statements: average number of running SQL statements historically. This chart shows hourly data. A data point shows the running SQL statements for that hour
  • Number of OCPUs allocated
  • SQL statement response time (s)
  • Database activity / wait events

The ADB captures performance data for 8 days by default. This can be modified using DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS.
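For example, to extend the retention to 30 days (a sketch; the retention parameter is given in minutes):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 43200);  -- 30 days x 24 hours x 60 minutes
END;
/
```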

  • CPU utilization (%):
    This chart shows the historical CPU utilization of the service:

    Auto Scaling Disabled:
    This chart shows hourly data. A data point shows the average CPU utilization for that hour. For example, a data point at 10:00 shows the average CPU utilization for 9:00-10:00. The utilization percentage is reported with respect to the number of CPUs the database is allowed to use which is two times the number of OCPUs. For example, if the database has four (4) OCPUs, the percentage in this graph is based on 8 CPUs.

    Auto Scaling Enabled:
    For databases with auto scaling enabled the utilization percentage is reported with respect to the maximum number of CPUs the database is allowed to use, which is six times the number of OCPUs. For example, if the database has four OCPUs with auto scaling enabled the percentage in this graph is based on 24 CPUs.
  • Number of OCPUs allocated:
    This chart shows the number of OCPUs allocated:

    Auto Scaling Disabled:
    For databases with auto scaling disabled, for each hour the chart shows the number of OCPUs allocated to the database if the database is open for at least some part of the hour.

    Auto Scaling Enabled:
    For databases with auto scaling enabled, for each hour the chart shows the average number of OCPUs used during that hour if that value is higher than the number of OCPUs provisioned. If the number of OCPUs used is not higher than the number of OCPUs provisioned, then the chart shows the number of OCPUs allocated for that hour.

    Stopped Database:
    If the database was stopped for the full hour the chart shows 0 OCPUs allocated for that hour.
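The reporting math described above can be sketched in a few lines of Python (purely illustrative; the function names are my own, not an Oracle API):

```python
def cpu_reporting_base(ocpus: int, auto_scaling: bool) -> int:
    """CPU count the utilization percentage is reported against:
    2x the OCPUs with auto scaling disabled, 6x with it enabled."""
    return ocpus * (6 if auto_scaling else 2)

def cpu_utilization_pct(avg_cpus_used: float, ocpus: int, auto_scaling: bool) -> float:
    """Utilization percentage as shown for one hourly data point."""
    return 100.0 * avg_cpus_used / cpu_reporting_base(ocpus, auto_scaling)
```

So a database with four OCPUs is reported against 8 CPUs with auto scaling disabled and against 24 CPUs with auto scaling enabled, which is why the same workload shows a lower percentage once auto scaling is turned on.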

Monitor ADB with Performance Hub

You can view real-time and historical performance data from the Performance Hub. Performance Hub shows Active Session History (ASH) analytics, SQL monitoring and workload information.

The Performance Hub page has the following sections:

  • The time selector.
  • The Reports drop-down list, containing the option to create and download an AWR (Automatic Workload Repository) report.
  • The tabbed data area, with the tabs ASH Analytics, SQL Monitoring and Workload.

Monitor Performance with Autonomous Database Metrics

You can monitor the health, capacity, and performance of your Autonomous Transaction Processing databases with metrics, alarms, and notifications. You can use Oracle Cloud Infrastructure console or Monitoring APIs to view metrics.

To view metrics for an Autonomous Database instance:

On the Details page, under Resources, click Metrics.

There is a chart for each metric. In each chart you can select the Interval and Statistic or use the default values.

To create an alarm on a metric, click Options and select Create an Alarm on this Query.

To view Metrics for Autonomous Databases in a Compartment:

Monitoring > Service Metrics.

On the Service Metrics page, under Compartment select your compartment.
On the Service Metrics page, under Metric Namespace select oci_autonomous_database.
If there are multiple Autonomous Databases in the compartment you can show metrics aggregated across the Autonomous Databases by selecting Aggregate Metric Streams.

Managing Priorities

The priority of user requests in Autonomous Transaction Processing is determined by the database service the user is connected with. For example, a user connecting with the atp1_tp service uses the consumer group TP.

Note: After connecting to the database using one service, do not attempt to manually switch that connection to a different service by simply changing the consumer group of the connection. When you connect using a service, Autonomous Transaction Processing performs more actions to configure the connection than just setting its consumer group.

Autonomous Transaction Processing includes predefined job_class values to use with Oracle Scheduler.

Manage CPU/IO Shares

Autonomous Transaction Processing comes with predefined CPU/IO shares assigned to different consumer groups. You can modify these predefined CPU/IO shares if your workload requires different CPU/IO resource allocations.

By default, the CPU/IO shares assigned to the consumer groups TPURGENT, TP, HIGH, MEDIUM, and LOW are 12, 8, 4, 2, and 1, respectively. The shares determine how much CPU/IO resources a consumer group can use with respect to the other consumer groups. With the default settings the consumer group TPURGENT will be able to use 12 times more CPU/IO resources compared to LOW, when needed. The consumer group TP will be able to use 4 times more CPU/IO resources compared to MEDIUM, when needed.
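The share arithmetic above can be illustrated with a small Python sketch (names are mine, for illustration only):

```python
# Default CPU/IO shares per consumer group, as stated above.
DEFAULT_SHARES = {"TPURGENT": 12, "TP": 8, "HIGH": 4, "MEDIUM": 2, "LOW": 1}

def share_ratio(group_a: str, group_b: str, shares=DEFAULT_SHARES) -> float:
    """How many times more CPU/IO resources group_a may use
    compared to group_b, when both need resources."""
    return shares[group_a] / shares[group_b]
```

With the defaults, share_ratio("TPURGENT", "LOW") is 12 and share_ratio("TP", "MEDIUM") is 4, matching the examples in the text.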

You can set CPU/IO shares from the service console or using the PL/SQL package cs_resource_manager.update_plan_directive.

Follow these steps to set CPU/IO shares from the service console:

From the Autonomous Transaction Processing details page, click Service Console.
On the Service Console click Administration.
Click Set Resource Management Rules.
Select CPU/IO shares to set CPU/IO share values for consumer groups.
Set the CPU/IO share values.
Click Save changes.

BEGIN
  cs_resource_manager.update_plan_directive(consumer_group => 'TPURGENT', shares => 12);
  cs_resource_manager.update_plan_directive(consumer_group => 'TP', shares => 4);
  cs_resource_manager.update_plan_directive(consumer_group => 'HIGH', shares => 2);
  cs_resource_manager.update_plan_directive(consumer_group => 'MEDIUM', shares => 1);
  cs_resource_manager.update_plan_directive(consumer_group => 'LOW', shares => 1);
END;
/

Manage Runaway SQL Statements

You can configure Autonomous Transaction Processing to terminate SQL statements automatically based on their runtime or the amount of IO they perform.

You can set rules from the service console or using the PL/SQL package cs_resource_manager.

Follow these steps to set rules from the service console:

From the Autonomous Database details page, click Service Console.
On the Service Console click Administration.
Click Set Resource Management Rules.
Select the Run-away criteria tab to set rules for consumer groups.
Select the Consumer group: HIGH, MEDIUM, LOW, TP, or TPURGENT.
Set runaway criteria values:
– Query run time (seconds)
– Amount of IO (MB)
Click Save changes.

When a SQL statement in the specified consumer group runs longer than the specified runtime limit or does more IO than the specified amount, the SQL statement will be terminated.

BEGIN
  cs_resource_manager.update_plan_directive(consumer_group => 'HIGH', io_megabytes_limit => 1000, elapsed_time_limit => 120);
END;
/

To reset the values and lift the limits, you can set the values to null.

Manage Optimizer Statistics

The Autonomous Database gathers optimizer statistics automatically so that you do not need to perform this task manually and this helps to ensure your statistics are current. Automatic statistics gathering is enabled in both Autonomous Transaction Processing and Autonomous Data Warehouse and runs in a standard maintenance window.

Autonomous Transaction Processing honors optimizer hints and PARALLEL hints in SQL statements by default. Autonomous Data Warehouse does not. You can disable optimizer hints by setting the parameter OPTIMIZER_IGNORE_HINTS to TRUE at the session or system level using ALTER SESSION or ALTER SYSTEM.

Manage Automatic Indexing

Automatic indexing automates the index management tasks in Autonomous Database. Auto Indexing is disabled by default in Autonomous Database.

Creating indexes manually requires deep knowledge of the data model, application, and data distribution. In the past, DBAs were responsible for making choices about which indexes to create, and then sometimes the DBAs did not revise their choices or maintain indexes as the conditions changed. As a result, opportunities for improvement were lost, and use of unnecessary indexes could be a performance liability. The automatic indexing feature in Autonomous Database monitors the application workload and creates and maintains indexes automatically.


EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

This enables automatic indexing in a database and creates any new auto indexes as visible indexes, so that they can be used in SQL statements.

When automatic indexing is enabled, index compression for auto indexes is enabled by default.

To disable automatic indexing: EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF'); -- existing auto indexes remain enabled

Rotate Wallets for Autonomous Database

Wallet rotation lets you invalidate existing client certification keys for a database instance or for all Autonomous Database instances in a region.

You may want to rotate wallets for any of the following reasons:

  • If your organization’s policies require regular client certification key rotation.
  • When a client certification key or a set of keys is suspected to be compromised.

There are two options for client certification key rotation:

  • Per-database with Instance Wallet selected:
    –> For the database whose certification key is rotated, any existing database specific instance wallets will be void. After you rotate a wallet you have to download a new wallet to connect to the database.

    –> Regional wallets containing all database certification keys continue to work.

    –> All user sessions are terminated for the database whose wallet is rotated. User session termination begins after wallet rotation completes, however this process does not happen immediately.
  • Regional level with Regional Wallet selected:
    –> For the region whose certification key is rotated, both regional and database specific instance wallets will be void. After you rotate a wallet you have to download new regional or instance wallets to connect to any database in the region.

    –> All user sessions are terminated for the databases in the region whose wallet is rotated. User session termination begins after wallet rotation completes, however this process does not happen immediately.

Please note: If you want to terminate all connections immediately after the wallet rotation completes, Oracle recommends that you stop and then start your instance. This provides the highest level of security for your database.

Also, Oracle recommends you provide a database-specific instance wallet, with Wallet Type set to Instance Wallet when you use Download Wallet, to end users and for application use whenever possible. Regional wallets should only be used for administrative purposes that require potential access to all Autonomous Databases within a region.

You can also use the Autonomous Database API to rotate wallets using UpdateAutonomousDatabaseRegionalWallet and UpdateAutonomousDatabaseWallet.

Oracle Autonomous Database: Migration Considerations and Loading Data

This post is part of a series of blog posts on the Oracle Autonomous Database.

The first thing to consider when migrating to the Oracle Autonomous Database is how to load and move your data into the Autonomous Database. There are several options for you. When it comes to loading your data, the traditional tools that you are accustomed to using, such as Oracle Data Pump and SQL Loader, can be used to load data into the Oracle database.

When it comes to moving data into the Autonomous Database, Oracle supports many different data formats, from SQL*Loader text files, to export-import dump files, to CSV, to JSON files. When it comes to supporting object storage, ADB supports a few third-party cloud object storage platforms, such as AWS S3 and Azure Blob, and, of course, Oracle’s own Object Storage. ADB also supports shifting data from an OCI application layer, as well as from virtual machines that may be hosted in your cloud tenancy.

When you look at migrating an existing database into the Autonomous Database, it’s important to realize that a physical database cannot simply be migrated to an autonomous database. It is not possible to migrate to the Autonomous Database by simply unplugging a PDB from an on-premise container database and plugging it into an Autonomous Database. This is not a supported option as of now.

Also, keep in mind that all legacy features that are not supported must be removed, for example legacy-style LOBs. In addition, Data Pump export-import requires the source Oracle database to be version 10.1 or higher.

So when we look at approaches to move data into the Autonomous Database, data can obviously be moved with Data Pump. The nice thing about following this approach is that it will eliminate legacy formats, it can upgrade across versions, and it’s platform-independent. It’s a very portable way to load data into a pre-created, new Autonomous Database.

Another thing to keep in mind is that GoldenGate can be used. GoldenGate replication can be used to keep the on-premise or the non-autonomous database up and running and online during the process, meaning that you can have a zero down time migration process. And keep in mind that GoldenGate does have certain restrictions for row ID for nested tables, things such as that. Migrating third-party applications that use the Oracle database can also be supported, but do keep in mind, it’s important to ensure that the vendor supports the Autonomous Database. So while this may be technically possible, it is definitely important to work with the vendor.

Talking about Oracle GoldenGate, it is possible to set up GoldenGate between your on-premises (or other) source system and the Autonomous Database as the target database. Currently, only non-integrated Replicats are supported with the Autonomous Database, and a minimum Oracle GoldenGate for Oracle Database release is required on the source system. For GoldenGate, the Oracle Autonomous Database cannot be a source database, only a target, so please keep this in mind for rollback purposes.

When we look at different methods that are available to us, we support many, many approaches. The SQL Developer Data Import Wizard, SQL Developer Migration Workbench, Oracle Data Pump, SQL*Loader, database links, DBMS_CLOUD Package, external tables, data sync, as well as other ETL tools. These are all approaches that can be used, and there are going to be certain situations where one will work better than the other.

Often these approaches involve a combination of tools. For example, the SQL Developer Data Import Wizard is a very powerful tool for loading data or flat files into the Autonomous Database, but in order to work with large data sets, I would strongly advise that you work with Oracle’s Cloud Object Storage. The SQL Developer Migration Workbench is a very powerful tool for migrating third-party databases, and it can work with the Object Storage support as well. Oracle does provide a plugin to allow you to migrate from other databases, such as SQL Server and Amazon Redshift.

Data Pump, as I mentioned previously, is definitely a powerful tool and one that is definitely a recommended tool, especially when migrating from different platforms. There will be situations where it may be the only approach, depending on the platform you’re coming from and the data source. But obviously, it’s an Oracle to Oracle solution.

So let’s look at loading data into the Autonomous Database. The DBMS_CLOUD package is the preferred method, because with this tool Oracle gives you the ability to load various types of data, whether it be a CSV, an Avro file, a JSON file, or just a regular text file. The DBMS_CLOUD package lets you perform these inserts and data loads fairly efficiently.

Basically, what’s required is that you copy your files, in whatever format they are, to Oracle Object Storage, and then run the DBMS_CLOUD package from within your Autonomous Database to load that data into the database. And yes, the volume of data that this supports is unlimited. We can work with very large data volumes to perform that operation, and it works regardless of which Autonomous Database you’re connected to.

Loading using SQL*Loader is definitely supported, and it’s a good approach for loading data that’s sitting on a local client machine. And it works over the network. You can transfer your data without having to setup the Object Storage to perform this operation. But do keep in mind, the fact that you will be going over your network and you will be running from a client machine, means that the volume of data that you’re transferring can be limited.

And then when it comes to the Import Wizard in SQL Developer, this tool is very powerful. It can import all types of files, and it’s good for importing XLS files if you wish. Keep in mind that this is limited by the maximum number of rows supported by the Excel version you are running. This approach is good for loading data from your local desktop. One of the most powerful features of this utility is that it gives you a very good understanding of what your data looks like and how your data is going to be mapped when it gets loaded into the Autonomous Database. So it’s a very good tool for setting up a sandbox or a POC and determining how your data migration is going to work. It helps with the mappings, etc.

When it comes to the Object Storage, there’s a package called DBMS_CLOUD, and that package is a wrapper on top of our Oracle external table functionality. It simplifies the requirements of defining how you’re going to load your data. And we’re going to drill into some of these packages or functions that are included, COPY_DATA being the one for loading your data. So keep that in mind, when it comes time to load your data using the DBMS_CLOUD package, COPY_DATA is the one that you will be using for actually loading the data into the database.

ADB APIs for Object Store Access

  • PL/SQL package, DBMS_CLOUD
  • Wrapper on top of Oracle external table functionality
  • Simplifies creating external tables on object store files
  • No need to type complicated “create external table” commands


External table API for queries: DBMS_CLOUD.CREATE_EXTERNAL_TABLE

DBMS_CLOUD is an interface to help you work with your Object Storage from your Autonomous Database. Because, remember, you do not have access to the local file system in the Autonomous Database, so you rely upon that package to perform these tasks.

In addition to DBMS_CLOUD, we have the DBMS_CLOUD_ADMIN package. And the DBMS_CLOUD_ADMIN package is only available on Autonomous Database Shared. And the reason for this is because these operations can be run natively through the database in Autonomous Dedicated. You can type the command, create database link, drop database link, etc. With Autonomous Shared, you do not have that ability. Those capabilities are actually locked down from the end users. So the way you create a database link or drop a database link, the way you enable or disable application continuity, is all done through this package, as well as granting tablespace quotas.

DBMS_CLOUD_ADMIN package is only available for the ADMIN user. To run it as a user other than ADMIN, the user must be granted EXECUTE privileges on the specific procedures, e.g. CREATE_DATABASE_LINK
Use GRANT_TABLESPACE_QUOTA to explicitly grant a quota to a user if they need to load data.

We also have a package that’s available on Autonomous Shared called DBMS_CLOUD_MACADM. DBMS_CLOUD_MACADM is used for configuring Oracle Database Vault on ADB. What you basically do, when you run this package, is grant the DV_OWNER and DV_ACCTMGR roles to a user. Once those roles have been granted, that user can then enable, disable, and manage Database Vault.

Now, remember, the DV_OWNER cannot be the ADMIN user. So you cannot grant DV_OWNER to ADMIN, and this is one way of enforcing separation of duties. Keep in mind that Database Vault requires the autonomous instance to be restarted, and it cannot be controlled by the ADMIN user. However, the roles to perform these operations are provided through the ADMIN user to these specific sets of users. So it’s a mechanism to control and separate duties.

When it comes to loading data with the Object Storage, the Object Storage provides a very fast and low-cost model for storing data, and it’s a very good utility for staging data. It’s possible to query the contents of the Object Storage directly from your Autonomous Database. The nice thing about using the Object Storage is that it simplifies performing ETL and ELT type operations. It also allows you to query data in place without physically moving it from the Object Storage into the Autonomous Database. It’s possible to create hybrid partitioned tables or external tables that map to the Object Storage. And it is extremely inexpensive, and your data is encrypted, so that only you can see it. So a very, very powerful feature. In addition, it supports the Swift object URL as well as Oracle’s own Object Storage URL.

Object Storage Integration

  • ADB is fully integrated with major object stores: Oracle, AWS, Azure
  • Choice of access:
    • Load object store files into the database
    • Query object store files on the fly without loading
  • Supports all text file formats: CSV, delimited, JSON, etc.


CREATE_CREDENTIAL Procedure

This procedure stores Cloud Object Storage credentials in the Autonomous Data Warehouse database. Use stored credentials for data loading or for querying external data residing in the Cloud.

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'X',
    username => '',
    password => 'auth-token-string');
END;
/

COPY_DATA Procedure

  • Loads source files into an existing target table
  • Handles compression, parallelism, logging automatically
  • Logs all loads in the table user_load_operations

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name => 'CHANNELS',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list => '',
    format => json_object('type' value 'CSV', 'skipheaders' value '1'));
END;
/

This procedure creates an external table on files in the Cloud. This allows you to run queries on external data from Autonomous Data Warehouse. It handles compression, parallelism, and logging automatically.

BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name => 'CHANNELS_EXT',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list => '',
    format => json_object('type' value 'CSV', 'skipheaders' value '1'),
    column_list => 'CHANNEL_ID NUMBER, ...');
END;
/


DELETE_FILE Procedure
This procedure removes the specified file from the specified directory on Autonomous Data Warehouse.

PUT_OBJECT Procedure
This procedure copies a file from Autonomous Data Warehouse to the Cloud Object Storage. The maximum file size allowed in this procedure is 5 gigabytes (GB).

VALIDATE_EXTERNAL_TABLE Procedure
This procedure validates the source files for an external table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Data Warehouse.

To troubleshoot loads:
SELECT table_name, status, rows_loaded, logfile_table, badfile_table
FROM user_load_operations;

Export an Existing Oracle Database to Import into ADB

Oracle recommends schema mode for migrating to ADB to import schemas and their data.

List the schemas you want to export by using the schemas parameter.

Set the parallel parameter to at least the number of CPUs you have in your Autonomous Database to use parallelism.

The exclude and data_options parameters ensure that object types not required in ADB will not be exported and table partitions are grouped together so that they can be imported faster during the import to ADB.

i.e. Data Pump lets you exclude migration of objects like indexes and materialized views that are not required.


expdp sh@orcl \
exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link \
data_options=group_partition_table_data \
parallel=16 \
schemas=sh \
dumpfile=export%u.dmp

impdp admin@ADW_high \
directory=data_pump_dir \
credential=X \
dumpfile=https://objectstore……/export%u.dmp \
parallel=16 \
partition_options=merge \
transform=segment_attributes:n \
transform=dwcs_cvt_iots:y \
transform=constraint_use_default_index:y \
exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link

The log files for Data Pump operations are stored in the directory DATA_PUMP_DIR
This is the only directory you can specify for the data pump directory parameter.

To access the log file you need to move the log file to your Cloud Object Storage using the PUT_OBJECT procedure.

BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'X',
    object_uri => 'https://…./import.log',
    directory_name => 'DATA_PUMP_DIR',
    file_name => 'import.log');
END;
/

Create Database Links from Autonomous Database to Oracle Databases

Use DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK to create database links from an Autonomous Database to another Oracle database.

Autonomous Database supports creating database links only if the target database is accessible through a public IP or public hostname, or using an Oracle Database Gateway.

To use database links with Autonomous Database the target database must be configured to use TCP/IP with SSL (TCPS) authentication.

To ensure security, the database link port is restricted to the range 1521-1525.

  • Copy your target database wallet, cwallet.sso, containing the certificates for the target database to Object Store.
  • Create credentials to access your Object Store where you store the cwallet.sso
  • Create a directory on Autonomous Database for the wallet file cwallet.sso
  • Use DBMS_CLOUD.GET_OBJECT to upload the target database wallet to the directory you created in the previous step, DBLINK_WALLET_DIR

BEGIN
  DBMS_CLOUD.GET_OBJECT(
    credential_name => 'DEF_CRED_NAME',
    object_uri => '',
    directory_name => 'DBLINK_WALLET_DIR');
END;
/

On ADB create credentials to access the target database. The username and password you specify with DBMS_CLOUD.CREATE_CREDENTIAL are the credentials for the target database that you use to create the database link.

Create the database link to the target database using DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.

BEGIN
  DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
    db_link_name => 'SALESLINK',
    hostname => '',
    port => '1522',
    service_name => '',
    ssl_server_cert_dn => ',OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US',
    credential_name => 'DB_LINK_CRED',
    directory_name => 'DBLINK_WALLET_DIR');
END;
/

Users other than ADMIN require privileges to run DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.

Only one wallet file is valid per directory for use with database links. You can only upload one cwallet.sso at a time to the directory you choose for wallet files (for example DBLINK_WALLET_DIR). This means with a cwallet.sso in DBLINK_WALLET_DIR you can only create database links to the databases for which the wallet in that directory is valid. To use multiple cwallet.sso files with database links you need to create additional directories and put each cwallet.sso in a different directory. When you create database links with DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK, specify the directory that contains the wallet with the directory_name parameter.

Autonomous Database sets the SEC_CASE_SENSITIVE_LOGON parameter to true and this value cannot be changed. If your target database is not an Autonomous Database, then you must set the SEC_CASE_SENSITIVE_LOGON parameter to true on the target database.

Loading JSON Documents

For loading data from collections in the cloud, Autonomous Transaction Processing provides the procedure DBMS_CLOUD.COPY_COLLECTION.

Oracle Autonomous Database: Identity & Access Management – OCI Policies

This post is part of a series of blog posts on the Oracle Autonomous Database.

This is an Autonomous Database Dedicated component that the Fleet and Database Administrator would set up. In this case, we are configuring what a resource is.

An OCI Autonomous RESOURCE can be one of:
autonomous-exadata-infrastructures : dedicated hardware resources
autonomous-container-databases : runtime environments that meet specific SLAs
autonomous-databases : application databases
autonomous-backups : data archives

Policy statements
allow group <GROUP> to <VERB> <RESOURCE> in compartment <COMPARTMENT>

Group: a specific set of users with similar privileges

Compartment: an operating context for a specific set of service resources that are only available to GROUPS who are explicitly granted access

Policy: is used to bind privileges for a GROUP to a specific set of resources in a COMPARTMENT


INSPECT: is a limited use read-only grant intended for auditors
READ: is a read-only grant, allowing a user to see details of existing resources
USE: is a grant to allow a user to take all actions on existing resources
MANAGE: is a grant to allow a user to both create new and take all actions on resources
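The policy grammar above can be sketched as a tiny formatter (a hypothetical Python helper, for illustration only):

```python
def policy_statement(group: str, verb: str, resource: str, compartment: str) -> str:
    """Assemble an OCI policy statement from its four parts:
    allow group <GROUP> to <VERB> <RESOURCE> in compartment <COMPARTMENT>."""
    return f"allow group {group} to {verb} {resource} in compartment {compartment}"
```

For example, policy_statement("CoyoteDBA", "MANAGE", "autonomous-databases", "CoyoteCompartment") produces the first statement shown below.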

allow group CoyoteDBA to MANAGE autonomous-databases in compartment CoyoteCompartment

allow group CoyoteDBA to MANAGE autonomous-backups in compartment CoyoteCompartment

allow group RoadRunnerDBA to MANAGE autonomous-databases in compartment RoadRunnerCompartment

allow group RoadRunnerDBA to MANAGE autonomous-backups in compartment RoadRunnerCompartment

allow group AcmeFA to MANAGE autonomous-exadata-infrastructures in compartment FACompartment

allow group AcmeFA to MANAGE autonomous-container-databases in compartment FACompartment

allow group RoadRunnerDBA to READ autonomous-container-databases in compartment FACompartment

allow group CoyoteDBA to READ autonomous-container-databases in compartment FACompartment