Securing Application Connections with Access Control Lists (ACLs)
By default, no network Access Control List (ACL) is specified, and the database is accessible from any IP address.
Specifying an access control list blocks access from all IP addresses that are not in the list.
Once an ACL is set, the database only accepts connections from addresses in the access control list; all other client connections are rejected.
ACLs can be configured for an ADB at any time: while provisioning, or by adding them to an existing ADB.
ACLs can be configured for individual IP addresses, CIDR block ranges, and VCNs – including simultaneously
A database restore does not overwrite existing ACLs
All services that are subsets of an ADB (e.g. SQL Developer Web, APEX, ML) are also blocked by ACLs
The Autonomous Transaction Processing Service console is not subject to ACLs.
Private endpoints give you the ability to disallow all public access to your ADB database.
A private endpoint is a private IP address within your VCN that can be used to access a given service within OCI.
Think of it as just another VNIC (Virtual Network Interface Card) in your VCN
You can provision an ADB to use private endpoints and configure a Virtual Cloud Network (VCN) in your tenancy to use with the private endpoint.
Allow you to keep all traffic to and from your ADB off the public internet
Does require you to set up transit routing in your VCN and use a service gateway to connect
Can be configured only when you provision a new database or when you clone an existing database
To set up Private Endpoints with ADB:
There must be a VCN within the region for the ADB. This cannot be changed after provisioning
A private subnet with default DHCP options is required
At least one Network Security Group (NSG) within your VCN is required. This can be edited after provisioning
Network Security Group (NSG):
NSGs create a virtual firewall for your Autonomous Database using security rules
You can specify up to 5 NSGs to control access to your Autonomous Database
To use an NSG with Private Endpoints, set an ingress rule: Ingress / TCP / Port 1522
When you provision or clone an Autonomous Database, you can configure the network access so that the database uses a private endpoint. If your organization has strict security mandates that do not allow you to have a public endpoint for your database, this provides you with the necessary private endpoint. When you use private access, your database is only accessible through the IP address of the associated private endpoint. Additionally, this allows you to configure your access so that the traffic does not use public subnets and allows you to keep all traffic to and from your Autonomous Database off of the public internet within one of your tenancy’s virtual cloud networks (VCNs).
Change from Private to Public Endpoints with Autonomous Database
If your Autonomous Database instance is configured to use a private endpoint you can change the configuration to use a public endpoint.
Note: Changing an instance from a private to a public endpoint on Autonomous Database is only supported with Oracle Database 19c onwards.
After updating the network access type all database users must obtain a new wallet and use the new wallet to access the database.
After the update completes, you can define access control rules for the public endpoint by specifying ACLs.
Service Limits
Service Limits are quotas or allowances set on a specific resource. They are defined and set by Oracle, based on the service you have subscribed to, and they can change automatically based on your OCI resource usage or when you change your account type, e.g. Always Free to Paid.
The service limit is based on the service you've subscribed to. The service could be a database service, in which case storage could be something that's controlled for you. For example, with the Autonomous Database, storage can be anywhere from 1 to 128 terabytes. That's the type of resource that could be limited.
In addition, what is available to you may change based on your account type. Taking the Always Free to Paid example: if you're defining an Autonomous Database in an Always Free account, it is limited to a maximum of one OCPU. If you convert that account to a paid account, that limit is lifted from one to up to 128 OCPUs.
Compartment Quotas
Similar to Service Limits. However, whereas Service Limits are set by Oracle, Compartment Quotas are set by customer administrators. Please note, in dedicated infrastructure, this is managed by the fleet administrator
Set using policy statements, a simple declarative language similar to the IAM policy language
Compartment quotas are very similar to service limits; however, whereas a service limit is set by Oracle, compartment quotas are set by customer administrators (fleet administrators). In the dedicated infrastructure world, this is managed by the fleet administrator as one of their roles and responsibilities. Compartment quotas are set using policy statements that are very simple, very easy to follow, and similar to the IAM policy language.
Looking at service limits and compartment quotas, keep in mind that tenancy limits, quotas, and usage can be viewed using your cloud console. The console may not display the information for all OCI services or resources, because it only shows what you subscribe to. Also keep in mind that the usage may be greater than the limit available to you, because the limit may have been reduced after the resource was initially created.
The other thing to keep in mind is that limits are sometimes displayed as zero, and if that is the case, the account has been suspended. That is not necessarily a bad thing. The account could be suspended due to inactivity, which really just means you're not being billed or charged for the resources that would potentially be consumed. This is apparent in the Always Free account when a database becomes idle for an extended period of time.
e.g. the Create Quota policy stub. When you click on that, it brings up a panel where we can enter the fields. In this case, we're going to set the database quota adw-total-storage-tb to a number in our compartment Team. By doing that, we fix the limit on the database storage that anyone working in this compartment can create. This is a way of controlling that resource.
Governance –> Limits, Quotas and Usage
set database quota adw-total-storage-tb to [number] in compartment Team
There is a new tool called “Oracle Autonomous Database Schema Advisor” that can help answer questions such as:
What issues will I encounter during migration?
What objects are incompatible / cannot be migrated to ADB? How do I know?
What advice and guidelines can I receive beforehand, prior to the migration to Autonomous Database?
All the details are in MOS Doc ID 2462677.1
First, you need to download and install the advisor. Run the script install_adb_advisor.sql (as SYSDBA) which you download from the MOS note above. The script will create a user, 7 tables, 4 indexes and a package. I have decided to call the user adb_advisor and the password will be XXXXX:
SQL> @install_adb_advisor.sql adb_advisor XXXXX
Index created. Package created. Package body created. No errors.
Next, run the advisor as the user created in step one. What you need to specify is (1) the database schema(s) and (2) the ADB type you will be using in the Autonomous Cloud. You can list a maximum of 30 schemas in a single advisor run, or just use schemas=>'ALL'. These are the 4 options for the ADB type:
– ATP for Autonomous Transaction Processing (Shared)
– ADW for Autonomous Data Warehouse (Shared)
– ATPD for Autonomous Transaction Processing (Dedicated)
– ADWD for Autonomous Data Warehouse (Dedicated)
Do not try to run the advisor as SYSDBA; you will get all sorts of errors.
e.g. ADB_ADVISOR.REPORT(schemas=>'ALL', adb_type=>'ATP');
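Putting the run together, a minimal invocation might look like the following sketch (the schema list is illustrative; substitute your own):

```sql
-- Run as the advisor user created in step one, not as SYSDBA
SET SERVEROUTPUT ON FORMAT WRAPPED
SET LINESIZE 200
EXEC ADB_ADVISOR.REPORT(schemas => 'HR,SALES', adb_type => 'ATP');
```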
The Advisor will generate a report with the following information:
– The counts of discovered objects and a summary of migration status
– Objects that cannot be migrated due to the restrictions and lockdowns imposed by the Autonomous Database on certain data types, database options and SQL statements
– The objects that will migrate with modifications that are automatically made during the import process or upon the execution of object creation DDL
– Informational section containing certain best practice recommendations and guidance
Common issues detected by the advisor:
– User-defined objects in the SYS schema: they will not be migrated.
– An IOT table cannot be migrated as-is. It will be migrated as a normal, regular table.
– An INMEMORY table cannot be migrated as-is. It will be created as a NO INMEMORY table.
– Modified init.ora parameters in your source database that you would not be permitted to alter in ADB, e.g. db_block_checksum, enable_ddl_logging, encrypt_new_tablespaces, local_listener, tablespace_encryption_default_algorithm, tde_configuration, use_large_pages
By default, the output gets truncated when the number of rows exceeds the maximum limit set in the Advisor package. You can reset the number of rows by running the following command prior to running the Advisor.
SQL> exec ADB_ADVISOR.setmaxrows(500);
You have to reset the max rows every time you run the Advisor, as the setting is not saved in the database.
My recommendation is to run the advisor before you migrate your schemas to the Autonomous Cloud. It will minimize your post-migration hassle.
Please note: Oracle Machine Learning users can also be created using SQL Developer.
Oracle Application Express (APEX)
APEX is automatically enabled in the autonomous database
Access is through the Service Console
The initial access is through the “admin” user (in the Internal workspace)
APEX and SQL Developer Web Access Requirements with ADB-D:
Must be inside customer VCN
Service user Database Admin is automatically granted use of tooling
Database users must be enabled to use tooling using ORDS.ENABLE_SCHEMA
BEGIN
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'HRUSER',    -- the database user name
    p_url_mapping_pattern => 'hr_alias',  -- determines the URI template for RESTful Services
    p_auto_rest_auth      => TRUE);       -- authenticate before any request is serviced
  COMMIT;
END;
/
The password you specify must meet the default complexity rules in ADB
“DWROLE” includes all necessary privileges for a DW developer/user
By default, passwords do not expire for 360 days
SQL> create user ocitest identified by ….;
SQL> grant dwrole to ocitest;
If you specify a weaker or NULL Password Verify Function (PVF) for a new profile, then the default PVF CLOUD_VERIFY_FUNCTION applies. If the verify function you specify is more strict than the default CLOUD_VERIFY_FUNCTION, then the new verify function is used.
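As a sketch, a custom profile that keeps the default complexity checking might look like this (the profile name and limits are illustrative; CLOUD_VERIFY_FUNCTION is the ADB default PVF):

```sql
-- Illustrative profile reusing the default ADB verify function
CREATE PROFILE app_user_profile LIMIT
  PASSWORD_VERIFY_FUNCTION CLOUD_VERIFY_FUNCTION
  PASSWORD_LIFE_TIME 360;

-- Assign the profile to a user
ALTER USER ocitest PROFILE app_user_profile;
```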
Starting, Stopping, and Scaling
Can all be done by clicking through the console or through API calls.
Scale your Autonomous Database Instance on demand without tedious manual steps
Independently scale compute or storage
Resizing occurs instantly, fully online
Memory, IO bandwidth, and concurrency scale linearly with CPU
ADB Auto Scaling:
Auto Scaling is enabled by default (it can be disabled at any time)
ADB autonomously and continuously monitors overall system performance
ADB scales CPU and IO resources up and down based on overall workload requirements
Scaling up autonomously expands CPU-IO resources by up to 3x the base core count
Can be Enabled when provisioning an ADB instance or anytime using Scale Up/Down on the OCI Console
Changing the setting does not require database downtime
Without auto scaling, you are only charged for the baseline (i.e. the number of OCPUs provisioned)
With auto scaling, billing is based on the average number of OCPUs consumed during an hour
Starting & Stopping:
Stop your instance on demand to conserve resources and halt CPU billing
Start your instance instantly on demand
Backup, Restore, and Cloning
Autonomous Database Backups:
The ADB is automatically backed up
The retention period for backups is 60 days
The database can be restored and recovered to any point-in-time within the retention period
Recovery can be initiated using the Cloud Console
When the restore operation finishes, the ADB instance opens in read-only mode and the instance details page Lifecycle State shows “Available Needs Attention”. This gives the user a chance to sanity check the restore. To open the database in read-write mode, click Stop, then Start.
When you restore your database and open it in read-write mode, all backups between the restore time (the point in time you specified for the restore operation) and the time the restore completed are invalidated. After the database is opened in read-write mode following a restore, you cannot initiate further restore operations to any point in time between the restore time and the restore completion time. You can only initiate new restore operations to a point in time older than the restore time, or more recent than the time when the actual restore succeeded.
For example, assume you initiate a restore operation on Oct 8, 2018, 2 pm and specify Oct 1, 2018, 2 pm as the point in time to restore to and the restore completes on Oct 8, 2018, 2:30 pm. If you open your database in read-write mode at this point, backups between Oct 8, 2018, 2:30 pm and Oct 1, 2018, 2 pm will be invalidated. You will not be able to restore to any date between Oct 1, 2018, 2 pm and Oct 8, 2018, 2:30 pm. If you initiate a restore to a point in time between these dates the restore will fail with an error.
The restore operation also restores the DATA_PUMP_DIR directory and user-defined directories to the timestamp you specified for the restore; files that were created after that timestamp are lost.
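To see which files would be affected, you can list the contents of DATA_PUMP_DIR before and after the restore using DBMS_CLOUD.LIST_FILES:

```sql
-- List files currently in DATA_PUMP_DIR
SELECT object_name, bytes
FROM   DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR');
```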
When you restore, the Oracle Machine Learning workspaces, projects, and notebooks are not restored.
For external tables, partitioned external tables, and the external partitions of hybrid partitioned tables a backup does not include the external files that reside on your Object Store. Thus, for operations where you use a backup to restore your database, such as Restore or Clone from a backup it is your responsibility to backup, and restore if necessary, the external files associated with external tables, external partitioned tables, or the external files for a hybrid partitioned table.
In addition to automatic backups, Autonomous Database also allows you to take manual backups to your OCI Object Storage.
To perform manual backups you need to:
Define OCI Object Storage credentials
Define your OCI Object Storage tenancy URL
Need to create a bucket to hold the backups
The manual backup configuration tasks are a one-time operation. Once credentials, object bucket and tenancy URL are set, manual backups can be initiated without requiring the same operations again unless the URL, credentials or bucket change.
When creating the object storage bucket for manual backup:
The name format for the bucket in object storage must follow: backup_<database name>
Here <database name> is the “Database Name” and not the “Display Name”
The object bucket name has to be all lower case
After the bucket has been created, connect to the database and perform the following:
Set the database default_bucket property to your OCI Object Storage tenancy URL, and set the default_credential property to the credential you created, e.g.:
ALTER DATABASE PROPERTY SET default_credential = 'ADMIN.DEF_CRED_NAME';
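Put together, the one-time configuration might look like the following sketch (the username, token, URL, and namespace are placeholders for your tenancy's values):

```sql
-- 1. Create a credential for Object Storage access (values are placeholders)
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username        => 'oci_user@example.com',
    password        => 'auth_token_value');
END;
/

-- 2. Point the database at your Object Storage tenancy URL (placeholder URL)
ALTER DATABASE PROPERTY SET default_bucket = 'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/mynamespace';

-- 3. Register the credential for manual backups
ALTER DATABASE PROPERTY SET default_credential = 'ADMIN.DEF_CRED_NAME';
```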
Cloning: It is possible to create a new Autonomous Database as a clone of an existing Autonomous Database.
Autonomous Transaction Processing provides cloning where you can choose to create a full clone, create a metadata clone, or create a refreshable clone. You can also use cloning to upgrade your database to a newer Oracle Database version when newer Oracle Database versions are available. You can move an Autonomous Transaction Processing database to a different Oracle Cloud Infrastructure compartment.
Full Clone of the source ADB
Metadata Clone of the source ADB
Refreshable Clone: creates a read-only full clone that can be easily refreshed with the data from the source database.
For a Full Clone or a Metadata Clone, you have the option to select the clone source:
Clone from a live database instance. This creates a clone of a running database.
Clone from a backup. This creates a clone when you select a backup from a list of backups, or when you enter a backup point-in-time to clone.
When you perform the clone operation you can select a newer Oracle Database version for the target database. This allows you to use cloning to upgrade your Autonomous Transaction Processing database.
With clone from backup, the Oracle Machine Learning workspaces, projects, and notebooks of the source database are not cloned to the new database.
For a Metadata Clone, the APEX Apps and the OML Projects and Notebooks are copied to the clone. For a Metadata Clone, the underlying database data of the APEX App or OML Notebook is not cloned.
If you define a network Access Control List (ACL) on the source database, the currently set network ACL is cloned to the new database. If a database is cloned from a backup, the current source database’s ACL is applied (not the ACL that was valid at the time of the backup).
An ADB can be configured to use Private Endpoints by cloning from an existing ADB with Public Endpoints.
If you create a clone, the source database has an access control list (ACL), and you specify the private endpoint network access option (Virtual cloud network) for the target database, the ACL is not cloned to the new database. In this case, you must define security rules within your Network Security Group (or groups) to control traffic to and from your target database, instead of using the access control rules that were specified in the ACL on the clone source.
You can only clone an Autonomous Transaction Processing instance to the same tenancy and the same region as the source database.
The clone has its own wallet and you cannot use the wallet of the source database to connect to the clone.
About Refreshable Clones on Autonomous Database: A refreshable clone allows you to do the following:
Maintain one or more copies of the source database for use as read-only databases. A clone database is available when you need it, and when you want to update the data, you can refresh the clone from the source database.
Share copies of a production database with multiple business units. For example, one business unit might use the source database for ongoing transactions and another business unit could at the same time use the refreshable clone database for read-only operations.
This option also allows you to spread the cost of database usage across multiple business units. You can bill the different units separately, based on their usage of one or more refreshable clone databases.
Note: Refreshable clones have a one-week refresh age limit. If you do not perform a refresh within a week, the refreshable clone is no longer refreshable. After a refreshable clone passes the refresh time limit, you can use the instance as a read-only database, or you can disconnect it from the source to make the database a read/write (standard) database.
When you make certain changes on a source Autonomous Database instance that has one or more refreshable clones attached to it, the changes are applied to both the source database and to the refreshable clones as follows:
Storage: The storage value you set on the source database applies to both the source database and to any attached refreshable clones.
ADMIN password: The ADMIN password value you set on the source database applies to both the source database and to any attached refreshable clones.
Limitations and notes for Autonomous Database refreshable clones:
You cannot create a refreshable clone from an Autonomous JSON Database source. That is, if the Autonomous Database workload type is JSON Database, then you cannot create a refreshable clone.
You cannot create a refreshable clone for an Autonomous Database instance if your database version is Oracle Database 18c. You must first upgrade to Oracle Database 19c or higher to create a refreshable clone.
You cannot create a cascading series of refreshable clones. Thus, a refreshable clone cannot be created from another refreshable clone.
You cannot backup or restore a refreshable clone.
Oracle Application Express (APEX) URLs do not work in a refreshable clone read-only database and the APEX URLs are disabled in the Oracle Cloud Infrastructure console and the Autonomous Database Service Console. APEX URLs are enabled for a read/write database when a refreshable clone is disconnected from the source.
Automatic Workload Repository (AWR) data and reports are not available for refreshable clones.
Renaming ADB Database
The database rename operation is only supported with Oracle Database 19c onwards.
The database rename operation changes the connection strings required to connect to the database. Thus, after you rename a database you must download a new wallet for any existing instance wallets that you use to connect to the database (and any applications also must be updated to use the new connection string and the new wallet to connect to the database).
If you are using a regional wallet, you can continue to use the existing wallet to connect to any databases that were not renamed. However, if you want to connect to a renamed database, you must download a new regional wallet.
Please note: The rename operation terminates all connections to the database. After the rename operation completes, you can reconnect to the database.
Renaming your database does not change global references to your database from existing database links on remote databases. Changing such references is the responsibility of the administrator of the remote databases.
When you rename a database, the Autonomous Database OCID does not change.
If your database has Autonomous Data Guard enabled, the rename operation is not available.
You cannot use the rename operation on a refreshable clone instance or on a database that is the source for a refreshable clone.
The rename operation restarts the database.
If you configured your Autonomous Database for manual backups prior to the rename operation, then your existing manual backups continue to work after the rename operation. If you perform new manual backups after the rename, they are stored in the same bucket that you configured prior to the rename operation.
Move an Autonomous Transaction Processing Database to a Different Compartment
To move an Autonomous Transaction Processing database you must have the right to manage autonomous databases in the database’s current compartment and in the compartment you are moving it to. As soon as you move an Autonomous Transaction Processing database to a different compartment, the policies that govern the new compartment apply immediately and affect access to the database. Therefore, your access to the database may change, depending on the policies governing your Oracle Cloud user account’s access to resources.
Choose your region. See Switching Regions for information on switching regions and working in multiple regions.
Choose your Compartment. See Compartments for information on using and managing compartments.
Select an Autonomous Transaction Processing instance from the list in your compartment.
On the Details page, from the More Actions drop-down list, select Move Resource.
In the Move Resource to a Different Compartment page, select the new compartment.
The Overview and Activity tabs in the Service Console provide information about the performance of the service. The Activity tab also shows past and current monitored SQL statements and detailed information about each statement.
Monitoring Performance from the Service Console – Console Overview
Storage used: total and used storage capacity of the service. It indicates what percentage of the space is currently in use.
CPU utilization (%)
Running SQL statements: average number of running SQL statements historically. This chart shows hourly data. A data point shows the running SQL statements for that hour
Number of OCPUs allocated
SQL statement response time (s)
Database activity / wait events
The ADB captures performance data for 8 days by default. This can be modified using DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS
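For example, to extend the retention window (the value is in minutes; 43200 minutes = 30 days, an illustrative choice):

```sql
-- Retention is specified in minutes
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 43200);
```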
CPU utilization (%): This chart shows the historical CPU utilization of the service:
Auto Scaling Disabled: This chart shows hourly data. A data point shows the average CPU utilization for that hour. For example, a data point at 10:00 shows the average CPU utilization for 9:00-10:00. The utilization percentage is reported with respect to the number of CPUs the database is allowed to use which is two times the number of OCPUs. For example, if the database has four (4) OCPUs, the percentage in this graph is based on 8 CPUs.
Auto Scaling Enabled: For databases with auto scaling enabled the utilization percentage is reported with respect to the maximum number of CPUs the database is allowed to use, which is six times the number of OCPUs. For example, if the database has four OCPUs with auto scaling enabled the percentage in this graph is based on 24 CPUs.
Number of OCPUs allocated: This chart shows the number of OCPUs allocated:
Auto Scaling Disabled: For databases with auto scaling disabled, for each hour the chart shows the number of OCPUs allocated to the database if the database is open for at least some part of the hour.
Auto Scaling Enabled: For databases with auto scaling enabled, for each hour the chart shows the average number of OCPUs used during that hour if that value is higher than the number of OCPUs provisioned. If the number of OCPUs used is not higher than the number of OCPUs provisioned, then the chart shows the number of OCPUs allocated for that hour.
Stopped Database: If the database was stopped for the full hour the chart shows 0 OCPUs allocated for that hour.
Monitor ADB with Performance Hub
You can view real-time and historical performance data from the Performance Hub. Performance Hub shows Active Session History (ASH) analytics, SQL monitoring and workload information.
The Performance Hub page has the following sections:
The time selector.
The Reports drop-down list, containing the option to create and download an AWR (Automatic Workload Repository) report.
The tabbed data area, with the tabs ASH Analytics, SQL Monitoring and Workload.
Monitor Performance with Autonomous Database Metrics
You can monitor the health, capacity, and performance of your Autonomous Transaction Processing databases with metrics, alarms, and notifications. You can use Oracle Cloud Infrastructure console or Monitoring APIs to view metrics.
To view metrics for an Autonomous Database instance:
On the Details page, under Resources, click Metrics.
There is a chart for each metric. In each chart you can select the Interval and Statistic or use the default values.
To create an alarm on a metric, click Options and select Create an Alarm on this Query.
To view Metrics for Autonomous Databases in a Compartment:
Monitoring > Service Metrics.
On the Service Metrics page, under Compartment, select your compartment. Under Metric Namespace, select oci_autonomous_database. If there are multiple Autonomous Databases in the compartment, you can show metrics aggregated across them by selecting Aggregate Metric Streams.
The priority of user requests in Autonomous Transaction Processing is determined by the database service the user is connected with. For example, a user connecting with the atp1_tp service uses the consumer group TP.
Note: After connecting to the database using one service, do not attempt to manually switch that connection to a different service by simply changing the consumer group of the connection. When you connect using a service, Autonomous Transaction Processing performs more actions to configure the connection than just setting its consumer group.
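If you are unsure which service (and therefore which consumer group) a session connected with, you can check from the session itself using a standard SYS_CONTEXT attribute:

```sql
-- Shows the service the current session connected with, e.g. atp1_tp
SELECT SYS_CONTEXT('USERENV', 'SERVICE_NAME') AS service_name
FROM   dual;
```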
Autonomous Transaction Processing includes predefined job_class values to use with Oracle Scheduler.
Manage CPU/IO Shares
Autonomous Transaction Processing comes with predefined CPU/IO shares assigned to different consumer groups. You can modify these predefined CPU/IO shares if your workload requires different CPU/IO resource allocations.
By default, the CPU/IO shares assigned to the consumer groups TPURGENT, TP, HIGH, MEDIUM, and LOW are 12, 8, 4, 2, and 1, respectively. The shares determine how much CPU/IO resources a consumer group can use with respect to the other consumer groups. With the default settings the consumer group TPURGENT will be able to use 12 times more CPU/IO resources compared to LOW, when needed. The consumer group TP will be able to use 4 times more CPU/IO resources compared to MEDIUM, when needed.
You can set CPU/IO shares from the service console or using the PL/SQL package cs_resource_manager.update_plan_directive.
Follow these steps to set CPU/IO shares from the service console:
From the Autonomous Transaction Processing details page, click Service Console.
On the Service Console, click Administration.
Click Set Resource Management Rules.
Select CPU/IO shares to set CPU/IO share values for consumer groups.
Set the CPU/IO share values.
Click Save changes.
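The same change can be scripted; for example, a sketch that doubles the default share for the HIGH group (the value 8 is illustrative):

```sql
-- Give HIGH a CPU/IO share of 8 instead of the default 4
BEGIN
  cs_resource_manager.update_plan_directive(
    consumer_group => 'HIGH',
    shares         => 8);
END;
/
```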
You can configure Autonomous Transaction Processing to terminate SQL statements automatically based on their runtime or the amount of IO they are doing.
You can set rules from the service console or using the PL/SQL package cs_resource_manager.
Follow these steps to set rules from the service console:
From the Autonomous Database details page, click Service Console.
On the Service Console, click Administration.
Click Set Resource Management Rules.
Select the Run-away criteria tab to set rules for consumer groups.
Select the consumer group: HIGH, MEDIUM, LOW, TP, or TPURGENT.
Set the runaway criteria values: Query run time (seconds) and Amount of IO (MB).
Click Save changes.
When a SQL statement in the specified consumer group runs longer than the specified runtime limit, or does more IO than the specified amount, the SQL statement is terminated.
To reset the values and lift the limits, you can set the values to null.
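As a sketch, setting and then lifting a runaway rule for the HIGH group might look like this (the limits of 120 seconds and 1000 MB are illustrative):

```sql
-- Terminate HIGH statements that run longer than 120s or do more than 1000 MB of IO
BEGIN
  cs_resource_manager.update_plan_directive(
    consumer_group     => 'HIGH',
    elapsed_time_limit => 120,
    io_megabytes_limit => 1000);
END;
/

-- Lift the limits again by setting the values to null
BEGIN
  cs_resource_manager.update_plan_directive(
    consumer_group     => 'HIGH',
    elapsed_time_limit => null,
    io_megabytes_limit => null);
END;
/
```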
Manage Optimizer Statistics
The Autonomous Database gathers optimizer statistics automatically so that you do not need to perform this task manually and this helps to ensure your statistics are current. Automatic statistics gathering is enabled in both Autonomous Transaction Processing and Autonomous Data Warehouse and runs in a standard maintenance window.
Autonomous Transaction Processing honors optimizer hints and PARALLEL hints in SQL statements by default. Autonomous Data Warehouse does not. You can disable optimizer hints by setting the parameter OPTIMIZER_IGNORE_HINTS to TRUE at the session or system level using ALTER SESSION or ALTER SYSTEM.
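For example, to make the optimizer ignore embedded hints for the current session:

```sql
-- Session-level; use ALTER SYSTEM instead to apply database-wide
ALTER SESSION SET OPTIMIZER_IGNORE_HINTS = TRUE;
```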
Manage Automatic Indexing
Automatic indexing automates the index management tasks in Autonomous Database. Auto Indexing is disabled by default in Autonomous Database.
Creating indexes manually requires deep knowledge of the data model, application, and data distribution. In the past, DBAs were responsible for making choices about which indexes to create, and then sometimes the DBAs did not revise their choices or maintain indexes as the conditions changed. As a result, opportunities for improvement were lost, and use of unnecessary indexes could be a performance liability. The automatic indexing feature in Autonomous Database monitors the application workload and creates and maintains indexes automatically.
To enable automatic indexing: EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');
This enables automatic indexing in a database and creates any new auto indexes as visible indexes, so that they can be used in SQL statements.
When automatic indexing is enabled, index compression for auto indexes is enabled by default.
To disable automatic indexing: EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF'); -- (existing auto indexes remain enabled)
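Once enabled, you can review what the feature has been doing. A minimal sketch using the DBMS_AUTO_INDEX reporting function (called with its defaults, which cover the last 24 hours):

```sql
-- Enable auto indexing, then pull the activity report
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');
SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;
```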
Rotate Wallets for Autonomous Database
Wallet rotation lets you invalidate existing client certification keys for a database instance or for all Autonomous Database instances in a region.
You may want to rotate wallets for any of the following reasons:
If your organization’s policies require regular client certification key rotation.
When a client certification key or a set of keys is suspected to be compromised.
There are two options for client certification key rotation:
Per-database with Instance Wallet selected: –> For the database whose certification key is rotated, any existing database specific instance wallets will be void. After you rotate a wallet you have to download a new wallet to connect to the database.
–> Regional wallets containing all database certification keys continue to work.
–> All user sessions are terminated for the database whose wallet is rotated. User session termination begins after wallet rotation completes, however this process does not happen immediately.
Regional level with Regional Wallet selected: –> For the region whose certification key is rotated, both regional and database specific instance wallets will be void. After you rotate a wallet you have to download new regional or instance wallets to connect to any database in the region.
–> All user sessions are terminated for the databases in the region whose wallet is rotated. User session termination begins after wallet rotation completes, however this process does not happen immediately.
Please note: If you want to terminate all connections immediately after the wallet rotation completes, Oracle recommends that you stop and then start your instance. This provides the highest level of security for your database.
Also, Oracle recommends you provide a database-specific instance wallet, with Wallet Type set to Instance Wallet when you use Download Wallet, to end users and for application use whenever possible. Regional wallets should only be used for administrative purposes that require potential access to all Autonomous Databases within a region.
You can also use the Autonomous Database API to rotate wallets using UpdateAutonomousDatabaseRegionalWallet and UpdateAutonomousDatabaseWallet.
The first thing to consider when migrating to the Oracle Autonomous Database is how to load and move your data into the Autonomous Database. There are several options for you. When it comes to loading your data, the traditional tools that you are accustomed to using, such as Oracle Data Pump and SQL Loader, can be used to load data into the Oracle database.
When it comes to moving data into the Autonomous Database, Oracle supports many different data formats, from SQL*Loader text files, to export/import dump files, to CSV, to JSON files. When it comes to object storage, ADB supports a few third-party cloud object storage platforms, such as AWS S3 and Azure Blob, and, of course, Oracle's own Object Storage. ADB also supports shifting data from an OCI application layer, as well as from virtual machines that may be hosted in your cloud tenancy.
When you look at migrating an existing database into the Autonomous Database, it’s important to realize that a physical database cannot simply be migrated to an autonomous database. It is not possible to migrate to the Autonomous Database by simply unplugging a PDB from an on-premise container database and plugging it into an Autonomous Database. This is not a supported option as of now.
Also, keep in mind that all legacy features that are not supported must be removed, for example legacy-style LOBs. Use of Data Pump export/import is required, and it supports exporting Oracle databases version 10.1 and higher.
So when we look at approaches to move data into the Autonomous Database, data can obviously be moved with Data Pump. And the nice thing about this approach is that it eliminates legacy formats, it can upgrade across versions, and it's platform-independent. It's a very portable way to load data into a pre-existing or newly created Autonomous Database.
Another thing to keep in mind is that GoldenGate can be used. GoldenGate replication can keep the on-premise or non-autonomous database up, running, and online during the process, meaning that you can have a zero-downtime migration. And keep in mind that GoldenGate does have certain restrictions, for example around ROWIDs and nested tables. Migrating third-party applications that use the Oracle database can also be supported, but it's important to ensure that the vendor supports the Autonomous Database. So while this may be technically possible, it is definitely important to work with the vendor.
Talking about Oracle GoldenGate, it is possible to set up GoldenGate between your on-premise or other source system and use the Autonomous Database as a target database. Currently, only non-integrated Replicats are supported with the Autonomous Database. And the GoldenGate software release required on the source system is Oracle GoldenGate for Oracle Database release 12.3.0.1.2 and later. For GoldenGate, the Oracle Autonomous Database cannot be a source database, it can only be a target, so please keep this in mind for rollback purposes.
When we look at different methods that are available to us, we support many, many approaches. The SQL Developer Data Import Wizard, SQL Developer Migration Workbench, Oracle Data Pump, SQL*Loader, database links, DBMS_CLOUD Package, external tables, data sync, as well as other ETL tools. These are all approaches that can be used, and there are going to be certain situations where one will work better than the other.
Often these migrations involve a combination of tools. For example, the SQL Developer Data Import Wizard is a very powerful tool for loading data or flat files into the Autonomous Database. But in order to work with large data sets, I would strongly advise that you work with Oracle's Cloud Object Storage. The SQL Developer Migration Workbench is a very powerful tool for migrating third-party databases, and it can work with the Object Storage support as well. And Oracle does provide a plugin to allow you to migrate from other databases, such as SQL Server and Amazon Redshift.
Data Pump, as I mentioned previously, is definitely a powerful tool and one that is definitely a recommended tool, especially when migrating from different platforms. There will be situations where it may be the only approach, depending on the platform you’re coming from and the data source. But obviously, it’s an Oracle to Oracle solution.
So let's look at loading data into the Autonomous Database. The DBMS_CLOUD package is the preferred method, because with this tool Oracle gives you the ability to load various types of data, whether it be a CSV, an Avro file, a JSON file, or just a regular text file. The DBMS_CLOUD package will permit you to perform these updates, inserts, and data loads fairly efficiently.
Basically, what's required is that you copy your files, in whatever format they are, to the Oracle Object Storage, and then run the DBMS_CLOUD package from within your Autonomous Database to load that data into your Autonomous Database. And yes, the volume of data that this supports is unlimited. We can work with very large data volumes to perform that operation, and it works the same regardless of which Autonomous Database you're connected to.
Loading using SQL*Loader is definitely supported, and it's a good approach for loading data that's sitting on a local client machine. And it works over the network: you can transfer your data without having to set up the Object Storage to perform this operation. But do keep in mind, the fact that you will be going over your network and running from a client machine means that the volume of data you can transfer is limited.
And then when it comes to the Import Wizard in SQL Developer, this tool is very powerful. It can import all types of files, and it's good for importing XLS files if you wish. Keep in mind that it's limited to the number of rows supported by the Excel version you're running. This approach is good for loading data from your local desktop. One of this utility's most powerful features is that it gives you a clear picture of what your data looks like and how it will be mapped when loaded into the Autonomous Database. So it's a very good tool for setting up a sandbox or a POC and determining how your data migration is going to work. It helps with the mappings, etc.
When it comes to the Object Storage, there’s a package called DBMS_CLOUD, and that package is a wrapper on top of our Oracle external table functionality. It simplifies the requirements of defining how you’re going to load your data. And we’re going to drill into some of these packages or functions that are included, COPY_DATA being the one for loading your data. So keep that in mind, when it comes time to load your data using the DBMS_CLOUD package, COPY_DATA is the one that you will be using for actually loading the data into the database.
ADB APIs for Object Store Access
PL/SQL package, DBMS_CLOUD
Wrapper on top of Oracle external table functionality
Simplifies creating external tables on object store files
No need to type complicated "create external table" commands
Data loading API: DBMS_CLOUD.COPY_DATA
External table API for queries: DBMS_CLOUD.CREATE_EXTERNAL_TABLE
DBMS_CLOUD is an interface to help you work with your Object Storage from your Autonomous Database. Because, remember, you do not have access to the local file system in the Autonomous Database, so you rely upon that package to perform these tasks.
In addition to DBMS_CLOUD, we have the DBMS_CLOUD_ADMIN package. And the DBMS_CLOUD_ADMIN package is only available on Autonomous Database Shared. And the reason for this is because these operations can be run natively through the database in Autonomous Dedicated. You can type the command, create database link, drop database link, etc. With Autonomous Shared, you do not have that ability. Those capabilities are actually locked down from the end users. So the way you create a database link or drop a database link, the way you enable or disable application continuity, is all done through this package, as well as granting tablespace quotas.
The DBMS_CLOUD_ADMIN package is only available for the ADMIN user. To run it as a user other than ADMIN, the user must be granted EXECUTE privileges on the specific procedures, e.g. CREATE_DATABASE_LINK. Use GRANT_TABLESPACE_QUOTA to explicitly grant a quota to a user if they need to load data.
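As a sketch, the ADMIN user might grant these privileges like this (DBUSER and the quota value are hypothetical placeholders):

```sql
-- Run as ADMIN: let DBUSER use the package and give it room to load data
GRANT EXECUTE ON DBMS_CLOUD_ADMIN TO DBUSER;
EXEC DBMS_CLOUD_ADMIN.GRANT_TABLESPACE_QUOTA(username => 'DBUSER', tablespace_quota => '10G');
```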
We also have a package that's available on Autonomous Shared called DBMS_CLOUD_MACADM. DBMS_CLOUD_MACADM is used for configuring Oracle Database Vault on ADB. What you basically do, when you use this package, is grant the DV_OWNER and DV_ACCTMGR roles to a user. Once those roles have been granted, that user can then enable, disable, and manage Database Vault.
Now, remember, DV_OWNER cannot be the ADMIN user. So you cannot grant DV_OWNER to ADMIN, and this is one way of enforcing separation of duties. Keep in mind that enabling Database Vault requires the autonomous instance to be restarted, and Database Vault cannot be controlled by the ADMIN user. However, the roles to perform these operations are granted through the ADMIN user to these specific sets of users. So it's a mechanism to control and separate duties.
When it comes to loading data with the Object Storage, the Object Storage provides a very fast and low-cost model for storing data, and it's a very good utility for staging data. It's possible to query the contents of the Object Storage directly from your Autonomous Database. The nice thing about using the Object Storage is that it simplifies performing ETL- and ELT-type operations. It also allows you to query in place, without physically moving your data from the Object Storage into the Autonomous Database. It's possible to create hybrid partitioned tables or external tables that map to the Object Storage. And it is extremely inexpensive, and your data is encrypted so that only you can see it. A very powerful feature. In addition, it supports the Swift object URL as well as Oracle's own Object Storage URL.
Object Storage Integration
ADB is fully integrated with major object stores: Oracle, AWS, Azure
Choice of access:
Load object store files into the database
Query object store files on the fly without loading
Supports all text file formats: CSV, delimited, JSON, etc.
CREATE_CREDENTIAL Procedure This procedure stores Cloud Object Storage credentials in the Autonomous Data Warehouse database. Use stored credentials for data loading or for querying external data residing in the Cloud.
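A minimal sketch of creating a credential (the credential name, username, and token value are placeholders; for OCI Object Storage the password is an auth token, not your console password):

```sql
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',
    username        => 'cloud_user@example.com',
    password        => '<auth_token>'
  );
END;
/
```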
DBMS_CLOUD.COPY_DATA (
  table_name      => 'CHANNELS',
  credential_name => 'OBJ_STORE_CRED',
  file_uri_list   => 'https://objectstorage..oraclecloud.com/n///',
  format          => json_object('type' value 'CSV', 'skipheaders' value '1')
);
CREATE_EXTERNAL_TABLE Procedure This procedure creates an external table on files in the Cloud. This allows you to run queries on external data from Autonomous Data Warehouse. Handles compression, parallelism, logging automatically
DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
  table_name      => 'CHANNELS_EXT',
  credential_name => 'OBJ_STORE_CRED',
  file_uri_list   => 'https://objectstorage..oraclecloud.com/n///',
  format          => json_object('type' value 'CSV', 'skipheaders' value '1'),
  column_list     => 'CHANNEL_ID NUMBER, CHANNEL_DESC VARCHAR2(20), ... X NUMBER'
);
DELETE_FILE Procedure This procedure removes the specified file from the specified directory on Autonomous Data Warehouse.
PUT_OBJECT Procedure This procedure copies a file from Autonomous Data Warehouse to the Cloud Object Storage. The maximum file size allowed in this procedure is 5 gigabytes (GB).
VALIDATE_EXTERNAL_TABLE Procedure This procedure validates the source files for an external table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Data Warehouse.
To troubleshoot loads: SELECT table_name, status, rows_loaded, logfile_table, badfile_table FROM user_load_operations;
Export an Existing Oracle Database to Import into ADB
Oracle recommends schema mode for migrating to ADB to import schemas and their data.
List the schemas you want to export by using the schemas parameter.
Set the parallel parameter to at least the number of CPUs you have in your Autonomous Database to use parallelism.
The exclude and data_options parameters ensure that object types not required in ADB will not be exported and table partitions are grouped together so that they can be imported faster during the import to ADB.
i.e. Data Pump lets you exclude migration of objects like indexes and materialized views that are not required.
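Putting those parameters together, a typical export command might look like the following sketch (the schema name, connect string, and parallel degree are placeholders; the exclude list reflects object types ADB does not need):

```shell
# Hypothetical Data Pump export of the SH schema for migration to ADB
expdp system/password@source_db \
  schemas=SH \
  parallel=8 \
  exclude=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link \
  data_options=group_partition_table_data \
  dumpfile=export%u.dmp
```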
On ADB create credentials to access the target database. The username and password you specify with DBMS_CLOUD.CREATE_CREDENTIAL are the credentials for the target database that you use to create the database link.
Create the database link to the target database using DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
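A sketch of the call (hostname, service name, and directory are placeholders; ssl_server_cert_dn comes from the target database's certificate):

```sql
BEGIN
  DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
    db_link_name       => 'TARGETLINK',
    hostname           => 'target.example.com',
    port               => '1522',
    service_name       => 'example_service_name',
    ssl_server_cert_dn => 'CN=target.example.com',
    credential_name    => 'DB_LINK_CRED',
    directory_name     => 'DBLINK_WALLET_DIR');
END;
/
```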
Users other than ADMIN require privileges to run DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
Only one wallet file is valid per directory for use with database links. You can only upload one cwallet.sso at a time to the directory you choose for wallet files (for example DBLINK_WALLET_DIR). This means with a cwallet.sso in DBLINK_WALLET_DIR you can only create database links to the databases for which the wallet in that directory is valid. To use multiple cwallet.sso files with database links you need to create additional directories and put each cwallet.sso in a different directory. When you create database links with DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK, specify the directory that contains the wallet with the directory_name parameter.
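A hedged sketch of staging the wallet: create the directory, then download cwallet.sso into it with DBMS_CLOUD.GET_OBJECT (the bucket URI and credential name are placeholders):

```sql
CREATE DIRECTORY dblink_wallet_dir AS 'dblink_wallet_dir';
BEGIN
  DBMS_CLOUD.GET_OBJECT(
    credential_name => 'OBJ_STORE_CRED',
    object_uri      => 'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/o/cwallet.sso',
    directory_name  => 'DBLINK_WALLET_DIR');
END;
/
```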
Autonomous Database sets the SEC_CASE_SENSITIVE_LOGON parameter to true and this value cannot be changed. If your target database is not an Autonomous Database, then you must set the SEC_CASE_SENSITIVE_LOGON parameter to true on the target database.
Loading JSON Documents
For loading data from collections in the cloud, Autonomous Transaction Processing provides the procedure DBMS_CLOUD.COPY_COLLECTION.
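For example, a newline-delimited JSON file in Object Storage can be loaded into a collection roughly like this (collection name and URI are placeholders; the recorddelimiter format option marks each line as one document):

```sql
BEGIN
  DBMS_CLOUD.COPY_COLLECTION(
    collection_name => 'MYCOLLECTION',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<bucket>/o/myfile.json',
    format          => json_object('recorddelimiter' value '0x''0A'''));
END;
/
```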
This is an Autonomous Database Dedicated component that the Fleet and Database Administrator would set up. So, in this case, we can say we’re going in and we’re configuring what a resource is.
An OCI Autonomous RESOURCE can be one of:
autonomous-exadata-infrastructures: dedicated hardware resources
autonomous-container-databases: runtime environments that meet specific SLAs
autonomous-databases: application databases
autonomous-backups: data archives
Policy statements: allow group <GROUP> to <VERB> <RESOURCE> in compartment <COMPARTMENT>
Group: a specific set of users with similar privileges
Compartment: an operating context for a specific set of service resources that are only available to GROUPS who are explicitly granted access
Policy: is used to bind privileges for a GROUP to a specific set of resources in a COMPARTMENT
Where VERB is one of: INSPECT, READ, USE, MANAGE
INSPECT: a limited-use read-only grant intended for auditors
READ: a read-only grant, allowing a user to see details of existing resources
USE: a grant allowing a user to take all actions on existing resources
MANAGE: a grant allowing a user to both create new resources and take all actions on existing resources
e.g. allow group CoyoteDBA to MANAGE autonomous-databases in compartment CoyoteCompartment
allow group CoyoteDBA to MANAGE autonomous-backups in compartment CoyoteCompartment
allow group RoadRunnerDBA to MANAGE autonomous-databases in compartment RoadRunnerCompartment
allow group RoadRunnerDBA to MANAGE autonomous-backups in compartment RoadRunnerCompartment
allow group AcmeFA to MANAGE autonomous-exadata-infrastructures in compartment FACompartment
allow group AcmeFA to MANAGE autonomous-container-databases in compartment FACompartment
allow group RoadRunnerDBA to READ autonomous-container-databases in compartment FACompartment
allow group CoyoteDBA to READ autonomous-container-databases in compartment FACompartment
Connecting to the Oracle Autonomous Database relies upon a wallet to help with an encrypted SSL connection over the public internet. Keep in mind that the wallet is mandatory when connecting to Autonomous Database Shared. However, with Autonomous Database Dedicated, you can choose to connect directly without using a wallet. So most of this post is 100% required when connecting to the Autonomous Database Shared. It’s optional when connecting to Dedicated.
So with that being said, all apps will use a secure connection to connect to the Autonomous Database, and the Autonomous Database uses certificate authentication and SSL. This ensures that no unauthorized access is available to the ADB, and it also ensures that communications between the app and the server are fully encrypted and cannot be intercepted or altered. This information that’s required to authenticate to the database is stored in a wallet that’s available on both the client (where your application is running) and the server (where your database is running), and is automatically generated for you when provisioning the Autonomous Database.
When you download this wallet from your Autonomous Database, you'll see there's a collection of files included with it: a couple of key files, plus a tnsnames.ora file and a sqlnet.ora file. These files contain the service handles and connect strings for connecting to your Autonomous Database. You will also see there's a readme file, which contains information about when the wallet is expected to expire. Just so you know, the expiration date of a wallet is around the two-and-a-half-year mark, so you're looking at a long period of time before these wallets expire, and you will get warnings before it's time to replace them.
There are two types of wallets:
Instance Wallet file Instance wallet contains only credentials and keys for a single Autonomous Database
Regional Wallet file Regional wallet contains credentials and keys for all Autonomous DBs in a specified region. Regional wallets should only be used by DBAs.
For ADB-Dedicated, the wallet file contains only credentials and keys for a single autonomous database. There is no regional file.
The ADB only accepts secure connections to the database, and you need to download that wallet first. You can download it both from the DB Connection button on your main instance details page, as well as through the Service Console. You can also use API calls to download the wallet as well. It’s definitely a good practice to store the credential file in a secured location and only share the file contents with people that have authorized access. However, with that file alone, you cannot connect to the database. You still need to know the database password. It provides a secure way to connect to your database to route you to it, but not to actually log into the database.
You also have the option to rotate either the instance wallet or the regional wallet. If you rotate the instance wallet, all the keys for that associated instance are discarded and a completely new wallet is generated; the regional wallet will continue to work. In either case, existing connections continue to work, but new connections must be established using the new wallet. So ADB allows the wallet-rotation operation to be performed gracefully and in a controlled way.
Predefined Database Service Names
If you look inside your wallet file, you will notice that there are several connection services predefined for you, depending on which service you're running.
tpurgent (ATP only): The highest priority application connection service for time critical OLTP operations. This connection supports manual parallelism Share of resources: 12
tp (ATP only): A typical application connection service for OLTP operations. This connection service does not run with parallelism. Share of resources: 8
high: A high priority application connection service for reporting and batch operations. All operations run in parallel and are subject to queuing. Share of resources: 4
medium: A typical application connection service for reporting and batch operations. All operations run in parallel and are subject to queuing. Using this service, the degree of parallelism is limited to 1.25 × OCPUs Share of resources: 2
low: A lowest priority application connection service for reporting and batch processing operations. This connection service does not run with parallelism. Share of resources: 1
Please note: After connecting to the database using one service, do not attempt to manually switch that connection to a different service by simply changing the consumer group of the connection. When you connect using a service, ADB performs more actions to configure the connection than just setting its consumer group.
If we were running an Autonomous Data Warehouse (ADW), we would only have high, medium, and low. In the case of Autonomous Transaction Processing (ATP), we have both tpurgent and tp in addition to high, medium, and low. When you go to connect to the database, the autonomous database will have automatically created the service handles for you, prefixed with your database name, e.g. dbname_high, dbname_medium, and dbname_low.
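For illustration, a downloaded tnsnames.ora entry looks roughly like this (the host, region, and tenancy-specific prefix are placeholders):

```
dbname_high = (description=
  (address=(protocol=tcps)(port=1522)(host=adb.<region>.oraclecloud.com))
  (connect_data=(service_name=<prefix>_dbname_high.adb.oraclecloud.com))
  (security=(ssl_server_dn_match=yes)))
```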
The tp and tpurgent services are focused on OLTP workloads; they're intended to support very high concurrency, as you would typically see for a transactional workload. You'll see that ADB supports up to 100 times the number of CPUs in concurrent sessions before it starts queuing, whereas the high, medium, and low data warehouse services have a limitation on the number of concurrent sessions connected to the database.
With that being said, the parallelism for high, medium, and low is different. For high, ADB allows parallel operations, such as parallel query, to run across all CPUs. Medium runs with a default degree of 4, and low runs with a degree of 1, and it's not possible to change that for low.
With OLTP, ADB explicitly sets tp as only ever using one parallel query slave. And that’s because this is a transactional workload. It’s not typical to run things with parallel query enabled. If, however, you have some batch process or some operation that requires some level of parallelism, if you’re using tpurgent as your service to connect to, you can explicitly define your parallel degree on the operation you’re performing.
Now, in addition to parallelism, ADB also controls the amount of resource shares. So you’ll notice OLTP tpurgent has a high number of shares of resources that it gets slices to, if you wish. tp doesn’t have as high a number of resource shares, but it is up there, because we’re intending to run operations concurrently and simultaneously. And in transactional workloads, typically you’re going to see these will be short, bursty style operations.
With warehouse, batch, and reporting, on the other hand, the number of resource shares you get are diminished. And that’s because the idea is this is more of an ad-hoc, asynchronous style of performing work, where the units of work could be varied in complexity and size. So we typically do not share as many of these resources, bearing in mind that these resources could also be consuming a lot of memory or running for an inordinate amount of time when compared to an OLTP workload.
So, connecting to the Autonomous Database: I will point out that the Autonomous Database listens on port 1522. That's the standard on ADB Shared, and it's the default for ADB Dedicated. All connections, as I pointed out earlier, use SSL for encryption. And unlike other cloud services, you do not have a Unix command-line interface on the system you're connecting to. Whether you're running Autonomous Database Shared or Dedicated, you do not have access to the underlying operating system.
Having said that, all Autonomous Databases can be connected through the typical client utilities, such as SQL*Net, JDBC thick or thin, OCI, etc. Remember, the clients will require the security credentials wallet to connect when using those tools. And there’s a few connectivity options. The first one is through the public internet. And when connecting through the public internet, you could do public endpoints or private endpoints. So there’s two parts to that.
And then the additional approach is using the FastConnect model with public peering, in which you could connect through either a private connection from an on-premise network or from another cloud vendor. And obviously, through public internet, you could set up private endpoints, or you could set up a FastConnect private connectivity to other clouds. FastConnect is useful for extremely high bandwidth. Perhaps you have an application tier that does require the benefit of being on a dedicated pipe to connect to.
There is no SYS or SYSTEM. Admin is the highest user you will have in an Autonomous Database.
Transparent Application Continuity
Preconfigured with Autonomous Dedicated and can be enabled in Autonomous-Shared with DBMS_CLOUD_ADMIN.ENABLE_APP_CONT
Transparent application continuity is there to help with failover and with masking the transfer of instances from one machine to another. So remember, Autonomous Shared provides application continuity, whereas Dedicated provides full transparent application continuity. Both have a full view of session state, but application continuity allows customers the flexibility to keep their side effects and restore more complex state, such as session-duration LOBs, at failover. And you can use the ORAchk protection report to identify the protection level and see where things are at.
Oracle Call Interface (OCI), ODBC, and JDBC OCI
Wallet File is required
Set WALLET_LOCATION in sqlnet.ora
Set TNS_ADMIN environment variable to credentials file
Oracle Client 12.1.0.2 or higher
HTTP Proxy in tnsnames.ora
So to connect through OCI, the traditional TNS model, you still require your wallet. You would set your wallet location in the sqlnet.ora file, pointing to wherever you've downloaded that wallet. Once you've done that, SQL*Net, if you're running client 12.1.0.2 or higher, is capable of reading that wallet to find what it needs. You would also set your TNS_ADMIN environment variable to point to the location of that wallet, or to wherever you've put your tnsnames.ora file if you've unzipped your wallet.
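The sqlnet.ora shipped inside the wallet already contains the needed directives; you only adjust the wallet directory (shown here with its default placeholder):

```
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY = "?/network/admin")))
SSL_SERVER_DN_MATCH = yes
```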
Connecting with .NET applications, you would download and install ODT and ODP.NET. And then you can go ahead and run Visual Studio, and use the NuGet package manager, and then you can download ODAC, which is the Oracle Data Access Components for Windows. Once you’ve got all that configured and setup, you can then set your wallet location in sqlnet.ora, and you would ensure that you’ve set TLS 1.2 to be enabled. And then your .NET applications will connect no problem.
JDBC Thin or Universal Connection Pool (UCP)
A couple of requirements: you need to install JDK 8u161 or higher, or just use JDK 9 or higher and you're good to go. And you would download the JDBC Thin driver, ojdbc8.jar; or you could use the 12.1 version, patched up, together with ucp.jar for the Universal Connection Pool.
Once you've set that up and have the correct driver working, you would copy the ojdbc.properties file, which is inside your wallet. You can unzip your wallet, copy the contents to your TNS_ADMIN directory, and set the properties to point to your net wallet location. The JDBC URL then uses jdbc:oracle:thin:@service_name, where service_name is the service name inside your TNS entry.
Now, if you're not using a TNS_ADMIN environment variable, you would instead pass TNS_ADMIN=<wallet directory> as part of your connect string. If you're using an HTTP proxy, perhaps because a proxy server brokers your outbound requests, that can easily be configured: you add the HTTP proxy hostname to tnsnames.ora, and that's documented for you as well.
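As a sketch, the thin-driver URL and a proxied tnsnames.ora address might look like this (the service name, wallet path, and proxy host are placeholders):

```
jdbc:oracle:thin:@dbname_high?TNS_ADMIN=/path/to/wallet

dbname_high = (description=
  (address=(https_proxy=proxyhostname)(https_proxy_port=80)
           (protocol=tcps)(port=1522)(host=adb.<region>.oraclecloud.com))
  (connect_data=(service_name=<prefix>_dbname_high.adb.oraclecloud.com))
  (security=(ssl_server_dn_match=yes)))
```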
ADB also supports connecting through Python, Node.js, and other scripting languages. You install the Instant Client, install the language driver, unzip the credentials wallet, and use the TNS entries it contains. In some cases, like Python, you can simply run a yum install command to get the driver.
ADB also supports connecting through Active Directory. Traditionally, you would have been expected to authenticate through an Oracle LDAP repository. Since Oracle 18c, the database is able to connect directly to Active Directory to authorize and authenticate your users, manage them through Active Directory, and route them to your Autonomous Database. So if you have a Windows or Microsoft-based application that uses Active Directory, it is supported to work natively with the Autonomous Database. Just keep in mind that when using Autonomous Database with Active Directory, both users and roles are mapped in Active Directory and not in the Autonomous Database.