Oracle Cloud Infrastructure Documentation

Block Volume Performance

The content in the sections below applies to Category 7 and Section 3.b of the Oracle PaaS and IaaS Public Cloud Services Pillar documentation.

The Oracle Cloud Infrastructure Block Volume service lets you dynamically provision and manage block storage volumes. You can create, attach, connect, and move volumes as needed to meet your storage and application requirements. The Block Volume service uses NVMe-based storage infrastructure and is designed for consistency. You need only provision the capacity you need; performance scales with the characteristics of the selected elastic performance option, up to the service maximums. See Block Volume Elastic Performance for specific details about the elastic performance options.

The Block Volume service supports volumes sized from 50 GB to 32 TB, in 1 GB increments. You can attach up to 32 volumes to an instance, with a maximum of 1 PB of attached volumes per instance. Latency is independent of the instance shape and volume size, and is always sub-millisecond at the 95th percentile for the Balanced and Higher Performance elastic performance options.
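As a minimal sketch of provisioning within these limits, the following OCI CLI command creates a volume of a given size. The availability domain, compartment OCID, and display name shown here are placeholders, and the --vpus-per-gb values are an assumption about how the CLI maps to the elastic performance options:

# Placeholder availability domain and compartment OCID; substitute your own.
oci bv volume create \
  --availability-domain "Uocm:PHX-AD-1" \
  --compartment-id "ocid1.compartment.oc1..exampleuniqueID" \
  --size-in-gbs 500 \
  --vpus-per-gb 20 \
  --display-name "higher-performance-volume"
# --size-in-gbs must be between 50 and 32768 (32 TB), in 1 GB increments.
# Assumption: --vpus-per-gb 0 = Lower Cost, 10 = Balanced, 20 = Higher Performance.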

Note

Block Volume performance can be limited by the network bandwidth of the instance shape. For more information, see Compute Shapes.

Higher Performance

The Higher Performance elastic performance option is recommended for workloads with the highest I/O requirements, such as large databases, that need the best possible performance. This option provides the best linear performance scaling, at 75 IOPS/GB up to a maximum of 35,000 IOPS per volume. Throughput also scales at the highest rate, at 600 KB/s per GB up to a maximum of 480 MB/s per volume.

The following table lists the Block Volume service's throughput and IOPS performance numbers by volume size for this option. IOPS and throughput scale linearly with volume size up to the service maximums, so you can predictably calculate the performance numbers for a specific volume size. If you're trying to achieve specific performance targets for volumes configured to use the Higher Performance elastic performance option, you can determine the minimum volume size to provision using this table, or the sizing sketch after it, as a reference.

Volume Size      Max Throughput       Max Throughput       Max IOPS
                 (1 MB block size)    (8 KB block size)    (4 KB block size)
50 GB            30 MB/s              30 MB/s              3,750
100 GB           60 MB/s              60 MB/s              7,500
200 GB           120 MB/s             120 MB/s             15,000
300 GB           180 MB/s             180 MB/s             22,500
400 GB           240 MB/s             240 MB/s             30,000
500 GB           300 MB/s             280 MB/s             35,000
800 GB - 32 TB   480 MB/s             280 MB/s             35,000
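As a rough illustration of that sizing calculation, the following shell sketch computes the minimum Higher Performance volume size for a target IOPS from the published 75 IOPS/GB rate. The helper name and structure are illustrative, not part of the service:

#!/bin/bash
# Minimum Higher Performance volume size (GB) for a target IOPS,
# assuming the published 75 IOPS/GB rate and the 35,000 IOPS per-volume cap.
min_size_for_iops() {
  local target_iops=$1
  if (( target_iops > 35000 )); then
    echo "target exceeds the 35,000 IOPS per-volume maximum" >&2
    return 1
  fi
  # Round up: (target + rate - 1) / rate
  echo $(( (target_iops + 74) / 75 ))
}

min_size_for_iops 15000   # prints 200 (GB), matching the table row above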

Balanced Performance

The Balanced elastic performance option provides a good balance between performance and cost savings for most workloads, including those that perform random I/O, such as boot volumes. This option provides linear performance scaling at 60 IOPS/GB up to a maximum of 25,000 IOPS per volume. Throughput scales at 480 KB/s per GB up to a maximum of 480 MB/s per volume.

The following table lists the Block Volume service's throughput and IOPS performance numbers by volume size for this option. IOPS and throughput scale linearly with volume size up to the service maximums, so you can predictably calculate the performance numbers for a specific volume size. If you're trying to achieve specific performance targets for volumes configured to use the Balanced elastic performance option, you can determine the minimum volume size to provision using this table as a reference.

Volume Size      Max Throughput       Max Throughput       Max IOPS
                 (1 MB block size)    (8 KB block size)    (4 KB block size)
50 GB            24 MB/s              24 MB/s              3,000
100 GB           48 MB/s              48 MB/s              6,000
200 GB           96 MB/s              96 MB/s              12,000
300 GB           144 MB/s             144 MB/s             18,000
400 GB           192 MB/s             192 MB/s             24,000
500 GB           240 MB/s             200 MB/s             25,000
750 GB           360 MB/s             200 MB/s             25,000
1 TB - 32 TB     480 MB/s             200 MB/s             25,000
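Conversely, you can go from a provisioned size to the performance you should expect. The sketch below applies the Balanced option's published rates (60 IOPS/GB and 480 KB/s per GB) together with their per-volume caps; the function name is illustrative:

#!/bin/bash
# Expected Balanced-option performance for a volume of a given size (GB),
# assuming 60 IOPS/GB capped at 25,000 IOPS and 480 KB/s per GB capped at 480 MB/s.
balanced_performance() {
  local size_gb=$1
  local iops=$(( size_gb * 60 ))
  (( iops > 25000 )) && iops=25000
  local mbps=$(( size_gb * 480 / 1000 ))   # KB/s per GB -> MB/s
  (( mbps > 480 )) && mbps=480
  echo "${size_gb} GB: ${iops} IOPS, ${mbps} MB/s max throughput"
}

balanced_performance 500   # 500 GB: 25000 IOPS, 240 MB/s max throughput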

Lower Cost

The Lower Cost elastic performance option is recommended for throughput-intensive workloads with large sequential I/O, such as streaming, log processing, and data warehouses. This option provides linear scaling at 2 IOPS/GB up to a maximum of 3,000 IOPS per volume.

The following table lists the Block Volume service's throughput and IOPS performance numbers by volume size for this option. IOPS and throughput scale linearly with volume size up to the service maximums, so you can predictably calculate the performance numbers for a specific volume size. If you're trying to achieve specific performance targets for volumes configured to use the Lower Cost elastic performance option, you can determine the minimum volume size to provision using this table as a reference.

Volume Size      Max Throughput       Max Throughput       Max IOPS
                 (1 MB block size)    (8 KB block size)    (4 KB block size)
50 GB            12 MB/s              0.8 MB/s             100
100 GB           24 MB/s              1.6 MB/s             200
200 GB           48 MB/s              3.2 MB/s             400
300 GB           72 MB/s              4.8 MB/s             600
400 GB           96 MB/s              6.4 MB/s             800
500 GB           120 MB/s             8 MB/s               1,000
750 GB           180 MB/s             12 MB/s              1,500
1 TB             240 MB/s             16 MB/s              2,000
1.5 TB - 32 TB   480 MB/s             23 MB/s              3,000
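At this option's low IOPS ceiling, the 8 KB throughput column is essentially the IOPS limit multiplied by the block size; a one-line check, using decimal-unit arithmetic as an assumption:

# 8 KB throughput implied by an IOPS limit: iops * 8 KB.
echo "$(( 100 * 8 )) KB/s"   # 50 GB volume: 100 IOPS * 8 KB = 800 KB/s = 0.8 MB/s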

For more information about sample FIO commands you can use for performance testing, see Sample FIO Commands for Block Volume Performance Tests on Linux-based Instances.

See Using Block Volumes Service Metrics to Calculate Block Volume Throughput and IOPS for a walkthrough of a performance testing scenario with FIO that shows how you can use Block Volume metrics to determine the performance characteristics of your block volume.
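For context, a random-read IOPS test in that style typically looks like the following. The device path and tuning values here are assumptions for illustration, not values from this topic, and the --readonly flag guards against accidental writes:

# Hypothetical 4 KB random-read IOPS test against an unused raw device.
# Adjust --filename to your attached volume; --readonly prevents any writes.
sudo fio --filename=/dev/sdb --direct=1 --rw=randread --bs=4k \
  --ioengine=libaio --iodepth=256 --numjobs=4 --time_based --runtime=120 \
  --group_reporting --name=iops-test --readonly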

Limitations and Considerations

  • Block Volume performance SLA for IOPS per volume and IOPS per instance applies to the Balanced and Higher Performance elastic performance settings only, not to the Lower Cost setting.
  • The throughput performance results are for bare metal Compute instances. Throughput performance on virtual machine (VM) Compute instances depends on the network bandwidth available to the instance, which also limits the throughput available to the volume. For details about the network bandwidth available for VM shapes, see the Network Bandwidth column in the VM Shapes table.

  • IOPS performance is independent of the instance type or shape, and so applies to all bare metal and VM shapes for iSCSI-attached volumes. For VM shapes with paravirtualized attached volumes, see Block Volume Performance.

  • For the Lower Cost option you may not see the same latency performance that you see with the Balanced or Higher Performance elastic performance options. You may also see a greater variance in latency with the Lower Cost option.

  • Windows Defender Advanced Threat Protection (Windows Defender ATP) is enabled by default on all Oracle-provided Windows images. This tool has a significant negative impact on disk I/O performance. The IOPS performance characteristics described in this topic are valid for Windows bare metal instances with Windows Defender ATP disabled for disk I/O. Customers must carefully consider the security implications of disabling Windows Defender ATP. See Windows Defender Advanced Threat Protection.

  • The IOPS performance characteristics described in this topic are for volumes with iSCSI attachments. Block Volume performance SLA for IOPS per volume and IOPS per instance applies to iSCSI volume attachments only, not to paravirtualized attachments.

    Paravirtualized attachments simplify the process of configuring your block storage by removing the extra commands needed before you can access a volume. However, due to the overhead of virtualization, paravirtualized attachments reduce the maximum IOPS performance for larger block volumes. If storage IOPS performance is of paramount importance for your workloads, you can continue to get the guaranteed performance that Oracle Cloud Infrastructure Block Volume offers by using iSCSI attachments, as in the sketch after this list.
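The extra commands for an iSCSI attachment are typically of the following shape. The IQN, IP address, and port below are placeholders; in practice you copy the exact commands shown for your attachment in the Console:

# Placeholder IQN and IP; use the values displayed for your volume attachment.
sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:exampleuniqueID -p 169.254.2.2:3260
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:exampleuniqueID -n node.startup -v automatic
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:exampleuniqueID -p 169.254.2.2:3260 -l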

Testing Methodology and Performance for Balanced Elastic Performance Option

Warning

  • Before running any tests, protect your data by backing up your data and operating system environment to prevent any data loss.
  • Do not run FIO tests directly against a device that is already in use, such as /dev/sdX. If the device is in use as a formatted disk and has data on it, running FIO with a write workload (readwrite, randrw, write, trimwrite) will overwrite the data on the disk and cause data corruption. Run FIO only on unformatted raw devices that are not in use.

This section describes the setup of the test environments, the methodology, and the observed performance for the Balanced elastic performance configuration option. Some of the sample volume sizes tested were:

  • 50 GB volume - 3,000 IOPS @ 4K

  • 1 TB volume - 25,000 IOPS @ 4K

  • Host maximum, Ashburn (IAD) region, twenty 1 TB volumes - 400,000 IOPS @ 4K

These tests used a wide range of volume sizes and the most common read and write patterns, and were run with the Gartner Cloud Harmony test suite. To show the throughput performance limits, block sizes of 256 KB or larger should be used. For most environments, 4 KB, 8 KB, or 16 KB blocks are common depending on the application workload, and these are used specifically for IOPS measurements.
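A throughput-oriented run therefore uses a larger block size. A minimal sketch, again with an assumed device path and illustrative tuning values:

# Hypothetical sequential-read throughput test using a 256 KB block size.
sudo fio --filename=/dev/sdb --direct=1 --rw=read --bs=256k \
  --ioengine=libaio --iodepth=64 --numjobs=1 --time_based --runtime=120 \
  --group_reporting --name=throughput-test --readonly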

In the observed performance images in this section, the X axis represents the block size tested, ranging from 4 KB to 1 MB. The Y axis represents the IOPS delivered. The Z axis represents the read/write mix tested, ranging from 100% read to 100% write.

Note

Performance Notes for Instance Types

  • The throughput performance results are for bare metal instances. Throughput performance on VM instances is dependent on the network bandwidth that is available to the instance, and further limited by that bandwidth for the volume. For details about the network bandwidth available for VM shapes, see the Network Bandwidth column in the VM Shapes table.

  • IOPS performance is independent of the instance type or shape, so is applicable to all bare metal and VM shapes, for iSCSI attached volumes. For VM shapes with paravirtualized attached volumes, see Block Volume Performance.

1 TB Block Volume

A 1 TB volume was mounted to a bare metal instance running in the Phoenix region. The instance shape was a dense I/O shape, and the workload was direct I/O with a 10 GB working set. The following command was run for the Gartner Cloud Harmony test suite:

~/block-storage/run.sh --nopurge --noprecondition --fio_direct=1 --fio_size=10g --target /dev/sdb --test iops --skip_blocksize 512b

The results showed that for the 1 TB volume, the bandwidth limit for the larger block size tests occurs at 320 MB/s.

The following images show the observed performance for 1 TB:

Observed performance chart, 1 TB volume size

Observed performance slope, 1 TB volume size

50 GB Block Volume

A 50 GB volume was mounted to a bare metal instance running in the Phoenix region. The instance shape was a dense I/O shape, and the workload was direct I/O with a 10 GB working set. The following command was run for the Gartner Cloud Harmony test suite:

~/block-storage/run.sh --nopurge --noprecondition --fio_direct=1 --fio_size=10g --target /dev/sdb --test iops --skip_blocksize 512b

The results showed that for the 50 GB volume, the bandwidth limit is confirmed as 24 MB/s (24,000 KB/s) for the larger block size tests (block sizes of 256 KB or larger), and the maximum of 3,000 IOPS at a 4 KB block size is delivered. For small volumes, a 4 KB block size is common.

The following images show the observed performance for 50 GB:

Observed performance chart, 50 GB volume size

Observed performance slope, 50 GB volume size

Host Maximum - Twenty 1 TB Volumes

Twenty 1 TB volumes were mounted to a bare metal instance running in the Ashburn region. The instance shape was a dense I/O shape, and the workload was direct I/O with a 10 GB working set. The following command was run for the Gartner Cloud Harmony test suite:

~/block-storage/run.sh --nopurge --noprecondition --fio_direct=1 --fio_size=10g --target /dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdm,/dev/sdn,/dev/sdo,/dev/sdp,/dev/sdq,/dev/sdr,/dev/sds,/dev/sdt,/dev/sdu --test iops --skip_blocksize 512b

The results showed that for the host maximum test of twenty 1 TB volumes, the average is 2.1 GB/s of throughput, with 400,000 IOPS delivered to the host for the 50/50 read/write pattern.

The following images show the observed performance for twenty 1 TB volumes:

Observed performance chart, 20 x 1 TB volume size

Observed performance slope, 20 x 1 TB volume size