Oracle Cloud Infrastructure Documentation

Block Volume Performance

The content in the sections below applies to Category 7 and Section 3.b of the Oracle PaaS and IaaS Public Cloud Services Pillar documentation.

The Oracle Cloud Infrastructure Block Volume service lets you dynamically provision and manage block storage volumes. You can create, attach, connect, and move volumes as needed to meet your storage and application requirements. The Block Volume service uses NVMe-based storage infrastructure and is designed for consistency: you provision only the capacity you need, and performance scales linearly per GB of volume size, up to the service maximums. The following table describes the performance characteristics of the service.

Metric                Characteristic
Volume Size           50 GB to 32 TB, in 1 GB increments
IOPS                  60 IOPS/GB, up to 25,000 IOPS
Throughput            480 KB/s per GB, up to 320 MB/s
Latency               Sub-millisecond latencies
Per-instance Limits   32 attachments per instance, up to 1 PB; up to 620K or more IOPS, near line-rate throughput
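For example, applying this linear scaling, a 200 GB volume delivers 200 x 60 = 12,000 IOPS and 200 x 480 KB/s = 96 MB/s of throughput, which matches the per-size figures in the table that follows.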

The following table lists the Block Volume service's throughput and IOPS performance numbers based on volume size. If you're trying to achieve specific performance targets, you can use this table as a reference to provision the minimum volume size required.

Volume Size      Max Throughput (1 MB block size)   Max Throughput (8 KB block size)   Max IOPS (4 KB block size)
50 GB            24 MB/s                            24 MB/s                            3,000
100 GB           48 MB/s                            48 MB/s                            6,000
200 GB           96 MB/s                            96 MB/s                            12,000
300 GB           144 MB/s                           144 MB/s                           18,000
400 GB           192 MB/s                           192 MB/s                           24,000
500 GB           240 MB/s                           200 MB/s                           25,000
700 GB - 32 TB   320 MB/s                           200 MB/s                           25,000
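The 8 KB throughput column plateaus at 200 MB/s because throughput at that block size is bounded by the IOPS cap: 25,000 IOPS x 8 KB = 200,000 KB/s = 200 MB/s. The 1 MB column instead hits the 320 MB/s bandwidth cap.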
Note

Performance Notes for Instance Types

  • The throughput performance results are for bare metal instances. Throughput performance on VM instances depends on the network bandwidth available to the instance, and volume throughput is further limited by that bandwidth. For details about the network bandwidth available for VM shapes, see the Network Bandwidth column in the VM Shapes table.

  • IOPS performance is independent of the instance type or shape, so it applies to all bare metal and VM shapes for iSCSI-attached volumes. For VM shapes with paravirtualized volume attachments, see Paravirtualized Attachment Performance.

  • Latency performance is independent of the instance shape or volume size, and is always sub-millisecond at the 95th percentile.

  • Oracle Cloud Infrastructure does not oversubscribe resources, which ensures stable and predictable throughput, IOPS, and latency performance.

Note

Testing Note for Windows Instances

Windows Defender Advanced Threat Protection (Windows Defender ATP) is enabled by default on all Oracle-provided Windows images. This tool has a significant negative impact on disk I/O performance. The IOPS performance characteristics described in this topic are valid for Windows bare metal instances with Windows Defender ATP disabled for disk I/O. Customers must carefully consider the security implications of disabling Windows Defender ATP. See Windows Defender Advanced Threat Protection.

Note

Paravirtualized Attachment Performance

The IOPS performance characteristics described in this topic are for volumes with iSCSI attachments. The Block Volume performance SLA for IOPS per volume and IOPS per instance applies to iSCSI volume attachments only, not to paravirtualized attachments.

Paravirtualized attachments simplify the process of configuring your block storage by removing the extra commands needed before accessing a volume. However, the overhead of virtualization reduces the maximum IOPS performance for larger block volumes. If storage IOPS performance is of paramount importance for your workloads, you can continue to experience the guaranteed performance Oracle Cloud Infrastructure Block Volume offers by using iSCSI attachments.
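For illustration, a typical Linux iSCSI attach sequence looks like the following sketch; these are the kinds of extra commands that paravirtualized attachments avoid. The IQN, IP address, and port shown here are hypothetical placeholders; the Console displays the actual values for your attachment.

# Register the volume's iSCSI target (IQN, IP, and port are placeholders)
sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:example -p 169.254.2.2:3260
# Configure the target to attach automatically at boot
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:example -n node.startup -v automatic
# Log in to the target, making the volume available as a block device
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:example -p 169.254.2.2:3260 -l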

For more information about FIO command samples you can use for performance testing, see Sample FIO Commands for Block Volume Performance Tests on Linux-based Instances.
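As a simple illustration (not one of the commands from that topic), the following FIO invocation measures 4 KB random-read IOPS. The device path /dev/sdb is a hypothetical placeholder; as the warning in the next section explains, run it only against an unused, unformatted raw device.

sudo fio --name=randread-iops --filename=/dev/sdb --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=64 --numjobs=4 --runtime=60 --time_based --group_reporting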

Testing Methodology and Performance

Warning

  • Before running any tests, back up your data and operating system environment to prevent any data loss.
  • Do not run FIO tests directly against a device that is already in use, such as /dev/sdX. If the device is in use as a formatted disk with data on it, running FIO with a write workload (readwrite, randrw, write, trimwrite) will overwrite the data on the disk and cause data corruption. Run FIO only on unformatted raw devices that are not in use; one way to verify this is shown after this list.
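One way to confirm that a device holds no filesystem and is not mounted (assuming /dev/sdb is the hypothetical device under test) is to check that lsblk reports an empty FSTYPE and MOUNTPOINT for it:

lsblk --fs /dev/sdb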

This section describes the setup of the test environments, the methodology, and the observed performance. Some of the sample volume sizes tested were:

  • 50 GB volume - 3,000 IOPS @ 4K

  • 1 TB volume - 25,000 IOPS @ 4K

  • Host maximum, Ashburn (IAD) region, twenty 1 TB volumes - 400,000 IOPS @ 4K

These tests covered a wide range of volume sizes and the most common read and write patterns, and were generated with the Gartner Cloud Harmony test suite. To show the throughput performance limits, block sizes of 256 KB or larger should be used. For IOPS measurements, 4 KB, 8 KB, or 16 KB block sizes are used, as these are common for most application workloads; a throughput-oriented counterpart to the earlier IOPS example is sketched below.
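For contrast with the 4 KB IOPS example shown earlier, a throughput-oriented test at a 256 KB block size might look like the following sketch (again assuming /dev/sdb is a hypothetical unused raw device):

sudo fio --name=seqread-throughput --filename=/dev/sdb --direct=1 --rw=read --bs=256k --ioengine=libaio --iodepth=64 --runtime=60 --time_based --group_reporting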

In the observed performance images in this section, the X axis represents the block size tested, ranging from 4 KB to 1 MB. The Y axis represents the IOPS delivered. The Z axis represents the read/write mix tested, ranging from 100% read to 100% write.


1 TB Block Volume

A 1 TB volume was mounted to a bare metal instance running in the Phoenix region. The instance used a dense I/O shape, and the workload was direct I/O with a 10 GB working set. The following command was run for the Gartner Cloud Harmony test suite:

~/block-storage/run.sh --nopurge --noprecondition --fio_direct=1 --fio_size=10g --target /dev/sdb --test iops --skip_blocksize 512b

The results showed that for the 1 TB volume, the bandwidth limit for the larger block size tests occurs at 320 MB/s.
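This is consistent with linear scaling hitting the service caps: at 480 KB/s per GB, a 1 TB volume would otherwise reach roughly 480 MB/s, so throughput is capped at the 320 MB/s service maximum, and 1 TB x 60 IOPS/GB likewise exceeds the 25,000 IOPS cap.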

The following images show the observed performance for 1 TB:

Observed performance chart, 1 TB volume size

Observed performance slope, 1 TB volume size

50 GB Block Volume

A 50 GB volume was mounted to a bare metal instance running in the Phoenix region. The instance used a dense I/O shape, and the workload was direct I/O with a 10 GB working set. The following command was run for the Gartner Cloud Harmony test suite:

~/block-storage/run.sh --nopurge --noprecondition --fio_direct=1 --fio_size=10g --target /dev/sdb --test iops --skip_blocksize 512b

The results showed that for the 50 GB volume, the bandwidth limit is confirmed as 24 MB/s for the larger block size tests (256 KB or larger), and the maximum of 3,000 IOPS at a 4 KB block size is delivered. For small volumes, a 4 KB block size is common.
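Both figures follow directly from the linear scaling formulas: 50 GB x 60 IOPS/GB = 3,000 IOPS, and 50 GB x 480 KB/s per GB = 24,000 KB/s = 24 MB/s.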

The following images show the observed performance for 50 GB:

Observed performance chart, 50 GB volume size

Observed performance slope, 50 GB volume size

Host Maximum - Twenty 1 TB Volumes

Twenty 1 TB volumes were mounted to a bare metal instance running in the Ashburn region. The instance used a dense I/O shape, and the workload was direct I/O with a 10 GB working set. The following command was run for the Gartner Cloud Harmony test suite:

~/block-storage/run.sh --nopurge --noprecondition --fio_direct=1 --fio_size=10g --target /dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdm,/dev/sdn,/dev/sdo,/dev/sdp,/dev/sdq,/dev/sdr,/dev/sds,/dev/sdt,/dev/sdu --test iops --skip_blocksize 512b

The results showed that for the host maximum test of twenty 1 TB volumes, the average throughput is 2.1 GB/s, with 400,000 IOPS delivered to the host for the 50/50 read/write pattern.

The following images show the observed performance for the twenty 1 TB volumes:

Observed performance chart, 20 x 1 TB volume size

Observed performance slope, 20 x 1 TB volume size