deleted benchmark tests
@@ -1,411 +0,0 @@
# NFS Performance Benchmark - Claude Analysis

**Date:** 2026-01-19
**Storage Class:** nfs-csi
**NFS Server:** 192.168.0.105:/nfs/NFS/ocp
**Test Environment:** OpenShift Container Platform (OCP)
**Tool:** fio (Flexible I/O Tester)

## Executive Summary

Performance testing of the NAS storage via the nfs-csi storage class shows sustained throughput of **65-80 MiB/s** for sequential operations, which is typical for NFS over 1 Gbps Ethernet.
## Test Configuration

### NFS Mount Options
- **rsize/wsize:** 1048576 (1 MiB) - optimal for large sequential transfers
- **Protocol options:** hard, noresvport
- **Timeout:** timeo=600 (60 seconds; `timeo` is expressed in tenths of a second)
- **Retrans:** 2

### Test Constraints
- CPU: 500m
- Memory: 512Mi
- Namespace: nfs-benchmark (ephemeral)
- PVC Size: 5Gi

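
For reproducibility, the environment above can be recreated as an ephemeral namespace, a 5Gi PVC on the nfs-csi class, and a resource-constrained pod. The manifest below is a minimal sketch of such a setup; the pod image and any object names not listed under Test Constraints are illustrative assumptions, not the exact objects used in this run.

```bash
# Sketch: ephemeral namespace + 5Gi nfs-csi PVC + constrained benchmark pod.
# Image and names other than those in Test Constraints are assumptions.
oc new-project nfs-benchmark

oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-bench-pvc
  namespace: nfs-benchmark
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: fio-bench
  namespace: nfs-benchmark
spec:
  containers:
  - name: fio
    image: quay.io/centos/centos:stream9   # assumption: any image where fio can be installed
    command: ["sleep", "infinity"]
    resources:
      limits:
        cpu: 500m
        memory: 512Mi
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-bench-pvc
EOF
```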
## Benchmark Results

### Sequential I/O (1M block size)

#### Sequential Write
- **Throughput:** 70.2 MiB/s (73.6 MB/s)
- **IOPS:** 70
- **Test Duration:** 31 seconds
- **Data Written:** 2176 MiB

**Latency Distribution:**
- Median: 49 µs
- 95th percentile: 75 µs
- 99th percentile: 212 ms (indicating occasional network delays)

#### Sequential Read
- **Throughput:** 80.7 MiB/s (84.6 MB/s)
- **IOPS:** 80
- **Test Duration:** 20 seconds
- **Data Read:** 1615 MiB

**Latency Distribution:**
- Median: 9 ms
- 95th percentile: 15 ms
- 99th percentile: 150 ms

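
The sequential numbers above are consistent with 1 MiB-block streaming fio jobs. The invocations below are a sketch of equivalent tests; the runtimes, file size, and ioengine are assumptions rather than the original job files.

```bash
# Sketch: 1M sequential write, then read back the same file (assumed parameters).
cd /data

fio --name=seq-write --rw=write --bs=1M --size=2G \
    --ioengine=libaio --iodepth=1 --runtime=30 --time_based \
    --filename=seqfile

fio --name=seq-read --rw=read --bs=1M --size=2G \
    --ioengine=libaio --iodepth=1 --runtime=20 --time_based \
    --filename=seqfile
```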
### Synchronized Write Test

**Purpose:** Measure actual NAS performance without local caching

- **Throughput:** 65.9 MiB/s (69.1 MB/s)
- **IOPS:** 65
- **fsync latency:** 13-15 ms average

This test provides the most realistic view of actual NAS write performance, as each write operation is synchronized to disk before returning.

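
Forcing an fsync after every write removes the page-cache effect. A test along the following lines (a sketch with assumed parameters, using fio's --fsync option) yields per-operation commit latencies in the 13-15 ms range reported above.

```bash
# Sketch: synchronized 1M writes; --fsync=1 forces an fsync after every write,
# so each operation must be committed by the NAS before the next one is issued.
fio --name=sync-write --rw=write --bs=1M --size=2G \
    --ioengine=psync --fsync=1 --runtime=30 --time_based \
    --filename=/data/syncfile
```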
### Random I/O (4K block size, cached)

**Note:** These results heavily leverage local page cache and do not represent actual NAS performance.

#### Random Write
- **Throughput:** 1205 MiB/s (cached)
- **IOPS:** 308k (cached)

#### Random Read
- **Throughput:** 1116 MiB/s (cached)
- **IOPS:** 286k (cached)

### Mixed Workload (70% read / 30% write, 4 concurrent jobs)

- **Read Throughput:** 426 MiB/s
- **Read IOPS:** 109k
- **Write Throughput:** 183 MiB/s
- **Write IOPS:** 46.8k

**Note:** High IOPS values indicate substantial local caching effects.

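
The mixed figures correspond to a 70/30 random-mix job across four workers. A sketch of such an invocation is shown below; the parameters are assumed, and because the I/O is buffered (no --direct=1) it mostly exercises the client page cache, which is the caching effect noted above.

```bash
# Sketch: 4 concurrent jobs, 4K random I/O, 70% reads / 30% writes (assumed parameters).
fio --name=mixed --rw=randrw --rwmixread=70 --bs=4k --size=1G \
    --numjobs=4 --iodepth=16 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting \
    --directory=/data
```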
## Analysis

### Performance Characteristics

1. **Actual NAS Bandwidth:** ~65-80 MiB/s
   - Consistent across sequential read/write tests
   - Synchronized writes confirm this range

2. **Network Bottleneck Indicators:**
   - Performance aligns with 1 Gbps Ethernet (theoretical maximum ~125 MB/s, i.e. ~119 MiB/s)
   - Protocol overhead and network latency cost roughly 35-45% of the raw link rate
   - fsync operations show 13-15 ms latency, reflecting network round trips plus NAS commit time

3. **Caching Effects:**
   - Random I/O tests show roughly 14-17x higher throughput due to the local page cache
   - Not representative of actual NAS capabilities
   - Useful for understanding application behavior with cached data
### Bottleneck Analysis

The ~70 MiB/s throughput is most likely limited by:

1. **Network Bandwidth** (primary)
   - A 1 Gbps link has a theoretical maximum of ~125 MB/s (~119 MiB/s)
   - NFS protocol overhead reduces effective throughput to roughly 55-65% of that
   - Observed performance matches expected 1 Gbps NFS behavior

2. **Network Latency**
   - fsync at 13-15 ms indicates network round-trip plus backend storage latency
   - Each synchronous operation requires a full round trip

3. **NAS Backend Storage** (unknown)
   - The current tests cannot isolate NAS disk performance
   - The backend may be faster than the network allows
## Recommendations

### Immediate Improvements

1. **Upgrade to 10 Gbps Networking**
   - Most cost-effective improvement
   - Could provide an 8-10x throughput increase, if the NAS backend can keep up
   - Requires a network infrastructure upgrade

2. **Use Multiple NFS Connections (nconnect / session trunking)** (if supported)
   - The Linux `nconnect` mount option spreads I/O over several TCP connections to the same server; NFSv4.1+ session trunking or pNFS can use multiple paths
   - A storage-class sketch follows below

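
One way to apply this cluster-wide is through the nfs-csi StorageClass mount options. The sketch below reuses the server and share from the test configuration; the option set, class name, and whether nconnect is honoured end-to-end depend on the NFS client kernel and csi-driver-nfs version, so treat it as an assumption to validate rather than a drop-in change.

```bash
# Sketch: StorageClass carrying nconnect in mountOptions (assumed option set and name).
# New PVCs provisioned from this class would mount with several TCP connections.
oc apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi-nconnect
provisioner: nfs.csi.k8s.io
parameters:
  server: 192.168.0.105
  share: /nfs/NFS/ocp
mountOptions:
  - nfsvers=4.1
  - nconnect=8
  - hard
  - noresvport
reclaimPolicy: Delete
EOF
```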
### Workload Optimization

1. **For Write-Heavy Workloads:**
   - Consider async writes (with data-safety trade-offs)
   - Batch operations where possible
   - Use larger block sizes (already optimized at 1 MiB)

2. **For Read-Heavy Workloads:**
   - Current performance is acceptable
   - Application-level caching will help significantly
   - Consider ReadOnlyMany volumes for shared data

### Alternative Solutions

1. **Local NVMe Storage** (for performance-critical workloads)
   - Use the local-nvme-retain storage class for high-IOPS workloads
   - Reserve NFS for persistent data and backups

2. **Tiered Storage Strategy**
   - Hot data: local NVMe
   - Warm data: NFS
   - Cold data: object storage (e.g., MinIO)
## Conclusion

The NAS is performing as expected for a 1 Gbps NFS configuration, delivering consistent 65-80 MiB/s throughput. The primary limitation is network bandwidth, not NAS capability. Applications with streaming I/O patterns will benefit from the current configuration, while IOPS-intensive workloads should consider local storage options.

For significant performance improvements, upgrading to 10 Gbps networking is the most practical path forward.

---

## Test Methodology

All tests were conducted using:
- An ephemeral namespace with automatic cleanup
- Constrained resources (500m CPU, 512Mi memory)
- fio version 3.6
- Direct I/O where applicable to minimize caching effects

The benchmark pod and resources were automatically cleaned up after testing, following ephemeral testing protocols.

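
Cleanup amounts to deleting the ephemeral namespace, which removes the benchmark pod and PVC with it; a sketch using the namespace named under Test Constraints:

```bash
# Sketch: tear down the ephemeral benchmark namespace and everything in it.
oc delete project nfs-benchmark
```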
---

# NVMe Local Storage Benchmark - Comparison Analysis

**Date:** 2026-01-19
**Storage Class:** local-nvme-retain
**Storage Backend:** Local NVMe SSD
**Test Environment:** OpenShift Container Platform (OCP)
**Tool:** fio (Flexible I/O Tester)

## Executive Summary

Local NVMe storage dramatically outperforms network-attached NFS storage, delivering **30-85x** higher throughput for sequential operations. Sequential reads reach **6845 MiB/s** and sequential writes **2109 MiB/s**, compared with NFS's 80 MiB/s and 70 MiB/s respectively.
## NVMe Benchmark Results

### Sequential I/O (1M block size)

#### Sequential Write
- **Throughput:** 2109 MiB/s (2211 MB/s)
- **IOPS:** 2108
- **Test Duration:** 31 seconds
- **Data Written:** 64.1 GiB
- **Performance vs NFS:** **30x faster**

**Latency Distribution:**
- Median: 51 µs
- 95th percentile: 79 µs
- 99th percentile: 5.6 ms

#### Sequential Read
- **Throughput:** 6845 MiB/s (7177 MB/s)
- **IOPS:** 6844
- **Test Duration:** 20 seconds
- **Data Read:** 134 GiB
- **Performance vs NFS:** **85x faster**

**Latency Distribution:**
- Median: 50 µs
- 95th percentile: 816 µs
- 99th percentile: 840 µs
### Random I/O (4K block size)

#### Random Write
- **Throughput:** 989 MiB/s (1037 MB/s)
- **IOPS:** 253k
- **Test Duration:** 20 seconds
- **Data Written:** 19.3 GiB

**Latency Distribution:**
- Median: 1.4 µs
- 95th percentile: 1.8 µs
- 99th percentile: 2.2 µs

#### Random Read
- **Throughput:** 1594 MiB/s (1672 MB/s)
- **IOPS:** 408k
- **Test Duration:** 20 seconds
- **Data Read:** 31.1 GiB

**Latency Distribution:**
- Median: 980 ns
- 95th percentile: 1.3 µs
- 99th percentile: 1.6 µs

### Synchronized Write Test

**Purpose:** Measure actual storage performance with fsync

- **Throughput:** 197 MiB/s (206 MB/s)
- **IOPS:** 196
- **fsync latency:** 4.9 ms average
- **Performance vs NFS:** **3x faster** (197 vs 66 MiB/s)
- **Latency vs NFS:** **3x lower** (4.9 ms vs 15 ms)

The significantly lower fsync latency (4.9 ms vs 15 ms for NFS) demonstrates the advantage of local storage for durability-critical operations.
### Mixed Workload (70% read / 30% write, 4 concurrent jobs)

- **Read Throughput:** 294 MiB/s
- **Read IOPS:** 75.2k
- **Write Throughput:** 126 MiB/s
- **Write IOPS:** 32.4k

**Note:** Lower than the random I/O tests due to contention from 4 concurrent jobs and mixed read/write operations.

## Performance Comparison: NFS vs NVMe

| Test Type | NFS (nfs-csi) | NVMe (local-nvme-retain) | Improvement Factor |
|-----------|---------------|--------------------------|--------------------|
| **Sequential Write** | 70 MiB/s | 2109 MiB/s | **30x** |
| **Sequential Read** | 81 MiB/s | 6845 MiB/s | **85x** |
| **Sync Write (fsync)** | 66 MiB/s | 197 MiB/s | **3x** |
| **Random Write 4K** | 1205 MiB/s* | 989 MiB/s | - |
| **Random Read 4K** | 1116 MiB/s* | 1594 MiB/s | **1.4x** |
| **Random Write IOPS** | 308k* | 253k | - |
| **Random Read IOPS** | 286k* | 408k | **1.4x** |
| **fsync Latency** | 13-15 ms | 4.9 ms | **3x lower** |

*Note: NFS random I/O results are heavily cached and don't represent actual NAS performance.
## Key Insights

### 1. Sequential Performance Dominance

NVMe's sequential performance advantage is dramatic:
- **Write throughput:** 2109 MiB/s enables high-speed data ingestion
- **Read throughput:** 6845 MiB/s is well suited to data analytics and streaming workloads
- **Latency:** sub-millisecond latency for sequential operations

### 2. Realistic Random I/O Performance

Unlike the NFS tests, which show cached results, NVMe delivers:
- **True 4K random write:** 989 MiB/s (253k IOPS)
- **True 4K random read:** 1594 MiB/s (408k IOPS)
- **Median latencies in the low-microsecond range**

### 3. Durability Performance

For applications requiring data durability (fsync operations):
- **NVMe:** 197 MiB/s with 4.9 ms fsync latency
- **NFS:** 66 MiB/s with 15 ms fsync latency
- **Advantage:** 3x the throughput at one third the latency

This makes NVMe significantly better suited to databases and transactional workloads.
## Storage Class Selection Guide

### Use NVMe (local-nvme-retain) For:

1. **Database Workloads**
   - High IOPS requirements (>10k IOPS)
   - Low latency requirements (<1 ms)
   - Transactional consistency (fsync-heavy)
   - Examples: PostgreSQL, MySQL, MongoDB, Cassandra

2. **High-Performance Computing**
   - Large sequential data processing
   - Analytics and data science workloads
   - Machine learning training data

3. **Application Cache Layers**
   - Redis, Memcached
   - Application-level caching
   - Session stores

4. **Build and CI/CD Systems**
   - Fast build-artifact storage
   - Container image layers
   - Temporary compilation outputs

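
Consuming local NVMe for such workloads is an ordinary PVC against the local-nvme-retain class. The sketch below is an assumption about how that class is used (name and size are illustrative); local-volume classes typically bind with WaitForFirstConsumer, which ties the pod to the node that owns the disk.

```bash
# Sketch: PVC on the local NVMe class (illustrative name and size).
# Local volumes pin the consuming pod to the node that holds the disk.
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-nvme-retain
  resources:
    requests:
      storage: 50Gi
EOF
```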
### Use NFS (nfs-csi) For:

1. **Shared Storage Requirements**
   - Multiple pods accessing the same data (ReadWriteMany)
   - Shared configuration files
   - Content management systems

2. **Long-Term Data Storage**
   - Application backups
   - Log archives
   - Media file storage (videos, images)

3. **Cost-Sensitive Workloads**
   - Lower-priority applications
   - Development environments
   - Workloads for which 65-80 MiB/s throughput is sufficient

### Hybrid Approach (Recommended):

Implement a tiered storage strategy:

```
┌─────────────────────────────────────────┐
│ Tier 1: NVMe (Hot Data)                 │
│ - Databases                             │
│ - Active application data               │
│ - High-IOPS workloads                   │
│ Performance: 2000-7000 MiB/s            │
└─────────────────────────────────────────┘
             ↓ Archive/Backup
┌─────────────────────────────────────────┐
│ Tier 2: NFS (Warm Data)                 │
│ - Shared files                          │
│ - Application backups                   │
│ - Logs and archives                     │
│ Performance: 65-80 MiB/s                │
└─────────────────────────────────────────┘
             ↓ Long-term storage
┌─────────────────────────────────────────┐
│ Tier 3: Object Storage (Cold Data)      │
│ - Long-term archives                    │
│ - Compliance data                       │
│ - Infrequently accessed backups         │
└─────────────────────────────────────────┘
```
## Cost Considerations

### NVMe Local Storage:
- **Pros:** Exceptional performance, low latency, no network overhead
- **Cons:** Node-local (no pod mobility), limited capacity per node
- **Best for:** Performance-critical workloads where the cost per IOPS is justified

### NFS Network Storage:
- **Pros:** Shared access, large and easily expandable capacity, pod mobility across nodes
- **Cons:** Network-limited performance, higher latency
- **Best for:** Shared data, cost-sensitive workloads, large capacity needs
## Final Recommendations

1. **For New Database Deployments:**
   - Use NVMe (local-nvme-retain) for primary storage
   - Use NFS for backups and WAL archives
   - Expect roughly a 30x performance improvement over an NFS-only approach

2. **For Existing NFS-Based Applications:**
   - Migrate performance-critical components to NVMe
   - Keep shared/archival data on NFS
   - Measure application-specific improvements

3. **For High-Throughput Applications:**
   - NVMe sequential reads (6845 MiB/s) enable near-real-time data processing
   - Consider NVMe for any workload sustaining more than ~100 MiB/s

4. **Network Upgrade Still Valuable:**
   - Even with NVMe available, upgrading to 10 Gbps networking still helps:
     - Faster pod-to-pod communication
     - Better NFS performance for shared data
     - Reduced network congestion
## Conclusion

Local NVMe storage provides transformational performance improvements over network-attached NFS storage, with 30-85x higher throughput for sequential operations and consistent sub-millisecond latencies. This makes NVMe the clear choice for performance-critical workloads including databases, analytics, and high-IOPS applications.

However, NFS remains essential for shared storage scenarios and cost-sensitive workloads where 65-80 MiB/s throughput is sufficient. The optimal strategy combines both: use NVMe for hot data requiring high performance, and NFS for shared access and archival needs.

The benchmark results validate that storage class selection should be workload-specific, with NVMe delivering exceptional value for performance-critical applications while NFS serves broader organizational needs for shared and persistent storage.

@@ -1,38 +0,0 @@
# NAS Performance Benchmark Summary

**Date:** 2026-01-19
**Investigator:** Gemini Agent

## Problem Statement

User reported sluggish performance when accessing the NAS (`MiniMe.local`) mounted at `/Volumes/conan` on a laptop.

## Diagnostics Performed

### 1. Laptop (Client-Side)
- **Connection:** Wi-Fi (`en0`)
- **Latency (Ping):** Highly unstable, ranging from **7 ms to 121 ms**
- **Throughput:**
  - **Read:** ~12.7 MB/s (~100 Mbps)
  - **Write:** ~6.5 MB/s (~52 Mbps)

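
The client-side figures came from simple `dd` tests against the mounted share; a sketch of that kind of check (macOS-style `dd`, file name assumed) is:

```bash
# Sketch: rough client-side throughput check against the mounted share (assumed path/filename).
# Write ~1 GiB, purge the local cache, then read it back so the read traverses the network.
dd if=/dev/zero of=/Volumes/conan/speedtest.bin bs=1m count=1024
sudo purge   # macOS: drop filesystem caches before the read-back
dd if=/Volumes/conan/speedtest.bin of=/dev/null bs=1m
rm /Volumes/conan/speedtest.bin
```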
### 2. OpenShift Cluster (Server-Side)
- **Connection:** Wired (assumed 1 Gbps) via the `nfs-csi` storage class.
- **Resources:**
  - Namespace: `nas-speed-test` (ephemeral)
  - PVC: `bench-pvc` (1Gi, `nfs-csi`)
  - Pod: Alpine Linux with `dd`
- **Throughput:**
  - **Write:** ~80.1 MB/s (sustained over a 1 GB write)
  - **Read:** ~23 GB/s observed, which reflects the client page cache rather than NAS read throughput

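
The server-side write figure corresponds to writing roughly 1 GB into the NFS-backed PVC from inside the pod. A sketch of such a check follows; the pod name and mount path are assumptions, and `conv=fsync` (supported by recent BusyBox builds) is included so the result includes the flush to the NAS rather than just the page cache.

```bash
# Sketch: in-pod write test against the NFS-backed PVC (assumed pod name and mount path).
# conv=fsync makes dd flush before reporting; re-reading the same file right away
# mostly measures the page cache, which explains the ~23 GB/s read figure above.
oc exec -n nas-speed-test bench-pod -- \
  dd if=/dev/zero of=/data/testfile bs=1M count=1024 conv=fsync
```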
## Root Cause Analysis

The discrepancy between the laptop's performance (<15 MB/s) and the cluster's performance (~80 MB/s) isolates the issue to the **Wi-Fi connection** on the laptop. The NAS itself is capable of near-Gigabit speeds.

## Recommendations

1. **Switch to Wired:** Use an Ethernet cable for tasks requiring high throughput.
2. **Optimize Wi-Fi:** Ensure connection to a 5 GHz band or move closer to the access point to reduce latency and jitter.

## Verification

- [x] Laptop `dd` tests confirmed low throughput.
- [x] Cluster `dd` tests confirmed high throughput.
- [x] Ephemeral test resources in the `nas-speed-test` namespace have been cleaned up.

@@ -1,21 +0,0 @@
# NAS Benchmark Results (nfs-csi)

Date: 2026-01-19

## Workload
- Tool: fio 3.35
- Pattern: randrw (70% read / 30% write)
- Block size: 4k
- I/O depth: 16
- Jobs: 4
- Size: 2GiB
- Runtime: 120s
- StorageClass: nfs-csi
- PVC: 10Gi RWX

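
A command-line equivalent of this job description is sketched below; it is reconstructed from the parameters above rather than taken from the original job file, and it uses the synchronous psync engine that the Notes section identifies.

```bash
# Sketch: 70/30 4K random mix over 4 jobs on the nfs-csi volume (reconstructed, assumed paths).
# With --ioengine=psync the requested --iodepth=16 collapses to 1 outstanding I/O per job.
fio --name=nas-randrw --directory=/data \
    --rw=randrw --rwmixread=70 --bs=4k --size=2G \
    --numjobs=4 --iodepth=16 --ioengine=psync \
    --runtime=120 --time_based --group_reporting
```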
## Results

- Read: IOPS=110, BW=441KiB/s (451kB/s), io=51.7MiB
- Write: IOPS=49, BW=197KiB/s (202kB/s), io=23.1MiB

## Notes

- fio reported synchronous I/O: the psync engine issues one I/O at a time per job, so the requested iodepth of 16 was effectively capped at 1.

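
If the intent is to actually drive 16 outstanding I/Os per job, an asynchronous engine is needed; a possible variant (an assumption, not part of this run) also adds --direct=1 so the page cache does not mask NAS latency.

```bash
# Sketch: same mix with an asynchronous engine so --iodepth=16 is honoured,
# and direct I/O so results reflect the NAS rather than the client page cache.
fio --name=nas-randrw-aio --directory=/data \
    --rw=randrw --rwmixread=70 --bs=4k --size=2G \
    --numjobs=4 --iodepth=16 --ioengine=libaio --direct=1 \
    --runtime=120 --time_based --group_reporting
```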