Benchmark JuiceFS on AWS

Posted: 2022-12-20 20:01:14

Tried JuiceFS v1.0.2 with a PostgreSQL metadata engine running on a separate t3a.medium instance in the same VPC but a different availability zone.

A t2.micro instance was used to run the benchmark, with EFS (about 100 MB/s of throughput) mounted as the cache directory.
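For reference, a setup along these lines can be reproduced with `juicefs format` and `juicefs mount`. This is only a sketch: the S3 bucket, PostgreSQL connection string, and EFS mount point below are placeholders, not the ones used in this test.

```shell
# Create the volume: file data goes to S3, metadata to PostgreSQL
# (bucket URL and connection string are hypothetical)
juicefs format \
    --storage s3 \
    --bucket https://my-bucket.s3.us-east-1.amazonaws.com \
    "postgres://juicefs:password@db-host:5432/juicefs" \
    jfs-test

# Mount it in the background, pointing the local cache at an EFS
# mount (assumed to be at /mnt/efs)
juicefs mount -d \
    --cache-dir /mnt/efs/jfscache \
    "postgres://juicefs:password@db-host:5432/juicefs" \
    /jfs
```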

$ juicefs bench jfs-test -p 2

Cleaning kernel cache, may ask for root privilege...
Write big blocks count: 2048 / 2048 [==============================================================] done
Read big blocks count: 2048 / 2048 [==============================================================] done
Write small blocks count: 200 / 200 [==============================================================] done
Read small blocks count: 200 / 200 [==============================================================] done
Stat small files count: 200 / 200 [==============================================================] done
Benchmark finished!
BlockSize: 1 MiB, BigFileSize: 1024 MiB, SmallFileSize: 128 KiB, SmallFileCount: 100, NumThreads: 2
Time used: 64.2 s, CPU: 43.4%, Memory: 679.5 MiB
+------------------+------------------+----------------+
|       ITEM       |       VALUE      |      COST      |
+------------------+------------------+----------------+
|   Write big file |     114.83 MiB/s |   17.84 s/file |
|    Read big file |      73.38 MiB/s |   27.91 s/file |
| Write small file |     19.9 files/s | 100.29 ms/file |
|  Read small file |     99.2 files/s |  20.16 ms/file |
|        Stat file |    450.6 files/s |   4.44 ms/file |
|   FUSE operation | 37855 operations |     2.81 ms/op |
|      Update meta |  4412 operations |    17.85 ms/op |
|       Put object |   712 operations |   507.14 ms/op |
|       Get object |   516 operations |   173.71 ms/op |
|    Delete object |     0 operations |     0.00 ms/op |
| Write into cache |   351 operations |   100.58 ms/op |
|  Read from cache |   196 operations |     6.98 ms/op |
+------------------+------------------+----------------+

Changing -p to 4 runs the benchmark with 4 threads.

$ juicefs bench jfs-test -p 4
Cleaning kernel cache, may ask for root privilege...
Write big blocks count: 4096 / 4096 [==============================================================] done
Read big blocks count: 4096 / 4096 [==============================================================] done
Write small blocks count: 400 / 400 [==============================================================] done
Read small blocks count: 400 / 400 [==============================================================] done
Stat small files count: 400 / 400 [==============================================================] done
Benchmark finished!
BlockSize: 1 MiB, BigFileSize: 1024 MiB, SmallFileSize: 128 KiB, SmallFileCount: 100, NumThreads: 4
Time used: 97.4 s, CPU: 49.8%, Memory: 560.9 MiB
+------------------+------------------+----------------+
|       ITEM       |       VALUE      |      COST      |
+------------------+------------------+----------------+
|   Write big file |     114.89 MiB/s |   35.65 s/file |
|    Read big file |     116.10 MiB/s |   35.28 s/file |
| Write small file |     35.1 files/s | 114.12 ms/file |
|  Read small file |    131.4 files/s |  30.44 ms/file |
|        Stat file |    874.9 files/s |   4.57 ms/file |
|   FUSE operation | 75752 operations |     4.28 ms/op |
|      Update meta |  8842 operations |    17.43 ms/op |
|       Put object |  1424 operations |   509.62 ms/op |
|       Get object |  1107 operations |   593.01 ms/op |
|    Delete object |     0 operations |     0.00 ms/op |
| Write into cache |   669 operations |    73.71 ms/op |
|  Read from cache |   317 operations |     9.26 ms/op |
+------------------+------------------+----------------+

With 8 threads.

$ juicefs bench jfs-test -p 8
Cleaning kernel cache, may ask for root privilege...
Write big blocks count: 8192 / 8192 [==============================================================] done
Read big blocks count: 8192 / 8192 [==============================================================] done
Write small blocks count: 800 / 800 [==============================================================] done
Read small blocks count: 800 / 800 [==============================================================] done
Stat small files count: 800 / 800 [==============================================================] done
Benchmark finished!
BlockSize: 1 MiB, BigFileSize: 1024 MiB, SmallFileSize: 128 KiB, SmallFileCount: 100, NumThreads: 8
Time used: 189.0 s, CPU: 49.9%, Memory: 502.6 MiB
+------------------+-------------------+----------------+
|       ITEM       |       VALUE       |      COST      |
+------------------+-------------------+----------------+
|   Write big file |      117.95 MiB/s |   69.45 s/file |
|    Read big file |      114.59 MiB/s |   71.49 s/file |
| Write small file |      40.1 files/s | 199.39 ms/file |
|  Read small file |     240.1 files/s |  33.33 ms/file |
|        Stat file |    1256.4 files/s |   6.37 ms/file |
|   FUSE operation | 151773 operations |     8.35 ms/op |
|      Update meta |  17826 operations |    24.64 ms/op |
|       Put object |   2858 operations |   497.79 ms/op |
|       Get object |   2259 operations |   968.98 ms/op |
|    Delete object |      0 operations |     0.00 ms/op |
| Write into cache |   1184 operations |    88.53 ms/op |
|  Read from cache |    604 operations |    14.63 ms/op |
+------------------+-------------------+----------------+

As we can see, Write big file throughput barely changes across the runs. Since the t2.micro has only 1 vCPU, the write path is limited by CPU.

Stat file, FUSE operation, and Update meta throughput increase nearly linearly with thread count, because metadata operations are handled by the database, which is not a bottleneck in this test.
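The scaling can be checked directly from the numbers in the tables; a quick awk sketch comparing Stat file throughput across the three runs:

```shell
# Stat file throughput taken from the p=2, p=4 and p=8 runs above
awk 'BEGIN {
    p2 = 450.6; p4 = 874.9; p8 = 1256.4          # files/s
    printf "2 -> 4 threads: %.2fx (ideal 2x)\n", p4 / p2
    printf "2 -> 8 threads: %.2fx (ideal 4x)\n", p8 / p2
}'
```

Stat throughput roughly doubles from 2 to 4 threads and keeps growing at 8, consistent with metadata not being the bottleneck here.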

In conclusion, JuiceFS is a good candidate for cloud applications that read heavily from an object store (AWS S3 etc.). Such programs can also be optimized to use symbolic links heavily, since creating a symlink is just a database operation rather than a real IO operation. Meanwhile, a good enough cache is also needed for the best write performance.
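To illustrate the symlink point: on a JuiceFS volume, writing file data triggers object-store uploads, while creating a symbolic link only adds a row in the metadata database. A minimal local sketch (run on a plain filesystem here; the paths are hypothetical):

```shell
# Writing data is real IO; linking to it is metadata-only
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/big.bin" bs=1M count=4 status=none  # data write -> Put object on JuiceFS
ln -s "$demo/big.bin" "$demo/link-to-big"                     # symlink -> one metadata operation
ls -l "$demo/link-to-big"
```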

From: https://www.cnblogs.com/Jedimaster/p/16994985.html
