
[Best Practice] Exporting and Importing MongoDB Data

Published: 2023-10-09 09:13:05

First, the data scale of this 3-node MongoDB cluster, along several dimensions:
1. dataSize: 1.9T
2. storageSize: 600G
3. Full backup with compression enabled: 186G, taking 8h
4. Full backup without compression: 1.8T, taking 4h27m
The export syntax itself is simple and is not repeated here; this article focuses on tuning the import process, and closes with a recommended best practice for imports.
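For reference, the export side behind the two full-backup figures above can be sketched as follows. The commands are echoed rather than executed, so the sketch is safe to run anywhere; the port, credentials, and output path are placeholders, and only the --gzip switch distinguishes the two variants:

```shell
# Export-side sketch: commands are printed, not executed.
# Port, credentials, and output path are placeholders.
DUMP_ROOT="/u01/nfs/backup/20230913"

# Variant 1: with compression (smaller archive, longer runtime; 186G / 8h above).
CMD_GZIP="mongodump --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin --gzip -d likingtest -o $DUMP_ROOT"

# Variant 2: without compression (larger archive, shorter runtime; 1.8T / 4h27m above).
CMD_PLAIN="mongodump --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin -d likingtest -o $DUMP_ROOT"

echo "$CMD_GZIP"
echo "$CMD_PLAIN"
```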

■ 2023-09-13T20:00 Test 1: import with 4 concurrent workers

mongorestore --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin --numInsertionWorkersPerCollection=4 --bypassDocumentValidation -d likingtest /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest >> 10.2.2.2.log 2>&1 &
tail -100f /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/10.2.2.2.log
Log output from the above import:
2023-09-13T21:59:55.452+0800    The --db and --collection flags are deprecated for this use-case; please use --nsInclude instead, i.e. with --nsInclude=${DATABASE}.${COLLECTION}
2023-09-13T21:59:55.452+0800    building a list of collections to restore from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest dir
2023-09-13T21:59:55.466+0800    reading metadata for likingtest.oprceConfiguration from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/oprceConfiguration.metadata.json
2023-09-13T21:59:55.478+0800    reading metadata for likingtest.oprceDataObj from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/oprceDataObj.metadata.json
2023-09-13T21:59:55.491+0800    reading metadata for likingtest.oprcesDataObjInit from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/oprcesDataObjInit.metadata.json
2023-09-13T21:59:55.503+0800    reading metadata for likingtest.role from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/role.metadata.json
2023-09-13T21:59:55.508+0800    reading metadata for likingtest.activityConfiguration from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/activityConfiguration.metadata.json
2023-09-13T21:59:55.511+0800    reading metadata for likingtest.history_task from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/history_task.metadata.json
2023-09-13T21:59:55.512+0800    reading metadata for likingtest.resOutRelDataSnapshot from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/resOutRelDataSnapshot.metadata.json
2023-09-13T21:59:55.520+0800    reading metadata for likingtest.snapshotResource from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/snapshotResource.metadata.json
2023-09-13T21:59:55.524+0800    reading metadata for likingtest.oprceDataObjDraft from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/oprceDataObjDraft.metadata.json
2023-09-13T21:59:55.526+0800    reading metadata for likingtest.oprceDataObjInit from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/oprceDataObjInit.metadata.json
2023-09-13T21:59:55.761+0800    restoring likingtest.snapshotResource from /u01/nfs/xxxxx_mongodb/10.1.1.1/20230913/likingtest/snapshotResource.bson
...
2023-09-13T22:00:01.451+0800    [........................]      likingtest.oprceDataObj   408MB/1205GB    (0.0%)
...
2023-09-13T21:59:58.323+0800    finished restoring likingtest.oprceDataObjDraft (1559 documents, 0 failures)
2023-09-13T22:00:01.034+0800    finished restoring likingtest.resOutRelDataSnapshot (34426 documents, 0 failures)
2023-09-13T22:00:01.559+0800    finished restoring likingtest.history_task (3629 documents, 0 failures)
2023-09-13T22:00:02.086+0800    finished restoring likingtest.activityConfiguration (974 documents, 0 failures)
2023-09-13T22:00:02.293+0800    finished restoring likingtest.oprceConfiguration (162 documents, 0 failures)
2023-09-13T22:00:02.529+0800    finished restoring likingtest.oprcesDataObjInit (4 documents, 0 failures)
2023-09-13T22:00:02.857+0800    finished restoring likingtest.role (10 documents, 0 failures)
2023-09-13T22:00:29.153+0800    [########################]  likingtest.snapshotResource  2.04GB/2.04GB  (100.0%)
2023-09-13T22:00:29.155+0800    finished restoring likingtest.snapshotResource (50320 documents, 0 failures)
...
2023-09-14T00:18:58.451+0800    [############............]      likingtest.oprceDataObj  651GB/1205GB   (54.0%)
2023-09-14T00:18:59.857+0800    [########################]  likingtest.oprceDataObjInit  635GB/635GB  (100.0%)
2023-09-14T00:18:59.888+0800    finished restoring likingtest.oprceDataObjInit (43776648 documents, 0 failures)
...
2023-09-14T02:05:58.904+0800    [########################]      likingtest.oprceDataObj  1205GB/1205GB  (100.0%)
2023-09-14T02:05:58.937+0800    finished restoring likingtest.oprceDataObj (53311330 documents, 0 failures)
2023-09-14T02:05:58.945+0800    no indexes to restore for collection likingtest.activityConfiguration
2023-09-14T02:05:58.945+0800    no indexes to restore for collection likingtest.history_task
2023-09-14T02:05:58.945+0800    restoring indexes for collection likingtest.oprcesDataObjInit from metadata
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"flowId_1_activityConfiguration.activityNameEn_1", "ns":"likingtest.oprcesDataObjInit", "v":2}, Key:primitive.D{primitive.E{Key:"flowId", Value:1}, primitive.E{Key:"activityConfiguration.activityNameEn", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"oprceInfo.oprceInstID_1_activityInfo.activityInstID_1_workitemInfo.workItemID_1", "ns":"likingtest.oprcesDataObjInit", "v":2}, Key:primitive.D{primitive.E{Key:"oprceInfo.oprceInstID", Value:1}, primitive.E{Key:"activityInfo.activityInstID", Value:1}, primitive.E{Key:"workitemInfo.workItemID", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    no indexes to restore for collection likingtest.role
2023-09-14T02:05:58.976+0800    no indexes to restore for collection likingtest.snapshotResource
2023-09-14T02:05:58.976+0800    no indexes to restore for collection likingtest.oprceDataObjDraft
2023-09-14T02:05:58.976+0800    restoring indexes for collection likingtest.oprceDataObjInit from metadata
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"oprceInfo.oprceInstID_1_activityInfo.activityInstID_1_workitemInfo.workItemID_1", "ns":"likingtest.oprceDataObjInit", "v":2}, Key:primitive.D{primitive.E{Key:"oprceInfo.oprceInstID", Value:1}, primitive.E{Key:"activityInfo.activityInstID", Value:1}, primitive.E{Key:"workitemInfo.workItemID", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"flowNo_1", "ns":"likingtest.oprceDataObjInit", "v":2}, Key:primitive.D{primitive.E{Key:"flowNo", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    no indexes to restore for collection likingtest.oprceConfiguration
2023-09-14T02:05:58.976+0800    no indexes to restore for collection likingtest.resOutRelDataSnapshot
2023-09-14T02:05:58.976+0800    restoring indexes for collection likingtest.oprceDataObj from metadata
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"flowId_1_activityConfiguration.activityNameEn_1", "ns":"likingtest.oprceDataObj", "v":2}, Key:primitive.D{primitive.E{Key:"flowId", Value:1}, primitive.E{Key:"activityConfiguration.activityNameEn",Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"flowNo_1", "ns":"likingtest.oprceDataObj", "v":2}, Key:primitive.D{primitive.E{Key:"flowNo", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"oprceInfo.oprceInstID_1_activityInfo.activityInstID_1_workitemInfo.workItemID_1", "ns":"likingtest.oprceDataObj", "v":2}, Key:primitive.D{primitive.E{Key:"oprceInfo.oprceInstID", Value:1}, primitive.E{Key:"activityInfo.activityInstID", Value:1}, primitive.E{Key:"workitemInfo.workItemID", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T02:05:58.976+0800    index: &idx.IndexDocument{Options:primitive.M{"name":"flowId_1_activityConfiguration.activityNameEn_1", "ns":"likingtest.oprceDataObjInit", "v":2}, Key:primitive.D{primitive.E{Key:"flowId", Value:1}, primitive.E{Key:"activityConfiguration.activityNameEn", Value:1}}, PartialFilterExpression:primitive.D(nil)}
2023-09-14T03:45:47.152+0800    97179062 document(s) restored successfully. 0 document(s) failed to restore.

Observations:
1. With the concurrency option --numInsertionWorkersPerCollection=4 and the validation option --bypassDocumentValidation, restore speed improved dramatically: the 1.2T collection oprceDataObj dropped from roughly 12h under the default restore settings to about 4h.
2. Indexes are restored only after all the data has been restored, and the index restore itself still takes significant time: 1h40m in this run. [Note: it did not actually succeed; the indexes never took effect.]
3. In newer versions, the -d and -c flags should be replaced with --nsInclude, plus --nsFrom= / --nsTo= when renaming namespaces.
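A sketch of the newer namespace flags mentioned in point 3, again echoed rather than executed; the connection details are placeholders and `likingcopy` is a hypothetical rename target:

```shell
# Namespace-flag sketch: command is printed, not executed.
# --nsInclude selects namespaces; --nsFrom/--nsTo rename them during restore.
RESTORE_CMD="mongorestore --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin \
--numInsertionWorkersPerCollection=8 --bypassDocumentValidation \
--nsInclude='likingtest.*' --nsFrom='likingtest.*' --nsTo='likingcopy.*' \
/u01/nfs/backup/20230913"
echo "$RESTORE_CMD"
```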

■ 2023-09-14T10:40 Test 2: import with 8 concurrent workers

mongorestore --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin --numInsertionWorkersPerCollection=8 --bypassDocumentValidation -d likingtest /u01/nfs/xxxxx_mongodb/10.1.1.1/20230914/likingtest >> 10.2.2.2.log 2>&1 &
tail -100f /u01/nfs/xxxxx_mongodb/10.1.1.1/20230914/10.2.2.2.log
---
2023-09-14T10:40:45.492+0800    The --db and --collection flags are deprecated for this use-case; please use --nsInclude instead, i.e. with --nsInclude=${DATABASE}.${COLLECTION}
...
2023-09-14T10:40:48.493+0800    [........................]       likingtest.oprceDataObj   112MB/1208GB    (0.0%)
...
2023-09-14T12:57:34.859+0800    [########################]       likingtest.oprceDataObj  1208GB/1208GB  (100.0%)
2023-09-14T12:57:34.867+0800    finished restoring likingtest.oprceDataObj (53413481 documents, 0 failures)

Observations:
1. With --numInsertionWorkersPerCollection=8 and --bypassDocumentValidation, restore speed improved again: the 1.2T collection oprceDataObj dropped from roughly 12h under the default restore settings to 2h17m.
2. This restore read the backup from NFS on an 8-core VM. At 8 workers, CPU usage was about 40%, network receive throughput around 300 MB/s, and local disk write throughput roughly 30-200 MB/s, so network bandwidth was not the bottleneck. With a better-provisioned host, especially faster disks, the restore time should drop further.

■ 2023-09-14T16:10 Test 3: import with 12 concurrent workers

[Note] Newer versions of mongorestore deprecate the -d and -c flags; they still work, but are inflexible, so the new --nsInclude flag should be used instead. It took several attempts to discover its key constraint: the directory argument must be the root (parent) directory of the database backup, not the database directory itself! That is, something like dumpdir/20230914, not dumpdir/20230914/database! This is a major pitfall; remember it. Also, that directory must not contain any unrecognized files, or the restore will error out.
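The directory constraint above can be illustrated with a throwaway layout; all paths and file names here are placeholders, and the mongorestore commands are only printed:

```shell
# Throwaway layout demonstrating the --nsInclude directory constraint.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/20230914/likingtest"
touch "$ROOT/20230914/likingtest/oprceDataObj.bson" \
      "$ROOT/20230914/likingtest/oprceDataObj.metadata.json"

# Correct: point mongorestore at the dump root, the PARENT of the database dir.
GOOD="mongorestore --nsInclude='likingtest.*' $ROOT/20230914"
# Wrong: pointing at the database directory itself restores nothing.
BAD="mongorestore --nsInclude='likingtest.*' $ROOT/20230914/likingtest"

echo "$GOOD"
echo "$BAD"
```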

mongorestore --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin --numInsertionWorkersPerCollection=12 --bypassDocumentValidation --nsInclude="likingtest.*" /u01/nfs/xxxxx_mongodb/10.1.1.1/20230914 > 20230914.10.2.2.2-3.log 2>&1 &
tail -100f /u01/nfs/xxxxx_mongodb/10.1.1.1/20230914.10.2.2.2-3.log
---
2023-09-14T16:10:19.245+0800    preparing collections to restore from
...
2023-09-14T18:18:18.996+0800    [########################]  likingtest.oprceDataObj  1208GB/1208GB  (100.0%)
2023-09-14T18:18:19.014+0800    finished restoring likingtest.oprceDataObj (53413481 documents, 0 failures)

Observations:
1. Raising concurrency from 8 to 12 brought no further gain, so 6-8 workers appear sufficient. This mirrors Oracle, where a parallel-import setting of 6 is the usual best practice.
2. This restore again read the backup from NFS on an 8-core VM. At 12 workers, CPU usage was about 60%, network receive throughput around 300 MB/s, and local disk write throughput roughly 30-500 MB/s, so network bandwidth was not the bottleneck. With a better-provisioned host, especially faster disks, the restore time should drop further.
3. On index restore: mongorestore restores all the data first and creates indexes at the end; for larger collections, index creation still takes a long time:

      currentOpTime: '2023-09-14T20:23:59.435+08:00',
...
      command: {
        createIndexes: 'oprceDataObj',
        indexes: [
          {
            key: { flowId: 1, 'activityConfiguration.activityNameEn': 1 },
            name: 'flowId_1_activityConfiguration.activityNameEn_1',
            ns: 'likingtest.oprceDataObj'
          },
          {
            key: { flowNo: 1 },
            name: 'flowNo_1',
            ns: 'likingtest.oprceDataObj'
          },
          {
            key: {
              'oprceInfo.oprceInstID': 1,
              'activityInfo.activityInstID': 1,
              'workitemInfo.workItemID': 1
            },
            name: 'oprceInfo.oprceInstID_1_activityInfo.activityInstID_1_workitemInfo.workItemID_1',
            ns: 'likingtest.oprceDataObj'
          }
        ],
.....
      currentOpTime: '2023-09-14T20:23:59.489+08:00',
...
      command: {
        createIndexes: 'oprcesDataObjInit',
        indexes: [
          {
            key: { flowId: 1, 'activityConfiguration.activityNameEn': 1 },
            name: 'flowId_1_activityConfiguration.activityNameEn_1',
            ns: 'likingtest.oprcesDataObjInit'
          },
          {
            key: {
              'oprceInfo.oprceInstID': 1,
              'activityInfo.activityInstID': 1,
              'workitemInfo.workItemID': 1
            },
            name: 'oprceInfo.oprceInstID_1_activityInfo.activityInstID_1_workitemInfo.workItemID_1',
            ns: 'likingtest.oprcesDataObjInit'
          }
        ],
...... Checked again the next day: index creation still had not finished:
      currentOpTime: '2023-09-15T09:16:16.460+08:00',
      effectiveUsers: [ { user: 'admin', db: 'admin' } ],
      runBy: [ { user: '__system', db: 'local' } ],
      threaded: true,
      opid: 'shard1:11312917',
      lsid: {
        id: new UUID("e78379ff-9664-46b1-9e87-2bdd4abc5c5f"),
        uid: Binary.createFromBase64("O0CMtIVItQN4IsEOsJdrPL8s7jv5xwh5a/A5Qfvs2A8=", 0)
      },
      secs_running: Long("53877"),
      microsecs_running: Long("53877330742"),
      op: 'command',
      ns: 'likingtest.oprcesDataObjInit',
      redacted: false,
      command: {
        createIndexes: 'oprcesDataObjInit',
...... A full 24 hours in, index creation still had not finished:
      currentOpTime: '2023-09-15T18:55:16.877+08:00',
      effectiveUsers: [ { user: 'admin', db: 'admin' } ],
      runBy: [ { user: '__system', db: 'local' } ],
      threaded: true,
      opid: 'shard1:11312917',
      lsid: {
        id: new UUID("e78379ff-9664-46b1-9e87-2bdd4abc5c5f"),
        uid: Binary.createFromBase64("O0CMtIVItQN4IsEOsJdrPL8s7jv5xwh5a/A5Qfvs2A8=", 0)
      },
      secs_running: Long("88617"),
      microsecs_running: Long("88617747875"),
      op: 'command',
      ns: 'likingtest.oprcesDataObjInit',
      redacted: false,
      command: {
        createIndexes: 'oprcesDataObjInit',
        indexes: [
          {
            key: { flowId: 1, 'activityConfiguration.activityNameEn': 1 },
            name: 'flowId_1_activityConfiguration.activityNameEn_1',
            ns: 'likingtest.oprcesDataObjInit'
          },

As the above shows, mongorestore's data-import throughput is broadly controllable and acceptable, even for a 1.2T collection; but the final index creation is far too slow, and no good workaround was found: indexes need to be created with parallelism, and their validity must be verified afterwards. In this run the index creation ultimately never took effect.
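For reference, in-flight index builds like the ones shown in the currentOp output above can be watched from mongosh; the one-liner below is only printed, the connection details are placeholders, and the filter is one possible way to isolate createIndexes operations:

```shell
# Monitoring sketch: prints a mongosh one-liner that filters db.currentOp
# for in-flight createIndexes operations. Connection details are placeholders.
MONITOR_CMD="mongosh --port 20000 -u admin -p 'passwd' --authenticationDatabase admin --eval \
'db.currentOp({\"command.createIndexes\": {\$exists: true}}).inprog.forEach(op => print(op.ns, op.secs_running))'"
echo "$MONITOR_CMD"
```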

■ 2023-09-15T19:02 Test 4: import with 10 concurrent workers, skipping index restore

mongorestore --port=20000 -uadmin -p'passwd' --authenticationDatabase=admin --numInsertionWorkersPerCollection=10 --bypassDocumentValidation --nsInclude="likingtest.*" --nsFrom="likingtest.*" --nsTo="likingtest.*" --noIndexRestore /u01/nfs/xxxxx_mongodb/10.1.1.1/20230914 > 20230914.10.2.2.2-4.log 2>&1 &
tail -100f /u01/nfs/xxxxx_mongodb/10.1.1.1/20230914.10.2.2.2-4.log
2023-09-15T19:02:59.747+0800    preparing collections to restore from
...
2023-09-15T21:24:36.145+0800    [########################]  likingtest.oprceDataObj  1208GB/1208GB  (100.0%)
2023-09-15T21:24:36.161+0800    finished restoring likingtest.oprceDataObj (53413481 documents, 0 failures)
2023-09-15T21:24:36.165+0800    97367732 document(s) restored successfully. 0 document(s) failed to restore.

As shown above, this run took 2h22m.

Conclusions

1. For large collections, restore with per-collection parallelism: --numInsertionWorkersPerCollection=8
2. Skip index restore: --noIndexRestore
3. After the data is restored, build the indexes in the background (search this site for "MongoDB 重建索引", i.e. rebuilding MongoDB indexes).
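As a sketch of step 3, the indexes skipped by --noIndexRestore can be recreated afterwards from mongosh. The index key and name below come from the flowNo_1 index in the logs above, while the connection details are placeholders; the command is only printed. (On MongoDB 4.2+ the old background:true option is deprecated, since index builds no longer hold a long exclusive lock.)

```shell
# Post-restore index rebuild sketch: prints a mongosh one-liner.
# Key/name taken from the flowNo_1 index in this post's logs;
# connection details are placeholders.
REBUILD_CMD="mongosh --port 20000 -u admin -p 'passwd' --authenticationDatabase admin likingtest --eval \
'db.oprceDataObj.createIndex({flowNo: 1}, {name: \"flowNo_1\"})'"
echo "$REBUILD_CMD"
```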

From: https://www.cnblogs.com/likingzi/p/17750673.html
