且构网


mongoimport is very slow when using -jsonArray

Updated: 2023-02-06 22:22:17


I have the exact same problem with a 160Gb dump file. It took me two days to load 3% of the original file with -jsonArray and 15 minutes with these changes.


First, remove the initial [ and trailing ] characters:

sed 's/^\[//; s/\]$//' -i filename.json
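As a quick sanity check, this is what the sed step does to a tiny array dump (sample.json and its contents are made up for illustration; GNU sed is assumed, since the in-place flag follows the script):

```shell
# create a minimal stand-in for the dump (hypothetical sample data)
printf '[{"a": 1},\n{"a": 2}]\n' > sample.json

# strip the leading '[' and the trailing ']'
sed 's/^\[//; s/\]$//' -i sample.json

cat sample.json
# prints:
# {"a": 1},
# {"a": 2}
```

Note that sed applies both substitutions to every line, so a data line that happens to end in ] would also be clipped; the C program further down avoids this by touching only the first and last bytes of the file.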

Then import without the -jsonArray option:

mongoimport --db "dbname" --collection "collectionname" --file filename.json


If the file is huge, sed will take a really long time and you may run into storage problems. You can use this C program instead (not written by me, all glory to @guillermobox):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    FILE *f;
    const size_t buffersize = 2048;
    size_t length, filesize, position;
    char buffer[buffersize + 1];

    if (argc < 2) {
        fprintf(stderr, "Please provide file to mongofix!\n");
        exit(EXIT_FAILURE);
    }

    f = fopen(argv[1], "r+");
    if (f == NULL) {
        perror("fopen");
        exit(EXIT_FAILURE);
    }

    /* get the full filesize */
    fseek(f, 0, SEEK_END);
    filesize = (size_t) ftell(f);

    /* ignore the first character (the leading '[') */
    fseek(f, 1, SEEK_SET);

    while (1) {
        /* read chunks of buffersize size */
        length = fread(buffer, 1, buffersize, f);
        position = (size_t) ftell(f);

        /* write the same chunk, one character earlier */
        fseek(f, (long) (position - length - 1), SEEK_SET);
        fwrite(buffer, 1, length, f);

        /* return to the reading position */
        fseek(f, (long) position, SEEK_SET);

        /* we have finished when not all the buffer is read */
        if (length != buffersize)
            break;
    }

    /* flush buffered writes before truncating via the descriptor */
    fflush(f);

    /* truncate the file, two characters shorter (drops the trailing ']') */
    ftruncate(fileno(f), (off_t) (filesize - 2));

    fclose(f);

    return 0;
}
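Save it as, say, mongofix.c, compile it with something like cc -O2 -o mongofix mongofix.c, and run ./mongofix filename.json. The program edits the file in place: it shifts every byte one position to the left (discarding the leading [) and then truncates two bytes off the end (discarding the trailing ]). Its net effect can be sketched with coreutils on a tiny sample (dump.json is a made-up name; GNU truncate is assumed, and unlike the C program this sketch needs a temporary copy, which is exactly what the in-place version avoids on a huge file):

```shell
# hypothetical sample dump, no trailing newline, like '[...]'
printf '[{"a": 1},\n{"a": 2}]' > dump.json

# 1) drop the first byte (the leading '[')
tail -c +2 dump.json > dump.tmp && mv dump.tmp dump.json

# 2) drop the last byte (the trailing ']')
truncate -s -1 dump.json
```

Note that the C program assumes the file ends directly with ] and no trailing newline; with a trailing newline, the two-byte truncation would remove the newline instead of the bracket.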


P.S.: I don't have the power to suggest a migration of this question, but I think this could be helpful.