且构网

How to get large files from git-lfs even when it errors out

Updated: 2023-10-28 11:25:40

Forking wouldn't solve the problem, as stated in nabla-c0d3/nassl issue 17

To be honest, I'm very disappointed with how GitHub supports git-lfs for public projects. Charging for storage and bandwidth makes no sense, especially since switching to git-lfs reduces GitHub's bandwidth usage and cost compared to just storing big files in git. The fact that the quotas apply to every fork, which I did not know, is even crazier.

I agree with you that git-lfs leaves a lot to be desired, and I think you raise an excellent point about charging us for something that saves them money.

If you don't care about the history of those large files, you could, using your additional credit:

  • Create a new local repository
  • Add one of the big files to it
  • Add, commit, and push
  • Repeat for the other big files, one at a time (to see whether the process gets blocked at any point)
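The steps above can be sketched as follows. This is a minimal local illustration: a bare repository stands in for the new GitHub remote, and the names (big-files.git, big1.bin) are hypothetical.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

git init --bare big-files.git            # stand-in for the fresh remote repository
git init big-files-work
cd big-files-work
git config user.name "example"           # local identity so the commit works anywhere
git config user.email "example@example.com"
git remote add origin ../big-files.git

head -c 1048576 /dev/zero > big1.bin     # pretend this is one of the large files
git add big1.bin
git commit -m "Add big1.bin"
git push origin HEAD                     # repeat the add/commit/push per file
```

Pushing one file at a time, as the list suggests, lets you notice exactly which push (if any) trips a quota or size limit.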

But: if you don't already have a local copy of those large files, only a git lfs clone + git lfs fetch would be able to get them back.
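A hedged sketch of that recovery path, assuming git-lfs is installed and USER/REPO is a placeholder for your repository. The commands are wrapped in a function so nothing runs until you call it; note that `git lfs clone` is deprecated in current git-lfs versions, and a plain `git clone` with LFS installed behaves the same.

```shell
restore_lfs_files() {
  git lfs install                          # set up the LFS filters once per machine
  git clone https://github.com/USER/REPO   # with LFS installed, clone pulls LFS files too
  cd REPO
  git lfs fetch --all                      # download LFS objects for all refs
  git lfs checkout                         # replace pointer files with real content
}
```

Every one of those downloads counts against the repository's bandwidth quota, which is why this only works before the limit is hit.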

And... if the bandwidth limit is already reached, that won't help.
You need to get those files from a third-party source, or plead your case to GitHub support.