且构网


Is there a tool for GKE node autoscaling based on the total pod requests in Kubernetes?

Updated: 2021-12-10 21:45:08


I had a similar requirement (for the Go build system): wanted to know when scheduled vs. available CPU or memory was > 1, and scale out nodes when that was true (or, more accurately, when it was ~.8). There's not a built-in metric, but as you suggest you can do it with a custom metric.


This was all done in Go, but it will give you the basic idea:

  1. Create the metrics (memory and CPU, in my case)
  2. Put values to the metrics
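
The two steps above can be sketched as follows. This is a minimal, self-contained illustration of the flow only: the `metricStore` type and the metric names are placeholders standing in for a real custom-metrics backend such as Cloud Monitoring (Stackdriver), whose API a real implementation would call instead.

```go
package main

import "fmt"

// metricStore stands in for a custom-metrics backend such as
// Cloud Monitoring; a real implementation would call its API.
type metricStore struct {
	values map[string]float64
}

// CreateMetric registers a metric descriptor (step 1).
func (s *metricStore) CreateMetric(name string) {
	if s.values == nil {
		s.values = make(map[string]float64)
	}
	s.values[name] = 0
}

// PutValue writes the latest sample for a metric (step 2).
func (s *metricStore) PutValue(name string, v float64) {
	s.values[name] = v
}

func main() {
	store := &metricStore{}

	// Step 1: create the metrics (memory and CPU in this case).
	store.CreateMetric("cluster/cpu_scheduled_ratio")
	store.CreateMetric("cluster/memory_scheduled_ratio")

	// Step 2: put values to the metrics. In practice these would be
	// recomputed periodically from live pod and node data.
	store.PutValue("cluster/cpu_scheduled_ratio", 0.83)
	store.PutValue("cluster/memory_scheduled_ratio", 0.61)

	fmt.Println(store.values["cluster/cpu_scheduled_ratio"])
}
```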


The key takeaway IMO is that you have to iterate over each pod in the cluster to determine how much capacity is consumed, then iterate over each node in the cluster to determine how much capacity is available. It's then just a matter of pointing your autoscaler to the custom metric(s).
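
A sketch of that calculation, using hypothetical `Pod` and `Node` structs that carry only CPU millicores. In a real cluster these values would be populated by listing pods and nodes through client-go and reading `pod.Spec.Containers[i].Resources.Requests` and `node.Status.Allocatable`:

```go
package main

import "fmt"

// Pod and Node carry only the fields needed for the ratio; in a
// real cluster they come from the Kubernetes API
// (Resources.Requests and Status.Allocatable respectively).
type Pod struct{ CPURequestMillis int64 }
type Node struct{ AllocatableCPUMillis int64 }

// scheduledRatio returns scheduled vs. available CPU: iterate over
// every pod to sum consumed capacity, then over every node to sum
// available capacity.
func scheduledRatio(pods []Pod, nodes []Node) float64 {
	var requested, allocatable int64
	for _, p := range pods {
		requested += p.CPURequestMillis
	}
	for _, n := range nodes {
		allocatable += n.AllocatableCPUMillis
	}
	if allocatable == 0 {
		return 0
	}
	return float64(requested) / float64(allocatable)
}

func main() {
	pods := []Pod{{500}, {500}, {700}} // 1700m requested in total
	nodes := []Node{{1000}, {1000}}    // 2000m allocatable in total
	r := scheduledRatio(pods, nodes)
	fmt.Printf("scheduled ratio: %.2f\n", r)
	if r > 0.8 { // the ~.8 threshold mentioned above
		fmt.Println("scale out")
	}
}
```

The resulting ratio is what gets written to the custom metric that the autoscaler watches.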


Big big big thing worth noting: I ultimately determined that scaling on the built-in CPU utilization metric was just as good as (if not better than; more on that in a bit) the custom metric. Each pod we scheduled pegged the CPU, so when pods were maxed out, so was CPU. The built-in CPU utilization metric is probably better because you don't have the latency that comes with periodically putting custom metrics.