Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: ulladz. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
Hi @ulladz. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Tip: we noticed you've done this a few times! Consider joining the org to skip this step. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed from e98cf05 to 250e2b8
/cc @x13n

/ok-to-test

/assign jbtk
```go
// Drainable decides what to do with on-completion pods on node drain.
func (r *Rule) Drainable(drainCtx *drainability.DrainContext, pod *apiv1.Pod, _ *framework.NodeInfo) drainability.Status {
	if drain.HasSafeToEvictOnCompletionAnnotation(pod) {
```
Why do we need this? Shouldn't we not even attempt a drain if there are pods with safe-to-evict=on-completion? I see that pods marked not safe to evict return that drain is blocked, and I believe that for these we would also not allow drain - do we?
I thought that we still wanted to drain the pods without safe-to-evict=on-completion. As a result, we will have an empty node that just waits for the on-completion pod to finish its work and scales down afterwards.
Isn't it that, in the end, we wanted to wait with actively draining the node until all on-completion pods are done, and rely only on the soft taint to prevent new pods from scheduling onto it if they fit elsewhere in the cluster? I might have misunderstood something though.
But it is only simulation, right? We want to add a node to unneededNodes and have it receive the soft taint to prevent new pods from scheduling. To do that, a node must succeed in SimulateNodeRemoval. Returning BlockDrain would immediately fail the simulation and prevent any soft-tainting.
The actual drain is then blocked in https://github.com/kubernetes/autoscaler/pull/9355/changes#diff-cbbb81f63d88e9a38a5aa943d9b243f59704b337ae8dffb14447ebe356aebbafR179
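For illustration, here is a minimal sketch of a rule that skips such pods during simulation instead of blocking the drain. The signature and HasSafeToEvictOnCompletionAnnotation come from the quoted diff; the status constructors and import paths are assumed from the existing drainability package, so this is not necessarily the PR's exact implementation:

```go
// Sketch only: a rule that treats on-completion pods as non-blocking during
// drain simulation, so SimulateNodeRemoval can still succeed, the node can be
// added to unneededNodes, and the soft taint can be applied.
package oncompletion

import (
	apiv1 "k8s.io/api/core/v1"

	"k8s.io/autoscaler/cluster-autoscaler/simulator/drainability"
	"k8s.io/autoscaler/cluster-autoscaler/simulator/framework"
	"k8s.io/autoscaler/cluster-autoscaler/utils/drain"
)

// Rule decides drainability for pods annotated safe-to-evict=on-completion.
type Rule struct{}

// Drainable decides what to do with on-completion pods on node drain.
func (r *Rule) Drainable(drainCtx *drainability.DrainContext, pod *apiv1.Pod, _ *framework.NodeInfo) drainability.Status {
	if drain.HasSafeToEvictOnCompletionAnnotation(pod) {
		// Not a blocking status: the simulated removal proceeds, while the
		// actual eviction of this pod is deferred until it terminates.
		return drainability.NewSkipStatus()
	}
	// Let the remaining rules decide for all other pods.
	return drainability.NewUndefinedStatus()
}
```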
+1 to @jbtk that we don't want to start draining pods before all on-completion pods have finished.
But based on the comment above, the logic already handles this. We are going to delay scale-down (which includes the pod eviction phase) as long as there is at least one non-terminated on-completion pod.
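As a rough sketch of that delay condition (the helper name and where it would be called from are assumptions, not the PR's code):

```go
package example

import (
	apiv1 "k8s.io/api/core/v1"

	"k8s.io/autoscaler/cluster-autoscaler/utils/drain"
)

// hasNonTerminatedOnCompletionPods reports whether any pod carries the
// on-completion annotation and has not yet reached a terminal phase.
// Scale-down actuation could keep waiting while this returns true, so eviction
// of the remaining pods only starts once every on-completion pod is done.
func hasNonTerminatedOnCompletionPods(pods []*apiv1.Pod) bool {
	for _, pod := range pods {
		if !drain.HasSafeToEvictOnCompletionAnnotation(pod) {
			continue
		}
		if pod.Status.Phase != apiv1.PodSucceeded && pod.Status.Phase != apiv1.PodFailed {
			return true
		}
	}
	return false
}
```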
Cool, thanks for confirming that this works as we planned.
cluster-autoscaler/simulator/drainability/rules/oncompletion/rule_test.go (outdated review thread, resolved)
Force-pushed from 250e2b8 to e6078a9

Force-pushed from e6078a9 to 285be0c
/test pull-autoscaling-e2e-gci-gce-ca-test

/lgtm

@x13n - please have a look.
What type of PR is this?
/kind feature
/kind api-change
What this PR does / why we need it:
This PR introduces the `cluster-autoscaler.kubernetes.io/safe-to-evict: "on-completion"` annotation value. It allows users to specify that a pod should not be actively evicted by the Cluster Autoscaler during a scale-down event, but rather should be allowed to run to completion.

When scaling down, the Cluster Autoscaler will wait for pods with this annotation to complete successfully rather than forcefully evicting them. Nodes containing only daemonset pods and "on-completion" pods are prioritized for scale-down after completely empty nodes, but before nodes requiring active pod eviction.
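A rough sketch of that ordering (illustrative names and types only, not the PR's actual processor code):

```go
package example

import "sort"

// candidate is an illustrative stand-in for a scale-down candidate node.
type candidate struct {
	name             string
	empty            bool // no non-daemonset pods at all
	onlyOnCompletion bool // every non-daemonset pod is safe-to-evict=on-completion
}

// priority assigns lower values to nodes that should be scaled down earlier.
func priority(c candidate) int {
	switch {
	case c.empty:
		return 0
	case c.onlyOnCompletion:
		return 1
	default:
		return 2
	}
}

// orderCandidates sorts candidates so completely empty nodes come first,
// followed by nodes with only on-completion (plus daemonset) pods, and
// finally nodes that would require active pod eviction.
func orderCandidates(cs []candidate) {
	sort.SliceStable(cs, func(i, j int) bool {
		return priority(cs[i]) < priority(cs[j])
	})
}
```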
This is particularly useful for AI/ML workloads or batch jobs that cannot be easily interrupted or checkpointed, ensuring they finish their computation naturally before the underlying node is scaled down.
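For reference, a pod opting into this behavior would carry the annotation shown below (a test-style construction for illustration, not code from this PR):

```go
package example

import (
	apiv1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// onCompletionPod builds a pod that asks the Cluster Autoscaler to let it run
// to completion instead of evicting it during scale-down.
func onCompletionPod() *apiv1.Pod {
	return &apiv1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "batch-job",
			Namespace: "default",
			Annotations: map[string]string{
				"cluster-autoscaler.kubernetes.io/safe-to-evict": "on-completion",
			},
		},
	}
}
```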
Special notes for your reviewer:
The PR adds a new drainability rule, `oncompletion`, which skips pods annotated with `cluster-autoscaler.kubernetes.io/safe-to-evict=on-completion`. To surface this state cleanly to the rest of the CA logic, the `simulator.GetPodsToMove` function was refactored to return a `PodMoveInfo` struct rather than multiple slice return values. This cleans up the signature and allows better handling of special pod categories like `OnCompletionPods`.
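A rough shape of such a result struct (field names here are illustrative assumptions, not necessarily those used in the PR):

```go
package simulator

import apiv1 "k8s.io/api/core/v1"

// PodMoveInfo (sketch) groups the results of GetPodsToMove into one value
// instead of several parallel slice return values.
type PodMoveInfo struct {
	// PodsToEvict are pods the autoscaler would actively evict on drain.
	PodsToEvict []*apiv1.Pod
	// DaemonSetPods are daemonset-owned pods, which are not evicted.
	DaemonSetPods []*apiv1.Pod
	// OnCompletionPods carry safe-to-evict=on-completion and should be
	// allowed to terminate on their own before the node is removed.
	OnCompletionPods []*apiv1.Pod
}
```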
The `ScaleDownNodeProcessor` sorting logic is also updated to prioritize nodes with only on-completion pods right after completely empty nodes.

Does this PR introduce a user-facing change?