Add safe-to-evict=on-completion #9355

Open
ulladz wants to merge 1 commit into kubernetes:master from ulladz:safe-to-evict-production

Conversation

@ulladz (Contributor) commented Mar 13, 2026

What type of PR is this?

/kind feature
/kind api-change

What this PR does / why we need it:

This PR introduces the cluster-autoscaler.kubernetes.io/safe-to-evict: "on-completion" annotation value. It allows users to specify that a pod should not be actively evicted by the Cluster Autoscaler during a scale-down event, but rather should be allowed to run to completion.

When scaling down, the Cluster Autoscaler will wait for pods with this annotation to complete successfully rather than forcefully evicting them. Nodes containing only daemonset pods and "on-completion" pods are prioritized for scale-down after completely empty nodes, but before nodes requiring active pod eviction.

This is particularly useful for AI/ML workloads or batch jobs that cannot be easily interrupted or checkpointed, ensuring they finish their computation naturally before the underlying node is scaled down.
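Usage is a single pod annotation; a minimal sketch, where the pod name and image are illustrative placeholders rather than anything from this PR:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-trainer            # illustrative name
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "on-completion"
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: example.com/trainer:latest   # illustrative image
```

With this annotation, the Cluster Autoscaler treats the pod as un-evictable but not as permanently blocking: the node can still be selected for scale-down once the pod terminates.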

Special notes for your reviewer:

This PR adds a new drainability rule, oncompletion, which skips pods annotated with cluster-autoscaler.kubernetes.io/safe-to-evict=on-completion. To surface this state cleanly to the rest of the CA logic, the simulator.GetPodsToMove function was refactored to return a PodMoveInfo struct rather than multiple slice return values. This cleans up the signature and allows better handling of special pod categories such as OnCompletionPods.
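A rough sketch of what such a struct-returning refactor could look like; apart from the OnCompletionPods field named above, the field names, the local Pod stand-in, and the helper function are assumptions for illustration, not the PR's actual code:

```go
package main

import "fmt"

// Pod is a local stand-in for apiv1.Pod, just for this sketch.
type Pod struct{ Name string }

// PodMoveInfo groups the pod categories GetPodsToMove can report,
// replacing multiple slice return values with one struct.
// Field names other than OnCompletionPods are hypothetical.
type PodMoveInfo struct {
	PodsToEvict      []*Pod // pods that must be actively evicted on drain
	DaemonSetPods    []*Pod // daemonset pods, recreated automatically elsewhere
	OnCompletionPods []*Pod // pods annotated safe-to-evict=on-completion: wait, don't evict
}

// nodeBlockedByOnCompletionPods reports whether the actual drain must wait:
// the node can still be soft-tainted as unneeded, but eviction is deferred
// until every on-completion pod has terminated.
func nodeBlockedByOnCompletionPods(info PodMoveInfo) bool {
	return len(info.OnCompletionPods) > 0
}

func main() {
	info := PodMoveInfo{
		PodsToEvict:      []*Pod{{Name: "web-1"}},
		OnCompletionPods: []*Pod{{Name: "training-job-0"}},
	}
	fmt.Println(nodeBlockedByOnCompletionPods(info)) // prints true: drain waits
}
```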

The ScaleDownNodeProcessor sorting logic is also updated to prioritize nodes with only on-completion pods right after completely empty nodes.

Does this PR introduce a user-facing change?

Introduced the `cluster-autoscaler.kubernetes.io/safe-to-evict: "on-completion"` annotation value. Pods with this annotation will not be forcefully evicted during node scale-down; instead, the Cluster Autoscaler will wait for them to run to completion naturally before scaling down the node. Nodes with only these pods are prioritized for scale-down over nodes with standard pods requiring eviction.

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. do-not-merge/needs-area labels Mar 13, 2026
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ulladz
Once this PR has been reviewed and has the lgtm label, please assign towca for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot (Contributor) commented:

Hi @ulladz. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work.

Tip

We noticed you've done this a few times! Consider joining the org to skip this step and gain /lgtm and other bot rights. We recommend asking approvers on your previous PRs to sponsor you.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added area/cluster-autoscaler needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Mar 13, 2026
@k8s-ci-robot k8s-ci-robot requested review from elmiko and feiskyer March 13, 2026 09:54
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed do-not-merge/needs-area labels Mar 13, 2026
@ulladz ulladz force-pushed the safe-to-evict-production branch 4 times, most recently from e98cf05 to 250e2b8 Compare March 16, 2026 09:54
@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Mar 16, 2026
@ulladz (Contributor, Author) commented Mar 16, 2026

/cc @x13n

@k8s-ci-robot k8s-ci-robot requested a review from x13n March 16, 2026 10:10
@ulladz ulladz marked this pull request as ready for review March 16, 2026 10:10
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 16, 2026
@jbtk (Member) commented Mar 16, 2026

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Mar 16, 2026
@jbtk (Member) commented Mar 19, 2026

/assign jbtk


```go
// Drainable decides what to do with on-completion pods on node drain.
func (r *Rule) Drainable(drainCtx *drainability.DrainContext, pod *apiv1.Pod, _ *framework.NodeInfo) drainability.Status {
	if drain.HasSafeToEvictOnCompletionAnnotation(pod) {
```
Member:
Why do we need this? Shouldn't we not even attempt drain if there are pods that have safe-to-evict=on-completion? I see that not safe to evict would return that drain is blocked and I believe that for these we would also not allow drain - do we?

Contributor (Author):

I thought that we still wanted to drain pods without safe-to-evict=on-completion. As a result, we will have an empty node that just waits for the on-completion pod to finish its work and scales down after

Member:

Isn't it that in the end we want to wait with actively draining the node until all on-completion pods are done, and rely only on the soft taint to prevent new pods from being scheduled there if they fit elsewhere in the cluster? I might have misunderstood something though.

Contributor (Author):

But it is only simulation, right? We want to add the node to unneededNodes and apply the soft taint to prevent new pods from scheduling. To do that, a node must succeed in SimulateNodeRemoval. Returning BlockDrain would immediately fail the simulation and prevent any soft-tainting.

The actual drain is then blocked in https://github.com/kubernetes/autoscaler/pull/9355/changes#diff-cbbb81f63d88e9a38a5aa943d9b243f59704b337ae8dffb14447ebe356aebbafR179

Contributor:

+1 to @jbtk that we don't want to start draining pods before all on-completion pods have finished. But based on the comment above, the logic already handles this: we delay scale-down (which includes the pod eviction phase) as long as there is at least one non-terminated on-completion pod.

Member:

Cool, thanks for confirming that this works as we planned.



@ulladz ulladz force-pushed the safe-to-evict-production branch from 250e2b8 to e6078a9 Compare March 24, 2026 13:25
@ulladz ulladz force-pushed the safe-to-evict-production branch from e6078a9 to 285be0c Compare March 24, 2026 13:33
@ulladz (Contributor, Author) commented Mar 25, 2026

/test pull-autoscaling-e2e-gci-gce-ca-test

@jbtk (Member) commented Mar 30, 2026

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 30, 2026
@jbtk (Member) commented Mar 30, 2026

@x13n - please have a look.
