feat: add prebuild timing metrics to Prometheus #19503
Conversation
COUNT(*) AS created_count
FROM workspaces w
INNER JOIN workspace_builds wb ON wb.workspace_id = w.id
	AND wb.build_number = 1
This metric counts workspaces whose first build succeeded. If the first build failed and a later build succeeded, that workspace is not included. We could change the query to count the first successful build instead (wd.transition='start' and pj.job_status='succeeded'); however, we would still need to exclude prebuild-created workspaces by verifying that the first build was not initiated by the prebuilds system user.
It is guaranteed that the first build of a prebuilt workspace will always be initiated by the prebuilds system user, right?
It is guaranteed that the first build of a prebuilt workspace will always be initiated by the prebuilds system user, right?
This assumption would fail if an admin were to create a workspace 'on behalf of' the prebuilds user. I don't see a logical reason why someone would do this though, as the reconciler would likely try to delete it?
This assumption would fail if an admin were to create a workspace 'on behalf of' the prebuilds user.
Is this possible? If yes, should we allow this?
as the reconciler would likely try to delete it?
The reconciliation loop would detect if the number of desired instances matches the number of running instances. If that were not the case, it would determine there is an extraneous instance, but it wouldn't necessarily delete this one; it just deletes the oldest prebuilt workspace: https://github.com/coder/coder/blob/main/coderd/prebuilds/preset_snapshot.go#L327
Is this possible?
Yes, it's possible and supported to create a workspace for another user:
$ coder create --help
coder v2.25.1-devel+dd867bd74
USAGE:
coder create [flags] [workspace]
Create a workspace
- Create a workspace for another user (if you have permission):
$ coder create <username>/<workspace_name>
If yes, should we allow this?
Good question :) It gets very weird to reason about, especially if the workspace that gets created is a prebuilt workspace...

I would consider it out of scope of this issue.
workspaceCreationTimings := prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Namespace: "coderd",
	Subsystem: "provisionerd",
This PR introduces histogram metrics that are registered at the API layer but observed in provisionerdserver. Because provisionerdserver doesn’t own a Prometheus registry, we inject a callback from the API to record observations. Both metrics are updated in provisionerdserver’s completeWorkspaceBuildJob.
Let me know what you think, or if it would be better to have the metrics on provisionerdserver or maybe even update them in another more appropriate place.
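For illustration, a minimal sketch of the callback-injection approach described above, assuming hypothetical names (WorkspaceTimingCallbacks, NewWorkspaceTimingCallbacks, RecordWorkspaceCreation) and placeholder metric details rather than the identifiers actually used in the PR:

```go
package example

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// WorkspaceTimingCallbacks is a hypothetical set of hooks the API layer hands to
// provisionerdserver, so it can record observations without owning a registry.
type WorkspaceTimingCallbacks struct {
	RecordWorkspaceCreation func(organization, template, preset, source string, d time.Duration)
}

// NewWorkspaceTimingCallbacks registers the histogram on the API's registry and
// returns a closure that provisionerdserver can call from completeWorkspaceBuildJob.
func NewWorkspaceTimingCallbacks(reg prometheus.Registerer) WorkspaceTimingCallbacks {
	creation := prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Namespace: "coderd",
		Subsystem: "provisionerd",
		Name:      "workspace_creation_duration_seconds",
		Help:      "Time to create a workspace.",
	}, []string{"organization_name", "template_name", "preset_name", "source"})
	reg.MustRegister(creation)

	return WorkspaceTimingCallbacks{
		RecordWorkspaceCreation: func(organization, template, preset, source string, d time.Duration) {
			creation.WithLabelValues(organization, template, preset, source).Observe(d.Seconds())
		},
	}
}
```

With this shape, provisionerdserver only sees the callback type, never the registry itself.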
We have already set a precedent in enterprise/coderd/prebuilds.NewStoreReconciler, where we inject api.PrometheusRegistry. Would it make sense to do the same here for consistency?
Yes, that makes sense 👍 Just wanted to check whether there was a more appropriate place to introduce these metrics before making that change.
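For contrast, a sketch of the registry-injection pattern mentioned above (the NewStoreReconciler precedent with api.PrometheusRegistry); the Metrics type and constructor here are hypothetical, not code from this PR:

```go
package example

import "github.com/prometheus/client_golang/prometheus"

// Metrics is a hypothetical holder for the histograms, owned by
// provisionerdserver itself when the registry is injected directly.
type Metrics struct {
	WorkspaceClaimTimings *prometheus.HistogramVec
}

// NewMetrics registers its own collectors on the injected registry, so no
// callback plumbing from the API layer is needed.
func NewMetrics(reg prometheus.Registerer) *Metrics {
	m := &Metrics{
		WorkspaceClaimTimings: prometheus.NewHistogramVec(prometheus.HistogramOpts{
			Namespace: "coderd",
			Subsystem: "provisionerd",
			Name:      "workspace_claim_duration_seconds",
			Help:      "Time to claim a prebuilt workspace.",
		}, []string{"organization_name", "template_name", "preset_name"}),
	}
	reg.MustRegister(m.WorkspaceClaimTimings)
	return m
}
```

The trade-off being discussed is essentially whether the metrics live next to the registry (API layer) or next to the code that observes them (provisionerdserver).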
NativeHistogramBucketFactor: 1.1,
// Max number of native buckets kept at once to bound memory.
NativeHistogramMaxBucketNumber: 100,
// Merge/flush small buckets periodically to control churn.
NativeHistogramMinResetDuration: time.Hour,
// Treat tiny values as zero (helps with noisy near-zero latencies).
NativeHistogramZeroThreshold: 0,
NativeHistogramMaxZeroThreshold: 0,
These are the native histogram parameters. If we choose this approach, we need to (edit: should) ensure our Prometheus server supports native histograms (still experimental) before deploying to dogfood. Additionally, we should update the documentation to note that we rely on this Prometheus feature.
As described in the PR, if Prometheus is not started with native histograms enabled, the metrics will automatically fall back to standard bucketed histograms using the bucket scheme listed above (aligned with the existing provisionerd metrics).
Note: these native histograms were tested locally with Prometheus v3.5.0.
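To make the fallback described in the PR concrete, a sketch of a HistogramOpts carrying both the classic Buckets and the NativeHistogram* fields; the metric name, labels, and bucket values below are placeholders, not the PR's actual definitions:

```go
package example

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// newDurationHistogram configures both representations on one metric: classic
// Buckets for scrapers without native histograms, and the NativeHistogram*
// fields for servers started with --enable-feature=native-histograms
// (requires prometheus/client_golang v1.15.0+).
func newDurationHistogram() *prometheus.HistogramVec {
	return prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Namespace: "coderd",
		Subsystem: "provisionerd",
		Name:      "example_duration_seconds",
		Help:      "Example duration histogram with classic and native configuration.",

		// Classic buckets, used when the scraper does not negotiate native histograms.
		Buckets: []float64{1, 10, 30, 60, 60 * 5, 60 * 10, 60 * 30, 60 * 60},

		// Native (exponential) histogram configuration.
		NativeHistogramBucketFactor:     1.1,
		NativeHistogramMaxBucketNumber:  100,
		NativeHistogramMinResetDuration: time.Hour,
	}, []string{"organization_name", "template_name", "preset_name"})
}
```

Scrapers without the feature flag see only the classic buckets; scrapers with it enabled can use the exponential-bucket data.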
we need to ensure our Prometheus server supports native histograms (still experimental)
I wouldn't assume that existing customers have an experimental Prometheus feature enabled. But as you say, there is a transparent fallback so this shouldn't be an issue?
But as you say, there is a transparent fallback so this shouldn't be an issue?
This shouldn’t be an issue while customers are using classic histograms, but there will inevitably be a migration phase once customers want to adopt native histograms (or if we decide to enforce them). From https://prometheus.io/docs/specs/native_histograms/#migration-considerations
Classic and native histograms cannot be aggregated with each other. A change from classic to native histograms at a certain point in time makes it hard to create dashboards that work across the transition point, and range vectors that contain the transition point will inevitably be incomplete (i.e. a range vector selecting classic histograms will only contain data points in the earlier part of the range, and a range vector selecting native histograms will only contain data points in the later part of the range).
From my perspective, we have three paths forward:

1. Enforce native histograms only:
   - Customers would need to explicitly enable the feature on their Prometheus server today to be able to see these metrics.
   - This is risky, since not all customers may be willing (or able) to do so yet.
   - Given that this feature was requested by a customer, this option might be a non-starter.
2. Support both native and classic histograms (the approach proposed in this PR):
   - Provides the best compatibility: customers without native histograms enabled continue using classic, while those with them enabled get the benefits.
   - At some point, there would still need to be a migration once customers decide to use native histograms (either by explicitly enabling them or once they become the default).
   - There are some interesting considerations here: https://prometheus.io/docs/specs/native_histograms/#migration-considerations
   - For our dogfood setup, it might make sense to mirror the expected customer journey:
     - Run without native histograms.
     - Enable native histograms in parallel with classic.
     - Fully transition to native.
   - This way, we can validate the migration path ourselves and uncover issues before customers do.
3. Stick with classic histograms only:
   - Safest and simplest option in the short term.
   - We can still migrate later once the feature becomes stable.
   - At that point, we could migrate all histogram metrics together, not just these new ones.
Without fully understanding the implications yet, my weak preference is to tread the expected path and find issues first!
I prefer option #2. A graceful fallback seems like a good idea, particularly given how wide you've made the fallback buckets. The metrics will be less useful, but still directionally helpful (i.e. you probably don't care if a claim took 5m or 6m or 7m or 10m to claim; anything above 5m is just awful).
We should add a follow-up to add the native histogram flag to coder/observability, and ensure everything works out-of-the-box. I'd prefer we did that before we merge this; otherwise, proof of local validation using the flag will suffice.
I like the approach you've taken here 👍
60,      // 1min
60 * 5,
60 * 10,
60 * 30, // 30min
60 * 60, // 1hr
I'd prefer we have higher resolution between 1 and 5m, since that'll actually be helpful, and cap at 5m because anything beyond 5m is so horrific that prebuilds are basically useless.
That’s a good point. I wasn’t completely sure what appropriate claim time values would be, since that can vary a lot from customer to customer. For this first draft, I just reused the same buckets we already use for other histograms, like https://github.com/coder/coder/blob/main/provisionerd/provisionerd.go#L203-L211 for workspace build times.
I agree with having higher resolution between 1m and 5m, I’m just not sure about capping at 5m 🤔 Looking at those existing values, it seems we’ve had builds that take between 30 minutes and 1 hour. Do we still expect claim times to reliably stay under 5 minutes even in those cases? Just curious how you arrived at the 5m mark 🙂
Looking at those existing values, it seems we’ve had builds that take between 30 minutes and 1 hour
For claims?
Do we still expect claim times to reliably stay under 5 minutes even in those cases? Just curious how you arrived at the 5m mark 🙂
If a prebuild claim takes longer than 5m, there's no point in using prebuilds. I chose 5m since it was the most realistic upper bound of the buckets you listed.
For claims?
No, the bucket definitions I was referring to are for workspace builds, more precisely coderd_provisionerd_workspace_build_timings_seconds (https://github.com/coder/coder/blob/main/provisionerd/provisionerd.go#L203-L211).
These record the time taken for a workspace to build, and I believe they were introduced before prebuilds existed. Based on these buckets, my assumption was that some workspace builds (without prebuilds) have taken on the order of 30m–1h, so I was curious whether in those cases a claim from a prebuilt workspace would still reliably fall under 5m.
Let's come back to the point of these metrics... Operators need to know when claims are taking too long, right? Any prebuilt workspace claim which exceeds 5m is pointless. Why do we need any more resolution above a threshold like this?
We're talking only about the fallback case here, btw. If we have native histograms then operators can see the "real" distribution, if they really need to.
We're talking only about the fallback case here, btw. If we have native histograms then operators can see the "real" distribution, if they really need to.
True, but my guess is that most customers with Prometheus are not using native histograms, so they will most likely use the classic histograms.
Why do we need any more resolution above a threshold like this?
I'm ok with capping at 5m. I was mainly curious if there’s already an agreement that 5 minutes is the maximum acceptable claim time for prebuilt workspaces to still be considered effective (especially taking into account longer build times for workspaces without prebuilds)
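To make the suggestion concrete, one possible classic bucket layout with finer resolution between 1m and 5m and a 5m cap; the exact values are illustrative only, not something agreed in this thread:

```go
package example

// Illustrative fallback buckets (seconds): fine-grained below one minute for
// fast claims, 30-second steps between 1m and 5m, and everything above 5m
// lands in the implicit +Inf bucket.
var claimDurationBuckets = []float64{
	5, 10, 15, 30, 45, // sub-minute claims
	60, 90, 120, 150, 180, // 1m–3m in 30s steps
	210, 240, 270, 300, // 3m–5m; anything above 5m falls into +Inf
}
```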
workspaceClaimTimings := prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Namespace: "coderd",
	Subsystem: "provisionerd",
It feels a bit awkward to have this namespaced in this way, as opposed to under coderd_prebuilt_workspaces.
Yes, I agree. Right now, the histogram metrics are set in the provisionerd server, which is why the subsystem was originally defined as provisionerd. I can rename this one to prebuilt_workspace, but then for consistency we should also update workspace_creation_duration_seconds.
An alternative would be to drop the provisionerd subsystem altogether and just go with:
- coderd_workspace_creation_duration_seconds
- coderd_prebuilt_workspace_claim_duration_seconds
An alternative
Yeah, that seems like a good solution. We already have metrics like coderd_workspace_builds_total.
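A sketch of what this naming alternative could look like once the provisionerd subsystem is dropped; only the metric and label names come from the discussion above, the rest (variable names, Help text) is placeholder:

```go
package example

import "github.com/prometheus/client_golang/prometheus"

// Dropping the Subsystem yields coderd_workspace_creation_duration_seconds and
// coderd_prebuilt_workspace_claim_duration_seconds, consistent with existing
// metrics such as coderd_workspace_builds_total.
var (
	workspaceCreationTimings = prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Namespace: "coderd",
		Name:      "workspace_creation_duration_seconds",
		Help:      "Time to create a workspace.",
	}, []string{"organization_name", "template_name", "preset_name", "source"})

	prebuiltWorkspaceClaimTimings = prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Namespace: "coderd",
		Name:      "prebuilt_workspace_claim_duration_seconds",
		Help:      "Time to claim a prebuilt workspace from the pool.",
	}, []string{"organization_name", "template_name", "preset_name"})
)
```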
🚧 Work in Progress
Requesting an initial review to validate the addition of new workspace creation/claim metrics to Prometheus. In particular: the metrics definitions as well as the implementation choices in prometheusmetrics.

Description
This PR introduces one counter and two histograms related to workspace creation and claiming. The goal is to provide clearer observability into how workspaces are created (regular vs prebuild) and the time cost of those operations.
workspace_creation_total

- Metric: coderd_api_workspace_creation_source_total
- Labels: organization_name, template_name, preset_name
This counter tracks whether a regular workspace (not created from a prebuild pool) was created using a preset or not.
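As an illustration, a hedged sketch of the counter's shape; the variable name and Help text are assumptions, and per the implementation notes below the values are refreshed from the database rather than incremented inline:

```go
package example

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical shape of coderd_api_workspace_creation_source_total. In the PR,
// the values come from the GetRegularWorkspaceCreateMetrics DB query, refreshed
// every defaultRefreshRate (1 minute), rather than from inline Inc() calls.
var workspaceCreationSourceTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
	Namespace: "coderd",
	Subsystem: "api",
	Name:      "workspace_creation_source_total",
	Help:      "Regular (non-prebuilt) workspace creations, split by whether a preset was used.",
}, []string{"organization_name", "template_name", "preset_name"})
```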
Currently we already expose coderd_prebuilt_workspaces_claimed_total for claimed prebuilt workspaces, but we lack a comparable metric for regular workspace creations. This metric fills that gap, making it possible to compare regular creations against claims.

Implementation notes:
- This is a coderd_api metric, consistent with other workspace-related metrics (e.g. coderd_api_workspace_latest_build: https://github.com/coder/coder/blob/main/coderd/prometheusmetrics/prometheusmetrics.go#L149).
- Every defaultRefreshRate (1 minute), the DB query GetRegularWorkspaceCreateMetrics is executed to fetch all regular workspaces (not created from a prebuild pool).

workspace_creation_duration_seconds & workspace_claim_duration_seconds

- Metric: coderd_provisionerd_workspace_creation_duration_seconds
  - Labels: organization_name, template_name, preset_name, source (distinguishes between regular and prebuilt workspaces)
- Metric: coderd_provisionerd_workspace_claim_duration_seconds
  - Labels: organization_name, template_name, preset_name
We already have coderd_provisionerd_workspace_build_timings_seconds, which tracks build run times for all workspace builds handled by the provisioner daemon. However, in the context of this issue we are only interested in creation and claim build times, not all transitions; additionally, that metric does not include preset_name, and adding it there would significantly increase cardinality. Therefore, separate, more focused metrics are introduced here:

- workspace_creation_duration_seconds: Build time to create a workspace (either a regular workspace, or the initial provisioning build into a prebuild pool).
- workspace_claim_duration_seconds: Time to claim a prebuilt workspace from the pool.

The reason for two separate histograms is that:
Native histogram usage

Provisioning times vary widely between projects. Using static buckets risks unbalanced or poorly informative histograms. To address this, these metrics use Prometheus native histograms, which require:

- prometheus/client_golang v1.15.0+
- a Prometheus server with native histograms enabled (--enable-feature=native-histograms)

For compatibility, we also retain a classic bucket definition (aligned with the existing provisioner metric: https://github.com/coder/coder/blob/main/provisionerd/provisionerd.go#L182-L189).

Implementation notes:
Closes: #19528