Beyond Basic Scaling: Advanced Kubernetes Resource Strategies
This was originally published on Sept. 30, 2025, and has been updated with additional information.
Trying to set resource requests and limits in Kubernetes is kind of like the story of “Goldilocks and the Three Bears”: Overprovision and you give developers abundant compute but waste money, energy and other resources. Underprovision and you slow developers down, leaving them frustrated and likely costing you money as new products and features take longer to ship.
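For readers new to the mechanics: requests and limits are set per container in the pod spec. A request is what the scheduler reserves; a limit is the hard ceiling enforced at runtime. A minimal example (the values here are illustrative, not recommendations):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:          # reserved for scheduling decisions
        cpu: "250m"      # 0.25 of a CPU core
        memory: "256Mi"
      limits:            # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "512Mi"
```

Set the request too high and the node's capacity is reserved but idle; set it too low and the container is packed onto nodes that can't actually serve its peaks.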
Improperly configured resources can lead to application instability, poor cluster utilization and performance issues. In a recent TNS webinar, “Automating K8s Resource Management To Reduce Developer Burden,” Andrew Hillier, co-founder and CTO of Densify, explained how inefficiencies in Kubernetes, such as overprovisioned containers and unused capacity, waste developers’ time and reduce their productivity.
“Developers should be focusing on time-to-market of new application features, new services. You know, that’s what makes the company competitive. And having them spend a lot of time trying to figure out what their utilization of their container was for the last few months so they can correct some numbers isn’t really the best use of their time,” explained Hillier.
Kubernetes’ complexity is a big part of the problem, he said. “You can have things that are too small, things that are too big, things that don’t have values, and it all makes this big mess inside the environment where you could be running on more infrastructure than you need. You could be running on less. You want to fix this, because there’s a lot of risk and a lot of waste. But the problem is, there are thousands of these things.”
How To Get to ‘Just Right’
The goal is to get to “just right” — to give developers the right amount of resources — plenty to meet their needs without overprovisioning and wasting money.
This is where a reliable, efficient and automated Kubernetes resource management system becomes so valuable for teams. It right-sizes resources, abstracts complexity and reduces unnecessary friction for developers building and shipping software.
To explain why this is so helpful to teams, Hillier walked through a demo of Densify’s technology that, rather than using guesswork and manual effort, automates Kubernetes resource management. By adjusting container settings dynamically, enterprises can optimize resources based on teams’ needs to save costs and reduce risk. He also showed an AI interface that enables users to ask questions and receive actionable insights, further streamlining resource management and reducing the burden on developers.
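Densify’s actual algorithms are proprietary, but the core idea behind automated right-sizing can be sketched simply: derive a recommended request from historical utilization percentiles plus a safety margin, rather than from guesswork. All function names, parameters and thresholds below are hypothetical, chosen only to illustrate the approach:

```python
def recommend_cpu_request(samples_millicores, percentile=0.95,
                          headroom=1.2, floor=50):
    """Suggest a CPU request (in millicores) from observed usage samples.

    percentile: fraction of samples the request should cover
    headroom:   safety multiplier on top of the observed percentile
    floor:      minimum request, so idle containers aren't starved
    """
    ordered = sorted(samples_millicores)
    # Nearest-rank percentile: index of the sample covering `percentile`
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    covered = ordered[idx]
    return max(round(covered * headroom), floor)

# A container that mostly idles but bursts toward ~400m:
usage = [40, 55, 60, 80, 120, 150, 200, 390, 400, 410]
print(recommend_cpu_request(usage))
```

Run continuously across thousands of containers, a calculation like this replaces the manual chore Hillier describes: digging through months of utilization data to correct a handful of numbers.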
Watch This Free Webinar On Demand!
What You’ll Learn
By watching, you’ll leave with best practices, real-world examples and actionable tips, including:
- Understand the typical cost and risk factors impacting Kubernetes environments.
- Apply policy-based approaches to automatically optimize container requests and limits.
- See why continuous optimization of node capacity and scaling is critical.
- Learn automation strategies — and the pitfalls to avoid — when scaling optimization initiatives.
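On the policy-based point above: the idea is that sizing rules vary by environment rather than being hand-tuned per container. A hypothetical sketch (every name and number here is invented for illustration, not taken from Densify):

```python
# Hypothetical per-environment policies: production gets more headroom
# and covers a higher percentile of observed load; dev tolerates
# tighter packing. All values are illustrative.
POLICIES = {
    "production": {"target_percentile": 0.99, "headroom": 1.5},
    "dev":        {"target_percentile": 0.90, "headroom": 1.1},
}

def apply_policy(env, observed_peak_millicores):
    """Scale an observed CPU peak by the environment's safety headroom."""
    policy = POLICIES[env]
    return round(observed_peak_millicores * policy["headroom"])

# The same observed peak yields a larger request in production than in dev:
print(apply_policy("production", 400))
print(apply_policy("dev", 400))
```

Expressing the rules once, as policy, is what lets an automated system optimize thousands of containers without pulling each developer into the loop.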