Technology changes. Tools change. Acronyms change—sometimes daily. But the underlying problem doesn’t: go from idea to value efficiently, safely, and reliably.
DevOps gets described in many different ways: a movement, a practice, a culture, a category, a tool, a job title. At its core, DevOps is about optimizing the speed of new features and the reliability of releases. Put more simply: it's the optimization of work. To understand where we are going, you need to know how we got here, and what exactly we mean by work.
The Goal: A Process of Ongoing Improvement, written by Eliyahu M. Goldratt in 1984, is not only still relevant but surprisingly entertaining. It revolves around a fictionalized Socratic dialog between Jonah, the professor, and Alex, the plant manager. The story of Alex battling to save a plant that may be shut down popularized Goldratt's Theory of Constraints. The premise? Work flows (not workflows), constraints matter, and optimizing the system (not each individual component) is how you win. DevOps applies the same concepts developed in The Goal to software delivery.
The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win, published in 2013 by Gene Kim, George Spafford, and Kevin Behr, updated The Goal for our current era of software development. A new story for a different time. Now the Socratic pair is Erik, the eccentric board member, and Bill, the newly promoted VP of IT Operations who must save the chronically failing Phoenix Project (and the company).
DevOps: The AHA! Moment
These books have been around for a while. When did I read them? The summer of 2025! Even without this historical context, DevOps made a big impression on me. For many years of my career, development and operations lived in friction. Releases were infrequent and difficult. Things broke. Ops got blamed for dropping the ball in the endzone. Dev got blamed for handing off code that shouldn't have been near production.
When I first heard the word DevOps, I'll admit, it felt like a revelation. DevOps, literally a mashup of Development and Operations, showed that closer collaboration, better tools, and automation could speed feature delivery and increase operational stability. I wasn't thinking about how work flows or system constraints; I was "automating everything." But the side effects of these efforts? Handoffs, waiting, rework, and mistrust dropped. Flow improved. Releases became faster. Systems became more reliable. Work was optimized.
Security: Turns Out the Internet Can Be Dangerous
While the Internet made information systems accessible, vulnerabilities in the software running those systems made them exploitable. Often, security was ignored—or set up to fail—slapped on at the end, but still expected to prevent problems that had already been coded and deployed.
DevSecOps addressed this by pulling security into the system rather than treating it as a gate:
Build securely from the start
Automate controls
Share responsibility
Security does not have to be a bottleneck. It can be part of the flow. Features can move fast—and safely.
Automation: Solving One Problem, Creating Another
DevOps leans heavily on automation:
“If you have to do it twice, automate it.”
This work has dramatically changed how software is released and how operations are managed. Traditional automation is fast, but GenAI (including Machine Learning) is driving change faster still. With GenAI, automation penetrates more of the stack:
Policy generation
Code development
Infrastructure deployment
Incident remediation
Breadth has increased dramatically—but control has fallen behind.
A Philosophical Detour (Because I Can’t Help Myself)
DevOps shortened the distance between thought and reality. A developer has an idea. They type code. That idea is suddenly running somewhere, affecting real people—sometimes in ways no one fully anticipated.
GenAI compresses that distance further. Ideas, implementations, and changes now propagate faster than Monday morning coffee disappears. At some point, you have to pause—preferably while sipping said coffee—and ask:
Who—or what—is actually in charge here?
Enter GovOps: Because Someone Has to Be in Charge
Governance has always existed, yet it is generally unappreciated and typically done poorly, or just well enough to meet regulatory requirements. Governance has not lived in the system. It's been policies, audits, and slide decks no one reads. Continuous delivery and AI-generated everything break that model. We can no longer afford the risk of leaving Governance out of the loop. We need continuous Governance.
So we do what we did with DevOps and DevSecOps: pull governance into the flow. This is GovOps. Does that sound counterintuitive? Consider: AI is driving a rapid advancement in automation, and these improvements will enable a change in how we think about Governance.
DevSecOps → security controls embedded into code (continuous security)
Then:
GovOps → governance controls embedded into code (continuous, automated compliance)
GovOps: Making Governance Relevant
GovOps isn't about adding more controls; it's about operationalizing Governance. It is setting standards, validating control implementation, and providing observability, with audit reports on request. Expectations are declarative:
“We require encryption” → enforce it in code
“Review access quarterly” → monitor it continuously
“Produce audit evidence” → the system generates it automatically
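As a sketch of what those declarative expectations can look like in practice, here is a minimal, hypothetical policy-as-code check. The resource fields, rule names, and thresholds are illustrative assumptions, not tied to any specific tool or real environment:

```python
# Minimal policy-as-code sketch (hypothetical resources and rules):
# declarative expectations checked continuously in the pipeline.
RESOURCES = [
    {"name": "customer-db", "encrypted": True,  "last_access_review_days": 45},
    {"name": "log-bucket",  "encrypted": False, "last_access_review_days": 200},
]

def violations(resources):
    """Return policy violations: encryption required, access reviewed quarterly."""
    found = []
    for r in resources:
        if not r["encrypted"]:
            found.append((r["name"], "encryption required"))
        if r["last_access_review_days"] > 90:
            found.append((r["name"], "access review overdue"))
    return found

if __name__ == "__main__":
    # The printed list doubles as audit evidence, generated by the system.
    for name, rule in violations(RESOURCES):
        print(f"FAIL {name}: {rule}")
```

The point is not the checks themselves but where they live: in the flow, run on every change, rather than in a policy document reviewed once a year.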
What now lives in stale policy documents and inscrutable standards will become the AI prompts of our security and compliance agents. These prompts will define responsibilities, set limits, and enforce standards throughout the system with constant observability. Changes to standards (changes to the prompts governing agents) will have an immediate impact once deployed to the system.
And here we go again.
The Pattern Should Look Familiar
We are repeating the same process:
Identify the constraint
Pull it into the system
Automate
Improve flow
DevOps did it for delivery. DevSecOps did it for security. GovOps does it for governance.
Reality Check
It may sound neat, but it’s going to be messy:
People resist change
Organizations protect silos
Each new layer (DevOps, DevSecOps, GovOps) gets turned into a tool category before it’s understood as a way of working
Where This Lands
If The Goal taught us to optimize systems, and DevOps applied that to software delivery, what we’re doing now is extending that thinking across the lifecycle:
Build it
Secure it
Prove it
Continuously.
We’re not just doing more work. We’re not just doing it faster. We’re understanding how work flows, where it breaks, and how to fix it—systematically. Everything else—tools, AI, acronyms—is just how we’re expressing that idea right now.
TruRisk™ is a metric that calculates asset risk based on the severity of identified threats and vulnerabilities and the value of the asset. Qualys publishes the equation for TruRisk™, enabling inspection and unfiltered feedback by unbiased parties. Notwithstanding that the equation is published, the length of the TruRisk™ equation and its extensive use of acronyms made it too opaque for me to understand and trust at face value. So I tested it.
While TruRisk™ sounds like something immutable, it turns out that Qualys has multiple versions. Different products default to different versions. Not surprisingly, these different versions score risk differently. Surprisingly, these differences can be significant! Consider an asset that has the highest importance and is exposed to the internet: how many critical vulnerabilities does it take for these different TruRisk™ formulas to score that asset at critical risk?
I determined that Qualys TruRisk™ v2.0, the default in the Qualys Enterprise Threat Management (ETM) product, requires 87 critical vulnerabilities to rank an asset (with the highest asset value and the most exposure) at CRITICAL risk (full analysis below). Exploitation of just one of those 87 vulnerabilities could lead to a data breach. Is 87 the optimal number for scoring an asset at CRITICAL risk?
To muddle the matter more, I found that TruRisk™ v1.0, the default in the Qualys Vulnerability Management, Detection and Response (VMDR) product, scores risk much more aggressively than ETM. TruRisk™ v1.0 for VMDR requires only 15 vulnerabilities to rank an asset at critical risk.
I shared my findings with colleagues and was encouraged to share them directly with Qualys. Via LinkedIn, I connected with Kunal Modasiya, Senior Vice President of Product, GTM & Growth at Qualys, and shared my research. Kunal had April Lenhard, Principal Product Manager, Cyber Threat Intelligence, reach out to me to discuss. Eventually, I met with April; Russ Sunderlin, Director, Subject Matter Expert, VMDR; and Anthony Williams, Senior Subject Matter Expert, VMDR, to explain the differences in scoring between TruRisk™ v1.0 and TruRisk™ v2.0.
If you're unfamiliar with Vulnerability Management and Measuring Risk, the following explainers are meant to provide a brief overview of the concepts involved. If you’re familiar with these concepts, skip down to “Analysis: Measuring Asset Risk with Qualys TruRisk™” to find out what I learned.
Explainer: Vulnerability Management and Measuring Risk
When it comes to Vulnerability Management (the patching and updating of systems and applications to remediate vulnerabilities), keeping up with an ever-growing number of threats and an expanding attack surface is a real struggle. There are simply too many devices, with too much software, interacting with too many things to keep up. And if you're falling behind on patching, the number of vulnerabilities grows. As the number of unmitigated vulnerabilities grows, your risk of a breach grows too.
Automation has improved the ability of Operations and Information Security teams to remediate vulnerabilities, but if we want to keep up, simple automation is not enough. We need to manage our finite resources and prioritize remediation based on quantitative measures of risk.
While there are any number of definitions for risk, auditors, Information Security professionals, and risk managers have standardized on a definition for their mutual use. They define asset risk as the product of threat severity, asset exposure, and business impact:
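Expressed as a sketch, with illustrative placeholder scales rather than any vendor's actual units:

```python
def asset_risk(threat_severity, asset_exposure, business_impact):
    # The standardized definition: asset risk is the product of the
    # three factors. The input scales here are illustrative placeholders.
    return threat_severity * asset_exposure * business_impact

# Example: a severe threat (100) on an internet-exposed (1.2),
# business-critical (5) asset.
print(asset_risk(100, 1.2, 5))  # 600.0
```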
The assessment of risk drives the expectations for how quickly a risk must be mitigated. In the image below, the TruRisk™ score is 801, which is classified as “High” risk.
These expectations are often codified in regulations and standards like the Payment Card Industry Data Security Standard (PCI DSS), HIPAA, and FedRAMP. Vulnerabilities must be risk assessed and remediated accordingly. Typical requirements for remediation look something like this:
| Severity | Qualys TruRisk Rating | Remediation Required Within |
|----------|-----------------------|-----------------------------|
| Critical | > 850                 | Seven days                  |
| High     | 700-849               | 30 days                     |
| Medium   | 500-699               | 90 days                     |
| Low      | < 500                 | 180 days                    |
So, for the example above, the TruRisk™ score of 801 is classified as High, and the remediation table indicates the system must be remediated within 30 days. As environments grow, so do the time, energy, and expense required to remediate critical vulnerabilities within seven days. Because seven-day remediation is expensive, accurate risk assessment is necessary to limit that critical work to only what is actually necessary.
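The remediation table maps directly to a simple lookup. The thresholds below come from the sample requirements above; note the table leaves a score of exactly 850 unassigned, so this sketch treats it as High:

```python
def remediation_window_days(trurisk_score):
    # Thresholds from the sample remediation requirements table.
    if trurisk_score > 850:
        return 7     # Critical
    if trurisk_score >= 700:
        return 30    # High (the 700-849 band)
    if trurisk_score >= 500:
        return 90    # Medium
    return 180       # Low

print(remediation_window_days(801))  # 30 -- the High-risk example above
```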
Analysis: Measuring Asset Risk with Qualys TruRisk™
TruRisk™ v1.0 was developed by Qualys for their premier product, Vulnerability Management, Detection and Response (VMDR). VMDR has been in use for years and is a mature product. For VMDR, the equation is expressed:
To calculate TruRisk™, Qualys uses eight variables, four weights, and six nested functions, all expressed as coded terms. Unpacking the variables, they come in two flavors: Asset Factors and Threat Factors. Asset Factors reflect the value of the asset and its exposure. Threat Factors represent the severity of the threat.
| Term | Name | Description | Value |
|------|------|-------------|-------|
| Variable - Asset Factor | ACS (Asset Criticality Score) | Asset criticality is determined by the business. ACS is a simplified measure of the Single Loss Expectancy of an asset. | 1 - 5 |
| Variable - Asset Factor | External (Asset Exposure) | Whether an asset is exposed (typically to the Internet). When an asset is exposed, it is more likely to be compromised. | 1 - 1.2 |
| Variable - Threat Factor | MaxQDS (Maximum Qualys Detection Score) | The maximum QDS across detected QIDs. QID is a Qualys proprietary system for identifying and categorizing vulnerabilities; a single QID can contain multiple CVEs. | 0 - 100 |
| Variable - Threat Factor | QDSc | Count of critical detections (QIDs) | count |
| Variable - Threat Factor | QDSh | Count of high detections (QIDs) | count |
| Variable - Threat Factor | QDSm | Count of medium detections (QIDs) | count |
| Variable - Threat Factor | QDSl | Count of low detections (QIDs) | count |
| Variable - Threat Factor | g | Weight based upon the Max QDS (QDSc = 1.3, QDSh = 1.1, QDSm = 1, QDSl = 1) | 1 - 1.3 |
| Weight | Wc | Weight, critical | 0.8 |
| Weight | Wh | Weight, high | 0.15 |
| Weight | Wm | Weight, medium | 0.03 |
| Weight | Wl | Weight, low | 0.02 |
Simplifying:
Impact = ACS
Likelihood = External
Threat Severity = The computation of QDS and the buckets for severities
Finally (and if we leave off the MIN(X, 1000), which simply caps the score at 1000), we get:
TruRisk™ v1.0 Score = [Impact * Likelihood] * [Threat Severity (calculated with MaxQDS) + Threat Severity (calculated with count of detections)]
This actually looks a lot like the canonical equation for calculating risk, although the canonical risk score is the product of two variables, while TruRisk™ is the product of three. A term that is the product of three variables is more complex than a product of two and may lead to unexpected results.
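Structurally, the simplified v1.0 equation can be sketched as follows. The two threat-severity sub-terms are placeholder inputs here; the full published sub-equations that produce them are not reproduced in this article:

```python
def trurisk_v1(impact, likelihood, ts_from_max_qds, ts_from_counts):
    # v1.0 structure: the count-based threat term is added to the
    # MaxQDS-based term BEFORE multiplying by impact and likelihood.
    # MIN(x, 1000) caps the final score at 1000.
    return min(impact * likelihood * (ts_from_max_qds + ts_from_counts), 1000)
```

With impact (ACS) capped at 5 and likelihood (External) capped at 1.2, the bracketed threat sum is scaled by a multiplier of at most 6 before the 1000-point cap applies.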
Qualys Enterprise Threat Management (ETM) is a relatively new product and lacks the maturity of the flagship VMDR. For ETM, Qualys utilizes TruRisk™ v2.0. It is a similar-looking equation, but it is different. Acronyms like QDSc have been replaced with somewhat more human-readable terms like numCriticalDetections. The capping of the score at 1000 is expressed in a separate, simpler equation.
More significantly, the TruRisk™ v1.0 model is based on Qualys Identification (QID), a proprietary system for identifying and categorizing vulnerabilities in which a single QID can contain multiple CVEs. Common Vulnerabilities and Exposures (CVEs) are published by the MITRE Corporation and sponsored by the US Cybersecurity and Infrastructure Security Agency (CISA). The new TruRisk™ v2.0 model is instead based on CVE, with the score calculated from individual CVEs. Qualys explained that this change is necessary for the ETM product to aggregate threats detected by non-Qualys applications and systems that rely on CVE to score threat severity.
| Term | Name | Description | Value |
|------|------|-------------|-------|
| Variable - Asset Factor | ACS (Asset Criticality Score) | Asset criticality is determined by the business. ACS is a simplified measure of the Single Loss Expectancy of an asset. | 1 - 5 |
| Variable - Asset Factor | External (Asset Exposure) | Whether an asset is exposed (typically to the Internet). When an asset is exposed, it is more likely to be compromised. | 1 - 1.2 |
| Variable - Threat Factor | MaxDetectionScore | Maximum CVSS. CVSS is a severity score generated for each CVE. CVEs are published by the MITRE Corporation and sponsored by the US Cybersecurity and Infrastructure Security Agency (CISA). | 0 - 100 |
| Variable - Threat Factor | numCriticalDetections | Count of critical detections (CVEs) | count |
| Variable - Threat Factor | numHighDetections | Count of high detections (CVEs) | count |
| Variable - Threat Factor | numMediumDetections | Count of medium detections (CVEs) | count |
| Variable - Threat Factor | numLowDetections | Count of low detections (CVEs) | count |
| Variable - Threat Factor | g | Weight based upon the Max CVSS (critical = 1.3, high = 1.1, medium = 1, low = 1) | 1 - 1.3 |
| Weight | WtCrt | Weight, critical | 0.8 |
| Weight | WtHigh | Weight, high | 0.15 |
| Weight | WtMed | Weight, medium | 0.03 |
| Weight | WtLow | Weight, low | 0.02 |
While moving from proprietary QID to the widely used industry standard CVE is a notable difference, that change should not result in significantly different risk assessments. But there is a change that results in significantly different outputs. The positioning of the brackets, and hence the order of operations to calculate the Risk Score, are not the same.
Comparing the simplified equations: in v1.0, the third term, Threat Severity (calculated with count of detections), is added to the second term BEFORE the sum is multiplied by the first. In v2.0, the third term is added AFTER the product of the first and second terms is computed. In v2.0, the third term contributes additively, not multiplicatively.
And when you start pushing numbers through the formulas, the results are markedly different. This change has real-world implications: the new equation is less aggressive at assigning critical risk, which means that users of Qualys ETM and TruRisk™ v2.0 will have a rosier view of their enterprise than those using TruRisk™ v1.0 with VMDR.
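To make the bracket difference concrete, here is an illustrative comparison. The threat-severity terms are placeholder numbers standing in for the full published sub-equations, so the outputs show only the structural effect of the bracket placement, not actual TruRisk™ scores:

```python
# Illustrative comparison of the two bracket placements. The threat
# terms (ts_max, ts_counts) are placeholders, not outputs of the full
# published sub-equations.
def score_v1(impact, likelihood, ts_max, ts_counts):
    # v1.0: third term added to the second BEFORE multiplying by the first
    return min(impact * likelihood * (ts_max + ts_counts), 1000)

def score_v2(impact, likelihood, ts_max, ts_counts):
    # v2.0: third term added AFTER the product -- it contributes additively
    return min(impact * likelihood * ts_max + ts_counts, 1000)

# Highest-value (impact = 5), internet-exposed (likelihood = 1.2) asset:
impact, likelihood = 5, 1.2
ts_max, ts_counts = 100, 60
print(score_v1(impact, likelihood, ts_max, ts_counts))  # 960.0
print(score_v2(impact, likelihood, ts_max, ts_counts))  # 660.0
```

Because the multiplicative portion of v2.0 tops out at impact × likelihood × MaxDetectionScore, the count-based term must close the remaining gap to the critical threshold on its own, which is consistent with the far larger number of critical vulnerabilities v2.0 requires.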
The calculator I built to test the scoring can be found here: TruRisk_Calculator.
Conclusion: Measuring Asset Risk with Qualys TruRisk™
Calculating risk, and more broadly using equations to model real-world systems, is inherently complex. Even when the details of an equation are provided, as Qualys has done for their TruRisk™ model, the meaning and behavior of the equation may be opaque and generate unintended outputs.
In my meeting with the SMEs at Qualys, they explained the evolution of the different versions of TruRisk™, the change from QID to CVE for identifying vulnerabilities, which versions are provided by default in which products, and, finally, the intention to align all products on TruRisk™ v2.0. However, this feedback did not address my headline concern: that TruRisk™ v2.0 scores risk much more rosily than TruRisk™ v1.0. Both models are working as functionally intended; they provide functionality for scoring risk and prioritizing remediation, but the material change to the design, and the resulting significant changes to the outputs, went unidentified.
Users of ETM (typically Senior leaders and managers) will have a rosier view of risks to the enterprise than team members using VMDR.
Qualys needs to:
Revamp their documentation related to TruRisk™, eliminate overlapping articles, and streamline the story.
Develop and implement design-level controls to ensure continuity between updated versions of TruRisk™; if such controls already exist, they need improvement to catch design failures like this one.
GenAI and advanced algorithms introduce new risks even when they are used for risk management. What happened with Qualys TruRisk™ demonstrates the threat. Updates to algorithms are necessary to improve accuracy, remove defects, and increase utility, but unintended changes can be introduced at unexpected times, including during design. Information Security and Quality Assurance need to be included during design, and keeping a human in the loop is essential: changes need scrutiny by critically thinking humans. We must closely evaluate our tools and the vendors that produce them. Trust your vendor, but verify.