7 Key Performance Indicators When Testing PLM Solutions
Evaluating and testing PLM solutions requires a clear understanding of business objectives, processes, and the levels of automation of enterprise platforms and interfaces. Key performance indicators (KPIs) are used to measure and track success. Success is defined in the context of given critical success factors (CSFs), which describe business expectations and high-level requirements in tangible terms.
KPIs are used in the context of business benefit realization; they are also used in the context of enterprise solution implementations, to track delivery progress and quality. Business value and delivery quality are two different things: both important, interconnected, but not to be confused. As with any balanced scorecard, KPIs combine elements of strategy, execution, support and other functions. When it comes to testing PLM solutions, there are KPIs to consider when tracking progress toward verification and validation of enterprise solutions, ahead of their rollout to the business.
In this post, I elaborate on 7 KPIs to consider when testing PLM solutions (non-exhaustive list).
In project management terms, “testing” is a subset of quality control (PMBOK, PRINCE2 and other practices will support this principle). Verification and validation refer to different levels of testing, checking, double-checking, confirming, etc.
Adding a level of nuance, IEEE-STD-610 defines “verification” and “validation” as follows:
Verification: “a test of a system to prove that it meets all its specified requirements at a particular stage of its development.”
Validation: “an activity that ensures that an end product stakeholder’s true needs [a.k.a. requirements] and expectations are met.”
In software development terms, verification refers to "building the product right" (i.e., ensuring the solution is built as designed), while validation refers to "building the right product" (i.e., ensuring it is meeting expectations and delivering value).
There are different types of testing activities: unit testing, functional testing, non-functional testing (performance, usability, reliability, fail-over, etc.), branch testing, integration testing, end-to-end testing, stress and load testing, boundary testing, smoke testing, regression testing, etc. (non-exhaustive list).
Testing PLM solutions refers to more than just “software testing”. PLM platforms, apps and other tools are COTS solutions, hence prebuilt. It is important to check that they are “fit-for-purpose” and configured / customized / integrated per a given scope and per agreed business storyboards (tailored to a given organizational context without compromising future upgrades or changes). Hence, testing PLM solutions involves elements of verification combined with thorough validation; it is not about repeating each and every out-of-the-box unit test, which vendors perform upfront.
Typical KPIs for testing PLM solutions include:
1. Defined “minimum viable product” use cases, with traceability to the associated met / not met requirements: it is not a matter of testing each and every out-of-the-box capability, but of focusing on the step-by-step capabilities and processes which will be formally applied / used to satisfy a given set of business requirements.
2. Tangible (and sufficient) business benefits, with associated rationale: there should be no doubt about the value the solution is expected to deliver, which focuses the effort of verifying and validating the PLM solution.
3. Quality of business storyboards and strategic alignment with the PLM platform “good” practices; deviations must be duly justified and approved by knowledgeable solution architects and business analysts.
4. Decision tracking metrics, with risk and impact assessment against the relevant elements and implications (quality, timing and cost).
5. Number and type of open issues, incidents, defects, bugs, etc., classified by severity and importance (impact in the wider enterprise vs the relative business context, including reducing possible “emotional” implications).
6. Triage and response time in addressing issues and implementing corrective actions and workarounds.
7. Severity and impact of potential showstoppers and blocking issues, together with remediation and associated risk mitigation plans.
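To make the first KPI concrete, the traceability from MVP use cases to met / not met requirements can be reduced to two simple ratios: the share of requirements covered by at least one use case, and the share of covered requirements currently met. The sketch below is a minimal, hypothetical illustration (all requirement IDs, use-case names and statuses are invented, not tied to any particular PLM platform):

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str
    status: str = "not tested"  # "met", "not met" or "not tested"

@dataclass
class UseCase:
    name: str
    requirement_ids: list

def coverage_report(use_cases, requirements):
    """Return (share of requirements traced to an MVP use case,
    share of traced requirements currently marked as met)."""
    by_id = {r.req_id: r for r in requirements}
    traced = {rid for uc in use_cases for rid in uc.requirement_ids} & by_id.keys()
    traced_share = len(traced) / len(by_id)
    met = [rid for rid in traced if by_id[rid].status == "met"]
    met_share = len(met) / len(traced) if traced else 0.0
    return traced_share, met_share

# Hypothetical sample data
reqs = [
    Requirement("R1", "Create engineering change request", "met"),
    Requirement("R2", "Release BOM to ERP", "not met"),
    Requirement("R3", "CAD check-in / check-out", "met"),
]
cases = [
    UseCase("Change management storyboard", ["R1", "R2"]),
    UseCase("Design release storyboard", ["R2"]),
]

traced, met = coverage_report(cases, reqs)
print(f"traced: {traced:.0%}, met among traced: {met:.0%}")  # traced: 67%, met among traced: 50%
```

In practice these figures would come from a requirements management or test management tool rather than hand-coded lists; the point is that the KPI is only meaningful when each MVP use case is explicitly linked to the requirements it is meant to satisfy.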
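The defect-related KPIs (open issues by severity, triage and response time) can likewise be computed from a simple issue log. The following is a minimal sketch with an invented, hypothetical defect log; real numbers would be pulled from the issue tracker used on the programme:

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical defect log: (severity, opened, resolved-or-None)
defects = [
    ("blocker", datetime(2024, 5, 1, 9),  datetime(2024, 5, 1, 15)),
    ("major",   datetime(2024, 5, 2, 10), datetime(2024, 5, 4, 10)),
    ("minor",   datetime(2024, 5, 3, 11), None),  # still open
]

# KPI: number of open defects, broken down by severity
open_by_severity = Counter(sev for sev, _, done in defects if done is None)

# KPI: mean time to resolution over the defects already closed
durations = [done - opened for _, opened, done in defects if done is not None]
mean_ttr = sum(durations, timedelta()) / len(durations) if durations else None

print("open defects by severity:", dict(open_by_severity))
print("mean time to resolution:", mean_ttr)
```

Tracked over successive test cycles, a falling open-defect count per severity and a shrinking mean time to resolution are tangible signals that the solution is converging toward rollout readiness.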