Summary: I can apply analytical methods and effective measurement to drive achievement of key business goals.
Challenge: Support customers were complaining that their calls were not being answered in a timely manner, as measured against established contractual service-level agreements (SLAs). Understanding the problem was complicated by the sheer volume of calls and the large number of variables that defined the SLA for a particular customer and product. Engineering and Customer Support managers needed current, easily understood measures to monitor performance and head off failures, but devising a simple yet meaningful indicator to represent such a complex set of data was going to be a challenge.
Action: I analyzed the goals and variables established by our SLA contracts, along with the structure of our Call Center and Defect databases, and determined that together they contained all the raw data necessary to answer the essential questions. I worked with our IT department to develop queries to extract the raw call data from our CRM system, and combined that with data I extracted from our engineering defect database. To create a simple indicator, I devised a novel twist on traditional aging and distribution charts to represent SLA compliance classes (normalized relative to the applicable SLA for each call), producing a pair of simple charts that showed overall SLA compliance. I then added interactive drill-down to these charts, allowing managers to identify their own defects within the data. An added history chart supported monitoring of the impact of changes to response practices. Finally, I automated the extraction of the data and the creation of the charts, and made them available to stakeholders via email (scheduled) and the web (real-time).
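The normalization idea above can be sketched in a few lines of Python. This is an illustrative reconstruction only: the field names, thresholds, and class labels here are my assumptions, not the actual system's definitions. The key point is that dividing each call's age by its own SLA target puts calls with very different contractual targets on a single comparable scale.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Call:
    call_id: str
    hours_open: float   # how long the call has been waiting for a response
    sla_hours: float    # contractual response target for this customer/product

def compliance_class(call: Call) -> str:
    # Normalize age against the call's own SLA so calls with different
    # contractual targets share one scale; class names are hypothetical.
    ratio = call.hours_open / call.sla_hours
    if ratio <= 0.5:
        return "on-track"
    if ratio <= 1.0:
        return "at-risk"
    return "non-compliant"

def compliance_distribution(calls):
    # Counts per class feed a simple distribution chart.
    return Counter(compliance_class(c) for c in calls)

calls = [
    Call("C1", 2.0, 8.0),    # 25% of SLA consumed
    Call("C2", 6.0, 8.0),    # 75% of SLA consumed
    Call("C3", 30.0, 24.0),  # SLA already missed
]
dist = compliance_distribution(calls)
```

Charting the resulting counts (and the same ratios bucketed into an aging histogram) yields the pair of simple overview charts described above.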
Results: The easy-to-understand charts and drill-down capabilities enabled managers at all levels to visualize their goals and performance, and to quickly drive changes as needed to meet their commitments. Within four weeks of the introduction of the measures, the number of non-compliances dropped by 50%. Within a few months, the number had dropped to the point that failures to achieve the SLA became the rare, but managed, exception.
A further, far-reaching result was that this experience with effective measurement triggered a significant general shift toward greater reliance on measurement to inform decisions. A small suite of well-defined, complementary measures soon followed to address other important management goals.
Summary: I used active listening and quantitative methods to bring teams with diverse goals together to solve a cross-functional challenge.
Challenge: An aging and inadequate defect-tracking system was rapidly approaching the end of its support life, and IT was rushing to select and install a replacement. But engineering users felt their needs had not been heard: the new system needed to address some vexing problems and peculiar requirements driven by the long support life, and large number, of the company's products.
Action: I selected a representative sample of user roles, then interviewed them using Contextual Inquiry to develop a complete set of requirements. I then had users weight these requirements and was able to quickly build consensus on their ranking. Using these rankings, I conducted a trade study of available solutions and recommended the commercial tool best suited as the basis of the new system. With engineering delighted and a purchase under way, I was asked to stay on to develop a solution architecture using the new tool, then prototype and pilot the new system. When the system was rolled out, I traveled to the company's engineering sites worldwide and trained developers, managers, and support engineers in its use.
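A weighted trade study of this kind can be sketched as a simple scoring matrix. The requirement names, weights, and candidate scores below are purely hypothetical placeholders; the actual requirements and tools are not named here. The mechanism shown, multiplying each tool's per-requirement score by the user-assigned weight and ranking by the total, is the standard form of such a study.

```python
# Hypothetical consensus weights from the user ranking exercise
weights = {"traceability": 5, "long-lifecycle support": 4, "reporting": 3}

# Hypothetical 0-10 evaluation scores for two candidate tools
scores = {
    "Tool A": {"traceability": 8, "long-lifecycle support": 6, "reporting": 9},
    "Tool B": {"traceability": 9, "long-lifecycle support": 9, "reporting": 6},
}

def weighted_score(tool_scores, weights):
    # Sum of (requirement weight x tool's score on that requirement)
    return sum(weights[req] * tool_scores[req] for req in weights)

# Rank candidates from highest weighted total to lowest
ranked = sorted(scores, key=lambda t: weighted_score(scores[t], weights),
                reverse=True)
```

Because the weights come from the users themselves, the resulting ranking carries their consensus into the tool decision, which is what removes the subjective element from the final recommendation.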
Results: A cross-functional, quantitative approach to this problem eliminated the subjective element from a decision with long-term impact. The resulting defect-management system was a hit with users and managers, whose needs had finally been met. Engineering reduced the effort of handling defect reports by 75%, Engineering and Customer Support were finally able to comply with heretofore unattainable service-level agreements, and IT avoided a costly misstep. Reception of the new system by our engineers was enthusiastic; many expressed appreciation for a thoughtful solution and the hands-on manner in which it was rolled out. Managers quickly grew to love the system's quality-metrics capabilities, which gave them insights into our quality data they had never experienced.
Summary: I can manage improvement and software-development projects end to end.
Challenge: The daily build of a large software product required 12 hours to complete; combined with the subsequent automated regression tests (14 hours), developers were not notified of errors until two days after their changes, and they wasted time reconstructing the context of a two-day-old change. The problem was exacerbated by unrepeatable build results. I proposed a project to replace our build system and was selected to manage it. Our charter: build, verify, and demonstrate a more repeatable system that would enable 24-hour turnaround, and deploy it immediately following the coming release with no disruption to the development effort.
Action: I collected stakeholder requirements and planned the project in detail, selecting a six-person, multi-site team. In addition to my PM role, I served as technical lead and solution architect. My team and I developed and tested a solution that met or surpassed expectations. Builds were 100% repeatable, and build time dropped from 12 hours to 2. Our comparative testing found complete agreement with the existing system's results, except for the surprising discovery that the new system had re-enabled hundreds of intended features that had been inadvertently disabled by the old build.
Results: My team and I demonstrated the finished system and presented our results a month before the planned product release. News of the new system's correction of the undetected omissions spawned a lively discussion about switching to the new system prior to the upcoming release. While the closeness of the release made that too risky, any concerns about the upcoming switchover were completely eliminated.
Engineers were delighted to learn of errors in their code the next day, while the changes were still fresh; the time required to reconstruct context was cut to one third.
The solution I developed was leveraged to accelerate a complementary follow-on project to improve the performance of the automated regression tests. That effort reduced test time from 14 hours to 5; together, the two efforts cut build-test turnaround from 26 hours to just 7. The nightly build could start at 9 PM, and results (errors) would be in the respective developers' email when they arrived the following morning.