Case Study Part Three: The GigaTECH Software Development Strategy
Welcome to part three of our Case Study series. We are discussing the benefits of using FHIR to help a healthcare organization make decisions based on contract proposals. This post focuses on the GigaTECH software development methodology and our use of DevSecOps. In our introduction, we discussed how our Business Development team needed a new way of gathering and compiling contract information in a more user-friendly manner. The existing tool is burdensome to navigate, and entering and understanding its data is convoluted. Through our Human-Factors engineering process, we identified the pain points and created an acceptable prototype. Now it was time to start development. Throughout this post, we’ll highlight some key areas where our methodology makes an integral difference.
What is GAMe?
GAMe stands for GigaTECH Agile Methodology and is our software-delivery-centric implementation of GLEAM. There are six stages in the GAMe methodology, each with its own key benefits:
- Pre-Commit Stage: Early Detection of Issues, Reduced Cost of Fixing Issues, Better Code Quality, Consistent Security Hygiene, and Developer Efficiency.
- Source Stage: Security Assurance, Minimized Technical Debt, Increased Developer Confidence, Reduced Cost and Faster Remediation, Comprehensive Security Coverage.
- Build Stage: Enhanced Transparency and Traceability (SBOM), Improved Software Quality (Unit Testing and Code Coverage), Standardized Deployment Environment (Image Creation), Early Risk Mitigation.
- Test Stage: System-Wide Functionality (Integration Testing), Runtime Vulnerability Detection (DAST), Holistic Application Assurance, Reduced Deployment Risks, Continuous Improvement.
- Release Stage: Consistent and Controlled Deployments, Automated Environment Management (IaC), Improved Release-Cycle Efficiency, Seamless Rollback and Versioning, Enhanced Security and Compliance, Reduced Configuration Drift.
- Monitor Stage: Proactive Identification of Issues, Increased Application Reliability (APM), Continuous Feedback Loop for Improvement, Enhanced Security Posture, Real-Time Visibility and Informed Decision Making (Dashboards), Reduced Mean Time to Resolution (MTTR).
Pre-Commit and Source Stages
The first two stages, pre-commit and source, utilize many of the same tools and provide similar benefits. When developers begin writing software to implement the agreed-upon prototypes, we want to ensure we build quality code. Emphasizing early detection of issues allows us to deliver better code quality and significantly reduce the cost of fixing issues down the road. We also implement various security checks throughout the pre-commit phase, including scanning for vulnerabilities, insecure dependencies, hard-coded secrets, and misconfigurations. These scans improve our code security, drive our developers to minimize technical debt, and increase our developers’ efficiency. Implementing rigorous pre-commit scanning also increases our developers’ confidence in the code they build. Here are some of the scans we use and how we call them through our reusable workflows:
Linting using Super Linter:
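A minimal sketch of how a Super-Linter job could be wired into a GitHub Actions workflow. The action version, branch name, and triggers are illustrative assumptions rather than our exact reusable-workflow configuration:

```yaml
# Illustrative lint workflow; values are assumptions, not our exact setup.
name: lint

on:
  pull_request:
  push:
    branches: [main]

permissions:
  contents: read

jobs:
  super-linter:
    runs-on: ubuntu-latest
    steps:
      # Full history lets the linter diff only the files changed in this push/PR
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      # Run the open-source Super-Linter across the changed files
      - uses: super-linter/super-linter@v7
        env:
          VALIDATE_ALL_CODEBASE: false
          DEFAULT_BRANCH: main
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```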
Spell Check using CodeSpell:
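A minimal sketch of a spell-check job, assuming the community codespell-project action; the skip paths and ignore list below are placeholders:

```yaml
# Illustrative spell-check job; the skip paths and ignore list are placeholders.
jobs:
  spellcheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Scan source and documentation for common misspellings
      - uses: codespell-project/actions-codespell@v2
        with:
          check_filenames: true
          skip: "./node_modules,./dist"
          ignore_words_list: "fhir,hl7"
```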
Secret scan using Truffle Hog:
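A minimal sketch of a TruffleHog secret scan on pull requests, based on the publicly documented trufflesecurity/trufflehog action; the diff range and flags shown are illustrative:

```yaml
# Illustrative secret-scanning job; diff range and flags are illustrative.
jobs:
  secret-scan:
    runs-on: ubuntu-latest
    steps:
      # Full history so the scanner can walk the commit range
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      # Scan the diff between the default branch and the current commit for secrets
      - uses: trufflesecurity/trufflehog@main
        with:
          path: ./
          base: ${{ github.event.repository.default_branch }}
          head: HEAD
          extra_args: --only-verified
```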
Build and Test Stages
The next two stages are Build and Test. One of the most important artifacts of the Build stage is the Software Bill of Materials (SBOM). This is an inventory of the software components built for a project, including the open-source libraries and dependencies used. The transparency of the SBOM makes it easier to identify and mitigate vulnerabilities in third-party components and to manage all dependencies effectively. Unit testing and code coverage metrics are two more facets of the Build stage that aim to improve software quality. These allow us to stress-test individual components of an application and verify that the codebase is meeting high-quality benchmarks, which ensures quality code and reduces the occurrence of runtime errors. We also emphasize image creation throughout the Build stage, so that as we move through environments, each level is uniformly tested and the final production environment reflects a well-tested application.

Our Test stage focuses on system-wide functionality (integration testing) and runtime vulnerability detection (DAST). Integration testing ensures that different components of the system work together as expected; it surfaces issues not visible during unit testing and is vital to a smoother user experience and fewer errors in production. DAST stands for Dynamic Application Security Testing and focuses on security testing while the application is running, detecting vulnerabilities such as SQL injection, cross-site scripting (XSS), and insecure APIs. Combining all of these testing and security measures allows for holistic application assurance: we test individual components, how those components fit together, and how the application behaves from an external perspective. This methodology allows for early detection of issues and provides constant feedback to developers, who can continuously improve their code; the overall codebase benefits when these features run seamlessly together.
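Before moving on to the DAST example, here is a minimal sketch of generating an SBOM during the build job. We use the open-source anchore/sbom-action as a stand-in; the output format and artifact name are illustrative assumptions:

```yaml
# Illustrative SBOM step inside a build job; format and names are assumptions.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Generate a CycloneDX SBOM for the repository and attach it to the workflow run
      - uses: anchore/sbom-action@v0
        with:
          format: cyclonedx-json
          artifact-name: sbom.cyclonedx.json
```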
DAST scanning using ZAP Scanner:
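A minimal sketch of a ZAP baseline scan run against a deployed test environment; the target URL and action version are placeholders, not our actual endpoints:

```yaml
# Illustrative DAST job; the target URL and version tag are placeholders.
jobs:
  dast:
    runs-on: ubuntu-latest
    steps:
      # Passively scan the running application for common runtime vulnerabilities
      - uses: zaproxy/action-baseline@v0.12.0
        with:
          target: "https://qa.example.com"
          fail_action: true
```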
Release and Monitor Stages
The final two stages of GAMe are Release and Monitor. We release from an artifact repository, which ensures only approved and versioned artifacts are deployed to the different environments (QA, staging, production). Automation plays a large role in how GigaTECH navigates the Release stage. We utilize Infrastructure as Code (IaC) to manage the configuration and deployment of the infrastructure that supports each environment. IaC prevents configuration drift and limits the manual setup required for releases and general environment configuration. It also allows us to seamlessly roll an application back to a previous version if an issue arises in production; versioning provides a safety net that reduces potential end-user disruptions. Security and compliance are also enhanced by automation, since less potentially sensitive configuration is entered by hand. Having key infrastructure housed separately and referenced in pipelines keeps secrets and proprietary information out of the application codebase.
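As a rough illustration of how IaC fits into an automated release, the sketch below shows a Terraform apply step inside a GitHub Actions deployment job. The environment name, variable file, and secret are hypothetical placeholders, and the infrastructure code itself lives outside the application repository:

```yaml
# Illustrative IaC deployment job; environment, var file, and secret are hypothetical.
jobs:
  deploy-infra:
    runs-on: ubuntu-latest
    environment: production   # gated environment with required approvals
    steps:
      - uses: actions/checkout@v4

      # Install the Terraform CLI on the runner
      - uses: hashicorp/setup-terraform@v3

      # Initialize the remote state backend and apply the versioned configuration
      - run: terraform init
      - run: terraform apply -auto-approve -var-file=environments/production.tfvars
        env:
          TF_TOKEN_app_terraform_io: ${{ secrets.TF_API_TOKEN }}   # hypothetical secret
```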
Once released, our applications enter the Monitor stage, where we track the health, performance, and security of an application after it has been deployed. We focus on metric collection, application performance monitoring (APM), dashboards, log aggregation, and alerting to provide a comprehensive view of the system in operation. These tools and strategies allow us to proactively identify issues and deficiencies, gain real-time visibility into application performance, and collect insights for future development iterations (bug fixing, optimizing performance, or hardening security). Alerting mechanisms flag anomalies as they are detected and reduce the mean time to resolution (MTTR) required to diagnose and fix issues. We’ve begun using Atlassian Compass to monitor and track application and general code quality. Once the GitHub Actions workflows described above run their checks, we send the resulting data to Compass. We have metric percentage thresholds set up in Compass that we expect our code to meet. Compass gives us the ability to manage our code, confirm we are meeting those thresholds, and see where deficiencies lie so we can address issues in a timely manner.
Code Metric Quality information being sent to Compass:
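The sketch below shows the general shape of such a step: a workflow job posts a code-quality value (here, a coverage percentage) to a Compass metric source. The endpoint, payload fields, and secret names are placeholders; the real Compass metrics API and our metric-source IDs are configured outside this snippet:

```yaml
# Illustrative job only; endpoint, payload, and secrets are placeholders.
jobs:
  publish-metrics:
    runs-on: ubuntu-latest
    steps:
      - name: Publish code coverage to Compass
        env:
          COMPASS_METRICS_URL: ${{ secrets.COMPASS_METRICS_URL }}     # placeholder
          ATLASSIAN_EMAIL: ${{ secrets.ATLASSIAN_EMAIL }}             # placeholder
          ATLASSIAN_API_TOKEN: ${{ secrets.ATLASSIAN_API_TOKEN }}     # placeholder
          METRIC_SOURCE_ID: ${{ secrets.COMPASS_METRIC_SOURCE_ID }}   # placeholder
          COVERAGE_PERCENT: "87.5"                                    # example value from the coverage report
        run: |
          # POST the metric value to the Compass metric source
          curl --request POST \
            --url "$COMPASS_METRICS_URL" \
            --user "$ATLASSIAN_EMAIL:$ATLASSIAN_API_TOKEN" \
            --header "Content-Type: application/json" \
            --data "{\"metricSourceId\": \"$METRIC_SOURCE_ID\", \"value\": $COVERAGE_PERCENT}"
```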
Combining DevSecOps with Product Thinking and HCD
Looking at our GLEAM and GAMe methodologies, early and continuous feedback is of the utmost importance. We actively seek out information and strive to ask all of the questions necessary to build an accurate prototype of an application before development starts. This leads to faster development, as blockers and pain points have been continually addressed and planned for in the early phases. Once development begins, we rely on automation within every stage to provide continuous codebase feedback on testing, compliance, and security. This feedback loop speaks directly to our developers, increasing their confidence and strengthening the codebase. We apply a shift-left mentality for security, increasing our coverage and scanning so that a production-level application has been rigorously tested before deployment.
GAMe and Other Methodology Synchronization
Our GAMe methodology aligns with other industry-standard methodologies. Specifically, we are in sync with the 12-Factor App methodology and several core principles of the US Digital Services Playbook. Our GAMe and software development methodology aligns with the following plays: Address the Whole Experience, From Start to Finish (play #2), Automate Testing and Deployments (play #10), Use Data to Drive Decisions (play #12), and Default to Open (play #13). With our GAMe methodology, we approached our Business Development team’s problem in a way that incorporates these plays: we asked questions to understand what the team needed in a solution, built a DevSecOps pipeline that automates both testing and deployments to ensure code is vetted, secure, and usable, incorporated existing open-source tools where possible, and implemented monitoring of metrics, logs, and user interactions to continuously improve.
GigaTECH Specializes In FHIR Applications
We specialize in working with FHIR and SMART on FHIR applications, creating functional, concise, and easy-to-use applications. These range from tasks as small as triaging service tickets for customers to importing bulk FHIR data from insurance databases for use by hospitals and clinicians seeking a holistic view of integrated health information to support clinical decisions.