Assessment of Data-Related Risks in Geotechnical Monitoring Programs
Ali Siamaki
In this article, we present an assessment of the often-overlooked data-related risks inherent in modern geotechnical instrumentation and monitoring (I&M) programs. While advances in sensor technology, telemetry, and data visualization platforms offer unprecedented insight into ground and structural behavior, they also introduce subtle but significant risks. If not properly managed, these risks can compromise project safety, inflate costs, and lead to flawed engineering decisions that expose our projects and profession to liability.
The central thesis of this assessment is unequivocal: effective risk mitigation is not a function of data volume. It is the direct result of professional discipline across the entire data lifecycle, from strategic sensor placement and an understanding of inherent system limitations to rigorous data interpretation and validation. A monitoring program's value is derived from the quality of its insights, not the volume of its readings.
To understand these data-centric risks, this assessment begins with the most fundamental and frequently misunderstood system limitation: the latency inherent in any monitoring system and the pervasive myth of "real-time" data.
Risk Category 1: Inherent System Latency and the "Myth of Real-Time"
A strategic understanding of data latency is critical to managing project risk. The common industry term "real-time" creates false expectations among stakeholders, leading to a critical misunderstanding of a monitoring system's true response capabilities. No system is instantaneous. The journey from a physical event in the field to a data point on a dashboard involves a chain of processes, each introducing a delay.
The data transmission chain can be broken down into the following steps, each contributing to total system latency (a simple latency-budget sketch follows the list):
Sensor to Datalogger: The physical sensor measures a parameter. This reading must then be captured by a local datalogger, a process that can take seconds to minutes depending on the measurement frequency.
Datalogger to Gateway: The datalogger transmits its reading to a central gateway, often via a low-power wireless link such as LoRa or a mesh radio network. Network interference, data packet retries, and protocol-driven duty cycles add to the delay.
Gateway Aggregation: A gateway typically services dozens of dataloggers, cycling through them sequentially to collect data. This "network scan" introduces another layer of latency before all information is aggregated for transmission.
Gateway to Platform: The aggregated data packet is sent from the site gateway to the cloud platform via cellular, satellite, or wired internet. The quality and speed of this backhaul connection directly impact transmission time.
Platform Processing and Visualization: Once received by the central server, the raw data must be ingested, cleaned, and processed by visualization engines before it can be displayed on a dashboard. This computational step adds a final delay.
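To make the cumulative effect of this chain concrete, the short sketch below adds up illustrative delays for each stage; every duration is a hypothetical placeholder, not a measured or vendor-quoted figure.

```python
# Illustrative latency budget for the chain described above.
# All stage durations are hypothetical placeholders, not vendor specifications.

STAGE_LATENCY_S = {
    "sensor_to_datalogger": 60,    # logger sampling interval
    "datalogger_to_gateway": 30,   # LoRa duty cycle, packet retries
    "gateway_aggregation": 120,    # sequential network scan
    "gateway_to_platform": 45,     # cellular/satellite backhaul
    "platform_processing": 90,     # ingestion, cleaning, rendering
}

total_latency_s = sum(STAGE_LATENCY_S.values())
print(f"Total system latency: ~{total_latency_s / 60:.1f} minutes")
# -> Total system latency: ~5.8 minutes

# Any event shorter than this total (e.g., a rapid slope collapse)
# can complete before it ever appears on a dashboard.
```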
The primary consequence of this cumulative latency is that the data displayed on any monitoring dashboard represents the recent past, not the present moment.
This reality has a direct impact on project safety. For slow-moving phenomena, such as consolidation settlement, receiving data at hourly intervals is more than sufficient to identify trends and act appropriately. However, for sudden failures, such as a rapid slope collapse, the event can progress faster than any monitoring system can detect, transmit, process, and report it. Relying on such systems for instantaneous alerts in these scenarios is a dangerous misconception.
Having established the risks associated with when data arrives, we now turn to the risks associated with what data is collected and from where—risks which can either be amplified by latency or, in the case of a data gap, render the latency discussion moot.
Risk Category 2: Flawed Monitoring Program Design and Installation
The most critical data-related risks are often introduced long before a single data point is collected. They are embedded in the monitoring program's initial design and subsequent installation, stemming from a flawed focus on instrument quantity over strategic placement and absolute installation integrity. An improperly designed or installed system is not just ineffective; it is actively misleading and introduces unacceptable professional liability.
1- Risk: Misplaced Instruments and Misleading Data
The "garbage in, garbage out" principle is acutely relevant in geotechnical monitoring. When an instrument is misplaced, it will record data that is technically accurate for its physical location, yet completely misleading for the critical parameter it was intended to capture. This is not just bad data; it is plausible-looking bad data that actively deceives the project team.
Inclinometer: A casing installed just outside a critical shear zone will report stability, completely missing the deformation it was intended to detect.
Piezometer: A filter tip set in an unintended, non-representative soil layer due to an installation error will provide pore pressure data that is irrelevant to the stability of the target stratum.
Extensometer: An anchor that is not securely fixed in stable ground below the moving zone will produce settlement readings that either mask true displacement or indicate movement where none exists.
The severe consequences of this risk are twofold: it can create a false sense of security by failing to detect a developing issue, or it can trigger unnecessary and costly alarm responses based on erroneous data.
2- Risk: Data Overload and Dilution of Critical Insight
An excessive number of sensors installed in non-essential areas can be as detrimental as too few in critical ones. This "scattergun approach" floods project teams with a deluge of irrelevant data, creating several negative impacts. It needlessly taxes data acquisition systems, complicates processing, and, most importantly, diverts finite engineering attention away from the handful of instruments that are generating genuinely critical signals. The signs of impending failure can become buried in a mountain of insignificant readings.
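One practical defence against this dilution is automated triage that surfaces only instruments in designated critical zones whose rate of change warrants attention. The sketch below illustrates the idea; the instrument IDs, zone membership, and threshold are assumptions for demonstration only.

```python
# Sketch of critical-signal triage: surface only the critical-zone
# instruments whose rate of change exceeds a review threshold, rather
# than reviewing every reading. Names and thresholds are assumptions.

CRITICAL_ZONE = {"INC-02", "PZ-07", "EXT-01"}

def triage(latest_rates, rate_threshold=1.0):
    """Return critical-zone instruments whose change rate demands review."""
    return {
        sensor: rate for sensor, rate in latest_rates.items()
        if sensor in CRITICAL_ZONE and abs(rate) > rate_threshold
    }

print(triage({"INC-02": 2.4, "PZ-07": 0.2, "TMP-99": 9.9}))
# -> {'INC-02': 2.4}: the noisy TMP-99 reading never reaches the review queue
```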
These risks of poor data collection lead directly to the challenges inherent in interpreting the data that is successfully collected.
Risk Category 3: Data Interpretation and Signal Integrity
The shift from manual readings to high-frequency, automated data collection has fundamentally changed the nature of monitoring. While providing a continuous stream of information, it has also reduced the practical signal-to-noise ratio. Engineers must therefore actively distinguish the true geotechnical "signal" from environmental and system "noise"; failure to do so invites a critical misdiagnosis of site conditions.
1- Risk: Environmental and System Noise Obscuring True Signals
Many factors unrelated to geotechnical performance can influence instrument readings. Failure to identify and filter out this noise can lead to significant misinterpretation and flawed decision-making.
Table 1: Common Sources of Geotechnical Noise
| Noise Source | Description & Example |
|---|---|
| Thermoelastic Effects | Temperature fluctuations cause cyclical expansion and contraction in structures and instruments. This can manifest as apparent, non-geotechnical movement in automated motorized total station (AMTS) data over diurnal cycles, or as a significant apparent load change, such as a 65 kN increase in a ground anchor load cell from November to February driven by seasonal effects. |
| Barometric Pressure | Vibrating wire piezometers are sensitive to atmospheric pressure changes. A 1 kPa change in barometric pressure translates to approximately 10 cm of water head, potentially masking true pore pressure trends if the data is not compensated (a compensation sketch follows this table). |
| Construction Activity | Nearby work, such as piling or heavy traffic, can generate transient, vibration-induced readings. These spikes are noise relative to the assessment of long-term stability and must be identified and excluded from trend analysis. |
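The barometric compensation noted in Table 1 can be applied in a few lines once a co-located barometer record is available. The sketch below is a minimal illustration for an unvented vibrating wire piezometer, using the approximate 1 kPa ≈ 10.2 cm conversion; the readings and baseline are invented for the example.

```python
# Minimal sketch of barometric compensation for unvented vibrating wire
# piezometer readings, assuming the head and barometer series share
# timestamps. The 0.102 m/kPa factor converts pressure to water head
# (1 kPa ~= 10.2 cm of water column).

KPA_TO_M_HEAD = 0.102

def compensate_head(raw_head_m, baro_kpa, baro_baseline_kpa):
    """Remove barometric influence from a raw piezometric head series."""
    return [
        h - (p - baro_baseline_kpa) * KPA_TO_M_HEAD
        for h, p in zip(raw_head_m, baro_kpa)
    ]

# Example: a 1 kPa barometric rise shows up as ~10 cm of apparent head.
raw = [12.00, 12.10, 12.11]
baro = [101.3, 102.3, 102.4]
print(compensate_head(raw, baro, baro_baseline_kpa=101.3))
# -> [12.0, ~11.998, ~11.998]: the apparent rise was atmospheric, not geotechnical.
```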
2- Risk: Failure to Validate Data Through Correlation
Data correlation across different instrument types is the most powerful tool for filtering noise and validating a signal. A true geotechnical signal should manifest across multiple, disparate instruments measuring related physical parameters. A reading from a single sensor, taken in isolation, carries an unacceptably high degree of uncertainty.
For example, a true increase in pore pressure recorded by a piezometer should be accompanied by a corresponding decrease in effective stress inferred from a nearby total pressure cell, and potentially by an increase in displacement measured by an extensometer.
The consequence of failing to perform this correlation is severe. An unvalidated reading, such as apparent movement from an AMTS that is not confirmed by a corresponding subsurface reading from an inclinometer, is likely noise. Acting upon such an uncorroborated signal can lead to significant errors in engineering judgment, unnecessary interventions, and erosion of stakeholder confidence.
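A minimal version of such a correlation check can be automated. The sketch below, with invented thresholds and series, treats a pore-pressure rise as a genuine signal only when a co-located extensometer shows consistent movement over the same window; a real protocol would use properly time-aligned, compensated data.

```python
# Hedged sketch of cross-instrument validation, assuming two co-located
# instruments sampled over the same window. Thresholds and the trend
# test are illustrative assumptions, not a published standard.

def linear_trend(values):
    """Least-squares slope, in series units per reading interval."""
    n = len(values)
    mean_x, mean_y = (n - 1) / 2, sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def corroborated(piezo_head_m, extenso_disp_mm,
                 head_change_thresh_m=0.05, disp_change_thresh_mm=0.5):
    """Treat a pore-pressure rise as real only if the ground is also
    measurably moving over the same window."""
    head_change = linear_trend(piezo_head_m) * (len(piezo_head_m) - 1)
    disp_change = linear_trend(extenso_disp_mm) * (len(extenso_disp_mm) - 1)
    return head_change > head_change_thresh_m and disp_change > disp_change_thresh_mm

print(corroborated([10.00, 10.04, 10.09], [0.0, 0.4, 0.9]))
# -> True: both instruments agree the change is real
```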
Beyond the risks of misinterpreting the data one has, there is the equally critical risk of losing that data entirely due to system failures.
Risk Category 4: System Reliability and Single-Point Failures
Reliance on a single sensor or a single data stream, no matter how precise, creates a high-risk system that is brittle to common field failures. This lack of redundancy can leave a project blind at the most critical moments, transforming the monitoring program from a risk mitigation tool into a source of unmanaged liability.
1- Risk: Catastrophic Data Gaps
Instrumentation in active construction environments is vulnerable to a range of potential failures. These can be caused by:
Cable cuts from excavation or other activities
Power loss to dataloggers or gateways
Sensor drift due to aging or environmental factors
Lightning strikes causing electrical surges
Physical damage from crane movement, vehicle impacts, or vandalism
Internal component failure within the sensor or datalogger
In a critical monitoring zone, such as an area adjacent to a sensitive structure, a data gap is functionally equivalent to an uncontrolled risk. It can also represent a direct violation of regulatory compliance requirements that mandate continuous monitoring.
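Many of these failure modes first appear as silence. A simple watchdog that compares each instrument's last-received timestamp against its expected reporting interval can convert an invisible data gap into an explicit alert; the sketch below uses hypothetical instrument names and intervals.

```python
# Sketch of a data-gap watchdog, assuming each instrument has a known
# expected reporting interval. Names and intervals are illustrative.
from datetime import datetime, timedelta, timezone

EXPECTED_INTERVAL = {
    "PZ-01": timedelta(minutes=30),   # vibrating wire piezometer
    "IPI-04": timedelta(hours=1),     # in-place inclinometer string
}

def silent_instruments(last_seen, now=None, tolerance=2.0):
    """Flag instruments quiet for more than tolerance x expected interval."""
    now = now or datetime.now(timezone.utc)
    return [
        sensor for sensor, ts in last_seen.items()
        if now - ts > EXPECTED_INTERVAL[sensor] * tolerance
    ]
```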
2- Risk: Unvalidated Alarms and False Positives
A single abnormal reading from a non-redundant system presents a high-consequence dilemma. For instance, if an AMTS system registers a 5 mm/day settlement trend on a critical prism, a project team may be forced to act. Without corroborating data from an in-place inclinometer or a deep extensometer to confirm the movement is real and not an instrument anomaly, the prudent (and costly) response may be an immediate stop-work order or the mobilization of unnecessary stabilization measures.
In this scenario, the inability to cross-reference an anomaly transforms raw data from a potential insight into a significant financial and operational liability.
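This dilemma can be reduced to an explicit escalation rule: a single-instrument anomaly is routed to instrument verification, while only a corroborated anomaly triggers the response plan. The sketch below encodes that gate with illustrative rates and thresholds; none of the values reflect a real project.

```python
# Sketch of an alarm-escalation gate: a single-instrument anomaly is
# logged for review, but only a corroborated anomaly triggers action.
# Rates, instrument names, and thresholds are illustrative assumptions.

def classify_alarm(amts_rate_mm_day, ipi_rate_mm_day,
                   action_threshold=3.0, corroboration_threshold=1.0):
    """Escalate only when a subsurface instrument confirms surface movement."""
    if amts_rate_mm_day < action_threshold:
        return "normal"
    if ipi_rate_mm_day >= corroboration_threshold:
        return "escalate: corroborated movement, initiate response plan"
    return "review: uncorroborated AMTS anomaly, verify instrument first"

print(classify_alarm(amts_rate_mm_day=5.0, ipi_rate_mm_day=0.1))
# -> review path: inspect the prism and AMTS before stopping work
```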
These individual risks do not exist in isolation. Their combined consequences can severely impact a project's core objectives of safety, cost-effectiveness, and sound decision-making.
Synthesis of Consequences on Project Objectives
The data-related risks identified in the preceding sections translate directly into tangible impacts on project safety, cost, and the quality of engineering decisions. This section synthesizes those consequences to provide stakeholders with a clear, consolidated understanding of what is at stake when these risks are not actively managed.
Table 2: Risk Categories and Their Project Consequences
| Risk Category | Potential Consequences | Impacted Objective |
|---|---|---|
| System Latency | False belief in instantaneous alerts; failure to detect rapid events; unrealistic stakeholder expectations. | Safety, Decision-Making |
| Flawed Design | Misleading data creating a false sense of security; critical signals buried in data overload; unnecessary alarms triggering costly responses. | Safety, Cost, Decision-Making |
| Poor Interpretation | Misinterpreting thermal noise as structural load; alarm fatigue causing real events to be missed; failure to identify accelerating trends indicative of instability. | Safety, Cost, Decision-Making |
| Single-Point Failures | Data gaps during critical construction phases; inability to validate anomalies, leading to costly stop-work orders based on false positives; loss of regulatory compliance. | Safety, Cost, Decision-Making |
Having defined the problems and their consequences, the final section of this assessment outlines a clear framework for proactive mitigation.
Recommendations for Risk Mitigation
Effective mitigation of data-related risks requires a strategic, multi-layered approach. This framework prioritizes engineering judgment, program design rigor, and system resilience over a purely technology-centric view that equates more data with better outcomes. The following directives must be implemented across all projects.
Set Realistic Expectations Regarding Latency: Mandate clear communication from all engineers and project managers to all stakeholders regarding the inherent delays in any monitoring system. Monitoring regimes must be designed to match the anticipated rate of change of the parameter being observed, not the marketing claims of "real-time" feeds.
Mandate a Precision-Driven Design Paradigm Over Instrument Quantity: The design process must prioritize the strategic placement of fewer, high-quality instruments in the most critical zones of influence. This precision approach is superior to a "scattergun" deployment. This focus must be paired with an absolute commitment to flawless, quality-controlled installation to ensure data integrity from the outset.
Implement Systematic Noise Filtering and Data Correlation: Require that all data interpretation protocols include systematic techniques to account for and filter out environmental noise, particularly effects from temperature and barometric pressure. Critically, mandate that no significant engineering decision be made based on a single, uncorroborated data stream from one instrument.
Deploy Risk-Based Redundancy: Divide project sites into risk zones (High, Medium, Low). In High-Risk Zones—those adjacent to critical assets or with known geological hazards—deploy multi-layered instrumentation. Pairing different instrument types (e.g., surface-based AMTS with subsurface IPIs) ensures data validation and continuity where it matters most, without incurring the excessive cost of universal redundancy.
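In practice, this last recommendation can be captured as a simple redundancy plan keyed to risk zone. The mapping below is an illustrative assumption of how zones might be paired with instrument layers so that cross-validation is guaranteed where it matters most.

```python
# Illustrative risk-zone redundancy map pairing instrument layers by
# zone. Zone names and pairings are assumptions for demonstration only.

REDUNDANCY_PLAN = {
    "high": {
        "surface": ["AMTS prisms"],
        "subsurface": ["in-place inclinometers (IPI)", "extensometers"],
        "pore_pressure": ["duplicated VW piezometers"],
    },
    "medium": {
        "surface": ["AMTS prisms"],
        "subsurface": ["manual inclinometer casing"],
    },
    "low": {
        "surface": ["periodic survey points"],
    },
}

def validation_possible(zone):
    """A zone supports cross-validation if it pairs surface and subsurface data."""
    layers = REDUNDANCY_PLAN.get(zone, {})
    return "surface" in layers and "subsurface" in layers

print(validation_possible("high"))   # True: anomalies can be corroborated
print(validation_possible("low"))    # False: accepted residual risk
```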
These recommendations form the basis of a resilient monitoring strategy that acknowledges and manages inherent data-related risks.
The Primacy of Professional Judgment
Geotechnical monitoring data will never be truly real-time, its collection systems will never be perfectly installed in all conditions, and its signals will never be entirely free from noise. The true value and safety of a monitoring program, therefore, do not lie in the sensors themselves. They reside in the capabilities of the experienced professional who understands the system's limitations, demands rigor in its design and execution, and applies contextual engineering judgment to translate validated data into safe, timely, and actionable decisions. Technology provides the data, but professional judgment provides the wisdom.