Can new technology help predict earthquakes accurately?

Researchers at Northwestern University have devised a new earthquake model to tackle seismology’s central challenge: accurately predicting the next big earthquake on a fault. The study was recently published in the ‘Bulletin of the Seismological Society of America.’

Earthquakes are caused by the movement of tectonic plates or by volcanic activity. Higher-magnitude earthquakes can inflict catastrophic damage to property and loss of life. Seismologists study earthquake causes and patterns to forecast future earthquakes, but pinpointing the precise timing and location of an earthquake remains a significant challenge.

The traditional framework for earthquake prediction assumes that large earthquakes on a fault occur at regular intervals, with roughly the same amount of time elapsing between successive events. In practice, however, earthquakes often arrive sooner or later than such a schedule would suggest.

To improve on this, seismologists and statisticians at Northwestern University have developed a new model that is more comprehensive and realistic than the previous framework. Instead of considering only the average time interval between past earthquakes, the new model accounts for the specific order and timing of previous earthquakes. It is designed to explain why earthquakes sometimes come in clusters, with relatively short intervals between them, separated by long quiet periods without earthquakes.

One of the authors of the study, James Neely, said, “Earthquakes behave like an unreliable bus. The bus might be scheduled to arrive every 30 minutes, but sometimes it’s very late, and other times it’s too early. Seismologists have assumed that even when a quake is late, the next one is no more likely to arrive early. Instead, in our model, if it’s late, it’s more likely to come soon. And the later the bus is, the sooner the next one will come after it.”

The old prediction model assumes that steady motion across the fault gradually builds up strain that culminates in an earthquake. This implies a regular cycle in which each earthquake depends only on the one before it, ignoring the influence of all earlier earthquakes. Yet “large earthquakes don’t occur like clockwork,” Neely states. “Sometimes we see several large earthquakes occur over relatively short time frames and then long periods when nothing happens. The traditional models can’t handle this behaviour,” he adds.

In comparison, the new model treats a fault’s current state as the cumulative product of all its past earthquakes, rather than assessing it solely through the lens of the most recent earthquake and its after-effects. In a nutshell, the new model draws on the full earthquake history and assumes that faults have long-term memory, as opposed to the short-term memory previously assumed.

Another researcher of the study, Leah Salditch, said, “Earthquake clusters imply that faults have long-term memory. If it’s been a long time since a large earthquake, then even after another happens, the fault’s ‘memory’ sometimes isn’t erased by the earthquake, leaving left-over strain and an increased chance of having another. Our model calculates earthquake probabilities this way.”
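
The published paper gives the model’s full probabilistic form; purely as a rough illustration of the left-over-strain idea in Salditch’s description, here is a minimal Python sketch. Every name and parameter value below is an invented assumption for this illustration, not the study’s actual formulation:

```python
import random

# Minimal sketch of the long-term fault memory idea, assuming:
#   - strain accumulates at a constant rate,
#   - the annual quake probability grows with accumulated strain,
#   - each quake releases a fixed amount of strain, so any surplus
#     ("left-over strain") carries into the next cycle.

STRAIN_RATE = 1.0   # strain units accumulated per year (illustrative)
RELEASE = 135.0     # strain released by one quake (~135-year average cycle)
HAZARD = 5e-5       # converts accumulated strain into an annual probability

def simulate(years=100_000, memory=True, seed=42):
    """Return the years in which simulated earthquakes occur."""
    rng = random.Random(seed)
    strain, quakes = 0.0, []
    for year in range(years):
        strain += STRAIN_RATE
        if rng.random() < HAZARD * max(strain, 0.0):
            quakes.append(year)
            # memory=True keeps the left-over strain; memory=False mimics
            # the traditional assumption that each quake resets the fault.
            strain = strain - RELEASE if memory else 0.0
    return quakes

def gaps(quake_years):
    return [b - a for a, b in zip(quake_years, quake_years[1:])]

print("gaps with long-term memory:", gaps(simulate(memory=True))[:8])
print("gaps with reset each quake:", gaps(simulate(memory=False))[:8])
```

Running both variants, the memory version tends to produce clustered gaps (a few short intervals following a long one), while the reset version yields intervals with no such pattern, which is the contrast the researchers describe.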

For instance, although the Mojave segment of the San Andreas fault is rocked by a large earthquake every 135 years on average, the most recent one struck in 1857, only 45 years after one in 1812. That short gap defies the old model, but fits the new one: because the 1812 earthquake came after a 304-year gap since the previous event in 1508, the left-over strain produced a quake sooner than the average interval would predict.

One of the study contributors, Bruce Spencer, a professor of statistics, said, “It makes sense that the specific order and timing of past earthquakes matters. Many systems’ behaviour depends on their history over a long time. For example, your risk of spraining an ankle depends not just on the last sprain you had, but also on previous ones.”

AI for earthquake prediction

According to Hashem Al-Ghaili, a filmmaker and science communicator based in Germany, scientists are on the brink of devising a new technology that could predict earthquakes up to 48 hours before they occur, citing a recent study. Al-Ghaili released a video on LinkedIn explaining the developments.

The researchers analysed probable precursors to several catastrophic earthquakes that occurred over the previous 20 years. They began by studying changes in the ionosphere, a layer of the Earth’s upper atmosphere stretching from about 80 km to 600 km above sea level, where solar X-rays and extreme ultraviolet (EUV) radiation ionize atoms and molecules to create a layer of free electrons. It is the sliver of atmosphere that borders the vacuum of space.

To calculate the electron density in the ionosphere, the researchers used machine learning algorithms together with GPS data. They claim the new technique is effective for monitoring the ionosphere’s state and can make predictions with an 80 per cent accuracy rate.
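
The article does not specify how electron content is derived from GPS signals. One standard technique, sketched below purely as an illustration, exploits the fact that the ionosphere delays the two GPS carrier frequencies by different amounts, so the delay difference reveals the total electron content (TEC) along the signal path. The pseudorange values in the example are hypothetical:

```python
# Illustrative sketch only: the article does not describe the researchers'
# method. This shows the standard dual-frequency GPS technique for
# estimating ionospheric total electron content (TEC).

F1 = 1575.42e6   # GPS L1 carrier frequency, Hz
F2 = 1227.60e6   # GPS L2 carrier frequency, Hz
K = 40.3         # ionospheric refraction constant (SI units)

def slant_tec(p1_m: float, p2_m: float) -> float:
    """TEC along the satellite-receiver path, in TECU (1e16 electrons/m^2).

    The ionosphere delays lower frequencies more, so the measured range
    difference (p2 - p1) between the two carriers is proportional to the
    electron content the signals passed through.
    """
    tec = (F1**2 * F2**2) / (K * (F1**2 - F2**2)) * (p2_m - p1_m)
    return tec / 1e16

# Hypothetical pseudoranges differing by ~3 m of ionospheric delay:
print(f"{slant_tec(20_000_000.0, 20_000_003.0):.1f} TECU")  # ≈ 28.6 TECU
```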

According to seismologists, current remote sensing tech has become valuable for detecting early warning signs.

This would be a breakthrough: scientists have never successfully predicted a specific earthquake, and until now could only estimate the probability of an earthquake occurring over a period of time.
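
For context, such a probability over a period is typically computed with a standard Poisson assumption, as in the short sketch below; the one-quake-per-135-years rate is borrowed from the Mojave example above purely for illustration:

```python
import math

def quake_probability(annual_rate: float, years: float) -> float:
    """Chance of at least one quake in the window, assuming quakes
    arrive independently at a constant average rate."""
    return 1.0 - math.exp(-annual_rate * years)

# A fault averaging one large quake per 135 years, over the next 30 years:
print(f"{quake_probability(1 / 135, 30):.0%}")  # ~20%
```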

However, the technology remains controversial: earthquake precursors have never been precisely defined, since natural events associated with earthquakes can occur across the lithosphere, troposphere, and ionosphere.

