Earthquake science, by nature, isn’t a lofty endeavor, but two recent experiments are doing even more to bring seismology to folks on the ground. Thanks to software, social media, and computer sensors, the next team of expert earthquake monitors could just as easily be a cadre of kids as a squad of scientists.

Among the latest in do-it-yourself seismology is the Quake-Catcher Network, a Stanford–University of California Riverside project that links computers to create a web of constant monitoring. This isn’t a swarm of supercomputers dedicated to crunching temblor data; instead, the network’s computers belong to volunteers, from mom-and-pop laptops to classroom desktops.

The computers use sensors called accelerometers to detect shaking; many laptops already have them built in, and desktops can be easily outfitted with external models that cost about $50, according to the Los Angeles Times. When shaking is reported by multiple computers in a concentrated area, the data is uploaded to a central computer system, giving an idea of the size, scope, and direction of the quake. The personal computers would act in much the same way as conventional seismometers, which are more expensive and harder to install, according to the article.
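
To make that aggregation step concrete, here is a minimal Python sketch of how reports from volunteer machines might be pooled to flag a likely quake. It is not the Quake-Catcher Network’s actual code; the report format, the thresholds, and the detect_quake function are all assumptions made for illustration.

```python
from collections import namedtuple

# One shaking report from a volunteer machine: where it is, when the
# accelerometer tripped, and how strong the shaking was.
Report = namedtuple("Report", "lat lon time accel")

def detect_quake(reports, window_s=10.0, radius_deg=0.5, min_reports=5):
    """Flag a likely quake when enough machines in one area shake at once.

    Illustrative sketch only; the time window, search radius, and
    report-count threshold are invented for the example.
    """
    events = []
    for anchor in reports:
        nearby = [r for r in reports
                  if abs(r.time - anchor.time) <= window_s
                  and abs(r.lat - anchor.lat) <= radius_deg
                  and abs(r.lon - anchor.lon) <= radius_deg]
        if len(nearby) >= min_reports:
            # Rough event summary: average location and peak shaking.
            lat = sum(r.lat for r in nearby) / len(nearby)
            lon = sum(r.lon for r in nearby) / len(nearby)
            events.append((anchor.time, lat, lon, max(r.accel for r in nearby)))
    return events
```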

“Ideally,” project lead Elizabeth Cochran told the Times, “we would have seismometers in every building, or at least on every block. And in tall buildings, we'd have multiple sensors [on different floors]. That way, we would be able to actually get much higher detail…images of how the ground shakes during an earthquake.”

The more people join the network, the more accurate it becomes. The thinking is that a full-fledged network would be able to provide some degree of early warning, possibly allowing utilities and mass transit to be shut down before severe shaking hits, according to the article.

Meanwhile, in Colorado, a geophysicist at the U.S. Geological Survey is tapping into a much more established network for early quake info—the Twitter stream.

Paul Earle, director at the National Earthquake Information Center in Golden, has just released the “1st-ever Government case study on Twitter 101,” according to the USGS Twitter feed. A wordier account of Earle’s study, “OMG Earthquake! Can Twitter Improve Earthquake Response?,” in the Christian Science Monitor details how the USGS team was able to get on-the-ground information from tweets about two minutes before the more formal Did You Feel It? reporting system did.

The study examined tweet data from a March 30, 2009, earthquake near Morgan Hill, California. The first tweet (“omfg, earthquake”) about the magnitude 4.3 quake came 19 seconds after the shaking began, according to the study. “The potential response time for a Twitter-based earthquake detector is impressive,” the study states. “By running a simple automatic algorithm, the Morgan Hill earthquake could have been detected in under a minute.”
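
The study confirms that such an algorithm exists but not how it works, so the Python sketch below is only one plausible reading: a detector that fires when tweets containing the word “earthquake” suddenly surge above the background rate. The window length and threshold are assumptions.

```python
from collections import deque

def detect_bursts(tweet_times, window_s=60.0, threshold=20):
    """Yield detection times when 'earthquake' tweets surge within a window.

    Illustrative only: the study mentions a 'simple automatic algorithm',
    but this window and threshold are assumptions for the sketch.
    """
    recent = deque()
    for t in tweet_times:                 # seconds, in chronological order
        recent.append(t)
        while recent and t - recent[0] > window_s:
            recent.popleft()              # drop tweets outside the window
        if len(recent) >= threshold:
            yield t                       # enough chatter to flag an event
            recent.clear()                # reset so one quake fires once
```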

The study did identify problems with using Twitter data, including the duplication of reports via “retweets” and the inability to verify a tweeter’s geographic location, according to the CS Monitor article. Some of those issues have been addressed by Twitter advances since the study was completed. Even so, Earle said, the Twitter model is still useful as a way to get additional information quickly.
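
As a rough illustration of the kind of cleanup the study points to, the filter below drops obvious retweets and tweets with no usable location. The field names loosely mirror Twitter’s API payloads and are assumptions for this sketch, not anything taken from the study.

```python
def clean_tweets(tweets):
    """Drop obvious retweets and tweets whose location can't be checked.

    Assumes each tweet is a dict with a 'text' field and an optional
    'coordinates' field; both names are assumptions for this sketch.
    """
    cleaned = []
    for tweet in tweets:
        text = tweet.get("text", "").lower()
        if text.startswith("rt ") or "retweet" in text:
            continue  # likely a duplicate of someone else's report
        if not tweet.get("coordinates"):
            continue  # no way to verify where the tweeter is
        cleaned.append(tweet)
    return cleaned
```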

“As an earthquake responder, at the same time I would have received an email that had magnitudes and an epicenter for an earthquake, I'd have 100 short, personal accounts of what happened,” Earle is quoted as saying in the Christian Science Monitor. “Most of those will only say 'earthquake,' but others will say a little bit more.”