Why We're Here

Science is about finding the truth, and much of the time the truth is that we barely know anything at all. Our story starts with the field of flood risk analysis.

Mismatched Structural Incentives

In the field of flood risk analysis, firms are structurally incentivized to pretend they can do half of what their clients actually need, regardless of whether the models or results they're selling can be relied upon at all. Clients end up with overconfident estimates, and their resilience planning and adaptation decision-making suffer as a result.

These harmful structural incentives are firmly baked into the market, and individual firms seeking to advance the state of the art are at a structural disadvantage. Unlike most applications of science, engineering, and statistics, results in flood risk analysis are not readily falsifiable. A failure of quality and rigor in microchip manufacturing results in observably faulty microchips and is directly punished by market forces. A failure of quality and rigor in flood risk assessment is not so readily identified, because the core output is a probabilistic estimate of extreme events. Flood risk analysis cannot tell you what will happen in the next 100 years, only what is likely to occur in the next 100 years. If we see less flooding than expected, does that mean our analysis is wrong, or that we got lucky? If we see more flooding than expected, did we ignore something important, or did we just get unlucky? If we were making microchips, we could make 10,000 microchips and test them. In flood risk, we can't simply wait to observe another 10,000 years of flood data---we need to know if our estimates are reasonable today.
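
To make the lucky-versus-wrong problem concrete, consider a toy calculation (ours, purely for illustration): suppose a model says a given flood has a 1% chance of occurring in any year, the classic "100-year flood." Even if that model is exactly right, a 30-year observational record will contain no such flood roughly 74% of the time. A sketch in Python:

    # Illustration: why short records can't falsify flood risk estimates.
    # If a "100-year flood" truly has a 1% annual exceedance probability,
    # how often would a 30-year record contain zero such floods even
    # though the model is exactly correct?
    p_annual = 0.01   # modeled annual exceedance probability
    years = 30        # length of a typical observational record

    p_zero = (1 - p_annual) ** years   # ~0.74
    p_at_least_one = 1 - p_zero        # ~0.26

    print(f"P(no 100-year flood in {years} years): {p_zero:.1%}")
    print(f"P(at least one in {years} years): {p_at_least_one:.1%}")

A correct model and a badly wrong one can produce indistinguishable records over an entire career. Observation alone cannot tell them apart on any useful timescale.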

The only way to ensure that flood risk models are valid is through strict methodological scrutiny. Rigorous scientific modeling and theory-building seeks to map assumptions to conclusions: if we assume the flood statistics behave like X, and our data looks like Y, then the outcomes are mathematically guaranteed to look like Z, plus or minus some measure of uncertainty. For our models to be valid, we have to make sure that the mathematical machinery mapping X and Y to Z is fully correct. We have to make sure that our data really does look like Y, and that our assumptions X are actually true, or that any likely violation of those assumptions doesn't change Z very much.
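
As a deliberately simplified sketch of that pipeline (not our recommended method; every modeling choice below, from the GEV distribution to the synthetic record and bootstrap settings, is an illustrative assumption), suppose X is "annual maximum flows follow a generalized extreme value (GEV) distribution," Y is a gauge record of annual maxima, and Z is the 100-year flood level with a bootstrap uncertainty interval:

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)

    # Y: a record of annual maximum flows (synthetic here, standing in
    # for a real gauge record).
    annual_maxima = genextreme.rvs(c=-0.1, loc=100.0, scale=25.0,
                                   size=60, random_state=rng)

    # X: assume the annual maxima are i.i.d. GEV-distributed. Under that
    # assumption, the 100-year flood is the level exceeded with 1%
    # probability in any given year.
    c, loc, scale = genextreme.fit(annual_maxima)
    q100 = genextreme.isf(0.01, c, loc=loc, scale=scale)

    # Z, plus or minus: bootstrap the fit to see how much the estimate
    # moves under resampling of the data.
    boot = []
    for _ in range(500):
        sample = rng.choice(annual_maxima, size=annual_maxima.size,
                            replace=True)
        cb, locb, scaleb = genextreme.fit(sample)
        boot.append(genextreme.isf(0.01, cb, loc=locb, scale=scaleb))
    lo, hi = np.percentile(boot, [5, 95])

    print(f"100-year flood: {q100:.1f} (90% interval: {lo:.1f}-{hi:.1f})")

Every step leans on the assumptions: if the record is nonstationary, or the GEV choice is wrong, the interval printed at the end is exactly the kind of confident-looking Z that cannot be trusted. That is why scrutiny must land on the assumptions, not just the output.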

Poking Holes, Burning Bridges

The only way to reliably ensure methodological validity is for somebody to actively poke holes in the state of practice. In the Architecture and Engineering (A&E) consulting space, which dominates the field of applied flood risk management in much of the United States, there simply isn't room for that. Because there's no way for clients to easily verify methodological appropriateness, and even academic researchers struggle to notice core failures of statistical rigor, firms are solely incentivized to deliver something that looks intuitively reasonable, or which is "consistent with the literature" (regardless of the quality of said literature), under budget, ahead of schedule, and in a way that preserves and nurtures interpersonal relationships with key players. Poking holes can blow out budgets and schedules. Patching a newly discovered hole in your methods can alienate a previous client, and identifying holes can upset institutionally powerful partner organizations.

As a consequence, advocating for rigor and scientific integrity is likely to upset people who are worried about keeping the lights on, and it's hard to blame them. It ends up being nobody's job to ensure that methods are rigorous. But in order to protect vulnerable communities from flood risk, somebody has to take responsibility.

What We Do

The intrinsic functional behavior of the system is not within our power to change. So we adjust the boundary conditions.

  1. Document Methodological Weaknesses: We are currently working to comprehensively document the key weaknesses of flood risk analysis methodologies in active use, and their implications for resilience and adaptation decision-making. We advocate for the use of more rigorous and robust methods where available, and suggest plausible pathways for research and development where appropriate methods do not yet exist. In doing so, we hope to better align the need for methodological improvements in applied settings with the ongoing research and development being carried out in academic settings, charting a path forward for practitioners and researchers alike in a field with a severe shortage of centralized leadership.
  2. Educate Stakeholders: We seek to educate decision-makers, stakeholders, local subject matter experts, granting agencies, and practitioners about the limits of the models and predictions they're currently working with. In doing so, we help limit the harms caused by the use of overly confident flood risk estimates in resilience and adaptation investments, and encourage investment in research which meaningfully addresses core weaknesses in the state of practice.
  3. Provide Open-Source Tools: Where possible, we seek to provide user-friendly open-source tools to displace incorrect or otherwise problematic methods, so that practitioners no longer have to choose between feasibility and rigor. In the near term, this work will be performed by researchers at the Barbara Geldner Project, in collaboration with external partners wherever possible. In the future, we hope to fund aligned work by external researchers out of operational revenue, charitable donations, and grant funding as appropriate.
  4. Expand Beyond Flood Risk: This work begins but does not end with flood risk analysis. The structural issues described above extend far beyond the field of flood risk analysis, and we intend to expand to fields facing similar structural barriers to rigorous and effective work. We're interested in supporting any application domain in which the core purpose is to protect people's homes, communities, and livelihoods. We're here to help overcome structural barriers to rigor in science that saves lives.

Why Us?

For reasons described above, firms with a vested interest in the status quo of flood risk analysis, or those seeking to scale up legacy methods to sell questionable data to a global client base, are unlikely to make the necessary methodological improvements in a reasonable timeframe. Academic researchers continue to make valuable improvements, but they have their own perverse incentives to reckon with, and must preserve their relationships with institutional actors invested in the status quo or risk becoming isolated in the ivory tower of academia, disconnected from on-the-ground needs.

Somebody has to make it their job to poke holes and kick the hornet's nest. Somebody has to be willing to upset people and burn bridges. Somebody has to point out when the emperor isn't wearing any clothes.

A for-profit firm cannot push for rigor and integrity without putting itself at a competitive disadvantage. A non-profit or research lab seeking to simply build (and market) a better mousetrap is similarly constrained by competitive market dynamics which fail to reward technical rigor and scientific integrity.

We don't need another player on the field. What we need is a referee to call foul, a commentator to help the audience understand the play, and a coach to show the players a better way to play the game. That is the purpose of the Barbara Geldner Project, and only an organization built for that specific purpose can overcome the structural headwinds to enforcing rigor and integrity in science that saves lives.

What's in a Name?

The Barbara Geldner Project is named for a woman who spoke out for what she thought was important, against forces that threatened to destroy countless lives and communities. In her time and place, speaking up was a terribly risky prospect and posed mortal danger. She lost much, and suffered greatly.

We look to Barbara Geldner as a yardstick of moral courage.

Addressing the structurally enforced lack of rigor and scientific integrity in flood risk analysis is necessary to avoid catastrophic harm to vulnerable communities in the long term. But it is seen by some as a risky move. Many in the field are unwilling to risk being blackballed and losing their careers over something so abstract. It's so much easier to keep your head down, to not make waves, to deliver work that's in line with expectations even if you know it's not good enough to actually keep people safe. Some have suggested that directly and overtly criticizing techniques sold by institutionally powerful actors is "unwise".

At the Barbara Geldner Project, whenever someone asks us, or we ask ourselves, whether it's wise to poke holes in scientific endeavors meant to safeguard vulnerable communities and save lives, and whether we should be worried about the potential consequences---backlash, lost contracts, damaged careers---we look to our namesake. Compared to the risks she took, the choice to speak up for scientific rigor and integrity in a free society isn't risky at all.

Our Core Values

Scientific Rigor
We address application domains in which direct falsification of problematic results is infeasible, in which the only way to tell that our results mean anything is whether our assumptions are sensible, and whether the mathematics underlying our methods provably and objectively maps those assumptions to our results. Where those assumptions might break down, we have to understand the consequences and spell them out---not tucked in the back of a tech report that nobody will ever read, but in big neon letters on the tin (metaphorically speaking). Without that, none of this means anything, and we might as well be playing Calvinball.
Transparency
Scientific rigor in such application domains relies entirely on methodological scrutiny, which cannot be achieved without methodological transparency. Just like in math class, your answer is worth nothing if you do not show your work.
Accessibility
We've seen otherwise highly competent practitioners, whose careers center around quite difficult mathematics, treat probabilistic and statistical modeling as some kind of inaccessible arcane art. In other cases we've seen binding data requirements addressed with the sentence "I know a guy". We believe that science is for everyone. That mathematics is for everyone. We believe that everyone should have the opportunity to genuinely understand what's under the hood in our application domains. We seek to make our methods and data accessible and understandable to anyone interested. There are practical limitations here: accessible educational information is difficult to produce. Oftentimes it's unethical to disclose property-level risk estimates: they can be reliable enough to support decision-making at an aggregate scale, yet misleading at the property level, and liable to cause harm if used in individual decision-making when choosing where to live or what insurance premium is appropriate for a given home. We will always seek to maximize the accessibility of our data and methods within those constraints.
Accountability
We believe that accountability is a strict moral obligation incumbent upon scientists and any others who present themselves as experts. It is the duty of scientists and other self-professed experts to take full responsibility for the validity and correctness of their work, and to hold their peers accountable for the same. This holds regardless of whether a non-expert client has agreed to accept work that an expert knows is incorrect. This holds regardless of whether it's easier to pass the buck to someone else who proposed an incorrect method. This holds regardless of whether the incorrect work is published by an influential senior researcher in a peer-reviewed journal. If you wish to claim that your work is valid and useful, the buck stops with you.
Scientific Humility
If the science were easy, the problems would already be solved. Mistakes happen. Methodological gaps can go undetected. The science evolves. The promise of perfection is inevitably a lie. We believe that scientists have an obligation to recognize and correct methodological issues openly and as swiftly as possible. It is not permissible to deflect methodological concerns with an appeal to authority. Methodological concerns must be addressed by engaging directly with the concerns themselves.