Hmmm. Feel like we're talking past each other. Way past.
1) Part of my point - simply paraphrasing many climatologists - is that "El Nino" is not a categorical independent variable that is particularly useful in a regression model. It's not equivalent to, say, "June" or "high rainfall days." It's not a thing. Nor is it a continuous ratio-level variable like "inches of snow." Rather, "El Nino" is an element in a ridiculously complex mathematical model (below), and at that, it's a multivariate n-dimensional proxy, taking a stab at representing the variance of thousands upon thousands of separate variables folded into it. Moreover, unlike "high rainfall days," we're not even sure an El Nino from one cycle is the same climatic entity as an El Nino from a decade earlier, or whether the two can be compared meaningfully. Which means that by using it in a simple parametric regression (or whatever you're doing), you're violating several fundamental statistical assumptions. You may shrug, and sure, I can regress anything on anything; the number and size of flat-screen TVs is fairly predictive of height. Maybe El Nino can predict snowfall at Mammoth. But in your case, impressive terms like "significant" and "confidence interval" are functionally meaningless, even if they appear on your output.
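To make the point concrete, here's a toy simulation - invented numbers, not anyone's actual Mammoth data. Generate seasons of pure-noise snowfall, slap an arbitrary yes/no "El Nino" label on each one, and test for a difference (a t-test here, which is just the regression-on-a-dummy-variable case). "Significant" results pop out at the nominal rate even though there is, by construction, nothing there - which is why the word on a printout proves nothing when the variable itself is ill-defined:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_seasons, n_trials = 40, 1000
false_positives = 0

for _ in range(n_trials):
    # Fake seasonal snowfall totals (inches): pure noise, no climate signal at all.
    snowfall = rng.normal(300, 80, n_seasons)
    # Arbitrary binary "El Nino" labels assigned at random.
    el_nino = rng.integers(0, 2, n_seasons).astype(bool)
    # Two-sample t-test = regression of snowfall on the El Nino dummy.
    t, p = stats.ttest_ind(snowfall[el_nino], snowfall[~el_nino])
    if p < 0.05:
        false_positives += 1

# Roughly 5% of pure-noise comparisons come out "significant" by chance.
print(false_positives / n_trials)
```

And that 5% is the *best* case, with every assumption met; violate the assumptions (autocorrelated seasons, a proxy that isn't the same entity cycle to cycle) and the reported rate isn't even trustworthy.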
2) You seem confused about the difference between a model and a prediction equation. A model is a system of causal explanations for measurable events that is consistent with extant theory. Models, once they are sufficiently complete, can be used to generate specific hypotheses that can be tested with appropriate statistics. Typically these hypotheses begin as retrodiction (predicting what's already happened, like yield), and eventually become truly predictive. Simple chemical reactions or bullets coming out of rifles come to mind. But as any climatologist will tell you, global atmospheric chemistry and physics are a lot more complicated than a reaction in a test tube, or a ballistic projectile. So it's not at all surprising that even 2015 climate models are not locally "predictive" as you use the term. They do not have sufficient data, and the data they do have are often of unknown comparability and error. Twenty years ago, they were a lot cruder. So the fact that specific hypotheses back then did not produce predictions for southern California - or even for the whole world - that fit inside 95% confidence intervals right now is a big yawn.
3) So no one is trusting the climate models on a local level; that was one of my points. Global climate models hold up in terms of explaining global trends; put another way, current trends in global temperature and certain phenomena like variance in storm intensity are consistent with climate change models (below). They are not complete enough (above) to generate testable predictions for southern California. You're saying that they don't make such predictions and are therefore suspect, but no climatologist I've ever heard of would even try to make them. You're saying that El Nino can be used as an independent categorical variable. Again, no climatologist would do that except as a kind of heuristic exercise. The media, and we skiers, love that stuff; scientists, not so much.
4) You also seem confused about the basic mission of global warming models. They do not suggest we will meet your requirement of a neat series of ever-warmer seasons that trumps your beloved outliers. They are global models, y'see. Some places will get warmer, others will not. Most years will get warmer, some may be cooler. Some places will get less snow, some will get more. The total global averages will creep upward, not in a perfect straight line but irregularly. A good example is May 2014 - April 2015. According to NOAA, it tied as the warmest 12-month period globally on record. And yet, Boston got record snowfall (107"), and right now it's been a weirdly cool spring. Global warming is about multivariance, not a neat march of averages.
5) I read your post with the colored tables and bar graphs. Again, I have respect for all the data you collect. Your stuff is fun. It's nice to see simple graphical data. And I don't want to be dismissive because there are, or should be, places for this kind of discussion. Epic is perfect. But you go a bridge or ten too far.
You state you'll leave explanation to the meteorologists and then make some fairly large generalizations about El Nino and snowfall and climate change models that seem waaay beyond your own data. So which league do you want to play in? If you're going to challenge consensus models as being "miserable," go on about how you judge some recent years to be "extreme outliers of natural variation," and even specify the kinds of outcomes you'll need to believe the climate modelers, then you should play by the same rules as the guys you challenge. Or are you immune, the amateur who can take potshots at guys sweating out decades of training, mired in differential algebra, and then retreat behind the "hey-I'm-just-a-civilian-playing-with-some-snowfall-numbers" bit?
Suggest you go check out a Methods section in a journal. For instance, you show a "trend line." How did you derive it? Did you fit families of curves? Which one did you settle on, and why? Did you hand-draw it to connect bars? Something else? What's your justification for your analytic choices? What stat package did you use, which tests did you run, and what assumptions lay behind choosing those tests? Why that stat test and not this other one? What is the variance and error of your variables? What is their quantitative reliability? Answers to stuff like this are hard, tedious work, but they have central meaning. They're what distinguishes solid science from mediocre science from pseudoscience. And finally, since you don't like the consensus climate model for local snowfall prediction (which it never tried to do), what's yours? Where do your variables fit causally into that model?
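Here's the kind of minimal exercise a Methods section demands, sketched on made-up snowfall numbers (again, hypothetical data, not your tables): actually fit a family of candidate curves and compare them with a criterion that penalizes extra wiggles, rather than eyeballing one line over the bars. AIC is one standard choice; there are others, and defending the choice is part of the work:

```python
import numpy as np

# Toy stand-in for 20 seasons of snowfall totals: a modest linear
# trend plus noise. Hypothetical values for illustration only.
rng = np.random.default_rng(1)
x = np.arange(20)                                  # seasons since year 0
snow = 300 + 2.0 * x + rng.normal(0, 40, x.size)   # inches per season

def aic(y, yhat, k):
    """Gaussian AIC: rewards goodness of fit, penalizes parameter count k."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Fit a family of polynomial "trend lines" and report a defensible comparison.
for deg in (1, 2, 3):
    fit = np.polyval(np.polyfit(x, snow, deg), x)
    print(f"degree {deg}: AIC = {aic(snow, fit, deg + 1):.1f}")
```

A higher-degree curve will always hug the bars more tightly (its residual sum of squares can only shrink), which is exactly why you need a penalized criterion - or cross-validation - to say which curve is justified rather than which one looks nicest.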
Edited by beyond - 6/9/15 at 8:39pm