Monday, August 31, 2015

Too many lawyers...

I've been re-watching the science fiction series "Farscape" over the last few weeks (if you haven't seen it, it's definitely worth watching on Netflix).

In one episode, the characters find themselves embroiled in trouble on the planet of Litigaria, a planet with libraries' worth of horribly convoluted and complex laws... due to the fact that 90% of the planet's population are lawyers, and also that the ruling bodies are law firms.

When trying to figure out how the laws became so out of hand, they found that there was originally a single set or book of laws... all of the other books exist merely to expand on what's in it... As more and more people devoted themselves to the law, the law had to grow more complex to justify them all.

Eventually, our heroes get out of trouble and defeat the villainous lawyer/head of state by referring back to the original set of laws (which had nearly been forgotten).

I have to wonder if there are any parallels between that fictional world and our own nearly incomprehensible volumes of laws, the growing number of political leaders who use them to their advantage, and the fact that too many of them have lost sight of our original set of laws -- the US Constitution.

Just a thought...

Bad Science...

Incompetence or Malice?

I am disgusted by how often the scientific process is distorted to sway public opinion.  All too often, data are presented in a way that is downright misleading, and most of today's general public don't have the scientific training (or even exposure) to recognize when they are being bamboozled.

For example...  These graphs were used to support someone's pet theory relating the study parameter to biological age, as determined by the stage of puberty a child falls under (in this case, arguing that middle and high school students need a later start to the school day).  The presentation highlights a commonly used tactic for making one's data look more compelling than they really are.  As we will discuss, these data are totally meaningless...


In this case, the range of values within each category far exceeds the trend that was concluded from the data. Also, the text of the report states that the error bars shown span only +/- one half of a standard deviation (covering well under half of the overall variation; see the quick calculation below). If the range shown covered 90% of the variation (as would be required to really demonstrate any correlation), the slope of the line would be completely unnoticeable (and the trend even more obviously meaningless).
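To put numbers on that, here is a quick sketch (assuming roughly normally distributed data -- an assumption of mine, since the report doesn't say) of how much of the variation different error-bar half-widths actually cover:

# Coverage of +/- k standard deviations for roughly normal data.
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution
for k in (0.5, 1.0, 1.645):
    coverage = nd.cdf(k) - nd.cdf(-k)
    print(f"+/- {k:g} SD covers about {coverage:.0%} of the distribution")

# Approximate output:
#   +/- 0.5 SD covers about 38% of the distribution
#   +/- 1 SD covers about 68% of the distribution
#   +/- 1.645 SD covers about 90% of the distribution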

Furthermore, the "puberty" categories used are completely subjective -- and were not selected a priori.  In other words, the category that each subject was placed in was not determined before evaluating the students, but after -- allowing the researcher to assign the category based on the behavior being studied.

We can't evaluate the effect of this poor practice in assigning the categories, but we CAN examine the data with a more realistic presentation.  If I were to graph the left chart properly (full scales on the axes, and error bars showing the full range of +/- 1 standard deviation), it would look like this:




Given the range of variability in the data for each category, I would have a very tough time claiming any validity for a meaningful trend in these data.  Yet, these are the kinds of data tricks/deceits that are used to persuade voters (and the political machines in DC) to take all manner of extreme actions to "improve" our lives.
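For anyone who wants to produce that kind of honest re-plot of their own data, here is a minimal matplotlib sketch; the category labels and values below are hypothetical placeholders of my own, not the study's actual numbers:

# Re-plot category means with full-scale axes and +/- 1 SD error bars.
import matplotlib.pyplot as plt

categories = ["Pre", "Early", "Mid", "Late", "Post"]  # hypothetical puberty-stage labels
means = [6.9, 7.0, 7.1, 7.2, 7.3]                     # hypothetical category means
std_devs = [1.2, 1.1, 1.3, 1.2, 1.4]                  # hypothetical standard deviations

fig, ax = plt.subplots()
x = range(len(categories))
ax.errorbar(x, means, yerr=std_devs, fmt="o-", capsize=4)  # error bars span +/- 1 SD
ax.set_xticks(list(x))
ax.set_xticklabels(categories)
ax.set_ylim(0, 12)  # full-scale y-axis, no truncation to exaggerate the slope
ax.set_xlabel("Puberty stage (hypothetical categories)")
ax.set_ylabel("Study parameter (hypothetical units)")
ax.set_title("Category means with +/- 1 SD error bars on a full-scale axis")
plt.show()

Plotted this way, the tiny differences between category means are dwarfed by the spread within each category -- which is exactly the point.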

Author Robert Heinlein once observed, "Never attribute to malice that which can be adequately explained by stupidity, but don't rule out malice."

Sometimes it's tough to know which motivation to blame.



Sunday, August 30, 2015

That's all there is...

Finding wisdom in unexpected places

I am a huge fan of the space program, especially of what I like to think of as the golden age -- the Apollo missions to the Moon.  As a middle- and high-school student, I used to find excuses to stay home from school to watch the launches on TV, and remained glued to the TV any time there was live coverage.  Astronauts were (and still are) some of the people I consider my greatest heroes.

As I've gotten older and more philosophical (and, hopefully, marginally wiser), I've started looking beyond the "coolness factor" of the space program and seeing the humanity of those behind it -- not just the astronauts, but their families, flight controllers, spacecraft engineers, etc.  It was a truly great time for humanity as well as for space exploration.

There are many good books and documentaries about this time, but a few of them are stand-outs:

1)  Andrew Chaikin's book "A Man on the Moon: The Voyages of the Apollo Astronauts" is probably the best book on some of the people who made this happen (particularly the astronauts).

2) Tom Hanks' HBO series, "From the Earth to the Moon" (derived from Chaikin's book), presents these stories in a hugely accurate and spellbinding way.

3)  Astronaut Gene Cernan's memoir, "The Last Man on the Moon"

4)  "In the Shadow of the Moon" is a video documentary gathering the thoughts and impressions of the remaining Apollo astronauts with some spectacular video.

This link shows a particularly good clip from the HBO series (if you haven't seen it, you should). The narration is by the actor portraying Apollo 12 Lunar Module pilot Alan Bean, as he realizes the most important aspect of any endeavor -- "... whether it's across town, or to the moon and back...."

Worth watching...

https://www.youtube.com/watch?v=dfdZO6-d-0Q&feature=youtu.be

"Cultural Appropriation:" Another way for Progressives to get upset

Yup, you heard right...  "Cultural Appropriation."

I first became aware of this phrase a couple of weeks ago when one of my liberal friends linked to this video elsewhere.  Be warned, there are a few F-bombs here (another pet peeve of mine -- why anyone thinks that such gratuitous language helps their case is beyond me).

http://www.upworthy.com/people-asked-him-if-it-was-ok-for-white-people-to-have-dreadlocks-heres-his-no-bs-response

I'd never heard of the concept before. The more I think about it, the more I consider it completely ludicrous. Let's look at the definition of "appropriation":


noun: the action of taking something for one's own use, typically without the owner's permission.

1st: Who "owns" a hair style or musical genre, such that permission is required to use it?

2nd: Even if someone could claim "ownership" of a hair style, my adopting it (had I sufficient hair to do such a thing) does not deprive the "owner" of the hair style of the ability to wear it.

The very idea that someone would object to someone like me "appropriating" something just boggles my mind... Should I not be allowed to enjoy, say, Spanish guitar? Should Darius Rucker not be allowed to put his own spin on country music? Should Barbra Streisand not be allowed to put out an album of Christmas songs? I could go on, but I think you get the idea...

I'm sorry, but the very thought is so utterly alien and nonsensical to me that I'm having a tough time understanding how this is even a thing... (were I prone to such language in polite society, I'd be tempted to drop a few choice words on this topic). 

Aside from the occasional profanities, the guy in the video is mostly spot on.

One point where I'd disagree... The gentleman (I can't find his name on the videos or his page) implied he might have a problem with someone "appropriating" some cultural aspect for reasons other than liking it. I can't go along with that. That would be akin to saying that a company can't sell a certain item -- say, a religious icon -- unless its owners were followers of that religion. I'm pretty sure not all sellers of Christmas-related items are Christian.

In my mind, the closest thing to "cultural appropriation" would be in the definition of a "melting pot." Without the merging and adopting of various cultures outside our own, there would not be the rich tapestry of cultures that exists in today's world. Rock and roll wouldn't exist, nor would so many other musical styles.

In short, like the term "microaggressions," "cultural appropriation" is another invention of the left -- used to give people yet another excuse to be upset at the world at large.

Thursday, August 27, 2015

The Madness Continues

Yeah, we've gone off the deep end...


A school sent a little girl home with a note to her parents stating that the girl's lunch box was inappropriate for school because it was festooned with violent imagery...

And just what was this violent imagery that was so horrific as to warrant such fear that innocent school children would be psychologically scarred for life?  Was it Arnold Schwarzenegger's Terminator attempting to snuff the life out of Sarah Connor?  Or perhaps Anthony Hopkins's Hannibal Lecter having a late-night snack of liver and chianti?  (I'm pretty sure it couldn't have been shots from the undercover Planned Parenthood videos.)

No, the image that prompted such immediate concern from school officials was Wonder Woman...  Yup, that's right -- Wonder Woman.


http://news.yahoo.com/school-bans-childs-wonder-woman-lunchbox-083018166.html

And my liberal friends -- who argued with me most strenuously about the need for controls on speech that can be seen as "micro-aggressions" (God, how I hate that term) -- are inexplicably surprised by this story.  

<Sigh>   Sometimes, it just doesn't pay to get out of bed and read the news.

Wednesday, August 26, 2015

I Hate to Say I Told You So...

A few days back, I penned a lengthy treatise on the flawed ideas of "protecting" students (and people in general) from being exposed to "harmful" thoughts and words:  http://thoughtfulcynic.blogspot.com/2015/08/emotional-inoculation-revisited.html

Supporting this idea, The Atlantic magazine published an article where the author described the desire of modern society to shield people (but especially young adults) from words and ideas that make some uncomfortable.  
But the implementation of this absurd notion frequently goes far beyond merely shielding people from "bad" thoughts.  
As the article puts it, "this movement seeks to punish anyone who interferes with that aim, even accidentally. You might call this impulse vindictive protectiveness."  (http://www.theatlantic.com/magazine/archive/2015/09/the-coddling-of-the-american-mind/399356/)

The author then reached an all-too-prophetic conclusion:
“When speech comes to be seen as a form of violence, vindictive protectiveness can justify a hostile, and perhaps even violent, response.”
So now we reach yet another unfortunate tie-in with current events.  Earlier today, a man by the name of Vester Lee Flanagan shot and killed a reporter and a cameraman during a live broadcast. A disgruntled former reporter for WDBJ-TV in Roanoke, VA, Flanagan had filled his Facebook and Twitter feeds with hateful and vindictive statements about how he had been mistreated by coworkers (and society in general) because he was black and gay.

According to the WDBJ Station Manager:
Vester was an unhappy man. We employed him as a reporter and he had some talent in that respect and some experience. He quickly gathered a reputation of someone who was difficult to work with. He was sort of looking out to people to say things he could take offense to. Eventually, after many incidents of his anger, we dismissed him. He did not take that well. We had to call police to escort him from the building.  http://bgr.com/2015/08/26/virginia-shooter-vester-lee-flanagan/

It's still early, but it looks like the horrific events outside of Roanoke, VA were motivated by the shooter's inability to cope with perceived slights against him, and that he responded with the kind of violence that vindictive protectiveness justifies. Clearly, his employer and coworkers felt that he had been looking for reasons to be offended.  And the indoctrination from today's society told him that such verbal offenses justified some sort of response.

We've seen -- and I fear we will continue to see -- more and more cases of violence as overly sensitive young adults, conditioned by today's academic environment, encounter a real world that will deal them a perceived slight, insult, or disappointment at every turn -- each of which they have been taught to experience as psychological trauma.


Proponents of such a system have blood on their hands.

Monday, August 24, 2015

Breakfast and the Science of Consensus


Interesting developments are bringing into question the general consensus that "breakfast is the most important meal of the day." An increasing number of studies now show that this may not be true.
The bottom line of these studies is that no causal link is found between eating breakfast and the various health benefits ascribed to it.  In fact, no strong statistical links can be found between either eating or skipping breakfast and various measures of health.  I recommend reading the linked articles for details.
A couple of interesting side points, as well.

  1.  I heard a story (unconfirmed, but I'm trying to track it down) that this "consensus" had its roots in a clever advertising campaign intended to sell more ham -- a campaign that was NOT directed at potential customers. The genius behind this marketing concept developed slick, glossy pamphlets quoting fictitious studies extolling the virtues of a healthy breakfast of ham and eggs. These were distributed to doctors, who then advised their patients about this "new research." I don't know if this is true or if I am mis-remembering the story, but it sounds like something an ad company would do.
  2. The linked article nicely illustrates the dangers of using "consensus" as a component of scientific research.  When such a strong consensus is assumed a priori, subconscious biases induce the following problems, where researchers:

  • Offered biased interpretation of their own results
  • Improperly used causal language to describe their results
  • Misleadingly cited others' results
  • Improperly used causal language when citing others' work.
On the matter of consensus, I like to think about Dr. Richard Feynman's commentary on the Millikan oil drop experiment, used to determine the charge of the electron:

We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off because he had the incorrect value for the viscosity of air.

It's interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn't they discover the new number was higher right away? It's a thing that scientists are ashamed of—this history—because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that...

So, if scientists can be wrong for reasons of "consensus" on a single, measurable value of a physical constant, might they be more likely to be swayed by consensus on something as amorphous and unmeasurable as climate change? I think you know my answer to that.


Political Corruption and Donations -- a Chicken and Egg Quandary 

I get continually frustrated by the cries to stop the corruption of politics by "big business" and its money. I counter that money couldn't corrupt politics if politicians didn't sell favors granted by their ability to wield power over others.
Recently, from one of my favorite blogs: "... the usual conception of corruption is that it occurs when business corrupts government, in reality exactly the reverse is the case: persons who wield political power actively seek to sell the use of that power to persons who can enrich them. In point of fact, business cannot corrupt government, because the power to create privileges in the marketplace by an action of the State must exist and be 'advertised for sale' before it can be purchased."
These are not new thoughts. The earliest reference to this concept (that I could find) comes from Isabel Paterson (January 22, 1886 – January 10, 1961), a Canadian-American journalist, novelist, political philosopher, and a leading literary critic of her day.
Just something worth thinking about.

Science, Models, and Verification

Lies, Damn Lies, Statistics, and Models

Some of the biggest tools used to warn of the impending perils of anthropogenic global warming (AGW) are computer models showing increasing global temperatures over the coming years. These models, it is said, are settled scientific proof that AGW is a threat to our very existence.
Having had some experience with computer models over the last few decades, I find them valuable tools... a good computer model can be used to anticipate all sorts of future events. Some examples are:
  • When the Space Shuttle flew for the first time in 1981, it was the first spacecraft ever to carry a crew on its maiden flight. Engineers felt confident that their computer models of the Shuttle's performance in all flight domains adequately predicted how the spacecraft would actually fly. It turns out they were right.
  • Numerical weather prediction (NWP) models are usually quite accurate in predicting future weather a few days out (the occasional missed forecast notwithstanding -- but that's as much a failure of input data as it is of the model).
  • Engineers can build virtual prototypes of new products, testing them in cyberspace to find weaknesses before having to build real products -- saving cost in testing as well as reducing time from design to production.
There are more, but you get the idea. These examples cover a wide range of domains, but the models used in all of these areas have one thing in common -- Verification and Validation (V&V). The models themselves were put through a rigorous regime of tests. They were used to predict the outcome of some future event or system performance, which was then compared to reality. If the predicted performance differed from reality, it was back to the drawing board. Thus, models used for operational decisions -- especially for critical things like the first Space Shuttle flight -- have been tested thoroughly enough to give a high degree of confidence that their predictions will match reality.
No models are 100% accurate; very few are even mostly right... most require updates to include new knowledge or capabilities. Even our well-exercised weather models' predictions are compared to the actual weather occurrences on a daily basis -- any differences are used to regularly update or "tune" the models in hopes of improving their predictive skills.
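To make the V&V idea concrete, here is a toy sketch of the simplest possible verification step -- computing an error statistic between what a model predicted and what actually happened, then deciding whether the model is fit for use. The numbers and the tolerance are made-up illustrations of mine, not real forecast or satellite data.

# Toy model-verification check: compare predictions to observations.
import math

predicted = [0.20, 0.26, 0.33, 0.41, 0.50]   # hypothetical predicted anomalies (deg C)
observed  = [0.18, 0.17, 0.21, 0.19, 0.20]   # hypothetical observed anomalies (deg C)

# Root-mean-square error between prediction and reality.
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted))

TOLERANCE = 0.10  # hypothetical acceptance threshold (deg C)
if rmse <= TOLERANCE:
    print(f"RMSE = {rmse:.3f} deg C: within tolerance -- the model passes this check.")
else:
    print(f"RMSE = {rmse:.3f} deg C: exceeds tolerance -- back to the drawing board.")

Real V&V regimes are far more elaborate, of course, but the principle is the same: the prediction is graded against reality, and a failing grade sends the model back for rework.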
This leads us to the models used to support AGW predictions. Certainly, a great deal of research and scientific knowledge has gone into their creation -- we can't deny that. However, there is one area where AGW models fall short -- Verification and Validation. The predicted AGW conditions and changes are unique to our time and are predicted to happen over decades (or more) -- thus, we can't truly compare model predictions to an outcome that hasn't happened, yet... and won't happen for a hundred years.
This, alone, would give me pause when considering the AGW predictions made by computer modeling. However, we have the ability to look at the model predictions made by the Intergovernmental Panel on Climate Change (IPCC) back in 2000. The dire predictions by the IPCC in their past and current reports are certainly frightening enough. But they are based on models whose performance has never been verified or validated.
In the 15 years since, enough data have been collected that we can assess the quality of these predictions in the short term -- our own V&V, if you will. The data of choice for this V&V are average global temperatures based on satellite-sensed atmospheric temperatures. These data come from microwave radiometers flying on DoD and NOAA weather satellites (and have been available as far back as the late '70s). I have a good deal of experience with them, as one of my jobs in the Air Force put me in charge of processing these sensor data from the spacecraft and making them ready for inclusion in the Air Force NWP models. Thus, I can confirm that they are a good measure of temperature throughout the entire depth of the atmosphere.

So, what do we see when comparing satellite-sensed temperatures against the predictions? Take a look at this chart.


The set of green lines represents the range of IPCC model-predicted temperature change from 2000 to the present. The black line represents the satellite-sensed average global temperature change. While we clearly see a bump of about 0.25 degrees in the late 90s, global temperatures have been, on average, remarkably steady since then. No rise in temperatures has been seen over the period of the IPCC forecasts, in complete contrast to the model predictions.
In any other business, we would say "back to the drawing board" -- clearly, something is amiss in the models that has skewed the predictions away from reality. Computer models are not "science" in and of themselves. The scientific process isn't just research and prediction. Theories must be verified by actual measurements if they are to be accepted. If the predictions are wrong, then there is something wrong with the theory. In other words, the science isn't settled.
In the AGW industry, however, we charge full speed ahead, reality be damned. Excuses for the missed predictions are found (and disproved, BTW), and the cries of "The Science Is Settled" continue.
So, the next time AGW proponents try to scare you with dire predictions of a hellishly hot Earth, just remember the models used to make these predictions have either never been validated, or have failed the validation efforts made so far.

Emotional Inoculation (Revisited)

(Originally posted as a guest column on a blog of which I've been a long-time reader and admirer, http://bastionofliberty.blogspot.com/)

I thought it would be appropriate to repost this article in light of recent events.

-----------------------------------------------------------------

     If I were to tell you that this graph represented the distribution of the 27 worst outbreaks of some unspecified disease in all of US history, grouped by decade, you might rightly wonder what has happened to cause the surge of cases over the last few decades, and especially the most recent 10 years. I'll get back to this graph a little later, but for now, let's look at some background.



     Some years back, I found myself curious about the nearly obsessive way many of today's parents clean and disinfect everything within eyesight of their child. Growing up, my brothers and I constantly played in the dirt, local streams/creeks, etc., and never suffered any ill effects. Our parents (and friends' parents) never disinfected our toys and playground equipment with anti-microbial wipes. Nor would they keep us from playing in dirt or “dirty” environments.
     I started thinking that exposure to dirt and germs, like modern vaccination, was vital to helping develop a healthy immune system. After all, how can a body develop resistance to germs if it is never exposed to any? Might this desire to overprotect children actually be harmful to them? Could this be the reason we are seeing increasing numbers of cases of food allergies, asthma, autoimmune disorders, etc.?
     Sure enough, an increasing number of medical specialists are coming to a similar conclusion. Parents are keeping things TOO clean. A simple web search will find numerous studies and analyses supporting this idea. Summaries of some are linked here:
  1. http://www.nytimes.com/2009/01/27/health/27brod.html?_r=0
  2. http://www.webmd.com/parenting/d2n-stopping-germs-12/kids-and-dirt-germs
     You might be wondering what this has to do with anything. Well, read on...
     Over the last decade or two, I've seen a disturbing trend toward “protecting” children (even high school- and college-age kids) from having their feelings hurt. While out-and-out “bullying” is reprehensible, we've gone so far as to eliminate typical school-age teasing. And I should know about that, having received more than my fair share of it (I AM a nerd/geek and DO have a big head and AM pretty poor at athletic pursuits). Did this hurt me? Yeah, I guess it did – at first. But I quickly learned that the old saying about “sticks and stones” was true – words could not hurt me unless I let them. I had control of my feelings, and nobody else did.
     By being exposed to the “dirty environment” of typical school children, I developed an immunity to it – the ability to resist these barbs and not let them hurt me. Today, however, children's egos are not allowed to be bruised at all... Teachers can't use red pens when grading, trophies are given to everyone just for participating, and any conflict between students is immediately stopped by teachers. In short, children's psyches are coddled, overprotected from anything that might cause them distress. In other words, they are not being inoculated against disappointment or insults. More recently, however, young minds are also being shielded from “bad thoughts.”
     I recently came across a terrific article in the September issue of The Atlantic magazine (http://www.theatlantic.com/magazine/archive/2015/09/the-coddling-of-the-american-mind/399356/) that discusses this, not just at the grade-school level, but at the college level. I highly recommend reading the entire article, but will provide a few key quotes:
     “Something strange is happening at America’s colleges and universities. A movement is arising, undirected and driven largely by students, to scrub campuses clean of words, ideas, and subjects that might cause discomfort or give offense.”

     “Two terms have risen quickly from obscurity into common campus parlance. Microaggressions are small actions or word choices that seem on their face to have no malicious intent but that are thought of as a kind of violence nonetheless. For example, by some campus guidelines, it is a microaggression to ask an Asian American or Latino American “Where were you born?,” because this implies that he or she is not a real American. Trigger warnings are alerts that professors are expected to issue if something in a course might cause a strong emotional response. For example, some students have called for warnings that Chinua Achebe’s Things Fall Apart describes racial violence and that F. Scott Fitzgerald’s The Great Gatsby portrays misogyny and physical abuse, so that students who have been previously victimized by racism or domestic violence can choose to avoid these works, which they believe might “trigger” a recurrence of past trauma.”
     This movement – call it political correctness, for lack of a better phrase – “... presumes an extraordinary fragility of the collegiate psyche, and therefore elevates the goal of protecting students from psychological harm. The ultimate aim, it seems, is to turn campuses into “safe spaces” where young adults are shielded from words and ideas that make some uncomfortable. And more than the last, this movement seeks to punish anyone who interferes with that aim, even accidentally. You might call this impulse vindictive protectiveness. It is creating a culture in which everyone must think twice before speaking up, lest they face charges of insensitivity, aggression, or worse.” (emphasis mine)
     “... children born after 1980—the Millennials—got a consistent message from adults: life is dangerous, but adults will do everything in their power to protect you from harm, not just from strangers but from one another as well.”

     “Even joking about microaggressions can be seen as an aggression, warranting punishment.... When speech comes to be seen as a form of violence, vindictive protectiveness can justify a hostile, and perhaps even violent, response.” (emphasis mine)

     “Attempts to shield students from words, ideas, and people that might cause them emotional discomfort are bad for the students. They are bad for the workplace, which will be mired in unending litigation if student expectations of safety are carried forward. And they are bad for American democracy, which is already paralyzed by worsening partisanship. When the ideas, values, and speech of the other side are seen not just as wrong but as willfully aggressive toward innocent victims, it is hard to imagine the kind of mutual respect, negotiation, and compromise that are needed to make politics a positive-sum game.” (emphasis mine)


     Not exposing a child's developing immune system to a normal physical environment (e.g., dirt and bacteria) harms that child – with disastrous results as the child is then unable to deal with these germs when inevitably exposed to them later in life.
     Likewise, not exposing a child's developing psyche to a normal emotional environment (e.g., insults and disappointments) harms the child just as much – he or she will also inevitably be exposed to these realities later in life... there's no getting around it. Someone not prepared to deal with these events as a child will be ill-equipped to deal with them as an adult. This practice has been increasing dramatically over the last few decades – we are seeing a new generation of kids and young adults entering the “real world” who have not been “inoculated” against disappointment. If you're wondering what the long-term effect of this might be, reconsider this quote from The Atlantic article:
     “When speech comes to be seen as a form of violence, vindictive protectiveness can justify a hostile, and perhaps even violent, response.”


     Now, let's get back to the graph shown at the beginning of this screed. The graph represents the occurrences of the 27 worst mass shootings in US history, grouped by decade.  With the exception of one in 1949 and one in 1966, these events are a fairly recent phenomenon.  Indeed, the most recent decade has seen almost half of the 27 worst shootings. And more than half of these were committed by people under 35.

Note:  With the recent shooting in Oregon, we can add another horrific number to the column for this decade -- and another one attributable to the products of today's educational system.





Is there an actual connection between these events and the increasing inability of today's youth to deal with perceived slights? I think it's possible and certainly worth further study.