Organizing by themes VII: Words beginning with “epi-“

This site benefits/suffers/both from consisting of posts about a wide range of topics, all linked under the amorphous heading “data-driven storytelling.”

In an attempt to impose some coherent structure, I am organizing related posts both chronologically and thematically.

In this post, I sketched the winding road on which a 28-year-old man who had just resigned (without any degree) from a doctoral program in government ended up a 48-year-old with a doctorate in epidemiology.

And in this post, that degree turns out to be the endgame (for now), not the starting point.

In between those two points, that man found a genuine resting place in the field of epidemiology. So much so that when his blog—OK, my blog—debuted in December 2016, I was already contemplating the need to publish an epidemiology “primer” to provide context for the many epidemiology-centered posts I just knew I would be writing.

Ultimately, there was only one such post, based upon an unsettling implication from my doctoral research.

This latter post appeared in April 2017, just three months before I decided to stop looking for an epidemiology-related position (or, at least, one that built upon my 19 years as a health-related data analyst and was commensurate with my salary history and requirements, education and experience[1]) and focus on writing and my film noir research.

In this two-part series (which includes links to my doctoral thesis and PowerPoint presentations for each of its three component studies), I describe my experience at the 2017 American Public Health Association Annual Meeting & Expo. In January 2017, when I still considered myself an epidemiologist, I submitted three oral presentation abstracts (one for each doctoral thesis study). Two were accepted, albeit after I had announced my career shift. Nonetheless, I traveled to Atlanta, GA to deliver the two talks; the conference became a test of whether the “public health analyst” fire still burned in me the way it had.

APHA 2017 1

APHA 2017 2

Spoiler alert: not so much.

**********

Here is the thing, however.

I still love epidemiology in the abstract. As I wrote in my previous post: “In epidemiology, I had found that perfect combination of applied math, logic and critical thinking…”

In fact, I even have a secular “bible”:

Modern Epidemiology

In essence, epidemiology was both an analytic toolkit and an epistemological framework: critical thinking with some wicked cool math. Moreover, the notion of “interrogating memory” is informed by my desire to “fact-check” EVERYTHING–I am innately a skeptic.

Well–I was not ALWAYS a skeptic.

And much of my writing about contemporary American politics reflects my concern that the United States is facing an epistemological crisis.

Given my ongoing love for epidemiology (even if it is not currently how I make a living) and my desire to promote critical thinking, it is very likely I will revisit my doctoral field in the future on this blog.

Until next time…

[1] I hesitate to say that I was the victim of age discrimination (at the age of 50), since I cannot back up that assertion with evidence. I am on far safer ground noting that the grant-funded positions I occupied for most of the last two decades barely exist anymore.

Two posts diverged…though not in a yellow wood

This post began as the seventh in the “organizing by themes” series, the one that would contain annotated links to my posts related to epidemiology, epistemology, public health and career changes.

THAT post may be found here.

When I started writing, though, I realized that I was telling the full back story of my adult professional and graduate student life. So rather than clunkily shoehorn the “theme organization” post at the end, I acceded to the inevitability of two distinct posts.

This was not the first time I had started writing one post only to find myself writing an entirely different post; it is a welcome process of literary free association.

**********

As I have alluded to elsewhere, I sort of stumbled into my previous career as a health-related data analyst.

On June 30, 1995, I walked away without a degree from a six-year-long pursuit of a doctorate in “government” (read: political science) from Harvard’s Graduate School of Arts and Sciences (GSAS). In June 2015, however, I applied for—and received[1]—the Master’s Degree for which I had already qualified when I resigned; it was not the worst consolation prize ever.


With no idea what to do next (other than remain in the Boston area, having just moved into an apartment with my girlfriend of two years) and a set of quantitative and “critical thinking” skills, I spent the summer of 1995 performing data entry at a long-defunct firm called Pegasus Communications. That bought me some time…though I did not use it as wisely as I could have.

The following January, despite my better judgment, I accepted a position as an Assistant Registrar at Brandeis University. To this day, I do not know why I was offered the position: I was a 29-year-old political science major with zero experience in higher education administration who would be supervising three highly-competent professional women a few decades older than me.

In retrospect, I think my relative youth and inexperience equated to “willing to work long hours for a lower salary.”

Still…you get what you pay for: it was a terrible fit from the start, and I was unceremoniously let go late in May. As relieved as I was to be free from that position, that was the most drunk I would be until the day my mother was buried in March 2004[2].

Regrouping, I narrowed my focus to positions which would allow me to utilize the data analytic skills I had acquired at Yale and Harvard (though, in retrospect, I did not know nearly as much as I thought I did).

My break came in October 1996—just after I turned 30. I accepted an Analyst position with Health and Addictions Research, Inc. (HARI), landing it in part by using baseball statistics. And for the first time, I truly enjoyed a full-time adult job[3]. However, the federal grant funding for this position expired (not for the last time) in June 1998, so a few months later I moved on to the North Charles Research and Planning Group, then the MEDSTAT Group. These latter two gigs were, in order, horrific and not-bad-for-a-few-months.

All of these companies were located in or near Boston (and no longer exist in late-1990s form). However, as 2000 ended, so did my relationship with the woman my wife Nell half-jokingly calls my first wife. As a result, I decided to resign from MEDSTAT and seek a fresh start in the Philadelphia area, where I was raised.

I actually had a good position lined up with a psychometrics firm in King of Prussia (about 21 miles northwest of Philadelphia), but for still-unexplained reasons, I was “unhired” two days before I was scheduled to start. Nothing breeds paranoia like “we are withdrawing our offer but we won’t tell you why!”

The silver lining, however, was that I was unemployed when a Senior Research Associate position became available at the Family Planning Foundation of Southeast Philadelphia (FPC) in June 2001.

This was where a collection of loosely-related health data positions became a full-fledged career in “health-related data analysis.” Following the abrupt departure of my initial supervisor, I effectively ran a grant-funded research project. When that project ended after one year, I was promoted to direct a new grant-funded project; this latter project remains the most rewarding professional work I have ever done.

In the meantime, I was preparing and delivering talks at scientific conferences (American Public Health Association, Eastern Evaluation Research Society—on whose Board of Directors I would serve for a year). My colleagues and I wrote and published a peer-reviewed journal article for yet a third grant-funded project; I was listed as second author[4]. When the woman who directed the Research Department retired, she hired me as a data-analytic consultant.

And so forth.

That first project for which I was hired related to the association between the establishment of neighborhood youth development activities and teen pregnancy rates. As I recall (more than 16 years later), these activities were established in selected zip codes in North Philadelphia (the “exposed” group), but not in West Philadelphia (the “unexposed” group)—unless it was the other way around.

FPC was one of 12 sites chosen nationwide to receive one of these teen pregnancy prevention grants. At the end of the project, we began to write an article summarizing our findings. This was scheduled to appear in a special edition of a peer-reviewed journal (I forget which one) presenting the results from each funding site. While I was well-educated in quantitative methods (albeit from a social science perspective), we needed a more specific type of statistical expertise.

Enter Dr. Constantine Daskalakis on a consulting contract.

This man was a revelation to me. I had not known there was such a thing as “biostatistics,” and, despite working in public health as a data analyst, I was only vaguely aware of what “epidemiology” was.

In fact, all I really knew about epidemiology was an odd remark my Harvard doctoral committee chair made while teaching one of my graduate American politics classes: “Getting a PhD in political science is tough, but if you really want to do something hard, get a PhD in epidemiology.”

Make of this what you will: I did not complete the political science doctorate; I did complete the supposedly much-harder epidemiology doctorate.

What most impressed me about Dr. Daskalakis—who had only recently completed his own biostatistics/epidemiology doctorate—was his sheer clarity of thought. He laid out an effective analytic approach in a few quick steps.

It was, for all intents and purposes, my first epidemiology lesson.

For various reasons (the timing and efficacy of the youth development activities was wonky?), we wrote a solid draft but never submitted it for publication; there went my first chance to be a first author.

Until then, I had fully rejected the idea of completing a doctorate in a different field; the wounds were still too raw. But the idea of directing my own grant-funded projects—even directing a non-profit research department myself—began to appeal to me. And that would require pursuing a public-health-related doctorate in either biostatistics or epidemiology (they were already cleaving into distinct fields of study).

It remained simply a vague notion, however, until the summer of 2004 when in quick succession 1) my mother died, leaving my stepfather and me co-executors of her modest (but not trivial) estate, 2) the second grant project ended, 3) the next grant-funded project proved less appealing and 4) the siren call of Boston grew ever louder, especially after a trip there which combined a HARI reunion and catching up with friends at the 2004 Democratic National Convention[5].

At the reunion, I heard excellent things about the Boston University School of Public Health (BUSPH). With no desire to return to Harvard (and/or fearing they would not want me back, even in a different graduate school), that was the only viable option I had.

That Fall, as the lawyer-driven[6] rift between my stepfather and me grew wider, a solution to our impasse occurred to me: sell the condominium my mother had intended me to have (and from which I was earning rent) and use the proceeds to pursue a doctorate at BUSPH.

Starting around my 39th birthday, no less.

My intention had been to apply for a doctorate in epidemiology, but the deadline for biostatistics was later, so that was what I chose. My GRE scores had long since expired, so I needed to take those again. My scores, after re-learning how to study for any kind of exam (the last time I had taken anything close to an exam was May 1991, when I somehow passed my Harvard GSAS oral and written exams), were…good enough.

But when I submitted my application to BUSPH, their response was a qualified acceptance: given how many years (20) had passed since I had taken a pure mathematics class, they enrolled me in the Master’s Degree program. I was excited and disappointed in roughly equal measure.

[Spoiler alert: they were not wrong]

Nonetheless, I was returning to Boston for what was shaping up to be a multi-step process. I submitted my resignation at FPC, and left—with an emotional send-off—at the end of June 2005.

In the meantime, I was still waiting for my stepfather to settle my mother’s estate with me—which he finally did in July 2005. Until then, I had to borrow money from a friend to secure the apartment I had located in the Boston suburb of Waltham (yes, where Brandeis is located).

The final dispensation check was dated August 9, 2005; I know the date because I took an enlarged photocopy of it (it is resting comfortably in a filing cabinet behind me and to the left). No, I am not going to include a photograph of the photocopy.

However, just bear with me for a brief romantic digression.

*********

On October 31, 2005, my first Halloween night back in Boston, I received a message from a woman named “Nell” on Friendster, one of the original social networks (and quasi-dating site). On a lark, I had posted on my profile page 10 trivia questions based upon key interests/likes (sample question: “Freddie Freeloader sits between what two greats?”[7])

Only a few miles away in the Boston neighborhood of Brighton, Nell, a private school teacher from Washington DC, was bored. Something about my profile appealed to her, so she took the time to research the questions to which she did not already know the answers.

Naturally, I was deeply flattered—and intrigued by her profile (and, later, her use of the word “persiflage” as the subject line for her first e-mail to me). We struck up a brief correspondence, then went on our first date (meeting in Harvard Square to eat at Bertucci’s—which is no longer there—and watch Good Night, and Good Luck—at a movie theatre which no longer exists). I was so nervous, I kept dropping the movie tickets.

I must not have been too nervous, though: we married 23 months (and one day) later[8].

*********

My plan had been to complete all of my coursework in two semesters (while not earning any income other than interest) to save money. I had already paid off some substantial credit card debts and lingering student loans—and a few days after I returned to Boston, my 1995 Buick Century died. Rather than incur new debt, I paid in full for my black 2005 Honda Accord (it was love at first sight when I spotted it on the dealership lot); I still drive that Accord.

Four courses a semester proved too stressful, though, so I paid for an additional semester.

On a Thursday night in early September 2005, I drove down to the Albany Street campus, parked and walked into a classroom—more of a small auditorium, really—for the first time (as a student) in nearly 15 years. It was Dan Brooks’ Introduction to Epidemiological Methods; the two disciplines may have cleaved into different departments but they were still interconnected.

And, just like that, I was home. In epidemiology, I had found that perfect combination of applied math, logic and critical thinking I had not even known I was searching for until I found it. Even as I labored joyfully through, first, Intermediate then Modern Epidemiology (perhaps the best course I have ever taken), I knew I would soon be applying to the BUSPH doctoral program in epidemiology.

It had to be soon, actually, because my GRE scores would expire in 2010.

By January 2007, I had completed both my “theoretical” and “applied” qualifying exams, and I received my diploma a short time later. I had already parlayed my impending degree into a Quality Researcher position at the Massachusetts Behavioral Health Partnership (MBHP), where I would remain until I was laid off (expiration of grant funds again) in June 2010.

My application to the BUSPH epidemiology doctoral program was accepted early in 2009 (“We were wondering when you were going to apply!”), and I enrolled that September. Thank goodness I did, because when I left MBHP the following June, we lost our health insurance; BUSPH picked up the slack.

In May 2011, I accepted an Outcomes Analyst position with Joslin Diabetes Center, where I would remain until June 2015, when—you guessed it—the federal grant funding expired. Yes, not only did my father die on June 30 (1982), I left four different positions (only one truly voluntarily) on that day in 1998, 2005, 2010 and 2015. And yet it is not even close to my least favorite day of the year; I reserve that honor for Valentine’s Day, which I utterly loathe.

Unlike my doctoral program at Harvard, the BUSPH epidemiology program had an elegant, well-ordered rhythm to it: two years of coursework—culminating with the dreaded hurdle known colloquially as “Dan Brooks’ seminar.” After that came the “biostatistics” and “epidemiology” qualifying exams, selection of a three-person committee and a thesis topic, drafting of a short letter of intent outlining the three connected studies you were going to conduct, drafting of a very-detailed 25-page outline of the final dissertation, then the researching and writing of the thesis itself.

Nothing to it, he wrote with a shudder of remembrance.

And, of course, what followed that five-year journey (nine if you count the biostatistics MA) was the doctoral defense.

Oh my…the defense.


Technically, this photograph was taken (on the late afternoon of December 16, 2014) after I had successfully defended (when the three doctoral committee members leave the room to “confer”—and return with cake and champagne), but my slides are still being projected, so it is close enough.

Not long after, I collected this from…somewhere…on campus.


Nearly 20 years after I had walked away from one doctoral program, I had successfully completed an entirely different one.

And this is essentially where you came in to the movie.

Until next time…

[1] In December 2015

[2] After the funeral (at which I eulogized my mother), I spent much of the evening walking around my late stepfather’s house, where we were sitting shiva for my mother, swigging directly from a bottle of Scotch. When I walked out of the house later that night in the direction of my parked car, a family friend with the superb nickname “Yo!” said he would “rip out [my] fucking distributor cap” if I attempted to drive myself home. Not being a complete fool, I permitted a close male cousin to drive me home.

[3] And where I taught myself my first geographic information systems (GIS) software package.

[4] A 2000 article based on HARI research listed me as third author.

[5] In June 1991, a late friend of mine from suburban Philadelphia asked me to come to St. Louis to support his candidacy for Treasurer of the Young Democrats of America. I rented a car and drove to St. Louis, renting my very own room in the conference hotel, and joining the Pennsylvania delegation. I became friends with some members of the Alaska delegation, one of whom served as a whip at the 2004 convention in Boston. She was the one who invited me to Boston. I was actually in the rafters of the Fleet Center (the former Boston Garden, now the TD Garden) for former president Bill Clinton’s address—having walked by then-Representative Dennis Kucinich of Ohio on the way in to the building. I was in a local bar watching with dropped jaw as a charismatic young Illinois State Senator and candidate for United States Senate named Barack Obama gave the keynote address. While I was there, Mr. Obama spoke to a few dozen or so people at nearby Christopher Columbus Waterfront Park; I saw his speech, but I regret not meeting him and/or getting a photograph with him.

[6] I still do not quite understand why he chose to fight my mother’s—his wife’s—crystal-clear distribution of what property she had. But he did so—then tried to intimidate me by hiring a man named Vito Canuso, who had been the chair of the Philadelphia Republican Party…at some point. I countered by hiring the lawyer—Barbara Harrington Hladik—my mother had used for my sister Mindy’s guardianship hearing (she is severely mentally retarded; I am her legal guardian now). It was a mismatch from the start—Canuso never had a chance.

[7] Answer: “Freddie Freeloader” is the 2nd track on the Miles Davis masterpiece Kind of Blue, “sitting” between “So What” and “Blue in Green,” my favorite track…period.

[8] It was not all smooth sailing—but we made it there in the end.

Separating the art from the artist

The director David Lynch—who I dressed as this past Halloween—gave this response to a question about the meaning of a puzzling moment toward the end of episode 15 of Twin Peaks: The Return.

“What matters is what you believe happened,” he clarified. “That’s the whole thing. There are lots of things in life, and we wonder about them, and we have to come to our own conclusions. You can, for example, read a book that raises a series of questions, and you want to talk to the author, but he died a hundred years ago. That’s why everything is up to you.”

On the surface, this is a straightforward answer, one Lynch has restated in different ways over the years: the meaning of a piece of art is whatever you think it is. Every individual understands a piece of art through her/his own beliefs and experiences.

I am reminded of a therapeutic approach to the interpretation of dreams that particularly resonates with me.

You tell your therapist what you remember of a dream. The therapist then probes a little more, attempting to elicit forgotten details. The conversation then turns to the “meaning” of the dream. Some therapists may pursue the Freudian notion of a dream as the disguised fulfillment of a repressed wish (so what is the wish?). Other therapists may look to the symbolism of characters and objects in the dream (is every character in a dream really a version of the dreamer?) for interpretation.

Then there is what you might call the Socratic approach; this is the approach that resonates with me. The therapist allows the patient to speculate what s/he thinks the dream means. Eventually, the patient will arrive at a meaning that “clicks” with her/him, the interpretation that feels correct. The therapist then accepts this interpretation as the “true” one.

That the “dreams mean whatever you think they mean” approach aligns nicely with Lynch’s musing is not surprising, given how central dreams and dream logic are to his film and television work.

We live inside a dream

However, there is a subtext to Lynch’s musing about artistic meaning that is particularly relevant today.

**********

The November 20, 2017 issue of The Paris Review includes author Claire Dederer’s essay “What Do We Do with the Art of Monstrous Men?”

I highly recommend this elegant and provocative essay.

For simplicity, I will focus on two questions raised by the essay:

  1. To what extent should we divorce the artist from her/his art when assessing its aesthetic quality?
  2. Does successful art require the artist to be “monstrously” selfish?

Dederer describes many “monstrous” artists, nearly all men (she struggles when cataloging the monstrosity of women, despite how odious she finds the impact of Sylvia Plath’s suicide on her children) before singling out Woody Allen as the “ur-monster.”

And here is where I discern a deeper meaning in Lynch’s “dead author” illustration.

Lynch’s notion that one brings one’s own meaning to any piece of art is premised on the idea that the artist may no longer be able to (or may choose not to) reveal her/his intent.

But that implies that something about the artist is relevant to understanding her/his art. Otherwise, one would never have sought out the artist in the first place.

The disturbing implication is that it is all-but-impossible to separate art from artist.

This is Dederer’s conundrum, and it is mine as well.

**********

A few years ago, a group of work colleagues and I were engaging in a “getting to know each other” exercise in which each person writes down a fact nobody else knows about them, and then everyone else has to guess whose fact that is.

I wrote, “All of my favorite authors were falling-down drunks.”

Nobody guessed that was me, which was a mild surprise.

Of course, the statement was an exaggeration, a tongue-in-cheek poke at the mock seriousness of the process.

Still, when I think about many of the authors I love, including Dashiell Hammett, Raymond Chandler, Edgar Allan Poe, John Dickson Carr, Cornell Woolrich, David Goodis[1]

…what first jumps to mind is that every author I just listed is male (not to mention inhabiting the more noir corners of detective fiction). So far as I know, my favorite female authors (Sara Paretsky, Ngaio Marsh and Agatha Christie, among others) do/did not have substance abuse problems.

Gender differences aside, while not all of these authors were alcoholics, they did all battle serious socially-repugnant demons.

Carr, for example, was a virulently racist and misogynistic alcoholic.

He also produced some of the most breathtakingly-inventive and original detective fiction ever written.

Woolrich was an agoraphobic malcontent who was psychologically cruel to his wife during and just after their brief, unconsummated marriage[2].

He also basically single-handedly invented the psychological suspense novel. More films noir (including the seminal Rear Window) have been based on his stories than those of any other author.

And so forth.

It is not just the authors I admire who are loathsome in their way.

I never cease to be amazed by the music of Miles Davis, who ranks behind only Genesis and “noir troubadour” Stan Ridgway in my musical pantheon. His “Blue in Green” is my favorite song in any genre, and his Kind of Blue is my favorite album.

But this is the same Miles Davis who purportedly beat his wives, abused painkillers and cocaine, was taciturn and full of rage, and supposedly once said, “If somebody told me I only had an hour to live, I’d spend it choking a white man. I’d do it nice and slow.”[3]

Moving on, my favorite movie is L.A. Confidential.

Leaving aside the shenanigans of co-star Russell Crowe, there is the problem of Kevin Spacey, an actor I once greatly respected.

Given the slew of allegations leveled at Spacey, the character arc of his “Jack Vincennes” in Confidential is ironic.

But first, let me warn any reader who has not seen the film that there are spoilers ahead. For those who want to skip ahead, I have italicized the relevant paragraphs.

Vincennes is an amoral 1950s Los Angeles police officer whose lucrative sideline is selling “inside” information to Sid Hudgens, publisher of Hush Hush magazine, reaping both financial rewards and high public visibility. Late in the film, he arranges for a young bisexual actor to have a secret (and then-illegal) sexual liaison with the District Attorney, a closeted homosexual. Vincennes and Hudgens would then catch the DA and the young actor in flagrante delicto.

Sitting in the Formosa Club that night, however, Vincennes has a sudden pang of conscience and leaves the bar (symbolically leaving his payoff—a 50-dollar bill—atop his glass of whiskey), intending to stop the male actor from “playing his part.” Unfortunately, he arrives at the motel room too late; the actor has been murdered.

Determined to make amends, he teams up with two other detectives to solve a related set of crimes, including the murder of the young actor. In the course of his “noble” investigation, he questions his superior officer, Captain Dudley Smith, one quiet night in the latter’s kitchen. Realizing that Vincennes is perilously close to learning the full extent of his criminal enterprise, Smith suddenly pulls out a .32 and shoots Vincennes in the chest, killing him.

OK, the spoilers are behind us.

**********

This listing of magnificent art made by morally damaged people demonstrates I am in the same boat as Claire Dederer: I have been struggling for years to separate art from artist.[4]

And that is before discussing the film that serves as Dederer’s Exhibit A: Woody Allen’s Manhattan.

Dederer singles out Manhattan (still one of my favorite films) because of the relationship it depicts between a divorced man of around 40 (Isaac, played by Allen himself) and a 17-year-old high school student named Tracy (Mariel Hemingway).

Not only is the relationship inherently creepy (especially in light of recent allegations by Hemingway and the fact that in December 1997, the 62-year-old Allen married the 27-year-old Soon-Yi Previn, the adopted daughter of his long-time romantic partner Mia Farrow[5]), but, as Dederer observes, the blasé reaction to it from other adult characters in the film makes us cringe even more.

As I formulated this post—having just read Dederer’s essay—I thought about why I love Manhattan so much.

My reasons are primarily aesthetic: the opening montage backed by George Gershwin’s Rhapsody in Blue (and Allen’s voiceover narration), Gordon Willis’ stunning black-and-white cinematography, the omnipresence of a vibrant Manhattan itself.

In addition, the story, a complex narrative of intertwined relationships and their aftermath, is highly engaging. The dialogue is fresh and witty—and often very funny. The characters are quirky (far from being a two-dimensional character, I see Tracy as the moral center of the film) but still familiar.

And then there is the way I saw the film for the first time.

The movie was released on April 25, 1979. At some point in the next few months, my father took me to see it at the now-defunct City Line Center Theater (now a T.J. Maxx) in the Overbrook neighborhood of Philadelphia. Given that I was 12 years old, it was an odd choice on my father’s part, but I suspect he wanted to see the film and seized the opportunity of his night with me (my parents had been separated two years at this point) to do so.

City Line Theater

I recall little about seeing Manhattan with him, other than being vaguely bored. I mean, it was one thing for old movies and television shows to be in black-and-white (like my beloved Charlie Chan films), but a new movie?

I do not remember when I saw Manhattan again. At one of Yale’s six film societies? While flipping through television channels in the 1990s? Whenever it was, the film clicked with me that second viewing, and I have only become fonder of it since then.

Two observations are relevant here.

One, it is clear to me that the fact that I first saw Manhattan at the behest of my father, who I adored in spite of his many flaws, heavily influenced my later appreciation of the film[6].

Two, this appreciation cemented itself years before Allen’s perfidy became public knowledge.

These two facts help explain (but not condone) why I still…sidestep…my conscience to admire Manhattan as a work of art.

**********

Ultimately, I think the following question best frames any possible resolution of the ethical dilemma of appreciating the art of monstrous artists:

Which did you encounter first, the monstrous reputation of the artist…or the art itself?

I ask this question because my experience is that once I hear that a given artist is monstrous, I have no desire to experience any of her/his art.

Conscience clear. No muss, no fuss.

That includes not-yet-experienced works by an artist I have learned is loathsome. I have not, for example, seen a new Woody Allen film since the execrable The Curse of the Jade Scorpion in 2001.

But if I learn about the artist’s monstrous behavior AFTER reacting favorably to a piece of her/his art, I will often find myself still drawn to the art.[7]

Conscience compartmentalized. Definitely some muss, some fuss.

My love of these works is just too firmly embedded in my consciousness to unwind. Thus, I still love the music of Miles Davis. L.A. Confidential remains my favorite movie. Manhattan may have dropped some in my estimation, but it is still in my top 10.

I am reminded of this line from “Seen and Not Seen” on the Talking Heads album Remain in Light:

“This is why first impressions are often correct.”

**********

And here is where I think Lynch’s impressionistic approach to finding meaning in art and the patient-centered approach to dream interpretation—art and dreams mean whatever we think they mean—relate to the question of loving art while loathing the artist.

Art is a deeply personal experience. The “Authority” Dederer so pointedly disdains in her essay can provide guidance, but (s)he cannot experience the art for you or me.

Put simply, each of us is an “Authority” on any given piece of art—and also on whether or not to seek out that art.

For example:

As a child, I found myself hating The Beatles simply because I was supposed to love them. However, once I discovered their music on my own terms, purchasing used vinyl copies of the “Red” and “Blue” albums (which I still own 30+ years later) along with Abbey Road, The Beatles (the “White” Album), Sgt. Pepper’s Lonely Hearts Club Band, Revolver and Rubber Soul…suffice it to say I have 124 Beatles tracks (out of 9,504) in my iTunes, second only to Genesis (288). The Beatles also rank sixth in total “plays” behind The Cars, Steely Dan, Miles Davis (there he is again), Stan Ridgway and Genesis.

Each of us is also the Authority on our changing attitudes toward a given piece of art, including what we learn about the artist, knowledge which then becomes one more element we bring to the subjective experience of art.

**********

Dederer speculates about whether artists (particularly writers) somehow NEED to be monstrous to be successful.

(Upon writing that last sentence, the phrase “madness-genius” began to careen around my brain).

As a writer with advanced academic training in epistemology-driven epidemiology, I would suggest the following study to assess this question.

A group of aspiring artists who had not yet produced notable works would be identified. They would be divided into “more monstrous” and “less monstrous,”[8] definitions to be determined. These artists would be followed for, say, 10 years, after which time each artist still in the study would be classified as “more successful” or “less successful,” definitions again to be determined. The percentages of artists in each category who were “more successful” would then be compared, to see whether being “monstrous” made an aspiring artist more or less likely to be “successful,” or made no difference at all.

This would not settle the question of the link between monstrosity and art by any means, but it would sure be entertaining.
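The core comparison in that hypothetical study is simply a risk ratio from a cohort’s two-by-two table. A minimal Python sketch, using entirely invented counts (the function name and every number here are mine, purely for illustration):

```python
# Sketch of the proposed cohort comparison, using invented counts.
# "Exposure" = more monstrous; "outcome" = more successful.

def risk_ratio(exposed_success, exposed_total, unexposed_success, unexposed_total):
    """Ratio of the proportion 'successful' among the 'more monstrous'
    to the proportion 'successful' among the 'less monstrous'."""
    risk_exposed = exposed_success / exposed_total
    risk_unexposed = unexposed_success / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical 10-year follow-up: 12 of 40 "more monstrous" artists
# succeed, versus 15 of 60 "less monstrous" artists.
rr = risk_ratio(12, 40, 15, 60)
print(round(rr, 2))  # 0.30 / 0.25 = 1.2
```

A risk ratio above 1.0 would suggest monstrousness is associated with success; below 1.0, the opposite; near 1.0, no difference.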

**********

When Dederer talks about the monstrous selfishness of the full-time writer, she focuses on the temporal trade-offs writers must make—time with family and friends versus time spent writing. Writing is an almost-uniquely solitary endeavor, as I first learned writing my doctoral thesis, and as I continue to experience in my new career.

Luckily, my wife and daughters remain strongly supportive of my choice to become a “writer,” so I have not yet felt monstrously selfish.

There is a different kind of authorial “selfishness,” though, that I would argue is both more benign and more beneficial to the author.

When I began this blog, my stated aim was to focus solely on objective, data-driven stories; my personal feelings and life story were irrelevant (outside of this introductory post).

Looking back over my first 48 posts, though, I was surprised to count 17 (35.4%) I would characterize as “personal” (of which three are a hybrid of personal and impersonal). These personal posts, I observed, have also become more frequent.

Even more surprising was how much more “popular” these “personal” posts were. As of this writing, my personal posts averaged 28.4 views (95% confidence interval [CI]=19.9-36.9), while my “impersonal” posts averaged 14.5 views (95% CI=10.8-18.1); the 95% CI around the difference in means (14.0) was 6.3-21.6.[9]
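For the curious, a confidence interval around a difference in means like the one just cited can be computed with a normal approximation. A minimal Python sketch (the function and the sample data below are my own illustration, not the actual view counts):

```python
import math

def mean_diff_ci(xs, ys, z=1.96):
    """Difference in means of two independent samples, with an
    approximate 95% confidence interval (Welch-style standard error)."""
    n1, n2 = len(xs), len(ys)
    m1, m2 = sum(xs) / n1, sum(ys) / n2
    v1 = sum((x - m1) ** 2 for x in xs) / (n1 - 1)  # sample variances
    v2 = sum((y - m2) ** 2 for y in ys) / (n2 - 1)
    se = math.sqrt(v1 / n1 + v2 / n2)
    diff = m1 - m2
    return diff, diff - z * se, diff + z * se

# Invented view counts for "personal" and "impersonal" posts
personal = [10, 20, 30]
impersonal = [10, 10, 10]
diff, lo, hi = mean_diff_ci(personal, impersonal)
```

If the resulting interval excludes zero (as the 6.3-21.6 interval above does), the difference is conventionally treated as statistically meaningful.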

Moreover, the most popular post (77 views, 32 more than this post) is a very personal exploration of my love of film noir.

In other words, while none of my posts have been especially popular (although I am immensely grateful to every single reader), my “personal” posts have been twice as popular as my “impersonal” posts.

I had already absorbed this lesson somewhat as I began to formulate the book I am writing[10]. Initially inspired by my “film noir personal journey” post, it has morphed into a deep dive not only into my personal history, but also the history of my family (legal and genetic) going back three or four generations.

This, then, is the “selfish” part: the discovery that the most popular posts I have written are the ones in which I speak directly about my own life and thoughts, leading me to begin to write what amounts to a “hey, I really like film noir…and here are some really fun stories about my family and me” memoir-research hybrid. One that I think will be very entertaining.

Whether an agent, publisher and/or the book-buying public ever agree remains an open question.

**********

Just bear with me (I had to write that phrase at some point) while I fumble around for a worthwhile conclusion to these thoughts and memories.

I am very hesitant ever to argue that the ends justify the means, meaning that my first instinct is to say that art produced by monstrous artists should be avoided.

But I cannot say that because, having formed highly favorable “first (and later) impressions” of various works of art produced by “monstrous” artists, I continue to love those works of art. I may see them differently, but the art itself has not changed. “Blue in Green” is still “Blue in Green,” regardless of what I learn about Miles Davis, and it is still my favorite song.

And that may be the key. Our store of information about a piece of art may change, but the art itself does not change. It is fixed, unchanging.

Of course, if Lynch and the patient-centered therapists are correct that we each need to interpret/appreciate (or not) works of art as individuals, then how we react to that piece of art WILL change as our store of information changes.

Shoot. I thought I had something there.

Well, then, what about the “slippery slope” argument?

Once we start down the path of singling out certain artists (and, by extension, their works of art) for opprobrium, where does that path lead?

The French Revolution devolved into an anarchic cycle of guillotining because (at least as I understand it) competing groups of revolutionaries began to point the finger at each other, condemning rival groups to death as power shifted between the groups.

This is admittedly an extreme example, but my point is that once we start condemning monstrosity in our public figures, it is difficult to stop.

It is also the case that very few of us are pure enough to condemn others. We all have our Henry Jekyll, and we all have our Edward Hyde, within us. I think the vast majority of us contain far more of the noble Dr. Jekyll than of the odious Mr. Hyde, but we all have enough of the latter to be wary of hypocrisy.

And if THAT is not a good argument, then I have one more.

Simply put, let us all put on our Lynchian-therapeutic cloaks and make our own decisions about works of art, bringing to bear everything we know and feel and think, including our conscience…while also understanding that blatant censorship (through public boycott or private influence) is equally problematic…

These decisions may be ethically uncomfortable, but as “Authorities,” they are ultimately ours and ours alone.

Until next time…

[1] Fun fact about Goodis: Philadelphia-born-and-raised, he is buried in the same cemetery as my father.

[2] Woolrich was also a self-loathing homosexual.

[3] This quote is found on page 61 of the March 25, 1985 issue of Jet, in a blurb titled “Miles Davis Can’t Shake Boyhood Racial Abuse.” The quote is apparently from a recent interview with Miles White of USA Today, but I cannot find the actual USA Today article.

As a counter, and for some context, here is a long excerpt from Davis’ September 1962 Playboy interview.

Playboy: You feel that the complaints about you are because of your race?

Davis: I know damn well a lot of it is race. White people have certain things they expect from Negro musicians — just like they’ve got labels for the whole Negro race. It goes clear back to the slavery days. That was when Uncle Tomming got started because white people demanded it. Every little black child grew up seeing that getting along with white people meant grinning and acting clowns. It helped white people to feel easy about what they had done, and were doing, to Negroes, and that’s carried right on over to now. You bring it down to musicians, they want you to not only play your instrument, but to entertain them, too, with grinning and dancing.

Playboy: Generally speaking, what are your feelings with regard to race?

Davis: I hate to talk about what I think of the mess because my friends are all colors. When I say that some of my best friends are white, I sure ain’t lying. The only white people I don’t like are the prejudiced white people. Those the shoe don’t fit, well, they don’t wear it. I don’t like the white people that show me they can’t understand that not just the Negroes, but the Chinese and Puerto Ricans and any other races that ain’t white, should be given dignity and respect like everybody else.

But let me straighten you — I ain’t saying I think all Negroes are the salt of the earth. It’s plenty of Negroes I can’t stand, too. Especially those that act like they think white people want them to. They bug me worse than Uncle Toms.

But prejudiced white people can’t see any of the other races as just individual people. If a white man robs a bank, it’s just a man robbed a bank. But if a Negro or a Puerto Rican does it, it’s them awful Negroes or Puerto Ricans. Hardly anybody not white hasn’t suffered from some of white people’s labels. It used to be said that all Negroes were shiftless and happy-go-lucky and lazy. But that’s been proved a lie so much that now the label is that what Negroes want integration for is so they can sleep in the bed with white people. It’s another damn lie. All Negroes want is to be free to do in this country just like anybody else. Prejudiced white people ask one another, “Would you want your sister to marry a Negro?” It’s a jive question to ask in the first place — as if white women stand around helpless if some Negro wants to drag one off to a preacher. It makes me sick to hear that. A Negro just might not want your sister. The Negro is always to blame if some white woman decides she wants him. But it’s all right that ever since slavery, white men been having Negro women. Every Negro you see that ain’t black, that’s what’s happened somewhere in his background. The slaves they brought here were all black.

What makes me mad about these labels for Negroes is that very few white people really know what Negroes really feel like. A lot of white people have never even been in the company of an intelligent Negro. But you can hardly meet a white person, especially a white man, that don’t think he’s qualified to tell you all about Negroes.

You know the story the minute you meet some white cat and he comes off with a big show that he’s with you. It’s 10,000 things you can talk about, but the only thing he can think of is some other Negro he’s such close friends with. Intelligent Negroes are sick of hearing this. I don’t know how many times different whites have started talking, telling me they was raised up with a Negro boy. But I ain’t found one yet that knows whatever happened to that boy after they grew up.

Playboy: Did you grow up with any white boys?

Davis: I didn’t grow up with any, not as friends, to speak of. But I went to school with some. In high school, I was the best in the music class on the trumpet. I knew it and all the rest knew it — but all the contest first prizes went to the boys with blue eyes. It made me so mad I made up my mind to outdo anybody white on my horn. If I hadn’t met that prejudice, I probably wouldn’t have had as much drive in my work. I have thought about that a lot. I have thought that prejudice and curiosity have been responsible for what I have done in music.

[4] This has actually impacted me directly. Privacy concerns prevent me from using names, but I have had long and painful discussions with people close to me who were either related to, or knew very well, artists whose work they admired but who were/are loathsome human beings.

[5] Purportedly, Allen and his quasi-step-daughter (Allen and Farrow never married) had been having a long-term affair.

[6] And, perhaps, of black-and-white cinematography more generally.

[7] There are exceptions to this, of course. As much as I love the Father Brown stories by G.K. Chesterton, his blatant anti-Semitism has likely permanently soured me on his writing.

[8] Acknowledging that “monstrosity” is not binary, but a continuum. We have all had monstrous moments, and even the most monstrous people have had a moment or two of being above reproach.

[9] Using a somewhat stricter definition of “personal” made the difference even starker.

[10] Tentative title: Interrogating Memory: How a Love of Film Noir Led Me to Investigate My Own Identity.

Final thoughts from what is almost certainly my final APHA meeting

I debuted this blog 11 months ago yesterday as a place to tell what I hoped would be entertaining and informative data-driven stories. Given my proclivity for, and advanced academic training in, quantitative data analysis, the vast majority of my 47 prior posts have involved the rigorous and systematic manipulation of numbers.

But not all data are quantitative. Sometimes they are “qualitative,” or simply impressionistic.

A few weeks ago, I wrote a post about my impending trip to Atlanta to attend the American Public Health Association (APHA) Annual Meeting and Expo. This post served two purposes:

  1. To allow me to archive online:
    1. The full text (minus Acknowledgments and CV) of my doctoral thesis (Epidemiology, Boston University School of Public Health, May 2015)
    2. The PowerPoint presentation I delivered in defense of that thesis (minus some Acknowledgment slides) in December 2014
    3. Both oral presentations I delivered at the APHA Meeting
  2. To explore the idea that the decision to change careers (which I detail here) actually began two years earlier than I thought, with the completion of this doctorate.

I submitted three abstracts to APHA (one for each dissertation study) when I was still looking for ways to jumpstart my health-data-analyst job search (and my flagging interest in the endeavor). I was shocked that any of my abstracts were accepted for oral presentation (if only because I had no institutional affiliation) and quite humbled that two were accepted.

Once they were accepted, though, I felt an obligation to prepare and deliver the two oral presentations, despite the fact that I had decided to embark on a different career path.

(I did, however, truncate the length of my attendance from all four days to only the final two days, the days on which I was scheduled to give my presentations.)

I also recalled how much I used to enjoy attending APHA Meetings with my work colleagues. My first APHA Meeting—Atlanta, October 2001—was also the place I delivered an oral presentation to a large scientific conference for the first time.

APHA 2001

**********

There are two interesting coincidences related to this presentation.

One, I gave this presentation at the Atlanta Marriott Marquis, the same hotel in which I just stayed for the 2017 APHA Meeting[1].

Two, the presentation itself—GIS Mapping: A Unique Approach to Surveillance of Teen Pregnancy Prevention Efforts (coauthored with my then-supervisor)—drew upon a long-term interest of mine: what you might call “geographical determinism,” which is a pretentious way of saying that “place matters.”

To explain, just bear with me while I stroll down a slightly bumpy memory lane.

I have always loved maps—street maps, maps of historical events, atlases, you name it. As a political science major at Yale, I discovered “electoral geography.” At one point while I was working as a research assistant for Professor David Mayhew, I mentioned the field to him.

Hmm, he responded. I should teach a course about that next semester.

He did.

I still have the syllabus.

As a doctoral student at Harvard (the doctorate I did NOT finish), I formulated a theory for my dissertation about why some areas tended to vote reliably Democratic while others tended to vote reliably Republican that was based on the way demographic traits (e.g., race, socioeconomic status [SES], religion) were distributed among an area’s population. The idea was that because everyone has a race AND an age AND a gender AND an SES level AND a religion AND so on, the areal distribution of these traits makes some more politically salient than others in that area.

Well…it all made perfect sense to me back in the early 1990s.

Because this was not already complicated enough to model and measure, I originally chose to test this theory using data from presidential primary elections, with all of their attendant flukiness. I even spent a pleasant afternoon in Concord, New Hampshire collecting (hand-written) town-level data on their 1976 presidential primary elections.

Did I mention that New Hampshire has 10 counties, 13 cities, 221 towns, and 25 unincorporated places?

From the start, however, it was an uphill battle getting this work taken seriously[2]. One of the four components of my oral exams in May 1991 was a grilling on the electoral geography literature review I had recently completed.

Rather than ask me questions about (for example) J. Clark Archer’s work on the geography of presidential elections, however, the professor who would soon chair my doctoral committee peppered me with questions about why we should study political/electoral geography when academic geography departments were closing, and about what James Madison’s antipathy to faction said about viewing elections through the lens of geography.

I have no recollection of how I answered those questions, but I know that I passed those exams by the skin of my teeth[3].

(Ironically, just nine years later, the nation would be riveted by Republican “red states” and Democratic “blue states” during the Florida recount that decided the 2000 presidential election between Texas Governor George W. Bush and Vice President Al Gore).

The real kicker, though, came a year later.

Harvard at the time had a program with a name like “sophomore seminars.” These small-group classes were a chance for doctoral students to prepare and teach a semester-length seminar of their own design to undergraduate political science majors.

I eagerly jumped at the chance and applied to teach one in American electoral geography, drafting a syllabus in the process. Once it was accepted, I organized the first class, including getting permission to copy a Scientific American article, which I then did.

Towards the end of the summer, they posted (I do not remember where, but it was 1992, so it was literally a piece of paper tacked to a bulletin board) the names of the students who would be taking each seminar.

I looked for my class.

I could not find it.

I soon discovered why. Only one student had signed up (and it was not even her/his first choice), so the seminar had been cancelled.

That was one of the most crushingly disappointing moments of my life.

In retrospect, this was most likely when my interest in completing this doctoral program began to seriously wane—even though I stuck it out for three more years.

(In a bittersweet bit of irony, five years after I walked away from that doctoral program came the 2000 U.S. presidential election. Because of the month-long Florida recount, the “red state-blue state” map of the election burned into the public consciousness. Electoral geography, at least at this very basic level, suddenly became a “thing.” To this day, there is talk of “red,” “blue” and even “purple” states.)

The good news was that the idea of looking at data geographically still appealed to me tremendously, and I was lucky enough to be able to learn and use ArcGIS mapping software in my first professional job as a health-related data analyst. The best moment in this regard came when I produced a town-level map of alcohol and substance use problems in Massachusetts. The towns with the most severe issues were colored red, and I noticed that they followed two parallel east-west lines emanating from Boston, crossed by a north-south line in the western part of the state.

Oh, I exclaimed. The northern east-west line is Route 2, the southern east-west line is I-90 (the Massachusetts Turnpike) and the intersecting north-south line is I-91. Of course, these are state-wide drug distribution routes.

Three professional positions later, temporarily living in Philadelphia, I was doing similar work, but now in the area of teen pregnancy–which brings us back to the oral presentation I delivered late on the afternoon of November 7, 2017 and to the second coincidence.

Its title was “Challenges in measuring neighborhood walkability: A comparison of disparate approaches,” and it was the second presentation (of six) in a 90-minute-long session titled Geo-Spatial Epidemiology in Public Health Research.

In other words, 16 years after my first APHA oral presentation, in the same city, I was once again talking about ways to organize and analyze data geographically.

And while the five-speaker session in which I spoke the following morning (Social Determinants in Health and Disease) was not “geo-spatial,” per se, the study I discussed (“Neighborhood walkability and depressive symptoms in black women: A prospective cohort study”) did feature a geographic exposure.

**********

I again coauthored and delivered oral presentations at the APHA Meetings in 2002[4] (Philadelphia) and 2003 (San Francisco); for the 2004 Meeting (Washington, DC) I prepared a poster which I displayed along with a woman I supervised.

That talented young woman—now one of my closest friends—was a huge reason why the 2003 APHA Meeting in San Francisco was so memorable. Other, of course, than the fact that it was IN SAN FRANCISCO!

IMG_1547

IMG_1546

IMG_1533

IMG_0853

As much fun as it was to wander through the exhibit halls and chat with the folks from schools of public health, research organizations, public health advocacy groups, medical device firms and so forth; to amass a full bag of free goodies (“swag,” I prefer to call it) in the process; to read and ask questions about scientific posters; and to sit in a wide range of scientific sessions…

(no, I am serious. I really used to enjoy that stuff, especially in the company (during the day and/or over dinner and drinks in the evenings) of friendly work colleagues)

…after about two days, my colleague and I had had enough.

So we literally played hooky from the Meeting one day.

First, I dragged the poor woman on a “Dashiell Hammett” tour, which took place only a few blocks from our Union Square hotel.

IMG_0736

IMG_0738

Then, we meandered through Chinatown (whose entrance was mere steps away)—stopping for bubble teas along the way—all the way to Fisherman’s Wharf.

IMG_0742

Our ultimate destination was the ferry to Alcatraz. The Alcatraz tour may have been the highlight of that trip. That place is eerie, creepy and endlessly fascinating.

IMG_1557

Someday I will take my wife and daughters there.

That Meeting was also the apex of my APHA experiences. After three years of them, the 2004 version in DC felt stale. I skipped the 2005 APHA Meeting in Philadelphia, as I had just returned to Boston to start my master’s program in biostatistics at Boston University, though I did briefly attend the 2006 APHA Meeting since it was in Boston, and it was a chance to see former work colleagues.

**********

Ultimately, then, attending the 2017 APHA Meeting in Atlanta was a life experiment, a way to gather qualitative “data” to assess the notion that I had put a health-related data analysis career behind me for good.

I arrived in Atlanta on the evening of November 6 and took a taxi to the Marriott Marquis.

Holy moley, is this place huge…and it had those internal glass elevators which allow passengers to watch the lobby recede or approach at great speed.

IMG_3284

It was both liberating and lonely not to have work colleagues attending with me. As great as it was not to have to report to anybody, it also meant my time was far more unstructured (other than attending the sessions in which I was presenting).

On Tuesday morning, I dressed in my “presentation” clothes and made my way to the Georgia World Congress Center. This meant taking a mile-long walk in drenching humidity carrying a fully-packed satchel because the APHA chose to reduce its carbon footprint by eliminating shuttle buses.

So I was a sweaty mess when I arrived at the heart of the action. Still, I soldiered on, registering and then checking the location of my session room (luckily, both of my sessions were in the same room—if only because it allowed me, on Wednesday morning, to retrieve the reading glasses I had left on the podium Tuesday evening).

This place was also massive and labyrinthine. It took me a good 30 minutes just to locate the Exhibit Halls.

I wandered through them for an hour or so, talking to some interesting folks and reading a couple of posters. The swag was wholly uninspiring, I am sorry to say.

And I felt…nothing.

No pangs of regret.

No overwhelming desire to return to this field of work.

No longing for work colleagues (other than a general loneliness).

In fact, I mostly felt like a ghost, the way one sometimes does walking around an old alma mater or a place one used to live.

This was my past, and I was perfectly fine with that[5].

That is not to say I did not enjoy giving my talks (which were very well received—I am usually nervous before giving oral presentations…until I open my mouth, and the performer in me takes charge). I did, very much. I also enjoyed listening to the nine other speakers with whom I shared a dais. I picked up terms like “geographically weighted regression” that I plan to explore further. I even took the opportunity to distribute dozens of my new business cards (the ones that describe me, tongue somewhat in cheek, as “Writer * Blogger * Film Noir Researcher * Data Analyst”).

But none of that altered my conviction that I have made the right career path decision. I have no idea where the writing path will ultimately lead (although the research for my book has already taken me down some unexpected and vaguely disturbing alleys), professionally or financially, but I remain glad I chose that path.

One final thing…or perspective.

Tuesday, November 7 was also the day that governor’s races were held in New Jersey and Virginia, along with a mayor’s race in New York City and a wide range of state and local elections nationwide.

I had expected to settle in for a long night of room service and MSNBC viewing, but the key races were called so early that I decided to take quick advantage of the hotel swimming pool.

Yes, I waited at least 30 minutes after eating to enter the water.

The pool at the Atlanta Marriott Marquis is primarily indoors (and includes a VERY hot hot tub, almost—but not quite—too hot for me), but a small segment of it is outside; you can swim between the two pool segments through a narrow opening.

If you look directly up from the three shallow steps descending into the outdoor segment of the pool, you see this (if you can find the 27th floor, one of those windows was my room):

IMG_3287

I literally carried my iPhone into the pool to take this photograph, leaning as far back as I could. Thankfully, I did not drop my iPhone in the pool.

Until next time…

[1] The coincidence is not perfect, though, as I do not think we STAYED at the Marriott Marquis in 2001.

[2] Other than the fact that I was awarded a Mellon Dissertation Completion Fellowship in 1994. It was kind of a last-ditch spur to completion. It did not work.

[3] This was the same professor who proclaimed as an aside in a graduate American politics seminar that if you really want to do something hard, get a PhD in epidemiology. Which, of course, I did…25 years later.

[4] Where the Keynote Address was delivered—passionately and to great applause—by an obscure Democratic governor of Vermont named Howard Dean, whose presidential campaign I supported from that moment.

[5] The one caveat to this blanket page-turning is my ongoing interest in geographic determinism, which I am indulging through state- and county-level analyses of the 2016 presidential election. This may be the one successful way to lure me back into the professional data-analytic world.

As I head to the APHA meeting in Atlanta in November…

There have been times, especially lately, that I start to write one post and end up writing an entirely different post.

I originally conceived this post to be a simple repository for a set of documents related to my previous career. The impetus for this was two oral presentations I will be delivering in Atlanta on November 7 and 8, 2017.

As I began to explain why I was posting these documents, however, I found myself plummeting down a rabbit hole, describing a series of unpleasant interactions I had with my doctoral committee a few months after I successfully defended my doctoral dissertation in epidemiology.

It made sense to me at the time (doesn’t it always?), but it soon dawned on me that the tone of that section was…off, and that this is simply not the venue to rehash these private interactions, even as I am still processing them.

But once I stepped back (metaphorically, as I was sitting down at the time), I understood more clearly what I was trying to say.

Let me start at the beginning, if you will just bear with me…

**********

While writing my doctoral dissertation, the members of my doctoral committee and I agreed in principle that after my defense we would work together to publish as many as three peer-reviewed journal articles from it (publication was not a graduation requirement).

From my perspective—a 48-year-old married father of two who was 18 years into a career as a health-related data analyst/project manager—publication was more “cherry on top” than necessity, and perhaps also a courtesy to the members of my doctoral committee and other Boston University School of Public Health (BUSPH) personnel to whom I felt grateful.

I defended my dissertation on December 16, 2014. I was not actually in dark shadows, nor was there a bottle of champagne in front of me, but I love this noir-tinted photograph, and it gives you the flavor of that happy day.

IMG_1458

This was my moment of vindication, the culmination of a journey I had started 26 years earlier. In September 1989, I enrolled in a doctoral program in government at Harvard’s Graduate School of Arts and Sciences (GSAS). Six years later, I resigned from that program with no degree to show for my time there[1]. But just 15 months later I landed the data analyst gig with a Boston non-profit specializing in substance use and abuse that launched my career. Nine years after that, following a four-year sojourn in Philadelphia, I was back in Boston, enrolling in the BUSPH biostatistics master’s degree program. Four years later, I enrolled in their doctoral program in epidemiology.

**********

I have written elsewhere about the deliberations that led me to walk away from that analytic career towards a writing career (although this blog still allows me to analyze data and write about my findings). That transition “officially” occurred in late June 2017.

However, in February 2017, before I made the career-change leap, I was still actively pursuing positions related to my doctoral studies (assessing the health impact of the built environment, as I detail here).

A few months earlier, I had renewed my long-lapsed membership in the American Public Health Association (APHA); that is how I knew that they would be holding their Annual Meeting & Expo (Meeting) in Atlanta, Georgia November 4-8, 2017. I had delivered work-related talks at their 2001, 2002 and 2003 Meetings, and I had presented a poster at their 2004 Meeting, but I had not attended a Meeting since 2006.

Given that this year’s APHA Meeting theme is “Creating the Healthiest Nation: Climate Changes Health,” it appeared to be a perfect opportunity to advance the job search ball down the field. I thus submitted three abstracts, one for each of my three doctoral dissertation studies. To my surprise, two of them were accepted for oral presentation[2]. And as Meat Loaf once sang, “two out of three ain’t bad.”

A few weeks ago, I began to pare the hour-plus-long PowerPoint presentation I had delivered at my doctoral defense down to two 12-minute-long talks. This meant leaving out many interesting “sensitivity” analyses, including estimates of what my incidence rate ratios (IRR) and risk ratios (RR) would have been without exposure or outcome misclassification.

(For a rough translation of that last bit, please see here.)

Realizing how much important detail I was forced to remove from these PowerPoint presentations, I hit upon the idea of making all of the background materials (i.e., my actual dissertation and the PowerPoint defense presentation) publicly available.

And thus you find here:

  1. A PDF of the full text of my doctoral dissertation—Measures of Neighborhood Walkability and Their Association with Diabetes and Depressive Symptoms in Black Women—minus the Acknowledgments (to protect privacy) and CV[3].

Berger Doctoral Dissertation Dec 2014

  2. The PowerPoint presentation I delivered in defense of my dissertation (excluding the “thank you” slides). The last slide was originally this short clip showing the 10th Doctor towards the end of the 2005 episode “The Christmas Invasion.”

Berger Doctoral Defense 2014

  3. The PowerPoint presentations I will be delivering at the APHA Meeting (which I will post here only after I have presented them on November 7 and November 8).

Matthew Berger Measurement Talk 11-7-2017

Matthew Berger Depression Talk 11-8-2017

But this raises a question.

Why haven’t I already published these studies in peer-reviewed epidemiology journals? Isn’t that the usual procedure?

And here we find the rabbit hole I found myself hurtling down as I wrote an earlier draft of this post.

*********

A few months after my successful defense (and once the final logistical requirements had been completed), I received an e-mail from a committee member asking, in effect, where the drafts of my articles were.

Technically, my doctoral dissertation was on track to be published in the ProQuest Dissertations and Theses Global database, where it currently resides.

That is not the same, however, as advancing science through a peer-reviewed publication process; I understood (and had a very high regard for) that then, and I still do now.

But in the spring of 2015, I was still wicked burned out from completing the doctorate itself (with all that had preceded it) while working full time and helping to raise a young family.

I also had higher priorities in my life at that time. My grant-funded Data Manager position was ending in June 2015, and I needed to a) complete the data analysis and final report for that project and b) search for a new gig (or so I thought at the time). My eldest daughter had her tonsils removed and needed a lot of parental TLC. And so forth.

In short, while I was perfectly happy to draft peer-reviewed journal articles from my three dissertation studies, I was not able to do so at that time.

Cutting right to the chase, that committee member and I engaged in an increasingly unpleasant e-mail exchange, which ultimately ended in December 2015, when they decided to stop pursuing publication. The details of that exchange are irrelevant.

It is only now, however, that I understand what was really happening then.

For example, as I concluded my Data Manager duties, I was actively discussing a related, higher-level position with a different organization. Something kept holding me back, however, and I kept raising objections that seemed sensible to me at the time. Needless to say, I never accepted that position.

Over the next two-plus years, as I applied to the few relevant positions I could find (58, although some of them were re-postings), my heart was simply never in the search. When I earned in-person interviews, I attended them with what you might call “subdued enthusiasm.” There was always some reason why this position was not quite right…even the last one, in March 2017, that seemed perfect when I first applied.

Even when I was twice offered exciting adjunct teaching positions (I would love to teach again), I ultimately talked myself out of both of them.

Do you see a pattern here?

What I have come to understand as I prepare for APHA, leading me to “publish” my doctoral dissertation here, is that my decision to change careers did not happen a few months ago. It happened, ironically, almost as soon as I walked out of that small meeting room on Albany Street in Boston on December 16, 2014.

Caught up in the perceived necessity of finding a new position in my then-current career, now supplemented with a newly-minted PhD, I could not comprehend, much less accept, that decision for another two-and-a-half years.

And so this post is not about reliving my unsettling communications with the members of my doctoral committee. It is about squaring a circle, or closing a loop, or whatever “completion” metaphor you prefer.

When I submitted those three abstracts to APHA in February, I was filled with optimism that the November Meeting in Atlanta would be just the place to rekindle my health-related data analysis spark, and where I would joyously engage in the networking necessary to land my next (first?) epidemiology-related position.

It turns out that it will actually be the last hurrah, the period at the end of a nearly 21-year-long sentence.

If you attend the APHA conference next week, I would be thrilled to have you listen to either or both of my presentations.

Otherwise….until next time…

[1] Upon completing my epidemiology doctorate, I finally (and successfully) applied to Harvard GSAS for the Master’s Degree I had earned before resigning.

[2] The incident diabetes study was not accepted.

[3] And, as far as I am concerned, this is tantamount to publication. Consider this passage from the BUSPH Epidemiology Doctoral Program Guidelines (2007, pg. 8): “The research…must meet the current standards of publication quality in refereed journals such as American Journal of Epidemiology, American Journal of Public Health, Annals of Epidemiology, Epidemiology, International Journal of Epidemiology, Journal of the American Medical Association, and New England Journal of Medicine. It is understood that the thesis papers may be longer and have more tables and figures than permitted in published papers.” Basically, once the members of my doctoral committee signed off on my doctoral dissertation, they were admitting that it already met those standards. Ergo…

Positively pondering pesky probabilities, perchance

One inspiration to start this “data-driven storytelling” blog was the pioneering work of Nate Silver and his fellow data journalists at FiveThirtyEight.com; their analyses are an essential “critical thinking” reality check on my own conclusions and perceptions. Indeed, when I finally get around to designing and teaching my course on critical thinking (along with my film noir course), the required reading would include Silver’s The Signal and the Noise and a deep dive into Robert Todd Carroll’s The Skeptic’s Dictionary. I would also include Ken Rothman’s Epidemiology: An Introduction; what drew me to epidemiology (besides my long career as a public health data analyst) was its epistemological aspect. By that I mean how the fundamental methods and principles of epidemiology allow us to critically assess any narrative or story.

To that end, I have been reading with great interest Silver’s 11-part series that “reviews news coverage of the 2016 general election, explores how Donald Trump won and why his chances were underrated by most of the American media.” And while I highly recommend the entire series of articles, the September 21 conclusion is the jumping off point for my own observations about assessing the likelihood of various events.

**********

Let me begin with a passage from that article:

In recent elections, the media has often overestimated the precision of polling, cherry-picked data and portrayed elections as sure things when that conclusion very much wasn’t supported by polls or other empirical evidence.

I personally think investigative journalists are heroic figures who will ultimately save American democracy from its current self-induced peril. But they are trained in a very specific way: deliver the facts of a story with certainty and immediacy. In so doing, they are responding to media consumers with little patience for complex narratives suffused with uncertainty.

To quote Silver again, “a story can be 1. fast, 2. interesting and/or 3. true — two out of the three — but it’s hard for it to be all three at the same time.”

One narrative that developed fairly early about the 2016 presidential election campaign was that Democratic nominee Hillary Clinton was the all-but-inevitable victor. I wrote about one version of this flawed narrative here.

Reinforcing this narrative were election forecasts issued during the last weeks of the campaign that practically said “stick a fork in Trump, he is finished.” But as Silver rightly observes, some of these models were flawed because they failed to account for the “correlation in outcomes between [demographically similar] states.” For example, were Republican nominee Donald Trump to outperform his polls in Wisconsin on Election Day, he would likely also do so in Michigan, Minnesota and Iowa. And that is essentially what happened.

Still, because aggregating polls yields a more precise picture of the state of an election at a given point in time, I aggregated these 2016 election forecasts. Going into Election Day, here were some estimated probabilities of a Clinton victory, ranked lowest to highest.

FiveThirtyEight 71.4%
Betting markets 82.9%[1]
The New York Times Upshot 84.0%
DailyKos 92.0%
HuffingtonPost Pollster 98.2%
Princeton Election Consortium (Sam Wang) 99.5%

The average and median forecast was 88.0%. Remove the most skeptical forecast (though even it made Clinton a 5:2 favorite), and the average and median jump to 91.3% and 92.0%, respectively. By contrast, remove the least skeptical forecast, and the average and median drop to 85.7% and 84.0%, respectively.
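This aggregation is easy to reproduce; here is a minimal sketch in Python (the forecaster labels and probabilities are taken from the list above):

```python
from statistics import mean, median

# Estimated probabilities (%) of a Clinton victory, from the list above
forecasts = {
    "FiveThirtyEight": 71.4,
    "Betting markets": 82.9,
    "NYT Upshot": 84.0,
    "DailyKos": 92.0,
    "HuffPost Pollster": 98.2,
    "Princeton (Wang)": 99.5,
}

vals = sorted(forecasts.values())
print(round(mean(vals), 1), median(vals))       # 88.0 88.0

# Drop the most skeptical forecast (FiveThirtyEight)
no_538 = vals[1:]
print(round(mean(no_538), 1), median(no_538))   # 91.3 92.0
```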

It is an understandable human tendency to look at a probability over 80% and “round up” from “very likely, but not guaranteed” to “event will happen.” And, under the frequentist definition of probability, we would be correct more than 80% of the time in the long run.

But we would also be wrong as much as 20% of the time.

Ignoring Wang’s insanely optimistic forecast for various reasons, the “aggregate” forecast I had in mind on Election Day was that Clinton had about an 84% chance of winning.

The flip side, of course, was that Trump had about a 16% chance of winning.

A good way to interpret this probability is to think about rolling a fair, six-sided die.

Pick a number from one to six. The chance that if you roll the die, the number you picked will come up, is 1 in 6, or 16.7%.

On Election Day, Trump metaphorically needed to roll his chosen number…and he did.

But even if we take the Wang-inclusive average of 88%, that is still roughly a 1 in 8 chance. Throw eight slips of paper with the numbers one through eight written on them into a hat (I like fedoras, myself), pick a number and draw. If your number comes up (which will happen 12.5% of the time over many draws), you win.

Trump picked a number between one and eight then pulled it out of our hypothetical fedora, and he won the election.
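If the fedora metaphor feels abstract, a quick simulation makes the point concrete (this is just a sketch; the chosen number and seed are arbitrary):

```python
import random

random.seed(2016)  # arbitrary seed, for reproducibility
trials = 100_000

# Draw one slip from a "fedora" holding slips numbered 1 through 8;
# a "win" is drawing the number you picked beforehand (say, 3)
wins = sum(1 for _ in range(trials) if random.randint(1, 8) == 3)
print(wins / trials)   # hovers around 0.125, i.e., 1 in 8
```

Over many draws the win rate settles near 12.5%—unlikely on any single draw, but far from impossible.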

One way people misunderstand probability (and one of many reasons I am resolutely opposed to classical statistical significance testing) is mentally converting “event x has a very low probability” (like, say, matching DNA in a murder trial—only a 1 in 2 million chance!) into “that event cannot happen.”

So, even the Wang forecast—which gave Trump only a 1 in 200 chance of winning—did NOT mean that Clinton would definitely win. It only meant that Trump had to pull a specific number between one and 200 out of our hypothetical fedora. He did, and he won.

**********

On the other end of the spectrum is an overabundance of caution in assessing the likelihood of an event. This usually occurs when interpreting election polls.

In this post, I discussed Democratic prospects in the 2017 and 2018 races for governor.

One of the two governor’s races in November 2017 is in Virginia, where Democratic governor Terry McAuliffe is term-limited. The Democratic nominee is Lieutenant Governor Ralph Northam, and the Republican nominee is former Republican National Committee chair Ed Gillespie.

Here are the 13 public polls of this race listed on RealClearPolitics.com[2] taken after the June 13, 2017 primary elections:

Poll Date Sample MoE Northam (D) Gillespie (R) Spread
Monmouth* 9/21 – 9/25 499 LV 4.4 49 44 Northam +5
Roanoke College* 9/16 – 9/23 596 LV 4 47 43 Northam +4
Christopher Newport Univ.* 9/12 – 9/22 776 LV 3.7 47 41 Northam +6
FOX News* 9/16 – 9/17 507 RV 4 42 38 Northam +4
Quinnipiac* 9/14 – 9/18 850 LV 4.2 51 41 Northam +10
Suffolk* 9/13 – 9/17 500 LV 4.4 42 42 Tie
Mason-Dixon* 9/10 – 9/15 625 LV 4 44 43 Northam +1
Univ. of Mary Washington* 9/5 – 9/12 562 LV 5.2 44 39 Northam +5
Roanoke College* 8/12 – 8/19 599 LV 4 43 36 Northam +7
Quinnipiac* 8/3 – 8/8 1082 RV 3.8 44 38 Northam +6
VCU* 7/17 – 7/25 538 LV 5 42 37 Northam +5
Monmouth* 7/20 – 7/23 502 LV 4.3 44 44 Tie
Quinnipiac 6/15 – 6/20 1145 RV 3.8 47 39 Northam +8

Eight of these polls have Northam up between four and seven percentage points, including four of the last six. Two polls show a tied race. No poll gives Gillespie the lead.

And yet, here was the headline on Taegan Goddard’s otherwise-reliable Political Wire on September 19, 2017, referring to the just-released University of Mary Washington (Northam +5) and Suffolk polls (Even): Race For Virginia Governor May Be Close.

Granted, the two polls gave Northam an average lead of only 2.5 percentage points, which, without context, suggests a close race on Election Day. Furthermore, all three Political Wire Virginia governor’s race poll headlines since then have been on the order of: Northam Maintains Lead In Virginia.

Here is the thing, however. Most people (as I did) will equate “close” with “toss-up.” But there is a huge difference between “we have no idea who is going to win because the polls average out to a point or two either way” and “one candidate consistently has the lead, but the margin is relatively narrow.”

The latter is clearly the case in the 2017 Virginia governor’s race, with Northam’s lead averaging 4.4 percentage points in eight September polls within a narrow range (standard deviation [SD]=3.3). We are still more than five weeks from 2017 Election Day (November 7), so this is unlikely to be “herding,” the tendency of some pollsters to adjust their demographic weights and turnout estimates to avoid an “outlier” result (undermining the rationale for aggregating polls in the first place).

The problem comes when members of the media try to interpret the results of individual polls. They have absorbed the lesson of the “margin of error” (MoE) almost too well.

For example, the Monmouth poll conducted September 21-25, 2017 gives Northam a five percentage point lead, with a 4.4 percentage point MoE. Applying that MoE to both candidates’ vote estimates, we have 95% confidence that the “actual” result (if we had accurately surveyed every likely voter, not a sample of 499) is somewhere between Gillespie 48.4, Northam 44.6 (Northam down 3.8) and Northam 53.4, Gillespie 39.6 (Northam up 13.8). It is this range of possible outcomes, from a somewhat narrow Gillespie victory to a comfortable Northam win, that leads members of the media to imply through oversimplification that this race will be close, meaning “toss-up.”

And yet, even within this poll, the probability (using a normal distribution, mean= 5.0, SD=4.4) that Northam is as little as 0.0001 percentage points ahead is 87.2%, making him a 7:1 favorite, about what Hillary Clinton was on Election Day 2016.
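That 87.2% figure can be checked with Python’s statistics.NormalDist, following the convention used here of treating the poll’s MoE as the standard deviation of the margin:

```python
from statistics import NormalDist

# Monmouth poll: Northam +5.0, with the 4.4-point MoE used as the SD
margin = NormalDist(mu=5.0, sigma=4.4)
p_northam_ahead = 1 - margin.cdf(0)      # P(margin > 0)
print(round(p_northam_ahead * 100, 1))   # 87.2
```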

OK, maybe that was not the best example…

But when you aggregate the eight September polls, the MoE drops to about 1.3[3], putting the probability Northam is ahead at well over 99%. Even if the MoE only dropped to 3.0, the probability of a Northam lead would still be about 93%.
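A sketch of that aggregation, again treating the MoE as the SD of the margin (the sample sizes come from the table above; the roughly 4.2-point figure is the approximate average single-poll MoE of those eight polls):

```python
import math
from statistics import NormalDist

# Sample sizes of the eight September polls
samples = [499, 596, 776, 507, 850, 500, 625, 562]
total_n = sum(samples)   # 4,915 respondents in all

# A poll's MoE shrinks with the square root of its sample size, so an
# average single-poll MoE of ~4.2 scales down by sqrt(total_n / 499)
pooled_moe = 4.2 / math.sqrt(total_n / 499)
print(round(pooled_moe, 2))       # ~1.34

# Probability Northam leads: mean 4.4, SD = pooled MoE
p_lead = 1 - NormalDist(4.4, pooled_moe).cdf(0)
print(round(p_lead * 100, 2))     # well over 99
```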

My point is this. Every poll needs to be considered not just as an item in itself (polls as NEWS!) but within the larger context of other polls of the same race. And in the 2017 Virginia governor’s race, the available polling paints a picture of a narrow but durable lead for Northam.

I have no idea who will be the next governor of Virginia. But a careful reading of the data suggests that, as of September 29, 2017, Lt. Governor Ralph Northam is a heavy favorite to be the next governor of Virginia, despite being ahead “only” 4 or 5 percentage points.

**********

Finally, here is an update on this post about the Democrats’ chances of regaining control of the United States House of Representatives (House) in 2018.

Out of curiosity, I built two simple linear regression models. One estimates the number of House seats Democrats will gain in 2018 only as a function of the change from 2016 in the Democratic share of the total vote cast in House elections. The Democrats lost the total 2016 House vote by 1.1 percentage points, so if they were to win the 2018 House vote by 7.0 percentage points, that would be an 8.1 percentage point shift.

Right now, FiveThirtyEight estimates Democrats have an 8.0 percentage point advantage on the “generic ballot” question (whether a respondent would vote for the Democratic or the Republican House candidate in their district if the election were held today).

My simple model estimates a pro-Democratic House vote shift of 9.1 percentage points would result in a net pickup of 26.7 House seats, a few more than the 24 they need to regain control. The 95% confidence interval (CI) is a gain of 17.0 to 36.4 seats.

But the probability that Democrats net AT LEAST 24 House seats is 71.1%, making the Democrats 5:2 favorites to regain control of the House in 2018.
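That probability follows directly from the simple model’s point estimate and confidence interval; here is a sketch that back-converts the 95% CI into a standard error:

```python
from statistics import NormalDist

# Simple model: point estimate 26.7 seats, 95% CI of 17.0 to 36.4
est, ci_low, ci_high = 26.7, 17.0, 36.4
se = (ci_high - ci_low) / (2 * 1.96)           # ~4.95 seats

# Democrats need a net gain of at least 24 seats
p_majority = 1 - NormalDist(est, se).cdf(24)
print(round(p_majority * 100, 1))              # ~71
```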

My more complex model adds a variable that is simply 1 for a midterm election and 0 otherwise, as well as the product of this “dummy” variable and the change in Democratic House vote share. I hypothesized (correctly) that this relationship would be stronger in midterm elections.

This model estimates that a 9.1 percentage point increase from 2016 in the Democratic share of the House vote would result in a net gain of 31.8 seats. However, with two additional independent variables (and only 24 data points), the 95% CI is much wider, from a loss of 7.0 seats to a history-making gain of 68.3 seats.

Still, this translates to a 66.1% probability (2:1 favorites) the Democrats regain the House in 2018.

Figure 1 shows the estimated probability the Democrats regain the House in 2018 using both models and a range of percentage point changes in House vote share from 2016.

Figure 1: Probability Democrats Control U.S. House of Representatives After 2018 Elections Based Upon the Change in Democratic Share of the House Vote, 2016-18


The simple model (blue curve) gives the Democrats no chance to recapture the House in 2018 until the pro-Democratic change in vote share reaches 6.5 percentage points, after which the probability rises sharply and dramatically to a near-certainty at the 10.0 percentage point change mark. The more complex model (red curve), meanwhile, assigns steadily increasing chances for the Democrats, flipping to “more likely than not” at the 7.0 percentage point change mark; even at a truly historic 15 percentage point change, the complex model only gives the Democrats an 85.3% chance to recapture the House in 2018.

For the record, I lean toward the more complex model.

It is worth noting that in the current FiveThirtyEight estimate, 15.8% of the electorate is undecided or chose a third-party candidate (when given that option). If the undecided vote breaks heavily toward the party not controlling the White House in a midterm election (one way electoral “waves” form), a 66-71% probability would likely be an underestimate of the Democrats’ chances of regaining control of the House in 2018.

And…apropos of nothing…Happy 51st Birthday to me (September 30, 2017)!!

Until next time…

[1]  To be honest, I do not recall where I got this number from…possibly from fivethirtyeight.com or maybe from https://betting.betfair.com/politics/us-politics/…

[2] Accessed September 28, 2017

[3] The total number of voters sampled across these eight polls is 4,915, which is 9.85 times the 499 sampled in the Monmouth poll. The square root of 9.85 is 3.14. Dividing the average MoE of these eight polls (roughly 4.2) by 3.14 gives you about 1.3.

Using Jon Ossoff polling data to make a point about statistical significance testing

I do not like the phrase “statistical dead heat,” nor do I like the phrase “statistical tie.” These phrases oversimplify the level of uncertainty accruing to any value (e.g., polling percentage or margin) estimated from a sample of a larger population of interest, such as the universe of election-day voters; when you sample, you are only estimating the value you wish to discern. These phrases also reduce quantifiable uncertainty (containing interesting and useful information) to a metaphorical shoulder shrug: we really have no idea which candidate is leading in the poll, or whether two estimated values differ or not.

For example, a poll released June 16, 2017 showed Democrat Jon Ossoff leading Republican Karen Handel 49.7% to 49.4% among 537 likely voters in the special election runoff in Georgia’s 6th Congressional District. The margin of error (MOE) for the poll was +/-4.2%, meaning that we are 95% confident that Ossoff’s “true” percentage is between 45.5 and 53.9%, while Handel’s is between 45.2% and 53.6%.
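That 4.2-point MOE is what the standard formula gives for a proportion near 50% with 537 respondents; a quick check:

```python
import math

# 95% MOE for a proportion near 50%, n = 537 likely voters
n = 537
moe = 1.96 * math.sqrt(0.5 * 0.5 / n) * 100   # in percentage points
print(round(moe, 1))   # 4.2
```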

In other words, these data suggest a wide range of possible values, anywhere from Ossoff being ahead 53.9 to 45.2% to Handel being ahead 53.6 to 45.5%. In fact, there is a 5% chance that either candidate is further ahead than that. Finally, because estimates from random samples such as these follow a normal (or “bell curve”) sampling distribution, percentages closer to those reported (Ossoff ahead 49.7 to 49.4%) are more likely than percentages further from those reported.

But this is a lot to report, and to digest, so we use phrases like “statistical dead heat” or “statistical tie” as cognitive shorthand for “there is a wide range of possible values consistent with the data we collected, including each candidate having the exact same percentage of the vote.”

Each phrase has its roots in classical statistical significance testing. The goal of this testing is to assure ourselves that any value we estimate from data we have collected (a percentage in a poll, a relative risk, a difference between two means) is not 0.

To do so, we use the following, somewhat convoluted, logic.

Let’s assume that the value (or some test statistic derived from that value) we have estimated actually is 0; we will call this the null hypothesis. What is the probability (we will call this the “p-value”) that we would have obtained this value/test statistic or one even higher purely by chance?

Got that?

We are measuring the probability—assuming that the null hypothesis is true—that a value (or one higher) was obtained purely by chance.

And if that probability is very low, it would be very unlikely that we obtained our value purely by chance, so it must be the case that we did NOT get it by chance. And so we can “reject” the null hypothesis (even though we assumed it to be true to arrive at this rejection), given that the value we got was so extreme.

The higher the probability, the more difficult it becomes to reject the null hypothesis.

By a historical accident, any p-value less than 0.05 is considered “statistically significant,” meaning that we can reject the null hypothesis.

Of course, we REALLY want to know how probable the null hypothesis itself is, but that is a vastly trickier proposition.

Or, even better…we REALLY want to know how likely the actual value we observed is.

Think about it. All we really learn from classical statistical significance testing is either “our value is probably not 0” or “we can’t be certain that our value is not 0…it just might be.” This tells us nothing about the quality of the actual estimate we obtained, or how near the “true” value it actually is.

Now, to be fair to the 0.05 cut-point for determining “statistical significance,” it does have an analogue in the 95% confidence interval.

The 95% confidence interval (CI) is very similar to the polling MOE discussed earlier. It is a range of values (often calculated as value +/-1.96*standard error[1]) which we are 95% confident includes the “true” value.

Let’s say you estimate the impact of living in a less walkable neighborhood relative to living in a more walkable neighborhood on incident diabetes over 16 years of follow-up. Your estimate is 1.06 (i.e., you have 6% higher risk of contracting diabetes), with a 95% CI of 0.90 to 1.24. In other words, you are 95% confident that the “true” effect is somewhere between a 10% decrease in incident diabetes risk and a 24% increase in incident diabetes risk.

Ahh, but this is where that pesky cognitive shorthand comes back. See, that 95% CI you reported includes the value 1.00 (i.e., no effect at all). Therefore, there is likely no effect of neighborhood walkability on incident diabetes.

No, no, a thousand times no.

It simply means that there is a specified range of possible measures of effect, only one of which is “no effect.” In fact, the bulk of the possible effects are on the risk side (1.01-1.24), rather than on the “protective side” (0.90-0.99).

Just bear with me while I come to the point of this statistical rigmarole.

Early this morning, I posted this on Facebook:

The election-eve consensus is that the Jon Ossoff-Karen Handel race (special election runoff in Georgia’s 6th Congressional District) is a dead heat, with Handel barely ahead. This consensus is based in large part on the RealClearPolitics polling average (Handel +0.25). However, the RCP only looks at the most recent poll by any given pollster, and only within a very narrow time frame

Hogwash (for the most part).

All polls are samples from a population of interest, meaning that you WANT to pool recent polls from the same pollster (each is a separate dive into the same pool using the same methods). Also, I found no evidence that the polling average has changed much since the first election April 19

My analysis (90% hard science, 10% voodoo) is that Ossoff is ahead by 1.4 percentage points. Assume a very wide “real” margin of error of 9 percentage points, and Ossoff is about a 62% favorite to win today. 

Meaning, of course, that there is a 38% chance Handel wins

That is still a very close race, but I would give Ossoff a small edge

And, bloviating punditry aside, for Ossoff even to lose by a percentage point would be a remarkable pro-Democratic shift for a Congressional seat Republicans have dominated for 40 years.

Polls close at 7 pm EST. 

Here is the full extent of my reasoning.

I collected all 12 polls of this race taken after the first round of voting on April 18, 2017. Four were conducted by WSVB-TV/Landmark and showed Ossoff ahead by 1 percentage point (polling midpoint 5/31/2017), 3 (6/7), 2 (6/15) and 0 (6/19) percentage points. Two each were conducted by the Republican firm Trafalgar Group (Ossoff +3 [6/12], Ossoff -2 [6/18]) and by WXIA-TV/SurveyUSA (Ossoff +7 [5/18], even [6/9]). Other polls were conducted by Landmark Communications (Ossoff -2 [5/7]), Gravis Marketing (Ossoff +2 [5/9]), the Atlanta Journal-Constitution (Ossoff +7 [6/7]) and Fox 5 Atlanta/Opinion Savvy (Ossoff +1 [6/15]).

Taken together, these polls show Ossoff ahead by an average of 1.85 percentage points.

Using a procedure I suggest here, I subtracted the average of all other polls from the average of those from a single pollster. For example, the average of the four WSVB-TV/Landmark polls was Ossoff +1.5, while the average of the other eight polls was Ossoff +2.0. This difference—or “bias”—of -0.5 percentage points suggests the WSVB-TV/Landmark polls may have slightly underestimated the Ossoff margin.

I then “adjusted” each poll by subtracting its “bias” from the original polling value (e.g., I added 0.5 to each WSVB-TV/Landmark Ossoff margin). For convenience, I lumped the pollsters releasing only one poll into a single “other” category; its “bias” was only 0.2.

The “adjusted” Ossoff margin was now +1.865.
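The bias-adjustment procedure can be sketched as follows. This uses the rounded, integer margins from the list above, whereas the original calculation used unrounded margins, so the result differs slightly from +1.865:

```python
from statistics import mean

# (pollster group, Ossoff margin) for the 12 post-primary polls;
# single-poll pollsters are lumped into "Other", as in the post
polls = [
    ("WSVB/Landmark", 1), ("WSVB/Landmark", 3),
    ("WSVB/Landmark", 2), ("WSVB/Landmark", 0),
    ("Trafalgar", 3), ("Trafalgar", -2),
    ("SurveyUSA", 7), ("SurveyUSA", 0),
    ("Other", -2), ("Other", 2), ("Other", 7), ("Other", 1),
]

def bias(group):
    """Mean margin in this pollster's polls minus the mean of all others."""
    own = [m for g, m in polls if g == group]
    rest = [m for g, m in polls if g != group]
    return round(mean(own) - mean(rest), 1)

biases = {g: bias(g) for g, _ in polls}

# Subtract each pollster's bias from its raw margins, then re-average
adjusted = [m - biases[g] for g, m in polls]
print(round(mean(adjusted), 2))   # ~1.87, close to the +1.865 in the text
```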

To see whether the Ossoff margin had been increasing or decreasing monotonically over time, I ran an ordinary least squares (OLS) regression of Ossoff margin against polling date midpoint (using the average, if polls had the same midpoint date). There was no evidence of change over time; the r-squared (a measure of the variance in Ossoff margin accounted for by time) was 0.01.

Still, out of a surfeit of caution, I decided to assign a weight of “2” to the most recent poll by WSVB-TV/Landmark, Trafalgar Group and WXIA-TV/SurveyUSA and a weight of 1 to the other nine polls.

Using the bias-adjusted polls and this simple weighting scheme, I calculated an Ossoff margin of 1.38, suggesting recent tightening in the race not captured by my OLS regression[2].

So, let’s say that our best estimate is that Ossoff is ahead by 1.38 percentage points heading into today’s voting. There is a great deal of uncertainty around this estimate, resulting both from sampling error (an overall MOE of 2.5 to 3 percentage points around an average Ossoff percentage and an average Handel percentage, which you would double to get the MOE for the Ossoff margin—say, 5 to 6 percentage points) and the quality of the polls themselves.

Now, let’s say that our Ossoff margin MOE is nine percentage points. I admit up front that this is a somewhat arbitrary choice of an MOE larger than 6 percentage points, made to illustrate a point.

In a normal distribution, 95% of all values are within two (OK, 1.96) standard deviations (SD) of the midpoint, or mean. If you think of the Ossoff margin of +1.38 as the midpoint of a range of possible margins distributed normally around the midpoint, then the MOE is analogous to the 95% CI, and the standard deviation of this normal distribution is thus 9/1.96 = 4.59.

To win this two-candidate race, Ossoff needs a margin of one vote more than 0%. We can use the normal distribution (mean=1.38, SD=4.59) to determine the probability (based purely upon these 12 polls taken over two months with varying quality) that Ossoff’s margin will be AT LEAST 0.01%.

And the answer is…61.7%!
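Here is the whole calculation in a few lines (the nine-point MoE is the assumption stated above):

```python
from statistics import NormalDist

# Adjusted Ossoff margin +1.38, assumed MoE of 9 percentage points
margin, moe = 1.38, 9.0
sd = moe / 1.96                                # ~4.59

# Probability Ossoff's true margin is above zero
p_ossoff = 1 - NormalDist(margin, sd).cdf(0)
print(round(p_ossoff * 100, 1))                # ~62
```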

Using a higher SD will yield a win probability somewhat closer (but still larger than) 50%, while a lower SD will yield an even higher win probability.

Here is the larger point.

It may sound like Ossoff +1.38 +/-9.0 is a “statistical dead heat” or “statistical tie” because it includes 0.00 and covers a wide range of possible margins (Ossoff -7.62 to Ossoff +10.38, with 95% confidence), but the reality is that this range of values includes more Ossoff wins than Ossoff losses, by a ratio of 62 to 38.

You can reanalyze these polls and/or question my assumptions, but you cannot change the mathematical fact that a positive margin, however small and however large the MOE, is still indicative of a slight advantage (more values above 0 than below).

Until next time…

**********

This is an addendum started at 12:13 am on June 21, 2017.

According to the New York Times, Handel beat Ossoff by 3.8 percentage points, 51.9% to 48.1%. My polling average (Ossoff+1.4) was thus off by -5.2 percentage points. That is a sizable polling error. RealClearPolitics (RCP) was somewhat closer (-3.6 percentage points), while HuffPostPollster (HPP) was the most dramatically different (-6.2 percentage points).

Why such a stark difference? And why was EVERY pollster off (the best Handel did in any poll was +2 percentage points, twice)?

I think the answer can be found in a simple difference in aggregation methods. RCP used four polls in its final average, with starting dates of June 7, June 14, June 17 and June 18, and their final average was Handel+0.2. HPP, however, included no polls AFTER June 7, and their final average was Ossoff+2.4, a difference of 2.6 percentage points in Handel’s favor.

Moreover, Handel’s final polling average was 2.1 percentage points higher in RCP (49.0 vs. 46.9%), while Ossoff’s final polling average was only 0.5 percentage points lower (48.8 vs. 49.3%).

In other words, over the last week or so of the race, Handel was clearly gaining ground, while Ossoff was fading slightly.

What could have caused this shift?

On the morning of June 14, 2017, a man named James T. Hodgkinson opened fire on a group of Republican members of Congress, members of the Capitol Police and others on an Alexandria, Virginia baseball diamond. Mr. Hodgkinson, who claimed to have volunteered on Senator Bernie Sanders’ 2016 presidential campaign, appeared to be singling out Republicans for attack; he had posted violent anti-Trump and anti-Republican screeds on his Facebook page.

When this ad, brazenly (and absurdly) tying Ossoff to the left-wing rage and violence deemed responsible for the Alexandria shooting, started playing in Georgia’s 6th Congressional District, I thought it was a despicable and desperate attempt to save Handel from a certain loss.

But the overarching message of “blame the left” appears to have resonated with district residents who otherwise may not have voted. The final poll of the campaign found that “…a majority of voters who had yet to cast their ballots said the recent shootings had no effect on their decision. About one-third of election-day voters said the attack would make them ‘more likely’ to cast their ballots, and most of those were Republican.”

It is conceivable that this event changed a narrow Ossoff win into a narrow loss, as disillusioned Republicans decided to cast an election-day ballot for Handel in defense of their party. While Ossoff won the early vote by 5.6 percentage points (and 9,363 votes), he lost the election-day vote by a whopping 16.4 percentage points (and 19,073 votes).

Ossoff may well have lost anyway, for other reasons: his non-residence in the district; the difference between Republican opposition to Trump and support for mainstream Republicans; the amount of outside money which flowed into the district, making it harder for Ossoff to cast himself as a more centrist, district-friendly Democrat (the Democrat in the most expensive U.S. House race in history lost by a larger margin, 3.8 percentage points, than the 3.2-percentage-point loss by the Democrat in the barely-noticed South Carolina 5th Congressional District special election held the same day); and his inexperience as a politician.

But the fact that Handel herself cited the Alexandria shooting in her victory speech (starting at 03:23) speaks loudly about why SHE thinks she won the election.

Until next time…again…

[1] Itself usually calculated as the standard deviation divided by the square root of the sample size.

[2] Other recency weighting schemes yielded similar results.